Seagate Sets New Record With 36TB Hard Drive And Teases Upcoming 60TB Model (techcrawlr.com)
from TheImpressiveX@lemm.ee to technology@lemmy.world on 21 Jan 11:17
https://lemm.ee/post/53206471

#technology

threaded - newest

Lost_My_Mind@lemmy.world on 21 Jan 11:28 next collapse

I’m still not buying a seagate.

Dran_Arcana@lemmy.world on 21 Jan 11:43 collapse

Why?

Spacehooks@reddthat.com on 21 Jan 11:47 next collapse

They have had reliability issues in the past.

AlternateRoute@lemmy.ca on 21 Jan 12:12 next collapse

Nearly all brands have produced both unreliable and reliable series of hard drives.

Really have to look at them based on series / tech.

None of the big spinning-rust brands can really be labeled as unreliable across the board.

frezik@midwest.social on 21 Jan 12:20 collapse

Backblaze.com gives stats on drive failures across their datacenters:

backblaze.com/…/backblaze-drive-stats-for-q3-2024…

Seagate’s results stick out. Most of the drives with >2% failure rates are theirs. They even have one model over 11%.

Baggie@lemmy.zip on 21 Jan 13:36 next collapse

Seconding this. Anecdotally, from my last job in support, every drive failure we had was a Seagate. WDs and Samsungs never seemed to have an issue.

deranger@sh.itjust.works on 21 Jan 14:42 collapse

Why would Backblaze use so many Seagate drives if they’re significantly worse? Seagate also has some of the highest Drive Days on that chart. It’s clear Backblaze doesn’t think they’re bad drives for their business.

frezik@midwest.social on 21 Jan 18:34 collapse

I can only speculate on why. Perhaps they come as a package deal with servers, and they would prefer to avoid them otherwise.

There are plenty of drives with equivalent or greater runtime than the Seagate drives. They cycle their drives every 10 years regardless of failure. The standout failure rate, the Seagate ST12000NM0007 at 11.77% failure, has less than half that average age.

JayleneSlide@lemmy.world on 21 Jan 12:27 next collapse

Got a source on that? According to Backblaze, Seagate seems to be doing okay (Backblaze Drive Stats for Q1 2024 backblaze.com/…/backblaze-drive-stats-for-q1-2024…), especially given how many models are in operation.

Spacehooks@reddthat.com on 21 Jan 12:41 next collapse

Looks like another person commented above you with some stuff. I recall looking this up a year ago, and the SSD I was looking at was in the news for unreliability. It was just that specific model.

frezik@midwest.social on 21 Jan 14:40 collapse

I wouldn’t call those numbers okay. They have noticeably higher failure rates than anybody else. On that particular report, they’re the only ones with failure rates >3% (save for one Toshiba and one HGST), and they go as high as 12.98%. Most drives on this list are <1%, but most of the Seagate drives are over that. Perhaps you can say that you’re not likely to encounter issues no matter what brand you buy, but the fact is that you’re substantially more likely to have issues with Seagate.
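For anyone curious where those percentages come from: Backblaze reports an annualized failure rate, i.e. failures per drive-year of operation. A minimal sketch of the math, using made-up fleet numbers rather than anything from the actual report:

```python
# Annualized failure rate (AFR) as used in the Backblaze drive stats reports:
# failures divided by drive-years of operation. The fleet numbers below are
# invented purely for illustration.

def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Return AFR as a percentage."""
    drive_years = drive_days / 365
    return 100 * failures / drive_years

fleet = {
    "hypothetical_model_a": {"failures": 120, "drive_days": 3_650_000},  # ~10,000 drive-years
    "hypothetical_model_b": {"failures": 30,  "drive_days": 3_650_000},
}

for model, d in fleet.items():
    afr = annualized_failure_rate(d["failures"], d["drive_days"])
    print(f"{model}: {afr:.2f}%")   # 1.20% and 0.30%
```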

TheHobbyist@lemmy.zip on 21 Jan 12:46 collapse

What brand is currently recommended? WD is taking the enshittification highway…

Latest story I know of: arstechnica.com/…/clearly-predatory-western-digit…

ryan213@lemmy.ca on 21 Jan 12:16 next collapse

I’ve bought 2 Seagate drives and both have failed. Meanwhile, I still have my 2 15yo WD drives working.

I hope I didn’t just jinx myself. Lol

TimeSquirrel@kbin.melroy.org on 21 Jan 12:32 next collapse

Click...click...click...click...

ShepherdPie@midwest.social on 21 Jan 12:33 next collapse

Same here. I have a media server and just spent an afternoon of my weekend replacing a failed Seagate drive, purchased maybe 4-5 years ago, that was only used to back up my more important files nightly. In the past 10 years, this is the third failed Seagate drive I’ve encountered (out of 5 total), while I have 9 WD drives that have had zero issues. One of them is even dedicated to torrents with constant R/W and is still chugging along just fine.

deranger@sh.itjust.works on 21 Jan 14:41 next collapse

I’ve got the opposite experience, with WD.

You know who uses loads of Seagate drives? Backblaze. They also publish the stats. They wouldn’t be buying Seagate drives if they were significantly worse than the others.

The important thing is to back up your shit. All drives fail.

neon_nova@lemmy.dbzer0.com on 21 Jan 15:10 collapse

I get it; I’ve had the opposite experience with WD, but those were 2.5” portable drives. All my desktop stuff still works perfectly 🤞

Lost_My_Mind@lemmy.world on 21 Jan 13:00 collapse

I bought a Seagate. Brand new. 250GB, back when 250GB on one hard drive cost a fuckton.

It sat in a box until I was done burning the files on my old 60GB hard drive onto DVD-Rs.

Finally, like 2 months later, I open the box. Install the drive. Put all the files from the DVDs onto the hard drive.

And after I finished, 2 weeks later it totally dies. Outside of the return window, but within the warranty period. Seagate refused to honor their warranty even though I still had the receipt.

That was like 2005. Western Digital has gotten my business ever since. Multiple drives bought. Not because the drives die, but because datawise I outgrow them. My current setup is an 18TB and a 12TB. I figure by 2027 I’ll need to upgrade that 12TB to a 30TB. Which I assume will still cost $400 at that point.

Return customer? No no. We’ll hassle our customer and send bad vibes. Make him frustrated for ever shopping our brand! Gotta protect that one-time $400 purchase! It’s totally worth losing 20 years of sales!

renegadespork@lemmy.jelliefrontier.net on 21 Jan 13:35 next collapse

  1. Seagate drives are generally way more reliable now than the pre-TB days.
  2. There is always a risk of premature failure with all hard drives (see the bathtub curve). You should never have only one copy of any data you aren’t okay with losing.

FYI: Backblaze is a cloud storage provider that uses HDDs at scale, and they publish their statistics every year regarding which models have the highest and lowest failure rates.

sugar_in_your_tea@sh.itjust.works on 21 Jan 14:51 next collapse

Backblaze… failure rates

Take this data with a grain of salt. They buy consumer drives and run them in data centers. So unless your use case is similar, you probably won’t see similar results. A “good” drive from their data may fail early in a frequent spin up/down scenario, and a “bad” drive may last forever if you’re not writing very often.

It’s certainly interesting data, but don’t assume it’s directly applicable to your use case.

Boomkop3@reddthat.com on 21 Jan 15:29 next collapse

Or just read their raw charts. Their claims don’t tend to line up with their data. But their data does show that Seagate tends to fail early

sugar_in_your_tea@sh.itjust.works on 21 Jan 17:50 collapse

All that tells you is that Seagate drives fail more in their use case. You also need to notice that they’ve consistently had more Seagate drives than HGST or WD, which have lower failure rates on their data. Since they keep buying them, they must see better overall value from them.

You likely don’t have that same use case, so you shouldn’t necessarily copy their buying choices or knee-jerk avoid drives with higher failure rates.

What’s more useful IMO is finding trends, like failure rate by drive size. 10TB drives seem to suck across the board, while 16TB drives are really reliable.

Boomkop3@reddthat.com on 21 Jan 19:31 collapse

Ye, Seagate is cheap, that’s the value. I’ve had a tonne myself and they’re terrible for my use too

renegadespork@lemmy.jelliefrontier.net on 21 Jan 16:44 next collapse

Sure, YMMV for any statistical study, but it’s also the best source that exists for stats on consumer hard drives tested at scale.

sugar_in_your_tea@sh.itjust.works on 22 Jan 00:13 collapse

It’s absolutely useful data, but there are a bunch of caveats that are easy to ignore.

For example, it’s easy to sort by failure rate and pick the manufacturer with the lowest number. But failures are clustered around the first 18 months of ownership, so this is more a measure of QC for these drives and less of a “how long will this drive last” thing. You’re unlikely to be buying those specific drives or run them as hard as Backblaze does.

Also, while Seagate has the highest failure rates, theirs are also some of the oldest drives in the report. So for the average user, this largely reflects how likely they are to get a bad drive, not how long a good drive will last. The former question matters more for a storage company, because they have to pay people to handle drives, whereas a user cares more about the second question, and the study doesn’t really address it.

The info is certainly interesting, just be careful about what conclusions you draw. Personally, as long as the drive has >=3 year warranty and the company honors it without hassle, I’ll avoid the worst capacities and pick based on price and features.

renegadespork@lemmy.jelliefrontier.net on 22 Jan 03:17 collapse

You’re correct, but this is pretty much “Statistics 101”. Granted most people are really bad at interpreting statistics, but I recommend looking at Backblaze reports because nothing else really comes close.

boonhet@lemm.ee on 21 Jan 20:36 collapse

Is a home NAS a frequent spin up/down scenario though? I’d imagine you’d keep the drives spinning to improve latency and reduce spin-up count. Not that I own any spinning drives currently though - so that’s why I’m wondering.

sugar_in_your_tea@sh.itjust.works on 21 Jan 23:36 collapse

My drives are usually spun down because the NAS isn’t used a ton. Everything runs off my SSD except data access, so unless there’s a backup or I’m watching a movie or something, the drives don’t need to be spinning.

If I was running an office NAS or something, I’d probably keep them spinning, but it’s just me and my family, so usage is pretty infrequent.

Lost_My_Mind@lemmy.world on 22 Jan 01:20 collapse

At this point it’s less about the current quality of the product, and more about the company. I had every right to have my item replaced. I was within warranty. It’s not MY warranty policy. I didn’t set the terms. I didn’t set the duration. They did. They said if any issues arise within a certain time of purchase, I could get a replacement. I had the proof. I sent them the proof. I was told something along the lines of “In this case we’re not able to replace the drive.”

When I asked what was wrong, I was told it was a high-capacity drive with an electronic failure point. I even called on the phone, pulled up a PDF of their warranty, and asked them to show me where in the warranty there was an exclusion for this situation. They didn’t even attempt to try. They just argued that it couldn’t be done, because the drive failed. I said “Yes. The drive certainly did fail within the warranty period. That’s what’s covered by the warranty. That’s the whole purpose of the warranty: to provide reassurance to the customer that if they should happen to buy one of the 1% of drives with a malfunction beyond their control, the product they paid for will be replaced without worry.”

They then told me I was wrong, transferred me to their boss, and hung up on me while I was on hold.

I understand that if I buy a Western Digital, I run the risk of also buying a dud drive. However, I assume they will honor their warranty.

Seagate doesn’t need to honor any warranty. They don’t need to offer any warranty. However, as the customer, I’m free to inquire about warranty terms before buying. If I see a product that doesn’t offer a warranty on new items, or doesn’t allow returns? That tells me the company doesn’t stand by their product. It’s then MY decision whether I want to gamble.

Seagate DID offer a warranty that they set the terms for. That tells me they stand behind their product. So when they told me no, and gave no reason besides “the drive is dead”? That’s a bait and switch. Which breaks trust between customer and business.

They might have 36TB SSDs at $100 that they guarantee will last 100 years. I still won’t buy one, because I’ve lost trust in the company to stand behind their claims.

And here we are, 20 years later. I still haven’t bought a single Seagate product since. Oftentimes I’d be interested in a sale or offer, until I saw the brand. Multiple times over those 20 years I’ve gone out of my way to avoid Seagate.

And if they had honored the warranty? I’d have moved on from any grudge. Back when Logitech was still a good company, I called and asked how much it would cost to repair an out-of-warranty mouse I had. I understood I’d have to pay. I was getting a price quote to see if it was worth it, as I LOVED that mouse model in 2000. Sad when it died in 2006. Dude on the phone just said “Ah, here. Let’s not even repair it. I’m just going to send you the same model.”

And sent me a brand new (old stock) replacement of the same mouse I had. That mouse lasted until 2014.

So I used the same model mouse from 2000 to 2014. And I still buy Logitech products, even though I recognize the company is not as high quality as it used to be. Call it nostalgia, call it brand loyalty, whatever. It still just feels right buying Logitech, and a huge part of that is what they did in the past.

sugar_in_your_tea@sh.itjust.works on 22 Jan 01:55 next collapse

Yup, service is way more important IMO than bad products, because if the company is willing to make things right, I’m willing to gamble a bit on a new product.

renegadespork@lemmy.jelliefrontier.net on 22 Jan 03:19 collapse

Okay, fair enough.

morbidcactus@lemmy.ca on 21 Jan 14:24 next collapse

As @renegadespork@lemmy.jelliefrontier.net said, infant mortality is a concern with spinning disks. If I recall (I’ve been out of reliability for a few years), things like bearings are super sensitive to handling and storage; vibrations and the like can totally cause microscopic damage that leads to premature failure. Once they’re good, though, they’re good until they wear out. A lot of electronics follow that pattern or the infant mortality curve; stuff dying out of the box sucks, but it’s not unexpected from a reliability POV.
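As a rough illustration of the infant-mortality idea (nothing specific to Seagate or to the Backblaze data): the hazard rate of a Weibull distribution falls over time when its shape parameter is below 1 and rises when it is above 1, and the classic bathtub curve is roughly the sum of an early-failure term and a wear-out term. The parameters below are made up purely to show the shape:

```python
# Hazard rate of a Weibull distribution: h(t) = (k/s) * (t/s)**(k-1).
# shape k < 1  -> decreasing hazard (infant mortality)
# shape k > 1  -> increasing hazard (wear-out)
# The sum of the two terms gives a bathtub-shaped curve. Values are illustrative only.

def weibull_hazard(t: float, shape: float, scale: float) -> float:
    return (shape / scale) * (t / scale) ** (shape - 1)

for years in (0.1, 1, 3, 5):
    early = weibull_hazard(years, shape=0.5, scale=10)  # early-failure term
    wear = weibull_hazard(years, shape=3.0, scale=6)    # wear-out term
    print(f"{years:>4} yr: {early + wear:.3f} failures/yr")
# High at 0.1 yr, lowest around 1 yr, rising again by 5 yr.
```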

Shitty of Seagate not to honour the warranty; that’d turn me off as well. Mine is pettier: when I was building my NAS/server I initially bought some WD Reds, returned those, and went for some Seagate IronWolf drives, because the Reds made this really irritating whine you could hear across the room. At the time we had a single-room apartment, so that was no good.

Boomkop3@reddthat.com on 21 Jan 15:28 collapse

I’ve had a lot of seagates simply because they’re the cheapest crap on the market and my budget was low. But unfortunately, crap is what you get.

iturnedintoanewt@lemm.ee on 21 Jan 11:48 next collapse

OK… what’s this HAMR technology, and how does it compare to the typical CMR/SMR performance differences?

JayleneSlide@lemmy.world on 21 Jan 12:15 collapse

Heat-Assisted Magnetic Recording. It uses a laser to heat the drive platter, allowing for higher areal density and increased capacity.

I am ignorant of the CMR/SMR differences in performance.

iturnedintoanewt@lemm.ee on 21 Jan 15:56 collapse

I fear HAMR sounds like a variation on the idea of using a coarser method to prepare the data to be written, just like SMR. These kinds of hard drives are good for slow, predictable sequential storage, but they suck at more random writes. They’re good for surveillance storage and things like that, but no good for daily use in a computer.

stephen01king@lemmy.zip on 21 Jan 16:37 next collapse

My poor memory is telling me the heat is used to make the bits easier to flip, so you can use a weaker magnetic field that only affects a smaller area, allowing you to pack in bits more closely. It shouldn’t have the same problem as SMR.

drosophila@lemmy.blahaj.zone on 21 Jan 21:03 collapse

That sounds absolutely fine to me.

Compared to an NVMe SSD, which is what I have my OS and software installed on, every spinning disk drive is glacially slow. So it really doesn’t make much of a difference if my archive drive is a little bit slower at random R/W than it otherwise would be.

In fact I wish tape drives weren’t so expensive because I’m pretty sure I’d rather have one of those.

If you need high R/W performance and huge capacity at the same time (like for editing gigantic high resolution videos) you probably want some kind of RAID array.

iturnedintoanewt@lemm.ee on 22 Jan 01:26 collapse

These are still not good for a RAID array, was my point. Unless you’re just storing sequentially, at a kinda slow rate. At least for SMR. I fear HAMR might be similar (it reminds me of Sony’s MiniDisc idea, but applied to a hard drive).

small44@lemmy.world on 21 Jan 13:29 next collapse

What about the writing and reading speeds?

Senseless@feddit.org on 21 Jan 17:54 next collapse

It has some.

Cornelius_Wangenheim@lemmy.world on 21 Jan 18:07 next collapse

If you care about that, spinning rust is not the right solution for you.

JGrffn@lemmy.world on 21 Jan 20:39 collapse

I mean, newer server-grade models with independent actuators can easily saturate a SATA 3 connection. As far as speeds go, a RAID 5 or RAID 6 setup or equivalent should be pretty damn fast, especially if they start rolling out those independent actuators into the consumer market.

As far as latency goes? Yeah, you should stick to solid state…but this breathes new life into the HDD market for sure.

cmnybo@discuss.tchncs.de on 21 Jan 20:42 collapse

The speed usually increases with capacity, but this drive uses HAMR instead of CMR, so it will be interesting to see what effect that has on the speed. The fastest HDDs available now can max out SATA 3 on sequential transfers, but they use dual actuators.
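A quick back-of-the-envelope check on that claim (the per-actuator number is an assumption, not a spec for any particular model):

```python
# SATA 3 runs at 6 Gbit/s on the wire; 8b/10b encoding leaves roughly 600 MB/s
# of usable bandwidth. A large CMR drive's outer-track sequential rate is
# assumed here to be ~280 MB/s, so two independent actuators land close to
# the interface limit.
sata3_usable_mbps = 6000 / 10          # ≈ 600 MB/s after encoding overhead
single_actuator_mbps = 280             # assumed sustained sequential rate
dual_actuator_mbps = 2 * single_actuator_mbps

print(dual_actuator_mbps)                        # 560 MB/s
print(dual_actuator_mbps / sata3_usable_mbps)    # ≈ 0.93 of the SATA 3 ceiling
```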

Boomkop3@reddthat.com on 21 Jan 15:26 next collapse

Now you can store even more data unsafely!

SkunkWorkz@lemmy.world on 21 Jan 21:46 collapse

You are not supposed to use these in a non-redundant config.

Boomkop3@reddthat.com on 21 Jan 21:59 next collapse

Especially these, ye

Naia@lemmy.blahaj.zone on 22 Jan 02:32 collapse

Even in an array, I’d be terrified of more drives failing during a rebuild that’s going to take a long time.

Ugurcan@lemmy.world on 21 Jan 16:26 next collapse

I’m going to remind you that these fuckers are LOUD, like ROARING LOUD, so might not be suitable for your living room server.

sugar_in_your_tea@sh.itjust.works on 22 Jan 01:52 collapse

DON’T TELL ME WHAT I CAN HANDLE!! I HOPE YOU CAN HEAR ME, MY PC’S FANS ARE A LITTLE NOISY!!

kandoh@reddthat.com on 21 Jan 18:52 next collapse

Only SSDs for me

shalafi@lemmy.world on 21 Jan 20:05 next collapse

Yeah, but I can’t afford 2TB of SSD, and I need to expand soon.

cmnybo@discuss.tchncs.de on 21 Jan 20:30 collapse

You can’t get SSDs that big except for some extremely expensive enterprise drives.

somedev@aussie.zone on 21 Jan 20:12 next collapse

I would not risk 36TB of data on a single drive let alone a Seagate. Never had a good experience with them.

boonhet@lemm.ee on 21 Jan 20:33 next collapse

They seem to be very hit and miss, in that there are some models with very low failure rates, but then there are some with very high ones.

That said, the 36 TB drive is most definitely not meant to be used as a single drive without any redundancy. I have no idea what the big guys at Backblaze, for example, are doing, but I’d want to be able to lose two drives in an array before I lose all my shit. So RAID 6 for me. Still, I’d likely go with smaller drives, because however much a 36 TB drive costs, I don’t wanna feel like I’m spending 2x the cost of one of those just for redundancy lmao
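To put rough numbers on that redundancy overhead (drive counts and sizes below are just examples): in RAID 6, two drives’ worth of capacity always goes to parity, so the fewer, bigger drives you use, the larger the fraction you lose.

```python
# RAID 6 reserves two drives' worth of parity regardless of array size,
# so small arrays of huge drives pay a proportionally bigger penalty.
def raid6_usable_tb(drives: int, size_tb: float) -> tuple[float, float]:
    usable = (drives - 2) * size_tb
    parity_fraction = 2 / drives
    return usable, parity_fraction

print(raid6_usable_tb(4, 36))   # (72, 0.5)  -> 144 TB raw, half spent on parity
print(raid6_usable_tb(8, 18))   # (108, 0.25) -> same 144 TB raw, only a quarter on parity
```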

somedev@aussie.zone on 21 Jan 22:35 next collapse

Could you imagine the time it would take to resilver one drive… Crazy.
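For a rough sense of scale (assuming the rebuild can stream at ~250 MB/s the whole time, which real arrays usually can’t sustain):

```python
# Best-case time just to re-read/write one 36 TB member at a steady 250 MB/s.
capacity_bytes = 36e12
throughput_bps = 250e6          # bytes per second, assumed sustained rate
hours = capacity_bytes / throughput_bps / 3600
print(f"{hours:.0f} hours")     # ≈ 40 hours, and that's the optimistic floor
```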

sugar_in_your_tea@sh.itjust.works on 22 Jan 01:51 next collapse

I use mirrors, so RAID 1 right now and likely RAID 10 when I get more drives. That’s the safest IMO, since you don’t need the rest of the array to resilver your new drive, only the ones in its mirror pool, which reduces the likelihood of a cascading failure.

BorgDrone@lemmy.one on 22 Jan 02:19 collapse

I’d want to be able to lose two drives in an array before I lose all my shit. So RAID 6 for me.

Repeat after me: RAID is not a backup solution, RAID is a high-availability solution.

The point of RAID is not to safeguard your data, you need proper backups for that (3-2-1 rule of backups: 3 copies of the data on 2 different storage media, with 1 copy off-site). RAID will not protect your data from deletion from user error, malware, OS bugs, or anything like that.

The point of RAID is so everyone can keep working if there is a hardware failure. It’s there to prevent downtime.

boonhet@lemm.ee on 22 Jan 02:27 collapse

It’s 36 TB drives. Most people aren’t planning on keeping anything legal or self-produced on there. It’s going to be pirated media, and idk about you, but I’m not uploading that to any cloud provider lmao

BorgDrone@lemmy.one on 22 Jan 02:43 collapse

These are enterprise drives, they aren’t going to contain anything pirated. They are probably going to one of those cloud providers you don’t want to upload your data to.

boonhet@lemm.ee on 22 Jan 02:46 collapse

I can easily buy enterprise drives for home use. What are you on about?

Jimmycakes@lemmy.world on 21 Jan 21:53 next collapse

You couldn’t afford this drive unless you’re an enterprise, so there’s nothing to worry about. They don’t sell them one at a time. You have to buy enough for a rack at once.

somedev@aussie.zone on 22 Jan 02:44 collapse

100%. 36TB is peanuts for data centres.

ByteOnBikes@slrpnk.net on 22 Jan 00:43 next collapse

Ignoring the Seagate part, which makes sense… Is there a reason to be wary of 36TB?

I recall IT people losing their minds when we hit 1TB, when the average hard drive was like 80GB.

So this growth seems right.

cupcakezealot@lemmy.blahaj.zone on 22 Jan 00:53 next collapse

I recall IT people losing their minds when we hit the 1TB

1TB? I remember when my first computer had a state of the art 200MB hard drive.

thirteene@lemmy.world on 22 Jan 01:18 next collapse

It’s so consistent it has a name: Moore’s law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. en.m.wikipedia.org/wiki/Moore's_law

I heard that we were at the theoretical limit but apparently there’s been a break through: phys.org/news/2020-09-bits-atom.html

Keelhaul@sh.itjust.works on 22 Jan 02:29 collapse

Quick note: HDD storage doesn’t use transistors to store the data, so it’s not really directly related to Moore’s law. SSDs do use transistors/nano structures (NAND) for storage, and their capacity is more closely tied to Moore’s law.

schizo@forum.uncomfortable.business on 22 Jan 02:26 collapse

It’s raid rebuild times.

The bigger the drive, the longer the time.

The longer the time, the more likely the rebuild will fail.

That said, modern raid is much more robust against this kind of fault, but still: if you have one parity drive, one dead drive, and a raid rebuild, if you lose another drive you’re fucked.
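One common way to put a number on that risk (not from the article; the URE spec and array size below are assumptions) is to estimate the chance of hitting an unrecoverable read error somewhere in the data that has to be re-read during a single-parity rebuild:

```python
import math

# P(at least one unrecoverable read error while re-reading `bytes_to_read`),
# assuming independent errors at a rate of 1 per `ure_per_bits` bits read.
# 1 - (1 - 1/rate)**bits is computed via expm1 for numerical stability.
def p_rebuild_hits_ure(bytes_to_read: float, ure_per_bits: float) -> float:
    bits = bytes_to_read * 8
    return -math.expm1(-bits / ure_per_bits)

# Example: rebuilding a single-parity array of 8 x 36 TB means re-reading ~7 x 36 TB.
print(p_rebuild_hits_ure(7 * 36e12, 1e15))   # ≈ 0.87 at the common 1-in-1e15 spec
print(p_rebuild_hits_ure(7 * 36e12, 1e16))   # ≈ 0.18 at a 1-in-1e16 spec
```

Numbers like that are one reason dual parity (or mirrors) gets recommended as drives get this big.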

LodeMike@lemmy.today on 22 Jan 01:05 collapse

The only thing I want is reasonably cheap 3.5" SSDs. SATA is fine, just let me pay $500 for a 12TB SSD please.

SocialMediaRefugee@lemmy.world on 21 Jan 20:27 next collapse

Managing that many files becomes the challenge

cupcakezealot@lemmy.blahaj.zone on 22 Jan 00:53 collapse

me: torrents the entire spn series