homesweethomeMrL@lemmy.world
on 14 Jul 19:37
Werd
ordnance_qf_17_pounder@reddthat.com
on 14 Jul 19:57
Just say it’s full of porn, it’s easier to explain
Fuck_u_spez_@sh.itjust.works
on 14 Jul 21:08
Depends on the audience tbh
LadyAutumn@lemmy.blahaj.zone
on 14 Jul 21:10
Always keep an NSFW tab open to swap to so your family doesn't see you on the Arch Linux wiki.
Atomicbunnies@lemmy.dbzer0.com
on 14 Jul 23:45
I have around 150 distros seeding 🤣. I need to get those numbers up!
sugar_in_your_tea@sh.itjust.works
on 15 Jul 02:30
Honestly, when I first got into forums, I thought they were literally talking about Linux distros, because at the time, that’s literally all I was seeding since that’s what I was into.
rottingleaf@lemmy.world
on 14 Jul 19:56
You’d go broke. Of course it’s all Linux, family archives and DNA test data, BTC blockchain, backed up FOSS projects, archives of Wikipedia, Project Gutenberg and OpenStreetMap, and of course - POVRay renders.
I paid $600+ for a 24 TB drive, tax free. I feel robbed. Although I’m glad not to shop at Newegg.
PancakesCantKillMe@lemmy.world
on 14 Jul 21:09
Yes, fuck Newegg (and amazon too). I’ve been using B&H for disks and I have no complaints about them. They have the Seagate Ironwolf Pro 24TB at $479 currently, but last week it was on sale for $419. (I only look at 5yr warranty disks.)
I was not in a position to take advantage as I’ve already made my disk purchase this go around, so I’ll wait for the next deep discount to hit if it is timely.
I hate amazon but haven’t been following stuff about newegg and have been buying from them now and then. No probs so far but yeah, B&H is also good. Also centralcomputer.com if you are in the SF bay area. Actual stores.
PancakesCantKillMe@lemmy.world
on 14 Jul 21:39
Newegg was the nerd's paradise 10+ years ago. I would spend thousands each year on my homelab back then. They had great customer service and bent over backwards for their customers. Then they got bought out and squeezed and passed that squeeze right down to the customers. Accusing customers of damaging parts, etc. Lots of slimeball stuff. They also wanted to be like amazon, so they started selling beads, blenders and other assorted garbage alongside tech gear.
After a couple of minor incidents with them I saw the writing on the wall and went to amazon who were somewhat okay then. Once amazon started getting bad, I turned to B&H and fleaBay. I don’t buy as much electronic stuff as I used to, but when I do these two are working…so far.
I’ve recently bought a series of 24TB drives from both Amazon and Newegg. Each one I got was either DOA or shortly thereafter. I just gave up but I would love to have a better source.
PancakesCantKillMe@lemmy.world
on 14 Jul 23:05
I got some 16TB drives recently for around $200 each, though they were manufacturer recertified. Usually a recertified drive will save you 20-40%.
Shipping can be a fortune though.
EDIT: I used manufacturer recertified, not refurbished drives.
neon_nova@lemmy.dbzer0.com
on 15 Jul 00:13
Refurbished drives sound scary. Any data to point towards that not being a problem?
pulsewidth@lemmy.world
on 15 Jul 04:07
I would absolutely not use refurbs personally. As part of the refurb process they wipe the SMART data which means you have zero power-on hours listed, zero errors, rewrite-count, etc - absolutely no idea what their previous life was.
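For what it's worth, here's a minimal sketch of the kind of sanity check people run on a used or recertified drive, assuming smartmontools is installed and that /dev/sdb is the drive in question (both are assumptions to adjust):

```python
#!/usr/bin/env python3
"""Rough sanity check of a used/recertified drive's SMART counters.

Assumes smartmontools is installed and that DEVICE points at the drive you
want to inspect (both are assumptions to adjust). Run with root privileges.
"""
import subprocess
import sys

DEVICE = "/dev/sdb"  # hypothetical device path - change for your system
ATTRS_OF_INTEREST = {"Power_On_Hours", "Reallocated_Sector_Ct",
                     "Current_Pending_Sector", "Start_Stop_Count"}

def read_smart_attributes(device: str) -> dict:
    """Call `smartctl -A` and pull the raw values of a few attributes."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    values = {}
    for line in out.splitlines():
        parts = line.split()
        # attribute table rows look like: ID# NAME FLAG VALUE ... RAW_VALUE
        if len(parts) >= 10 and parts[1] in ATTRS_OF_INTEREST:
            values[parts[1]] = parts[-1]
    return values

if __name__ == "__main__":
    attrs = read_smart_attributes(DEVICE)
    if not attrs:
        sys.exit(f"No SMART attributes read from {DEVICE} (wrong device, or not root?)")
    for name, raw in attrs.items():
        print(f"{name:>24}: {raw}")
    # A freshly recertified drive will often report near-zero hours here,
    # which is exactly why the counters say little about its previous life.
```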
neon_nova@lemmy.dbzer0.com
on 15 Jul 04:08
Thanks! It seems too risky for something like a hard drive.
If you’ve got a RAID array with 1 or 2 parity then manufacturer recertified drives are fine; those are typically drives that just aged out before being deployed, or were traded in when a large array upgraded.
If you’re really paranoid you should be mixing mfg dates anyway, so keep some factory new and then add the recerts so the drive pools have a healthy split.
Yep staggering manufacturing dates is a good suggestion. I do it but it does make purchasing during sales periods to get good prices harder. Better than losing multiple drives at once, but RAID needs a backup anyway and nobody should skip that step.
I mean a backup of a RAID pool is likely just another RAID pool (ideally off-site) – maybe a tape library if you’ve got considerable cash.
Point is that mfg refurbs are basically fine, just be responsible, if your backup pool runs infrequently then that’s a good candidate for more white label drives.
As mentioned by another user, all drives fail, it’s a matter of when, not if. Which is why you should always use RAID arrangement with at least one redundant drive and/or have full backups.
Ultimately, it’s a money game. If you save 30% on a recertified drive and it has 20% less total life than a new one, you’re winning.
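As a rough sketch of that trade-off: the 30%-cheaper / 20%-less-life numbers are the hypothetical above, and the $479 / 24 TB / 5-year figures are just borrowed from the B&H example earlier in the thread.

```python
# Back-of-envelope cost-per-TB-year comparison, using the 30%-cheaper /
# 20%-less-life numbers from the comment above. The $479 / 24 TB / 5-year
# figures are borrowed from the B&H example earlier in the thread.
def cost_per_tb_year(price_usd: float, capacity_tb: float, years: float) -> float:
    return price_usd / (capacity_tb * years)

new    = cost_per_tb_year(479.00,        24, 5.0)
recert = cost_per_tb_year(479.00 * 0.70, 24, 5.0 * 0.80)

print(f"new:    ${new:.2f} per TB-year")
print(f"recert: ${recert:.2f} per TB-year")
# The recert wins whenever the price discount is bigger than the fraction
# of service life you expect to give up.
```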
I looked around a bit, and either search engines suck nowadays (possibly true regardless) or there are no independent studies comparing certified and new drives.
All you get is mostly opinion pieces or promises from resellers that, actually, their products are good. Clearly no conflict of interest there. /s
The best I could find was this, but that’s not amazing either.
What I do is look at backblaze’s drive stats for their new drives, find a model that has a good amount of data and low failure rate, then get a recertified one and hope their recertification process is good and I don’t get a lemon.
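A minimal sketch of that workflow, assuming you've downloaded one or more of Backblaze's published daily drive-stats CSVs; the 'model' and 'failure' column names below are the ones they publish, but double-check them against the files you actually have.

```python
"""Toy annualized-failure-rate (AFR) calculator for Backblaze's drive-stats CSVs.

Each row in those files is one drive on one day; the 'model' and 'failure'
columns used below are the ones Backblaze publishes, but double-check them
against the files you actually download. Pass one or more CSV paths as args.
"""
import csv
import sys
from collections import defaultdict

def afr_by_model(csv_paths, min_drive_days=10_000):
    drive_days = defaultdict(int)
    failures = defaultdict(int)
    for path in csv_paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                model = row["model"].strip()
                drive_days[model] += 1            # one row = one drive-day
                failures[model] += int(row["failure"])
    # AFR = failures per drive-year, as a percentage; skip tiny samples
    return {m: 100.0 * failures[m] / (drive_days[m] / 365.0)
            for m in drive_days if drive_days[m] >= min_drive_days}

if __name__ == "__main__":
    for model, afr in sorted(afr_by_model(sys.argv[1:]).items(), key=lambda kv: kv[1]):
        print(f"{model:<30} {afr:5.2f}% AFR")
```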
And usually by the time they break they have been obsolete anyways, at least for 24/7 use in a NAS where storage density and energy efficiency are a big concern. So you would have replaced most of them long before they break, even with recertified drives
Omg I really have been out of the loop. I originally filled my 8 bay NAS with 6tb drives starting back in 2018. Once they would fill, i added another. 3 years ago, I finally ran out of space and started swapping out the 6tb for 10tb. Due to how it works, I needed to do 2 before I saw any additional space. I think i have 3 or 4 now, and the last one was 2 years ago. They did cost around $250 at the time, and I think i got 1 for just over $200. The fact that I can more than double that for only $300 is crazy news to me. Guess I am going to stop buying 10tb now. The only part that sucks is having to get 2 up front…
sugar_in_your_tea@sh.itjust.works
on 15 Jul 02:27
I bought 8TB for something like $300. 36TB seems quite attractive.
Depends on your use case. According to Seagate's spec sheet, the linked drive is only rated for about 6.5 power-on hours per day (2,400 per year). So if it's just in your desktop for storage, then sure. In an always-on (or mostly-on) NAS, I'd find a different drive. It'll work fine, but expect higher failure rates for that use.
mehdi_benadel@lemmy.balamb.fr
on 14 Jul 19:30
You need a week to fill the hecking disk. flips server rack up in disappointment
But this would be great for tape-like storage where you only need to write once and maybe query little individual bits of it. Slap RAID on top of it and you’ve got yourself your own nation state intelligence service datastore.
homesweethomeMrL@lemmy.world
on 14 Jul 19:37
So how much data would I lose when it dies?
Edit for those who didn't read the smirk: yes, 36 TB, as a way to point out what someone answered below: if you're using a drive this big, have your data recovery procedures on fleek.
NuXCOM_90Percent@lemmy.zip
on 14 Jul 19:40
Assuming you aren’t striping, up to 36 TB. If you follow even halfway decent practices with basically any kind of RAID other than 0, hopefully 0 Bytes.
The main worry with stuff like this is that it potentially takes a while to recover from a failed drive even if you catch it in time (alert systems are your friend). And 36 TB is a LOT of data to work through and recover which means a LOT of stress on the remaining drives for a few days.
But even with striping you have backups right? Local redundancy is for availability, not durability.
NuXCOM_90Percent@lemmy.zip
on 14 Jul 20:24
Words hard
And I would go so far as to say that nobody who is buying 36 TB spinners is doing offsite backups of that data. For any org doing offsites of that much data you are almost guaranteed using a tape drive of some form because… they pay for themselves pretty fast and are much better for actual cold storage backups.
Seagate et al keep pushing for these truly massive spinners and I really do wonder who the market is for them. They are overly expensive for cold storage and basically any setup with that volume of data is going to be better off slowly rotating out smaller drives. Partially because of recovery times and partially because nobody but a sponsored youtuber is throwing out their 24 TB drives because 36 TB hit the market.
I assume these are a byproduct of some actually useful tech that is sold to help offset the costs while maybe REALLY REALLY REALLY want 72 TBs in their four bay Synology.
I wouldn’t buy a Synology but either way I’d want a 5 or 6 bay for raid-6 with two parity drives. Going from 4 bay (raid 6 or 10) to 5 bay (raid 6) is 50% more user data for 25% more drives. I wouldn’t do raid 5 with drives of this size.
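The arithmetic behind that, as a quick sketch:

```python
# The usable-capacity arithmetic behind the 4-bay vs 5-bay comparison above,
# for double-parity (raid-6 style) layouts of 36 TB drives.
def raid6_usable_tb(bays: int, drive_tb: float) -> float:
    return (bays - 2) * drive_tb   # two drives' worth of capacity go to parity

for bays in (4, 5, 6):
    usable = raid6_usable_tb(bays, 36)
    print(f"{bays} bays x 36 TB -> {usable:.0f} TB usable "
          f"({usable / (bays * 36):.0%} efficiency)")
# 4 bays -> 72 TB, 5 bays -> 108 TB: 50% more usable space for 25% more drives.
```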
LilB0kChoy@midwest.social
on 14 Jul 22:16
Been a long time since I set foot in a data center; are tape drives not still king for cold storage of data?
NuXCOM_90Percent@lemmy.zip
on 14 Jul 22:19
It depends on the size/“disruptiveness” of the company but yeah. You either have your own tape back up system or you contract out to someone who does and try not to think about what it means to be doing a glorified rsync of all your data offsite every week.
I wouldn’t quite go so far as to say anyone doing genuine offsite backups using a spinning disc is wrong but…
The caveat I’ll carve out is the hobbyist space where a lot of us will back up truly essential data to a cloud bucket or even a friend/family member’s NAS. I… still think that is more wrong than not but (assuming you trust them and they have proper practices) it is probably the best way for a hobbyist to keep a backup without worrying about that USB drive degrading since it gets plugged in once a year.
LilB0kChoy@midwest.social
on 14 Jul 22:30
I can’t criticize other hobbyists. I only back up locally and I use Synology Hybrid Raid to do it.
And I would go so far as to say that nobody who is buying 36 TB spinners is doing offsite backups of that data.
Was this a typo? I would expect that almost everyone who is buying these is doing offsite backups. Who has this amount of data density and is ok with losing it?
Yes, they are quite possibly using tape for these backups (either directly or through some cloud service) but you still want offsite backups. Otherwise a bad fire and you lose it all.
AmbiguousProps@lemmy.today
on 15 Jul 06:46
It would probably take days to rebuild the array.
It’s important to also note that RAID (or alternatives such as unRAID) are not backup systems and should not be relied on as such. If you have a severe brownout that fries more than two or three drives at once, for example, you will lose data if you’re not backing up.
whyNotSquirrel@sh.itjust.works
on 14 Jul 19:41
about 36TB?
homesweethomeMrL@lemmy.world
on 14 Jul 20:10
Nooooooooo not all my pr0ns!!
HappySkullsplitter@lemmy.world
on 14 Jul 19:40
cmnybo@discuss.tchncs.de
on 14 Jul 20:18
I’ve never had to defragment the ext4 drives in my server. Ext4 is fairly resistant to fragmentation.
The_Decryptor@aussie.zone
on 15 Jul 09:20
It’s not really Ext4 doing that, it’s a bunch of tricks in the OS layer and the way apps write files to storage that limits it.
You’ll see it if you use something like a BT client without pre-allocation, those files can get heavily fragmented depending on the download speed.
walden@sub.wetshaving.social
on 14 Jul 22:47
Man, I used to LOVE defragmenting drives. I felt like I was actually doing something productive, and I just got to sit back and watch the magic happen.
One of the worst things that the newer Windows versions did is get rid of that little view of defragmenting. It was much more interesting than watching a number slowly tick up.
wise_pancake@lemmy.ca
on 14 Jul 19:43
Do you need it? Probably not. Do you want it? Oh, yeah.
I feel seen
ArchmageAzor@lemmy.world
on 14 Jul 19:44
I think if I needed to store 36TB of data, I would rather get several smaller disks.
I don’t think the target audience of this drive is buying one. They are trying to optimize for density and are probably buying in bulk rather than paying the $800 price tag.
That's roughly what I have now, and I only have about 200 GB left, so I kind of wish I could get a little more right now. This is across 7 drives. I really hope storing data becomes faster and cheaper in the future, because as it has kept growing over the past few decades, it takes longer and longer to replace and move this much data…
sugar_in_your_tea@sh.itjust.works
on 15 Jul 02:35
Well, it does cost less and less every year. I bought two 8TB drives for $300 each or so, and today a 24TB drive is about that much.
If you need 10tb of storage, you could get 2x used 10tb hdds in raid 1 for $200, but 6x used 2tb nvme in raid 5 is only $600 and 100x faster. Both take up the same amount of space.
Woah I haven’t thought about that since high school. I vaguely remember an inside joke between some dope smoking buddies and i where we would say call the police in that nervous voice
punkwalrus@lemmy.world
on 14 Jul 20:30
Yeah, but it's Seagate. I have worked in data centers, and Seagate drives had the most failures of all my drives, and somehow they're still in business. I'd say I was doing an RMA of 5-6 drives a month that were Seagate, and only 4-5 a year that were Western Digital.
CmdrShepard49@sh.itjust.works
on 14 Jul 21:17
Out of the roughly 20 drives I’ve bought over the last decade or so, the only two failures were Seagate and they only made up five of the drives purchased. The other 15 are WD and all have been great (knock on wood).
I've had the same experience. The first HDD that failed on me was a Barracuda 7200.11 with the infamous firmware self-brick issue, and a second 7200.11 that just died slowly from bad sectors.
From then on I only bought WD. I have a Caviar Black 1TB from oh, 2009-ish that's still in service, though it's finally starting to concern me with its higher temperature readings, probably the motor bearings going.
After that I’ve got a few of the WD RE4 1TBs still running like new, and 6 various other WD Gold series drives, all running happily.
The only WD failure I’ve had was from improper shipping, when TigerDirect (rip) didn’t pack the drive correctly, and the carrier football tossed the thing at my porch, it was losing sectors as soon as it first started, but the RMA drive that replaced it is still running in a server fine.
All over the map: Barracuda, SkyHawk, Ironwolf, Constellation, Cheetah, etc…
jordanlund@lemmy.world
on 14 Jul 23:41
Every drive I’ve had fail has been a Seagate. I replace them out of habit at this point.
Atomicbunnies@lemmy.dbzer0.com
on 14 Jul 23:58
I use all WD Golds for storage now but I have some Seagate barracudas from 2005 that still work. I don’t use them anymore but the data is still there. I fire them up every so often to see. I know that’s purely situational. I pretty much only buy WD now.
sugar_in_your_tea@sh.itjust.works
on 15 Jul 02:32
And they do have more Seagate failures than other brands, but that’s because they have more Seagates than other brands. Seagate is generally pretty good value for the money.
IMO, it's not a brand issue. It's a seller/batch/brand issue. Hard drives are sensitive to vibration, and if you buy multiple drives from the same place, at the same time, and all the same brand and model, you might be setting yourself up for a bad experience if someone accidentally slammed those boxes around earlier in their life.
I highly recommend everyone buy their drives from different sellers, at different times, spread out over various models from different brands. This helps eliminate the bad batch issue.
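A toy sketch of that idea, using made-up inventory data: it just flags when too much of a pool comes from one model/vendor/month combination.

```python
"""Tiny helper to spot 'same batch' concentration in a drive pool.

The inventory below is made-up example data; the point is just to flag any
(model, vendor, purchase month) group that holds too large a share of the pool.
"""
from collections import Counter
from dataclasses import dataclass

@dataclass
class Drive:
    serial: str
    model: str
    vendor: str
    purchased: str  # YYYY-MM

inventory = [
    Drive("A1", "ST16000NM001G", "vendor-a", "2024-03"),
    Drive("A2", "ST16000NM001G", "vendor-a", "2024-03"),
    Drive("B1", "WUH721816ALE6L4", "vendor-b", "2024-07"),
    Drive("C1", "MG08ACA16TE", "vendor-c", "2025-01"),
]

groups = Counter((d.model, d.vendor, d.purchased) for d in inventory)
for (model, vendor, month), count in groups.items():
    share = count / len(inventory)
    flag = "  <-- consider spreading these out" if share > 0.4 else ""
    print(f"{count}x {model} from {vendor} ({month}): {share:.0%} of pool{flag}")
```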
Yeah. In the Backblaze data, you can see that annualized failure rates vary significantly by drive model within the same manufacturer.
But if maintaining drive diversity isn’t your thing, just buy a cold spare and swap it out when a failure inevitably happens (and then replace the spare).
DarkDarkHouse@lemmy.sdf.org
on 15 Jul 04:46
The problem with same batch drives is failing together, potentially beyond your ability to recover with replacement drives.
Three companies, kept track, but not after I left. It was always funny to me that they bought out Atlas and Maxtor. “Of course they did. Why not dominate the market on shitty drives? lol” I am surprised they hadn’t bought Deskstar.
paraphrand@lemmy.world
on 14 Jul 21:16
Hello! 👋
EfficientEffigy@lemmy.world
on 14 Jul 22:08
The thing is I'm a data hoarder who buys lots of HDDs, both new and used. I have only bought a few Seagates. It's always the Seagates that are fucked. I had a Toshiba and a Western Digital fail on me, but I have had 5 Seagates fail on me. Could be a coincidence, sure, but the brand I have bought the fewest of had the most failures. I recognize this is not scientific in any way. I recently bought a brand new 8TB Seagate Barracuda and it's still going strong. I hope it lasts a good while. My oldest drive is a 1TB Hitachi (RIP) from 2008. I can't wait for 8TB SSDs to become cheaper.
AmbiguousProps@lemmy.today
on 15 Jul 06:35
Nah, as a fellow data hoarder you’re 100% correct. I have a couple of dozen disks, and I’ve had failures from both Seagate and WD, but the Seagates have failed much more often. For the past couple of years, I’ve only purchased WD for this reason. I’m down to two Seagate drives now.
I feel like many people with a distaste for WD got burned by the consumer drives (especially the WD Greens). WD’s DC line is so good though, especially HC530.
daggermoon@lemmy.world
on 15 Jul 08:49
I mostly buy new Toshiba drives now. The WD Blue drives are fine. I have a few of them. I have a WD Red that is reporting surface errors, but it's still going and the number of errors hasn't increased, so I'm not stressing about replacing it. Also, btrfs gives me peace of mind because I can periodically check if my filesystem has corrupted data.
I’ve had my 16TB ironwolf pros spinning for 5 years in my NAS, no issues. People love to trash Seagate but I can’t say I’ve had any issues. I also have 6x10TB barracuda pros and they’re fine too, for about 10 years.
GlassCaseofEmotion@lemmy.world
on 14 Jul 23:46
That's fine…they don't need to release it under their Exos line of enterprise drives. SMR drives don't do well in RAID arrays, especially not highly utilized ones. They require idle time to clean up, and the rebuild times are horrendous.
There are a number of enterprise storage systems optimized specifically for SMR drives. This is targeting actual data centers, not us humble homelabbers masquerading as enterprises.
A lot of modern AAA games require an SSD, actually.
Off the top of my head: Cyberpunk, Marvel's Spider-Man 2, Hogwarts Legacy, Dead Space remake, Starfield, Baldur's Gate 3, Palworld, Ratchet & Clank: Rift Apart
tobogganablaze@lemmus.org
on 15 Jul 07:48
Indeed, as others have said this isn’t a hard requirement. Anyone with a handheld (e.g. Steam Deck) playing off a uSD card uses a device that’s an order of magnitude slower for sequential I/O
Nalivai@discuss.tchncs.de
on 15 Jul 14:51
Both Cyberpunk and BG3 work flawlessly on the external USB hard drive that I use. The loading times suffer a bit, but not to an unplayable degree, not even close
ArsonButCute@lemmy.dbzer0.com
on 15 Jul 16:07
Cyberpunk literally has an HDD mode, I play it off an HDD every day.
With sufficient ram to load everything in you’ll just have longer load times, no hdd hitchiness
RisingSwell@lemmy.dbzer0.com
on 16 Jul 03:15
Forza Horizon 4 and 5 don’t say they require an SSD I think, but when I had it on my hard drive any cars that did over 250kph caused significant world loading issues, as in I’d fall out of the world because it didn’t load the map.
If a game isn’t fully playable without an SSD, then I consider it a requirement.
Ever try playing Perfect Dark without an Expansion Pak back in the day? It’ll technically work, but you’ll get locked out of 90% of the game, including the campaign. Similar thing with SSDs today.
Makes me shudder. I have to replace a drive in my array, because it is degraded. It’s a 4TB. Imagine having to replace one of these. I’d much rather have a bunch of cheaper drives, even if they are a bit more expensive per TB, because the replacement cost will eventually make the total cost of ownership lower.
Also, repeat with me: “Please give me a Toshiba or Hitachi, please”
Yes, I remember Deathstars. However, these past years I perfunctorily peruse Backblaze's yearly drive failure reports, and have noticed a trend, which is that most drives are fine, but every year there are a few that stand out as very bad, and they're usually Seagate/WDC.
Exceptions yada, yada
Matriks404@lemmy.world
on 15 Jul 08:46
Do people actually use such massive hard drives? I still have my 1 TB HDD in my PC (and a 512 GB SSD), lol.
Data hoarders could be happy, but otherwise it’s mostly enterprise use.
Still, I personally hold about 4 TB of files, and I know people holding over 30 TB.
As soon as your storage needs exceed 1-2 games and a bunch of old photos, demand for space rises quickly.
UnsavoryMollusk@lemmy.world
on 15 Jul 10:50
I have 50 TB of data total: archival, old projects, backups, backups of my physical media, etc.
Trainguyrom@reddthat.com
on 15 Jul 15:18
This is an enterprise drive, so it's useful for any use case where a business needs to store a lot of lightly used data, like historical records that might be accessed infrequently for reporting and therefore shouldn't be transferred to cold storage.
For a real world example, the business I'm currently contracting at is legally required to retain safety documentation for every machine in every plant they work in. Since the company does contract work in other people's plants, that's hundreds of PDFs (many of which are 50+ page scans of paper forms) per plant and hundreds of plants. It all adds up very quickly. We also have a daily log process where our field workers will log with photographs all of their work every single workday for the customer. Some of these logs contain hundreds of photographs depending on the customer's requirements. These logs are generated every day at every plant, so again it adds up to a lot of data being created each month.
I have just shy of 8TB of data on my home file server.
That’s not including my NVR (for security cameras) which has a single 6TB SATA drive sitting around 40% capacity.
NigelFrobisher@aussie.zone
on 15 Jul 08:50
Pretty sure I had a bigger hard drive than that for my Amiga. You could have broken a toe if you’d dropped it.
GreenKnight23@lemmy.world
on 15 Jul 08:55
no thanks Seagate. the trauma of losing my data because of a botched firmware with a ticking time bomb kinda put me off your products for life.
see you in hell.
WhyJiffie@sh.itjust.works
on 15 Jul 11:28
but then wd and their fake red nas drives with smr tech?
what else do we have?
spookedintownsville@lemmy.world
on 15 Jul 13:03
Wait… fake? I just bought some of those.
WhyJiffie@sh.itjust.works
on 15 Jul 13:14
they were selling wd red (pro?) drives with smr tech, which is known to be disastrous for disk arrays because both traditional raid and zfs tend to throw them out. the reason for that is when you are filling it up, especially when you do it quickly, it won't be able to process your writes after some time, and write operations will take a very long time, because the disk needs to rearrange its data before writing more. but raid solutions just see that the drive is not responding to the write command for a long time, and they think that's because the drive is bad.
it was a few years ago, but it was a shitfest because they didn’t disclose it, and people were expecting that nas drives will work fine in their nas.
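A toy model of that behaviour, with all numbers invented purely for illustration (not from any datasheet):

```python
# Toy model of sustained writes to a drive-managed SMR disk: while the CMR-style
# cache region has room, writes land fast; once it fills, the drive can only
# accept new data as fast as the background shingled rewrite drains the cache.
# All numbers are invented purely for illustration.
def smr_sustained_write_seconds(total_gb, cache_gb=30, cache_mbps=180, drain_mbps=15):
    fast_gb = min(total_gb, cache_gb)
    slow_gb = total_gb - fast_gb
    seconds = fast_gb * 1024 / cache_mbps    # burst absorbed by the cache region
    seconds += slow_gb * 1024 / drain_mbps   # after that, stuck at drain speed
    return seconds

for gb in (10, 50, 500):
    s = smr_sustained_write_seconds(gb)
    print(f"{gb:>4} GB sustained write -> ~{s / 3600:.2f} h "
          f"(effective {gb * 1024 / s:.0f} MB/s)")
# A raid rebuild is exactly this kind of long sustained write, which is why the
# array concludes the drive has stopped responding.
```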
I’ve had a couple random drop from my array recently, but they were older so I didn’t think twice about it. Does this permafry them or can you remove from the array and reinitiate for it to work?
WhyJiffie@sh.itjust.works
on 16 Jul 03:57
well, it depends. if they were dropped just because they are smr and were writing slowly, I think they are fine. but otherwise…
what array system do you use? some raid software, or zfs?
Windows Server storage solutions. I took them out of the array and they still weren’t recognized in Disk Management so I assume they’re shot. It was just weird having 2 fail the same way.
WhyJiffie@sh.itjust.works
on 16 Jul 16:57
I don't have experience with windows server, but that indeed sounds like these are dead. you could check them with some pendrive-bootable live linux, like gparted's live edition, to see whether it detects them, in case windows just hides them because it blacklisted them or something
they were selling wd red (pro?) drives with smr tech
Didn't they used to have only one “Red” designation? Or maybe I'm hallucinating. I thought “Red Pro” was introduced after that kerfuffle to distinguish the SMR from the CMR.
WhyJiffie@sh.itjust.works
on 16 Jul 03:59
I don't know, because I haven't been around long enough, but yeah, possibly they started using the red pro type there
In my case, 10+years ago I had 6 * 3tb Seagate disks in a software raid 5. Two of them failed and it took me days to force it back into the raid and get some of the data off. Now I use WD and raid 6.
I read 3 or 4 years ago that it was just the 3tb reds I used had a high failure rate but I’m still only buying WDs
this thread has multiple documented instances of poor QA and firmware bugs Seagate has implemented at the cost of their own customers.
my specific issue was even longer ago, 20+ years. there was a bug in the firmware where there was a buffer overflow from an int limit on runtime. it caused a cascade failure in the firmware and caused the drive to lock up after it ran for the maximum int limit. this is my understanding of it anyway.
the only solution was to purchase a board online for the exact model of your HDD and swap it and perform a firmware flash before time ran out. I think you could also use a clip and force program the firmware.
at the time a new board cost as much as a new drive, finances of which I didn’t have at the time.
eventually I moved past the 1tb of data I lost, but I will never willingly purchase another Seagate.
ZILtoid1991@lemmy.world
on 15 Jul 12:21
Can someone recommend me a hard drive that won't fail immediately? Internal, not SSD, since cheap ones will die even sooner, and I need it for archival reasons, not speed or fancy new tech; otherwise I have two SSDs.
Hard drives aren’t great for archival in general, but any modern drive should work. Grab multiple brands and make at least two copies. Look for sales. Externals regularly go below $15/tb these days.
I’ve got 6 in a random mix of brands (Seagate and WD) 8-16Tb that are all older than that. Running 24/7 storing mostly random shit I download. Pulled one out recently because the USB controller died. Still works in a different enclosure now.
I’d definitely have a different setup for data I actually cared about.
lightnsfw@reddthat.com
on 15 Jul 15:07
If you’re relying on one hard drive not failing to preserve your data you are doing it wrong from the jump. I’ve got about a dozen hard drives in play from seagate and WD at any given time (mostly seagate because they’re cheaper and I don’t need speed either) and haven’t had a failure yet. Backblaze used to publish stats about the hard drives they use, not sure if they still do but that would give you some data to go off. Seagate did put out some duds a while back but other models are fine.
The Backblaze stats were always useless because they would tell you what failed long after that run of drives was available.
There are only 3 manufacturers at this point, so just buy one or two of each color and call it a day. ZFS in raid z2 is good enough for most things at this point.
My WD Red Pros have almost all lasted me 7+ years but the best thing (and probably cheapest nowadays) is a proper 3-2-1 backup plan.
AdrianTheFrog@lemmy.world
on 16 Jul 01:45
I think refurbished enterprise drives usually have a lot of extra protection hardware that helps them last a very long time. Seagate advertises a mean time to failure on their exos drives of ~200 years with a moderate level of usage. I feel like it would almost always be a better choice to get more refurbished enterprise drives than fewer new consumer drives.
I personally found an 8tb exos on serverpartdeals for ~$100 which seems to be in very good condition after checking the SMART monitoring. I’m just using it as a backup so there isn’t any data on it that isn’t also somewhere else, so I didn’t bother with redundancy.
I’m not an expert, but this is just from the research I did before buying that backup drive.
Every manufacturer has made a product that failed.
GreenKnight23@lemmy.world
on 15 Jul 23:22
but not every manufacturer has had class action lawsuits filed against their continued shitty products.
muusemuuse@sh.itjust.works
on 15 Jul 21:24
I can certainly understand holding grudges against corporations. I didn't buy anything from Sony for a very long time after their fuckery with George Hotz, and Nintendo's latest horseshit has me staying away from them. But the Seagate thing was a single firmware bug that locked down hard drives (note, the data was still intact) a very long time ago. Seagate even issued a firmware update to prevent the bug from biting users it hadn't hit yet, but firmware updates at the time weren't really something people thought to ever do, and operating systems did not check for them automatically back then like they do now.
Seagate fucked up but they also did everything they could to make it right. That matters. Plus, look at their competition. WD famously lied about their red drives not being SMR when they actually were. And I’ve only ever had WD hard drives and sandisk flash drives die on me. And guess who owns sandisk? Western Digital!
I guess if you must go with another company, there's the louder and more expensive Toshiba drives, but I have never used those before so I know nothing about them aside from their reputation for being loud.
And I’ve only ever had WD hard drives and sandisk flash drives die on me
Maybe it's confirmation bias, but almost all memory that failed on me has been SanDisk flash storage. The only exception being a Corsair SSD which failed after 3 yrs as the main laptop drive + another 3 as a server boot and log drive.
muusemuuse@sh.itjust.works
on 16 Jul 21:40
The only flash drive I ever had fail me that wasn’t made by sandisk was a generic microcenter one, which was so cheap I couldn’t bring myself to care about it.
MystikIncarnate@lemmy.ca
on 16 Jul 02:50
I had a similar experience with Samsung. I had a bunch of evo 870 SSDs up and die for no reason. Turns out, it was a firmware bug in the drive and they just need an update, but the update needs to take place before the drive fails.
I had to RMA the failures. The rest were updated without incident and have been running perfectly ever since.
I’d still buy Samsung.
I didn’t lose a lot of data, but I can certainly understand holding a grudge on something like that. From the other comments here, hate for Seagate isn’t exactly rare.
Some of Seagate's drives have terrible scores on things like Backblaze. They are probably the worst brand, but also generally the cheapest.
I have been running a raid of old Seagate Barracudas for years at this point, including a lot of boot cycles and me forcing the system off because Truenas has issues or whatnot, and for some fucking reason they won't die.
I have had a WD Green SSD that I used for Truenas boot die, I had some WD external drive have its controller die (the drive inside still works), and I had some crappy WD mismatched drives in a raid 0 for my Linux ISO's and those failed as well.
Whenever the Seagate start to die, I guess ill be replacing them with Toshiba’s unless somebody has another suggestion.
Fair point. But still pretty bad. Literally two days after the warranty expired my Seagate drive was broken. This was my first and only Seagate drive. Never again.
Meanwhile my old Western Digital drive is still kicking way beyond its warranty. Almost 10 years now.
Does it really matter that much if the first copy takes a while though? Only doing it once and you don’t even have to do it all in 1 go. Just let it run over the weekend would do though.
Sure, if you have many TBs of data changes per day you probably want a different solution. But that would also suggest you don’t need to keep it for very long.
Write speeds on SMR drives start to stagnate after mere gigabytes written, not after terabytes. As soon as the CMR cache is full, you’re fucked, and it stagnates to utterly unusable speeds as it’s desperately trying to balance writing out blocks to the persistent area of the disk and accepting new incoming writes. I have 25 year old consumer level IDE drives that perform better than an SMR drive in this thrashing state.
Also, I often use hard drives as a temporary holding area for stuff that I’m transferring around for one reason or another and that absolutely sucks if an operation that normally takes an hour or two is suddenly becoming a multi-day endeavour tying up my computing resources. I was burned once when Seagate submarined SMR drives into the Barracuda line, and I got a drive that was absolutely unfit for purpose. Never again.
ArsonButCute@lemmy.dbzer0.com
on 15 Jul 21:22
collapse
My primary storage use-case is physical media backups. I literally don’t care how long it takes to store, a bluray is 70GB and I’ve got around 200 of em to backup.
samus12345@sh.itjust.works
on 15 Jul 16:09
That’s a lot of porn. And possibly other stuff, too.
I 'only' have 12 TB drives and yet my zfs-pool already needs ~two weeks to scrub it all. With something like this it would literally not be done before the next scheduled scrub.
I remember renting a game, and it was on a high density 5.25" floppy at a whopping 1.2MB; but our family computer only had a standard density 5.25".
So we went to the neighbors house, who was one of the first computer nerds (I’m not sure he’s still alive now), who copied the game to a 3.5" high density 1.44MB disk, then we returned the rental because we couldn’t play it on the 1.2 MB HD 5.25" floppy.
… And that was the first time I was party to piracy.
I’m not in the know of having your own personal data centers so I have no idea. … But how often is this necessary? Does accessing your own data on your hard drive require a scrub? I just have a 2tb on my home pc. Is the equivalent of a scrub like a disk clean up?
You usually scrub your pool about once a month, but there are no hard rules on that. The main problem with scrubbing is that it puts a heavy load on the pool, slowing it down.
Accessing the data does not need a scrub, it is only a routine maintenance task.
A scrub is not like a disk cleanup. With a disk cleanup you remove unneeded files and caches, maybe de-fragment as well. A scrub on the other hand validates that the data you stored on the pool is still the same as before. This is primarily to protect from things like bit rot.
There are many ways a drive can degrade. Sectors can become unreadable, random bits can flip, a write can be interrupted by a power outage, etc. Normal file systems like NTFS or ext4 can only handle this in limited ways. Mostly by deleting the corrupted data.
ZFS on the other hand is built using redundant storage. Storing the data spread over multiple drives in a special way allowing it to recover most corruption and even survive the complete failure of a disk. This comes at the cost of losing some capacity however.
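A minimal sketch of the idea (not of ZFS's actual implementation): re-read every stored block, check it against the checksum recorded at write time, and heal it from a redundant copy if it no longer matches.

```python
# Minimal sketch of the *idea* behind a scrub: re-read every stored block,
# verify it against the checksum recorded at write time, and repair it from a
# redundant copy if it no longer matches. A toy illustration, not how ZFS is
# actually implemented.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Two mirrored "disks": each block stored twice, plus its checksum from write time.
pool = {
    0: {"copies": [b"family photos", b"family photos"], "sum": checksum(b"family photos")},
    1: {"copies": [b"tax records", b"tax recXrds"], "sum": checksum(b"tax records")},  # bit rot
}

def scrub(pool: dict) -> None:
    for block_id, block in pool.items():
        good = [c for c in block["copies"] if checksum(c) == block["sum"]]
        bad = len(block["copies"]) - len(good)
        if bad and good:
            block["copies"] = [good[0]] * len(block["copies"])  # heal from a good copy
            print(f"block {block_id}: repaired {bad} corrupted copy(ies)")
        elif bad:
            print(f"block {block_id}: unrecoverable, all copies corrupt")
        else:
            print(f"block {block_id}: ok")

scrub(pool)
```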
Thank you for all this information. One day when my ADHD forces me into making myself a home server I'll remember this and keep it in mind. I've always wanted to store movies but these days it's just family pictures and stuff. Definitely don't have terabytes but I'm getting up to 100s of GB.
I have 2*12TB white-label WD drives (harvested from external drives but datacenter drives according to the SN) and one 16 TB Toshiba white-label (purchased directly, also meant for datacenters) in a raidz1.
How full is your pool? Mine is about 2/3rds full, which impacts scrubbing I think.
I also frequently access the pool which delays scrubbing.
It’s like 90% full, scrubbing my pool is always super fast.
Two weeks to scrub the pool sounds like something is wrong tbh.
SuperUserDO@sh.itjust.works
on 16 Jul 04:44
There is an enterprise storage shelf (aka a bunch of drives that hooks up to a server) made by Dell which is 1.2 PB (yes petabytes). So there is a use, but it’s not for consumers.
That’s a use-case for a fuckton of total capacity, but not necessarily a fuckton of per-drive capacity. I think what the grandparent comment is really trying to say is that the capacity has so vastly outstripped mechanical-disk data transfer speed that it’s hard to actually make use of it all.
For example, let’s say you have these running in a RAID 5 array, and one of the drives fails and you have to swap it out. At 190MB/s max sustained transfer rate (figure for a 28TB Seagate Exos; I assume this new one is similar), you’re talking about over two days just to copy over the parity information and get the array out of degraded mode! At some point these big drives stop being suitable for that use-case just because the vulnerability window is so large that the risk of a second drive failure causing data loss is too great.
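Roughly the same arithmetic, worked through, plus a crude estimate of the risk during that window; the 190 MB/s figure is from the comment above, while the 1.5% annualized failure rate and the 8-drive group are assumptions for illustration.

```python
# Worked version of the arithmetic above: how long the degraded window lasts for
# a 36 TB rebuild at ~190 MB/s sustained (the commenter's figure), plus a crude
# estimate of the chance another drive fails inside that window. The 1.5% AFR
# and the 8-drive group are assumptions for illustration only.
import math

CAPACITY_TB = 36
THROUGHPUT_MBPS = 190
REMAINING_DRIVES = 7       # e.g. an 8-drive group minus the failed member
AFR = 0.015                # assumed annualized failure rate per drive

rebuild_seconds = CAPACITY_TB * 1e12 / (THROUGHPUT_MBPS * 1e6)
rebuild_days = rebuild_seconds / 86_400

# Treat failures as independent with a constant rate (optimistic, since rebuild
# stress raises the real risk): P(any remaining drive fails during the window).
rate_per_day = AFR / 365
p_second_failure = 1 - math.exp(-REMAINING_DRIVES * rate_per_day * rebuild_days)

print(f"rebuild window: {rebuild_days:.1f} days")
print(f"chance of another failure during the window: {p_second_failure:.2%}")
```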
SuperUserDO@sh.itjust.works
on 16 Jul 15:54
I get it. But the moment we invoke RAID, or ZFS, we are outside what standard consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving docker are well past what the bulk of the world understands.
The trouble with spinning platters this big is that if a drive fails, it will take a long time to rebuild the array after shoving a new one in there. Sysadmins will be nervous about another failure taking out the whole array until that process is complete, and that can take days. There was some debate a while back on if the industry even wanted spinning platters >20TB. Some are willing to give up density if it means less worry.
I guess Seagate decided to go ahead, anyway, but the industry may be reluctant to buy this.
Tollana1234567@lemmy.today
on 15 Jul 07:55
For porn. But who downloads porn nowadays, unless it's the illegal kind? To the idiot below who brigaded and downvoted: I was repeating other people saying it was for porn.
Imagine having that…then dropping it…
Summon Linus
That’s a lot of porn.
And linux distros
Werd
Just say it’s full of porn, it’s easier to explain
Depends on the audience tbh
Always keep an NSFW tab open to swap to so your family doesn't see you on the Arch Linux wiki.
I have around 150 distros seeding 🤣. I need to get those numbers up!
Honestly, when I first got into forums, I thought they were literally talking about Linux distros, because at the time, that’s literally all I was seeding since that’s what I was into.
You’d go broke. Of course it’s all Linux, family archives and DNA test data, BTC blockchain, backed up FOSS projects, archives of Wikipedia, Project Gutenberg and OpenStreetMap, and of course - POVRay renders.
You wouldn’t download your mom.
No, but I have downloaded yours.
bstix’s mom, has got it going on.
I have seeded your mom.
Well, largest this week. And
Nah, a 24TB is $300 and some 20TB’s are even lower $ per TB.
I paid $600+ for a 24 TB drive, tax free. I feel robbed. Although I’m glad not to shop at Newegg.
Yes, fuck Newegg (and amazon too). I’ve been using B&H for disks and I have no complaints about them. They have the Seagate Ironwolf Pro 24TB at $479 currently, but last week it was on sale for $419. (I only look at 5yr warranty disks.)
I was not in a position to take advantage as I’ve already made my disk purchase this go around, so I’ll wait for the next deep discount to hit if it is timely.
I hate amazon but haven’t been following stuff about newegg and have been buying from them now and then. No probs so far but yeah, B&H is also good. Also centralcomputer.com if you are in the SF bay area. Actual stores.
Newegg was the nerd's paradise 10+ years ago. I would spend thousands each year on my homelab back then. They had great customer service and bent over backwards for their customers. Then they got bought out and squeezed and passed that squeeze right down to the customers. Accusing customers of damaging parts, etc. Lots of slimeball stuff. They also wanted to be like amazon, so they started selling beads, blenders and other assorted garbage alongside tech gear.
After a couple of minor incidents with them I saw the writing on the wall and went to amazon who were somewhat okay then. Once amazon started getting bad, I turned to B&H and fleaBay. I don’t buy as much electronic stuff as I used to, but when I do these two are working…so far.
What is B&H?
I’ve recently bought a series of 24TB drives from both Amazon and Newegg. Each one I got was either DOA or shortly thereafter. I just gave up but I would love to have a better source.
bhphotovideo.com/…/seagate_st24000nt002_ironwolf_…
They are a retailer in NYC. Their specialties (historically) lie in photography and all the tech surrounding that.
Christ, remember when NewEgg was an actual store? Now they’re just a listing service for the scum-level of retailer and drop shippers. What a shame.
I got some 16TB drives recently for around $200 each, though they were manufacturer recertified. Usually a recertified drive will save you 20-40%. Shipping can be a fortune though.
EDIT: I used manufacturer recertified, not refurbished drives.
Refurbished drives sound scary. Any data to point towards that not being a problem?
I would absolutely not use refurbs personally. As part of the refurb process they wipe the SMART data which means you have zero power-on hours listed, zero errors, rewrite-count, etc - absolutely no idea what their previous life was.
Thanks! It seems too risky for something like a hard drive.
If you’ve got a RAID array with 1 or 2 parity then manufacturer recertified drives are fine; those are typically drives that just aged out before being deployed, or were traded in when a large array upgraded.
If you’re really paranoid you should be mixing mfg dates anyway, so keep some factory new and then add the recerts so the drive pools have a healthy split.
Yep staggering manufacturing dates is a good suggestion. I do it but it does make purchasing during sales periods to get good prices harder. Better than losing multiple drives at once, but RAID needs a backup anyway and nobody should skip that step.
I mean a backup of a RAID pool is likely just another RAID pool (ideally off-site) – maybe a tape library if you’ve got considerable cash.
Point is that mfg refurbs are basically fine, just be responsible, if your backup pool runs infrequently then that’s a good candidate for more white label drives.
As mentioned by another user, all drives fail, it’s a matter of when, not if. Which is why you should always use RAID arrangement with at least one redundant drive and/or have full backups.
Ultimately, it’s a money game. If you save 30% on a recertified drive and it has 20% less total life than a new one, you’re winning.
Here’s where I got some.
serverpartdeals.com/…/manufacturer-recertified-dr…
I looked around a bit, and either search engines suck nowadays (possibly true regardless) or there are no independent studies comparing certified and new drives.
All you get is mostly opinion pieces or promises from resellers that, actually, their products are good. Clearly no conflict of interest there. /s
The best I could find was this, but that’s not amazing either.
What I do is look at backblaze’s drive stats for their new drives, find a model that has a good amount of data and low failure rate, then get a recertified one and hope their recertification process is good and I don’t get a lemon.
And usually by the time they break they have been obsolete anyways, at least for 24/7 use in a NAS where storage density and energy efficiency are a big concern. So you would have replaced most of them long before they break, even with recertified drives
Omg I really have been out of the loop. I originally filled my 8 bay NAS with 6tb drives starting back in 2018. Once they would fill, i added another. 3 years ago, I finally ran out of space and started swapping out the 6tb for 10tb. Due to how it works, I needed to do 2 before I saw any additional space. I think i have 3 or 4 now, and the last one was 2 years ago. They did cost around $250 at the time, and I think i got 1 for just over $200. The fact that I can more than double that for only $300 is crazy news to me. Guess I am going to stop buying 10tb now. The only part that sucks is having to get 2 up front…
I bought 8TB for something like $300. 36TB seems quite attractive.
Depends on your use case. According to Seagate's spec sheet, the linked drive is only rated for about 6.5 power-on hours per day (2,400 per year). So if it's just in your desktop for storage, then sure. In an always-on (or mostly-on) NAS, I'd find a different drive. It'll work fine, but expect higher failure rates for that use.
You need a week to fill the hecking disk. flips server rack up in disappointment
But this would be great for tape-like storage where you only need to write once and maybe query little individual bits of it. Slap RAID on top of it and you’ve got yourself your own nation state intelligence service datastore.
So how much data would I lose when it dies?
Edit for those who didn't read the smirk: yes, 36 TB, as a way to point out what someone answered below: if you're using a drive this big, have your data recovery procedures on fleek.
Assuming you aren’t striping, up to 36 TB. If you follow even halfway decent practices with basically any kind of RAID other than 0, hopefully 0 Bytes.
The main worry with stuff like this is that it potentially takes a while to recover from a failed drive even if you catch it in time (alert systems are your friend). And 36 TB is a LOT of data to work through and recover which means a LOT of stress on the remaining drives for a few days.
I think you mean “are striping”.
But even with striping you have backups right? Local redundancy is for availability, not durability.
Words hard
And I would go so far as to say that nobody who is buying 36 TB spinners is doing offsite backups of that data. For any org doing offsites of that much data you are almost guaranteed using a tape drive of some form because… they pay for themselves pretty fast and are much better for actual cold storage backups.
Seagate et al keep pushing for these truly massive spinners and I really do wonder who the market is for them. They are overly expensive for cold storage and basically any setup with that volume of data is going to be better off slowly rotating out smaller drives. Partially because of recovery times and partially because nobody but a sponsored youtuber is throwing out their 24 TB drives because 36 TB hit the market.
I assume these are a byproduct of some actually useful tech that is sold to help offset the costs while maybe REALLY REALLY REALLY want 72 TBs in their four bay Synology.
I wouldn’t buy a Synology but either way I’d want a 5 or 6 bay for raid-6 with two parity drives. Going from 4 bay (raid 6 or 10) to 5 bay (raid 6) is 50% more user data for 25% more drives. I wouldn’t do raid 5 with drives of this size.
Been a long time since I set foot in a data center; are tape drives not still king for cold storage of data?
It depends on the size/“disruptiveness” of the company but yeah. You either have your own tape back up system or you contract out to someone who does and try not to think about what it means to be doing a glorified rsync of all your data offsite every week.
I wouldn’t quite go so far as to say anyone doing genuine offsite backups using a spinning disc is wrong but…
The caveat I’ll carve out is the hobbyist space where a lot of us will back up truly essential data to a cloud bucket or even a friend/family member’s NAS. I… still think that is more wrong than not but (assuming you trust them and they have proper practices) it is probably the best way for a hobbyist to keep a backup without worrying about that USB drive degrading since it gets plugged in once a year.
I can’t criticize other hobbyists. I only back up locally and I use Synology Hybrid Raid to do it.
Was this a typo? I would expect that almost everyone who is buying these is doing offsite backups. Who has this amount of data density and is ok with losing it?
Yes, they are quite possibly using tape for these backups (either directly or through some cloud service) but you still want offsite backups. Otherwise a bad fire and you lose it all.
It would probably take days to rebuild the array.
It’s important to also note that RAID (or alternatives such as unRAID) are not backup systems and should not be relied on as such. If you have a severe brownout that fries more than two or three drives at once, for example, you will lose data if you’re not backing up.
about 36TB?
Nooooooooo not all my pr0ns!!
Defragmenting…
<img alt="" src="https://lemmy.world/pictrs/image/223032f5-e35e-4d73-b638-84143da4602f.gif">
I’ve never had to defragment the ext4 drives in my server. Ext4 is fairly resistant to fragmentation.
It’s not really Ext4 doing that, it’s a bunch of tricks in the OS layer and the way apps write files to storage that limits it.
You’ll see it if you use something like a BT client without pre-allocation, those files can get heavily fragmented depending on the download speed.
Man, I used to LOVE defragmenting drives. I felt like I was actually doing something productive, and I just got to sit back and watch the magic happen.
Now I know better.
One of the worst things that the newer Windows versions did is get rid of that little view of defragmenting. It was much more interesting than watching a number slowly tick up.
I feel seen
I think if I needed to store 36TB of data, I would rather get several smaller disks.
But if you hate your data there’s no quicker way to lose it than a single 36TB Seagate drive.
That’s why Seagate is the last word in the title.
I don’t think the target audience of this drive is buying one. They are trying to optimize for density and are probably buying in bulk rather than paying the $800 price tag.
But if you need a Petabyte of data you’ll appreciate this existing
That's roughly what I have now, and I only have about 200 GB left, so I kind of wish I could get a little more right now. This is across 7 drives. I really hope storing data becomes faster and cheaper in the future, because as it has kept growing over the past few decades, it takes longer and longer to replace and move this much data…
Well, it does cost less and less every year. I bought two 8TB drives for $300 each or so, and today a 24TB drive is about that much.
SSDs are getting crazy cheap.
If you need 10tb of storage, you could get 2x used 10tb hdds in raid 1 for $200, but 6x used 2tb nvme in raid 5 is only $600 and 100x faster. Both take up the same amount of space.
Multiple drives in a RAID.
Hello
It will take about 36 hours to fill this drive at 270 MB/s
That’s a long time to backup your giraffe porn collection.
What kind of degenerate do you think I am? That’s 36 hours to back up my walrus porn collection.
<img alt="" src="https://lemmy.world/pictrs/image/c7eb808e-2c07-4801-8dde-525b7953f28f.jpeg">
…or at least call a rubber walrus protector salesman!
Woah I haven’t thought about that since high school. I vaguely remember an inside joke between some dope smoking buddies and i where we would say call the police in that nervous voice
How you 'bout to call me out like that ?
How did you know about my giraffe porn?
Yeah, but it's Seagate. I have worked in data centers, and Seagate drives had the most failures of all my drives, and somehow they're still in business. I'd say I was doing an RMA of 5-6 drives a month that were Seagate, and only 4-5 a year that were Western Digital.
Out of the roughly 20 drives I’ve bought over the last decade or so, the only two failures were Seagate and they only made up five of the drives purchased. The other 15 are WD and all have been great (knock on wood).
I've had the same experience. The first HDD that failed on me was a Barracuda 7200.11 with the infamous firmware self-brick issue, and a second 7200.11 that just died slowly from bad sectors.
From then on I only bought WD. I have a Caviar Black 1TB from oh, 2009-ish that's still in service, though it's finally starting to concern me with its higher temperature readings, probably the motor bearings going. After that I've got a few of the WD RE4 1TBs still running like new, and 6 various other WD Gold series drives, all running happily.
The only WD failure I’ve had was from improper shipping, when TigerDirect (rip) didn’t pack the drive correctly, and the carrier football tossed the thing at my porch, it was losing sectors as soon as it first started, but the RMA drive that replaced it is still running in a server fine.
I hear you. I’m not sure I’ve ever had a Seagate drive not fail on me.
What models of Seagate drives?
I’ve been running x4 Seagate ST8000NC0002s 24/7 for almost 5 years, plus 2 more I added about 6 months ago and they’ve never given me any trouble.
To be fair, the only HDDs I’ve ever had that failed were two I dropped because I wasn’t being careful enough.
All over the map: Barracuda, SkyHawk, Ironwolf, Constellation, Cheetah, etc…
Every drive I’ve had fail has been a Seagate. I replace them out of habit at this point.
I use all WD Golds for storage now but I have some Seagate barracudas from 2005 that still work. I don’t use them anymore but the data is still there. I fire them up every so often to see. I know that’s purely situational. I pretty much only buy WD now.
.
Is that just observational, or did you keep track? Backblaze does track their failures, and publishes their data: backblaze.com/…/backblaze-drive-stats-for-q1-2025…
And they do have more Seagate failures than other brands, but that’s because they have more Seagates than other brands. Seagate is generally pretty good value for the money.
IMO, it's not a brand issue. It's a seller/batch/brand issue. Hard drives are sensitive to vibration, and if you buy multiple drives from the same place, at the same time, and all the same brand and model, you might be setting yourself up for a bad experience if someone accidentally slammed those boxes around earlier in their life.
I highly recommend everyone buy their drives from different sellers, at different times, spread out over various models from different brands. This helps eliminate the bad batch issue.
Yeah. In the Backblaze data, you can see that annualized failure rates vary significantly by drive model within the same manufacturer.
But if maintaining drive diversity isn’t your thing, just buy a cold spare and swap it out when a failure inevitably happens (and then replace the spare).
The problem with same batch drives is failing together, potentially beyond your ability to recover with replacement drives.
Three companies, kept track, but not after I left. It was always funny to me that they bought out Atlas and Maxtor. “Of course they did. Why not dominate the market on shitty drives? lol” I am surprised they hadn’t bought Deskstar.
Hello! 👋
Hello
Hello
👋
Why does this have so many up votes
Check the post title ;)
Howdy! 🤠
Seagate so how long before it fails?
About 3 hours.
At least it’s not a WD POS
It comes with three monkeys inside for redundancy:
<img alt="" src="https://lemmy.today/pictrs/image/73f563ec-cf6d-47f0-aa4e-640b3f490fb3.webp">
In my experience, not all Seagates will fail but most HDD’s that fail will be Seagates.
Because Seagate sell the most drives and all drives fail?
The thing is I'm a data hoarder who buys lots of HDDs, both new and used. I have only bought a few Seagates. It's always the Seagates that are fucked. I had a Toshiba and a Western Digital fail on me, but I have had 5 Seagates fail on me. Could be a coincidence, sure, but the brand I have bought the fewest of had the most failures. I recognize this is not scientific in any way. I recently bought a brand new 8TB Seagate Barracuda and it's still going strong. I hope it lasts a good while. My oldest drive is a 1TB Hitachi (RIP) from 2008. I can't wait for 8TB SSDs to become cheaper.
Nah, as a fellow data hoarder you’re 100% correct. I have a couple of dozen disks, and I’ve had failures from both Seagate and WD, but the Seagates have failed much more often. For the past couple of years, I’ve only purchased WD for this reason. I’m down to two Seagate drives now.
I feel like many people with a distaste for WD got burned by the consumer drives (especially the WD Greens). WD’s DC line is so good though, especially HC530.
I mostly buy new Toshiba drives now. The WD Blue drives are fine; I have a few of them. I have a WD Red that is reporting surface errors, but it’s still going and the number of errors hasn’t increased, so I’m not stressing about replacing it. Also, btrfs gives me peace of mind because I can periodically check whether my filesystem has corrupted data.
Any hint about the ironwolfs?
I’ve had my 16TB Ironwolf Pros spinning for 5 years in my NAS, no issues. People love to trash Seagate but I can’t say I’ve had any issues. I also have 6x10TB Barracuda Pros and they’ve been fine too, for about 10 years.
Not really: techradar.com/…/worlds-largest-ssd-is-on-sale-for…
SSD ≠ HDD
Never change pedantic Internet, never change!
I wanna fuck this HDD. To have that much storage on one drive when I currently have ~30TB shared between 20 drives makes me very erect.
nephew
Average Lemmy user
Ain’t nothing about me is average except for the size of my cock.
Your array sounds pretty average to be fair
twenty!?
Yeah, lots of drives of varying capacity.
Why did they make an enterprise grade drive SMR? I’m out.
<img alt="" src="https://lemmy.world/pictrs/image/3ee957f7-f5f4-4b3d-9e69-edd0ed162968.jpeg">
For affordable set it and forget it cold storage, this is incredible. For anything actively being touched, yeah definitely a pass.
Because they simply cannot do it otherwise.
That’s fine… they don’t need to release it under their Exos line of enterprise drives. SMR drives don’t do well in RAID arrays, especially not highly utilized ones. They require idle time to clean up, and the rebuild times are horrendous.
SMR is designed for enterprise raid that is SMR-aware.
I’m not aware of any open-source zoned storage raid but I think Ceph is planning to add support next month.
zonedstorage.io/docs/getting-started/smr-disk
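If you’re curious which model a given disk actually presents, Linux exposes the zone model in sysfs. A minimal sketch (assumes a Linux box; note that drive-managed SMR hides the shingling from the host, so it still reports “none” here — only host-aware/host-managed drives show up):

```python
# Minimal sketch: print the zone model Linux reports for each block device.
# Host-aware / host-managed SMR drives show up here; drive-managed SMR hides
# the shingling from the host and will still report "none".
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    zoned = dev / "queue" / "zoned"
    if zoned.exists():
        print(f"{dev.name}: {zoned.read_text().strip()}")
```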
There are a number of enterprise storage systems optimized specifically for SMR drives. This is targeting actual data centers, not us humble homelabbers masquerading as enterprises.
LMAO!!
with this I can store at least 3 modern “AAA” games
More like zero, cause modern AAA games require an NVMe drive (or at least an SSD) and this is a good old-fashioned 7200 RPM drive.
Surely no games actually require an SSD?
A lot of modern AAA games require an SSD, actually.
Off the top of my head:
Cyberpunk, Marvel’s Spider-Man 2, Hogwarts Legacy, Dead Space remake, Starfield, Baldur’s Gate 3, Palworld, Ratchet & Clank: Rift Apart
It’s not a hard requirement.
They stream data from it while you play, so if you don’t have an SSD you’ll get pauses in game play.
Sure, you might.
But Baldur’s Gate 3, for example, which claims to require an SSD in its system requirements, runs just fine on an HDD.
It’s the developer making sure you get optimal performance.
Once upon a time there was minimum/recommended specs. 😔
I can personally guarantee that it is a hard requirement for Spider-Man and Ratchet
That’s not how computers work, but sure bro.
Okay well try telling that to my computer when the games wouldn’t run without constantly freezing to load assets every few seconds.
But it is a hard drive requirement.
Indeed, as others have said this isn’t a hard requirement. Anyone with a handheld (e.g. Steam Deck) playing off a uSD card uses a device that’s an order of magnitude slower for sequential I/O
Both Cyberpunk and BG3 work flawlessly on the external USB hard drive that I use. The loading times suffer a bit, but not to an unplayable degree, not even close
Cyberpunk literally has an HDD mode; I play it off an HDD every day.
With sufficient RAM to load everything in, you’ll just have longer load times, no HDD hitchiness.
Forza Horizon 4 and 5 don’t say they require an SSD I think, but when I had it on my hard drive any cars that did over 250kph caused significant world loading issues, as in I’d fall out of the world because it didn’t load the map.
Forza Horizon 4 actually does include an SSD in its requirements. Thank you for reminding me about that.
It does technically work without one; just don’t go over A class, don’t do sprints, and there was one normal circuit in a forest section that’s a tad big.
If a game isn’t fully playable without an SSD, then I consider it a requirement.
Ever try playing Perfect Dark without an Expansion Pak back in the day? It’ll technically work, but you’ll get locked out of 90% of the game, including the campaign. Similar thing with SSDs today.
Oh definitely, game sizes are getting extreme and I prefer smaller indie games now 🥲
Are people still mining Chia?
Get your meds, man
Makes me shudder. I have to replace a drive in my array, because it is degraded. It’s a 4TB. Imagine having to replace one of these. I’d much rather have a bunch of cheaper drives, even if they are a bit more expensive per TB, because the replacement cost will eventually make the total cost of ownership lower.
Also, repeat with me: “Please give me a Toshiba or Hitachi, please”
Until you run out of ports or cage space 😂
So if you have been around long enough you might remember the Hitachi (IBM) Deathstars: wizardprang.wordpress.com/…/the-last-deathstar/
I see Hitachi and think no fucking way, whereas Seagate I used to see as an always yes. Now I just stick the disks in a ZFS array and call it done.
What I’m really waiting for is large capacity ssds with sata.
Yes, I remember Deathstars. However, these past years I perfunctorily peruse Backblaze’s yearly drive failure reports, and have noticed a trend: most drives are fine, but every year there are a few that stand out as very bad, and they’re usually Seagate/WDC.
Exceptions yada, yada
Do people actually use such massive hard drives? I still have my 1 TB HDD in my PC (and a 512 GB SSD), lol.
Data hoarders could be happy, but otherwise it’s mostly enterprise use.
Still, I personally hold about 4 TB of files, and I know people holding over 30 TB.
As soon as your storage needs exceed 1-2 games and a bunch of old photos, demand for space rises quickly.
I have 50TB of data total: archival, old projects, backups, backups of my physical media, etc.
This is an enterprise drive, so it’s useful for any use case where a business needs to store a lot of lightly used data, like historical records that might be accessed infrequently for reporting and therefore shouldn’t be transferred to cold storage.
For a real world example, the business I’m currently contracting at is legally required to retain safety documentation for every machine in every plant they work in. Since the company does contract work in other people’s plants, that’s hundreds of PDFs (many of which are 50+ page scans of paper forms) per plant, and hundreds of plants. It all adds up very quickly. We also have a daily log process where our field workers photograph and log all of their work every single workday for the customer. Some of these logs contain hundreds of photographs depending on the customer’s requirements. These logs are generated every day at every plant, so again it adds up to a lot of data being created each month.
I have just shy of 8TB of data on my home file server.
That’s not including my NVR (for security cameras) which has a single 6TB SATA drive sitting around 40% capacity.
Pretty sure I had a bigger hard drive than that for my Amiga. You could have broken a toe if you’d dropped it.
no thanks Seagate. the trauma of losing my data because of a botched firmware with a ticking time bomb kinda put me off your products for life.
see you in hell.
but then wd and their fake red nas drives with smr tech?
what else we have?
Wait… fake? I just bought some of those.
they were selling wd red (pro?) drives with smr tech, which is known to be disastrous for disk arrays because both traditional raid and zfs tend to throw them out. the reason is that when you fill one up, especially quickly, it eventually can’t keep up with your writes: write operations take a very long time because the disk needs to rearrange its data before it can write more. but raid solutions just see that the drive hasn’t responded to the write command for a long time, and they assume that’s because the drive is bad.
it was a few years ago, but it was a shitfest because they didn’t disclose it, and people were expecting that nas drives would work fine in their nas.
I’ve had a couple randomly drop from my array recently, but they were older so I didn’t think twice about it. Does this permafry them, or can you remove them from the array and reinitialize them to get them working again?
well, it depends. if they were dropped just because they are smr and were writing slowly, I think they are fine. but otherwise…
what array system do you use? some raid software, or zfs?
Windows Server storage solutions. I took them out of the array and they still weren’t recognized in Disk Management so I assume they’re shot. It was just weird having 2 fail the same way.
I don’t have experience with windows server, but that indeed sounds like these are dead. you could check them with some pendrive bootable live linux, whether it sees them, like gparted’s edition, in case windows just hides them because it blacklisted them or something
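Something like this is roughly what you’d run from that live environment to see whether the kernel detects the disks at all and what SMART thinks of them. Just a sketch: it assumes smartmontools is installed and that you’re running as root, and the device names will obviously differ per machine.

```python
# Sketch: list the SATA/SAS block devices the kernel can see and ask smartctl
# for its overall health verdict. Assumes smartmontools is installed, run as root.
import subprocess
from pathlib import Path

for dev in sorted(Path("/sys/block").glob("sd*")):
    sectors = int((dev / "size").read_text())          # size in 512-byte sectors
    print(f"/dev/{dev.name}: {sectors * 512 / 1e12:.2f} TB")
    subprocess.run(["smartctl", "-H", f"/dev/{dev.name}"], check=False)
```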
Didn’t they use to have only one “Red” designation? Or maybe I’m hallucinating. I thought “Red Pro” was introduced after that kerfuffle to distinguish the SMR from the CMR.
I don’t know, because haven’t been around long enough, but yeah possibly they started using the red pro type there
Elaborate please?
In my case, 10+ years ago I had 6x 3TB Seagate disks in a software RAID 5. Two of them failed, and it took me days to force the array back together and get some of the data off. Now I use WD and RAID 6.
I read 3 or 4 years ago that it was just the 3TB reds I used that had a high failure rate, but I’m still only buying WDs.
Thanks, yeah that makes sense.
I had a single red 2TB in an old tivo roamio for almost a decade.
Pulled it out this weekend and finally tested it. Failed.
I was planning to move my 1.5T music collection to it. Glad I tested it first, lol.
eevblog.com/…/whats-behind-the-infamous-seagate-b…
this thread has multiple documented instances of poor QA and firmware bugs Seagate has shipped at the cost of their own customers.
my specific issue was even longer ago, 20+ years. there was a bug in the firmware where a runtime counter overflowed an int limit, which caused a cascade failure in the firmware and made the drive lock up once it had been running for that maximum. this is my understanding of it anyway.
the only solution was to purchase a board online for the exact model of your HDD and swap it and perform a firmware flash before time ran out. I think you could also use a clip and force program the firmware.
at the time a new board cost as much as a new drive, finances of which I didn’t have at the time.
eventually I moved past the 1tb of data I lost, but I will never willingly purchase another Seagate.
Can someone recommend me a hard drive that won’t fail immediately? Internal, not SSD (the cheap ones of which will die even sooner), and I need it for archival reasons, not speed or fancy new tech; otherwise I have two SSDs.
Hard drives aren’t great for archival in general, but any modern drive should work. Grab multiple brands and make at least two copies. Look for sales. Externals regularly go below $15/tb these days.
Word to the wise: those externals usually won’t last 5+ years of constant use as an internal.
I’ve got 6 in a random mix of brands (Seagate and WD), 8-16TB, that are all older than that. Running 24/7 storing mostly random shit I download. Pulled one out recently because the USB controller died. Still works in a different enclosure now.
I’d definitely have a different setup for data I actually cared about.
If you’re relying on one hard drive not failing to preserve your data you are doing it wrong from the jump. I’ve got about a dozen hard drives in play from seagate and WD at any given time (mostly seagate because they’re cheaper and I don’t need speed either) and haven’t had a failure yet. Backblaze used to publish stats about the hard drives they use, not sure if they still do but that would give you some data to go off. Seagate did put out some duds a while back but other models are fine.
The Backblaze stats were always useless because they would tell you what failed long after that run of drives was available.
There are only 3 manufacturers at this point, so just buy one or two of each color and call it a day. ZFS in RAID-Z2 is good enough for most things at this point.
My WD Red Pros have almost all lasted me 7+ years but the best thing (and probably cheapest nowadays) is a proper 3-2-1 backup plan.
I think refurbished enterprise drives usually have a lot of extra protection hardware that helps them last a very long time. Seagate advertises a mean time to failure on their exos drives of ~200 years with a moderate level of usage. I feel like it would almost always be a better choice to get more refurbished enterprise drives than fewer new consumer drives.
I personally found an 8tb exos on serverpartdeals for ~$100 which seems to be in very good condition after checking the SMART monitoring. I’m just using it as a backup so there isn’t any data on it that isn’t also somewhere else, so I didn’t bother with redundancy.
I’m not an expert, but this is just from the research I did before buying that backup drive.
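For anyone wondering what a “~200 year” MTTF/MTBF actually buys you: it doesn’t mean the drive lasts 200 years, it means a low annualized failure rate. A quick back-of-the-envelope conversion, using the standard exponential failure model and the ~200-year figure quoted above:

```python
# Convert a quoted MTBF into an annualized failure rate (AFR), assuming the
# usual exponential failure model. The ~200-year figure is from the comment above.
import math

HOURS_PER_YEAR = 8766
mtbf_hours = 200 * HOURS_PER_YEAR

afr = 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)
print(f"MTBF of {mtbf_hours:,} hours -> AFR of about {afr:.2%} per year")
```

So roughly one drive in 200 is still expected to die each year, which is why redundancy and backups matter even with enterprise drives.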
Every manufacturer has made a product that failed.
but not every manufacturer has had class action lawsuits filed against their continued shitty products.
I can certainly understand holding grudges against corporations. I didn’t buy anything from Sony for a very long time after their fuckery with George Hotz, and Nintendo’s latest horseshit has me staying away from them. But the Seagate thing was a single firmware bug that locked down hard drives (note, the data was still intact) a very long time ago. Seagate even issued a firmware update to prevent the bug from biting users it hadn’t hit yet, but firmware updates at the time weren’t really something people thought to ever do, and operating systems did not check for them automatically back then like they do now.
Seagate fucked up but they also did everything they could to make it right. That matters. Plus, look at their competition. WD famously lied about their red drives not being SMR when they actually were. And I’ve only ever had WD hard drives and sandisk flash drives die on me. And guess who owns sandisk? Western Digital!
I guess if you must go with another company, there’s the louder and more expensive Toshiba drives, but I have never used those before, so I know nothing about them aside from their reputation for being loud.
Maybe it’s confirmation bias, but almost all memory that failed on me has been SanDisk flash storage. The only exception being a Corsair SSD which failed after 3 years as the main laptop drive + another 3 as a server boot and log drive.
The only flash drive I ever had fail me that wasn’t made by sandisk was a generic microcenter one, which was so cheap I couldn’t bring myself to care about it.
I had a similar experience with Samsung. I had a bunch of evo 870 SSDs up and die for no reason. Turns out, it was a firmware bug in the drive and they just need an update, but the update needs to take place before the drive fails.
I had to RMA the failures. The rest were updated without incident and have been running perfectly ever since.
I’d still buy Samsung.
I didn’t lose a lot of data, but I can certainly understand holding a grudge on something like that. From the other comments here, hate for Seagate isn’t exactly rare.
Some of Seagate’s drives have terrible scores on things like Backblaze. They are probably the worst brand, but also generally the cheapest.
I have been running a raid of old Seagate Barracudas for years at this point, including a lot of boot cycles and me forcing the system off because TrueNAS has issues or whatnot, and for some fucking reason they won’t die.
I have had a WD Green SSD that I use for TrueNAS boot die, I had a WD external drive have its controller die (the drive inside still works), and I had some crappy mismatched WD drives in a RAID 0 for my Linux ISOs and those failed as well.
Whenever the Seagates start to die, I guess I’ll be replacing them with Toshibas unless somebody has another suggestion.
Is Seagate still producing shitty drives that fail a few days after the warranty expires?
Hey, they told you how long they expected it to last 😅
Fair point. But still pretty bad. Literally two days after the warranty expired my Seagate drive was broken. This was my first and only Seagate drive. Never again.
Meanwhile my old Western Digital drive is still kicking way beyond its warranty. Almost 10 years now.
Mine have been going strong for five years. Ironwolf Pros.
Some models are quite a bit worse than average, while some are on par with the competition.
<img alt="" src="https://mander.xyz/pictrs/image/259b460a-acd0-4d34-9f0d-1c1a4cd9e41a.png">
Sorry but without a banana for scale it’s hard to tell how big it really is
36 Typical Bananas
That’s quite large, then.
I wonder how many pictures of nude bananas you could fit inside??
Depending on the quality you want to deal with, at least 3.
28 plantains
I’m gonna need like 6 of these
*monkey’s paw curls* They’re SMR
Seems fine with a couple TB of SSDs to act as active storage with regular rsyncs back to the HDDs. This is fine.
The first copy of anything big will suck ass… and why else would you get a 36TB drive if not to copy a lot of data to it?
Does it really matter that much if the first copy takes a while, though? You only do it once, and you don’t even have to do it all in one go. Just letting it run over the weekend would do.
It matters to me. I got stuff to back up regularly, and I ain’t got all weekend.
It’s only the first copy that takes such a long time. After that you only copy the changes.
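For what it’s worth, the incremental part is trivial to script. A minimal sketch with rsync (the paths are placeholders, and the flags are just the usual archive/mirror combo): only the first run moves everything, later runs only move the changes.

```python
# Minimal sketch of an incremental backup: rsync only transfers what changed
# since the last run, so only the first pass copies the full dataset.
# SRC and DEST are placeholders -- point them at the SSD pool and the big HDD.
import subprocess

SRC = "/mnt/fast-pool/"       # hypothetical source (trailing slash = copy contents)
DEST = "/mnt/big-hdd/backup"  # hypothetical destination on the 36TB drive

subprocess.run(
    ["rsync", "-a", "--delete", "--info=progress2", SRC, DEST],
    check=True,
)
```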
That depends entirely on your usecase.
Sure, if you have many TBs of data changes per day you probably want a different solution. But that would also suggest you don’t need to keep it for very long.
Write speeds on SMR drives start to stagnate after mere gigabytes written, not after terabytes. As soon as the CMR cache is full, you’re fucked, and it stagnates to utterly unusable speeds as it’s desperately trying to balance writing out blocks to the persistent area of the disk and accepting new incoming writes. I have 25 year old consumer level IDE drives that perform better than an SMR drive in this thrashing state.
Also, I often use hard drives as a temporary holding area for stuff that I’m transferring around for one reason or another and that absolutely sucks if an operation that normally takes an hour or two is suddenly becoming a multi-day endeavour tying up my computing resources. I was burned once when Seagate submarined SMR drives into the Barracuda line, and I got a drive that was absolutely unfit for purpose. Never again.
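If anyone wants to see that cache-fill cliff for themselves, a rough sketch like this (write a big file in fsync’d chunks and print per-chunk throughput) will show sustained writes collapsing once the media cache fills. The path and sizes are placeholders, and it will happily chew up 64GB on whatever disk you point it at.

```python
# Rough sketch: measure sustained write throughput by writing fsync'd chunks
# and printing MB/s per chunk. On a drive-managed SMR disk the numbers usually
# collapse once the CMR/media cache is full.
import os
import time

PATH = "/mnt/testdrive/throughput.bin"  # placeholder: a file on the disk under test
CHUNK_MB = 256
TOTAL_GB = 64                           # enough to blow past a typical media cache

chunk = os.urandom(CHUNK_MB * 1024 * 1024)
with open(PATH, "wb") as f:
    for i in range(TOTAL_GB * 1024 // CHUNK_MB):
        start = time.monotonic()
        f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
        elapsed = time.monotonic() - start
        print(f"chunk {i}: {CHUNK_MB / elapsed:.0f} MB/s")
os.remove(PATH)
```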
My primary storage use-case is physical media backups. I literally don’t care how long it takes to store, a bluray is 70GB and I’ve got around 200 of em to backup.
That’s a lot of porn. And possibly other stuff, too.
It isn’t as much as you think, high resolution, high bitrate video files are pretty large.
Especially VR files
Can it actually transfer data fast enough to save or play them back in real-time, though?
Ehhh don’t test me
Nah, the other stuff will all fit on your computer’s hard drive, this is only for porn. They should call it the Porn Drive.
I “only” have a 1TB SSD. If I wanted to download a new game I would have to delete one that’s already on here.
Is it worth replacing within a year only to be sent a refurbished when it dies?
Use redundancy. Don’t be a pleb.
This hard drive is so big that when it sits around the house, it sits around the house.
This hard drive is so big when it moves, the Richter scale picks it up.
This hard drive is so big when it backs up it makes a beeping sound.
This hard drive is so big, when I tried to weigh it the scale just said “one at a time please”.
This hard drive’s so big that two people can access it at the same time and never meet.
This hard drive is so big, that astronomers thought it was a planet.
This hard drive is so big, it’s got its own area code
I’m amazed it’s only $800. I figured that shit was gonna be like 8-10 thousand.
Yeah, I expected it to level out around $800 after a few years, not out of the gate. 20TB are still $300 ish new.
Well, it’s a Seagate, so it still comes out to about a hundred bucks a month.
Why do you wound me like this?
Me who stores important data on seagate external HDD with no backup reading the comments roasting seagate:
Uh oh!!! Uh oh uh oh uh oh uh oh
What is the usecase for drives that large?
I ‘only’ have 12TB drives and yet my ZFS pool already needs ~two weeks to scrub it all. With something like this it would literally not be done before the next scheduled scrub.
there was a time i asked this question about 500 megabytes
I too, am old.
I’m older than that but didn’t want to self-report. The first hard disk I remember my father buying was 40MB.
I remember renting a game, and it was on a high-density 5.25" floppy at a whopping 1.2MB; but our family computer only had a standard-density 5.25" drive.
So we went to the neighbors house, who was one of the first computer nerds (I’m not sure he’s still alive now), who copied the game to a 3.5" high density 1.44MB disk, then we returned the rental because we couldn’t play it on the 1.2 MB HD 5.25" floppy.
… And that was the first time I was party to piracy.
I am not questioning the need for more storage, but the need for more storage without increased speeds.
What’s scrubbing for?
A ZFS Scrub validates all the data in a pool and corrects any errors.
I’m not in the know about running your own personal data center, so I have no idea. … But how often is this necessary? Does accessing your own data on your hard drive require a scrub? I just have a 2TB drive in my home PC. Is a scrub the equivalent of a disk cleanup?
You usually scrub your pool about once a month, but there are no hard rules on that. The main problem with scrubbing is that it puts a heavy load on the pool, slowing it down.
Accessing the data does not need a scrub, it is only a routine maintenance task. A scrub is not like a disk cleanup. With a disk cleanup you remove unneeded files and caches, maybe de-fragment as well. A scrub on the other hand validates that the data you stored on the pool is still the same as before. This is primarily to protect from things like bit rot.
There are many ways a drive can degrade. Sectors can become unreadable, random bits can flip, a write can be interrupted by a power outage, etc. Normal file systems like NTFS or ext4 can only handle this in limited ways. Mostly by deleting the corrupted data.
ZFS on the other hand is built using redundant storage. Storing the data spread over multiple drives in a special way allowing it to recover most corruption and even survive the complete failure of a disk. This comes at the cost of losing some capacity however.
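If it helps make it concrete: kicking off and checking a scrub is just two commands, usually run monthly from cron or a systemd timer. A sketch (the pool name “tank” is a placeholder; it needs ZFS installed and root):

```python
# Sketch: start a scrub on a ZFS pool and show its progress.
# "tank" is a placeholder pool name; run as root on a host with ZFS installed.
import subprocess

POOL = "tank"

subprocess.run(["zpool", "scrub", POOL], check=True)         # start the scrub
subprocess.run(["zpool", "status", "-v", POOL], check=True)  # show scan progress / errors
```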
Thank you for all this information. One day, when my ADHD forces me into making myself a home server, I’ll remember this and keep it in mind. I’ve always wanted to store movies, but these days it’s just family pictures and stuff. Definitely don’t have terabytes, but I’m getting up to 100s of GB.
It’s to play Ark: Survival Evolved.
What drives do you have exactly? I have 7x 6TB WD Red Pro drives in raidz2 and I can do a scrub in less than 24 hours.
I have 2x 12TB white-label WD drives (harvested from external drives, but datacenter drives according to the serial numbers) and one 16TB white-label Toshiba (purchased directly, also meant for datacenters) in a raidz1.
How full is your pool? Mine is about two-thirds full, which I think impacts scrubbing. I also frequently access the pool, which delays scrubbing.
It’s like 90% full, scrubbing my pool is always super fast.
Two weeks to scrub the pool sounds like something is wrong tbh.
There is an enterprise storage shelf (aka a bunch of drives that hooks up to a server) made by Dell which is 1.2 PB (yes petabytes). So there is a use, but it’s not for consumers.
That’s a use-case for a fuckton of total capacity, but not necessarily a fuckton of per-drive capacity. I think what the grandparent comment is really trying to say is that the capacity has so vastly outstripped mechanical-disk data transfer speed that it’s hard to actually make use of it all.
For example, let’s say you have these running in a RAID 5 array, and one of the drives fails and you have to swap it out. At 190MB/s max sustained transfer rate (figure for a 28TB Seagate Exos; I assume this new one is similar), you’re talking about over two days just to copy over the parity information and get the array out of degraded mode! At some point these big drives stop being suitable for that use-case just because the vulnerability window is so large that the risk of a second drive failure causing data loss is too great.
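The back-of-the-envelope math is easy to redo for any capacity and throughput; a quick sketch using the 190MB/s figure quoted above (which is already the best case, since real rebuilds rarely run at full sequential speed):

```python
# Back-of-the-envelope resilver/rebuild floor: capacity divided by sustained
# transfer rate. Real rebuilds are usually slower than this best case.
def rebuild_hours(capacity_tb: float, mb_per_s: float = 190.0) -> float:
    return capacity_tb * 1e12 / (mb_per_s * 1e6) / 3600

for tb in (12, 28, 36):
    h = rebuild_hours(tb)
    print(f"{tb} TB at 190 MB/s: ~{h:.0f} h ({h / 24:.1f} days)")
```

Which lands right around that two-plus-day window for the 36TB drive, and that’s a long time to sit in degraded mode.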
That’s exactly what I wanted to say, yes :D.
I get it. But the moment we invoke RAID, or ZFS, we are outside what standard consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving docker are well past what the bulk of the world understands.
I would think most standard consumers are not using HDDs at all.
I worked on a terrain render of the entire planet. We were filling three 2TB drives a day for a month. So this would have been handy.
High capacity storage pools for enterprises.
Space is at a premium. Saving space should/could equal to better pricing/availability.
Not necessarily.
The trouble with spinning platters this big is that if a drive fails, it will take a long time to rebuild the array after shoving a new one in there. Sysadmins will be nervous about another failure taking out the whole array until that process is complete, and that can take days. There was some debate a while back on if the industry even wanted spinning platters >20TB. Some are willing to give up density if it means less worry.
I guess Seagate decided to go ahead, anyway, but the industry may be reluctant to buy this.
I would assume with arrays they will use a different way to calculate parity or have higher redundancy to compensate for the risk.
If there’s higher redundancy, then they are already giving up on density.
We’ve pretty much covered the likely ways to calculate parity.
Jesus, my pool takes a little over a day, and I’ve only got around 100TB. How big is your pool?
The pool is about 20 usable TB.
Something is very wrong if it’s taking 2 weeks to scrub that.
Sounds like something is wrong with your setup. I have 20TB drives (x8, raid 6, 70+TB in use) … scrubbing takes less than 3 days.
Data centers???
It’s like the Petronas Towers: every time they finish cleaning the windows they have to start again.
Great, can’t wait to afford it in 60 years.
For porn, but who downloads porn nowadays? Unless it’s the illegal kind. To the idiot below who brigaded and downvoted: I was repeating other people saying it was for porn.
my qbittorrent is gonna love that
finally i’ll be able to self-host one piece streaming
Finally, a hard drive which can store more than a dozen modern AAA games
Can’t wait to see this bad boy on serverpartdeals in a couple years if I’m still alive
That goes without saying, unless you anticipate something. Do you?
Really sad that S3 prices are still that high… also hetzner storage boxes