When the computer wants a bit to be a 1, it pops it down. When it wants it to be a 0, it pops it up.
If it were like a punch card, it couldn’t be rewritten, as writing to it would permanently damage the disc. A CD-R is basically a microscopic punch card though, because the laser permanently alters a dye layer to write the data to the disc.
Semi_Hemi_Demigod@lemmy.world
on 18 Dec 04:04
They work through electron tunneling through an insulating oxide layer, so something does go through them like an old punch card reader
dual_sport_dork@lemmy.world
on 18 Dec 20:29
Current ones also store multiple charge levels per cell, so they’re no longer one bit each. They have multiple levels of “punch” for what used to just be one bit.
sugar_in_your_tea@sh.itjust.works
on 17 Dec 23:59
That’s how most technology is:
combustion engines - early 1900s, earlier if you count steam engines
missiles - 13th century China, gunpowder was much earlier
wind energy - windmills appeared in the 9th century, potentially as early as the 4th
Almost everything we have today is due to incremental improvements from something much older.
This isn’t unique to computing. Just about all of the products and technology we see are the results of generations of innovations and improvements.
Look at the automobile, for example. It’s really shaped my view of the significance of new industries: once one takes hold, we could be stuck with it for the rest of human history.
pressanykeynow@lemmy.world
on 18 Dec 23:09
Talking about steam, steam-powered things are 2 thousand years old at least and we still use the technology when we crack atoms to make energy.
What the Romans had wasn’t comparable with an industrial steam engine. The working principle of steam pushing against a cylinder was similar, but they lacked the tools and metallurgy to build a steam cauldron that could be pressurized, so their steam engine could only do parlor tricks like opening a temple door once, and not perform real continuous work.
dependencyinjection@discuss.tchncs.de
on 17 Dec 20:14
Where is a good place to search for decommissioned ones?
quixotic120@lemmy.world
on 17 Dec 22:18
Serverpartdeals has done me well. Drives often come new enough that they still have a decent amount of manufacturer’s warranty remaining (Exos is 5yr), and depending on the drive you buy from them, SPD will RMA a drive for 5 years from purchase (but not always; it depends on the listing, so read the fine print).
I have gotten 2 bad drives from them out of 18 over 5 years or so. Both bad drives were found almost immediately with basic maintenance steps prior to adding to the array (zeroing out the drives, badblocks) and both were rma’d by seagate within 3-5 days because they were still within the mfr warranty.
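For anyone curious what that kind of burn-in looks like in practice, here’s a rough sketch (not a definitive procedure): the device path is a placeholder, and badblocks -w is destructive, so it only belongs on a brand-new, empty disk.

```python
#!/usr/bin/env python3
"""Rough burn-in sketch for a freshly received drive, before it joins the array.

Assumes smartmontools and badblocks are installed. /dev/sdX is a placeholder;
badblocks -w is DESTRUCTIVE and must only be run on an empty disk.
"""
import subprocess
import sys

DEVICE = "/dev/sdX"  # placeholder device node

def run(cmd, check=True):
    print("+", " ".join(cmd))
    return subprocess.run(cmd, check=check)

if __name__ == "__main__":
    if DEVICE == "/dev/sdX":
        sys.exit("Edit DEVICE to point at the actual new disk first.")
    # SMART attributes before the stress test (smartctl uses its exit code
    # as a bitmask, so a nonzero return isn't treated as fatal here).
    run(["smartctl", "-a", DEVICE], check=False)
    # Destructive four-pattern write+verify pass over the whole surface;
    # this is the slow part (days on a large drive).
    run(["badblocks", "-wsv", DEVICE])
    # Re-check SMART afterwards; reallocated/pending sector counts should still be 0.
    run(["smartctl", "-a", DEVICE], check=False)
```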
If you’re running a gigantic raid array like me (288tb and counting!) it would be wise to recognize that rotational hard drives are doomed and you need a robust backup solution that can handle gigantic amounts of data long term. I have a tape drive for that because I got it cheap at an electronics recycler sold as not working (thankfully it was an easy fix) but this is typically a super expensive route. If you only have like 20tb then you can look into stuff like cloud services, bluray, redundant hard drive, etc. or do like I did in the beginning and just accept that your pirated anime collection might go poof one day lol
What kind of tape drive are you using? My array isn’t as large as yours (120tb physical), but it’s big enough that my only real options for backup are tape or a whole secondary array for just backup.
Based on what I’ve seen, my options are a prohibitively large number of tapes with an older LTO standard or prohibitively expensive tapes with a newer LTO standard.
My current backup strategy consists of automated backups to Backblaze B2 for the really important stuff like personal documents or projects and hoping my ZFS array doesn’t fail for everything else.
I have an ibm qualstar lto8 drive. I got it because I gambled, it was cheap because it was throwing an error (I forget what the number was) but it was one that indicates an issue in the tape path. I was able to get the price to $150 because I was buying some other stuff and because ultimately if the head was toast it was basically useless. But I got lucky and cleaning the head and tape path brought it back to life. Dunno how long it will last. I’ll live with it though because buying one that’s confirmed working can be thousands
You’re right that LTO8 tapes are pricey, but they’re quite a bit cheaper than building an equivalent drive array for backup, and significantly more reliable long term. A tape is about 12tb and $40-50, although sometimes they pop up cheaper. I generally don’t back up stuff continually with this method; I back up newer files that haven’t been synced to tape once every six weeks or so. It’s also something that you can buy a bit at a time to soften the financial blow of course. Maybe if you get a fancy carousel drive you’d want to fill it up, but frankly that just seems like it would break much easier
More modern tapes have support for ltfs and I can basically use it like an external hard drive that way. So it’s pretty much I pop a tape in, once a week or so I sync new files to said tape, then as it gets full I swap it for a new tape. Towards the end I print a directory of what’s on it because admittedly doing it this way is messy. But my intention with this is to back up my “medium critical” files. Stuff that if I lost I would be frustrated over, but not heartbroken. Movies and TV shows that I did custom muxes of to have my ideal subtitles, audio tracks, etc. all my dockers so stuff like my Jellyfin watch status and komga library stay intact, stuff like that. That takes up the bulk of my nas and my primary concerns are either the array fully failing or significant bit rot, and if either of those occur I would rebuild from scratch and just copy all the tapes back over anyway so the messy filing isn’t really a huge issue.
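A minimal sketch of that “print a directory of what’s on it” step for an LTFS-mounted tape might look something like this (the mount point and output filename are just placeholders):

```python
#!/usr/bin/env python3
"""Write a plain-text catalog of everything on an LTFS-mounted tape.

Assumes the tape is already mounted via LTFS; the mount point and the
output filename below are placeholders.
"""
from pathlib import Path

MOUNT = Path("/mnt/ltfs")           # placeholder LTFS mount point
CATALOG = Path("tape_catalog.txt")  # one line per file: size and relative path

total = 0
with CATALOG.open("w") as out:
    for f in sorted(MOUNT.rglob("*")):
        if f.is_file():
            size = f.stat().st_size
            total += size
            out.write(f"{size:>15,}  {f.relative_to(MOUNT)}\n")
    out.write(f"\ntotal: {total / 1e12:.2f} TB on this tape\n")
print(f"catalog written to {CATALOG}")
```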
I also do sometimes make it a point to copy harder to find files onto at least 2 tapes on the outside chance a tape goes bad. It’s unlikely given I only buy new tapes and store them properly (I even go to the effort to store them offsite just in case my house burns down) but you never know I suppose
The advertised tape capacities are crap for this use. You’ll see like LTO8 has a native capacity of 12tb but a compressed capacity of 30tb per tape! And the cartridges will frequently just say 30tb on them. That’s nonsense here. Maybe for a more typical server environment where they’re storing databases and text files and shit, but compressed movies and music? Not so much. I get some advantage because I keep most of my stuff in archival quality (remux/flac/etc) but even then I still usually don’t get anywhere near 30tb
It’s pretty slow. Not the end of the world but just something to keep in mind. Lto8 is supposed to be 360MBps for uncompressed and 750MBps for compressed data but I don’t seem to hit those speeds at all. I’m not really in a rush though and everything verifies fine and works after copying back over so I’m not too worried. But it can take like 10-14 hours to fill a tape. If I ever do have to rebuild the array it will take AGES
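As a sanity check on those numbers, here is the fill time for a 12TB native-capacity tape at a few assumed sustained speeds:

```python
# Fill time for a 12 TB (native) LTO-8 tape at a few assumed sustained speeds;
# "10-14 hours" lines up with real-world throughput well below the rated 360 MB/s.
TAPE_TB = 12
for mb_per_s in (360, 300, 250, 160):
    hours = TAPE_TB * 1e12 / (mb_per_s * 1e6) / 3600
    print(f"{mb_per_s:>3} MB/s -> {hours:4.1f} h to fill the tape")
# 360 MB/s -> 9.3 h, 300 -> 11.1 h, 250 -> 13.3 h, 160 -> 20.8 h
```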
For my “absolutely priceless” data I have other more robust backup solutions that are basically the same as yours (literally down to using backblaze, ha).
You got an incredible deal on your tape drive. For LTO8 drives, I’m seeing “for parts only” drives sold for around $500. I’d be willing to throw away $100 or $200 on the possibility that I could repair a drive; $500 is a bit too much. It looks like LTO6 is more around what my budget would be; it would require a much larger number of tapes, but not excessively so.
I remember when BD-R was a reasonable solution for backup. There’s no way that’s true now. It really seems like hard drive capacity has far outpaced removable media. If most people are streaming everything, those of us who actually want to save their data locally are really the minority these days. There’s just not as much of a compelling reason for companies to develop cheap high-capacity removable discs.
I’m sure I’ll invest in a tape backup solution eventually, but for now, at least I have ZFS with paranoid RAIDZ.
It was about a year ago and I’ve found general prices have gone up on basically everything, even stuff for parts, in the past few years, but more importantly it was also a local sale in person with a vendor I know. I find that’s the only way to actually get deals anymore. If you buy stuff like this and are stuffing a network rack at home it makes sense to befriend a local electronics recycler or two if you live in an area where that’s a thing.
I actually moved about two years ago to a less developed area, but I will still drive to where I used to live (which is like a 90-120 minute drive) 1-2x a year for stuff like this. It’s worth it bc these guys still know me and they’ll cut me deals on stuff like this where ebay sellers will list it for 2-3x as much. But if you watch, 8 times out of 10 their auctions never sell at those prices; at best they sometimes sell for an undisclosed “best offer” if they even have that option. It’s crazy how many ebay sellers will let shit sit on the market at inflated prices for weeks, months, or longer rather than drop their prices, propping up an artificial economy in the hopes that eventually a clueless buyer with fat pockets will come along. The recyclers get that, and they don’t want to waste the space storing shit for ages
Full disclosure: when I lived in the area I ran a refurbishing business on the side and would buy tons of stuff from them to fix and resell, that probably helped get me on their good side. From like 2013-2019 I would buy tons of broken phones, consoles, weird industrial shit, etc, fix it, and resell it. They loved it because it was a guaranteed cash sale with no ebay/paypal fees, no risk of negative feedback for their ebay store, no risk of a buyer doing a chargeback or demanding to return, etc. I wanted their broken shit and if I couldn’t fix it I accepted the loss, would bring it back to them to recycle and admit defeat in shame
eBay sellers that have tons of sales and specialize. You can learn to read between the lines and see that decom goods are what they do.
SaveMyServer is a perfect example. Don’t know if they sell drives though.
NuXCOM_90Percent@lemmy.zip
on 17 Dec 15:33
Just a reminder: These massive drives are really more a “budget” version of a proper tape backup system. The fundamental physics of a spinning disc mean that these aren’t a good solution for rapid seeking of specific sectors to read and write and so forth.
So a decent choice for the big machine you backup all your VMs to in a corporate environment. Not a great solution for all the anime you totally legally obtained on Yahoo.
Not sure if the general advice has changed, but you are still looking for a sweet spot in the 8-12 TB range for a home NAS where you expect to regularly access and update a large number of small files rather than a few massive ones.
Gradually_Adjusting@lemmy.world
on 17 Dec 15:38
Gradually_Adjusting@lemmy.world
on 17 Dec 16:00
I am troubled in my heart. I would not have been told so in this way.
IrateAnteater@sh.itjust.works
on 17 Dec 15:43
HDD read rates are way faster than media playback rates, and seek times are just about irrelevant in that use case. Spinning rust is fine for media storage. It’s boot drives, VM/container storage, etc, that you would want to have on an SSD instead of the big HDD.
And oftentimes some or all of the metadata that helps the filesystem find the files on the drive is stored in memory (zfs is famous for its automatic memory caching) so seek times are further irrelevant in the context of media playback
CarbonatedPastaSauce@lemmy.world
on 17 Dec 15:45
I’m real curious why you say that. I’ve been designing systems with high IOPS data center application requirements for decades so I know enterprise storage pretty well. These drives would cause zero issues for anyone storing and watching their media collection with them.
Not sure what you’re going on about here. Even these disks have plenty of performance for read/write ops for rarely written data like media. They have the same ability to be used by error-checking filesystems like zfs or btrfs, and can be used in raid arrays, which add redundancy against disk failure.
The only negatives of large drives in home media arrays are the cost, slightly higher idle power usage, and the resilvering time when replacing a bad disk in an array.
Your 8-12TB recommendation already has most of these negatives. Adding more space per disk is just scaling them linearly.
Additionally, most media is read in a contiguous scan. Streaming media is very much not random access.
Your typical access pattern is going to be seeking to a chunk, reading a few megabytes of data in a row for the streaming application to buffer, and then moving on. The ~10ms of access time at the start are next to irrelevant. Particularly when you consider that the OS has likely observed that you have unutilized RAM and loads the entire file into the memory cache to bypass the hard drive entirely.
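To put rough numbers on that (all of these are assumptions for illustration, not measurements):

```python
# What fraction of the time is the drive actually busy while serving one
# buffered video stream? All numbers below are assumptions for illustration.
chunk_mb    = 4    # size of each buffered read, MB
seek_ms     = 10   # one random seek per chunk
read_mb_s   = 150  # HDD sequential throughput, MB/s
stream_mbit = 80   # bitrate of a high-quality 4K stream, Mbit/s

read_s     = chunk_mb / read_mb_s            # time spent reading the chunk
playback_s = chunk_mb * 8 / stream_mbit      # seconds of video that chunk covers
busy = (read_s + seek_ms / 1000) / playback_s
print(f"~{read_s*1000:.0f} ms reading + {seek_ms} ms seeking per "
      f"{playback_s:.1f} s of playback -> drive busy {busy:.0%} of the time")
# roughly 27 ms + 10 ms per 0.4 s of video, i.e. the disk is idle ~90% of the time
```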
Blue_Morpho@lemmy.world
on 17 Dec 16:11
The fundamental physics of a spinning disc mean that these aren’t a good solution for rapid seeking of specific sectors to read and write and so forth.
It’s no ssd but is no slower than any other 12TB drive. It’s not shingled but HAMR. The sectors are closer together so it has even better seeking speed than a regular 12TB drive.
Not a great solution for all the anime you totally legally obtained on Yahoo.
???
It’s absolutely perfect for that. Even if it was shingled tech, that only slows write speeds. Unless you are editing your own video, write seek times are irrelevant. For media playback use only consistent read speed matters. Not even read seek matters except in extreme conditions like comparing tape seek to drive seek. You cannot measure 10 ms difference between clicking a video and it starting to play because of all the other delays caused by media streaming over a network.
But that’s not even relevant because these have faster read seeking than older drives because sectors are closer together.
barkingspiders@infosec.pub
on 17 Dec 19:58
honestly curious, why the hell was this downvoted? I work in this space and I thought this was still the generally accepted advice?
Blue_Morpho@lemmy.world
on 17 Dec 20:32
Because everything he said was wrong?
NuXCOM_90Percent@lemmy.zip
on 17 Dec 20:35
Because people are thinking through specific niche use cases coupled with “Well it works for me and I never do anything ‘wrong’”.
I’ll definitely admit that I made the mistake of trying to have a bit of fun when talking about something that triggers the Dunning-Kruger effect. But people SHOULD be aware of how different use patterns impact performance, how that performance impacts users, and generally how different use patterns impact wear and tear on the drive.
Come on man, everything, and I mean everything, you said is wrong.
Budget tape backup?
No, you can’t even begin to compare drives to tape. They’re completely different use cases. A hard drive can contain a backup, but it’s not physically robust enough to be unplugged, rotated off site, and put into long-term storage like tape. You might as well say a Honda Accord is a budget semi tractor-trailer.
Then you specifically called out personal downloads of anime as a bad use case. That’s absolutely wrong in all cases.
It is absurd to imply that everyone except you is less knowledgeable and using a niche case.
CarbonatedPastaSauce@lemmy.world
on 17 Dec 20:51
Not a great solution for all the anime you totally legally obtained on Yahoo.
Mainly because of that. Spinning rust drives are perfect for large media libraries.
There isn’t a hard drive made in the last 15 years that couldn’t handle watching media files. Even the SMR crap the manufacturers introduced a while back could do that without issue. For 4K video you’re going to see average bitrates around 50Mbps and peaks in the low 100Mbps range, and that’s for high-quality videos. Write speed is irrelevant for media consumption, and unless your hard drive is ridiculously fragmented, seek speed is also irrelevant. Even an old 5400 RPM SATA drive is going to be able to handle that load 99.99% of the time. And anything lower than 4K video is a slam dunk.
Everything I just said goes right out the window for a multi-user system that’s streaming multiple media files concurrently, but the vast majority of people never need to worry about that.
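For a rough sense of the headroom (purely illustrative numbers, not benchmarks):

```python
# How many simultaneous streams could a single modest HDD feed on raw
# throughput alone (ignoring the extra seeking that concurrency causes)?
hdd_mb_s = 100  # conservative sequential read speed for an old 5400 RPM drive
for stream_mbit in (25, 50, 100):  # ~1080p remux, 4K stream, 4K remux peak
    print(f"{stream_mbit:>3} Mbit/s per viewer -> "
          f"~{hdd_mb_s * 8 / stream_mbit:.0f} concurrent streams")
# 25 -> ~32, 50 -> ~16, 100 -> ~8; it's the seeking between many readers,
# not the raw bandwidth, that eventually becomes the problem.
```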
Do you know about tape backup systems for consumers? From my (brief) search it looks like tape is more economical at the scale used by a data center, but it seems very expensive and more difficult for consumers.
I dunno if you would want to run raidz2 with disks this large. The resilver times would be absolutely bazonkers, I think. I have 24 TB drives in my server and run mirrored vdevs because the chances of one of those drives failing during a raidz2 resilver is just too high. I can't imagine what it'd be like with 30 TB disks.
taladar@sh.itjust.works
on 17 Dec 16:25
A few years ago I had a 12-disk RAID6 array and the power distributor (the bit between the redundant PSUs and the rest of the system) went and took 5 drives with it; lost everything on there. Backup is absolutely essential, but if you can’t do that for some reason, at least use RAID1, where you only lose part of your data if you lose more than 2 drives.
Yeah I agree. I just got 20tb in mine. Decided to just z2, which in my case should be fine. But was contemplating the same thing. Going to have to start doing z2 with 3 drives in each vdev lol.
sugar_in_your_tea@sh.itjust.works
on 18 Dec 00:14
Is RAID2 ever the right choice? Honestly, I don’t touch anything outside of 0, 1, 5, 6, and 10.
Edit: missed the z, my bad. I don’t use ZFS and just skipped over it.
I couldn’t imagine seek times on any disk that large. Or rebuild times…yikes.
ricecake@sh.itjust.works
on 17 Dec 17:10
Definitely not for either of those. Can get way better density from magnetic tape.
They say they got the increased capacity by increasing storage density, so the head shouldn’t have to move much further to read data.
You’ll get further putting a cache drive in front of your HDD regardless, so it’s vaguely moot.
RedWeasel@lemmy.world
on 17 Dec 17:49
For a full 32TB at the max sustained speed (275MB/s), 32ish hours to transfer the full amount, 36 if you assume 250MB/s for the whole run. Probably optimistic: CPU overhead could slow that down in a rebuild. That said, in a RAID5 of 5 disks that works out to a transfer speed of about 1GB/s even if you assume not getting close to the max transfer rate. For a small business or home NAS that would be plenty unless you are running greater than 10Gbit ethernet.
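Spelled out, with the same numbers as above (a quick sanity check, nothing more):

```python
# The same arithmetic spelled out (capacity and speeds from the comment above).
capacity_tb = 32
for mb_per_s in (275, 250):
    hours = capacity_tb * 1e12 / (mb_per_s * 1e6) / 3600
    print(f"{mb_per_s} MB/s -> {hours:.0f} h to read or write the full drive")
# 275 MB/s -> 32 h, 250 MB/s -> 36 h

# RAID5 across 5 disks streams from ~4 data disks in parallel:
print(f"aggregate ~{4 * 250} MB/s vs ~{10_000 / 8:.0f} MB/s for 10 Gbit Ethernet")
```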
WolfLink@sh.itjust.works
on 17 Dec 20:34
Random access times are probably similar to smaller drives but writing the whole drive is going to be slow
up your block size bro 💪 get them plates stacking 128KB+ a write and watch your throughput gains max out 🏋️ all the ladies will be like🙋♀️. Especially if you get those reps sequentially it’s like hitting the juice 💉 for your transfer speeds.
it honestly could have been a 10MB, I don’t even remember. Only thing I really do remember is thinking it was interesting how it used the floppy and a second cable, and how the sound it made was used in every 90s and early 2000s TV show and movie as generic computer noise :)
You have me beat on the XT, mine was a 286, although it did replace an Apple IIe (granted, both were acquired several years after they were already considered junk in the 386 era).
I_Miss_Daniel@lemmy.world
on 18 Dec 08:40
I remember the sound. Also, it was on a three wheel table, and the whole thing would shake when defragging.
My first one was a Seagate ST-238R. 32 MB of pure storage, baby. For some reason I thought we still needed the two disk drives as well, but I don’t remember why.
“Oh what a mess we weave when we amiss interleave!”
We’d set the interleave to, say, 4:1 (four revolutions to read all data in a track, IIRC), because the hard drive was too fast for the CPU to deal with the data… ha.
3aqn5k6ryk@lemmy.world
on 17 Dec 16:18
couch1potato@lemmy.dbzer0.com
on 17 Dec 20:21
I run docker services and host virtual machines from Unraid OS
WolfLink@sh.itjust.works
on 17 Dec 20:36
Not programming skills, but sysadmin skills.
Buy a used server on eBay (companies often sell their old servers for cheap when they upgrade). Buy a bunch of HDDs. Install Linux and set up the HDDs in a ZFS pool.
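If it helps, that last step might look roughly like this (a sketch only; the pool name, raidz2 layout, and disk IDs are placeholders to adapt):

```python
#!/usr/bin/env python3
"""Sketch of the 'set up the HDDs in a ZFS pool' step on a fresh Linux install.

Pool name, layout and disk IDs are placeholders; list your disks with
`ls -l /dev/disk/by-id/` and pick whatever redundancy level you're comfortable with.
"""
import subprocess

POOL = "tank"
LAYOUT = "raidz2"  # survives two disk failures; a plain "mirror" works for two disks
DISKS = [
    "/dev/disk/by-id/ata-EXAMPLE_DISK_1",  # placeholder IDs
    "/dev/disk/by-id/ata-EXAMPLE_DISK_2",
    "/dev/disk/by-id/ata-EXAMPLE_DISK_3",
    "/dev/disk/by-id/ata-EXAMPLE_DISK_4",
]

# Create the pool with 4K-sector alignment, add a dataset for the media library,
# then show its status.
subprocess.run(["zpool", "create", "-o", "ashift=12", POOL, LAYOUT, *DISKS], check=True)
subprocess.run(["zfs", "create", f"{POOL}/media"], check=True)
subprocess.run(["zpool", "status", POOL], check=True)
```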
sugar_in_your_tea@sh.itjust.works
on 18 Dec 00:06
Or install TrueNAS and chill.
I went with Linux and BTRFS because I just need a mirror. Lots of options and even more guides.
sugar_in_your_tea@sh.itjust.works
on 18 Dec 00:10
Cheapest is probably a Raspberry Pi with a USB external drive. Look up “Raspberry Pi NAS,” there are a bunch of guides.
Or you can repurpose an old PC, install some NAS distro, and then configure.
There are a ton of options, very few of which require any programming.
One problem is that larger drives take longer to rebuild the RAID array when one drive needs replacing. You’re sitting there for days hoping that no other drive fails while the process goes. Current SATA and SAS standards are as fast as spinning platters could possibly go; making them go even faster won’t help anything.
There was some debate among storage engineers about whether they even want drives bigger than 20TB; the reduced risk of data loss during a rebuild can be worth trading off density. That will probably be true until SSDs are closer to the price per TB of spinning platters (not necessarily the same; possibly more like double the price).
RememberTheApollo_@lemmy.world
on 17 Dec 17:41
Yep. It’s a little nerve-wracking when I replace a RAID drive in our NAS, but I do it before there’s a problem with a drive. I can mount the old one back in, or try another new drive. I’ve only ever had one new drive DOA; here’s hoping those stay few and far between.
What happened to using different kinds of drives in every mirrored pair? Not best practice any more? I’ve had Seagates fail one after another and the RAID was intact because I paired them with WD.
You can, but you might still be sweating bullets while waiting for the rebuild to finish.
GamingChairModel@lemmy.world
on 17 Dec 18:32
If you’re writing 100 MB/s, it’ll still take 300,000 seconds to write 30TB. 300,000 seconds is 5,000 minutes, or 83.3 hours, or about 3.5 days. In some contexts, that can be considered a long time to be exposed to risk of some other hardware failure.
Cornelius_Wangenheim@lemmy.world
on 17 Dec 22:41
Avoid these like the plague. I made the mistake of buying two 16TB Exos drives a couple years ago and have had to RMA them 3 times already.
SupraMario@lemmy.world
on 18 Dec 00:29
I stopped buying Seagates when I had 4 of their 2TB Barracuda drives die within 6 months… was constantly RMAing them. Finally got pissed and sold them and bought WD Reds; still got 2 of the Reds in my NAS playing hot backups with nearly 8 years of power-on time.
I have several WDs with almost 15 years of power on time, not a single failure. Whereas my work bought a bunch of Seagates and our cluster was basically halved after less than 2 years. I have no idea how Seagate can suck so much.
About 10 years ago now, at a past employer, had a NAS setup that housed a bunch of medical data…all seagate drives. During my xmas PTO…I was lead on DR…yea fuckers all started failing one after another. Took out 14 drives before the storage team said fuck this pulled it offline and had a new NAS brought in from EMC, was a fun xmas restoring all that shit. Seagate used to be my go to, but it seems like every single interaction I have with them ends in disaster.
Seagate was my go-to after I had bought those original IBM DeathStars and had to RMA the RMA replacement drive after a few months. But brand loyalty is for suckers. It seemed Seagate had a really bad run after they acquired Maxtor, who always had a bad reputation.
Had that issue with the 3tb drives. Bought 4, had to RMA all 4, and then RMA 2 of the replacement drives all within a few months.
The last 2 are still operating 10 years later though. 2 out of 6.
gravitas_deficiency@sh.itjust.works
on 18 Dec 00:20
Lmao the HDD in the first machine I built in the mid 90s was 1.2GB
nova_ad_vitum@lemmy.ca
on 18 Dec 00:41
My dad had a 286 with a 40MB hard drive in it. When it spun up it sounded like a plane taking off. A few years later he had a 486 and got a 2gb Seagate hard drive. It was an unimaginable amount of space at the time.
The computer industry in the 90s (and presumably the 80s, I just don’t remember it) was wild. Hardware would be completely obsolete every other year.
My 286er had 2MB RAM and no hard drive, just two 5.25" floppy drives. One to boot the OS from, the other for storage and software.
I upgraded it to 4 MB RAM and bought a 20 MB hard drive, moved EVERY piece of software I had onto it, and it was like 20% full. I sincerely thought that should last forever.
Today I casually send my wife a 10 sec video from the supermarket to choose which yoghurt she wants and that takes up about 25 MB.
Our first computer was a Macintosh Classic with a 40 MB SCSI hard disk. My first “own” computer had a 120 MB drive.
I keep typoing TB as GB when talking about these huge drives, it’s just so weird how these massive capacities are just normal!
Sixtyforce@sh.itjust.works
on 19 Dec 14:52
We had family computers first, I can’t recall original specs but I think my mother added in a 384MB drive to the 486 desktop before buying a win98se prebuilt with a 2GB drive. I remember my uncle calling that Pentium II 350MHZ, 64MB SDRAM, Rage 2 Pro Turbo AGP tower “a NASA computer” haha.
GreenKnight23@lemmy.world
on 18 Dec 00:54
cool never will buy another seagate ever though.
interdimensionalmeme@lemmy.ml
on 18 Dec 00:58
Same but Western Digital: a 13gb drive that failed and lost all my data 3 times, and the 3rd time was outside the warranty! I had paid $500, the most expensive thing I had ever bought until that day.
wreckedcarzz@lemmy.world
on 18 Dec 01:25
I mean, cool and all, but call me when sata or m2 ssds are 10TB for $250, then we’ll talk.
Not sure whether we’ll arrive there; the tech is definitely entering the taper-out phase of the sigmoid. Capacity might very well still become cheaper, even 3x cheaper, but don’t, in any way, expect them to simultaneously keep up with write performance: that ship has long since sailed. The more bits they’re trying to squeeze into a single cell, the slower it’s going to get, and the price per cell isn’t going to change much any more, as silicon has hit a price wall; it’s been a while since the newest, smallest node was also the cheapest.
OTOH, how often do you write a terabyte in one go at full tilt?
dual_sport_dork@lemmy.world
on 18 Dec 20:42
I don’t think anyone has much issue with our current write speeds, even at dinky old SATA 6Gb/s levels. At least for bulk media storage. Your OS boot or game loading, whatever, maybe not. I’d be just fine with exactly what we have now, but just pack more chips in there.
Even if you take apart one of the biggest, meanest, most expensive 8TB 2.5" SSDs, the casing is mostly empty inside. There’s no reason they couldn’t just add more chips even at current density levels other than artificial market segmentation, planned obsolescence, and pigheadedness. It seems the major consumer manufacturers refuse to allow their 2.5" SSDs to get out of parity with the capacities on offer in the M.2 form factor drives that everyone is hyperfixated on for some reason, and the pricing structure between 8TB and what few greater-than-8TB models actually are on offer is nowhere near linear even though the manufacturing cost roughly should be.
If people are still willing to use a “full size” 3.5" form factor with ordinary hard drives for bulk storage, can you imagine how much solid state storage you could cram into a casing that size, even with current low-cost commodity chips? It’d be tons. But the only options available are “enterprise solutions” which are apparently priced with the expectation you’ll have a Fortune 500 or government expense account.
It’s bullshit all the way down; there’s nothing new under the sun in that regard.
the M.2 form factor drives that everyone is hyperfixated on for some reason
The reason is transfer speeds. SATA is slow, M.2 is a direct PCIe link. And SSDs can saturate it, at least in bursts. Doubling the capacity of a 2.5" SSD is going to double its price as you need twice as many chips, there’s not really a market for 500 buck SATA SSDs, you’re looking for U.2 / U.3 ones. Yes, they’re quite a bit more expensive per TB but look at the difference in TBW to consumer SSDs.
If you’re a consumer and want a data grave, buy spinning platters. Or even a tape drive. You neither want, nor need, a high-capacity SSD.
Also you can always RAID them up.
dual_sport_dork@lemmy.world
on 18 Dec 22:27
For the context of bulk consumer storage (or even SOHO NAS) that’s irrelevant, though, because people are already happily using spinning mechanical 3.5" hard drives for this purpose, and they’re all already SATA. Therefore there’s no logical reason to worry about the physical size or slower write speeds of packing a bunch of flash chips into the same sized enclosure for those particular use cases.
There are reasons a big old SSD would be suitable for this. Silence, reliability, no spin up delay, resistance to outside mechanical forces, etc.
Sure it makes sense: pretty much no one but you is going to buy them, and stocking shelves and warehouses with product costs money. All that unmoved stock would make them more expensive, making even more people not buy them. It’s inefficient.
dragonlobster@programming.dev
on 18 Dec 01:48
These things are unreliable, I had 3 seagate HDDs in a row fail on me. Never had an issue with SSDs and never looked back.
WhyJiffie@sh.itjust.works
on 18 Dec 02:07
Well, until you need capacity, why not use an SSD? It’s basically mandatory for the operating system drive, too.
prosp3kt@lemmy.dbzer0.com
on 18 Dec 18:50
Capacity for what? There are 4TB M.2 SSDs costing $200 bucks, cmon…
WhyJiffie@sh.itjust.works
on 18 Dec 20:33
I would rather not buy such large SSDs. For most stuff the performance advantage is useless while the price is much higher, and my impression is still that such large SSDs have a shorter lifespan (in terms of how many writes it takes to break down). Recovering data from a failing HDD is also easier: SSDs just turn read-only or completely fail at some point, and in the latter case often even data recovery companies are unable to recover anything, while HDDs will often give warning signs that good monitoring software can detect weeks or months in advance, so that you know to be more cautious with the drive.
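A minimal sketch of the kind of monitoring being described, assuming smartmontools is installed (the device path and the attribute list below are just common examples, not a definitive set):

```python
#!/usr/bin/env python3
"""Check a few of the classic pre-failure SMART attributes on a spinning drive.

Assumes smartmontools is installed; /dev/sdX is a placeholder, and the
attribute list is just a common starting point.
"""
import subprocess

DEVICE = "/dev/sdX"
WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if any(attr in line for attr in WATCH):
        fields = line.split()
        # second column is the attribute name, last column is the raw value;
        # raw values climbing above zero are the early warning signs worth watching
        print(fields[1], "raw =", fields[-1])
```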
prosp3kt@lemmy.dbzer0.com
on 19 Dec 12:07
How is it easier? Do you open your HDDs and take info from there? Do you have specialized equipment and knowledge? Second, if you see in SMART that you are getting close to the TBW limit, change the SSD, duh… SMART is a lot more effective on SSDs; depending on the model it even gives you an estimated time to live…
WhyJiffie@sh.itjust.works
on 20 Dec 02:17
How is it easier? Do you open your HDDs and take info from there?
Obviously not. Often they don’t break all at once, but start by corrupting smaller areas of sectors.
Seagate in general are unreliable in my own anecdotal experience. Every Seagate I’ve owned has died in less than five years. I couldn’t give you an estimate on the average failure age of my WD drives because it never happened before they were retired due to obsolescence. It was over a decade regularly though.
prosp3kt@lemmy.dbzer0.com
on 18 Dec 18:49
HDDs are unreliable with all those moving parts and arms and cylinders.
Alexstarfire@lemmy.world
on 18 Dec 03:56
Everybody talking shit about Seagate here. Meanwhile I’ve never had a hard drive die on me. Eventually the capacity just became too little to keep around and I got bigger ones.
Oldest I’m using right now is a decade old, Seagate. Actually, all the HDDs are Seagate. The SSDs are Samsung. Granted, my OS is on an SSD, as well as my most used things, so the HDDs don’t actually get hit all that much.
Trainguyrom@reddthat.com
on 18 Dec 04:51
Seagate had some bad luck with their 3TB drives about 15 years ago now if memory serves me correctly.
Since then Western Digital (the only other remaining HDD manufacturer) pulled some shenanigans with not correctly labeling the different technologies in use on their NAS drives, which directly impacted their practicality and performance in NAS applications (the performance issues were particularly egregious when used in a zfs pool)
So basically pick your poison. Hard to predict which of the duopoly will do something unworthy of trusting your data upon, so uh…check your backups I guess?
Alexstarfire@lemmy.world
on 18 Dec 04:56
That decade old one is 3TB. 😅
mohammed_alibi@lemmy.world
on 18 Dec 05:51
Unfortunately, I have about 10 dead 3TB drives sitting around in my closet. I took the sacrifice so you don’t have to :-)
Alexstarfire@lemmy.world
on 18 Dec 05:55
Thanks. 👍
A_Random_Idiot@lemmy.world
on 18 Dec 15:20
at least you have a bunch of nice coasters and cool magnets now.
Had good impressions and experiences with Toshiba drives. Chugged along quite nicely.
Trainguyrom@reddthat.com
on 18 Dec 16:51
Ah, I thought I had remembered their hard drive division being acquired, but I was wrong! Per Wikipedia:
At least 218 companies have manufactured hard disk drives (HDDs) since 1956. Most of that industry has vanished through bankruptcy or mergers and acquisitions. None of the first several entrants (including IBM, who invented the HDD) continue in the industry today. Only three manufacturers have survived—Seagate, Toshiba and Western Digital
Yeah our file server has 17 Toshiba drives in the 10/14 TiB sizes ranging from 2-4 years of power-on age and zero failures so far (touch wood).
Of our 6 Seagate drives (10 TiB), 3 of them died in the 2-4 year age range, but one is still alive 6 years later.
We’re in Japan and Toshiba is by far the cheapest here (and have the best support - they have advance replacement on regular NAS drives whereas Seagate takes 2 weeks replacement to ship to and from a support center in China!) so we’ll continue buying them.
I’ve had a Samsung SSD die on me, I’ve had many WD drives die on me (also the last drive I’ve had die was a WD drive), I’ve had many Seagate drives die on me.
Buy enough drives, have them for a long enough time, and they will die.
Yeah, same. I switched to Seagate after 3 WD drives failed in less than 3 years. Never had problems since.
fuck_u_spez_in_particular@lemmy.world
on 18 Dec 13:03
I had 3 drives from Seagate (including 1 enterprise) that died or developed file-corruption issues before I gave up and switched to SSDs entirely…
dsilverz@thelemmy.club
on 18 Dec 04:18
That’s good, really good news, to see that HDDs are still being manufactured and being thought of. Because I’m having a serious problem trying to find a new 2.5" HDD for my old laptop here in Brazil. I can quickly find SSDs across the Brazilian online marketplaces, and they’re not very expensive, but I’m intending on purchasing a mechanical one because SSDs won’t hold data for much longer compared to HDDs, but there are so few HDDs for sale, and those I could find aren’t brand-new.
Trainguyrom@reddthat.com
on 18 Dec 05:02
SSDs won’t hold data for much longer compared to HDDs
Realistically this is not a good reason to select HDD over SSD. If your data is important it’s being backed up (and if it’s not backed up it’s not important. Yada yada 3-2-1 backups and all. I’ll happily give real backup advice if you need it)
In my anecdotal experience across both my family’s various computers and computers I’ve seen bite the dust at work, I’ve not observed any longevity difference between HDDs and SSDs (in fact I’ve only seen 2 fail and those were front desk PCs that were effectively always on 24/7 with heavy use during all lobby hours, and that was after multiple years of that usecase) and I’ve never observed bit rot in the real world on anything other than crappy flashdrives and SD cards (literally the lowest quality flash you can get)
Honestly best way to look at it is to select based on your usecase. Always have your boot device be an SSD, and if you don’t need more storage on that computer than you feel like buying an SSD to match, don’t even worry about a HDD for that device. HDDs have one usecase only these days: bulk storage for comparatively low cost per GB
I replaced my laptop’s DVD drive with a HDD caddy adapter, so it supports two drives instead of just one. Then, I installed a 120G SSD alongside a 500G HDD, with the HDD being connected through the caddy adapter. The entire Linux installation on this laptop was done in 2019 and, since then, I never reinstalled nor replaced the drives.
But sometimes I hear what seems to be a “coil whine” (a short high pitched sound) coming from where the SSD is, so I guess that its end is near. I have another SSD (240G) I bought a few years ago, waiting to be installed but I’m waiting to get another HDD (1TB or 2TB) in order to make another installation, because the HDD was reused from another laptop I had (therefore, it’s really old by now, although I had no I/O errors nor “coil whinings” yet).
Back when I installed the current Linux, I mistakenly placed /var and /home (and consequently, /home/me/.cache and /home/me/.config, both folders of which have high write rates because I use KDE Plasma) on the SSD. As the years passed by, I realized it was a mistake but I never had the courage to relocate things, so I did some “creative solutions” (“gambiarra”) such as creating a symlinked folder for .cache and .config, pointing them to another folder within the HDD.
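For reference, the symlink trick described above boils down to something like this sketch (paths are placeholders; do it with the desktop session logged out so nothing is writing to those directories while they move):

```python
#!/usr/bin/env python3
"""Move a write-heavy directory off the SSD onto the HDD and leave a symlink behind.

Paths are placeholders; run this with the desktop session logged out so nothing
is writing to the directories while they move.
"""
import shutil
from pathlib import Path

MOVES = {
    Path.home() / ".cache":  Path("/mnt/hdd/offload/.cache"),   # placeholder HDD mount
    Path.home() / ".config": Path("/mnt/hdd/offload/.config"),
}

for src, dst in MOVES.items():
    if src.is_symlink():
        print(f"skipping {src}, already relocated")
        continue
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), str(dst))  # copy onto the HDD, remove the original
    src.symlink_to(dst)              # programs keep using the old path transparently
    print(f"{src} -> {dst}")
```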
As for backup, while I have three old spare HDDs holding the same old data (so it’s a redundant backup), there are so many (hundreds of GBs) new things I both produced and downloaded that I’d need lots of room to better organize all the files, finding out what is not needed anymore and renewing my backups. That’s why I was looking for either 1TB or 2TB HDDs, as brand-new as possible (also, I’m intending to tinker more with things such as data science after a fresh new installation of Linux). It’s not a thing that I’m really in a hurry to do, though.
Edit: and those old spare HDDs are 3.5" so they wouldn’t fit the laptop.
I doubt the high pitched whine that you’re hearing is the SSD failing. The sheer amount of writes to fully wear out an SSD is…honestly difficult to achieve in the real world. I’ve got decade old budget SSDs in some of my computers that are still going strong!
prosp3kt@lemmy.dbzer0.com
on 18 Dec 18:47
Dude, I had a 240 GB SSD that’s 14 years old. And SMART is telling me that it still has 84% life left. This was a main OS drive and was formatted multiple times. Literally data is going to be discontinued before this disk is going to die. Stop spreading fake news. Realistically, how many times do you fill an SSD in a typical scenario?
As per my previous comment, I had /var, /var/log, /home/me/.cache, among many other frequently written directories on the SSD since 2019. SSDs have fewer write cycles than HDDs, it’s not “fake news”.
“However, SSDs are generally more expensive on a per-gigabyte basis and have a finite number of write cycles, which can lead to data loss over time.”
I’m not really sure why exactly mine is coil whining; it happens occasionally and nothing else happens aside from the high-pitched sound, but it’s coil whining.
prosp3kt@lemmy.dbzer0.com
on 19 Dec 12:08
How the hell can an SSD have coil whine… without moving parts lol… Second, realistically for a normal user, it’s probable that the SSD is going to last more than 10 years. We aren’t talking about intensive data servers here. We are talking about the hardcorest of gamers, for example, normal people. And of course, to begin with, HDDs don’t have a write limit lol. They fail because of their mechanical parts. Finally, cost benefit. The M.2 I was suggesting is $200 bucks for 4TB. Cmon, it’s not the end of the world, and you multiply speeds… by 700…
How the hell can an SSD have coil whine… without moving parts lol…
Do you even know what “coil whine” is? It has nothing to do with moving parts! “Coil whine” is a physical phenomenon which happens when electrical current makes an electronic component, such as an inductor, to slightly vibrate, emitting a high-pitched sound. It’s a well-known phenomenon for graphic cards (whose only moving part is the cooler, not the source of their coil whinings). SSDs aren’t supposed to make coil whines, and that’s why I’m worried about the health of mine.
Finally, cost benefit. The M.2 I was suggesting is $200 bucks for 4TB. Cmon, it’s not the end of the world, and you multiply speeds… by 700…
I’m not USian so pricing and cost benefits may differ. Also, the thing is that I already have another SSD, a 240G SSD. I don’t need to buy another one, I just need a HDD, which is what I said in my first comment. Just that: a personal preference, a personal opinion regarding personal experiences, and that’s all. The only statement I made beyond personal opinions was regarding the life span, by which I meant the write rate thing. But that’s it: personal opinion, no need for ranting about it.
prosp3kt@lemmy.dbzer0.com
on 20 Dec 00:32
Imagine an M.2 with coil whine, what is the possibility…
Good. However, 2 x 16TB Seagate HDDs are still cheaper, aren’t they?
schizo@forum.uncomfortable.business
on 18 Dec 22:46
These drives aren’t for people who care how much they cost, they’re for people who have a server with 16 drive bays and need to double the amount of storage they had in them.
(Enterprise gear is neat: it doesn’t matter what it costs, someone will pay whatever you ask because someone somewhere desperately needs to replace 16tb drives with 32tb ones.)
In addition to needing to fit it into the gear you have on hand, you may also have limitations in rack space (the data center you’re in may literally be full), or your power budget.
Seagate. The company that sold me an HDD which broke down two days after the warranty expired.
No thanks. laughing in Western Digital HDD running for about 10 years now
zarkanian@sh.itjust.works
on 18 Dec 20:23
I had the opposite experience. My Seagates have been running for over a decade now. The one time I went with Western Digital, both drives crapped out in a few years.
Manifish_Destiny@lemmy.world
on 19 Dec 09:08
I have 10 year old WDs and 8 year old Seagates still kicking. Depends on the year. Some years one is better than others.
satans_methpipe@lemmy.world
on 18 Dec 21:37
Funny because I have a box of Seagate consumer drives recovered from systems going to recycling that just won’t quit. And my experience with WD drives is the same as your experience with Seagate.
Edit: now that I think about it, my WD experience is from many years ago. But the Seagate drives I have are not new either.
Survivorship bias. Obviously the ones that survived their users long enough to go to recycling would last longer than those that crap out right away and need to be replaced before the end of the life of the whole system.
I mean, obviously the whole thing is biased, if objective stats state that neither is particularly more prone to failure than the other, it’s just people who used a different brand once and had it fail. Which happens sometimes.
satans_methpipe@lemmy.world
on 19 Dec 03:11
Ah I wasn’t thinking about that. I got the scrappy spinny bois.
I’m fairly sure me and my friends had a bad batch of Western digitals too.
Any 8-year-old hard drive is a concern. Don’t get sucked into thinking Seagate is a bad brand because of anecdotal evidence. He might’ve bought a Seagate hard drive with a manufacturing defect, but actual data doesn’t really show any particular brand with worse reliability, IIRC. What you should do is research whether the particular model of your drive is known to have reliability problems or not. That’s a better indicator than the brand.
Had the same experience and opinion for years, they do fine on Backblaze’s drive stats but don’t know that I’ll ever super trust them just 'cus.
That said, the current home server has a mix of drives from different manufacturers including seagate to hopefully mitigate the chances that more than one fails at a time.
Not worth the risk for me to find out lol. My granddaddy stored his data on WD drives and his daddy before him, and my daddy after him. Now I store my data on WD drives and my son will too one day. Such is life.
I bought 16TB one as an urgent replacement for a failing raid.
It arrived defective, so I can’t speak on the longevity.
BoxOfFeet@lemmy.world
on 19 Dec 13:58
I have one Seagate drive. It’s a 500 GB that came in my 2006 Dell Dimension E510 running XP Media Center. When that died in 2011, I put it in my custom build. It ran until probably 2014, when suddenly I was having issues booting and I got a fresh WD 1 TB. Put it in a box, and kept it for some reason. Fast forward to 2022, I got another Dell E510 with only an 80 GB. Dusted off the old 500 GB and popped it in. Back with XP Media Center. The cycle is complete. That drive is still noisy as fuck.
Was using one 4TB Seagate for 11 years, then bought a newer model to replace it since I thought it was gonna die any day. That new one died within 6 months. The old one still works, although I don’t use it for anything important now.
SpaceScotsman@startrek.website
on 19 Dec 15:15
“The two models, the 30TB … and the 32TB …, each offer a minimum of 3TB per disk”. Well, yes, I would hope something advertised as being 30TB would offer at least 3TB. Am I misreading this sentence somehow?
It never ceases to amaze me how far we can still take a piece of technology that was invented in the 50s.
That's like developing punch cards to the point where the holes are microscopic and can also store terabytes of data. It's almost Steampunk-y.
Solid state is kinda like a microscopic punch card.
So are optical discs
Much more so than solid state.
More like microscopic fidget bubble poppers.
<img alt="" src="https://lemmy.world/pictrs/image/b5083a10-3933-4c0c-862b-83079bfd907f.jpeg">
radarr goes brrrrrr
barrrr?
…dum tss!
sonarr goes brrrrrr…
I can’t wait for datacenters to decommission these so I can actually afford an array of them on the second-hand market.
Home Petabyte Project here I come (in like 3-5 years 😅)
better start preparing with a 10G network!
Way ahead of you… I have a Brocade ICX6650 waiting to be racked up once I’m not limited to just the single 15A circuit my rack runs off of currently 😅
Hopefully 40G interconnect between it and the main switch everything using now will be enough for the storage nodes and the storage network/VLAN.
Exactly, my nas is currently made up of decommissioned 18tb exos. Great deal and I can usually still get them rma’d the handful of times they fail
Nice, where do you get yours?
also curious, buying new is getting too pricey for me
I personally use goharddrive and serverpartdeals on eBay and have had good luck, but I’m always looking for others
Never used goharddrive but can def endorse spd
Oh hey, I did something right. That’s kinda neat
youtu.be/tKXO02VGrQ0
I am troubled in my heart. I would not have been told so in this way.
HDD read rates are way faster than media playback rates, and seek times are just about irrelevant in that use case. Spinning rust is fine for media storage. It’s boot drives, VM/container storage, etc, that you would want to have on an SSD instead of the big HDD.
And oftentimes some or all of the metadata that helps the filesystem find the files on the drive is stored in memory (zfs is famous for its automatic memory caching) so seek times are further irrelevant in the context of media playback
I’m real curious why you say that. I’ve been designing systems with high IOPS data center application requirements for decades so I know enterprise storage pretty well. These drives would cause zero issues for anyone storing and watching their media collection with them.
Not sure what you’re going on about here. Even these discs have plenty of performance for read/write ops on rarely written data like media. They have the same ability to be used with error-checking filesystems like ZFS or btrfs, and can be used in RAID arrays, which add redundancy against disc failure.
The only negatives of large drives in home media arrays is the cost, slightly higher idle power usage, and the resilvering time on replacing a bad disc in an array.
Your 8-12TB recommendation already has most of these negatives. Adding more space per disc is just scaling them linearly.
Additionally, most media is read in a contiguous scan. Streaming media is very much not random access.
Your typical access pattern is going to be seeking to a chunk, reading a few megabytes of data in a row for the streaming application to buffer, and then moving on. The ~10ms of access time at the start are next to irrelevant. Particularly when you consider that the OS has likely observed that you have unutilized RAM and loads the entire file into the memory cache to bypass the hard drive entirely.
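To put a rough number on that: even in the worst case where the drive did a full seek before every buffered chunk, effective throughput stays far above what a stream needs. The figures below (10ms seek, 200MB/s sustained read, an ~80Mbps 4K stream) are ballpark assumptions, not specs for any particular drive:

```python
# Worst case: a full seek before every buffered chunk of a media file.
SEEK_S = 0.010          # assumed average seek + rotational latency
SUSTAINED_MB_S = 200    # assumed sustained sequential read speed
STREAM_MB_S = 10        # assumed high-bitrate 4K stream (~80 Mbps)

for chunk_mb in (1, 4, 16, 64):
    effective = chunk_mb / (SEEK_S + chunk_mb / SUSTAINED_MB_S)
    print(f"{chunk_mb:>3} MB chunks -> ~{effective:5.0f} MB/s effective "
          f"({effective / STREAM_MB_S:.0f}x the stream bitrate)")
```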
It’s no SSD, but it’s no slower than any other 12TB drive. It’s not shingled but HAMR. The sectors are closer together, so it has even better seeking speed than a regular 12TB drive.
???
It’s absolutely perfect for that. Even if it was shingled tech, that only slows write speeds. Unless you are editing your own video, write seek times are irrelevant. For media playback use only consistent read speed matters. Not even read seek matters except in extreme conditions like comparing tape seek to drive seek. You cannot measure 10 ms difference between clicking a video and it starting to play because of all the other delays caused by media streaming over a network.
But that’s not even relevant because these have faster read seeking than older drives because sectors are closer together.
honestly curious, why the hell was this downvoted? I work in this space and I thought this was still the generally accepted advice?
Because everything he said was wrong?
Because people are thinking through specific niche use cases coupled with “Well it works for me and I never do anything ‘wrong’”.
I’ll definitely admit that I made the mistake of trying to have a bit of fun when talking about something that triggers the Dunning-Kruger effect. But people SHOULD be aware of how different use patterns impact performance, how that performance impacts users, and generally how different use patterns impact wear and tear on the drive.
Come on man, everything, and mean everything you said is wrong.
Budget tape backup?
No, you can’t even begin to compare drives to tape. They’re completely different use cases. A hard drive can contain a backup, but it’s not physically robust enough to be unplugged, rotated off-site, and put into long-term storage like tape. You might as well say a Honda Accord is a budget semi tractor-trailer.
Then you specifically called out personal downloads of anime as a bad use case. That’s absolutely wrong in all cases.
It is absurd to imply that everyone else except for you is less knowledgeable and using a niche case except you.
Mainly because of that. Spinning rust drives are perfect for large media libraries.
There isn’t a hard drive made in the last 15 years that couldn’t handle watching media files. Even the SMR crap the manufacturers introduced a while back could do that without issue. For 4K video you’re going to see average bitrates around 50 Mbps with peaks in the low 100 Mbps range, and that’s for high-quality videos. Write speed is irrelevant for media consumption, and unless your hard drive is ridiculously fragmented, seek speed is also irrelevant. Even an old 5400 RPM SATA drive is going to be able to handle that load 99.99% of the time. And anything lower than 4K video is a slam dunk.
Everything I just said goes right out the window for a multi-user system that’s streaming multiple media files concurrently, but the vast majority of people never need to worry about that.
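For a rough sense of the multi-user case: assuming ~100MB/s of sustained reads from an older 5400 RPM drive and the bitrates mentioned above (both assumptions, and seeking back and forth between concurrent streams will eat into the total):

```python
# How many concurrent streams fit in an assumed ~100 MB/s of sustained reads.
DRIVE_MB_S = 100                    # assumed older 5400 RPM sustained read
for mbps in (25, 50, 100):          # assumed stream bitrates (megabits/s)
    per_stream_mb_s = mbps / 8
    print(f"{mbps:>3} Mbps streams: ~{int(DRIVE_MB_S / per_stream_mb_s)} "
          f"concurrent before the drive is the bottleneck")
```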
Do you know about tape backup systems for consumers? From my (brief) search it looks like tape is more economical at the scale used by a data center, but it seems very expensive and more difficult for consumers.
So I'm guessing you don't really know what you're talking about.
Just one would be a great backup, but I’m not ready to run a server with 30TB drives.
I’m here for it. The 8-disk server is normally a great form factor for size, data density, and redundancy with RAID6/raidz2.
These would net around 180TB in that form factor. That would go a long way for a long while.
I dunno if you would want to run raidz2 with disks this large. The resilver times would be absolutely bazonkers, I think. I have 24 TB drives in my server and run mirrored vdevs because the chances of one of those drives failing during a raidz2 resilver is just too high. I can't imagine what it'd be like with 30 TB disks.
A few years ago I had a 12-disk RAID6 array and the power distributor (the bit between the redundant PSUs and the rest of the system) went and took 5 drives with it; lost everything on there. Backup is absolutely essential, but if you can’t do that for some reason, at least use RAID1, where you only lose part of your data if you lose more than 2 drives.
Yeah I agree. I just got 20tb in mine. Decided to just z2, which in my case should be fine. But was contemplating the same thing. Going to have to start doing z2 with 3 drives in each vdev lol.
Is RAID2 ever the right choice? Honestly, I don’t touch anything outside of 0, 1, 5, 6, and 10.
Edit: missed the z, my bad. I don’t use ZFS and just skipped over it.
raidz2 is analogous to RAID 6. It's just the ZFS term for double parity redundancy.
Yeah, I noticed the “z” in there shortly after posting. I don’t use ZFS much, so I kinda skimmed over it.
This is for cold and archival storage right?
I couldn’t imagine seek times on any disk that large. Or rebuild times…yikes.
Definitely not for either of those. Can get way better density from magnetic tape.
They say they got the increased capacity by increasing storage density, so the head shouldn’t have to move much further to read data.
You’ll get further putting a cache drive in front of your HDD regardless, so it’s vaguely moot.
For a full 32TB at the max sustained speed (275MB/s), it’s 32-ish hours to transfer the full amount, 36 if you assume 250MB/s for the whole run. Probably optimistic; CPU overhead could slow that down in a rebuild. That said, in a RAID5 of 5 disks that’s a transfer speed of about 1GB/s, even if you assume not getting close to the max transfer rate. For a small business or home NAS that would be plenty unless you are running greater than 10 Gbit Ethernet.
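Checking that arithmetic, under the same assumptions stated above (32TB drive, 275MB/s max sustained, 250MB/s as the conservative whole-run figure, a 5-disk RAID5 read striping across 4 data disks):

```python
def hours(tb: float, mb_s: float) -> float:
    """Hours to move tb terabytes at mb_s megabytes/second."""
    return tb * 1e12 / (mb_s * 1e6) / 3600

print(f"32 TB @ 275 MB/s: {hours(32, 275):.1f} h")   # ~32 h
print(f"32 TB @ 250 MB/s: {hours(32, 250):.1f} h")   # ~36 h

# 5-disk RAID 5 read striped across 4 data disks at the conservative rate:
array_mb_s = 4 * 250
print(f"Array read ~{array_mb_s} MB/s = {array_mb_s * 8 / 1000:.0f} Gbit/s")  # ~8 Gbit/s
```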
Random access times are probably similar to smaller drives but writing the whole drive is going to be slow
up your block size bro 💪 get them plates stacking 128KB+ a write and watch your throughput gains max out 🏋️ all the ladies will be like🙋♀️. Especially if you get those reps sequentially it’s like hitting the juice 💉 for your transfer speeds.
This is my favorite post ever.
My first HDD had a capacity of 42MB. Still a short way to go until factor 10⁶.
My first HD was a 20mb mfm drive :). Be right back, need some “just for men” for my beard (kidding, I’m proud of it).
So was mine, but the controller thought it was 10mb so had to load a device driver to access the full size.
Was fine until a friend defragged it and the driver moved out of the first 10mb. Thereafter had to keep a 360kb 5¼" drive to boot from.
That was in an XT.
Oh noooo 😭
it honestly could have been a 10mb, I don’t even remember. only thing I really do remember is thinking it was interesting how it used the floppy and second cable, and how the sound it made was used in every 90’s and early 2000’s tv and movie show as generic computer noise :)
You have me beat on the XT; mine was a 286, although it did replace an Apple 2e (granted, both were acquired several years after they were already considered junk in the 386 era).
I remember the sound. Also, it was on a three wheel table, and the whole thing would shake when defragging.
My first one was a Seagate ST-238R. 32 MB of pure storage, baby. For some reason I thought we still needed the two disk drives as well, but I don’t remember why.
“Oh what a mess we weave when we amiss interleave!”
We’d set the interleave to, say, 4:1 (four revolutions to read all data in a track, IIRC), because the hard drive was too fast for the CPU to deal with the data… ha.
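For anyone who never lived through interleaving, here’s a rough sketch of what it cost, assuming the classic MFM-era geometry (3600 RPM, 17 sectors of 512 bytes per track — typical values of the period, not a specific drive). The controller gave up raw throughput so it had time to digest each sector before the next one it wanted came back around:

```python
# Throughput off the platter at different interleave factors
# (assumed classic geometry: 3600 RPM, 17 x 512-byte sectors per track).
RPM, SECTORS, SECTOR_BYTES = 3600, 17, 512

rev_s = 60 / RPM                         # one revolution is ~16.7 ms
track_bytes = SECTORS * SECTOR_BYTES     # ~8.5 KB per track

for interleave in (1, 2, 4):
    track_time = interleave * rev_s      # revolutions to read a full track
    print(f"{interleave}:1 interleave -> ~{track_bytes / track_time / 1000:.0f} KB/s")
```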
Here i am still rocking 6TB.
How can someone without programming skills make a cloud server at home for cheap?
(Like connected to WiFi and that’s it)
Yes. You’ll have to learn some new things regardless, but you don’t need to know how to program.
What are you hoping to make happen?
Raspberry Pi or an old office PC are the usual methods. It’s not so much programming as Linux sysadmin skills.
Beyond that, you might consider OwnCloud for an app-like experience, or just Samba if all you want is local network files.
Debian, virtualmin, podman with cockpit, install these on any cheap used pc you find, after initial setup all other is gui managed
The easiest way is NextCloud.
I run docker services and host virtual machines from Unraid OS
Not programming skills, but sysadmin skills.
Buy a used server on EBay (companies often sell their old servers for cheap when they upgrade). Buy a bunch of HDDs. Install Linux and set up the HDDs in a ZFS pool.
Or install TruNAS and chill.
I went with Linux and BTRFS because I just need a mirror. Lots of options and even more guides.
Cheapest is probably a Raspberry Pi with a USB external drive. Look up “Raspberry Pi NAS,” there are a bunch of guides.
Or you can repurpose an old PC, install some NAS distro, and then configure.
There are a ton of options, very few of which require any programming.
The $0 home server:
youtu.be/IuRWqzfX1ik
30/32 = 0.938
That’s less than a single terabyte. I have a microSD card bigger than that!
;)
Can’t even put it into simplest form.
Now now, no self-shaming about the size of your card. It’s how you use it!
Some IOT perverts are into microSD
I thought I read somewhere that larger drives had a higher chance of failure. Quick look around and that seems to be untrue relative to newer drives.
One problem is that larger drives take longer to rebuild the RAID array when one drive needs replacing. You’re sitting there for days hoping that no other drive fails while the process goes. Current SATA and SAS standards are as fast as spinning platters could possibly go; making them go even faster won’t help anything.
There was some debate among storage engineers about whether they even want drives bigger than 20TB. Avoiding the risk of data loss during a rebuild can be worth trading off density. That will probably be true until SSDs get closer to the price per TB of spinning platters (not necessarily the same; possibly more like double the price).
Yep. It’s a little nerve-wracking when I replace a RAID drive in our NAS, but I do it before there’s a problem with a drive. I can mount the old one back in, or try another new drive. I’ve only ever had one new drive arrive DOA; here’s hoping those stay few and far between.
What happened to using different kinds of drives in every mirrored pair? Not best practice any more? I’ve had Seagates fail one after another and the RAID was intact because I paired them with WD.
You can, but you might still be sweating bullets while waiting for the rebuild to finish.
If you’re writing 100 MB/s, it’ll still take 300,000 seconds to write 30TB. 300,000 seconds is 5,000 minutes, or 83.3 hours, or about 3.5 days. In some contexts, that can be considered a long time to be exposed to risk of some other hardware failure.
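A rough sketch of how that exposure window translates into risk, treating the per-drive annualized failure rate (AFR) and array sizes below as assumptions, and ignoring the correlated failures (same purchase batch, rebuild stress) that are the real worry:

```python
# Chance that another drive fails during an ~83-hour rebuild window,
# assuming independent failures and a guessed per-drive AFR.
REBUILD_HOURS = 83.3
HOURS_PER_YEAR = 24 * 365

for afr in (0.01, 0.02, 0.05):              # assumed AFR: 1%, 2%, 5%
    p_one = 1 - (1 - afr) ** (REBUILD_HOURS / HOURS_PER_YEAR)
    for survivors in (7, 15):               # e.g. 8-bay or 16-bay array minus one
        p_any = 1 - (1 - p_one) ** survivors
        print(f"AFR {afr:.0%}, {survivors} surviving drives: "
              f"~{p_any:.2%} chance of a second failure mid-rebuild")
```

The independent-failure numbers come out small; the reason people still sweat the rebuild is that drives bought together and hammered by a resilver are anything but independent.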
Huh? The hell is this supposed to mean? Are they talking about the internal platters?
More than likely
Avoid these like the plague. I made the mistake of buying two 16 TB Exos drives a couple years ago and have had to RMA them 3 times already.
I stopped buying Seagates when I had 4 of their 2TB Barracuda drives die within 6 months… I was constantly RMAing them. Finally got pissed, sold them, and bought WD Reds; I’ve still got 2 of the Reds in my NAS playing hot backup, with nearly 8 years of power-on time.
I have several WDs with almost 15 years of power on time, not a single failure. Whereas my work bought a bunch of Seagates and our cluster was basically halved after less than 2 years. I have no idea how Seagate can suck so much.
About 10 years ago now, at a past employer, we had a NAS setup that housed a bunch of medical data… all Seagate drives. During my xmas PTO (I was lead on DR), yeah, the fuckers all started failing one after another. Took out 14 drives before the storage team said fuck this, pulled it offline, and had a new NAS brought in from EMC. Was a fun xmas restoring all that shit. Seagate used to be my go-to, but it seems like every single interaction I have with them ends in disaster.
Seagate was my go-to after I had bought those original IBM DeathStars and had to RMA the RMA replacement drive after a few months. But brand loyalty is for suckers. It seemed Seagate had a really bad run after they acquired Maxtor, who always had a bad reputation.
Maxtor… that is a name I have not heard in a long time lol
I recently had to send back a Barracuda drive as well. I’m seeing if the Ironwolf drive fares any better.
I have heard good things about their IronWolf drives, but that’s an enterprise-solution drive, so hopefully it’s worth it
They seem to be real hit or miss. I also have 2 6TB barracudas that have 70,000 power on hours (8 yrs) that are still going fine.
Nice, I agree, I’m sure there is an opposite of me, telling their story of a bunch of failed WD drives and having swore them off.
“Hit or miss” is unfortunately not good enough for consumer electronics.
It means you’re essentially gambling with bad odds so the business you’re giving money to can get away with cutting corners.
Their 3tb and 16 TB are super trash. I’m running 20tb and 24tb and they’ve been solid… So far
Had that issue with the 3tb drives. Bought 4, had to RMA all 4, and then RMA 2 of the replacement drives all within a few months.
The last 2 are still operating 10 years later though. 2 out of 6.
Lmao the HDD in the first machine I built in the mid 90s was 1.2GB
My dad had a 286 with a 40MB hard drive in it. When it spun up it sounded like a plane taking off. A few years later he had a 486 and got a 2gb Seagate hard drive. It was an unimaginable amount of space at the time.
The computer industry in the 90s (and presumably the 80s, I just don’t remember it) was wild. Hardware would be completely obsolete every other year.
My 286er had 2MB RAM and no hard drive, just two 5.25" floppy drives. One to boot the OS from, the other for storage and software.
I upgraded it to 4 MB RAM and bought a 20 MB hard drive, moved EVERY piece of software I had onto it, and it was like 20% full. I sincerely thought that should last forever.
Today I casually send my wife a 10 sec video from the supermarket to choose which yoghurt she wants and that takes up about 25 MB.
I had 128KB of RAM and I loaded my games from tape. And most of those only used 48KB of it.
Yeah, we still had an old 8086 with tape drive and all from my dad’s university days around, but I never actually used that one.
It really was doubling in speed about every 18 months.
Back then that was very impressive!
Yup. My grandpa had 10 MB in his DOS machine back then.
I had a 20mb hard drive
I had a 1gb hard drive that weighed like 20 kgs, some 40 odd pounds
Our first computer was a Macintosh Classic with a 40 MB SCSI hard disk. My first “own” computer had a 120 MB drive.
I keep typoing TB as GB when talking about these huge drives, it’s just so weird how these massive capacities are just normal!
We had family computers first, I can’t recall original specs but I think my mother added in a 384MB drive to the 486 desktop before buying a win98se prebuilt with a 2GB drive. I remember my uncle calling that Pentium II 350MHZ, 64MB SDRAM, Rage 2 Pro Turbo AGP tower “a NASA computer” haha.
Cool. I’ll never buy another Seagate ever again, though.
Same but Western Digital: a 13GB drive that failed and lost all my data 3 times, and the 3rd time was outside the warranty! I had paid $500, the most expensive thing I had ever bought until that day.
I mean, cool and all, but call me when sata or m2 ssds are 10TB for $250, then we’ll talk.
Not sure whether we’ll arrive there; the tech is definitely entering the taper-out phase of the sigmoid. Capacity might very well still get cheaper, even 3x cheaper, but don’t, in any way, expect it to simultaneously keep up with write performance; that ship has long since sailed. The more bits they try to squeeze into a single cell, the slower it gets, and the price per cell isn’t going to change much any more, as silicon has hit a price wall; it’s been a while since the newest, smallest node was also the cheapest.
OTOH, how often do you write a terabyte in one go at full tilt?
I don’t think anyone has much issue with our current write speeds, even at dinky old SATA 6 Gb/s levels. At least for bulk media storage. Your OS boot or game loading, whatever, maybe not. I’d be just fine with exactly what we have now, but just pack more chips in there.
Even if you take apart one of the biggest, meanest, most expensive 8TB 2.5" SSDs, the casing is mostly empty inside. There’s no reason they couldn’t just add more chips even at current density levels, other than artificial market segmentation, planned obsolescence, and pigheadedness. It seems the major consumer manufacturers refuse to let their 2.5" SSDs get out of parity with the capacities on offer in the M.2 form factor drives that everyone is hyperfixated on for some reason, and the pricing structure between 8TB and what few greater-than-8TB models actually are on offer is nowhere near linear, even though the manufacturing cost roughly should be.
If people are still willing to use a “full size” 3.5" form factor with ordinary hard drives for bulk storage, can you imagine how much solid state storage you could cram into a casing that size, even with current low-cost commodity chips? It’d be tons. But the only options available are “enterprise solutions” which are apparently priced with the expectation you’ll have a Fortune 500 or government expense account.
It’s bullshit all the way down; there’s nothing new under the sun in that regard.
The reason is transfer speeds. SATA is slow; M.2 is a direct PCIe link, and SSDs can saturate it, at least in bursts. Doubling the capacity of a 2.5" SSD is going to double its price since you need twice as many chips, and there’s not really a market for 500-buck SATA SSDs; you’re looking at U.2 / U.3 ones instead. Yes, they’re quite a bit more expensive per TB, but look at the difference in TBW compared to consumer SSDs.
If you’re a consumer and want a data grave, buy spinning platters. Or even a tape drive. You neither want, nor need, a high-capacity SSD.
Also you can always RAID them up.
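For context on the interface gap being described, a rough comparison of usable bandwidth ceilings (approximate figures after encoding overhead; exact numbers vary by platform and drive):

```python
# Approximate usable bandwidth ceilings for common SSD interfaces.
interfaces = {
    "SATA III (6 Gb/s)":  0.6,   # ~600 MB/s after 8b/10b encoding
    "PCIe 3.0 x4 (NVMe)": 3.9,   # ~3.9 GB/s
    "PCIe 4.0 x4 (NVMe)": 7.8,   # ~7.8 GB/s
}
sata = interfaces["SATA III (6 Gb/s)"]
for name, gb_s in interfaces.items():
    print(f"{name:<20} ~{gb_s:>3.1f} GB/s ({gb_s / sata:.0f}x SATA)")
```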
For the context of bulk consumer storage (or even SOHO NAS) that’s irrelevant, though, because people are already happily using spinning mechanical 3.5" hard drives for this purpose, and they’re all already SATA. Therefore there’s no logical reason to worry about the physical size or slower write speeds of packing a bunch of flash chips into the same sized enclosure for those particular use cases.
There are reasons a big old SSD would be suitable for this. Silence, reliability, no spin up delay, resistance to outside mechanical forces, etc.
Sure it makes sense: pretty much no one but you is going to buy them, and stocking shelves and warehouses with product costs money. All that unmoved stock would make them more expensive, making even more people not buy them. It’s inefficient.
These things are unreliable, I had 3 seagate HDDs in a row fail on me. Never had an issue with SSDs and never looked back.
Well, until you need the capacity, why not use an SSD? It’s basically mandatory for the operating system drive, too.
Capacity for what? There are 4TB M.2 SSDs costing $200, c’mon…
I would rather not buy such large SSDs. For most stuff the performance advantage is useless while the price is much higher, and my impression is still that such large SSDs have a shorter lifespan (in terms of how many writes it takes for them to break down). Recovering data from a failing HDD is also easier: SSDs just turn read-only or completely fail at some point, and in the latter case even data recovery companies are often unable to recover anything, while HDDs will often give signs that good monitoring software can detect weeks or months in advance, so you know to be more cautious with them.
How is it easier? Do you open your HDDs and take the data off the platters yourself? Do you have specialized equipment and knowledge? Second, if you see in SMART that you’re getting close to the TBW rating, change the SSD, duh… SMART is a lot more effective on SSDs; depending on the model it even gives you an estimated time to live…
Obviously not. Often they don’t break all at once, but start by corrupting smaller groups of sectors.
Seagate in general are unreliable in my own anecdotal experience. Every Seagate I’ve owned has died in less than five years. I couldn’t give you an estimate on the average failure age of my WD drives because it never happened before they were retired due to obsolescence. It was over a decade regularly though.
HDDs are unreliable, with all those moving parts and arms and cylinders.
Everybody’s talking shit about Seagate here. Meanwhile, I’ve never had a hard drive die on me. Eventually the capacity just became too little to keep around and I got bigger ones.
Oldest I’m using right now is a decade old, Seagate. Actually, all the HDDs are Seagate. The SSDs are Samsung. Granted, my OS is on an SSD, as well as my most used things, so the HDDs don’t actually get hit all that much.
Seagate had some bad luck with their 3TB drives about 15 years ago now if memory serves me correctly.
Since then, Western Digital (the only other remaining HDD manufacturer) pulled some shenanigans by not correctly labeling the different technologies in use on their NAS drives, which directly impacted their practicality and performance in NAS applications (the performance issues were particularly egregious when used in a ZFS pool)
So basically pick your poison. Hard to predict which of the duopoly will do something unworthy of trusting your data upon, so uh…check your backups I guess?
That decade old one is 3TB. 😅
Unfortunately, I have about 10 dead 3TB drives sitting around in my closet. I took the sacrifice so you don’t have to :-)
Thanks. 👍
at least you have a bunch of nice coasters and cool magnets now.
Had good impressions and experiences with Toshiba drives. Chugged along quite nicely.
Ah, I thought I had remembered their hard drive division being acquired, but I was wrong! Per Wikipedia:
Yeah our file server has 17 Toshiba drives in the 10/14 TiB sizes ranging from 2-4 years of power-on age and zero failures so far (touch wood).
Of our 6 Seagate drives (10 TiB), 3 of them died in the 2-4 year age range, but one is still alive 6 years later.
We’re in Japan and Toshiba is by far the cheapest here (and have the best support - they have advance replacement on regular NAS drives whereas Seagate takes 2 weeks replacement to ship to and from a support center in China!) so we’ll continue buying them.
I’ve had a Samsung SSD die on me, I’ve had many WD drives die on me (also the last drive I’ve had die was a WD drive), I’ve had many Seagate drives die on me.
Buy enough drives, have them for a long enough time, and they will die.
Yeah, same. I switched to Seagate after 3 WD drives failed in less than 3 years. Never had problems since.
I had 3 drives from Seagate (including 1 enterprise) die or develop file-corruption issues before I gave up and switched to SSDs entirely…
That’s good, really good news, to see that HDDs are still being manufactured and being thought of, because I’m having a serious problem trying to find a new 2.5" HDD for my old laptop here in Brazil. I can quickly find SSDs across the Brazilian online marketplaces, and they’re not very expensive, but I’m intending on purchasing a mechanical one because SSDs won’t hold data as long as HDDs do when left unpowered. Yet there are so few HDDs for sale, and those I could find aren’t brand-new.
Realistically this is not a good reason to select SSD over HDD. If your data is important it’s being backed up (and if it’s not backed up it’s not important. Yada yada 3-2-1 backups and all. I’ll happily give real backup advice if you need it)
In my anecdotal experience across both my family’s various computers and computers I’ve seen bite the dust at work, I’ve not observed any longevity difference between HDDs and SSDs (in fact I’ve only seen 2 fail and those were front desk PCs that were effectively always on 24/7 with heavy use during all lobby hours, and that was after multiple years of that usecase) and I’ve never observed bit rot in the real world on anything other than crappy flashdrives and SD cards (literally the lowest quality flash you can get)
Honestly best way to look at it is to select based on your usecase. Always have your boot device be an SSD, and if you don’t need more storage on that computer than you feel like buying an SSD to match, don’t even worry about a HDD for that device. HDDs have one usecase only these days: bulk storage for comparatively low cost per GB
I replaced my laptop’s DVD drive with a HDD caddy adapter, so it supports two drives instead of just one. Then, I installed a 120G SSD alongside with a 500G HDD, with the HDD being connected through the caddy adapter. The entire Linux installation on this laptop was done in 2019 and, since then, I never reinstalled nor replaced the drives.
But sometimes I hear what seems to be a “coil whine” (a short high pitched sound) coming from where the SSD is, so I guess that its end is near. I have another SSD (240G) I bought a few years ago, waiting to be installed but I’m waiting to get another HDD (1TB or 2TB) in order to make another installation, because the HDD was reused from another laptop I had (therefore, it’s really old by now, although I had no I/O errors nor “coil whinings” yet).
Back when I installed the current Linux, I mistakenly placed /var and /home (and consequently /home/me/.cache and /home/me/.config, both folders which have high write rates because I use KDE Plasma) on the SSD. As the years passed by, I realized it was a mistake, but I never had the courage to relocate things, so I did some “creative solutions” (“gambiarra”) such as creating symlinked folders for .cache and .config, pointing them to another folder within the HDD (sketched below).
As for backup, while I have three old spare HDDs holding the same old data (so it’s a redundant backup), there are so many new things (hundreds of GBs) I both produced and downloaded that I’d need lots of room to better organize all the files, find out what is not needed anymore, and renew my backups. That’s why I was looking for either 1TB or 2TB HDDs, as brand-new as possible (also, I’m intending to tinker more with things such as data science after a fresh new installation of Linux). It’s not a thing that I’m really in a hurry to do, though.
Edit: and those old spare HDDs are 3.5" so they wouldn’t fit the laptop.
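A minimal sketch of that kind of relocation, with hypothetical paths (the /mnt/hdd mount point and the choice of ~/.cache are examples, not this commenter’s actual layout); do the move while the programs using the directory are closed:

```python
import shutil
from pathlib import Path

# Move a write-heavy directory off the SSD and leave a symlink behind.
src = Path.home() / ".cache"              # example: directory currently on the SSD
dst = Path("/mnt/hdd/offloaded/.cache")   # example: destination on the HDD

dst.parent.mkdir(parents=True, exist_ok=True)
if src.exists() and not src.is_symlink():
    shutil.move(str(src), str(dst))       # copies across filesystems, then removes
    src.symlink_to(dst, target_is_directory=True)
```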
I doubt the high pitched whine that you’re hearing is the SSD failing. The sheer amount of writes to fully wear out an SSD is…honestly difficult to achieve in the real world. I’ve got decade old budget SSDs in some of my computers that are still going strong!
Dude, I have a 240 GB SSD that’s 14 years old, and SMART is telling me it still has 84% life left. It was a main OS drive and was formatted multiple times. Literally, SATA is going to be discontinued before this disk dies. Stop spreading fake news. Realistically, how many times do you fill an SSD in a typical scenario?
As per my previous comment, I had /var, /var/log, /home/me/.cache, among many other frequently written directories, on the SSD since 2019. SSDs have fewer write cycles than HDDs, it’s not “fake news” (en.wikipedia.org/wiki/Solid-state_drive).
I’m not really sure exactly why mine is coil whining; it happens occasionally and nothing else happens aside from the high-pitched sound, but it is coil whining.
How the hell can an SSD coil whine… without moving parts, lol… Second, realistically, for a normal user, that SSD is probably going to last more than 10 years. We aren’t talking about intensive data servers here; we’re talking about the hardcorest of gamers, for example, normal people. And of course, to begin with, HDDs don’t have a write limit lol. They fail because of their mechanical parts. Finally, cost-benefit: the M.2 I was suggesting is $200 for 4TB. C’mon, it’s not the end of the world, and you multiply speeds… by 700…
Do you even know what “coil whine” is? It has nothing to do with moving parts! “Coil whine” is a physical phenomenon which happens when electrical current makes an electronic component, such as an inductor, to slightly vibrate, emitting a high-pitched sound. It’s a well-known phenomenon for graphic cards (whose only moving part is the cooler, not the source of their coil whinings). SSDs aren’t supposed to make coil whines, and that’s why I’m worried about the health of mine.
I’m not USian, so pricing and cost-benefit may differ. Also, the thing is that I already have another SSD, a 240G one. I don’t need to buy another; I just need an HDD, which is what I said in my first comment. Just that: a personal preference, a personal opinion based on personal experience, and that’s all. The only statement I made beyond personal opinion was about the lifespan, by which I meant the write-endurance thing. But that’s it: personal opinion, no need to rant about it.
Imagine an M.2 with coil whine, what is the possibility…
Great, can’t wait to afford one in 2050.
Fleebay? Yup, me too!
$4.99 for the drive plus $399.00 s&h
How many platters?!
30 to 32 platters. You can write a file on the edge and watch it as it speeds back to the future!
Good. However, 2 x 16TB Seagate HDDs are still cheaper, aren’t they?
These drives aren’t for people who care how much they cost, they’re for people who have a server with 16 drive bays and need to double the amount of storage they had in them.
(Enterprise gear is neat: it doesn’t matter what it costs, someone will pay whatever you ask because someone somewhere desperately needs to replace 16tb drives with 32tb ones.)
In addition to needing to fit it into the gear you have on hand, you may also have limitations in rack space (the data center you’re in may literally be full), or your power budget.
Heck yeah.
Always a fan of more storage. Speed isn’t everything!
HP servers have more fans!
Seagate. The company that sold me an HDD which broke down two days after the warranty expired.
No thanks.
laughing in Western Digital HDD running for about 10 years now
I had the opposite experience. My Seagates have been running for over a decade now. The one time I went with Western Digital, both drives crapped out in a few years.
I have 10 year old WDs and 8 year old Seagates still kicking. Depends on the year. Some years one is better than others.
Did you buy consumer Barracuda?
Funny because I have a box of Seagate consumer drives recovered from systems going to recycling that just won’t quit. And my experience with WD drives is the same as your experience with Seagate.
Edit: now that I think about it, my WD experience is from many years ago. But the Seagate drives I have are not new either.
Survivorship bias. Obviously the ones that survived their users long enough to go to recycling would last longer than those that crap out right away and need to be replaced before the end of the life of the whole system.
I mean, obviously the whole thing is biased, if objective stats state that neither is particularly more prone to failure than the other, it’s just people who used a different brand once and had it fail. Which happens sometimes.
Ah I wasn’t thinking about that. I got the scrappy spinny bois.
I’m fairly sure me and my friends had a bad batch of Western digitals too.
I currently have an 8 year old Seagate external 4TB drive. Should I be concerned?
Any 8 years old hard drive is a concern. Don’t get sucked into thinking Seagate is a bad brand because of anecdotal evidence. He might’ve bought a Seagate hard drive with manufacturing defect, but actual data don’t really show any particular brand with worse reliability, IIRC. What you should do is research whether the particular model of your drive is known to have reliability problems or not. That’s a better indicator than the brand.
Had the same experience and opinion for years, they do fine on Backblaze’s drive stats but don’t know that I’ll ever super trust them just 'cus.
That said, the current home server has a mix of drives from different manufacturers including seagate to hopefully mitigate the chances that more than one fails at a time.
Western digital so good
whoa
Dude
Haven’t bought Seagate in 15 years. They improve their longevity?
Vastly. I’m running all Seagate IronWolf Pros. Best drives I’ve ever used.
Used to be WD all the way.
I’m going to have to pass though. They cost too much. I buy refurb with 5 year warranty
Not worth the risk for me to find out lol. My granddaddy stored his data on WD drives and his daddy before him, and my daddy after him. Now I store my data on WD drives and my son will to one day. Such is life.
And here I am with HGST drives hitting 50k hours
Edit: no one ever discusses the Backblaze reliability statistics. It’s interesting to see how they stack up against the anecdotes.
I bought 16TB one as an urgent replacement for a failing raid.
It arrived defective, so I can’t speak on the longevity.
I have one Seagate drive. It’s a 500 GB that came in my 2006 Dell Dimension E510 running XP Media Center. When that died in 2011, I put it in my custom build. It ran until probably 2014, when suddenly I was having issues booting and I got a fresh WD 1 TB. Put it in a box, and kept it for some reason. Fast forward to 2022, I got another Dell E510 with only an 80 GB. Dusted off the old 500 GB and popped it in. Back with XP Media Center. The cycle is complete. That drive is still noisy as fuck.
www.backblaze.com/…/hard-drive-test-data
Backblaze reports are cool
Nice data but I stick with Toshiba old HGST and WD. For me they seem to last much longer than Seagate
My personal experience has been hit n miss.
Was using one 4TB Seagate for 11 years, then bought a newer model to replace it since I thought it was gonna die any day. That new one died within 6 months. The old one still works, although I don’t use it for anything important now.
“The two models, the 30TB … and the 32TB …, each offer a minimum of 3TB per disk”. Well, yes, I would hope something advertised as being 30TB would offer at least 3TB. Am I misreading this sentence somehow?
They probably mean the hard drive has 10 platters, each containing at least 3TB.