Backblaze releases their HDD failure rates

As you can see, the tables they published simply group the drives by model and that's it. No grouping by age, read/write volume, operating conditions and so on. According to this table, WD tops the failure rates at 11.31% for their WD60EFRX drive. This is a 6TB WD Red drive designed for small home NAS systems, not large datacentre environments. Coming in second at 10.20% is the Seagate ST4000DX000, a 4TB Barracuda drive, which is an internal desktop drive. In terms of raw numbers of failed drives, the Seagate ST4000DM000 was top with 278 drives failing. This is also a 4TB Barracuda drive, but they did use 34,744 of them and they had a total of 3,187,409 drive days of run time, more than three times that of any other model.

[Image: blog_q3_2016_stats_table_1.jpg — Backblaze Q3 2016 hard drive stats table]

https://linustechtips.com/main/topi...me-more-flawed-hdd-failure-statistics-for-q3/
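
For anyone curious how percentages like those are derived, here's a rough sketch of the usual failures-per-drive-year arithmetic in Python, using the ST4000DM000 figures quoted above; the exact formula Backblaze uses may differ in detail.

```python
# Back-of-envelope annualized failure rate (AFR) from failures and drive days.
# The inputs are the ST4000DM000 figures quoted above; the formula is the
# commonly cited failures-per-drive-year approach, which may differ slightly
# from Backblaze's published methodology.

def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Return AFR as a percentage: failures per drive-year of runtime."""
    drive_years = drive_days / 365.0
    return failures / drive_years * 100.0

failures = 278          # failed ST4000DM000 units in the period
drive_days = 3_187_409  # total accumulated drive days for that model

print(f"AFR ~ {annualized_failure_rate(failures, drive_days):.2f}%")
# Prints roughly 3.18%: a high raw failure count, but spread over far more
# drive days than any other model in the table.
```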
 

They say Backblaze's reliability statistics are null and void because they use desktop drives in servers. They aren't. IMO, the article is a hit-job on Backblaze.

It used to be that the only differences between enterprise and desktop-class HDDs with similar specs were the firmware (optimized for low-queue-depth performance on desktop drives, maximum I/Os on enterprise drives) and the interface (SAS vs SATA). If anything, drives in desktops face a more hostile environment: they see more temperature cycles since they are typically power cycled multiple times per day. Real enterprise drives were the 10K and 15K rpm drives, and they had awful reliability.

If Seagate and Western Digital are selling HDDs that die under sustained load, they are selling inferior products, period!

If they don't like bad press, they should improve their reliability.

edit: There's a really good analysis of the Backblaze numbers here. It's also worth noting that in the most recent 2016 statistics, Seagate drives have one third the failure rate of prior years.

Cheers
 

Yeah, it's good that Seagate started to take their reliability seriously again. I wonder how much the Samsung HDD acquisition helped with that? Samsung drives had been very good WRT reliability, and Seagate's reliability started to gradually improve after the acquisition.

Seagate had been my go-to drive manufacturer up until 3-4 years ago, when I had 4 out of 10 drives fail on me after about 2-3 years of use. Those are the first drives I've had fail on me since the bad Maxtor ones back in the late '90s.

That report gives me a bit of hope as I just recently purchased 3x Seagate 4 TB 2.5" drives. Granted, not the models they use, but hopefully things have improved across their entire HDD lines. Still not great, but significantly better than a few years ago.

That report also mirrors what I've been seeing in online user reviews of WD drives, which feature more reports of failures in the past few years than I'd ever seen previously. While it's anecdotal, it does match up with that report relatively well.

HGST have been the drives I've been going to if I need the best reliability possible. Their prices seem to reflect that to a degree as well, as they've steadily gone up over the past few years. 3-4 years ago they were one of the cheaper drives you could get, but that isn't really the case anymore. What's interesting is that the trouble with WD drives isn't reflected in the HGST drives, despite HGST being acquired by WD about 5 years ago. I'm guessing that WD are operating the two drive lines completely separately from each other.

Regards,
SB
 
I don't think I will buy a mechanical drive anymore since I don't have big storage needs. When I upgrade my desktop, it's gonna be just one 1TB NVMe SSD.
 
I was just thinking the exact opposite. I finally have an SSD that I use for my system drive and some games, but mechanical drives are sooo dirt cheap right now that I think I wanna pick up a big one just to back up everything I have and to install a Linux partition on, plus I can always use more storage room.

EDITED BITS: Thanks Gubbi for the information/opinion on the validity of these figures. I've always liked these reports as they just seem to give a great approximation of who has the best/worst longevity. :)
 
The HGST drive that started clicking a while ago, the one you guys recommended I replace, is still going fine right now (after I changed it from SATA power to Molex power), fingers crossed.
 
I also have 3 HGST drives of 6 TB or larger (can't recall the exact sizes now) and I'm quite disappointed by the amount of vibration noise they make (everything else is fine)
 
By default my HGST drives work in performance mode. Changed to silent mode and they do run more quietly.
 
Well so much for my luck with Seagate drives. 1 of the 3 drives that I got for the M-ITX system I built for the trip to Japan has turned out to be flaky. Ah well, they were cheap (109 USD) for a 4 TB 2.5" drive. Unfortunately being in Japan there isn't much I can do about a warranty replacement for a US market drive. Guess when I get back to the US I'll have to see about getting Seagate to replace it.

Regards,
SB
 
seagate technologies, cuz two out of three aint bad™

good luck with your rma, they will replace it but it might take a while.
 
Backblaze lists SSD failure rates, they die faster than HDDs in lifetime (guru3d.com)
Cloud storage provider Backblaze occasionally publishes data on its storage media, providing fascinating facts about the hard disks used. SSDs have recently been added. Now the company reports that SSDs are failing at the same rate as HDDs, especially in lifetime measurements.
...
  • The average age of the SSD drives is 14.2 months, and the average age of the HDD drives is 52.4 months.
  • The oldest SSD drives are about 33 months old and the youngest HDD drives are 27 months old.
Basically, the timelines for the average age of the SSDs and HDDs don’t overlap very much. The HDDs are, on average, more than three years older than the SSDs. This places each cohort at very different points in their lifecycle. If you subscribe to the idea that drives fail more often as they get older, you might want to delay your HDD shredding party for just a bit.
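
A minimal sketch (in Python) of the age-cohort point being made here: the only fair comparison is within matching age buckets, and right now the SSD and HDD fleets barely overlap in age. The bucket counts below are invented purely for illustration; only the average-age gap comes from the excerpt above.

```python
# Hypothetical failure counts bucketed by drive age -- the numbers are made up
# for illustration; the idea is to compare SSDs and HDDs only within the same
# age bucket rather than across fleets with very different average ages.
from collections import defaultdict

# (drive type, age bucket in months, drives observed, failures)
observations = [
    ("ssd", "0-24",  1000, 10),
    ("hdd", "0-24",   800,  9),
    ("hdd", "24-48", 1200, 25),
    ("hdd", "48+",   1500, 60),
]

rates_by_bucket = defaultdict(dict)
for kind, bucket, drives, failures in observations:
    rates_by_bucket[bucket][kind] = failures / drives * 100

for bucket, rates in rates_by_bucket.items():
    print(bucket, {k: f"{v:.1f}%" for k, v in rates.items()})
# Only the "0-24" bucket contains both SSDs and HDDs, so that's the only
# apples-to-apples comparison available until the SSD fleet ages.
```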
 
I was about to write HDD's off. Most people viewed HDD's (at least I did) as being more susceptible to failure when in fact it's very similar to SSD's. Failure rates now have less importance in purchase decisions, so HDD's are still very much in the picture.
 
Yeah, I have drives from 2005 that still work just fine. That said, not all of them from that period have survived. All of my WD drives from around that time are now dead (about 6 of them) and about half the Seagate drives have died (about 4), while both of my Samsung drives and all my Hitachi drives are still going strong. I'm going to be permanently retiring all of the drives from that era soon, however. Not due to reliability, but because their capacity is getting too small to justify continued use.

I'm hoping all the WD drives in my NAS (a mix of WD and HGST branded drives) don't suffer the same fate as those older WD drives. While I'd love to build a NAS with just SSDs, the cost per TB is way too high to justify their use considering how the NAS is used.

Regards,
SB
 
What makes you think that?
(also somehow managed to quote a portion of your post that doesn't include those apostrophe plurals, thankfully :p )
A prior advantage of SSDs was a low failure rate, but always at a higher cost. One less worry about failure rates means other concerns like speed, form factor, thermals, power draw, capacity and cost matter more in the purchase, depending on how I use the drive. After a few HDD failures in the past, failure rate doesn't have the same importance now when deciding between an HDD and an SSD.
 

For home NAS purposes, it's more likely that you'd use a SATA drive like the Samsung 870 QVO SATA III 2.5" 8TB SSD (MZ-77Q8T0B), currently 746.89 USD. Especially if you are limited to 1 Gb ethernet. And even if you had 10 Gb ethernet or higher, there are various RAID arrays that can still saturate your link speed while using SATA drives.
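
As a rough back-of-envelope for that link-saturation point, here's a quick Python sketch; the throughput figures are ballpark sequential numbers, not measurements of any specific drive.

```python
import math

# Raw link rates in MB/s, ignoring protocol overhead.
link_1gbe = 1_000_000_000 / 8 / 1_000_000   # ~125 MB/s
link_10gbe = 10 * link_1gbe                 # ~1250 MB/s

# Ballpark sequential throughput per drive (MB/s) -- assumed, not measured.
sata_ssd = 550   # typical SATA III SSD
hdd = 200        # typical modern 7200 rpm HDD

def drives_to_fill(link_mb_s: float, per_drive_mb_s: float) -> int:
    """Drives needed, striped with ideal scaling, to saturate the link."""
    return math.ceil(link_mb_s / per_drive_mb_s)

print("1 GbE: ", drives_to_fill(link_1gbe, sata_ssd), "SATA SSD(s) or",
      drives_to_fill(link_1gbe, hdd), "HDD(s)")
print("10 GbE:", drives_to_fill(link_10gbe, sata_ssd), "SATA SSDs or",
      drives_to_fill(link_10gbe, hdd), "HDDs")
# A single SATA SSD (or a small HDD stripe) already fills 1 GbE; even 10 GbE
# only takes roughly 3 SATA SSDs or 7 HDDs under ideal striping.
```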

Although I'm not sure I'd trust QLC in a NAS if I was going for longevity.

Either way, still quite a bit higher cost than a good quality HDD. I don't even want to think about the cost of a server grade SSD in those capacities (a lot!). :)

Regards,
SB
 
I use a handful of SSDs on my TrueNAS Core build at home. A pair of Z-mirrored M.2 64GB Optanes for ZIL/SLOG, a pair of Z-mirrored SATA Samsung 860 Pro 256GB drives for high-speed scratch and temp space, and a pair of Z-mirrored SATA 120GB SLC drives (I can't remember the brand) for boot. Primary storage on that build is eight 12 Gbps SAS Seagate Exos 14TB drives (two RAIDZ-2 vdevs to create a single ~84TB zpool).
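
For anyone wondering how vdev layout maps to usable space, here's a rough RAIDZ-2 capacity sketch in Python, assuming 14 TB drives; it ignores ZFS metadata, padding and TB-vs-TiB accounting, so real pools report somewhat less.

```python
# Rough RAIDZ usable-capacity arithmetic -- a sketch, not a ZFS tool.
def raidz_usable_tb(width: int, parity: int, drive_tb: float) -> float:
    """Approximate usable capacity of one RAIDZ vdev: (width - parity) * drive size."""
    return (width - parity) * drive_tb

drive_tb = 14  # per-drive capacity used in the build described above

# Eight drives as a single 8-wide RAIDZ-2 vdev:
print(raidz_usable_tb(8, 2, drive_tb), "TB usable")      # 84 TB
# Eight drives split into two 4-wide RAIDZ-2 vdevs striped into one pool:
print(2 * raidz_usable_tb(4, 2, drive_tb), "TB usable")  # 56 TB
```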

Those Optane drives get quite the workout; the SATA scratch disks definitely get some work, and the SATA boot disks do almost nothing, heh. I use a 10GbE adapter to feed that storage into a switch, which has a bunch of 1Gbps ports branching out to various locations in the house. The TrueNAS box runs a small handful of BSD jails and two thick VMs (one of which is Pi-hole x86). It's a bit of overkill; however, I have repeatedly measured more than 2Gbps worth of disk I/O when a few machines are backing up at night or one of my jails is doing something (Plex indexing or transcoding a few streams for family members watching home video we recently posted).
 