Suggestions for hard drives and setup

hoho

Veteran
Over the years I've filled my PC with five HDDs ranging from 160 GB to 1 TB, each a different size. As I'm about to receive a fairly substantial sum of money soon, I decided I should "fix" things and actually get a somewhat reliable setup going. The main use case is just storing random data at home for extended periods. I'll likely keep some of the older drives in use for cases where a single separate drive performs better.

I was thinking about getting four 1.5-2 TB drives and setting them up as a RAID5 array. That should protect me from single-drive failures, but obviously not from filesystem errors. Since I'm running Linux, I have quite a lot of freedom in setting up the RAID. I was thinking about combining all the drives into one huge array and making different partitions on it: something like one partition at 80% of the capacity for main storage and 20% for the most important stuff, which gets copied over as needed, maybe even through some kind of incremental backup system. Or would it make more sense to make two separate RAID arrays? Maybe the main one as RAID5 and the smaller one for the important stuff as RAID1 or 10?
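For what it's worth, the two-arrays-on-the-same-disks idea can be sketched with Linux mdadm roughly like this. The device names /dev/sdb through /dev/sde and the partition split are hypothetical, and the mdadm.conf path varies by distro; this is just a sketch, not a tested recipe:

```shell
# Assumes four empty drives at /dev/sd[b-e] (hypothetical names; adjust to
# your system). Partition each drive first, e.g. with parted: an ~80%
# partition (sdX1) and a ~20% partition (sdX2). Then:

# Big RAID5 array across the four large partitions (usable space = 3 drives)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Smaller RAID10 array across the four small partitions for the important stuff
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
    /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2

# Filesystems, plus a persistent array config (path varies by distro;
# Debian-style shown)
mkfs.xfs /dev/md0
mkfs.xfs /dev/md1
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

One caveat with this layout: both arrays share the same four spindles, so a rebuild of either array hammers all four drives at once.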

I think that should give me reasonable reliability as long as the box doesn't blow up. Obviously, having separate external boxes with drives would give higher reliability, but I'd like to keep things somewhat simple.


So the question is: does that kind of setup sound reasonable? Are there any other RAID mechanisms that would be similar or better in terms of speed, reliability, and usable capacity? What specific HDDs would you suggest?
 
Nothing beats RAID10 if you can afford it. IMHO the best balance of size and speed right now is the WD Black. Raptors are faster but much smaller.
 
1 SSD as a boot drive, and 4 WD Green 5400 rpm 2 TB HDDs for storage, either in RAID10 (4 TB) or RAID5 (6 TB). RAID10 will have more performance and will be much easier to set up and do disaster recovery on, but RAID5 will have more capacity. No need to have power-hungry drives just for home storage.
 

I agree on the power issue. I just had a very bad experience with a WD Green (it overheated its PCB and died young), so I swore off them :)
 
Don't do RAID5, a second drive failure during rebuild is way too likely. Use RAID6 or RAID10 (preferably with two sets of drives from different brands).
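A back-of-the-envelope way to see the risk: the usual argument is less about a second whole-drive failure than about hitting an unrecoverable read error (URE) while the rebuild reads every surviving drive end to end. Assuming the common consumer-drive spec of one URE per 10^14 bits and independent errors (a rough model, not a precise prediction), a four-drive RAID5 of 2 TB disks looks like this:

```shell
# Rough model: P(at least one URE) = 1 - (1 - 1e-14)^bits ~= 1 - exp(-bits * 1e-14)
# A rebuild reads the 3 surviving 2 TB drives in full = 6e12 bytes = 4.8e13 bits
awk 'BEGIN {
    bits = 3 * 2e12 * 8            # bits read during the rebuild
    p = 1 - exp(-bits * 1e-14)     # URE spec: 1 error per 1e14 bits
    printf "P(URE during rebuild) ~ %.0f%%\n", p * 100
}'
# prints: P(URE during rebuild) ~ 38%
```

With RAID6 a single URE during a one-disk rebuild is still recoverable from the second parity, which is the main argument for it at these capacities.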

If you don't need/want the increased throughput you get from RAID striping, there are a couple of other alternatives. Windows Home Server v1 with transparent duplication is nice, since you can just add drives on the fly, but it has a limit of 10 simultaneous connections, which is annoying. Unraid is an option. Maybe FlexRAID once it's ready, although new storage solutions aren't the thing you want to beta test with important data. Same goes for Amahi/Greyhole (I think the implementation is very inelegant too).
 
Since you said you run Linux, you really should check out unRAID [ http://lime-technology.com/ ]. It has several benefits: it can combine several different-sized drives into what acts/looks like a single drive, it allows idle disks to spin down when not in use, it offers parity protection so if one drive fails you can rebuild it from the parity and the other drives, and should you ever encounter a two-disk failure, only the data on those two drives is lost while the data on your other drives remains perfectly safe. It boots from a simple USB flash/thumb drive so as not to take up any drive slots. They have a free version which allows 2 data drives and 1 parity drive, and the license cost for larger arrays of up to 22 drives is very reasonable.

As for Green/5400 rpm drives, I use them extensively for my multimedia collection. They're plenty fast enough to do full Blu-ray streaming to my XBMC front-ends.
 
Lately one can get a multitude of different-branded 2 TB 5400 rpm/Green drives for $70-$80, while their 7200 rpm counterparts are around twice the price: $150 on extreme deals, but closer to the $180-$200 range otherwise. The one exception is the new Hitachi 7K3000 and the old 7K2000 2 TB drives, on special for $100 with a normal price of $120. So even at their best, the 7200 rpm drives are about 50% more expensive than the 5400 rpm/Green drives.
 
Nothing beats RAID10 if you can afford it.
I'm not going to be streaming much data, so I won't really need huge speeds. Also, I'm limited in how many disks I can fit in my box, so it would be nice to get a decent amount of usable space per disk.
1 SSD as boot drive, and 4 WD Green 5400 rpm 2TB HDD's for storage, either in Raid10 (4TB) or Raid5(6TB).
Yeah, that was pretty much my plan actually; I just thought maybe someone knows a better way of doing things :)
Don't do RAID5, a second drive failure during rebuild is way too likely.
Any chance of knowing how likely that is? My logic says the drive series would pretty much need some systematic flaw for two or more of them to fail at the same time under similar usage.
MfA said:
Windows Home Server v1 with transparent duplication is nice, you can just add drives on the fly, but it has a limitation of 10 simultaneous connections which is annoying. Unraid is an option. Maybe Flexraid once it's ready, although new storage solutions aren't the thing you want to beta test with important data. Same goes for Amahi/Greyole (I think the implementation is very inelegant too).
Well, as I want to have the drives physically inside my regular PC, those kinds of external solutions aren't quite suitable. FlexRAID seems interesting, though.
BRiT said:
Since you said you run Linux, you really should check out unRAID
The only problem with that is they don't seem to offer an installable version I could use within my existing OS.

As for slow vs fast rotation speeds, the slower ones provide me plenty enough for my needs.


Also, what about that idea of making two separate RAID arrays on the same physical disks? A big, less-wasteful one for not-so-important data and a smaller, more reliable one for the important stuff? To me it seems like a good idea, but maybe I'm missing something vital.
 
Why are you recommending 5400 rpm drives?

Lower power consumption and heat, and STR isn't far behind. Random seeks suffer a bit, but for a data drive and media streamer, as long as it has decent STR, that's peachy. Random seek performance isn't much of a factor for those usage cases.

The latest "green" 5400 rpm drives have phenomenal power usage profiles.

At idle for example...

Seagate Green 2TB - 4.40 watts
WD Green 2TB EARS - 6.31 watts
Seagate LP 2TB - 4.36 watts
Samsung F3 Green 2TB - 4.89 watts

The new WD 3TB green drive is particularly impressive at 3.89 watts.

Not sure why WD has such high idle power use on the EARS Green drives.

And load usage is generally between 5.69 watts and 7.14 watts depending on drive and workload, except for the Samsung, which hit a whopping 8.39 watts with constant random 4K reads. But when used as a data or streaming drive, that's a non-factor.

Compare that to the WD Caviar Black 2TB: 6.41 watts at idle and over 10 watts under load.

Ah, but what's 2 or 4 watts between friends? Well, if you start to stack up drives for a storage server, it adds up. Those lower watts and lower rotational speeds also translate into cooler-running and longer-living drives, especially if you start adding multiple drives for storage needs.

If you do need some performance-oriented storage (for a boot and application drive), you can still save power by going with a 7200 rpm drive with fewer platters, or an SSD. Of course, an SSD is also an order of magnitude more expensive, but if you can afford it, it's nice.

Regards,
SB
 
Lower power consumption and heat, and STR isn't far behind. Random seeks suffer a bit, but for a data drive and media streamer, as long as it has decent STR, that's peachy. Random seek performance isn't much of a factor for those usage cases.

The latest "green" 5400 rpm drives have phenomenal power usage profiles.

At idle for example...

Seagate Green 2TB - 4.4 watts
WD Green 2TB EARS - 6.31 watts
Seagate LP 2TB - 4.36 watts
Samsung F3 green 2TB - 4.89 watts.
Where are those numbers from? I have quite a different set of numbers from this site:
http://www.silentpcreview.com/article1127-page5.html
I think you're quoting storagereview.com, which tested an older four-platter version of the WD20EARS, compared to the three-platter version tested later by SPCR.

In any case, they're all so close that any 5400 rpm 2 TB drive will do, as long as you don't get a bad batch that fails; so as long as the Newegg reviews are mostly decent, I just go for the cheapest one. I just don't see much of a reason for 7200 rpm drives in general anymore: we're either going for low power with 5400 rpm or for speed with an SSD. I don't see how a home user would need a drive that's both large and fast.
 
I built a Linux box as a server/NAS a month ago, with 2 boot disks in RAID1 and 4 Samsung F2 EcoGreen 1.5 TB disks in RAID5 for data, with XFS. As I'm using it exclusively through Samba, I should still add a recycle bin for when I do something stupid.

I see little use in making multiple partitions, except perhaps for a swap partition, or a RAID0 one if you need the speed. You don't want to boot from your data array anyway.

Rebuilding the array took ~17 hours, and I configured mdadm to send me an email if something goes wrong (and tested that!). So if a drive went down on Friday evening (and you only check your email during working hours), it would be rebuilt by Tuesday evening (get a drive on Monday after work), for a total risk window of 4 days.
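For reference, that kind of email monitoring can be set up roughly like this (a sketch assuming a Debian-style layout; config paths, the mail address, and how the monitor daemon is started all vary by distro):

```shell
# In mdadm.conf (commonly /etc/mdadm/mdadm.conf or /etc/mdadm.conf), add:
#   MAILADDR you@example.com     # hypothetical address: put your own here

# Run the monitor daemon (most distros start one for you at boot):
mdadm --monitor --scan --daemonise

# Send a test alert for every array to confirm mail delivery actually works:
mdadm --monitor --scan --test --oneshot
```

The --test/--oneshot run is worth doing once after setup: a monitoring scheme you've never seen fire is the one that fails silently.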


The main things to take into account for the chance of two drives going down within a period of three days are your power supply, the stupid SATA connectors, and heat buildup.

If your power supply breaks and spikes, it can fry the drives. So use a good one, and add up all the spin-up wattages.

I had a drive break due to a loosely fitted SATA connector. And really, the design of those connectors is one of the worst I've ever seen: any small thing can make them slip off, like a fan going full RPM. So buy a set of decent ones, with clips on them.

If the operating temperature of your hard disks is outside the 30-50 degrees Celsius range, breakdown risk starts increasing rapidly. And when the temperature starts to climb, it tends to keep climbing until the drives break.

I have a fan that blows air along and between the data drives, but the boot drives are stacked outside the flow and reach about 50 C. That's still fine, but you should install hddtemp, try disconnecting each fan in turn, and check how hot your drives become.
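A quick way to spot-check temperatures, assuming hddtemp and/or smartmontools are installed (both typically need root; the /dev/sd? glob is just an example):

```shell
# hddtemp queries the drives' temperature sensors directly:
hddtemp /dev/sd?

# Alternatively, read the SMART Temperature_Celsius attribute via smartctl;
# the raw value is the last column of the attribute line:
for d in /dev/sd?; do
    printf '%s: ' "$d"
    smartctl -A "$d" | awk '/Temperature_Celsius/ { print $10 }'
done
```

Running either under load (e.g. during a rebuild) is more informative than at idle, since that's when drives are hottest.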
 
Any chance to know how likely it is?
Nope, I just know that it happens (search some storage forums for anecdotal evidence, then try to find someone who had it happen with RAID6).
My logic says it would pretty much need the drive series to have some systematic error to have two or more of them fail at the same time under similar usage.
De facto identical parts wearing out at about the same time under de facto identical usage ... what are the odds?
Well, as I want to have the drives physically inside my regular PC
Why, exactly? If I wanted to burn money on something like this, I'd go for an SSD plus RAID1 mass storage on my desktop, backed up to a RAID6 NAS (I'd prefer transparent duplication a la WHS, but I wouldn't want to administer a Windows server ... RAID6 seems the best alternative at the moment, IMO).

PS: if you worry about cooling, these 4-in-3 adapters with fans are nice (although I just ripped the dust filters out).
 
The part I do not like about typical RAID5 and RAID6 is the inability to idle drives, since all drives are always in use. That's a major benefit of unRAID: since it does not stripe data across drives, it can spin down any drives not in use. This also helps alleviate the RAID5/RAID6 issue of "de facto identical parts wearing out at about the same time with de facto identical usage", since the drives don't have identical usage patterns. A potential downside of not striping data is performance; however, for multimedia and even critical data storage, the speed of a Green/5400 rpm drive is more than enough. Another benefit, much like WHS, is not having to go through extensive routines to grow or shrink the array.

It would be nicer if unRAID were merely a plugin module for any Linux distro, but at this time it isn't. However, it can be made to run on a full Slackware distro, 32-bit or even 64-bit. I have my unRAID array running on a Slackware Current 64-bit distro. It's pretty slick.

While software RAID6 using Linux mdadm has its appeal, the upkeep of expanding the array is too much of a hassle for me to be bothered with.

While FlexRAID looks interesting, it simply isn't ready for prime time yet.

As others have pointed out, definitely get LOCKING SATA cables. They will save you time and headaches over the long haul.

If you're going to be powering a few drives, make sure you get a high-quality single-12V-rail PSU; that way all the amperage is available for the drives to use. There are some 750-watt dual/split-rail 12V PSUs which can barely power 7 Green HDDs, because they provide barely 18 amps on the rail used by the motherboard, fans, and HDDs.
 
In addition to locking SATA connectors, you should be careful when installing or removing SATA drives. The plastic tab holding the metal connectors on the drive is surprisingly fragile. I broke one on a WD drive years ago (luckily I was able to fix it with some super glue), and friends have occasionally broken them too.

Also, regarding the spin-up power of multiple HDDs: some motherboards can stagger the spin-up of HDDs, assuming you aren't already using an add-in storage controller with that ability.

Regards,
SB
 
You guys must have terrible power supplies (or simply a shitload of drives) if you need to worry about spin-up power.

If that is a worry, you might want to stay away from Seagate, as they've had awful performance in that regard (upwards of 50 W peak draw), at least some years back. Maybe they've improved now that most, if not all, 3.5" drives use head ramps instead of being of the contact start/stop type.

Hitachi has typically had very low spin-up power, usually class-leading, actually: typically 20 W or less even for their 5-platter units.
 
You guys must have terrible power supplies (or simply a shitload of drives) if you need to worry about spin-up power.

No, we are merely educated on the subject.

When drives draw anywhere from 2-2.8 amps on spin-up, a mere 9 drives will draw more than the meager 18 amps that most split-rail power supplies allot to the 12V line, even the 80 Plus Gold rated models. You need to realize that on split-rail PSUs, the same rail used for the HDDs also has to power the motherboard, CPU, and cooling fans. Split-rail power supplies are only useful for multi-GPU systems. When you're talking about storing data, you would be a fool not to be concerned about power tolerances.

Of course, when your server houses 23 drives, that's anywhere from 46 to 64.4 amps just for the drives at spin-up, and you really need to be concerned about power supply specs.
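That arithmetic, as a quick sanity check (taking the 2-2.8 A per-drive spin-up figures quoted above at face value):

```shell
# Worst-case 12 V rail draw at spin-up for a 23-drive array,
# at 2.0 A and 2.8 A per drive respectively
awk 'BEGIN {
    drives = 23
    printf "%d drives at spin-up: %.1f - %.1f A on the 12 V rail\n",
           drives, drives * 2.0, drives * 2.8
}'
# prints: 23 drives at spin-up: 46.0 - 64.4 A on the 12 V rail
```

Staggered spin-up (mentioned earlier in the thread) is exactly what keeps this peak from ever hitting the rail all at once.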

Also, the current Seagate model drives are well within the power ranges of other current drives.
 