Opinions on this hard drive?

*shrugs* OK. Do they need RAID capability at all then?

Come now, non-parity RAID is not useless. If all one cares about is increased transfer rates, then striping works great. Also, if one just needs simple redundancy, then mirroring works well.
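
To make the distinction concrete, here's a toy Python sketch (purely illustrative, not a real RAID implementation; the 4-byte stripe size is made up - real arrays stripe in 64 KiB+ chunks):

```python
# Toy illustration only - not a real RAID implementation.
STRIPE = 4  # made-up stripe size in bytes; real arrays stripe in 64 KiB+ chunks

def raid0_write(data: bytes) -> tuple[bytes, bytes]:
    """RAID 0: alternate stripes across two drives - fast, zero redundancy."""
    d0, d1 = bytearray(), bytearray()
    for i in range(0, len(data), STRIPE):
        (d0 if (i // STRIPE) % 2 == 0 else d1).extend(data[i:i + STRIPE])
    return bytes(d0), bytes(d1)

def raid1_write(data: bytes) -> tuple[bytes, bytes]:
    """RAID 1: identical copy on each drive - lose one, lose nothing."""
    return data, data

a, b = raid0_write(b"ABCDEFGHIJKLMNOP")
print(a, b)                  # b'ABCDIJKL' b'EFGHMNOP' - either drive alone is useless
print(raid1_write(b"ABCD"))  # (b'ABCD', b'ABCD')      - either drive alone is complete
```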

Certainly parity is desirable in data-sensitive environments. We use RAID 5 arrays at work (a PC repair shop) to store customer data. All of our customers that do utilize RAID only ever use level 0 or 1, however, and it works fine for their needs.
 
Come now, non-parity RAID is not useless.

Indeed not, but I think a lot of people think they need it when they don't, then find out the hard way that a RAID0 system volume holding Windows and My Photos isn't a good idea. :)
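
A bit of back-of-envelope arithmetic shows why (the 5% annual failure rate is a number I've invented purely for illustration):

```python
# Back-of-envelope only; the 5% annual drive failure rate is made up for illustration.
p = 0.05                       # hypothetical chance one drive dies this year
raid0_loss = 1 - (1 - p) ** 2  # either of two striped drives dying kills the volume
raid1_loss = p ** 2            # both mirrored drives must die (ignoring rebuild windows)
print(f"single drive: {p:.2%}  RAID0: {raid0_loss:.2%}  RAID1: {raid1_loss:.2%}")
# single drive: 5.00%  RAID0: 9.75%  RAID1: 0.25%
```

Striping two drives roughly doubles your odds of losing everything; mirroring them slashes those odds instead.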

Certainly parity is desirable in data-sensitive environments. We use RAID 5 arrays at work (a PC repair shop) to store customer data.

My stuff's all RAID 6 and RAID 10 these days, from proper hardware controllers with battery-backed caches. I used to use Linux s/w RAID to cut costs, but it's such a PITA when a drive fails - it's not worth the heartache, downtime and manpower to save a few hundred quid on a proper card with hot-swap, etc.
 
If no CPU is used then I am sure it is hardware RAID. The reason I say mine is hardware RAID is that no CPU is used on mine: I ran HD Tune and it reported my CPU usage as 0.1 or something.
 
If no CPU is used then I am sure it is hardware RAID. The reason I say mine is hardware RAID is that no CPU is used on mine: I ran HD Tune and it reported my CPU usage as 0.1 or something.

But for RAID0 or RAID1 the CPU usage will be trivial even with software RAID - that's what folks have been saying here. That's kind of why I was being so dismissive above: performance-wise, the distinction between hardware and software is pretty irrelevant when you're doing RAID0 and RAID1.

Parity RAID takes more grunt, but it's still not necessarily a killer for a modern CPU. Hell, there are benchmarks around that show Linux software RAID (which is *all* done on the CPU and intentionally doesn't require "RAID" controllers on the motherboard) performing better than real hardware RAID at RAID5 and RAID6.
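
For a feel of where that grunt goes, here's a minimal Python sketch of RAID5-style parity (stripe and block sizes made up) - a byte-wise XOR across the stripe, which is exactly the work the CPU or the controller has to do on every write and every rebuild:

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """Parity = byte-wise XOR across all blocks in the stripe."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

# A stripe across three data drives plus one parity block (sizes made up).
data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Drive 1 dies: XOR the survivors with the parity block to rebuild its data.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]  # b'BBBB' recovered without the failed drive
```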

The real issue to my mind is what happens when a disc in your RAID set fails. Any style of RAID that requires an OS kernel to boot to a level of self-awareness where it can deal with a degraded array is "software". If your BIOS and/or bootloader need to understand the layout of the kernel on the physical discs to boot your kernel, you've got a big problem.

With a decent h/w controller the BIOS, kernel and all the rest are broadly speaking unaware and/or don't care about what physical discs are attached to the controller, how broken they are, the RAID level or anything else. All they see is a single block device (usually manifesting itself as a SCSI block device) that they can happily boot from, read from, write to as if it were a bog standard 15TB SAS hard-drive.

The host OS RAID controller driver may well be able to extract more information about the real state of the array, but the host OS doesn't have to care, and for good reason. At critical times like boot time, when the OS isn't around yet, issues like "can my BIOS find my boot sector" are easier with a proper h/w setup.
 
The real issue to my mind is what happens when a disc in your RAID set fails. Any style of RAID that requires an OS kernel to boot to a level of self-awareness where it can deal with a degraded array is "software". If your BIOS and/or bootloader need to understand the layout of the kernel on the physical discs to boot your kernel, you've got a big problem.

I guess that's why they moved the RAID stuff off the drives/OS and into the BIOS. The tools to (re)build the array are not on the array itself - or, in my case, they're in the BIOS and on the bootable SSD.

And of course your scenario presupposes that RAID is being used for the OS, as opposed to (say) data storage with the OS on a non-raid drive where the problem of a broken array doesn't trash your OS.

The host OS RAID controller driver may well be able to extract more information about the real state of the array, but the host OS doesn't have to care, and for good reason. At critical times like boot time, when the OS isn't around yet, issues like "can my BIOS find my boot sector" are easier with a proper h/w setup.

I agree, but this level of consumer RAID isn't meant to duplicate an enterprise level of RAID, it's just meant to give you something better than a single drive.
 
Any style of RAID that requires an OS kernel to boot to a level of self-awareness where it can deal with a degraded array is "software".

Mine doesn't.
"Silicon Image's SiI 4723 SteelVine Storage Processor, when configured as RAID 1, will protect the stored content and will continue to operate even in the face of a drive failure. It supports auto rebuild, auto failover and hot-swap without the use of host resources."
 