Question: SCSI RAID Controller Cards (Bus Dependency?)

SugarCoat

Veteran
I'm looking into setting up a few SCSI drives for server use. The question is pretty basic, so hopefully someone can answer it quickly. I'm looking at all the different types of cards and am unsure about one thing: how dependent is performance on the bus width? I noticed Intel has a PCI Express x8 SCSI card, and of course the alternative is PCI or PCI-X from Intel or Adaptec. Would there be any measurable performance difference (advantage or disadvantage) from using one over the other, or would PCI/PCI-X be just as efficient for everyday use as PCI Express? How bus-dependent are RAID cards? The server will be running at most ten 15k drives.


The question is a bit specialized since SCSI isn't exactly popular among forum users, so my hopes of an answer aren't too high, but any insight would certainly be appreciated.
 
What kind of RAID setup are you planning on using them in? If you do 10x 15k SCSI drives in RAID-0, the PCIe x8 card would likely help considerably. Typically you only need as much bus bandwidth as the drives can deliver under the best circumstances, but more never hurts. On top of that, PCIe bandwidth is dedicated and doesn't have to be shared with other devices, so it could be a nice improvement if needed.
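Rough back-of-envelope, if it helps. The per-drive figure is an assumption (roughly what a 15k SCSI drive of that generation sustains sequentially), and the bus numbers are the usual theoretical peaks:

```python
# Compare the array's peak sequential throughput against usable bus bandwidth.
# PER_DRIVE_MBPS is an assumed figure (~90 MB/s sustained per 15k drive).
DRIVES = 10
PER_DRIVE_MBPS = 90

array_mbps = DRIVES * PER_DRIVE_MBPS

bus_mbps = {
    "PCI 32-bit/33MHz":    133,    # shared with every other device on the PCI bus
    "PCI-X 64-bit/133MHz": 1066,
    "PCIe x8 (1.0)":       2000,   # per direction, dedicated to the card
}

for bus, bw in bus_mbps.items():
    verdict = "bottleneck" if bw < array_mbps else "headroom"
    print(f"{bus:22} {bw:5} MB/s -> {verdict} (array peak ~{array_mbps} MB/s)")
```

Plain PCI is the only one that clearly chokes; PCI-X and PCIe x8 both have headroom for ten drives at those assumed rates.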

More than likely, performance will come out fairly even under most circumstances. PCIe is more future-proof and probably the better choice at this point in time, IMHO.
 
Parallel SCSI tops out at what, 320 MB/s per channel? I doubt even a PCIe x8 controller card is going to have a vast number of channels on it, so there's little hope of ever maxing out 2 GB/s of bidirectional controller<->host bandwidth. Even with SAS drives (where it's easier to run many concurrent I/O channels), there isn't going to be a big need for such huge disk bandwidth in most systems. Almost no application software benefits to any large degree from tremendous sequential read/write speed; almost all types of disk activity are hugely limited by drive access times (and on-board caching effectiveness, I should add).
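Just to put a number on that:

```python
# How many fully loaded Ultra320 channels it would take to fill a PCIe x8 link
# in one direction (U320 ~= 320 MB/s per channel, PCIe x8 1.0 ~= 2000 MB/s).
U320_MBPS = 320
PCIE_X8_MBPS = 2000

print(f"~{PCIE_X8_MBPS / U320_MBPS:.1f} saturated U320 channels per direction")
```

That's over six channels running flat out just to fill one direction of the link, which no realistic workload will do.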

What purpose are you going to use these SCSI drives for, and how many drives are we talking about here? Like I said, for most situations even a PCIe x16 RAID board isn't likely to noticeably speed up your disk I/O simply because of higher bandwidth. A little more info would be needed, I think...
 
With 10x 15k drives you'll want either PCI-X or PCIe. PCIe is newer (and faster), but it also depends on the rest of the server's specs.

Also, parallel SCSI is slowly being phased out in favour of SAS (Serial Attached SCSI). You might want to look into SAS controllers and drives as well.
 
Thanks, this topic is quite interesting to me since I'm gonna be building a fatty rig soon. It will need at least 2 TB of plain storage, but I'll make those non-SCSI, so 4x 500 GB HDDs should do. I was also gonna get 2x 15k 147 GB SCSI drives in RAID 0 for the OS and apps, and another two of the same in the same setup for use as a scratch disk. I was gonna get a Tyan Thunder K8WE, IIRC, and was thinking of skipping the onboard SCSI and instead investing in an 8-channel SCSI and an 8-channel SATA controller... any suggestions?
 
suryad said:
I was gonna get a Tyan Thunder K8WE, IIRC, and was thinking of skipping the onboard SCSI and instead investing in an 8-channel SCSI and an 8-channel SATA controller... any suggestions?
A SAS controller will run both SAS and SATA drives, so you might want to just look for one PCIe x8 SAS controller (~12 ports). Then you could mix and match.

Or get an 8-port controller (cheaper) and an external 5-bay eSATA enclosure with a built-in port multiplier. 5x 500 GB in RAID 5 gives you 2 TB of redundant storage, and it only uses one port on your controller (maximum array transfer speed would be 300 MB/s).
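For the math on that box (assuming the single 3.0 Gb/s eSATA link is the ceiling):

```python
# RAID 5 usable capacity: one drive's worth of space goes to parity.
drives, drive_gb = 5, 500
usable_gb = (drives - 1) * drive_gb
print(f"Usable capacity: {usable_gb} GB (~{usable_gb / 1000:.0f} TB)")

# All five drives share one eSATA link through the port multiplier, so the
# array tops out around 300 MB/s no matter how many drives are behind it.
print("Throughput ceiling: ~300 MB/s (single SATA 3.0 Gb/s link)")
```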

Something like this: http://graphics.adaptec.com/pdfs/SAS4800_4805_0106L.pdf for a controller.

Throw five 500 GB SATA drives in it and run them in RAID 5: http://www.cooldrives.com/firemorasaii.html

Use SAS drives like these for OS/apps/scratch: http://www.fujitsu.com/global/services/computing/storage/hdd/ehdd/max3xxxrc-catalog.html

You'd need a motherboard with a PCIe x8 or x16 slot.

Wouldn't be cheap, though.

Anyway, just a thought.
 
That is a big 10-4 on the not being cheap part.

Thanks for all the links and references. That will keep me busy for a while. I appreciate the suggestions.
 
suryad said:
I was also gonna get 2x 15k 147 GB SCSI drives in RAID 0 for the OS and apps
This is a total waste. Not only won't you see any noticeable speedup, you'll also double the risk of data loss by spreading your stuff over two drives with zero redundancy. In other words, it's just dumb to run RAID 0 for your system drive. :) And doubly so for data drives, in almost all cases save for video editing and the like.
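Rough illustration of the risk side (the per-drive failure rate below is just an assumed number for the sake of the example):

```python
# A RAID 0 array is lost if ANY member drive fails, so the failure probability
# compounds with each drive added. PER_DRIVE_AFR is an assumed annual rate.
PER_DRIVE_AFR = 0.03

for n in (1, 2, 4):
    p_fail = 1 - (1 - PER_DRIVE_AFR) ** n
    print(f"{n} striped drive(s): ~{p_fail:.1%} chance of losing the whole array per year")
```

Whatever the real per-drive rate is, two striped drives roughly double it, and there's no parity to fall back on.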
 