Thanks for the replies guys!
MfA said:
The extra cost for using the more expensive switch is 50$ per node for the 40 node setup ... meh.
Why not start with a really really cheap 8 or 16 port gigabit switch for now and then decide what to do when you scale up to 40 nodes? Those 1000$ switches might be cheaper than the 2500$ ones with 10Gb uplinks, but if you have to replace them anyway in the end it's still 1000$ down the drain.
PS. it's Gb ... so divided by 48 it's 1e9/48/8 = 2.6 MB/s.
Will have to work out how we will scale up the farm; it might be cheaper to just get a 48-port switch from the start, maybe not. Will run some numbers once I have more information.
Cool, but is 2.6 MB/s enough for each blade? Seems on the low side, and 10 sec to copy over a 26 MB file seems a little slow. Actually, at full capacity with 40 blades it will be 3.125 MB/s per blade (10^9 / 40 / 8).
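Just to double-check my own arithmetic, a rough sketch (assumes the 1 Gb uplink is shared perfectly evenly and ignores protocol overhead, so real numbers will be a bit lower):

```python
# Per-blade share of a 1 Gb/s uplink, split evenly across the nodes,
# plus how long a 26 MB file copy would take at that rate.
LINK_BPS = 1e9          # 1 gigabit per second
BITS_PER_BYTE = 8

for nodes in (40, 48):
    mb_per_s = LINK_BPS / nodes / BITS_PER_BYTE / 1e6
    copy_seconds = 26 / mb_per_s
    print(f"{nodes} nodes: {mb_per_s:.2f} MB/s per node, "
          f"~{copy_seconds:.1f} s to copy a 26 MB file")
```

That gives 3.125 MB/s per blade at 40 nodes and about 2.6 MB/s at 48, matching the numbers above.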
-tkf- said:
In theory I would guesstimate a 16-disk system with 15 drives in RAID 6 and one hot spare to be able to sustain up to 1200 MB per sec. A 10 Gb network card can roughly handle 1000 MB/s.
So with 80 clients at 1 Gbit / ~100 MB/s per machine at max throughput, you are still hitting the ceiling on the network; it's just 10 times faster than an all-out 1 Gbit network.
All my drives are SATA btw, RAID certified, I prefer WD.
True. Heh, well SATA should be fine; if each machine is writing out at 3.125 MB/s it's hardly going to hurt performance, as the drive interface has a bandwidth of 300 MB/s, and it's highly unlikely that all blades will write concurrently. Max bandwidth through the server would be 125 MB/s anyway, so it won't hurt performance. In that case, perhaps RAID 1 (mirrored pairs, effectively RAID 10) will be fine? On 6 x 1 TB drives that gives 3 TB total capacity and a very low chance of data loss.
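To make that bottleneck reasoning concrete, a quick sketch comparing the farm's aggregate writes against the server link and the SATA-300 interface limit (300 MB/s is the interface ceiling, not sustained platter speed, so treat it as an upper bound):

```python
# Aggregate write load from the blades vs. the storage server's 1 Gb link
# and the per-drive SATA-300 interface limit. Uses the thread's numbers.
BLADES = 40
PER_BLADE_MB_S = 1e9 / BLADES / 8 / 1e6   # 3.125 MB/s each on the shared uplink

aggregate_mb_s = BLADES * PER_BLADE_MB_S   # 125 MB/s total into the server
server_link_mb_s = 1e9 / 8 / 1e6           # the server's own 1 Gb link: 125 MB/s
sata_interface_mb_s = 300.0                # SATA-300 interface limit per drive

print(f"aggregate blade writes: {aggregate_mb_s:.0f} MB/s")
print(f"server 1 Gb link:       {server_link_mb_s:.0f} MB/s")
print(f"SATA-300 interface:     {sata_interface_mb_s:.0f} MB/s per drive")
```

So the network link saturates long before the drives do, which is the point above.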
If I seem a little inconsistent in my replies above (i.e. on RAID 1), that's because I only just saw these posts. I often write out my questions in Notepad while I'm doing research on the net and then post.
Switch:
48-port gigabit switch with one or two 10 Gb uplink slots.
Storage server:
Dual-port 10 Gb Ethernet (onboard or add-on card). A single port would be cheaper; not sure if that is available though?
This looks good (Intel Server SR2612 UR):
http://www.intel.com/products/server/systems/SR2612UR/SR2612UR-overview.htm
Up to 12 SAS/SATA hot-swap bays. 2U rackmount.
And prices with markup here:
http://www.wantitall.co.za/PC-Hardware/Intel-Server-SR2612UR__B002V3FDH4
For the above: single-socket Xeon E5502, S5520UR server board, 4 GB DDR3 800/1066/1333 (RAM is so cheap, so might just opt for 1333). It's not a bad price actually; we would get it at roughly 70-75% of the price quoted on the link directly above (we buy at cost).
Will need to add on CPU, heatsink, drives, and RAID controller (onboard RAID only supports levels 0/1/10 - although level 10 might just be perfect).
The following components are included:
Server Board S5520UR
Server Chassis SR2612
12 hot swap drive bays
2 x 760 W PSUs
4 fans
2 CPU heatsinks (weird, Xeons don't come with heatsinks anymore)
Would need to add on a dual-port 10 Gb Ethernet card.
Only one thing bugs me: the S5520UR specs say the board only has 6 SATA-300 ports? Strange. Might have to get a different board? Still, with RAID 10 on 6 x 1 TB drives we can get 3 TB of storage, which should be fine anyway. What do you think?
RAID config:
RAID 5 looks fine: minimum drives is 3, space efficiency is n-1, and fault tolerance is 1 drive. That should be safe enough?
So with 6 x 1 TB SATA-300 drives, that gives 5 TB of space. Not quite sure which is better, level 5 or 10?
As I understand it, RAID 10 on 6 x 1 TB drives works like this:
The drives form mirrored pairs and data is striped across the pairs (there is no parity), so half the raw capacity goes to the mirrors. With 6 drives, that gives 3 TB total storage, is that right?
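To keep the capacity numbers straight, a rough comparison sketch for the levels being considered, assuming 6 x 1 TB drives and the standard capacity formulas:

```python
# Usable capacity and fault tolerance for the RAID levels discussed here.
# RAID 10 stripes across mirrored pairs (no parity involved).
def raid_usable(level, drives, size_tb=1.0):
    if level == 5:     # one drive's worth of parity
        return (drives - 1) * size_tb, "any 1 drive"
    if level == 6:     # two drives' worth of parity
        return (drives - 2) * size_tb, "any 2 drives"
    if level == 10:    # half the drives are mirror copies
        return drives / 2 * size_tb, "1 drive per mirrored pair"
    raise ValueError(f"RAID {level} not covered in this sketch")

for level in (5, 6, 10):
    capacity, survives = raid_usable(level, 6)
    print(f"RAID {level}: {capacity:.0f} TB usable on 6 x 1 TB, "
          f"survives losing {survives}")
```

So on 6 x 1 TB drives: RAID 5 gives 5 TB, RAID 6 gives 4 TB, and RAID 10 gives 3 TB.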
All that is needed next is a controller card, depending on which RAID level is chosen; if we go with level 6, there is no onboard support for it.
RAID controller:
RAID 0,1,5,6,10,50
http://www.wantitall.co.za/PC-Hardw...RAID-0-1-5-6-10-50-PCI-Express-x8__B0017QZLVE
RAID 0,1,5,6,10,50,60
http://www.wantitall.co.za/PC-Hardw...le-300-MBps-RAID-0-1-5-6-10-50-60__B000W7PNW6
First one seems good?
So the above might just about complete the storage side in itself. Then we need to find a switch.