building up a render farm

What you guys reckon?
I'd be hesitant going with 1U if you don't know for 100% certain that things will work thermally ... with the Supermicro barebones you have a thermal package designed by experts and tested in practice; if you just throw something together you have a reliability issue waiting to happen.

PS. how well do those server boards overclock? :p
 
I'd be hesitant going with 1U if you don't know for 100% certain that things will work thermally ... with the Supermicro barebones you have a thermal package designed by experts and tested in practice; if you just throw something together you have a reliability issue waiting to happen.

PS. how well do those server boards overclock? :p

I dropped 1U for the same reason, but if I had time I would have bought one and done an in-house test; it's fairly easy to check temps.
 
When do you need the systems up and running? AMD will have their G34 out soon, which will have 4, 6, 8, and 12 cores per socket. That might do what you need and may be cheaper too. As for SATA and SAS drives, for what you're doing the SAS will be faster and have much higher reliability than SATA drives.
 
-tkf-: Yeah, it's a real problem not being able to do proper benchmarks on the hardware. It would actually help a lot. We don't have an exact timeline for building up the farm; the job first has to be confirmed. It might happen in the next month, maybe longer.

I'll look at the current i7 processors. I'll take all the i7s with a QPI bus into consideration, which gives the 920, 940, 950 and 960, ranging in price from US$284 to US$562. Each has a 130 W TDP. Each processor certainly has more processing power than the E5520, but still only single-socket support. The highest-spec i7, the 960, has a clock of 3.2 GHz, giving, say, 12.8 GHz aggregate over four cores. A dual E5520 gives 18.08 GHz, and the price difference in favour of the i7 is US$184. This hardly helps with software savings, does it?

I expect that a 6-core i7 would cost more than dual E5520s?

Also, the i7s have a lower QPI transfer rate of 4.8 GT/s, while the E5520 runs at 5.86 GT/s.

So by the reasoning above, the Xeon still makes sense. Any suggestions?
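
Rough sanity check on that, using the prices above (raw aggregate GHz only; this ignores per-core efficiency differences):

Code:
# Back-of-the-envelope: aggregate clock and price per GHz, figures from above.
# i7-960: 4 cores @ 3.2 GHz, ~US$562. Dual E5520: 8 cores @ 2.26 GHz,
# costing US$184 more than the i7-960 per the price difference quoted above.
options = {
    "i7-960 (1 socket)":   {"cores": 4, "ghz": 3.20, "price": 562},
    "2x E5520 (2 socket)": {"cores": 8, "ghz": 2.26, "price": 562 + 184},
}
for name, o in options.items():
    aggregate = o["cores"] * o["ghz"]
    print(f"{name}: {aggregate:.2f} GHz aggregate, ${o['price'] / aggregate:.0f} per GHz")
# i7-960 (1 socket):   12.80 GHz aggregate, $44 per GHz
# 2x E5520 (2 socket): 18.08 GHz aggregate, $41 per GHz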

MfA: The Rackmount ATXBlade S800 gives 528 CFM of airflow, which is actually pretty high, all things considered. It appears that this is close to optimal (perhaps even optimal) airflow. I'm trying to find out if this is good enough. So far I'm really thinking the ATXBlade is a pretty good deal: the chassis comes with 6 x 120 mm fans, holds 8 blade enclosures, and each enclosure comes with a 400 W PSU. Looking at prices for the E-ATX cases suggested above plus a PSU, it actually comes to the same, if not more?

I dunno about overclocking? I wouldn't even think about doing this :p

-tkf-: See above on the CFM airflow of the S800; I'm trying to figure out if it's adequate. Perhaps even put aircon/fans around the chassis if needed, though perhaps that isn't even necessary.

Blazkowicz: Dunno?

{Sniping}Waste: See above, could be in the next month, could be longer. Could be never ;) I did think about AMD, I just have zero experience with AMD. I've always heard mixed information from people on AMD. One thing I have heard is that AMD CPUs can run at much higher temperatures than Intel CPUs? That could be an advantage? What do you reckon on AMD, and which models should I look at?

True, SAS drives are much faster, they just cost more. It doesn't really make much sense to invest in SAS drives. So what if you lose a little performance on farm output; most of the time when the farm is crunching away you'll be busy on other parts of the project. At least that's my experience with our workflow.

Wow, I'm seriously so lost in this maze of hardware options. Hopefully I'm moving in the right direction. The quote we got from the IBM BP was over 180% higher than what I've put together so far (not to badmouth anyone; their systems are well worth investing in, I believe, but as a young company we need to preserve as much capital as possible), which is a great saving. Could even cut it more, hopefully.
 
From what I'm hearing about the G34, the 12-core CPU will have a TDP of around 125 W, or an ACP of 95 W. ACP is a wattage measure of average use, similar to what Intel's TDP represents; AMD's TDP rating is the max the CPU will draw if all the transistors are pulling max amps. From what I'm hearing the 12-core will be around 95 W ACP and clocked at around 2.4 GHz. That's per socket too, so with a dual-socket G34 there's a max of 24 cores. Each socket can have up to 12 DIMMs of registered DDR3 in quad channel too. This might be a good thing to look at if it's on the market by the time you're going to buy.

As for the SAS, I would go that way not for the speed but for the reliability of the HDDs. I still have 1 GB SCSI HDDs that still work. The SAS HDDs are built to last longer, and they can help save power and produce less heat with the many power-saving modes built into the SAS HDDs and the SAS controller.
 
MfA: The Rackmount ATXBlade S800 gives 528 CFM of airflow, which is actually pretty high, all things considered. It appears that this is close to optimal (perhaps even optimal) airflow. I'm trying to find out if this is good enough. So far I'm really thinking the ATXBlade is a pretty good deal: the chassis comes with 6 x 120 mm fans, holds 8 blade enclosures, and each enclosure comes with a 400 W PSU. Looking at prices for the E-ATX cases suggested above plus a PSU, it actually comes to the same, if not more?

I just don't see it ...

Let's set the price for a mobo at $300 and for coolers at $70.

The ATXBladeS800-Cluster10 is $2,750, so $343 per node; plus $70 for coolers is ~$410 per node.

The Supermicro barebone is $812 per node; minus $300 for the mobo and minus $70 is ~$440 per node.

The 1U case you linked earlier comes out to ~$300 per node with rails.

The Supermicro comes with some stuff for free to set it apart ... a riser (in case you ever want to put a GPU in :)), a SATA DVD drive (not very useful, but hey, it's free), hot-swap drive trays and a 560 W power supply. The ATXBlade will probably have the quietest cooling.

But if you're pinching every penny, the 1U cases are $100 cheaper per node.
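
Same assumptions in one place, in case anyone wants to fiddle with the numbers (the $300 mobo and $70 cooler are the guesses above, not real quotes):

Code:
# Rough per-node "chassis + PSU + cooling" cost, assumptions as stated above.
MOBO, COOLER = 300, 70

atxblade   = 2750 / 8 + COOLER     # blade chassis+PSU per node, plus a cooler
supermicro = 812 - MOBO - COOLER   # barebone price minus the mobo/cooling it already includes
one_u_case = 300                   # the 1U case linked earlier, with rails

print(f"ATXBlade S800 : ~${atxblade:.0f} per node")     # ~$414 (rounded to ~$410 above)
print(f"Supermicro    : ~${supermicro:.0f} per node")   # ~$442
print(f"1U case       : ~${one_u_case} per node")       # ~$300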
 
{Sniping}Waste said:
As for the SAS, I would go that way not for the speed but for the reliability of the HDDs. I still have 1 GB SCSI HDDs that still work. The SAS HDDs are built to last longer, and they can help save power and produce less heat with the many power-saving modes built into the SAS HDDs and the SAS controller.
Only Seagate even puts SAS on its high-capacity (i.e. 7,200 RPM, 3.5-inch) enterprise drives ... but the Barracuda ES.2 is getting a bit long in the tooth, and the Constellation ES has been in limbo for a year. The high-RPM, IOPS-optimized drives seem like a huge waste of money here.
 
An AMD core is slower than a Nehalem core, and 3D rendering is where Intel really excels; AMD is good for high-end databases or large numbers of test VMs, but not so much for media/rendering tasks.

An approximation is that six AMD cores are as fast as four Intel cores, and here the workload benefits Nehalem on top of that.

If we can propose unreleased hardware, Nehalem-EX is due out soon :) but may be crazy expensive. It's an 8-core CPU; a 2-way box will be faster than the 24-core AMD box.
If 4-way is remotely affordable, that's as fast as you'll get for a single machine.
 
Blazkowicz said:
An AMD core is slower than a Nehalem core, and 3D rendering is where Intel really excels; AMD is good for high-end databases or large numbers of test VMs, but not so much for media/rendering tasks.

An approximation is that six AMD cores are as fast as four Intel cores, and here the workload benefits Nehalem on top of that.

If we can propose unreleased hardware, Nehalem-EX is due out soon :) but may be crazy expensive. It's an 8-core CPU; a 2-way box will be faster than the 24-core AMD box.
If 4-way is remotely affordable, that's as fast as you'll get for a single machine.

Yes, but with licensing fees (per socket), using 2-socket 24-core AMD systems might be better performing and cheaper than 2-socket 16-core Intel systems, even with the relative advantage Intel holds in 3D rendering.

Then again, all this is a bit moot if none of that comes out before he needs to implement the render farm.

Regards,
SB
 
I'm lost on the licensing fees, two sockets cheaper than two sockets?

Do we know what the cost of a 2-socket 24-core AMD machine would be versus the cost of a 2-socket 16-core Intel machine?

Or the relative performance, or whether you'd need more machines of one than the other for equivalent performance? Yes, there are many scenarios where one machine with 2 sockets will cost you more in licensing fees than another machine with 2 sockets, if you need more of them.
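
A made-up example of the shape of it (the license price, hardware prices and machine counts below are hypothetical, purely to illustrate):

Code:
# Hypothetical: per-socket licenses mean fewer, faster 2-socket nodes can win overall.
LICENSE_PER_SOCKET = 1000        # invented figure, not a real quote
SOCKETS_PER_NODE = 2

farms = {
    "faster nodes": {"nodes_needed": 6, "hw_per_node": 2500},  # fewer boxes for the same throughput
    "slower nodes": {"nodes_needed": 8, "hw_per_node": 2000},
}
for name, cfg in farms.items():
    hardware = cfg["nodes_needed"] * cfg["hw_per_node"]
    licenses = cfg["nodes_needed"] * SOCKETS_PER_NODE * LICENSE_PER_SOCKET
    print(f"{name}: hardware ${hardware}, licenses ${licenses}, total ${hardware + licenses}")
# faster nodes: hardware $15000, licenses $12000, total $27000
# slower nodes: hardware $16000, licenses $16000, total $32000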

Regards,
SB
 
Hey guys, thanks for the replies!

MfA said:
I just don't see it ...

Let's set the price for a mobo at $300 and for coolers at $70.

The ATXBladeS800-Cluster10 is $2,750, so $343 per node; plus $70 for coolers is ~$410 per node.

The Supermicro barebone is $812 per node; minus $300 for the mobo and minus $70 is ~$440 per node.

The 1U case you linked earlier comes out to ~$300 per node with rails.

The Supermicro comes with some stuff for free to set it apart ... a riser (in case you ever want to put a GPU in :)), a SATA DVD drive (not very useful, but hey, it's free), hot-swap drive trays and a 560 W power supply. The ATXBlade will probably have the quietest cooling.

But if you're pinching every penny, the 1U cases are $100 cheaper per node.

Wow, trying to cut costs is just getting very tiring. I looked into the Intel Server System SR1630BC, which comes with the S5500BC server board, a 400 W PSU and 2 fans, and seems decently priced. The advantage of this is that we can get it direct from our suppliers. It comes to ~US$750 from a supplier in the States; over here it comes to around US$820, which beats the S800. We'd just need to get a rack to house the blades.

This might just be better for us, since if there is a problem we can just send it back. It would be too much pain to have to send hardware back overseas, and it would cost too much. Besides, the chassis fits our hardware perfectly, so there's less to worry about.

That brings the cost of a single blade to around US$2,000 excl. Seems pretty reasonable, and it's all available locally.

I haven't yet confirmed the prices with the supplier; I'll do that Monday. I'm drawing the line here: it's either the SR1630BC build or the S800.

MfA, thanks for trying to help find cheaper chassis options. I think I'm getting really lost, and what I've got so far seems reasonable.

Blazkowicz said:
An AMD core is slower than a Nehalem core, and 3D rendering is where Intel really excels; AMD is good for high-end databases or large numbers of test VMs, but not so much for media/rendering tasks.

An approximation is that six AMD cores are as fast as four Intel cores, and here the workload benefits Nehalem on top of that.

If we can propose unreleased hardware, Nehalem-EX is due out soon :) but may be crazy expensive. It's an 8-core CPU; a 2-way box will be faster than the 24-core AMD box.
If 4-way is remotely affordable, that's as fast as you'll get for a single machine.

Cool, well, I'm glad to hear I'm at least looking at the right processors! There won't be time to look into any new options; I have to get the BP together for the 12th of Feb.

Be interesting to see how much the EX would cost when it's launched.

Silent_Buddha said:
Yes, but with licensing fees (per socket), using 2-socket 24-core AMD systems might be better performing and cheaper than 2-socket 16-core Intel systems, even with the relative advantage Intel holds in 3D rendering.

Then again, all this is a bit moot if none of that comes out before he needs to implement the render farm.

Regards,
SB

Yeah, that's true.

OK, I think it's time to stop pushing the blades and start focusing on storage. That's my biggest problem; I'm LOST here! I've never built a RAID array or anything like that.

Did some research on the net, and I'm not quite sure what the best option for us would be.

We would need around 4 TB of available storage, and backup is crucial!

I hope you guys might be able to help me with the storage problem. I contacted some people; here's what I've got so far:

2U rackmount holding 8 SAS/SATA drives, 250 W PSU; with shipping it comes to US$1,500.

I'm just not sure how this would work? This is it here: http://www.rackmountmart.com/html/SS2001.htm

How do I set this thing up?

The other quote I got, from a local business here, is (copied from the email):

Tyan Xeon server board (dual S771 socket) with one Xeon CPU, more stable compared to a desktop board,
4 GB memory, fully buffered ECC Reg,
3Ware 8-port SATA RAID controller,
Rackmount chassis with 8 hot-swap HDD bays,
SATA hard drives, 1 TB x 5 for now (RAID 5 gives you 4 TB total storage),
upgrade to 8 max in the future.
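
If I understand RAID 5 right, usable capacity is (number of drives - 1) x drive size, since one drive's worth of space goes to parity, which is where the 4 TB figure comes from. Quick check:

Code:
# RAID 5 usable capacity: one drive's worth of space is used for parity.
def raid5_usable_tb(n_drives, drive_tb=1.0):
    return (n_drives - 1) * drive_tb

print(raid5_usable_tb(5))   # 5 x 1 TB now            -> 4.0 TB usable
print(raid5_usable_tb(8))   # 8 x 1 TB after upgrade  -> 7.0 TB usable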

This comes to ~US$5,000 incl., which is pretty steep, I think? I had asked for a really low-spec CPU and board, specifically a Core 2 Duo, a cheap board, and 4 GB of RAM; that should be perfectly adequate?

It would be cheaper for us to get just the enclosure and source our drives, CPU, board, etc. from our local suppliers. What can you guys suggest?

Then we will need to get a rack and a 48-port Ethernet switch (and obviously power cables, network cables, etc.) to complete the setup.
 
You really want a switch that supports at least one 10 Gb link, like tkf suggested, to hang the fileserver/distributor on ... something like this (it's called an uplink port, but you can use it as a normal port AFAIK).

Open racks are cheaper.
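
Rough bandwidth math, with a made-up node count, just to show why the fileserver link is the choke point:

Code:
# Why the fileserver wants a 10 Gb port: all the GigE render nodes share that one link.
NODES = 16                   # example count only, not the actual farm size
NODE_LINK_GBPS = 1.0         # each render node on plain GigE
FILESERVER_LINK_GBPS = 1.0   # if the fileserver is also stuck on GigE

peak_demand = NODES * NODE_LINK_GBPS
per_node_share = FILESERVER_LINK_GBPS / NODES * 1000   # in Mb/s

print(f"Nodes can ask for up to {peak_demand:.0f} Gb/s at once;")
print(f"a {FILESERVER_LINK_GBPS:.0f} Gb/s fileserver link leaves ~{per_node_share:.0f} Mb/s per node.")
# With a 10 Gb uplink the fileserver stops being the bottleneck for a farm this size.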
 
MfA said:
You really want a switch that supports at least one 10 Gb link, like tkf suggested, to hang the fileserver/distributor on ... something like this (it's called an uplink port, but you can use it as a normal port AFAIK).

Open racks are cheaper.

Thanks MfA, that's neat :) The link to the 10 Gb switch doesn't seem to be working? Will do a search.
 