Caselabs cases

I'm thinking of making a new watercooling build next year, and as I was browsing for a case, I stumbled upon this Caselabs company. Their cases are more expensive than regular cases, that's for sure, but the features are simply amazing.

http://www.caselabs-store.com/

All I can say is WOW! These things are amazing; the customization options are virtually unlimited. There are many different case models and you can order them with plenty of options, and some of the bigger models are absolute monsters. I mean, take a look at this:


This one takes two 480mm radiators at the bottom...


Even the "smaller" units offer crazy possibilities for a loop. At the moment it looks like I might be ordering something from them quite soon. :)
 
My dream case arrived!

The STH10 is massive and easily accessible, which is perfect for my main rig as it's often messed with.

But before I can get one of them, I need to move to a bigger house to accommodate that thing! :p
 
I've sent some inquiries about their TH10 with an added pedestal, all in black, and the chances of me buying one are pretty high at the moment. That thing is overkill, a monster, and I'm pretty sure I shall call it "The Executor 4000" :)
 
Actually, after reading about the shipping costs, I'm now inclined to get an STH10 instead, but I still need some info from them. The SMH10 is also a possibility at this point.
 
The STH10 looks to be perfect for a PC project I've had in mind for many years now. I'm not much into bling cases and lighting them up from the inside, but more into solid, practical, accessible and powerful! Almost like an overclockable server. The case is so huge that I could probably fit both the main PC and an ITX board based on the upcoming Kabini (Jaguar core) as a micro server for low-power browsing. Make it very quiet with water cooling where appropriate and add a few HDDs and SSDs for quick and secure data storage.

My current case is already quite big as it's a full tower, but it still lacks space for me. I have 4 HDDs and 2 SSDs plus a BD-RAM drive and a memory card reader / docking station, and the amount of cabling this creates is hard to manage in the space my case offers. On top of that, it is not wide enough to take a bigger water cooler, which limits the freedom of how I can install even a single 120x120 unit.
All the problems I've encountered with my current case seem to be non-existent with the STH10 :D
 
Pretty much my thoughts as well. I won't be installing any LEDs or some such either. I will pay some additional attention to neat cable management, etc.

I almost wish the Christmas holidays were over so I could get hold of their customer service and make stuff happen! I already got a new power supply. Actually, it's a good thing that I have to wait a bit, so I can plan the build more before ordering, but my order finger is itching pretty badly already :)
 
People posting in this thread: a bunch of guys who are covering for their small manhood by buying a HUGE case instead. :rolleyes:





...




Just kidding ;) I have a ThermalTake Armor TT which is pretty large, although not quite as big as this beast. I've gone back and forth over the years, from truly huge (an AT mild steel Intel-branded server case from the late 80's that originally housed four double-height SCSI drives), to truly small (a MicroATX that I built for my mother that was about the size of a pair of VHS tapes stacked on top of each other).

My favorite thus far is the MicroATX tower case that I used to build my home combo Hyper-V Server + HTPC rig. I hated my mom's uATX case's utter lack of space, but my HTPC case came out perfect. Physical room and a capable cooling path for two full-length PCI-E video cards (or in my case, a single video card and a hardware RAID card), LOTS of room for a fat air cooler on the CPU (or if you want, a 120x120 radiator + pump + waterblock), internal space for four 3.5" drives, one 2.5" drive, and two 5.25" externally accessible drives. All in a case that's about the size of a large shoebox, give or take.

I love my Armor TT for all the ridiculous space it has, but even with the watercooling I once had, I simply don't use all the space. My next "uber" rig will probably be another MicroATX setup, if I can find the board I want.
 
People posting in this thread: a bunch of guys who are covering for their small manhood by buying a HUGE case instead. :rolleyes:

Actually I'm covering it with the thick black tubing inside the case.

I have quite a bit of equipment coming my way at the moment. I will be placing the order for the Caselabs TH10 later today. I won't be getting the pedestal though. I will have 2 x 480mm and 2 x 360mm EK Coolstream XTX rads in the case, among other things.
 
Actually I'm covering it with the thick black tubing inside the case.
:LOL:

I have quite a bit of equipment coming my way at the moment. I will be placing the order for the Caselabs TH10 later today. I won't be getting the pedestal though. I will have 2 x 480mm and 2 x 360mm EK Coolstream XTX rads in the case, among other things.
As the unemployed copy editor once said: Holy Carp!

That's a LOT of cooling; what kind of awesomesauce are you going to have in that rig?
 
The cooling setup will be built to be future proof. I won't be upgrading the MB and CPU just yet, so it'll be the Asus X58D-E, an i7 990X and 2x GTX 680 with Aquacomputer full cover blocks. I kind of want to see Haswell and the next-gen consoles before I upgrade, but you never know. I might move these parts to my office computer and get a platform around a 3930K, and/or get a third 680, but at the moment it's unlikely. I haven't run this 990X for very long and I'm still quite happy with it, and I'm not feeling like going from one 32nm chip to another 32nm chip. I really should have done the Sandy-E upgrade last summer when I bought the 990X from a clearance sale... If the nVidia 700-series is only a minor upgrade, I'll probably get a cheap third 680 afterwards, or if it's a GK110 beast, I might do a total overhaul... At this point (phase 1 :)) my main focus is the case itself and the loop.

I bought a Corsair AX1200 for the new case and upgraded the memory from 6 to 12GB, and the second 680 is also less than two weeks old, although I mainly bought it because of my new 3D monitor. I'm waiting for delivery of a Swiftech Maelstrom dual-bay reservoir for two DDC 3.25 pumps; I already have one pump and another is on its way. I'm also waiting for a Koolance CTR-SPD12X2 dual pump controller, a Scythe Kaze Master Pro Ace six-channel fan controller, 33 Gentle Typhoon AP14s, two different types of Silverstone fan filters for the intakes, lots of fittings and accessories, and 1/2" ID 3/4" OD black tubing. I guess that's about it for the moment. (Just realised that I'm not actually compensating for ANYTHING! with that tubing :oops: :))

I will build a single loop, but the planning is still in its infancy. At the moment I'm thinking I'll put the 480s in the top of the case, one on each side, and the 360s on the bottom. Every rad is going to be push/pull. Air intake from the bottom of the case (8 fans), the front of the case (4-6), one at the back and the right side of the case (4); exhaust at the top (8). I have to buy a few more fans... One alternative solution is to have the top right 480 push air into the PSU side of the case and have the four side-mounted fans on the right side exhaust air out from both right-side radiators, but I'm still planning... It will probably take close to two weeks for the case to arrive, so I'm not in a hurry.
 
Not sure if you want to do the 3930K route to be honest. I have a 3930K myself, doing 4.5GHz on air cooling, along with 32GB (eight 4GB sticks) of RAM at 6-7-7-8 1T timings. Don't get me wrong, it hauls ass at everything, but I could've done all of this on a Z77 platform with an Ivy Bridge 3770K and most of my workload would've been the same speed or even slightly faster for a LOT less money.

However, if you're really gonna go for triple video cards, the SB-E series is certainly where you want to be. 40 lanes of true PCI-E 3.0 bandwidth is righteous, and is really the only thing other than epic memory capacity that Socket 2011 truly brings to the table over SB or IB rigs.
 
The Gaming Gods Demand you go over the top and batsh1t insane with this build

Sadly I'm about as over the top as I can afford at this point already :cry: The final cost for the case alone is going to be quite massive after shipping and taxes.

Not sure if you want to do the 3930K route to be honest. I have a 3930K myself, doing 4.5GHz on air cooling, along with 32GB (eight 4GB sticks) of RAM at 6-7-7-8 1T timings. Don't get me wrong, it hauls ass at everything, but I could've done all of this on a Z77 platform with an Ivy Bridge 3770K and most of my workload would've been the same speed or even slightly faster for a LOT less money.

Yep, it's very unlikely I'll go for Sandy-E at this point, but going back to 4 cores also feels somewhat undesirable. To be honest, I don't really need the extra cores for much either. I do use Handbrake to convert videos and it scales well with cores, but that's about it.

However, if you're really gonna go for triple video cards, the SB-E series is certainly where you want to be. 40 lanes of true PCI-E 3.0 bandwidth is righteous, and is really the only thing other than epic memory capacity that Socket 2011 truly brings to the table over SB or IB rigs.

I haven't been following closely, but are the PCI-E 3.0 issues between X79 and nVidia sorted out completely now? Also, is the "true" bandwidth better than the 3rd-party chip solutions (PLX 8747) that you see on some Z77 boards? ASRock has an X79 board that has that chip and thus has 64 lanes for the GPUs, if I'm not mistaken. Haswell with that chip might do, or I'll just keep this current chip until I really feel the need to upgrade.
 
I haven't been following closely, but are the PCI-E 3.0 issues between X79 and nVidia sorted out completely now?
I've actually not the faintest idea. There hasn't been any talk about it on websites for ages now, but I would like to think so. What does Nvidia itself say?

ASRock has an X79 board that has that chip and thus has 64 lanes for the GPUs, if I'm not mistaken.
With the PCIe controller integrated in the CPU, it doesn't matter how many lanes there are on the board; you won't get higher total performance than what the CPU can handle anyway. 64 lanes sounds like a total marketing gimmick; maybe they count lanes off the southbridge too, and those are usually slower and have higher latency.
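
Just to put rough numbers on it, here is my own back-of-the-envelope sketch, assuming roughly 1 GB/s per PCI-E 3.0 lane per direction and the 40 lanes an SB-E CPU exposes (figures are approximations):

# Back-of-the-envelope: extra "board" lanes can't add bandwidth beyond what
# the CPU's own PCI-E controller provides.
PER_LANE_GBPS = 0.985   # PCI-E 3.0 throughput per lane, per direction (approx.)
CPU_LANES = 40          # lanes an SB-E / Socket 2011 CPU exposes
BOARD_LANES = 64        # lanes the PLX-equipped board advertises for the GPUs

print(f"real ceiling (CPU): {CPU_LANES * PER_LANE_GBPS:.1f} GB/s")    # ~39.4 GB/s
print(f"on paper (board):   {BOARD_LANES * PER_LANE_GBPS:.1f} GB/s")  # ~63.0 GB/s,
# but all of that traffic still has to funnel through the CPU's 40 lanes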
 
There is not a whole lot of information going around on that chip, but reading this AnandTech piece leaves me with the impression that the solution provided by the PLX chip is not comparable to real extra lanes, but that it can optimize lane usage to such a degree that it's definitely more than just a gimmick.

http://www.anandtech.com/show/6531/...ga-cooler-review-not-for-the-faint-of-heart/5

So what does the PLX chip do on a motherboard? Our best reasoning is that it acts as a data multiplexer with a buffer that organizes a first in, first out (FIFO) data policy for the connected GPUs. Let us take the simplest case, where the PLX chip is powering two GPUs, both at ‘x16’. The GPUs are both connected to 16 lanes each to the PLX chip.

The PLX chip, in hardware, allows the CPU and memory to access the physical addresses of both GPUs. Data is sent to the first GPU only at the bandwidth of 16 lanes. The PLX chip recognizes this, and diverts all the data to the first GPU. The CPU then sends data from memory to the second GPU, and the PLX changes all the lanes to work with the second GPU.

Now let us take the situation where data is needed to be sent to each GPU asynchronously (or at the same time). The CPU can only send this data to the PLX at the bandwidth of 16 lanes, perhaps either weighted to the master/first GPU, or divided equally (or proportionally how the PLX tells the CPU at the hardware level). The PLX chip will then divert the correct proportion of lanes to each GPU. If one GPU requires less bandwidth, then more lanes are diverted to the other GPU.

This ultimately means that in the two-card scenario, at peak throughput, we are still limited to x8/x8. However, in the situation when only one GPU needs the data, it can assign all 16 lanes to that GPU. If the data is traveling upstream from the GPU to the CPU, the PLX can fill its buffer at full x16 speed from each GPU, and at the same time send as much of the data up to the CPU in a continuous stream at x16, rather than switching between the GPUs which could add latency.

This is advantageous – without a PLX chip, the GPUs have a fixed lane count that is modified only by a simple switch when other cards are added. This means in a normal x8/x8 setup that if data is needed by one GPU, the bandwidth is limited to those eight lanes at maximum.
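
To wrap my head around that description, I threw together a tiny Python toy model of the arbitration idea. This is purely my own simplification to illustrate the x16-versus-x8/x8 point, not how the actual chip works internally:

UPSTREAM_LANES = 16   # lanes between the PLX chip and the CPU

def plx_allocate(demands):
    """Hand the upstream lanes to whichever GPUs want data, proportionally."""
    total = sum(demands)
    if total == 0:
        return [0 for _ in demands]
    if total <= UPSTREAM_LANES:
        return list(demands)                               # everyone gets what they asked for
    return [d * UPSTREAM_LANES // total for d in demands]  # proportional split at peak load

def fixed_split(demands):
    """Plain x8/x8 wiring: each GPU is hard-capped at half the lanes."""
    cap = UPSTREAM_LANES // len(demands)
    return [min(d, cap) for d in demands]

scenarios = [
    ("only GPU0 busy", [16, 0]),
    ("both GPUs busy", [16, 16]),
    ("uneven load",    [12, 4]),
]
for name, demands in scenarios:
    print(f"{name:15s}  PLX switch {plx_allocate(demands)}  vs  fixed x8/x8 {fixed_split(demands)}")

It comes out the way the article describes: at peak load both GPUs still end up at x8/x8, but whenever one card has nothing to transfer, the other can get the full x16 instead of being stuck at eight lanes.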
 
Stupid PayPal needs some credit card confirmation thingy before I can pay for the TH10 case...
I need to pick a 4-digit code from their verification transaction on my credit card statement, and my bank has in the past been super slow to show transactions. This probably postpones the case's arrival by about a week, and getting it for my birthday (15th of Jan) is starting to look like it ain't going to happen...

Well, it gives me more time to plan the setup...
 
If you decide that SB-E is worth it due to the higher quantity of native PCI-E 3.0 lanes, then you might consider the far less expensive 3820. Yeah, it "only" has four cores and isn't fully unlocked, but you can still overclock it to death using the higher FSB straps on any quality motherboard. In fact, rather than using the unlocked multipliers on my 3930K, I too used the higher strap speed instead.

Sucks about PayPal; I haven't fought with those dirtbags in eons. Hopefully it gets sorted soon :)
 
I managed to get the PayPal verification code from the customer service of my bank. I called them three separate times and I finally got it last Wednesday; it was only today that it actually showed up on my online CC statement. I paid for the case immediately last Wednesday and they said that they will ship it out on Friday or Monday. Hopefully it will ship today, and if it does, I should get it prior to my birthday. Not that it really matters, but anyway...

I've pretty much decided not to upgrade the CPU+MB combo at this time. The current setup will serve me fine. It's a bit of a pity that Intel's workstation lineup has fallen further behind the mainstream lineup. Sandy-E followed Sandy, but we have to wait for Ivy-E to come out after Haswell, so how long till Haswell-E?...

In the meantime I have quite a few components ready and waiting. Tomorrow I'll receive the rest, and then only the case will be missing, at least until I figure out that I have forgotten or overlooked something.


 