nVidia - expect 120Watt GPUs soon!

Mmmm - watercooling. According to this watercooling FAQ, a good watercooling setup can reach a thermal resistance of 0.05 K/W. This comes on top of the 0.15 K/W resistance of flip-chip packages the size of P3/Athlon chips, for a total of 0.20 K/W (for chips around 1 cm2; I expect this figure to be roughly inversely proportional to chip area). If we assume an ambient temperature of 20 degrees C and a maximum operating temperature of 85 degrees C, a good watercooling setup can handle a heat output of (85-20) K / 0.20 K/W = 325 watts per square centimeter. For cooling beyond this point, one might use isotopically pure silicon, which should reduce the thermal resistance of the flip-chip die by ~35-40%, increasing maximum heat output to ~65/0.145 ≈ 450 watts per square centimeter.
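As a quick sanity check, the arithmetic above can be written out. This is just a sketch using the figures from the post (0.05 K/W cooler, 0.15 K/W package, 20 to 85 degrees C, per 1 cm2 of die); the function name is mine, not from any real tool.

```python
# Back-of-envelope recalculation of the watercooling figures above.
# All constants are the assumed values from the post, per 1 cm^2 of die.
AMBIENT_C = 20.0    # ambient temperature, deg C
T_MAX_C = 85.0      # maximum die operating temperature, deg C
R_COOLER = 0.05     # watercooler thermal resistance, K/W
R_PACKAGE = 0.15    # flip-chip package/die thermal resistance, K/W

def max_heat_flux(r_total, t_max=T_MAX_C, t_ambient=AMBIENT_C):
    """Heat output (W per cm^2 of die) a cooling path can carry."""
    return (t_max - t_ambient) / r_total

print(max_heat_flux(R_COOLER + R_PACKAGE))        # about 325 W/cm^2
# Isotopically pure silicon: assume ~37% lower die resistance,
# giving roughly the ~450 W/cm^2 figure quoted above.
print(max_heat_flux(R_COOLER + R_PACKAGE * 0.63))
```

Note that since the 0.20 K/W figure is stated for a 1 cm2 chip and scales roughly inversely with area, the watts-per-cm2 ceiling is approximately independent of die size.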

As for 0.13 micron: yes, it will generally reduce the energy drawn each time a transistor switches, but it also allows transistors both to be packed into a smaller area and to switch more frequently (higher clock speed), increasing the wattage per unit area for any given design. In addition, the power lost in the interconnect is unchanged on a per-wire basis, and per-transistor leakage power increases sharply as transistor dimensions shrink.
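The "less energy per switch, but more watts per area" point can be illustrated with classic dynamic-power scaling, where per-transistor switching power goes as C * V^2 * f. The scaling factors below (0.7x linear shrink, 0.85x voltage, 1.4x clock) are illustrative assumptions, not actual 0.13 micron process numbers:

```python
# Illustrative dynamic power density under a process shrink.
# Scaling factors are assumptions for the sketch, not real process data.
def power_density_ratio(shrink=0.7, v_scale=0.85, f_scale=1.4):
    """Ratio of new to old dynamic power per unit area.

    Per-transistor dynamic power scales as C * V^2 * f, with
    capacitance roughly proportional to linear dimension (shrink),
    while the area per transistor scales as shrink^2.
    """
    per_transistor_power = shrink * v_scale**2 * f_scale
    area_per_transistor = shrink**2
    return per_transistor_power / area_per_transistor

# Each transistor burns less power, yet power per mm^2 still rises (>1).
print(power_density_ratio())
```

So even though each individual switch is cheaper, packing transistors tighter and clocking them faster more than makes up for it, which is exactly why a shrink does not automatically mean a cooler chip.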
 
BenSkywalker said:
That said, how much of a pain in the ass was it to set up?

It is a bit of a pain, of course, since you have to mod your computer case to fit in a radiator (amongst other things).

I decided to go for some slightly expensive innovatek gear http://www.innovatek.de/ mainly because it had good reviews and was really easy to set up, since everything uses the same fittings etc.

The pump itself (Eheim 1048) is silent, so the only question is how much noise the fans blowing through the radiator add. I would recommend a large radiator (for better cooling ability) as a number one priority. The one I got was built to work with two 120 mm fans. I got two Papst fans, which are known for a very good noise/air-throughput ratio, and I can tell quite a difference in noise depending on whether they are running at 8 or 6 volts (which costs me about 2 degrees C in temperature).

Anyway, it's not too difficult IMO to set up if you buy the same gear (otherwise the problem can be that fittings and tubes differ from vendor to vendor) or make sure in advance that the parts fit together.

There is a lot more to be said on the topic, so I suggest everyone who is interested do some digging on the internet. I don't have the links anymore, but a number of websites have made some really good guides.

... and for a hardcore watercooling forum, visit:

http://forums.procooling.com/vbb/forumdisplay.php?s=&forumid=9
 
DemoCoder said:
Servers don't always run in cabinets. First of all, today, people are squeezing servers into 1U, 1/2 U and 1/4U form factors, much smaller than your desktop. Secondly, not every business has a data center or co-location. Many small businesses run their servers in a normal office suite. Itanium and Opteron will eventually find their way into desktops (once they come down from $4000 per chip)

There's a difference between 1U etc servers, and huge mainframes. Correct me if I'm wrong, but I do not believe you can get an Itanium in a 1U form factor. If not, then pointing out this difference is ridiculous. If so, then I admit I'm wrong.

Secondly, as long as the CPU is in a different room it doesn't count as being the same as a desktop. It doesn't matter if it's in a data center, or an office down the hall, if people don't work in the same room it's not the same conditions as a desktop situation. I could put a lot of ridiculously overpowered fans on my computer and it wouldn't bother me if I could sacrifice the use of a whole room of my house!

You're not arguing with me about heat: you're arguing with Intel themselves. See for yourself (page 4 onward).

http://www.intel.com/research/mrl/news/files/MRFKeynote_Overview.pdf

I honestly don't have any in-depth knowledge in that area, beyond the fact that my AMD processor has a monstrous heatsink on it, which is still hot to the touch despite my case being well ventilated and the heatsink having an 80mm fan. I don't even want to think how hot it would be with a stock HSF. Let alone that wimpy GFFX heatsink/fan (which ironically is 10 times louder!).

DemoCoder said:
I frankly do not care if it is hot or loud. I pay a premium for power and progress. If you don't want a Dragster, buy a Civic. The top of the line future GPUs are going to put out heat and suck power like crazy. If you don't like it, buy a "cut down" version tailored for heat, power, and sound. But just because some of us want muscle cars, and you want a quiet luxury car, don't tell me to trade power for comfort.

You're welcome to do what you like. As for me, I'll buy from the company that can come up with an equally fast solution but can do it quieter and cooler. Honestly, the arguments you're making in favor of more heat are about as sensible as those of people who don't want to increase car fuel economy.

Who exactly is going to buy these insanely loud graphics cards? I mean, people were already upset about the GFFX! I don't think you're being realistic about the size of the market of people who just don't care about noise, heat, and power usage. In fact, I think if you slapped a screaming Delta fan in your computer, you'd change your mind very fast.
 
Cooling the VPU/GPU is one thing, but consider the other stuff that makes the video card work, such as the power supply circuits and the memory. Water cooling the whole card would be more like it, meaning to me the card has to be designed from the get-go to be water cooled, which also means a cooling system would be provided with the card :oops:. I am not sure a new case design incorporating the cooling needed for tomorrow's advanced computers will be forthcoming. Heat pipes with fans seem more likely to me for the next couple of generations; increasing the heat-transfer surface area should handle the 120 W, and blowing that hot air out of the case also seems to be how it will be done.
 
Johnny Rotten said:
Well, the 3 GHz P4 is putting out about 75 watts right now. The forthcoming 3.2 GHz P4 will be approaching 80 watts. Now consider that GPUs are growing in transistor count and complexity far faster than general-purpose CPUs, and the 120 W figure, while alarming, isn't all that unrealistic. Cooling is going to continue to be a concern for 3D chips in the future, just like cooling IS a concern for CPUs today.

According to the 3.06 GHz hyperthreading power consumption tables over at Ace's Hardware--published a couple of months ago--maximum heat dissipation for those HT CPUs is 105 W. (I shouldn't have to provide a link for this one.)
 
Frankly, I'm thinking a possible revival of the multiple G/VPU approach might be interesting--especially now that AGP (PCI Express later on?) can handle multiple devices. Seems to me ATI could mount two R350s on a single PCB with 256 MB of RAM, with the (by comparison) tiny little fans required, and wind up with a PCB no larger than nVidia's present GF4 Ti4600-series PCBs. Imagine an SLI-type arrangement with each VPU rendering every other scanline and having its own dedicated pool of 128 MB of RAM. Even more feasible, I suppose, with the R350 manufactured at 0.13 microns. With 16 pixels per clock you wouldn't have to worry about clock speed to the extent that Dustbusters and 12-layer PCBs would be in order, possibly. Heck, you could probably do this now with R350s at 0.15 microns, as yields seem to be good enough. Just a thought. 256-bit buses per VPU might complicate the PCB, of course.
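The SLI-style split described above is easy to sketch: each VPU owns every other scanline, so the workload (and the memory traffic into each VPU's dedicated pool) is divided roughly in half. This is a toy illustration of the assignment scheme only; the names and sizes are made up, not any real driver API:

```python
# Toy sketch of scanline-interleaved rendering across two VPUs,
# as in the original 3dfx SLI scheme. Purely illustrative.
def assign_scanlines(height, num_vpus=2):
    """Map each scanline index to the VPU responsible for rendering it."""
    return {y: y % num_vpus for y in range(height)}

assignment = assign_scanlines(8)  # an 8-line toy framebuffer
for vpu in range(2):
    lines = [y for y, owner in assignment.items() if owner == vpu]
    print(f"VPU {vpu} renders scanlines {lines}")
# Each VPU touches only half the scanlines, so each can work out of
# its own 128 MB pool, and the halves interleave at scanout.
```

The catch, historically, is that both chips still need the full geometry and texture set, so memory is duplicated rather than pooled, which is one reason the approach fell out of favor.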
 
Nagorak said:
There's a difference between 1U etc servers, and huge mainframes. Correct me if I'm wrong, but I do not believe you can get an Itanium in a 1U form factor. If not, then pointing out this difference is ridiculous. If so, then I admit I'm wrong.

An example here.

Secondly, as long as the CPU is in a different room it doesn't count as being the same as a desktop. It doesn't matter if it's in a data center, or an office down the hall, if people don't work in the same room it's not the same conditions as a desktop situation. I could put a lot of ridiculously overpowered fans on my computer and it wouldn't bother me if I could sacrifice the use of a whole room of my house!

Desktop? Not every graphics chip is for desktop.
You said 100W is maximum for processors. Actually, 100W is just for desktop processors. That's what I want to point out.
 
Hmm, was just thinking about what someone said about power constraints limiting GPU power and not bandwidth....

Well there is one graphics technology that saves greatly on power dissipation.... COUGH COUGH.......
 
Dave B(TotalVR) said:
Well there is one graphics technology that saves greatly on power dissipation.... COUGH COUGH.......

Sure...especially considering that *cough* "technology" *cough* consumes so much less power than an actual product. ;)
 
I think that this is a good reason for graphics companies to start considering tile-based rendering (as DaveB already said) *cough* Gigapixel tech *cough* and/or multichip cards.
 
LeStoffer-

It is a bit of a pain, of course, since you have to mod your computer case to fit in a radiator (amongst other things).

Hmmm, I don't think I'd have to mod my case; I have about eight inches of empty space between my mobo and my drives (and I'm not running uATX).

Nagorak

You're welcome to do what you like. As for me, I'll buy from the company that can come up with an equally fast solution but can do it quieter and cooler.

Could you point me to the GPU that only uses a heatsink that can compete with the R9800Pro? ;)
 
BenSkywalker said:
Nagorak

You're welcome to do what you like. As for me, I'll buy from the company that can come up with an equally fast solution but can do it quieter and cooler.

Could you point me to the GPU that only uses a heatsink that can compete with the R9800Pro? ;)

Well, if maximum performance is all that matters, you are always going to push the power envelope. The question is rather what you consider an optimum tradeoff. DemoCoder indicated that for him, there is no tradeoff - maximum graphics performance is all that matters. For me, it's more complex.

The thing is that as you progress with performance as your only overriding concern, you will lose market potential among customers who aren't as narrow in their focus, and you will lose ease of technological transferability, in that maximum-power designs are probably not optimal starting points from which to migrate to chips used in portables (a growing market) and integrated chipsets (a growing market).


All computers in my home are very quiet by modern standards, and the machine I'm typing on right now (a 17" FP iMac) doesn't draw as much power in total as a GeForce FX, nor could it be designed to fit its current form factor if it had a typical 350 Watt power supply. So, broadening our horizons a bit, not only do you pay for maximum performance/power-draw GPUs with the problems above, but you also impose limits on the entire system design.

Sure, I still have a Tower of Power at home, but given the choice, I'd rather it was small enough to be tucked away completely. I'd pay good money for designs that moved in that direction, as I consider it preferable to what I have, and I will not pay money for technology that generates additional noise in my home. Quiet cooling technology moves forward though, and the limitation these days seems to be the power supplies, particularly as continuous power draw approaches or exceeds 200 Watts. If you don't mind bulky, ugly solutions you can build mufflers, but somewhere around there I draw the line.

Basically, it's a question of priorities, and my wallet will support manufacturers who make GPUs that can be adapted to passive cooling, and preferably can be put into small silent enclosures. I might buy a 9800 and the biggest Zalman, but that would still increase power draw (and PSU noise) and increase heat generation within the cabinet, which would require me to increase exhaust fan speeds a bit. I'm just not terribly happy with that, and it's another reason for me to postpone buying for a while longer so I can see what tradeoffs the next generation of cards offer.

From this perspective, the 9600 is the most attractive option out there currently, but other IHVs may come up with something better, and 0.09 um will be very interesting.


Entropy

PS. I'm sure we will see top-of-the-line power consumption continue to climb as IHVs battle for the performance crown and the brand recognition that follows that title. That's a given.
 