Extremely Power Hungry D3D10 GPUs

Ailuros said:
Having a separate PSU for the GPU(-s) sounds like a great idea to me.
Ummm...no please.

Particularly external power supplies are evil incarnate; I hate external power supplies. More boxes on the floor taking up space and collecting dust are a total pain. ONE beefy internal power supply is the way to go, thankyouverymuch...
 
_xxx_ said:
3dfx thought that too... ;)
I'm with Ailuros on this one. I kind of like the idea of spreading the load across different internal PSUs, because GPUs will always be very power hungry, while I don't see the rest of the system being that demanding on stable A/I lines even looking into the future. Power efficiency should be king in the CPU world for the time to come, whereas I don't expect to see that with GPUs, because they always stress the process technology to its max in every generation.

Speaking of 3dfx in particular, they introduced their external power supply (external is a no-go for me) because they went a little too far ahead with their multi-chip V5-6000 (and I don't remember power supplies being that sophisticated and stable back then, which was their reasoning for including it), and looking back this was more of an extreme case than the usual gaming PC.

However, today this should be fairly normal among enthusiast gamers (SLI/CrossFire [physics, too]), and graphics hardware that needs to be fed stable, high wattage constantly for hours shouldn't necessarily tax the main system power supply. Not if you can avoid it.

I never liked the idea of 1 kW PSUs; overall, even for extreme gamers, this sounds more like an excuse [to sell them] than a necessity to me.
 
Vysez said:
Also, 130 to 300 watts is quite a wide range; more precision is required in order to get a concrete idea.
Given that these new chips will come in both traditional and GX2/Gemini configurations, I disagree.
 
Sunrise said:
I'm with Ailuros on this one. I kind of like the idea of spreading the load across different internal PSUs, because GPUs will always be very power hungry, while I don't see the rest of the system being that demanding on stable A/I lines even looking into the future. Power efficiency should be king in the CPU world for the time to come, whereas I don't expect to see that with GPUs, because they always stress the process technology to its max in every generation.

Speaking of 3dfx in particular, they introduced their external power supply (external is a no-go for me) because they went a little too far ahead with their multi-chip V5-6000 (and I don't remember power supplies being that sophisticated and stable back then, which was their reasoning for including it), and looking back this was more of an extreme case than the usual gaming PC.

However, today this should be fairly normal among enthusiast gamers (SLI/CrossFire [physics, too]), and graphics hardware that needs to be fed stable, high wattage constantly for hours shouldn't necessarily tax the main system power supply. Not if you can avoid it.

I never liked the idea of 1 kW PSUs; overall, even for extreme gamers, this sounds more like an excuse [to sell them] than a necessity to me.
And how much will it cost if that 1 kW unit fails? Two PSUs look like the better option in this case, and who knows - in the future, enthusiast gaming cases might accommodate a stack for mounting more than one PSU if the market calls for it (like what server cases offer now).
 
My take on this is that we might consider whether feature development on graphics ASICs hasn't gone too far too fast, given the modus operandi of graphics on PCs.

Higher-end features cost transistors, which translates into higher power draw and larger dies/higher cost.

The current way of doing graphics on PCs is to use a high-end chip as the technology leader, and then cut it down in performance by reducing parallelism while retaining the feature set. Add voltage and frequency reductions for mobile use. I question whether this is necessarily optimal for the consumer, and whether holding back on features that focus on precision or generality of processing wouldn't actually be preferable for most consumers. (Put in simpler terms, it is an oft-repeated lament that new cards introduce new functionality, but that once applications actually take advantage of it, the performance isn't there. This has been a valid point, and definitely so for the performance-reduced siblings of the high-end leader.)

IMO the high-end parts have removed themselves so far from the mainstream that it is questionable whether they are a good base for making derivatives to serve other markets. If we break it down, almost half of the PC market is portables, and a major part of the desktop market serves institutions, offices and industrial sites. That leaves home desktop systems, only a portion of which are used for gaming, and the majority of those aren't used heavily. How much is left? And even among the remainder, the people who take an active interest in gaming performance, there are still limitations in terms of acceptable cost and noise. (For instance, Anandtech has a Computex article up in which it is remarked that 94% of all add-in cards sold by Gigabyte are passively cooled.)

These DX10 monsters indicate that, to get down to acceptable levels of power draw for the vast majority of users, performance has to be reduced drastically. The performance span between budget solutions and high-end solutions is already a problem, and it seems that the transistor penalty for implementing DX10 features will only make this problem more severe.

So - what will happen? Personally, I feel it would make sense not to strive for an identical feature set within a generation, but to let the feature set vary, in order to facilitate making cheap, cool-running GPUs that still offer good performance within their feature limitations. The simplest way of doing this would of course be to let the previous generation of parts migrate forward to new process technology, allowing for smaller dies and lower power draw at the same performance level, but you could definitely try to be more intelligent about it than that. This approach has been used before, by the way; the GF4 MX is a good example.

What I believe will happen, however, is a further marginalisation of the high end, slow uptake of DX10, and a wider span between low-end and high-end performance. Not particularly good for either developers or consumers.

Unfortunately, the highest end is awarded so much attention in the media (and forums such as this) that it pays off in marketing terms to push hard there, and of course it also helps raise the barrier to entry for new graphics contenders. The current modus operandi of the industry is firmly entrenched.

It is remarkable, though, how out of step the rumoured DX10 parts are with the trend toward mobile computing and lower-power desktop computing.
 
At 300 W, and from such a small die. Is it feasible to recover some of the heat losses with some kind of microgenerator (a phase-change engine of some sort, with freon or butane or something)?
 
Chalnoth said:
I don't believe for an instant that DX10-level cards will require that much more power draw.
But the R600 might.
We are only talking about two thirds more than what the X1900XTX draws at peak, and given that ATI has promised it will offer superior performance in legacy apps plus the extended functionality... it looked to be a very power-hungry device even before this report.

Of course, you will be able to provide DX10-capable cards with much lower power draw, but the fact remains: additional gates translate into higher power draw, larger dies, and higher cost, all other things being equal. Given the state of lithography, it might have made sense not to push the technology envelope so fast. Is the DX9 standard really all that limiting for games development?
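
(A rough, textbook-level illustration of that "more gates, all else equal" point, not a claim about any particular chip: CMOS dynamic power scales roughly with switched capacitance, voltage squared and clock, so piling on transistors without cutting voltage or frequency pushes board power up almost linearly. All numbers in the sketch below are invented for illustration.)

```python
# First-order CMOS dynamic power: P ~ alpha * C * V^2 * f
# alpha = activity factor, C = switched capacitance (scales with gate count),
# V = core voltage, f = clock. Every figure below is an illustrative assumption.

def scaled_power(base_power_w, rel_gates, rel_voltage, rel_clock):
    """Scale a baseline board power by relative gate count, V^2 and clock."""
    return base_power_w * rel_gates * rel_voltage**2 * rel_clock

# Hypothetical next-gen part: ~1.8x the logic, same voltage, ~10% higher clock,
# starting from a ~120 W baseline (roughly an X1900XTX-class board).
print(round(scaled_power(120, rel_gates=1.8, rel_voltage=1.0, rel_clock=1.1)))  # ~238 W
```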

Having a huge performance spread in the market and slow uptake of new features (made even slower by being tied to Vista) isn't a wonderful scenario from either a developer or a consumer perspective. The huge performance spread is easily ignorable due to the limited market penetration of monster graphics; just pretend they don't exist and you'll be 99% right, even among the gaming population. The other issue is thornier.
 
soylent said:
At 300 W, and from such a small die. Is it feasible to recover some of the heat losses with some kind of microgenerator (a phase-change engine of some sort, with freon or butane or something)?


I wouldn't expect that fast a jump in terms of cooling. The next thing to become common will probably be more and more copper/aluminum heatpipes in contact with the components, coupled with larger (quieter) fans. Just look at the X1900XT ICEQ3 by HIS, or how much the cooling changed from the 6800U to the 7800GTX 512. Eventually we may even see things like this become common (I hope):

http://www.pcauthority.com.au/review.aspx?CIaRID=2932

I'm tempted to get one just to mess with the cooler. Water cooling for the enthusiast moron! I've had two reservoirs crack on me in my last water cooling and peltier experiments. Makes a friggin' mess. Plus tube management is a pain, especially with how much room these cards are taking up. I like the Sapphire idea a lot.
 
I don't see how that setup can possibly be better than a dual-slot cooler. Sure, it may be pretty good at getting the heat from the GPU to the heatsink, but there's no way it's better than just attaching that heatsink directly to the GPU. The reasons why water cooling is good are:
1. Water is good at transferring heat.
2. You can have a much larger air cooling system outside the case.
 
Entropy said:
The current way of doing graphics on PCs is to use a high-end chip as the technology leader, and then cut it down in performance by reducing parallelism while retaining the feature set. Add voltage and frequency reductions for mobile use. I question whether this is necessarily optimal for the consumer, and whether holding back on features that focus on precision or generality of processing wouldn't actually be preferable for most consumers. (Put in simpler terms, it is an oft-repeated lament that new cards introduce new functionality, but that once applications actually take advantage of it, the performance isn't there. This has been a valid point, and definitely so for the performance-reduced siblings of the high-end leader.)
The chips have to meet API and OEM specifications. If they don't, then they either won't be certified or won't be bought.
So - what will happen? Personally, I feel it would make sense not to strive for an identical feature set within a generation, but to let the feature set vary, in order to facilitate making cheap, cool-running GPUs that still offer good performance within their feature limitations. The simplest way of doing this would of course be to let the previous generation of parts migrate forward to new process technology, allowing for smaller dies and lower power draw at the same performance level, but you could definitely try to be more intelligent about it than that. This approach has been used before, by the way; the GF4 MX is a good example.
See what I said above about API and OEM requirements.
What I believe will happen, however, is a further marginalisation of the high end, slow uptake of DX10, and a wider span between low-end and high-end performance. Not particularly good for either developers or consumers.
Having a uniform feature set is a boon to developers, as they know their code will work regardless of platform. As long as there is some way to improve performance (reduce resolution, etc.), the end-user can enjoy the game.
 
Chalnoth said:
Er, according to Xbit Labs, the X1900 XTX consumes about 120 watts.
http://www.xbitlabs.com/articles/video/display/gpu-consumption2006_4.html

Going to 300 watts would be an increase of 150%. There is just no way. Now, a Crossfire system, maybe, but ATI really needs to think about making their products more power efficient in light of what nVidia's doing right now.

According to ATI the peak power draw of the X1900XTX is 190W.

Xbit Labs do a commendable job measuring card power draw, even managing to exclude power supply losses. However, their testing method doesn't seem to stress the cards to their maximum. This matters not at all as long as they only compare against their other collected data, which they are smart enough to do. In short, their relative values should be quite good, perhaps the best available anywhere, but their absolute figures are a bit low. (And it could actually be argued that standardized PSU losses, +25% for good PSUs, should be added to the power draw of the card itself, since that is additional heat to be dissipated for which the card is responsible.)
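
(Putting rough numbers on the two X1900XTX figures and on the PSU-loss point above; a back-of-the-envelope only, and the 80% efficiency is an assumed value for a decent PSU of the era, not a measured one.)

```python
# Figures quoted in this thread, in watts.
xbit_measured = 120   # Xbit Labs' measured X1900XTX draw under their load
ati_peak      = 190   # ATI's stated peak for the X1900XTX
rumoured_dx10 = 300   # upper end of the rumoured D3D10 range

# Relative increase implied by the 300 W rumour.
print(f"vs Xbit's figure: +{rumoured_dx10 / xbit_measured - 1:.0%}")  # +150%
print(f"vs ATI's peak:    +{rumoured_dx10 / ati_peak - 1:.0%}")       # +58%

# PSU losses: an ~80%-efficient supply turns roughly an extra quarter of the
# delivered power into heat inside the case, hence the "+25%" rule of thumb.
efficiency = 0.80  # assumed
print(f"wall draw for 300 W delivered: {rumoured_dx10 / efficiency:.0f} W")            # 375 W
print(f"extra heat inside the PSU:     {rumoured_dx10 * (1 / efficiency - 1):.0f} W")  # 75 W
```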
 
Entropy said:
According to ATI the peak power draw of the X1900XTX is 190W.

Source? I don't see how this would be possible (heat/cooling).

And it could actually be argued that standardized PSU losses, +25% for good PSUs, should be added to the power draw of the card itself, since that is additional heat to be dissipated for which the card is responsible.

It's far more useful to measure the power draw of each component separately, because of the differences in efficiency between PSUs.
 
Chalnoth said:
I don't see how that setup can possibly be better than a dual-slot cooler. Sure, it may be pretty good at getting the heat from the GPU to the heatsink, but there's no way it's better than just attaching that heatsink directly to the GPU. The reasons why water cooling is good are:
1. Water is good at transferring heat.
2. You can have a much larger air cooling system outside the case.


It's a 20°C drop according to the review; argue with it all you want. That's pretty sizable.
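
(For what the claim is worth, the implied gain is easy to estimate: temperature rise above ambient is roughly dissipated power times thermal resistance. A sketch only; the ~120 W figure is the Xbit number quoted earlier, and the real answer depends on ambient and airflow.)

```python
# Temperature rise above ambient ~ dissipated power * thermal resistance (degC/W).
power_w   = 120.0   # assumed dissipation, roughly the Xbit X1900XTX figure
delta_t_c = 20.0    # temperature drop claimed in the review

print(f"implied thermal-resistance improvement: ~{delta_t_c / power_w:.2f} degC/W")  # ~0.17
```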
 
Guden Oden said:
Ummm...no please.

Particularly external power supplies are evil incarnate; I hate external power supplies. More boxes on the floor taking up space and collecting dust are a total pain. ONE beefy internal power supply is the way to go, thankyouverymuch...

The one proposed up there doesn't look like an external power supply to me, au contraire.

I find the idea of a separate PSU for high-end GPUs that fits into an HDD bay particularly interesting. Yes, you would need a large case to start with, but then again, who would be so naive as to fit a high-end system into a mediocre-sized case anyway?

Add to that that Sunrise caught my thoughts beyond that right on the spot. With the power draw of multi-core CPUs constantly increasing in the future, I can have a powerful main PSU that lasts me longer if the load of the GPU(-s) is offloaded to a PSU dedicated to it/them.

I have only one PCIe slot populated at the moment and the current PSU has a theoretical peak of 38A on the +12V rails. How long will it last me if I keep it and channel the power feed for one or more future GPUs from a separate PSU? Or is there any mainboard/CPU/host memory combination out there that needs 600W to operate right now?
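
(A quick back-of-the-envelope behind that question, assuming the 38 A rating is the combined +12 V capacity and using an invented, fairly generous budget for the non-GPU parts of a 2006-era system.)

```python
# Combined +12 V capacity of the existing PSU.
rail_amps  = 38.0
rail_watts = rail_amps * 12.0                       # 456 W on +12 V alone
print(f"+12 V capacity: {rail_watts:.0f} W")

# Assumed budget for CPU, board, memory, drives and fans.
rest_of_system_w = 200.0
print(f"headroom with the GPU(s) on their own PSU: {rail_watts - rest_of_system_w:.0f} W")
```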
 
Chalnoth said:
I bet you could get a similar drop just by using a better air-based cooler.

Agreed. However, a water pipe going back and forth inside the sink through the fins might spread the heat better than relying on diffusion alone in a chunk of metal. Heat pipes can probably solve that more reliably and at a much lower cost.
 