Energy-efficient and cost-effective GPUs - anytime?

Greetings and Merry Christmas to you all.

Nvidia's G80 and ATI's upcoming R600 GPU are graphics monsters indeed, but they are too large, consume a lot of power, dissipate a lot of heat and are expensive to manufacture. All this makes me wonder: will Nvidia and ATI ever make energy-efficient and cost-effective GPUs? IMO they should learn a few lessons from AMD and Intel. I hope their next-gen chips give priority to efficiency. What about you, do you?
 
CPUs need to be efficient because they're used in huge server clusters where power draw matters.
GPU render farms don't seem to be that popular, so there isn't really a dire need for efficient GPUs there - while gamers keep screaming for more and more power.

If you want efficient cards, you probably shouldn't focus on the top-of-the-top Xtr3m3 cards either.
You can always try plugging an MXM laptop GPU into your desktop ;) or, heck, even a mid- or low-end passively cooled one.
 
Merry Christmas to you, too!

G80 (8800GTX) seems to draw less power per frame rendered than G71 (7900GTX), and that with more and faster RAM on board. Just compare the performance and power-draw figures.

I'm an efficiency fanatic too, but NV's been great about reducing power consumption per unit of performance over the past few generations. As for R600, I wouldn't want to comment without seeing power-draw tests and benchmarks. It'll surely draw more power than R580, but the question is whether it will be correspondingly faster. Besides, the high end isn't where you should look for the performance-per-watt crown.
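
To put rough numbers on "power per frame", here's a quick back-of-the-envelope sketch in Python; the framerate and wattage figures below are invented placeholders, not measurements:

    # Hypothetical figures, only to illustrate the performance-per-watt arithmetic.
    cards = {
        "last-gen flagship": {"fps": 60.0, "watts": 110.0},
        "new flagship": {"fps": 100.0, "watts": 150.0},
    }

    for name, c in cards.items():
        print(f"{name}: {c['fps'] / c['watts']:.2f} fps per watt")

    # The newer card draws more power in absolute terms, but if the framerate
    # rises faster than the wattage, its performance per watt is still better.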
 
You have to define what you mean by "energy efficient" and "cost effective". What is a reasonable amount of power for a GPU to use? 1W? 10W? 100W? What is a reasonable price?

Both NVIDIA and ATI already make GPUs which fit my personal definition on both scores (7900GT and X1950Pro). But what's your personal definition? There are examples further down the product line with even lower power and lower price; what's wrong with those?

It seems to me that the absolute top-end will always be "out there" on the edge of sanity when it comes to power consumption and price, if only because that's the way things work. The further they can push the power budget the more performance they can squeeze out. Look at it this way... if the next generation brings no increase in performance, and no extra features, but simply halves the power consumption, how would the graphics enthusiast crowd react? Badly, I'd wager.

If you're waiting for a G80/R600 equivalent which uses 10W and costs $100 you'll be waiting a long time I think.

[Don't get me wrong, I'm as keen to see low-power GPUs as anyone; I'm a silent-PC enthusiast, you see. But I'm also not deluded enough to think that the balls-out top end will ever be anything other than ridiculous in power and price, so I hunt a couple of notches down from the top and accept that performance won't be as good. It's all a trade-off.]
 
If you want efficient cards, you probably shouldn't focus on the top-of-the-top Xtr3m3 cards either.
You can always try plugging an MXM laptop GPU into your desktop ;) or, heck, even a mid- or low-end passively cooled one.

MXM? I've never heard of those. Where can I get one?

Thanks
 
The 7950GX2 was "energy efficient" from the point of view that it was based on the G71M instead of G71 (the laptop and desktop variants respectively). That's probably not what you're thinking of, but it's still a step in that direction, although only because heat dissipation would be too hard to manage otherwise. I'm sure we'll see similar things eventually "thanks" to CUDA, although maybe only for Quadro-like cards. Hmm.

If what you're thinking of, on the other hand, is a GPU that is a good compromise between performance, power and price, I think you may wish to consider that G80/R600 are the ultra-high-end and don't represent the mid-range market segment. Last I heard, 7600GT and 7900GT were nothing to complain about there, although they're obviously not manufactured on a low-power process, unlike their laptop/MXM equivalents.


Uttar
 
The 7950GX2 was "energy efficient" from the point of view that it was based on the G71M instead of G71 (the laptop and desktop variants respectively).
Is there official confirmation of that Uttar? I've seen it quoted a few times and yet a comparative analysis of 7900 GTs in SLI vs. a 7950 GX2 suggests that there's nothing 'special' about the power figures of the chips in the GX2.
 
Is there official confirmation of that Uttar?
Oops! That's an interesting point. Indeed, I've never seen an official confirmation of that. And tests such as yours definitely seem to imply it's just plain G71s. On the other hand, a 7950GX2 has twice as many memory chips as a 7900GT SLI, and I doubt the difference from the low-power process is more than 20-30%... So it's still perfectly possible that it's based on G71M. If it is, the consequences of that certainly aren't too obvious, though!


Uttar
 
Well, where I live we don't have any online hardware vendors, so I'll have to check this out in my local computer store. Also, I've noticed that these GPUs are more expensive. Do you think they are good for a gaming PC?
 
Isn't the reason CPUs can be made energy efficient, while it's much harder for GPUs, that GPUs are special purpose? Increasing scalar performance on a CPU takes extraordinary amounts of work, and multi-core CPUs need software optimized for them, while increasing performance on a GPU is relatively easy: double the number of pixel pipelines, give it more bandwidth (it may be different now, but since graphics are "embarrassingly" parallel, it's much easier to keep the execution units busy).

On a CPU, some programs can barely take advantage of 1.0 IPC, so the rest of the CPU that isn't used can be throttled, while on a GPU most 3D apps take advantage of whatever parallel processing power you add, so there is much less to throttle.

That's one reason I believe GPUs can't go low power and advance in processing power at the same time. Graphics processing power scales faster than process technology does, while on a CPU that's much less the case. Especially if Nvidia/ATI want 2x performance every generation, they'll never be able to lower the power requirements.
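
A crude way to see the scaling problem (all numbers are made up, and the assumption that power scales linearly with pipeline count is a simplification):

    # If each generation doubles the number of pipelines for ~2x performance,
    # but a new process only cuts per-pipeline power by ~30%, total power climbs.
    pipelines = 8
    watts_per_pipeline = 10.0

    for gen in range(4):
        print(f"gen {gen}: {pipelines} pipelines, ~{pipelines * watts_per_pipeline:.0f} W")
        pipelines *= 2              # 2x performance via more parallel units
        watts_per_pipeline *= 0.7   # optimistic saving from the process shrink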
 
It's going to depend on what you mean by energy efficient.

A GPU during normal operation probably executes roughly an order of magnitude more CPU-equivalent operations than a CPU does per cycle.

Despite that, a GPU is usually in the same ballpark when it comes to power consumption. That makes it 10x more energy-efficient for the amount of work it does. It simply works on a larger problem than a CPU needs to worry about.
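
In other words (toy numbers, purely to show what "10x more energy-efficient" means here):

    # Hypothetical throughput and power figures; only the ratio matters.
    cpu_ops_per_sec = 20e9      # assumed CPU throughput
    gpu_ops_per_sec = 200e9     # assumed GPU throughput, ~10x the CPU
    cpu_watts = 90.0
    gpu_watts = 100.0

    print(f"CPU: {cpu_ops_per_sec / cpu_watts:.2e} ops per joule")
    print(f"GPU: {gpu_ops_per_sec / gpu_watts:.2e} ops per joule")
    # Similar power draw, ~10x the work per second => ~10x the work per joule.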
 
Despite that, a GPU is usually in the same ballpark when it comes to power consumption. That makes it 10x more energy-efficient for the amount of work it does. It simply works on a larger problem than a CPU needs to worry about.

That's the way I'd take it. So I'd say they are highly efficient compared to CPUs, due to the parallel nature of GPUs.
 
Or, perhaps more accurately, the parallel nature of their workloads.
If you build it, they will come. They built a GPU and they gave it parallel workloads. Clearly I'm correct, as I can't recall seeing too many non-parallel things being done on a GPU :p
 
ASICs are always more efficient at their specific task than a more general processor.

Think about MP3 players. Those little guys have a hardware implementation of an MP3 decoder that uses an extremely small amount of power. On a high-end PC, decoding an MP3 still uses, say, 1% of the CPU. That's a LOT more power than that little dedicated DSP draws, and the DSP is a whole lot cheaper too.
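
Roughly (the wattages below are guesses, just to show the scale of the gap):

    # Back-of-the-envelope only; both figures are assumptions for illustration.
    cpu_package_watts = 90.0
    cpu_share_for_mp3 = 0.01          # "say 1% of the CPU"
    dedicated_decoder_watts = 0.05    # a dedicated decoder chip, tens of mW

    cpu_mp3_watts = cpu_package_watts * cpu_share_for_mp3
    print(f"CPU decode cost: ~{cpu_mp3_watts:.2f} W")
    print(f"Dedicated DSP:   ~{dedicated_decoder_watts:.2f} W")
    print(f"Ratio: roughly {cpu_mp3_watts / dedicated_decoder_watts:.0f}x")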

General purpose means more flexible, and to do that you need more hardware. More hardware variety means more unused hardware per unit of time. Hardware that is still using electricity.

GPUs are ASICs designed specifically for 3D rendering. The more general they become, the more power inefficient they'll become.
 
I have asked about a similar topic before.

By energy efficient I mean that it takes less power when idle or when doing light work, such as sitting at the Windows/Mac desktop.

Not long ago some company (can't remember which one) was researching the possibility of using the IGP for the Windows desktop and other low-usage graphics, and only using the proper graphics card when there is a need (e.g. games).

If any of these ideas succeed I would be much happier. I've never liked the idea of a graphics card sucking up so much power even when I'm hardly using it. After all, the graphics computational power required for desktop drawing hasn't increased much over the years.
 
Passively cooled GPUs for the win! ;)

At least if we could avoid having those huge fans on the boards wasting the next PCI slot, it would really be a good thing.
 