EU cripples future graphics cards

If you want to encourage sensible energy usage, fine: increase energy prices any way you want. But imposing technical limits on a field that's still evolving at breakneck speed is plain stupid.

We used to see a doubling of transistor density every 18 months. We also saw frequency scaling as gate oxides became thinner and thinner, along with ever-increasing power consumption.

That is over.

Moore's law is dead: we now have 24-30 months between density doublings, and frequency increases are all but gone.

You can do it for cars or air conditioners or your fridge, which have had 50 years to evolve to a point where all improvements are very incremental. Doing it for a field where you'll still see a 10x performance improvement a couple of years from now is incredibly short-sighted.

Improvements to GPUs are incremental. I've upgraded from a 4890 to a 7970 GE, and performance has only doubled. There are three generations and 3½ years between the releases of the two cards. The 7970 uses more power as well. The days when you upgraded your GPU yearly and saw a doubling of performance are over.

Note: The EU directive is only about idle power and, IMO, they aren't nearly ambitious enough.

Cheers
 
The pace of development is irrelevant, as is the "power wall" ... those are only relevant to the ON-state power, which isn't what this regulation is about.

If this regulation couldn't be met because the ASICs are too large and power regulation not advanced enough, that would be one thing ... but those things aren't true. These regulations can be met; when they aren't, it's because the engineering cost of meeting them was deemed too high ... and now, for sure, it isn't.
 
this regulation is about

Once again. Then, wth is this regulation about? What is the relation between memory bandwidth and power consumption? Do these people realise that tomorrow we may have 1 TB/s bandwidth achievable with ultra-low-voltage memory modules?
 
Once again. Then, wth is this regulation about?
"SLEEP/IDLE/OFF power consumption"
What is the relation between memory bandwidth and power consumption?
It's used as a proxy for performance, and it's a pretty decent one at that ... there is an exemption for the very highest performance desktop machines (CPU/GPU/PSU wise) for the next 12 months.
 
Slicing by memory bandwidth is a rough proxy for segmentation (or chip size); this has been used in a number of cases previously. It is understood that a bigger ASIC is going to have more static leakage power, even in relatively low-power states, so a "one size fits all" categorization will not work and some mechanism is needed to slice the market.
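To make the mechanism concrete, here is a minimal Python sketch of slicing the market by memory bandwidth into idle-power tiers. The tier boundaries and caps below are made-up placeholders for illustration, not the values from the actual EU directive.

[code]
# Minimal sketch: each card is placed in a tier by its memory bandwidth, and
# each tier gets its own idle-power cap. The boundaries and caps below are
# made-up placeholders, NOT the values from the actual EU directive.
ILLUSTRATIVE_TIERS = [
    # (bandwidth ceiling in GB/s, hypothetical idle-power cap in W)
    (16, 5.0),
    (64, 8.0),
    (128, 12.0),
    (float("inf"), 20.0),
]

def idle_power_cap(mem_bandwidth_gbs: float) -> float:
    """Return the hypothetical idle-power cap for a card, chosen by the first
    tier whose bandwidth ceiling the card does not exceed."""
    for ceiling, cap_w in ILLUSTRATIVE_TIERS:
        if mem_bandwidth_gbs <= ceiling:
            return cap_w
    return ILLUSTRATIVE_TIERS[-1][1]

# A 7970-class card (~264 GB/s) lands in the top tier of this made-up table.
print(idle_power_cap(264.0))
[/code]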
 

At 1900x1200, the resolution I use, TechPowerUp's numbers for the 7970 GE add up to 2.13x the performance of the 4890 (i.e. 113% faster).

108 W on average vs 209 W on average.

AFAICT from the links you gave, 131 W average for 4890 vs 209 W for the 7970 GE. It still means that performance/watt is less than 50% better than 3½ years ago.
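A quick back-of-the-envelope check of that claim in Python, using the 2.13x figure quoted above and the 131 W / 209 W average power numbers:

[code]
# Back-of-the-envelope check of the performance-per-watt claim, using the
# figures quoted above: ~2.13x performance, 131 W vs 209 W average power.
perf_ratio = 2.13            # 7970 GE performance relative to the 4890
power_4890 = 131.0           # W, average (4890)
power_7970ge = 209.0         # W, average (7970 GE)

perf_per_watt_gain = perf_ratio / (power_7970ge / power_4890)
print(f"perf/W improvement: {perf_per_watt_gain:.2f}x, "
      f"i.e. {(perf_per_watt_gain - 1) * 100:.0f}% better")
# -> about 1.34x, i.e. roughly 34% better: less than 50% in ~3.5 years
[/code]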

Cheers
 
To be accurate, slightly more than 3 years between April 02, 2009 and June 21, 2012.
But if we count the original 7970, then it would be 2.5 years.
 
To be accurate, slightly more than 3 years between April 02, 2009 and June 21, 2012.
But if we count the original 7970, then it would be 2.5 years.

Which you should then compare to the original 4870. The 4890 was a speed-optimized 4870, clocking 13% higher, exactly the same ratio as 7970 GE vs 7970.

The 4870 was introduced in late June 2008, the 7970 in late December 2011, exactly 3½ years apart.

Cheers
 
At 1900x1200, the resolution I use, TechPowerUp's numbers for the 7970 GE add up to 2.13x the performance of the 4890 (i.e. 113% faster).

Cheers

Err… I got a factor of 2.76. How did you get to 2.13?

Even a factor of just 2 in 5 years would be much faster than almost any other industry.
 
I think Roderic's point is that AMD and Nvidia have pushed power consumption to absurd levels trying to get the highest absolute performance with no regard to electricity usage, causing excessive heat production and associated noise.

I remember a time when computer ICs didn't have heat sinks at all (not even passive ones) and computers were silent.

Cheers

Maybe a ZX81 was silent, or an Amiga/Atari when the floppy drive wasn't in use, but I sure remember hard disk drives being noisy as hell :LOL:
 
Which you should then compare to the original 4870. The 4890 was a speed-optimized 4870, clocking 13% higher, exactly the same ratio as 7970 GE vs 7970.

The 4870 was introduced in late June 2008, the 7970 in late December 2011, exactly 3½ years apart.

Cheers

RV790 had more work done on it compared to the 7970 -> 7970 GHz transition, where you only had different binning. ;)
 
RV790 had more work done on it compared to the 7970 -> 7970 GHz transition, where you only had different binning. ;)

I know. It still had the same architecture and the same number of shaders.

Regarding the 2.13x performance increase: I misread one of the graphs. The speedup is 2.8x. However, don't forget the 7970 is 26% bigger than the 4890 (365 mm^2 vs 289 mm^2).
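Redoing the same back-of-the-envelope in Python with the corrected 2.8x speedup, normalizing for die size as well as power (figures as quoted in the thread):

[code]
# Same back-of-the-envelope, redone with the corrected 2.8x speedup and
# normalized for die size (365 mm^2 vs 289 mm^2); power figures as quoted
# earlier in the thread.
perf_ratio = 2.8
area_4890, area_7970 = 289.0, 365.0      # mm^2
power_4890, power_7970ge = 131.0, 209.0  # W, average

perf_per_mm2_gain = perf_ratio / (area_7970 / area_4890)
perf_per_watt_gain = perf_ratio / (power_7970ge / power_4890)
print(f"perf/mm^2 gain: {perf_per_mm2_gain:.2f}x")  # ~2.2x over ~3.5 years
print(f"perf/W gain:    {perf_per_watt_gain:.2f}x") # ~1.75x over ~3.5 years
[/code]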

Cheers
 
Seems like power consumption at the high end is actually trending down a bit anyway.

Even if it hadn't been, legislators have far, far more important things to [strike]waste[/strike] spend their time on. What a joke.
 
Actually from an economic point of view I'm pretty sure that idle power consumption legislation has been one of the better uses of EU time ...
 
Bad regulation. They should have just added incentives for lower consumption (by creating energy consumption classes, each with a different level of taxation), not gone so deep, since, as someone said above, technology in this field evolves too fast.
 