_xxx_ said:
But since your wires need to conduct rather high current nowadays, you'll settle for a compromise which will allow you to have reasonably short lines which can allow for a higher load. Higher load means more heat as well, so having a bigger die area will also allow for more effective cooling. Brute force approach.
(Clearly not on topic, but anyway...)
None of that is true.
The relevant wires (that is, the interconnections between transistors) are carrying less and less current as processes shrink. This is just Ohm's law: I = V/R. V is going down and R increases as feature size decreases. It is true that the aggregate current increases, but this is due to the quadratic increase in the number of transistors, which is a power-grid issue and irrelevant to this discussion since it doesn't impact power density. (In fact, a larger die will increase the IR drop on the grid, another reason to keep your die small.)
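To make the direction of that trend concrete, here's a toy calculation. All voltages, resistances and scaling factors are made up purely for illustration (not taken from any real process node); the point is only that per-wire current drops with a shrink even while the aggregate current keeps rising with transistor count:

```python
# Illustrative sketch of the Ohm's-law point; every number and scaling factor
# below is hypothetical, chosen only to show the direction of the trend.

def per_wire_current(v_supply, r_wire):
    """Current through a single interconnect: I = V / R."""
    return v_supply / r_wire

def shrink_node(v_supply, r_wire, n_transistors, s=0.7):
    """One assumed linear shrink by factor s (~0.7 per full node)."""
    v_new = v_supply * 0.9        # assumed modest supply-voltage reduction
    r_new = r_wire / s            # R ~ L/(W*H): L, W, H all scale by s, so R grows by 1/s
    n_new = n_transistors / s**2  # roughly 2x transistors in the same area
    return v_new, r_new, n_new

v, r, n = 1.2, 100.0, 1e8         # arbitrary starting point
v2, r2, n2 = shrink_node(v, r, n)

i_old, i_new = per_wire_current(v, r), per_wire_current(v2, r2)
print(f"per-wire current:  {i_old*1e3:.2f} mA -> {i_new*1e3:.2f} mA (down)")
print(f"aggregate current: {i_old*n:.2e} A -> {i_new*n2:.2e} A (up, a grid problem only)")
```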
For obvious reasons, GPU producers always want to get the maximum possible speed. You suggest that decreasing density can increase this speed. That can only be true if:
1. you have absolutely no other reasonable way to improve your cooling
and
2. you have a lot of timing margin left in your design.
1. is currently not the case, as is clearly demonstrated by tons of after-market cooling kits that are better than the default ones.
2. contradicts the fact that the GPU should run at the maximum possible speed.
If you don't have timing margin (which is how all chips are designed), then reducing density will either reduce your maximum speed or increase power density even more (which is exactly what you are trying to avoid) or both.
Your propagation timing (65% of total timing) is ~RC. R ~ Lwire and C ~ Lwire, so timing ~ Lwire^2. By increasing Lwire by 10% to decrease power density, timing deteriorates quadratically (a ~21% hit, not 10%). The only way to counteract this is to upsize your transistors accordingly, which will increase power density by more than the reduction you were trying to obtain in the first place.
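A quick sketch of that quadratic relationship (the per-unit-length R and C values are made-up placeholders; only the Lwire^2 proportionality matters): stretching the wires by 10% costs roughly 21% in wire delay, which then has to be bought back by upsizing drivers and burning more switching power.

```python
# Toy version of the wire-delay argument; r_per_um and c_per_um are placeholder
# per-unit-length values, only the L^2 proportionality is the point.

def wire_delay(length_um, r_per_um=1.0, c_per_um=1.0):
    """Propagation delay ~ R_wire * C_wire, with both ~ length, so delay ~ length^2."""
    return (r_per_um * length_um) * (c_per_um * length_um)

base   = wire_delay(100.0)   # hypothetical 100 um route
spread = wire_delay(110.0)   # same route after spreading the layout by 10%

print(f"wire length +10% -> delay {spread / base - 1:+.0%}")  # +21%: quadratic, not linear
```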
Cooling is an increasingly annoying problem, but it's not at the point where engineers are hitting a brick wall that's unsolvable without mounting gigantic dust busters.
As long as this is the case, there's no reason to reduce the maximum clock speed for power reasons.
_xxx_ said:
Having higher density would have forced R580 to stay below 500MHz I guess.
I'm pretty sure your guess is wrong.