Why do GCN Architecture chips tend to use more power than comparable Pascal/Maxwell Chips?

It was mentioned by Anandtech that GCN Gen 4 (Polaris) already increased the instruction buffers from 12 DWORDs to 16, and the marketing slide confirms the increase:
https://www.anandtech.com/show/10446/the-amd-radeon-rx-480-preview/3

The Vega ISA document's description of the GCN line's buffer sizes (or of what counts as a previous generation) seems to be incorrect in this instance. If there were a Polaris ISA document for this to have been carried over from, I would blame that.
 
Reading this, it occurred to me that in every die shot we've seen so far, AMD's individual CUs appear as rather lengthy "stripes", while Nvidia's SMs are more compact and shaped like a fat L. The first lends itself to denser packing in rows and columns, while the latter allows shorter intra-SM wiring, albeit with chip designers having a harder time filling the gaps.
Which is an example of NVIDIA's physical optimization work for clock speeds.
 
ahhhh
I remember years ago, at Ace's Hardware, there were discussions about how much benefit in terms of power a custom design gives... it was Intel vs AMD back then.
My wild guess is anywhere from 10 to 50%. "Easily" (I mean you can get that much if you have the manpower to research it).
Given how badly K10 fared against Intel, right at the time AMD claimed to be doing zero custom design, using only standard libraries...
The disparity in R&D budgets between AMD and Nvidia is almost 10-fold.
Before the acquisition of ATi, combined AMD+ATi R&D was 2-3 times bigger than Nvidia's...
It's a little miracle that AMD is still able to fight back.
 
Zen has been reported to get quite a bit of manual tuning, and Lisa sent some of Summit's engineers over to RTG to help with physical optimization after the... Uhhh... Special Vega. I think the biggest issue between the two right now is indeed the difference in physical optimization done, and Lisa is keen on getting things back on track.
 
I think there are two things AMD needs to work on. We've seen countless articles and user anecdotes showing that many Vega cards don't need to run at 1.2v. It would seem AMD set a conservative number to harvest as many usable chips as possible. GF just doesn't seem cut out for GPU chips, hence why AMD announced they were moving their GPUs back to TSMC. The other issue AMD needs to tackle is that they don't have fine-grained power control across their entire board. They quote TBP (typical board power) because they really don't know the exact power draw of anything except the GPU itself, and even that is not as accurate as it could be. This was detailed by buildzoid in a teardown for Gamer's Nexus. Nvidia knows exactly how much power everything is using, and at high polling rates. This allows their GPU Boost to adjust rapidly to changes. This should be an easy fix for AMD, as it mostly involves board layout.
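To make the telemetry point concrete, here is a minimal sketch (in no way AMD's or NVIDIA's actual algorithm, and all function names are hypothetical) of why per-rail power monitoring matters for a boost controller: the more rails you can read, and the faster you can poll them, the tighter the clock controller can track the board power limit instead of guarding against a worst case it can't see.

```python
# Hypothetical boost control loop driven by per-rail board telemetry.
# A board without shunt monitors on every rail would have to estimate
# the non-GPU portion and leave a safety margin; one with full
# telemetry can push clocks right up against the limit.
import random

BOARD_POWER_LIMIT_W = 250.0

def read_rail_power_w():
    """Hypothetical per-rail telemetry readout (simulated here with
    random values): GPU core, memory, VRM losses, fans."""
    return {
        "core":   random.uniform(120.0, 200.0),
        "memory": random.uniform(20.0, 40.0),
        "vrm":    random.uniform(10.0, 25.0),
        "fan":    random.uniform(3.0, 8.0),
    }

def boost_step(clock_mhz, step_mhz=13):
    """One control iteration: raise the clock while total measured
    board power is under the limit, back off as soon as a sample
    exceeds it. Returns the new clock and the sampled power."""
    total_w = sum(read_rail_power_w().values())
    if total_w < BOARD_POWER_LIMIT_W:
        return clock_mhz + step_mhz, total_w
    return clock_mhz - step_mhz, total_w

# Run a few iterations of the loop from an arbitrary starting clock.
clock = 1400
for _ in range(10):
    clock, power = boost_step(clock)
```

The faster this loop runs, the smaller the transient overshoots between samples, which is the practical benefit of the high polling rates mentioned above.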
 