I do recall an old slide deck that indicated how quickly it adjusted power, on the order of microseconds. That's on me to find, though, and unfortunately I can't locate it, so I'll be honest in saying I may have imagined it. It's interesting, but it's all wild conjecture without any kind of actual data. This is the first I've heard of a latency of about ~1 ms for the clock adjustments.
But I didn't say 1 ms latency; 1 ms for a power transfer is way too long. Typical DVFS has some latency they have to account for, I believe on the order of nanoseconds. There is going to be added latency whenever you go through the CCX.
I chose stepping for easy math, but as for your statements: he did not use any percentages for anything. As for advertised AMD game clocks, they are typically about 10% below the max boost, representing a conservative average the chip can reliably clear. Second, Cerny told us that when there is any kind of downclock, it will usually be by 2 or 3%, not 5% or 10%.
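Just to make the percentages concrete, here's a quick back-of-the-envelope sketch. The 2.23 GHz cap is the figure Cerny gave for the PS5 GPU; everything else is simple arithmetic, not measured data:

```python
# Rough comparison of a 2-3% downclock (what Cerny described) versus a
# 10% "game clock"-style margin below max boost. Illustrative only.
max_boost_ghz = 2.23  # PS5 GPU frequency cap from the Road to PS5 talk

for pct in (2, 3, 10):
    clock_ghz = max_boost_ghz * (1 - pct / 100)
    print(f"{pct}% below max boost: {clock_ghz:.2f} GHz")

# 2% -> ~2.19 GHz, 3% -> ~2.16 GHz, 10% -> ~2.01 GHz
```

The point being that a 2-3% dip is a rounding error next to the kind of margin baked into a typical game clock.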
Frequency has a tremendous impact on power at a given activity level. FurMark kills the system because silicon activity is very high across the chip: all the cores are constantly in use, with very little downtime before they have to perform the next calculation. But I think the biggest proof is the actual power consumption during different scenes. It's actually extremely rare to even reach 200 W during gameplay (I haven't seen it yet in any analysis), while many cutscenes are consistently at that level.
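For anyone who wants the intuition behind that, the usual first-order model is that dynamic power scales as activity × C × V² × f. Here's a minimal sketch with made-up activity, voltage, and capacitance values (none of these are real PS5 numbers), just to show why a FurMark-style load costs so much more than a game scene at the same clock, and why a small downclock plus the lower voltage that comes with it claws a lot back:

```python
# Toy first-order dynamic power model: P ~ activity * C_eff * V^2 * f.
# All constants below are invented for illustration; only the shape of the
# relationship matters.
def dynamic_power(activity, voltage, freq_ghz, c_eff=100.0):
    """Relative dynamic power in arbitrary units."""
    return activity * c_eff * voltage**2 * freq_ghz

game_scene = dynamic_power(activity=0.4, voltage=1.00, freq_ghz=2.23)
furmark    = dynamic_power(activity=0.9, voltage=1.00, freq_ghz=2.23)
furmark_dc = dynamic_power(activity=0.9, voltage=0.95, freq_ghz=2.16)  # ~3% downclock

print(f"typical game scene          : {game_scene:.0f}")
print(f"FurMark-style load          : {furmark:.0f}")   # over 2x at the same clock
print(f"FurMark, ~3% lower clock + V: {furmark_dc:.0f}")
```

Same frequency, wildly different power, purely because of activity; and the V² term is why a few percent of clock buys back a disproportionate amount of power.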
The reason GPUs can get away with boosting to higher clock rates is that they're counting on being able to spend that power budget to speed up the calculations.
And I wouldn't say rare at all.
Higher FPS draws more power in almost all cases; that should have been the takeaway from the DF video.
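A crude way to see why: if you treat the energy cost per frame of a given scene as roughly fixed, average power just scales with frame rate. The joules-per-frame number here is invented purely for illustration:

```python
# Toy model: average power ~= energy per frame * frames per second.
# 2.5 J/frame is an invented figure; real energy per frame also varies
# with scene complexity, so this is only directional.
energy_per_frame_j = 2.5

for fps in (30, 60, 120):
    avg_power_w = energy_per_frame_j * fps
    print(f"{fps} fps -> ~{avg_power_w:.0f} W average draw")
```

Uncap the framerate and the GPU never gets a chance to sit idle between frames, which lines up with the higher-FPS-draws-more-power takeaway from the DF video.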