trinibwoy said:
Any process based on inferences (as Dave confirmed it is) will be prone to estimation error. The only way you can get bad empirical data is to have faulty readings at the source. Since both nVidia's and AMD's approaches depend on sensors to provide input data they are subject to the same risks. However, AMD goes a step further and translates that sensor reading (utilization) into another metric (power consumption) and that's where the additional source of error is introduced.
AMD doesn't depend on analog sensors (or at least not primarily, though I'm sure they have those too). It depends on activity counters.
The granularity of that is unknown, of course, but the general principle is one of building a linear model that tries to estimate power as well as possible from a given set of digital inputs.
Those inputs would be things like the number of active ALU cycles vs. the total number of clock cycles, the number of tex operations, the number of pipeline stalls, etc.
In practice, some activity counters will be better power predictors than others, so you try to build a model that assigns a different weight to each counter and check which weighting correlates best with measured power across a variety of workloads. If you do this right, your model should be a pretty good estimate of normalized power, which you then correct for various die-specific silicon parameters.
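Just to make the idea concrete, here's a rough sketch of what that calibration could look like. This is my own toy example, not anything AMD has published: the counter names, workloads and wattages are all made up, and the offline least-squares fit stands in for whatever characterization flow they actually use.

```python
# Toy sketch: estimate power as a weighted sum of per-interval activity
# counters, with the weights (plus a constant idle/leakage term) fitted
# offline against measured power for a set of calibration workloads.
import numpy as np

# Hypothetical counters, each normalized per evaluation interval:
# fraction of cycles with ALUs busy, tex ops per cycle, stall fraction.
COUNTER_NAMES = ["alu_busy_frac", "tex_per_cycle", "stall_frac"]

def fit_weights(counter_samples, measured_power):
    """Least-squares fit of per-counter weights plus a constant offset."""
    X = np.column_stack([counter_samples, np.ones(len(counter_samples))])
    weights, *_ = np.linalg.lstsq(X, measured_power, rcond=None)
    return weights  # last entry is the idle/leakage offset

def estimate_power(counters, weights):
    """Digital power estimate for one interval (no analog measurement)."""
    return float(np.dot(counters, weights[:-1]) + weights[-1])

# Illustrative calibration data (made-up numbers, watts):
samples = np.array([
    [0.05, 0.01, 0.02],   # idle
    [0.20, 0.05, 0.10],   # light workload
    [0.85, 0.30, 0.05],   # shader-heavy workload
    [0.50, 0.60, 0.15],   # tex-heavy workload
])
power = np.array([40.0, 60.0, 180.0, 150.0])

w = fit_weights(samples, power)
print(estimate_power([0.70, 0.20, 0.08], w))  # estimate for a new workload
```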
So instead of measuring it with analog sensors, you calculate it digitally. That's the deterministic part of it. In theory, you could calculate this every cycle; that's where the higher sampling rate comes in. However, I suspect that most implementations will do this with a small microcontroller on the die, for algorithmic flexibility. There's no need to do this every cycle after all.
A higher sample rate allows one to react more quickly when things really go wrong (as in: a power virus) and to use tighter guard bands, though it looks like AMD has taken full advantage of that for the 7970.
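To show why the sample rate and the guard band are linked, here's a toy version of the control loop such a microcontroller might run. Again, this is purely my speculation: the limit, margin and clock states are invented, and real firmware would obviously do something more sophisticated than stepping one DVFS state per sample.

```python
# Toy sketch of a periodic power-management loop: each sampling interval,
# compare the digital power estimate against the limit minus a guard band
# and step the clock down (or back up) accordingly. The faster you sample,
# the sooner a power-virus spike is caught, so the smaller the guard band
# you need to keep below the true limit.

POWER_LIMIT_W = 250.0      # board power limit (illustrative)
GUARD_BAND_W = 15.0        # margin for estimation error and loop latency
CLOCK_STEPS_MHZ = [925, 850, 750, 650]  # hypothetical DVFS states, fastest first

def control_step(estimated_power_w, clock_index):
    """One loop iteration: pick the next clock state from the power estimate."""
    if estimated_power_w > POWER_LIMIT_W - GUARD_BAND_W:
        # Over budget: step the clock down if a lower state exists.
        return min(clock_index + 1, len(CLOCK_STEPS_MHZ) - 1)
    # Under budget: step back up toward the top state.
    return max(clock_index - 1, 0)

# Example: a power-virus-like spike gets clocked down within a few samples.
clock = 0
for est in [180.0, 200.0, 260.0, 255.0, 240.0, 210.0]:
    clock = control_step(est, clock)
    print(f"estimate={est:5.1f} W -> clock={CLOCK_STEPS_MHZ[clock]} MHz")
```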