You're still wrong. The power target of the card caps consumption at 165 W, which is the TDP. It may not be the fastest-reacting implementation, which would explain some of the spike measurements, for instance at TPU, but under sustained gaming load please prove that the TDP is exceeded before making such claims.
THG claims its compute load had a sustained average draw above TDP, but I'm not convinced this isn't just a case of playing with a new toy in the wrong way.
Even if there were "one-microsecond spikes" as suggested by THG, I don't think they would do much to the TDP (or to the average heat dissipated from the GPU, plus some margin perhaps), because the temperature can't change that rapidly within such a short period.
You can't do much to the TDP, because it's just a technical specification that serves as a guide for the cooling-solution designer, and it's a measure of power output averaged over a short period of time (not microsecond-short).
In terms of temperature averaged over the chip, changes usually aren't that rapid, but local hot spots like highly utilized ALUs can spike in temperature in the time frame of tens of microseconds.
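To put rough numbers on that, here's a toy first-order thermal model (every value is assumed for illustration, not measured from any card): a 1 µs spike barely moves a millisecond-scale power average, and the die-level temperature can't follow it either.

```python
# Back-of-the-envelope sketch with made-up numbers (165 W baseline,
# an assumed 300 W spike height, assumed R_th/C) -- not THG's data.
import numpy as np

dt = 1e-7                              # 0.1 us step
n_ms, steps_per_ms = 20, 10_000        # 20 ms window
power = np.full(n_ms * steps_per_ms, 165.0)
for ms in range(n_ms):                 # one 1 us spike per millisecond
    power[ms * steps_per_ms : ms * steps_per_ms + 10] = 300.0

# The kind of average a TDP / power-target figure actually governs
print("mean over 20 ms: %.2f W" % power.mean())          # ~165.1 W

# First-order thermal model: C * dT/dt = P - (T - T_amb) / R_th
R_th, C, T_amb = 0.3, 0.05, 45.0       # K/W, J/K, degC -- assumed values
T = T_amb + 165.0 * R_th               # start at steady state for 165 W
temps = np.empty_like(power)
for i, p in enumerate(power):
    T += dt * (p - (T - T_amb) / R_th) / C
    temps[i] = T

print("temperature excursion over 20 ms: %.3f K" % (temps.max() - temps.min()))
```

With those assumed values the chip-wide average temperature moves by a few hundredths of a degree, which is the sense in which microsecond spikes don't touch what TDP is specifying.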
Designs have thermal sensors at various parts of the chip, and they do have failsafes. However, those sensors also have known latencies in how quickly they can react, depending on the method used, so what usually happens is that the limits are set with a safety margin of X watts below what the chip could draw and Y degrees C below what it could heat up to.
This is one reason why everyone freaked out when AMD's 290 purposefully sat at 95C, but it's something of a feather in their cap that their thermal/voltage/clock solution can react that quickly at those power levels--and with that cooler.
GPUs without that level of responsiveness leave that power and thermal budget on the table.
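As a rough sketch of why responsiveness buys back budget (every number below is an assumption picked purely to show the shape of the trade-off): the margin you have to leave is roughly how fast the quantity can move multiplied by how long the protection loop takes to react.

```python
# Toy margin estimate: margin ~= (worst-case ramp rate) x (loop latency).
# All numbers are assumptions for illustration only.

def required_margin(ramp_rate_per_us: float, loop_latency_us: float) -> float:
    """How far below the hard limit the trip point has to sit so the
    limiter can react before the limit is actually crossed."""
    return ramp_rate_per_us * loop_latency_us

# A slow power limiter vs. a fast one, against the same load behaviour
print("slow loop margin: %.0f W" % required_margin(1.0, 100.0))  # 100 W
print("fast loop margin: %.0f W" % required_margin(1.0, 10.0))   #  10 W

# Same idea for a local hotspot and a thermal throttle loop
print("slow thermal margin: %.1f C" % required_margin(0.005, 1000.0))  # 5.0 C
print("fast thermal margin: %.1f C" % required_margin(0.005, 100.0))   # 0.5 C
```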
I would be very interested in what kind of compute load was used. Even if you only take plain averages, you should be able to see significantly higher power values at the wall socket integrated over time - especially with a difference of almost a hundred watts. Maybe Igor could elaborate?
He could try comparing with more standard power measurement methods, like those that try to isolate the board, or a Kill-A-Watt meter. The claimed sustained violation of the TDP is bigger than typical margins of error from those schemes, and would show up as a higher value at the wall.
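A quick sanity check with assumed session numbers (the 265 W figure and the two-hour run are hypothetical, and this ignores PSU efficiency and the rest of the system, which would only widen the gap at the wall): a sustained ~100 W overshoot integrates to an energy difference far larger than a cheap wall meter's error.

```python
# Hypothetical numbers: 165 W power target vs. a claimed ~265 W sustained
# draw, over an assumed two-hour sustained test.
tdp_w, claimed_w, hours = 165.0, 265.0, 2.0

extra_kwh = (claimed_w - tdp_w) * hours / 1000.0
print("extra energy at the wall: %.2f kWh" % extra_kwh)           # 0.20 kWh

# Even at a pessimistic few-percent meter accuracy, the expected reading
# error is an order of magnitude smaller than that 0.20 kWh gap.
total_kwh = claimed_w * hours / 1000.0
print("reading error at ~3%%:    %.3f kWh" % (0.03 * total_kwh))  # ~0.016 kWh
```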
Taking a noisy input and then flipping a few settings to average it leaves open the possibility of a sampling or interpretation error.
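To illustrate the kind of error I mean, reusing the same sort of toy spiky waveform as above (all numbers made up, nothing to do with THG's actual capture): if the sample clock happens to line up with narrow periodic spikes, the averaged reading can be wildly off in either direction.

```python
# Synthetic waveform: 165 W baseline, a 1 us / 300 W spike once per
# millisecond, at 0.1 us resolution. Entirely made-up numbers.
import numpy as np

steps_per_ms = 10_000
power = np.full(20 * steps_per_ms, 165.0)
for ms in range(20):
    power[ms * steps_per_ms : ms * steps_per_ms + 10] = 300.0

print("true mean            : %.2f W" % power.mean())                  # ~165.1 W
print("samples on the spikes: %.2f W" % power[::steps_per_ms].mean())  # 300.0 W
print("samples between them : %.2f W" %
      power[steps_per_ms // 2::steps_per_ms].mean())                   # 165.0 W
```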
The jittery graphs of exactly one architecture have no comparative value, and the analysis, such as it is, attributes the spiking to things like turbo/voltage steps they didn't bother recording, then extends a comparison to AMD's GPUs that they didn't even measure.
All we get from that is people breathlessly pointing at spikes they have no context to understand.
It might be interesting if any spikes managed to breach the maximum safe limits for the card or power supply, but I don't think those limits are iron-clad enough to justify freaking out over 1 µs twitches every once in a while. And I see no sign that the reviewer read through the electrical and thermal specifications in the PCIe spec or Nvidia's card/thermal solution guide.
I didn't even mention power consumption; I was just pointing out that all the reviews are extremely positive... for a variety of reasons.
Playing devil's advocate: Bayer's Heroin also launched to rave reviews...