Well that's most definitely not the norm. Unless there's something wrong with Anandtech's review methodology or the games they test?
http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/16
I also happen to have a 960 and a 1060 that always always always run above boost clocks... so let's just say I don't agree with your wild generalization and strawman argument.
I don't know the scenes Anandtech is testing, so I cannot say whether something is wrong there. As with all reviews, they test a certain sample of the available games and applications and thus cannot catch every possible behaviour - I guess that goes without saying.
What I can say is that in our launch review we already had the Titan X (Maxwell) clocking below its typical boost in Risen 3, for example:
http://www.pcgameshardware.de/Gefor...orce-GTX-Titan-X-Test-Benchmark-1153643/2/#a1
(In case it is not obvious: when the free-running boost is slower than the forced standard clocks, the card is below its typical boost.) Besides that, I just gave you an example of a card and a case that is exhibiting this behaviour right now (Titan X-M vs. CoD:MW Remastered).
You call that a strawman - be my guest.
History tells us that those claims are believable, based on previous launches. Besides, I don't think you can do much one way or another to show SGEMM in a good light...
History also tells us that marketing material is carefully selected most of the time.
So there's a lot of evidence that points to an increase in performance and perf-per-watt. There's none pointing to the contrary, or at least you have provided none. "I am a skeptic until something is fully proven" is not evidence. This is a speculation thread; if you are going to dispute someone's claim that "there appears to be a perf/watt increase", which is based on the available evidence, I think you should provide evidence of your own rather than just dismissing anything and everything you choose not to believe.
For starters: I never said performance per watt would drop ("to the contrary"), nor did I say there'd be no performance increase per watt ("dispute someone's claim that 'there appears to be a perf/watt increase'"). Instead, I said: "I want to compare real-world achieved sustained performance instead of spreadsheet metrics. For BOTH cards, so no apples vs. oranges, as you keep misunderstanding or want to force into my argument for some reason."
I only caution against taking +50% perf/watt for granted when all the "evidence" for it is the spec-sheet number for peak boost throughput. In fact, you can read it for yourself here:
https://forum.beyond3d.com/posts/1989770/
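For reference, this is the kind of back-of-the-envelope arithmetic a pure spec-sheet comparison boils down to - a rough sketch using the published P100/V100 launch figures (FP32 core counts, peak boost clocks, 300 W SXM2 TDPs), none of which are measured sustained values:

# Peak FP32 throughput from the spec sheet: cores x 2 FLOP/clock (FMA) x peak boost clock
p100_peak_tflops = 3584 * 2 * 1.480 / 1000   # ~10.6 TFLOPS at 300 W TDP
v100_peak_tflops = 5120 * 2 * 1.455 / 1000   # ~14.9 TFLOPS at 300 W TDP
print(v100_peak_tflops / p100_peak_tflops)   # ~1.4x on paper - valid only if both cards hold their boost clock

If either card cannot hold that boost clock under a sustained load, the ratio shifts - and that is exactly the unknown.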
The point is not even whether the clocks go down in some workloads. What matters is whether Volta behaves differently from previous generations, and so far there's absolutely nothing that even suggests this is the case.
Of course: unlike all the generations before, they have not stated a base clock for GV100 yet. Maybe that's an oversight, maybe it's a number that sounds a little too low for the marketing.
- and that's all I've been trying to tell you guys in this thread: it all depends on how much the clocks go down under full load. When we know that, we can estimate how much of an efficiency improvement Nvidia has achieved coming from GP100 - whether it's 5%, 50% or 150%.
The funny thing is: I don't care which it is. I just want numbers that are not based on some crack-pipe dream, but hard numbers from real, sustained workloads.
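To make that concrete, here is a minimal sketch of the calculation I mean, with placeholder sustained clocks and power draws - these are NOT measurements, just illustrative inputs to show how the estimate would work once real sustained numbers exist:

def fp32_gflops_per_watt(cores, sustained_clock_ghz, sustained_watts):
    # 2 FLOP per core per clock (FMA), at the clock the card actually holds under load
    return cores * 2 * sustained_clock_ghz / sustained_watts

gp100 = fp32_gflops_per_watt(3584, 1.30, 300)   # placeholder sustained clock/power, not measured
gv100 = fp32_gflops_per_watt(5120, 1.30, 300)   # placeholder sustained clock/power, not measured
print("perf/watt gain: {:+.0%}".format(gv100 / gp100 - 1))

Plug in the clocks and power the cards really hold in SGEMM or a demanding game, and you get the actual efficiency gain - which is all I'm asking for.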