NVIDIA Maxwell Speculation Thread

So, it seems to me AMD is on a warning: performance per watt doesn't crumble when Titan X is overclocked, which probably indicates there's another 20% of performance lying in wait for the 390X, e.g. overclocked AIB special versions of a 980Ti running at 1400MHz with 6GB of memory.


The problem is the temperature, not so much the TDP... 250W is rather conservative on Nvidia's side, but 83°C is always the limit at the standard fan speed when gaming (nearly every game hits this limit).

tGPU < 63°C: 1190 MHz @ 1.174V
63°C < tGPU ≤ 73°C: 1177 MHz @ 1.162V
73°C < tGPU ≤ 80°C: 1164 MHz @ 1.149V
tGPU > 80°C: 1152 MHz @ 1.137V

So the max frequency shown by Hardware.fr was 1152MHz in games (Dying Light, though that game stuttered badly). Let's say 1120MHz as an average (which looks on par with the 83°C limit).
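For what it's worth, here's a tiny Python sketch of those temperature bins as a step function, using only the Hardware.fr numbers quoted above; real GPU Boost also weighs the power limit and load, so treat it as an illustration of the temperature side only:

```python
# Toy model of the Titan X boost bins quoted above (Hardware.fr's numbers).
# Real GPU Boost also factors in power limit and load; this only captures
# the temperature-driven steps.

def boost_state(tgpu_c):
    """Return (core clock in MHz, core voltage in V) for a GPU temperature in °C."""
    if tgpu_c < 63:
        return 1190, 1.174
    elif tgpu_c <= 73:
        return 1177, 1.162
    elif tgpu_c <= 80:
        return 1164, 1.149
    else:
        return 1152, 1.137  # the bin you settle into once the 83°C target bites

for t in (55, 70, 78, 83):
    mhz, volts = boost_state(t)
    print(f"{t}°C -> {mhz} MHz @ {volts} V")
```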



If Nvidia wants to push it to 1250MHz, the fan speed curve will need to be changed drastically or the cooler modified (AIB versions should deal with it better). Of course, at home, this is easy to do when overclocking.
 
Titan X Black Edition?

NVIDIA has certainly been holding back thus far with Maxwell. They could easily have released the Maxwell parts with 15-20% higher clocks, but I think it made the best business sense to let AMD remain competitive (selling their chips nearly at cost). Sort of like how Intel does with CPUs.

Bottom line is, NVIDIA would never put itself at a performance disadvantage in the name of power efficiency. They simply no longer have to make 300W cards to beat AMD.

If the rumors are true then Fiji sounds like it's going to be pretty awesome and could be a fair bit faster than Titan X, at least in 4K. Faster than a theoretical 980Ti with 20% more speed than Titan X, though? I doubt it. And given Nvidia has yet to play the HBM card, which comes with Pascal, I can't see them losing the crown any time soon (for more than a month or two). I hope I'm wrong though; it's about time we had another 9700 / 5870.
 
So, it seems to me AMD is on a warning: performance per watt doesn't crumble when Titan X is overclocked, which probably indicates there's another 20% of performance lying in wait for the 390X, e.g. overclocked AIB special versions of a 980Ti running at 1400MHz with 6GB of memory.

Not at all; the 390X is supposed to be in the same power envelope as a 290X, which already draws 20-70 watts less under load than a Titan X. http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/16

The Titan X is an interesting card, and probably bandwidth bound, without much Nvidia can do about it, as they have neither stacked memory nor a 512-bit bus. Meanwhile the 390X will probably be compute bound, with AMD's engineers obsessed over bandwidth, as they have both a 512-bit bus and HBM. I guess it depends on what you'll be running for what you want. It'll be interesting to see if the rumored compute improvements come to the next AMD architecture.
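To put rough numbers on the bandwidth-vs-compute hunch, here's a quick bytes-per-FLOP sketch. The Titan X line uses its official specs, while the Fiji/390X line leans on the rumoured 4096-shader, first-gen-HBM figures floating around, so it may well be wrong:

```python
# Bytes-per-FLOP sketch for the "bandwidth bound vs compute bound" argument.
# Titan X: official specs. Fiji/390X: rumoured figures only, may be wrong.

def bytes_per_flop(bandwidth_gbs, tflops_fp32):
    """GB/s of memory bandwidth available per GFLOP of FP32 throughput."""
    return bandwidth_gbs / (tflops_fp32 * 1000)

titan_x = bytes_per_flop(336.5, 6.1)  # 384-bit bus @ 7 Gbps, 3072 cores @ ~1 GHz
fiji    = bytes_per_flop(512.0, 8.6)  # rumoured: 4096 shaders @ ~1.05 GHz, HBM

print(f"Titan X: {titan_x:.3f} B/FLOP, rumoured Fiji: {fiji:.3f} B/FLOP")
```

If those rumours hold, Fiji ends up with a bit more bandwidth per FLOP than GM200, which lines up with the bandwidth-bound vs compute-bound split above.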
 
I think you've found the only review where 290X draws less power in the power test.
Yeah, I noticed that too. My 'Nvidia good, AMD bad' worldview wrt power was on shaky ground. Strange how their results are completely the opposite of everybody else's.
There can be a large variation between samples, but a delta of 50W and more is a bit much.

Edit: looks like Anandtech uses the quiet mode as reference instead of Uber, which is what everybody else uses.
 

Odd, you are 100% right... And what's more, they show the difference between a 980 and a Titan X to be a delta of 30 watts of power draw between TechReport and AnandTech, and both use Crysis 3 as the test case. I wonder which one is the better representation? I'll grant that the settings are probably a bit different between, say, TechReport and AnandTech for Crysis 3 (one uses FXAA and one SMAA, for a start), but they show a large power usage difference... I wonder what exactly is causing it, and who, if anyone, would be considered to have the better testing methodology.
 
The AnandTech results are a big outlier compared to TechReport, PCPer, Guru3D, hardware.fr, tomshardware and TechPowerUp, which show the 290X drawing up to 50W more than the Titan X. AnandTech still uses wall power instead of isolated GPU power, which works to the disadvantage of the GPU with the highest FPS, since it makes the CPU work harder comparatively. (It would be interesting to see by how much.)

And another edit: PCPer has both wall power and isolated power: http://www.pcper.com/reviews/Graphi...GM200-Review/Overclocking-Power-Consumption-N
The difference is 47W for wall power (in favor of the X) and, at the end, slightly less in isolated power. But that's only one game. They don't say if wall power is only one game as well.
What's interesting is that the power difference is much less in the beginning of the recording and then suddenly opens up. Who knows what's up with that...
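To make the wall-power caveat concrete, here's a small sketch of how a wall-power delta can be peeled back into a GPU-only estimate. The PSU efficiency and CPU-watts-per-FPS constants are assumptions for illustration, not measurements:

```python
# Why a wall-power delta overstates the faster card's GPU draw: part of it is
# PSU loss and part is the CPU working harder to feed the higher frame rate.
# Both constants below are illustrative assumptions, not measured values.

PSU_EFFICIENCY = 0.90        # assumed PSU efficiency at this load
CPU_WATTS_PER_FPS = 0.4      # assumed extra CPU package power per extra FPS

def gpu_delta_estimate(wall_delta_w, fps_fast, fps_slow):
    """Estimate the GPU-only power difference from a measured wall-power difference."""
    dc_delta = wall_delta_w * PSU_EFFICIENCY               # power drawn inside the box
    cpu_delta = (fps_fast - fps_slow) * CPU_WATTS_PER_FPS  # extra CPU work on the faster card
    return dc_delta - cpu_delta

# e.g. a 50 W wall delta where the faster card also delivers 30 FPS more:
print(gpu_delta_estimate(50, 90, 60))  # ~33 W attributable to the GPU itself
```

With assumptions in that ballpark, roughly a third of a 50W wall delta could be PSU loss and extra CPU work rather than the GPU itself.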
 
The Anandtech results are a big outlier compared to TechReport, PCPer, Guru3D, hardware.fr, tomshardware and TechPowerup. They show the 290X up to 50W worse than the Titan X. AnandTech still uses wall power instead of isolated GPU power. That works to the disadvantage of the GPU that has the highest FPS since it makes the CPU work harder comparatively. (It would be interesting to see by how much.)
Our test is taken at 2560x1440, High settings, letting the game run 20+ minutes on Post Human. A large performance difference does lead to a CPU power consumption difference - not ideal, but at least a known issue. If our results differ widely then I'm afraid I wouldn't know why off the top of my head. Perhaps it's the open testbed that most other sites use? C3 throttles based on temperature for GTX Titan X; it runs out of thermal headroom before it runs out of power headroom.
 
I'd quietly like to point out that the new tests that TR used weren't just developed by me. There's a quiet cabal of folks that worked on the framework behind the suite and some of the tests, which made it easy for me to pull together those tests and some others to ship to TR, so I can't and don't want to take all of the credit. Thanks very much to those folks; you know who you are :love:
 
I think you've found the only review where 290X draws less power in the power test.

Do we know that GPU utilisation is 100% on 290X in this game/benchmark?
Actually, it varies quite a lot, which is why, depending on the game chosen, you can find this discrepancy. And not only for the 290X...



[hardware.fr chart: power consumption per game]
 

That's not a "game" where you see the low wattage for the 290X... that's the "standby screen", i.e. display off.
 
Obviously. But Lanek's point was that there are fluctuations between games, namely BF4 and Anno 2070 in this instance. On the R9 290X, it amounts to 40W, and up to 49W for the Titan X.
 
TBH, remarkable as it may be that the X beats the 290X, it doesn't make much sense to compare raw power consumption between Titan X and R9 290X. They're too far apart in performance to matter.

Perf/W is really the metric that matters, and the one to keep an eye on when the 390X appears. We're getting conflicting data on it. On one hand, it consumes about the same as a 290X, on the other, it has water cooling.
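Just to spell out the metric, perf/W is nothing more than average FPS divided by board power; the numbers below are placeholders to show the shape of the comparison, not review data:

```python
# Perf/W comparison sketch. All FPS and board-power figures are placeholders;
# substitute whichever review numbers you trust.

def perf_per_watt(avg_fps, board_power_w):
    return avg_fps / board_power_w

cards = {
    "Titan X":       (60, 250),  # placeholder FPS, typical board power
    "R9 290X":       (45, 290),  # placeholder
    "390X (rumour)": (65, 275),  # pure speculation until it launches
}

for name, (fps, watts) in cards.items():
    print(f"{name}: {perf_per_watt(fps, watts):.3f} FPS/W")
```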
 
TBH, remarkable as it may be that the X beats the 290X, it doesn't make much sense to compare raw power consumption between Titan X and R9 290X. They're too far apart in performance to matter.

Perf/W is really the metric that matters, and the one to keep an eye on when the 390X appears. We're getting conflicting data on it. On one hand, it consumes about the same as a 290X, on the other, it has water cooling.

The water cooling is because AMD doesn't do so well on the heat dissipation end, and that will probably continue. Not the best thing for "hyper overclockers!" But unless you're an enthusiast on that end it shouldn't be the end of the world, as TDP and heat dissipation aren't linked in the strictest manner. Looking forward to AMD's launch, and the possibility of 16GB of HBM. Useless for gaming, but apparently some compute applications love it.

As for more Maxwell cards, I'd skip the Titan X unless you really want that 12GB, as I don't doubt a 6GB "980Ti", or whatever they'll name it, is coming at a far more reasonable price and probably the same performance otherwise, if not better. They apparently have TDP room, though they may be bandwidth bound. But since SK Hynix is shipping 8Gbps GDDR5, bandwidth could be boosted by about 14% as well.
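The "about 14%" falls straight out of the memory clock bump on GM200's 384-bit bus; a quick sanity check, assuming the bus width stays at 384-bit:

```python
# Bandwidth check for the 7 Gbps -> 8 Gbps GDDR5 bump on a 384-bit bus.

def gddr5_bandwidth_gbs(effective_gbps, bus_width_bits=384):
    """Peak bandwidth in GB/s: 1 bit per pin per effective transfer."""
    return effective_gbps * bus_width_bits / 8

current = gddr5_bandwidth_gbs(7.0)  # what Titan X ships with
faster  = gddr5_bandwidth_gbs(8.0)  # SK Hynix's 8 Gbps parts
print(current, faster, f"+{(faster / current - 1) * 100:.1f}%")  # 336.0 384.0 +14.3%
```

Which matches the "about 14%" figure above.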
 
Yeah, it seems like almost a no-brainer that Nvidia will launch a 980Ti/990Ti with both faster memory and a faster core clock at a cheaper price soon after the 390X comes along (assuming the 390X is faster than Titan X, that is).

The memory size will be interesting: 6GB sounds like the obvious choice, but I wouldn't be surprised to see more expensive 12GB variants as well if the 390X comes in both 4GB and 8GB flavours. It's not a good marketing position to have the faster GPU but with less memory.

I'm going to hold out for Pascal (or Arctic Islands) anyway. Maxwell is good but doesn't feel like a true generational leap over my 670OC.
 
For the 12GB version, I can imagine Nvidia will wait a bit and let partners do 12GB versions, even if the GTX 980Ti/990Ti has SMs disabled... especially if they want to keep the premium price of the Titan.
 
Perf/W is really the metric that matters

To whom?

What kind of crazy electricity contracts do you people have where +50W seems to make all the difference in the world for a graphics card that is used to its fullest between 10 and 20 hours a week?

Because the last time I did the calculations, the ~150W difference between using an R9 290X and a GeForce 750 Ti for gaming ~15 hours/week amounted to 10€ per year or less. And I'm damn sure that electricity bills in my country aren't better than in the rest of Europe.
I understand the concern about power consumption in laptops, but what's with this obsession over single- or double-digit differences in power consumption for cards that are supposed to go into a desktop PC?
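The arithmetic behind that estimate, as a quick sketch; the €/kWh price is an assumption, so plug in your own tariff:

```python
# Yearly cost of an extra ~150 W of GPU draw for ~15 h of gaming per week.
# The electricity price is an assumed value; adjust for your own tariff.

def yearly_cost_eur(extra_watts, hours_per_week, price_per_kwh):
    kwh_per_year = extra_watts / 1000 * hours_per_week * 52
    return kwh_per_year * price_per_kwh

print(yearly_cost_eur(150, 15, 0.09))  # ~10.5 EUR/year at a cheap tariff
print(yearly_cost_eur(150, 15, 0.20))  # ~23.4 EUR/year at a pricier one
```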
 
The majority of the time, we're looking at devices that are power-limited. Whichever part needs less power to reach the same performance bar, and needs fewer heroics to keep it from burning to a crisp, is likely to have the most headroom.
 