NVIDIA Maxwell Speculation Thread

2 x 970 is a pretty impressive package with regard to price, performance and features. A GTX 990 with two 980 cores wouldn't suck either; the low power consumption figures are basically begging for a card like that :)
Perhaps a GTX 990 and Maxwell Titan in early 2015? $999 each?
 
Even if there were "one microsecond spikes" as suggested by THG, I don't think it would do much to the TDP (or the average heat dissipated from the GPU, plus some margin perhaps), because the temperature can't change that rapidly within such a short period.

As long as the average power dissipated by the GPU stays within its TDP, a heatsink designed with that TDP in mind should do its job nicely. Not sure why people are freaking out here :oops:
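For anyone who wants to convince themselves, here's a toy first-order (RC) thermal model in Python. The thermal resistance and time constant are made-up illustrative values, not measurements of any real cooler:

[code]
import math

R_TH = 0.2  # K/W, junction-to-ambient thermal resistance (illustrative assumption)
TAU = 1.0   # s, lumped thermal time constant of die + cooler (illustrative assumption)

def extra_temp_rise(delta_p_watts, duration_s):
    """Closed-form step response of a first-order thermal model:
    rise = dP * R_th * (1 - exp(-t / tau))."""
    return delta_p_watts * R_TH * (1.0 - math.exp(-duration_s / TAU))

print(extra_temp_rise(100.0, 1e-6))  # 1 us, 100 W spike: ~0.00002 K, invisible
print(extra_temp_rise(100.0, 60.0))  # same 100 W held for 60 s: ~20 K, very visible
[/code]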

Sounds to me like nit-picking because there must be a flaw, right? Anyway, if the GM204 can do this given its relative power requirements, think what the big Maxwells can do :devilish:
 
I wouldn't call a 240W average over a 60-second period a transient spike, though ;-). At least I assume the measurements were done properly; the equipment certainly looks expensive enough :).

I would be very interested in the type of compute load mentioned. Even if you only take plain averages, you should be able to see significantly higher power values at the wall socket integrated over time, especially with a difference of almost a hundred watts. Maybe Igor could elaborate?
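To put a number on that, even the crudest integration should show it. A quick sanity check in Python, using the roughly 100W delta from the article as the only input:

[code]
extra_watts = 100.0                      # claimed GPGPU-vs-gaming average delta
per_minute_kj = extra_watts * 60 / 1000  # 6 kJ of extra energy every minute
per_hour_kwh = extra_watts / 1000.0      # 0.1 kWh extra every hour
print(per_minute_kj, "kJ/min;", per_hour_kwh, "kWh/h")
[/code]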

It is a disgrace in my opinion that Nvidia released a new flagship card that is only slightly more powerful than the 780 Ti. In my opinion, they should have waited until they could release a card with a good leap in performance, or not released anything at all. What they really need to do is push for access to 20 or 16nm. If they had waited to launch this card on 16nm with 3000+ CUDA cores, it would have been far more powerful than the 780 Ti.

OMG, you're completely right! Frankly, I really don't get why any company bothered to release any graphics card after the mid-eighties, since it was very clear from the outset that they all lack the power to do convincing virtual reality simulations down to the sub-nuclear level.
 
Well, he is overgeneralizing here, because the Maxwell-based GTX 750 Ti has MUCH lower power consumption in Tom's "Torture GPGPU" test than any comparable Kepler or Radeon GPU. And to actually gauge efficiency, one would need to see both the GPGPU power consumed and the performance (the latter of which was not provided at all, as far as I can tell).

Note that GTX 980 actually handily outperforms GTX 780 Ti with respect to compute performance in most cases (excluding double precision compute of course):

http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/20

It's a 970/980 GPU review with a methodology that measures GPU consumption on all rails at very high resolution, and it demonstrates that the reference 980 consumes 100W more on (non-specified) GPGPU workloads while the reference 970 consumes 72W more. The 750 Ti shows the same consumption on gaming and GPGPU, yes, but the 750 Ti is arguably irrelevant for GPGPU.

And contrary to what was argued here, that consumption isn't a mere "spike"; it's consistent for at least one minute. If anything, we see occasional "drops" instead of "spikes".

[Image: Nvidia GeForce GTX 980 reference, total power consumption across all rails]


We also see this increase on AMD cards: the reference R9 290X in quiet mode consumes 61W more and the reference R9 280X consumes 32W more. This means that while on gaming the reference 970 consumes 20% less than the R9 280X and the reference 980 consumes 25% less than the reference R9 290X in quiet mode, on those GPGPU workloads the 970 consumes only 1% less while the 980 consumes only 7% less.
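A rough sanity check of those ratios in Python. The GPGPU deltas are the ones quoted above; the gaming baselines are approximate placeholders chosen so the gaming percentages come out right, not Tom's exact figures:

[code]
cards = {
    #          (gaming W, approx) (GPGPU delta W, quoted)
    "GTX 970": (170.0,  72.0),
    "R9 280X": (212.0,  32.0),
    "GTX 980": (180.0, 100.0),
    "R9 290X": (240.0,  61.0),
}

def savings(a, b, gpgpu=False):
    """How much less card a consumes than card b, as a fraction."""
    (ga, da), (gb, db) = cards[a], cards[b]
    return 1 - (ga + da) / (gb + db) if gpgpu else 1 - ga / gb

print(f"gaming 970 vs 280X: {savings('GTX 970', 'R9 280X'):.0%} less")              # ~20%
print(f"GPGPU  970 vs 280X: {savings('GTX 970', 'R9 280X', gpgpu=True):.0%} less")  # ~1%
print(f"gaming 980 vs 290X: {savings('GTX 980', 'R9 290X'):.0%} less")              # ~25%
print(f"GPGPU  980 vs 290X: {savings('GTX 980', 'R9 290X', gpgpu=True):.0%} less")  # ~7%
[/code]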

So yes, I suppose we have evidence that there's some optimization there, showing that Maxwell is the result of clever engineering, not some magic fairy dust. And that it has its limitations.

I would like to congratulate Igor Wallossek on his findings and call the rest of the tech reviewers to follow up on this.
 
Strange results at THG.
GTX 750 Ti (5 SMM @ >1020 MHz): 58W
GTX 970 (13 SMM @ >1050 MHz): 240W
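The per-SMM arithmetic is what makes those two numbers look odd together; a rough cut that ignores the 970's wider memory interface, extra ROPs and bigger board:

[code]
print(58 / 5)    # ~11.6 W per SMM on the GTX 750 Ti
print(240 / 13)  # ~18.5 W per SMM on the GTX 970, at similar clocks
[/code]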

The BIOS files at TPU do not mention power values, unlike those for GM1xx or GKxxx cards.
Maybe power management is done by the GPU on GM2xx with the support of driver profiles? And THG found a non-profiled app; Furmark is restricted.
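If the power limits really live in the driver rather than the BIOS on GM2xx, they should at least be visible through NVML. A minimal sketch, assuming a system with the NVIDIA driver and the pynvml bindings installed:

[code]
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
# NVML reports milliwatts; these values reflect what the driver enforces at
# runtime, independent of what the BIOS file does or does not expose.
print("current draw  :", pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0, "W")
print("enforced limit:", pynvml.nvmlDeviceGetPowerManagementLimit(handle) / 1000.0, "W")
pynvml.nvmlShutdown()
[/code]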
 
I would like to congratulate Igor Wallossek on his findings and call the rest of the tech reviewers to follow up on this.


Yes, me too. Hopefully there exist tech reviewers with a modicum of electrical engineering background out there to explain publicly why this type of measurement is meaningless. It's just capturing noise.
 
http://blogs.nvidia.com/blog/2014/09/18/maxwell-virtual-reality/

This might make me switch to Nvidia for the first time since their FX line. The ability for each GPU to render what one eye sees is pretty big.

From the blog post: "SLI: We're also tuning the way our GPUs work together when they're paired to drive virtual reality experiences. In the past, our GPUs would alternate rendering frames when joined in SLI mode. For VR, we're changing the way our GPUs work in SLI, with each GPU rendering one display."
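Conceptually it's just a different work split. A toy Python sketch of the scheduling difference (this is not Nvidia's API, only an illustration of alternate-frame rendering versus one GPU per eye):

[code]
def afr_schedule(frames, gpus=("GPU0", "GPU1")):
    # Classic SLI alternate-frame rendering: GPUs take turns on whole frames.
    return [(frame, gpus[i % len(gpus)]) for i, frame in enumerate(frames)]

def vr_sli_schedule(frames):
    # VR split: each GPU permanently owns one eye of every frame.
    jobs = []
    for frame in frames:
        jobs.append((frame, "left eye", "GPU0"))
        jobs.append((frame, "right eye", "GPU1"))
    return jobs

print(afr_schedule(["f0", "f1", "f2"]))
print(vr_sli_schedule(["f0", "f1"]))
[/code]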

Hopefully AMD can come up with some similar stuff so that we have options for VR.

I'm not planning to upgrade my 7950 till the consumer Rift hits, so it will be interesting to see how this shakes out.
 
Where does Tom's say what applications they are measuring? If it's a custom workload, have they ever detailed it? Is the performance measured on the same workload so that Perf/W can be estimated on it? There's probably a very interesting story in there, but it's frustrating to see someone with access to such low-level tools not give all the data. It basically forces others to replicate the same analysis with more rigour if we want to conclude anything really interesting...
 
Yes, me too. Hopefully there exist tech reviewers with a modicum of electrical engineering background out there to explain publicly why this type of measurement is meaningless. It's just capturing noise.

Possibly; it needs to be followed up on. Meanwhile we can compare his gaming numbers with what other reviews got, but power consumption tests are a mess.

Anandtech's ratio between the 980 and 290X numbers seems to corroborate Tom's measurements. Guru3D's "calculated TDP" figures for the 970 and 980 are close enough, but the 280X and 290X are not (30-40W over). TechReport is an oddball because it puts the 280X only 13W over the 970 and the 290X 101W over the 980.
 
GM204 is not the flagship. I don't think you'll have to wait too long to see what Maxwell looks like at 250W... All these complaints about GM204 not being a flagship are a little short sighted IMO.

If it is not the flagship, then it shouldn't have such a high price. For the slight performance advantage it gives over the 970, it should cost less.
 
Where does Tom's say what applications they are measuring? If it's a custom workload, have they ever detailed it? Is the performance measured on the same workload so that Perf/W can be estimated on it? There's probably a very interesting story in there, but it's frustrating to see someone with access to such low-level tools not give all the data. It basically forces others to replicate the same analysis with more rigour if we want to conclude anything really interesting...

Is maybe this the key?
"The measurement intervals need to be adjusted depending on the application in question, of course, in order to avoid drowning in massive amounts of data. For instance, when we generate the one-minute graphs for graphics card power consumption with a temporal resolution of 1 ms, we have the oscilloscope average the microsecond measurements for us first."
[my bold]
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-970-maxwell,3941-11.html

I don't know anything beyond what the text says, though. But from my experience with Nvidia's Boost, mildly higher clocks mean significantly higher voltages. Depending on the averaging and the specific compute load, this could lead to the measured amps being multiplied by a higher voltage than is actually being applied in reality.

For example, when heated and loaded enough and thus dropping to base clock (1126 MHz), our sample ran at 1.043 volts with around 80% of the allowed board power according to Nvidia Inspector. While in the highest boost state at 1278 MHz, it was supplied with 1.212 volts, which is quite a difference.
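That effect is easy to put numbers on. A small Python check using the two quoted voltages; the current values are made up purely for illustration:

[code]
# Half the averaging window at base clock, half at top boost (illustrative split).
samples = [
    (1.043, 140.0),  # (volts, amps) at the 1126 MHz base clock
    (1.212, 160.0),  # (volts, amps) at the 1278 MHz top boost state
]

true_avg = sum(v * a for v, a in samples) / len(samples)  # average of instantaneous V*I
avg_amps = sum(a for _, a in samples) / len(samples)
naive_avg = avg_amps * 1.212                              # averaged amps times boost voltage

print(f"true average : {true_avg:.1f} W")   # ~170.0 W
print(f"naive average: {naive_avg:.1f} W")  # ~181.8 W, roughly 7% too high
[/code]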
 
The FX 5900 XT was actually pretty decent for its price.

FX also tended to be very solid cards for the DX7/8 and OpenGL games that were prevalent back then. The 5800 and 5900 at least. The lower models were not so hot.
 
Are we going to see the biggest GPU ever or is 20nm essentially required for the monster Maxwell? Adding in more hardware for HPC and such is sure to require a lot of transistors to move significantly ahead of GK110.
 
Nvidia could still throw out another 28nm monster now that Maxwell is even more tightly packed. There's still ~150mm² of leeway for a 384-bit HPC SKU with gobs of DP throughput and even more L2.
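For reference, the arithmetic behind that leeway estimate; the die sizes are the commonly published figures:

[code]
gm204_mm2 = 398         # published GM204 die size
gk110_mm2 = 551         # published GK110 die size, the current "big" 28nm reference
print(gm204_mm2 + 150)  # 548 mm^2: GM204 plus ~150 mm^2 lands right in GK110 territory
[/code]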
 
GM204 is not the flagship. I don't think you'll have to wait too long to see what Maxwell looks like at 250W... All these complaints about GM204 not being a flagship are a little short sighted IMO.

It is their flagship, as they named it the GTX 980. Its market position speaks for itself.
 