NVIDIA Maxwell Speculation Thread

We certainly don't need to care about this as consumers, but it sure is interesting.
It's squiggly lines with no particular context or analysis; it draws comparisons to other architectures it never profiled, and above all it failed to log something as fundamental as which clock and voltage steps were in use. It's hard to credit the squiggles to something they didn't keep track of.

In particular, I wonder if previous GPUs exhibited this much variance within a single millisecond.
It's very likely that there is a lot of microsecond-scale variation, but why stop at GPUs, heck why stop at chips made in this decade?

For reference, AMD's marketing for the 290 pointed out their power control method could operate in 10 usec increments, so what are the odds that there would be signs of variation on the oscilloscope for Hawaii?


One additional nitpick: what's with the "die shot" being bandied about?
Is there entertainment value in putting some grayscale anonymous chip as a base layer and then playing Space Invaders on top?
Is this somehow preferable to AMD's method of *nothing*?
 
I suspect most consumer electronics would exhibit similarly spastic power consumption if measured at microsecond granularity.
 
It's squiggly lines with no particular context or analysis; it draws comparisons to other architectures it never profiled, and above all it failed to log something as fundamental as which clock and voltage steps were in use. It's hard to credit the squiggles to something they didn't keep track of.

It's very likely that there is a lot of microsecond-scale variation, but why stop at GPUs, heck why stop at chips made in this decade?

For reference, AMD's marketing for the 290 pointed out their power control method could operate in 10 usec increments, so what are the odds that there would be signs of variation on the oscilloscope for Hawaii?

All good points, but I for one wouldn't have expected the squiggly lines to be, well, that squiggly. I mean, you can see power going up and down by about 150W within just 30µs. And since there's no guarantee that the sampling frequency is high enough to correctly capture the signal, reality might be even more stark. I guess the squiggly lines are pointless if you already knew that, but I didn't.

I'd heard about PowerTune, of course, and I knew that all chips can have spikes above their TDP; I just thought power draw was far more continuous. So I'm glad I've learned something, even if it is quite poorly framed in the article.
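To convince myself of how much the averaging window matters, I threw together a toy example (every number here is invented; this is not the article's data). Even if the instantaneous trace swings by ~150W, anything that averages over a millisecond barely notices:

Code:
import numpy as np

fs = 1_000_000                                   # pretend 1 MHz sampling, 10 ms window
t = np.arange(10_000) / fs
rng = np.random.default_rng(0)
# hypothetical trace: ~165 W base draw, short random ~150 W excursions, some HF ripple
power = 165.0 + 150.0 * (rng.random(t.size) < 0.02) \
        + 10.0 * np.sin(2 * np.pi * 300e3 * t)

per_ms = power.reshape(-1, 1000).mean(axis=1)    # what a 1 ms meter would report
print("peak 1 us sample : %.0f W" % power.max())
print("peak 1 ms average: %.0f W" % per_ms.max())
print("overall average  : %.0f W" % power.mean())

The microsecond peaks sit ~150W above a 1 ms average that hardly moves, which is the same flavour of thing the scope traces are showing.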

One additional nitpick: what's with the "die shot" being bandied about?
Is there entertainment value in putting some grayscale anonymous chip as a base layer and then playing Space Invaders on top?
Is this somehow preferable to AMD's method of *nothing*?

I think the goal is to promote the notion that CUDA cores are real cores, since you can "clearly see them, just look!". I suspect it's actually quite effective as a marketing technique.

Whether those pictures should appear in independent reviews, however, is another story. But I'm not sure all reviewers realize they're not real die shots.
 
One additional nitpick: what's with the "die shot" being bandied about?

Is there entertainment value in putting some grayscale anonymous chip as a base layer and then playing Space Invaders on top?

Is this somehow preferable to AMD's method of *nothing*?

Well, yeah, there is entertainment value, and from a marketing standpoint it's far more effective than nothing. Of course, if you're looking for an actual die shot, then not so much. It all depends on what you're looking for :)

You raise an interesting point though. Why is it that AMD shares so few details about its architecture? For example, just compare how they unveiled Tonga's compression capabilities; nVidia went a step further in detailing how theirs works and what was changed.

Did AMD publish a white paper or something similar for either Hawaii or Tonga?
 
If the chip doesn't maintain above-TDP power draw for periods that measure more than a few milliseconds, if none of the transient spikes exceed the maximum power rating (not the same thing as TDP), if the chip's local temperatures don't climb past ~100-120 C, and none of the packaging and silicon-level physical limits are exceeded, then the oscilloscopes are nothing but irrelevant nitpicking at the rate of millions of times a second.
The amount we need to care about this scales with the timescale of the measurement: the finer the granularity, the less it matters.

If there is sustained draw above TDP, or regularly measured spikes that exceed the safe bounds listed for the chip or power delivery circuitry, it might be worth the bandwidth used to read the page.
I see no sign of that kind of analysis, and they might be interested to see how everything that has come before behaves when probed with a high-speed oscilloscope.
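To put the conditions I mean in concrete terms, here's a rough sketch (the thresholds are illustrative, pulled out of the air rather than from any datasheet):

Code:
import numpy as np

def worth_worrying(power_w, dt_s, tdp_w=165.0, max_rating_w=300.0,
                   sustain_window_s=0.005):
    """Flag sustained above-TDP draw or transients past the absolute rating."""
    power_w = np.asarray(power_w, dtype=float)
    n = max(1, int(sustain_window_s / dt_s))
    # moving average over a few milliseconds = "sustained" draw
    sustained = np.convolve(power_w, np.ones(n) / n, mode="valid")
    over_tdp_sustained = bool((sustained > tdp_w).any())
    spike_over_rating = bool((power_w > max_rating_w).any())
    return over_tdp_sustained or spike_over_rating

Everything that doesn't trip either check is the microsecond-scale noise I'm calling irrelevant.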

Spend the bandwidth.

"That's not the only offering that makes a good impression, though. Nvidia's reference GeForce GTX 980 does well too, as long as you don’t focus on the idle power measurement. And the party ends as soon as you look at the compute-based stress test results. A taxing load just doesn't give Maxwell any room for its optimizations to shine.

When it comes down to it, our most taxing workloads take Maxwell all the way back to Kepler-class consumption levels. In fact, the GeForce GTX 980 actually draws more power than the GeForce GTX Titan Black without really offering more performance in return."
 
[Chart: Tom's Hardware power consumption results]

More details here
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-970-maxwell,3941-11.html

I was doing it more simply, after having seen some behaviour in the TDP calculation (like Furmark coming in 60W under a game result, which is the first time we've seen that)... so I was collecting data from reviews for it, a long job... OK, with proper equipment it is easier. The first thing that intrigued me a bit is how Guru3D, which reports a calculated TDP, arrived at 171W when in general they land well under the official TDP; the Furmark result made me think of some really aggressive TDP-limiter software setting...
 
*removed personal attack*
It is a fact it consumes much less power than Hawaii and therefore has more OC potential and performance to extract. There is nothing to discuss about it, it is a fact.

I never said it didn't consume, on average, less power than Hawaii. I simply stated that in overclocked situations, it consumes as much as Hawaii and that their new boost is causing power consumption to be much higher than the TDP.

OK, I see; even when faced with the facts, you deny them :rolleyes:
If you show me a heavily boosted 970 with a 250W power draw, I will change my mind. But hey, don't waste your time, such a thing doesn't exist...

PS: BTW, comparing a heavily boosted GM204 to a stock Hawaii, how fair is that? Apples vs. oranges. And what about GM204 boosted vs. Hawaii boosted, which reaches a 350~400W TDP???

Again, you are putting words in my mouth. See above.
Your chart proves my point that the GTX 980 and GTX 770, with current power measurements, are consuming roughly the same despite a 65W difference between TDPs.

I'm simply pointing out there is something funky going on here and it is misleading.

I understand that it's not easy to be an uncompromising AMD fan today, but when you see a GPU released like this, one that beats its own predecessor and the competition in every metric possible, and perf/W in particular, isn't it possible to just stand aside and marvel at such a technical achievement? This is a mostly technical forum; if various metrics don't mean anything to you, then what are you doing here? AMD will come back, don't worry.

Where have I said anything pro AMD here? I'm trying to discuss why these changing metrics from manufacturers are OK...

GM204 is pretty much exactly what I thought it would be, it is a great GPU. I wasn't overly surprised by the launch other than the high resolution performance and Nvidia sticking to a false TDP.
I heard word about a month ago that the "rated" TDP for the cards is not accurate. That seems to be the case. I apologize for trying to discuss the matter here. I didn't realize this was an area only for praising our Allah Jen-Hsun.
 
I don't think I've ever seen someone do a power measurement in a space of 1 ms. This is only useful when measuring losses through, say, a MOSFET, e.g. switching losses.

Plus, all that could also just be high-frequency noise (which may look like very high peaks on the scope and could be alarming to the novice, but isn't), which is present in almost ALL power supply rails one might come across. To minimize the noise, one has to measure with the ground loop of the oscilloscope probe minimized and perhaps also filter it out / bandwidth-limit the oscilloscope. Not to mention the noise/accuracy of the current sense resistor or whatever they use to measure the current.

I think that until he shows what's actually being displayed on the scope and the accuracy of the measurements, which is the most important part (calibration of the measurements, not just of the equipment itself), it may not provide an accurate picture of Maxwell's power efficiency, or for that matter of any of the cards measured in this way.

edit - Or it could even be ripple from the 12V rail. The actual power consumption of the GPU sits behind the power circuitry (which can add an extra 1~30W of loss depending on its efficiency, which also happens to be related to cost), and the voltage at the GPU is often very, very stable, with the current fluctuating with the load. I would have assumed that nVIDIA did all the proper measurements here.
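For what it's worth, here's the sort of pipeline I'd expect a careful measurement to follow, as a rough sketch with invented values (not a claim about how Tom's actually did it):

Code:
import numpy as np

def gpu_input_power(v_rail, v_shunt, r_shunt=0.005, avg_window=64):
    """Power on the 12 V input from a shunt measurement, then bandwidth-limited."""
    i = np.asarray(v_shunt) / r_shunt            # current through the sense resistor (A)
    p_raw = np.asarray(v_rail) * i               # instantaneous input power (W)
    kernel = np.ones(avg_window) / avg_window    # crude low-pass, like limiting scope bandwidth
    return np.convolve(p_raw, kernel, mode="valid")

Note that this is power at the 12V input, i.e. before the VRM losses (the extra 1~30W mentioned above), not what the GPU die itself dissipates.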
 
It is a disgrace in my opinion that Nvidia released a new flagship card that is only slightly more powerful than the 780 Ti. In my opinion, they should have waited until they could release a card with a good leap in performance or not released anything at all. What they really need to do is push for access to 20 or 16nm. If they had waited to launch this card on 16nm with 3000+ CUDA cores, it would have been far more powerful than the 780 Ti.
 
When it comes down to it, our most taxing workloads take Maxwell all the way back to Kepler-class consumption levels.

Well, he is overgeneralizing here, because the Maxwell-based GTX 750 Ti has MUCH lower power consumption in Tom's "Torture GPGPU" test vs. any comparable Kepler or Radeon GPU. And to actually gauge efficiency, one would need to see both GPGPU power consumed and performance (the latter of which was not provided at all, as far as I can tell).

Note that GTX 980 actually handily outperforms GTX 780 Ti with respect to compute performance in most cases (excluding double precision compute of course):

http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/20

[Chart: AnandTech compute benchmark results]
 
Absolutely! How dare they release a product that gets unanimously positive reviews!

The reviewers are nuts or they are sold out. The GTX 980 is nowhere near powerful enough to cost $549. It should be $399 at most. If they want to sell a graphics card as a flagship, it needs to be significantly more powerful than the previous flagship. I think NVIDIA should have waited to put out a card on a smaller process node.

The fact it uses less power is also almost meaningless to me. I think that any flagship should reach whatever max thermal limit exists. For example, they should have added more CUDA cores to this card until it hit 250 watts.
 
I wasn't overly surprised by the launch other than the high resolution performance and Nvidia sticking to a false TDP.
I heard word about a month ago that the "rated" TDP for the cards is not accurate. That seems to be the case. I apologize for trying to discuss the matter here. I didn't realize this was an area only for praising our Allah Jen-Hsun.

You neither understand what TDP means nor do you understand that these are GeForce cards meant for gaming.

First, TDP is an average value. Short power spikes are completely irrelevant here, since energy transfer to the cooler is a much, much slower process. And secondly, the maximum-load test Tom's did is not a gaming workload, so it is also not relevant. If this were a Quadro or Tesla card... but it isn't.

It is also quite unlikely that the OC'ed card in their review consumes LESS power than the reference model. That makes no sense at all, since it likely uses higher voltages and (due to the cooling solution) a higher sustained boost. This alone eats at the credibility of their measurements. And where do they show power measurements for the Kepler cards they compare it to, to better put these values in context?
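A quick back-of-the-envelope model makes the point (the thermal numbers here are made up, not GM204's real ones): with a heatsink time constant measured in seconds, a millisecond-long 150W spike moves the die temperature by hundredths of a degree.

Code:
# First-order thermal model: c_th * dT/dt = P - (T - T_amb) / r_th
r_th, c_th = 0.25, 20.0          # K/W and J/K, both invented
dt, t_amb = 1e-4, 30.0           # 0.1 ms step, 30 C ambient
temp = t_amb + 165.0 * r_th      # start at the 165 W steady state
peak = temp
for step in range(20000):        # simulate 2 seconds
    p = 165.0 + (150.0 if step % 10000 < 10 else 0.0)   # 1 ms spike each second
    temp += dt * (p - (temp - t_amb) / r_th) / c_th
    peak = max(peak, temp)
print("steady state: %.2f C, peak during spikes: %.2f C" % (t_amb + 165.0 * r_th, peak))

Which is why TDP is specified against sustained load and the cooler, not against whatever a microsecond scope trace happens to catch.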
 
The reviewers are nuts or they are sold out. The GTX 980 is nowhere near powerful enough to cost $549. It should be $399 at most. If they want to sell a graphics card as a flagship, it needs to be significantly more powerful than the previous flagship. I think NVIDIA should have waited to put out a card on a smaller process node.

The fact it uses less power is also almost meaningless to me. I think that any flagship should reach whatever max thermal limit exists. For example, they should have added more CUDA cores to this card until it hit 250 watts.


GM204 is not the flagship. I don't think you'll have to wait too long to see what Maxwell looks like at 250W... All these complaints about GM204 not being a flagship are a little short-sighted IMO.
 
GM204 is pretty much exactly what I thought it would be, it is a great GPU. I wasn't overly surprised by the launch other than the high resolution performance and Nvidia sticking to a false TDP.
I heard word about a month ago that the "rated" TDP for the cards is not accurate. That seems to be the case. I apologize for trying to discuss the matter here. I didn't realize this was an area only for praising our Allah Jen-Hsun.

Heh, it's been a while, LordEC. I think last time I posted here we were all going on about RV770 vs GT200. Those were good times.

I think your point is entirely valid. While Maxwell is a very impressive GPU uArch, some of the thermal characteristics of the GTX 980 ought to be explored. I don't see any premise for the rash, knee-jerk reactions that some users here have resorted to.
 
You neither understand what TDP means nor do you understand that these are GeForce cards meant for gaming.

First, TDP is an average value. Short power spikes are completely irrelevant here, since energy transfer to the cooler is a much, much slower process. And secondly, the maximum-load test Tom's did is not a gaming workload, so it is also not relevant. If this were a Quadro or Tesla card... but it isn't.

It is also quite unlikely that the OC'ed card in their review consumes LESS power than the reference model. That makes no sense at all, since it likely uses higher voltages and (due to the cooling solution) a higher sustained boost. This alone eats at the credibility of their measurements. And where do they show power measurements for the Kepler cards they compare it to, to better put these values in context?

I know exactly what TDP means and I understand GeForce is for gaming. Thanks for checking though.

I wasn't specifically talking about the Tom's measurements, I actually wasn't aware of that until after my first few posts. I typically don't read Tom's reviews.

So feel free to actually read my posts and follow along before replying next time.

Heh, it's been a while, LordEC. I think last time I posted here we were all going on about RV770 vs GT200. Those were good times.

I think your point is entirely valid. While Maxwell is a very impressive GPU uArch, some of the thermal characteristics of the GTX 980 ought to be explored. I don't see any premise for the rash, knee-jerk reactions that some users here have resorted to.
Been way too long. Good to see you back and posting.
Edit- Checked your post history, Dec 2009 was your last post. Almost 5 years.
 
It is a disgrace in my opinion that Nvidia released a new flagship card that is only slightly more powerful than the 780 Ti. In my opinion, they should have waited until they could release a card with a good leap in performance or not released anything at all. What they really need to do is push for access to 20 or 16nm. If they had waited to launch this card on 16nm with 3000+ CUDA cores, it would have been far more powerful than the 780 Ti.


The 980 is a better card than the 780 Ti in nearly every way and is cheaper too. That's a good thing for anyone who doesn't already own the previous flagship - which is a lot of people. I assume you're so disgusted by the whole thing cause you're already using a 780 Ti.
 
I know exactly what TDP means and I understand GeForce is for gaming. Thanks for checking though.

I wasn't specifically talking about the Tom's measurements, I actually wasn't aware of that until after my first few posts. I typically don't read Tom's reviews.

So feel free to actually read my posts and follow along before replying next time.

You're still wrong. The power target of the card caps consumption at 165W, which is the TDP. It may not be the fastest-acting implementation, hence some spike measurements, for instance at TPU, but under sustained gaming load, please prove that the TDP is exceeded before making such claims.
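To illustrate what I mean by "not the fastest-acting implementation" (a sketch, not Nvidia's actual algorithm): a limiter that reacts to an averaged reading holds the average at the target while still letting brief spikes through, which is consistent with spike measurements like TPU's showing short excursions above 165W.

Code:
from collections import deque

def make_limiter(target_w=165.0, window=100):
    """Crude moving-average power limiter; True means 'throttle on the next tick'."""
    samples = deque(maxlen=window)
    def step(sample_w):
        samples.append(sample_w)
        return sum(samples) / len(samples) > target_w
    return step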
 