Nvidia BigK GK110 Kepler Speculation Thread

It's not that simple, because if you lift that ceiling, not only are you operating what essentially becomes an overclocked card, but you also change power consumption, temperature (obviously), noise and naturally, performance.

Basically, every metric is altered. You can do that, but you also have to measure the card at 80°C to give a sense of what buyers can expect at stock settings. I mean, at $1000, not everyone is going to want to bother fiddling with settings they might not even fully understand.

Well, the effect on power consumption is not very big, because that is controlled separately. Boosting changed things a bit. Essentially, you can get the same results by ramping up the fan speed or lowering the ambient temperature. The default fan profile is quite quiet, if I'm not mistaken.

I like the reviews where multiple results are shown.
 
We've seen a difference in power results of up to 86W compared to the GHz Edition. At first I thought it actually pulled less power, but after checking more reviews it is usually above, once by 49W (AnandTech).

[chart: 53406.png]
PCGH had the GHz edition draw 33W more (not sure if that's the best result for Titan) - http://translate.google.com/translate?sl=de&tl=en&js=n&prev=_t&hl=en&ie=UTF-8&eotf=1&u=http%3A%2F%2Fwww.pcgameshardware.de%2FGrafikkarten-Hardware-97980%2FTests%2FTest-Geforce-GTX-Titan-1056659%2F&act=url

Hard to make a direct comparison, because Anand's test is full system power.

Tom's - [chart: load-power.png]
 

Just to give a little more detail on that:
While the standard 7970s (including some OC models) consume roughly 170 to 200 watts (with the 170W data point being an OC model), the GHz Editions do use a lot more power. There's only one model below 230 watts; most are just above 245 watts, and some extreme OC models even exceed 260 watts. From an energy-efficiency point of view, AMD's current single-GPU top model was a large step backwards.

Cross-referencing with our not-yet-published results from Anno 2070 power measurements, our BC2 result looks valid.

That said, I can bring the Titan to 265 watts (the 106% power target) with an overclock, no sweat. Power management seems to work quite precisely with Kepler after the GTX 590 debacle, which led to broken cards in reviews, IIRC.
 
It's possible AnandTech has a better 7970 GHz card as well now, yes. The problem with the temp tests is that they use two different games, and Metro is bandwidth-heavy, so it will push Tahiti to the limit (hence why it always performed poorly in power tests in that game).
 
There is huge power-consumption variability between different dies on 28nm. Putting a handful of sample points under the microscope and drawing sweeping conclusions is GIGO.
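That sampling concern can be illustrated with a toy simulation (all numbers below are assumed for the sketch, not real measurement data): with a wide die-to-die spread, a review that tests only a handful of cards gets a very noisy estimate of the "typical" power draw.

```python
# Toy illustration: large die-to-die power variation means a small
# review sample gives an unreliable estimate of average board power.
# TRUE_MEAN_W and SPREAD_W are assumed values, not measurements.
import random

random.seed(1)

TRUE_MEAN_W = 230.0   # hypothetical population average board power
SPREAD_W = 25.0       # hypothetical die-to-die standard deviation

def review_sample_mean(n_cards):
    """Average power of n randomly sampled cards (one review's data set)."""
    cards = [random.gauss(TRUE_MEAN_W, SPREAD_W) for _ in range(n_cards)]
    return sum(cards) / n_cards

# Three "reviews", each testing only 3 cards, can easily disagree by tens
# of watts, even though they sample the exact same population.
for review in range(3):
    print(f"review {review}: {review_sample_mean(3):.0f} W")
```

With three-card samples the standard error of the mean is around 14W here, so two reviews disagreeing by 30W or more is unremarkable, which is the GIGO point.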
 
Just to give a little more detail on that:
While the standard 7970s (including some OC models) consume roughly 170 to 200 watts (with the 170W data point being an OC model), the GHz Editions do use a lot more power. There's only one model below 230 watts; most are just above 245 watts, and some extreme OC models even exceed 260 watts. From an energy-efficiency point of view, AMD's current single-GPU top model was a large step backwards.

Cross-referencing with our not-yet-published results from Anno 2070 power measurements, our BC2 result looks valid.

That said, I can bring the Titan to 265 watts (the 106% power target) with an overclock, no sweat. Power management seems to work quite precisely with Kepler after the GTX 590 debacle, which led to broken cards in reviews, IIRC.

Could you by any chance do a little temperature testing on power consumption too? It seems the temperature around the card heavily influences not only performance but power consumption as well.
 
I find the power consumption reports interesting, but expected. The Titan is a very large chip that was always going to consume a lot of power.
 
@Kaotik,
Probably, but that depends on whether I get a time budget from my boss. :) I own neither of those cards nor the lab equipment. :)
 
I find the power consumption reports interesting, but expected. The Titan is a very large chip that was always going to consume a lot of power.

Actually, they look too good to be true at first sight. Obviously the manufacturing process is more mature than ever to show such results.
 
The temperature limit will make it easy to use this card in small cases, though.

That is nonsense. A power limit would make it easier to use in small cases, as it'll limit the upper temperature.

A temperature limit means that regardless of whether you have a small case or a large case the GPU will operate within its safety zone.

If you have a large case it'll be 80 degrees. If you have a small case it'll be 80 degrees. In both cases it'll operate under the safety limit.

If it were only power limited, in a large case it might be only 80 degrees, while in a small case it might be over 100 degrees. So in the large case it would operate under the safety limit, while in the small case you'd be operating over it.

The temperature limit isn't there to make it easier to use in small cases. It's to ensure that the GPU remains within safe operating temperatures.

Regards,
SB
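The large-case versus small-case argument can be sketched with a toy steady-state model (all numbers assumed, not Nvidia's actual figures): die temperature is roughly case ambient plus board power times an effective thermal resistance, and a small case with poor airflow has a higher resistance.

```python
# Toy model of temperature-limited vs. purely power-limited behavior.
# The 250 W cap, ambient, and thermal resistances are assumed numbers.

TEMP_TARGET_C = 80.0
POWER_LIMIT_W = 250.0  # assumed hard power cap for the toy card

def steady_temp(power_w, ambient_c, r_thermal):
    """Steady-state die temperature for a given board power."""
    return ambient_c + power_w * r_thermal

def temp_limited_power(ambient_c, r_thermal):
    """Temperature limiter: the card sheds power until it sits at the target."""
    return min(POWER_LIMIT_W, (TEMP_TARGET_C - ambient_c) / r_thermal)

# Large case: good airflow (low thermal resistance). Small case: poor airflow.
for case, r in (("large", 0.18), ("small", 0.30)):
    p = temp_limited_power(25.0, r)
    print(f"{case} case: temp-limited card settles at {p:.0f} W / "
          f"{steady_temp(p, 25.0, r):.0f} C; a purely power-limited card "
          f"would sit at {steady_temp(POWER_LIMIT_W, 25.0, r):.0f} C")
```

In this sketch the temperature-limited card holds 80°C in either case by shedding power in the small one, while a card limited only to 250W would run around 100°C in the small case, which is the point being made above.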
 
That is nonsense. A power limit would make it easier to use in small cases, as it'll limit the upper temperature.

A temperature limit means that regardless of whether you have a small case or a large case the GPU will operate within its safety zone.

If you have a large case it'll be 80 degrees. If you have a small case it'll be 80 degrees. In both cases it'll operate under the safety limit.

If it were only power limited, in a large case it might be only 80 degrees, while in a small case it might be over 100 degrees. So in the large case it would operate under the safety limit, while in the small case you'd be operating over it.

The temperature limit isn't there to make it easier to use in small cases. It's to ensure that the GPU remains within safe operating temperatures.

Regards,
SB

I'm sorry, but first you say that "A power limit would make it easier to use in small cases" and then you say if it was only power limited it might be over 100 degrees in a small case??

What you may not know is that the temperature limit is also a power limit: Titan uses less power in a small case at the same temperature setting than in a large case.
80°C is not any sort of safety limit for this chip. Instead, it's a good value for most situations to achieve good performance and an acceptable acoustic level. Voltage is what mainly affects longevity, and that is controlled by a separate setting, at least for overvolting.

The temperature setting on Titan is not a limit, it's a target. It's the temperature the card will actually sit at under stress almost all the time.
The video DSC linked earlier explains these points pretty well. It's long, though.


At 12:15 and onwards he talks a bit about how small-form-factor systems are one of the markets for Titan. The power management with a temperature ceiling is the key here.
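The "target, not limit" behavior can be sketched as a simple control loop (this is an assumed toy controller, not Nvidia's actual algorithm; the 837 MHz base clock and 13 MHz boost bins are Titan's published figures, but the thermal and power coefficients are made up):

```python
# Toy sketch of a temperature-target boost loop: clocks step up in 13 MHz
# bins while the die is below the 80 C target and step down once it
# overshoots, so a warmed-up card oscillates around the temperature
# target rather than a fixed power ceiling.

TEMP_TARGET_C = 80.0
AMBIENT_C = 25.0
R_THERMAL = 0.25       # assumed C per watt for this toy model
WATTS_PER_MHZ = 0.22   # assumed power cost of each extra MHz

def steady_temp(clock_mhz):
    """Assumed power/thermal model anchored at Titan's 837 MHz base clock."""
    power_w = 160.0 + (clock_mhz - 837) * WATTS_PER_MHZ
    return AMBIENT_C + power_w * R_THERMAL

def boost_step(clock_mhz, step=13):
    """One controller tick: one bin up if cool, one bin down if hot."""
    if steady_temp(clock_mhz) < TEMP_TARGET_C:
        return clock_mhz + step
    return max(837, clock_mhz - step)

clock = 837
for _ in range(60):
    clock = boost_step(clock)
print(f"settled boost clock: ~{clock} MHz, die at {steady_temp(clock):.0f} C")
```

The settled clock depends on ambient temperature and cooling through `steady_temp`, which is exactly why results vary between test setups and why the card sits pinned near the target under load.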
 
I think Nvidia shot themselves in the foot with the temperature target. They should have set it to 90 degrees and/or increased the fan speed a bit; that would have provided 10-15% more performance in reviews, and the results would have been much more consistent.
 
I'm sorry, but first you say that "A power limit would make it easier to use in small cases" and then you say if it was only power limited it might be over 100 degrees in a small case??

Yes, if you set a power limit such that the card is usable in a small case, then it obviously limits its potential in a larger case.

A temperature limit doesn't make it easier to put in a larger or smaller case. Regardless of what case you put it in the GPU will be kept within safe operating temperature limits.

I guess if by "making it easier to put in a small case" you mean that it then won't impact performance in a larger case, then OK, I could go with that. But that wasn't how I interpreted "easier," as that statement on its own is nonsense.

So again, upping the thermal limit wouldn't be a good idea anyway. The whole point is keeping the GPU within safe operating temperatures. A power limit alone couldn't do this, as shown by the large-case versus small-case situation. Upping the thermal limit isn't an option, as it puts the GPU into what Nvidia considers unsafe thermal conditions.

So again, I understand the goal of this: give users the most performance while keeping the GPU within safe operating conditions. In the past, if you wanted to target small SFF systems, that meant conservative power and clock limits so that the thermal load didn't exceed safe margins. Now, with Boost 2.0, they are setting an upper bound on that thermal load and letting everything else adjust to it. Again, raising that thermal limit means taking the chip out of the thermal safety margin that Nvidia has determined for the GPU.

But you still have the same irresponsible situation where performance at the start of a game is not representative of the performance the card will deliver while playing. Hence, if tech sites don't report on this, users will be misled about what to expect from their 1000 USD card.

If all or even most tech sites took this into consideration I'd have less of a beef with Boost 2.0. But even then I wouldn't like it, as I prefer predictable performance to highly variable performance. I rarely run air conditioning in my place of residence during the hot summers (up to 100 degrees F), and use as little heat as possible during the cold winters (down to negative 10 degrees F). So the temps I regularly have at home vary from about 90 degrees F (at which point I turn on the air conditioning) to about 68 degrees F (at which point I turn on the heating). Therefore anything using Boost 2.0 would be a horrible experience for me, with high performance during the winter and low performance during the summer, versus a traditional card where my performance is the same regardless of season, the only difference being that it's louder in summer than in winter.

Regards,
SB
 
If anything, what they actually shot themselves in the foot with was the weak-ass power delivery circuitry - for a €900 video card no less! - which has an enforced ~265W hard limit according to AnandTech. The 6+8-pin aux power connectors allow for up to 300W, but the card itself never gets anywhere near that. Very, very disappointing, considering the utterly LUDICROUS price of the card itself.

Total fail.
 
I'm curious whether it's related to the longer latency of Nvidia's voltage/frequency scaling. The chip is significantly bigger, and its power draw can probably spike faster and to a greater degree than the smaller chips that had a lower ceiling. Unless Titan has updated that part of the system, it might be safer in the long run to be more conservative.
 
Well, the effect on power consumption is not very big, because that is controlled separately. Boosting changed things a bit. Essentially, you can get the same results by ramping up the fan speed or lowering the ambient temperature. The default fan profile is quite quiet, if I'm not mistaken.

I like the reviews where multiple results are shown.

The effect on power consumption is quite substantial, because when the card hits 80°C, it clocks down, or at the very least is prevented from clocking up. That reduces power.

See HFR's results: http://www.hardware.fr/articles/887-6/consommation-efficacite-energetique.html

Adding a couple of fans increases the power draw of Titan by up to 22W.
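HFR's counterintuitive result, that adding fans *raises* the card's power draw, falls directly out of a temperature-target model: better cooling leaves more thermal headroom below 80°C, so the card sustains higher clocks and more power. A toy sketch (thermal resistance values are assumed, chosen only so the delta lands near HFR's ~22W observation):

```python
# Toy model: under a fixed 80 C temperature target, the sustainable
# steady-state board power is set by cooling, so extra case fans
# (modeled here as a lower effective thermal resistance) raise power.
# Both r_thermal values are assumed, tuned to give a ~22 W delta.

TEMP_TARGET_C = 80.0

def sustained_power(ambient_c, r_thermal):
    """Max steady-state power before the die reaches the target."""
    return (TEMP_TARGET_C - ambient_c) / r_thermal

p_stock = sustained_power(25.0, 0.235)  # stock case airflow
p_fans = sustained_power(25.0, 0.215)   # with extra case fans
print(f"stock airflow: {p_stock:.0f} W, extra fans: {p_fans:.0f} W "
      f"(+{p_fans - p_stock:.0f} W)")
```

The same mechanism explains the seasonal-variability complaint earlier in the thread: lower ambient temperature has the same effect as better airflow in this model.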
 