NVIDIA GF100 & Friends speculation

In order to stay within the 300 W power limit, NVIDIA has added a power draw limitation system to their card. When the driver detects either Furmark or OCCT running, three sensors measure the inrush current and voltage on all 12 V lines (PCI-E slot, 6-pin, 8-pin) to calculate power. As soon as the power draw exceeds a predefined limit, the card will automatically clock down, and it restores clocks as soon as the overcurrent situation has gone away. NVIDIA emphasizes this is to avoid damage to cards or motherboards from these stress testing applications and claims that such an overload will not happen in normal games and applications. At this time the limiter is only engaged when the driver detects Furmark / OCCT; it is not enabled during normal gaming. NVIDIA also explained that this is just a work in progress with more changes to come. From my own testing I can confirm that the limiter only engaged in Furmark and OCCT and not in the other games I tested. I am still concerned that with heavy overclocking, especially on water and LN2, the limiter might engage and reduce clocks, which results in reduced performance. Real-time clock monitoring does not show the changed clocks, so besides the loss in performance it could be difficult to detect that state without additional testing equipment or software support.
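Purely as an illustration of the mechanism described above, here is a rough sketch of the kind of control loop such a limiter might implement. The rail names, power limit, polling rate and helper functions are assumptions made for the example, not NVIDIA's actual driver code.

Code:
# Illustrative sketch only; not NVIDIA's actual implementation. The sensor
# helpers, limit value and polling rate below are assumptions.
import random
import time

RAILS = ["pcie_slot", "6pin", "8pin"]   # the three monitored 12 V inputs
POWER_LIMIT_W = 300.0                   # predefined power limit (assumed value)
POLL_INTERVAL_S = 0.01                  # polling period (assumed value)

def read_rail_power(rail):
    # Stand-in for the per-rail sensors; a real limiter would multiply
    # measured current by measured voltage on each 12 V line.
    return random.uniform(80.0, 110.0)

def set_clocks(throttled):
    # Stand-in for switching between full and reduced clock profiles.
    print("clocks reduced" if throttled else "clocks restored")

def limiter_loop(stress_app_detected):
    throttled = False
    while stress_app_detected():        # only armed when Furmark/OCCT is detected
        total_power = sum(read_rail_power(r) for r in RAILS)
        if total_power > POWER_LIMIT_W and not throttled:
            set_clocks(True)            # clock down while over the limit
            throttled = True
        elif total_power <= POWER_LIMIT_W and throttled:
            set_clocks(False)           # restore clocks once draw recovers
            throttled = False
        time.sleep(POLL_INTERVAL_S)

# e.g. limiter_loop(lambda: True) would poll indefinitely while a stress app runs.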
Nice graph on that page

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/3.html

showing how power consumption alternates - well, except for the fact there's no timescale.

NVidia says it's 520 mm².

Oh, and based on the performance summary page, Cayman needs to be 33% faster than HD5870 to be as fast as the GTX 580.
 
Lol@ power consumption limiter ... GF110 still drawing 304W :D

who was talking about the throttling again?

and yes, 64 TMU :)
 
Lol@ power consumption limiter ... GF110 still drawing 304W :D

who was talking about the throttling again?

and yes, 64 TMU :)

Without the limiter it would probably be more than that, actually. Given that the GPU throttles so violently, it's staying relatively cool, probably a solid 10°C cooler, thereby limiting its power draw. I suspect without a limiter it could draw about 10~15W more.

It's not a bad move from NVIDIA, though. As long as the limiter never engages in games it's fine, and it's probably why they were able to reach such high clocks without blowing the 300W limit. As far as I can tell, everybody wins.

PS: still 64TMUs, it seems. So that's almost certainly GF100 B1.

Edit: oh, and one more thing. The $580 price mentioned by TPU for the HD 5970 is bogus: http://www.newegg.com/Product/Produ...7&cm_re=Radeon_HD_5970-_-14-102-887-_-Product

So that's $500, but $470 after MIR.

That said, a pair of HD 6870s sounds like a better option.
 
It's not a bad move from NVIDIA, though. As long as the limiter never engages in games it's fine, and it's probably why they were able to reach such high clocks without blowing the 300W limit. As far as I can tell, everybody wins.

The modulation curve does not look very PSU friendly, but as long as it almost never happens it should be fine.
 
The GTX 580 was benched with the 262.99 drivers, whilst the GTX 480 was benched with 258.96. How much difference is there between those two releases? Could the performance difference be even smaller?
 
The GTX 580 was benched with the 262.99 drivers, whilst the GTX 480 was benched with 258.96. How much difference is there between those two releases? Could the performance difference be even smaller?

It's also interesting that they couldn't bench the 58xx with CAT 10.10 like they did the HD68xx.

Come on TechPowerUp, you should do a lot better.
 
And once again TPU annoy me with their choice of low resolutions & old, CPU-limited games. It totally screws up the summary tables. Only the 25x16 results are worth watching, but even those get flattened and don't show the true performance differences in the games that can actually use the additional power. Not to mention the older drivers for the other cards, but that's a common sin. I'd also like to see Vantage scores.
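To put a toy number on the flattening effect (made-up figures, not TPU's data): the GPU-bound games might show a gap of around 18% between the two cards, but averaging in a handful of CPU-limited titles with near-zero deltas drags the summary figure well below that.

Code:
# Toy illustration (made-up numbers) of how CPU-limited titles flatten an
# averaged performance summary.
gpu_limited = [17.6, 18.2, 17.2, 18.1, 19.4]   # % lead in GPU-bound games
cpu_limited = [0.8, 1.8, 2.7, 3.3]             # % lead in CPU-bound games

all_results = gpu_limited + cpu_limited
print(sum(gpu_limited) / len(gpu_limited))   # ~18.1%, the gap the GPU can show
print(sum(all_results) / len(all_results))   # ~11.0%, the flattened summary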

Kudos for the quiet cooler, lol at clock throttling - they introduced something new after all! :D
 
I worked out the difference between the 480 and 580, if the preliminary numbers are to be believed.

Also, a couple of things I picked up: no Vantage score and no Unigine Heaven 2.1 scores.

AvP2 - 480/580

1680 - 17.6%
1920 - 16.5%
2560 - 14%

BFBC2 - 480/580

1680 - 18.2%
1920 - 14.4%
2560 - 14.6%

BattleForge - 480/580

1680 - 17.2%
1920 - 11.9%
2560 - 17%

COD4 - 480/580

1680 - 7.3%
1920 - 17.3%
2560 - 16.5%

Call of Juarez 2 - 480/580

1680 - 15.9%
1920 - 17.2%
2560 - 18%

Crysis - 480/580

1680 - 15.4%
1920 - 18.1%
2560 - 18.3%

DoW2 - 480/580

1680 - 19.4%
1920 - 19.3%
2560 - 20.6%

Dirt2 - 480/580

1680 - 16.9%
1920 - 17.7%
2560 - 19.8%

FC2 - 480/580

1680 - 14.3%
1920 - 15.9%
2560 - 17.4%

HAWX - 480/580

1680 - 22.7%
1920 - 22.7%
2560 - 28.7%

Metro 2033 - 480/580

1680 - 14.1%
1920 - 14.3%
2560 - 17.8%

Riddick: Dark Athena - 480/580

1680 - 19.7%
1920 - 23.9%
2560 - 25.9%

S.T.A.L.K.E.R. - Clear Sky - 480/580

1680 - 8.7%
1920 - 13.9%
2560 - 15.9%

Supreme Commander 2 - 480/580

1680 - 0.8%
1920 - 1.8%
2560 - 6.4%

Unreal Tournament 3 - 480/580

1680 - 9.9%
1920 - 12.5%
2560 - 15.5%

WoW - 480/580

1680 - 3.3%
1920 - 2.7%
2560 - 0.8%

3DMark03 - 480/580

1680 - 13.2%
1920 - 14.1%
2560 - 15.8%

3DMark05 - 480/580

1680 - 2.1%
1920 - 3.6%
2560 - 7.7%

3DMark06 - 480/580

1680 - 4.2%
1920 - 5%
2560 - 10.6%

Unigine Heaven 2.0 - 480/580

1680 - 19.5%
1920 - 20.6%
2560 - 22.8%
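For clarity, those percentages are just the relative gain of the 580 over the 480 at each resolution. A quick sketch of the arithmetic, using placeholder frame rates chosen only so the ratios reproduce the HAWX and WoW deltas above (they are not TPU's actual FPS figures):

Code:
# Relative gain of the GTX 580 over the GTX 480 from raw frame rates.
# The FPS values are placeholders, not TechPowerUp's measured numbers.
fps = {
    # game: {resolution: (gtx480_fps, gtx580_fps)}
    "HAWX": {"1680": (150.0, 184.0), "1920": (132.0, 162.0), "2560": (87.0, 112.0)},
    "WoW":  {"1680": (120.0, 124.0), "1920": (110.0, 113.0), "2560": (96.0, 96.8)},
}

for game, results in fps.items():
    for res, (fps_480, fps_580) in results.items():
        gain = (fps_580 / fps_480 - 1.0) * 100.0   # % lead of the 580
        print(f"{game} {res}: {gain:.1f}%")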
 
Nice graph on that page

http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/3.html

showing how power consumption alternates - well, except for the fact there's no timescale.

NVidia says it's 520 mm².
This imho looks a LOT better overall than GTX480. It is not that much faster, and it does not use that much less power, but together that's enough that perf/power no longer looks silly (plus it actually manages to really distance itself from HD5870 in absolute performance terms now). Seems like nvidia even managed that with ~6% fewer transistors and a smaller die. Memory clock got into reasonable territory too, with decent OC headroom even. Oh, and did I mention nvidia actually managed to release a full chip :)..
Still, one can't deny this is what GTX480 should have been.
I don't quite get the part with the power limiter. If they actually measure inrush current as claimed, there's no reason to limit this to specific applications, which is also claimed... Even so, I'm not sure why it's even active, at least when not overclocked - even with no limiter it "only" drew 304W with Furmark, which is slightly below what GTX480 used, and it looks to me like it's built to handle that (ok, it's slightly over pci-e spec, but since the draw alternates between two power levels it goes over pci-e spec anyway even with the power limiter, and it looks like the cooler can handle those 300W).
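As a back-of-the-envelope example of the alternation point, with assumed wattages and duty cycle rather than anything measured: even if the limiter keeps the time-averaged draw right at the 300W spec, the un-throttled phase of the waveform still sits above it.

Code:
# Assumed numbers, not measurements from the review.
p_high_w  = 350.0   # assumed draw during the un-throttled phase
p_low_w   = 250.0   # assumed draw during the throttled phase
duty_high = 0.5     # assumed fraction of time spent in the high phase

p_avg_w = duty_high * p_high_w + (1.0 - duty_high) * p_low_w
print(f"average: {p_avg_w:.0f} W, peak phase: {p_high_w:.0f} W (over the 300 W spec)")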
Oh, and based on the performance summary page, Cayman needs to be 33% faster than HD5870 to be as fast as the GTX 580.
Will be interesting to see how that pans out. It really would have looked good if this had been released instead of GTX480 against HD5870, but Cayman MAY just make it look as silly as GF100 looked against Cypress - that is, even if it doesn't beat it in performance, it might be close enough to make it look like a hot, big monster chip that is just barely faster. But I'll reserve judgement on that...
 
Will be interesting to see how that pans out. It really would have looked good if this had been released instead of GTX480 against HD5870, but Cayman MAY just make it look as silly as GF100 looked against Cypress - that is, even if it doesn't beat it in performance, it might be close enough to make it look like a hot, big monster chip that is just barely faster. But I'll reserve judgement on that...
At least, it won't be 6 months late... or will it?

What sank GF100 was as much its perf/watt/mm² as the endless announcement of announcement of announcement of (...), not to forget the cooler's suspicious design, with its tiny screamer.

If nVidia indeed has boards available by the time Cayman launches, it'll be more of a GTX280 vs HD4870, assuming GTX580 is ~10% faster than Cayman (which more or less has to be some 10-15% faster than GTX480, as anything less would imply Cayman Pro is barely, if at all, faster than Barts XT, so that's still a best case scenario).
 
At least, it won't be 6 months late... or will it?

What sank GF100 was as much its perf/watt/mm² as the endless announcement of announcement of announcement of (...), not to forget the cooler's suspicious design, with its tiny screamer.
Oh yes, that's quite right, it definitely won't be that late to the party against Cayman, so from that point of view it's an improvement. (Considering the apparently minimal changes from GF100, you'd hope so!)

If nVidia indeed has boards available by the time Cayman launches, it'll be more of a GTX280 vs HD4870, assuming GTX580 is ~10% faster than Cayman (which more or less has to be some 10-15% faster than GTX480, as anything less would imply Cayman Pro is barely, if at all, faster than Barts XT, so that's still a best case scenario).
At least the die size discrepancy should be a bit smaller this time - granted, that's unfair since gt200 was 65nm, but even with GTX285 vs. HD4890 (which had a similar performance ratio to GTX280 vs. HD4870) there was a huge die size difference. Though of course it would be really bad for GTX580 if Cayman XT were as fast or even faster, as GTX580 is still supposed to be quite a bit larger (though if Cayman XT has similar power draw it might not be too bad from a cost perspective overall, as at least the 2GB of ram should be more expensive).
 