-The_Mask-
Newcomer
TPU haven't got a card from NVIDIA and they haven't signed an NDA.
In order to stay within the 300 W power limit, NVIDIA has added a power-draw limiting system to the card. When the driver detects either Furmark or OCCT running, three sensors measure the inrush current and voltage on all 12 V inputs (PCI-E slot, 6-pin, 8-pin) to calculate power. As soon as the power draw exceeds a predefined limit, the card automatically clocks down, and it restores full clocks once the overcurrent situation has passed. NVIDIA emphasizes that this is to avoid damage to cards or motherboards from these stress-testing applications and claims that such an overload will not happen in normal games and applications. At this time the limiter engages only when the driver detects Furmark / OCCT; it is not active during normal gaming. NVIDIA also explained that this is a work in progress with more changes to come. From my own testing I can confirm that the limiter engaged only in Furmark and OCCT, not in the other games I tested. I am still concerned that with heavy overclocking, especially on water or LN2, the limiter might engage and reduce clocks, which results in reduced performance. Real-time clock monitoring does not show the changed clocks, so besides the loss in performance it could be difficult to detect that state without additional testing equipment or software support.
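The mechanism the review describes can be sketched as a simple sampling loop: sum current times voltage over the three 12 V inputs, clock down when the total exceeds the limit, restore clocks once it falls back under. This is purely illustrative pseudocode of that idea, not NVIDIA's actual firmware; the 772 MHz figure is the GTX 580's reference core clock, while the throttled clock and the sensor model are made-up assumptions.

```python
# Illustrative sketch of the described power limiter. Assumed values are
# marked; only the 300 W limit and 772 MHz base clock come from the review.

POWER_LIMIT_W = 300.0   # limit cited in the review
BASE_MHZ      = 772     # GTX 580 reference core clock
THROTTLED_MHZ = 405     # hypothetical reduced clock (assumption)

def total_power(rails):
    """rails: (current_A, voltage_V) per 12 V input (slot, 6-pin, 8-pin)."""
    return sum(amps * volts for amps, volts in rails)

def next_clock(rails):
    """Clock for the next sampling interval, from the latest readings."""
    if total_power(rails) > POWER_LIMIT_W:
        return THROTTLED_MHZ  # overcurrent detected: clock down
    return BASE_MHZ           # draw back under the limit: restore clocks
```

For example, three rails at 9 A / 12 V each total 324 W and trigger the throttle, while 8 A / 12 V each total 288 W and leave the card at full clocks.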
Lol @ the power consumption limiter ... GF110 still drawing 304 W
who was talking about the throttling again?
and yes, 64 TMUs
It's not a bad move from NVIDIA, though. As long as the limiter never engages in games, it's fine, and it's probably why they were able to reach such high clocks without blowing through the 300 W limit. As far as I can tell, everybody wins.
The GTX 580 was benched with the 262.99 drivers, whilst the GTX 480 was benched with 258.96. How much difference is there between those two releases? Could the performance gap be even smaller?
It's also interesting that they didn't bench the HD 58xx cards with Catalyst 10.10 like they did the HD 68xx.
Come on TechPowerUp, you should do a lot better.
Percentage gain of the GTX 580 over the GTX 480 at each resolution:
AvP2 - 480/580
1680 - 17.6%
1920 - 16.5%
2560 - 14%
BFBC2 - 480/580
1680 - 18.2%
1920 - 14.4%
2560 - 14.6%
BattleForge - 480/580
1680 - 17.2%
1920 - 11.9%
2560 - 17%
COD4 - 480/580
1680 - 7.3%
1920 - 17.3%
2560 - 16.5%
Call of Juarez 2 - 480/580
1680 - 15.9%
1920 - 17.2%
2560 - 18%
Crysis - 480/580
1680 - 15.4%
1920 - 18.1%
2560 - 18.3%
DoW2 - 480/580
1680 - 19.4%
1920 - 19.3%
2560 - 20.6%
Dirt2 - 480/580
1680 - 16.9%
1920 - 17.7%
2560 - 19.8%
FC2 - 480/580
1680 - 14.3%
1920 - 15.9%
2560 - 17.4%
HAWX - 480/580
1680 - 22.7%
1920 - 22.7%
2560 - 28.7%
Metro 2033 - 480/580
1680 - 14.1%
1920 - 14.3%
2560 - 17.8%
Riddick: Dark Athena - 480/580
1680 - 19.7%
1920 - 23.9%
2560 - 25.9%
S.T.A.L.K.E.R. - Clear Sky - 480/580
1680 - 8.7%
1920 - 13.9%
2560 - 15.9%
Supreme Commander 2 - 480/580
1680 - 0.8%
1920 - 1.8%
2560 - 6.4%
Unreal Tournament 3 - 480/580
1680 - 9.9%
1920 - 12.5%
2560 - 15.5%
WoW - 480/580
1680 - 3.3%
1920 - 2.7%
2560 - 0.8%
3DMark03 - 480/580
1680 - 13.2%
1920 - 14.1%
2560 - 15.8%
3DMark05 - 480/580
1680 - 2.1%
1920 - 3.6%
2560 - 7.7%
3DMark06 - 480/580
1680 - 4.2%
1920 - 5%
2560 - 10.6%
Unigine Heaven 2.0 - 480/580
1680 - 19.5%
1920 - 20.6%
2560 - 22.8%
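Averaging the per-game numbers above gives a quick feel for the overall uplift at each resolution (data copied straight from the list; HAWX at 2560 read as 28.7%):

```python
# GTX 480 -> GTX 580 gains in %, per game, at (1680x1050, 1920x1200, 2560x1600).
gains = {
    "AvP2":        (17.6, 16.5, 14.0),
    "BFBC2":       (18.2, 14.4, 14.6),
    "BattleForge": (17.2, 11.9, 17.0),
    "COD4":        ( 7.3, 17.3, 16.5),
    "CoJ2":        (15.9, 17.2, 18.0),
    "Crysis":      (15.4, 18.1, 18.3),
    "DoW2":        (19.4, 19.3, 20.6),
    "Dirt2":       (16.9, 17.7, 19.8),
    "FC2":         (14.3, 15.9, 17.4),
    "HAWX":        (22.7, 22.7, 28.7),
    "Metro 2033":  (14.1, 14.3, 17.8),
    "Riddick":     (19.7, 23.9, 25.9),
    "STALKER CS":  ( 8.7, 13.9, 15.9),
    "SupCom2":     ( 0.8,  1.8,  6.4),
    "UT3":         ( 9.9, 12.5, 15.5),
    "WoW":         ( 3.3,  2.7,  0.8),
    "3DMark03":    (13.2, 14.1, 15.8),
    "3DMark05":    ( 2.1,  3.6,  7.7),
    "3DMark06":    ( 4.2,  5.0, 10.6),
    "Heaven 2.0":  (19.5, 20.6, 22.8),
}

# Mean uplift at each resolution across all 20 titles.
for i, res in enumerate(("1680", "1920", "2560")):
    avg = sum(v[i] for v in gains.values()) / len(gains)
    print(f"{res}: {avg:.1f}%")
```

That works out to roughly 13%, 14%, and 16% at the three resolutions, so the gap widens with resolution.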
This IMHO looks a LOT better overall than the GTX 480. It is not that much faster, and it does not use that much less power, but together that is enough that perf/power no longer looks silly (plus it now actually manages to distance itself from the HD 5870 in absolute performance terms). NVIDIA even seems to have managed that with ~6% fewer transistors and a smaller die. Memory clock got into reasonable territory too, with decent OC headroom even. Oh, and did I mention NVIDIA actually managed to release a full chip?

Nice graph on that page
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_580/3.html
showing how power consumption alternates - well, except for the fact there's no timescale.
NVidia says it's 520 mm².
Will be interesting to see how that pans out. It really would have looked good had it been released instead of the GTX 480 against the HD 5870, but Cayman MAY just make it look as silly as GF100 looked against Cypress: even if Cayman doesn't beat it in performance, it might be close enough to make the GTX 580 look like a hot, big monster chip that is just barely faster. But I'll reserve judgement on that... Oh, and based on the performance summary page, Cayman needs to be 33% faster than the HD 5870 to match the GTX 580.
At least, it won't be six months late... or will it?
Oh yes, that's quite right: it definitely won't be that late to the party against Cayman, so from that point of view it's an improvement. (Considering the apparently minimal changes from GF100, you'd hope so!)
What sank GF100 was as much its perf/watt/mm² as the endless announcements of announcements of announcements of (...), not to forget the cooler's questionable design, with its tiny screamer of a fan.
At least the die-size gap should be a bit smaller this time. Granted, that comparison is unfair since GT200 was 65 nm, but even between the GTX 285 and the HD 4890 (which had a performance ratio similar to GTX 280 vs. HD 4870) there was a huge die-size difference. Of course it would be really bad for the GTX 580 if Cayman XT turned out as fast or even faster, since the GTX 580 is still supposed to be quite a bit larger (though if Cayman XT has similar power draw it might not be too bad from a cost perspective overall, as at least its 2 GB of RAM should be more expensive).

If NVIDIA indeed has boards available by the time Cayman launches, it'll be more of a GTX 280 vs. HD 4870 situation, assuming the GTX 580 is ~10% faster than Cayman (which more or less has to be some 10-15% faster than the GTX 480, as anything less would imply Cayman Pro barely any faster than Barts XT, so that's still a best-case scenario).