NVIDIA GF100 & Friends speculation

Agreed, unlikely.
But on the power front they seem to have made progress, considering GF104.
And they might yet introduce a mechanism that throttles the GPU if the power draw gets too high - just like AMD wisely did in the 5000 series. That way, you might not have to take Furmark power draw as the design limit, but only "average gaming loads" or "commercially usable software".

Such an idea would also let them boost the default clocks. A higher-than-today's-default clock may be fine when the chip is lightly loaded (and therefore cooler). So if today's shaders run at 1.4GHz, clock them at, say, 1.5GHz, but under heavy power/heat load drop down to, say, 1.2GHz.

This would also help CUDA apps significantly because they never use anywhere near the wattage of graphics apps.

One possible problem: ping-ponging between shader speeds may cause irregular FPS. The driver can always apply its own heuristics to minimize this, but it's still a potential problem.
 
The overheating protection mechanism already in place in GF100 can scale down performance quite gracefully when the temperature gets too high, until it settles at an acceptable level. I suppose you could adapt this technique quite easily to power instead of heat.
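The ping-ponging worry and the throttle-on-power idea above amount to a small control loop with hysteresis. A toy sketch of that, where every clock and wattage threshold is a made-up illustration value, not anything NVIDIA actually ships:

```python
# Toy model of power-based clock throttling with hysteresis.
# All clocks and power thresholds are hypothetical illustration values.

BOOST_MHZ = 1500      # light-load shader clock
BASE_MHZ = 1400       # today's default
THROTTLE_MHZ = 1200   # heavy power/heat load

POWER_HIGH_W = 250    # throttle when board power exceeds this
POWER_LOW_W = 210     # only unthrottle below this (hysteresis gap)

def next_clock(current_mhz: int, board_power_w: float) -> int:
    """Pick the shader clock for the next interval from measured board power."""
    if board_power_w > POWER_HIGH_W:
        return THROTTLE_MHZ
    if board_power_w < POWER_LOW_W:
        return BOOST_MHZ
    # Inside the hysteresis band: keep the current clock, so the chip
    # doesn't ping-pong between speeds on every small power fluctuation.
    return current_mhz

clk = BASE_MHZ
for power in (180, 220, 260, 230, 200):
    clk = next_clock(clk, power)
    print(power, "W ->", clk, "MHz")
```

The gap between the two thresholds is what damps the irregular-FPS problem: a reading in the middle band changes nothing, so the clock only moves on clear excursions.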

edit:
But imagine the fanboy flak for throttling in gaming applications…
 
Why doesn't Nvidia just make a <400mm2 die that uses less power and offers the same performance as the current >500mm2 die? :p

Requires a lot less magic, methinks.
 
Why doesn't Nvidia just make a <400mm2 die that uses less power and offers the same performance as the current >500mm2 die? :p

Requires a lot less magic, methinks.

Well, you're the boss. BTW I think you made a mistake with your signature! :p

In any case, the Nvidia line-up is a little bit messed up. Methinks they desperately need 32nm/28nm ASAP, more so than ATI does. So, in keeping with the speculative nature of this thread, I'll lay claim to the idea that if TSMC gets to 28nm first, Nvidia will try to get their chips out on the 28nm process first as well.
 
Take TPU's 3DMark03 (1280x1024, 4xAA/16xAF) numbers

Your numbers would look very different if you used a relevant application. According to that same article the 480's advantage over the 460 in 3dmark03 is far higher than it is in modern day games. Also, the power consumption of any GF104 variant won't necessarily increase linearly with performance vs GF104.
 
Your numbers would look very different if you used a relevant application. According to that same article the 480's advantage over the 460 in 3dmark03 is far higher than it is in modern day games. Also, the power consumption of any GF104 variant won't necessarily increase linearly with performance vs GF104.

But only that set of information is available, and Nature is actually a pretty good stability test without becoming a power virus like Furmark.

Define "modern day games"? The 480 quickly goes up to being 50% faster in things like Stalker and Metro at 19x12.

sontin said:
Huh? Even with 7 SMs GF104 is able to compete with a GTX470 card.
http://www.techpowerup.com/reviews/Point_Of_View/GeForce_GTX_460_TGT_Beast/28.html
average performance of the GTX460 is 82% of a GTX470, and once you up the clocks of the 460 to close that gap, the gap in power draw closes too. There's a clear distance there; otherwise even a 4890 would be able to compete with a GTX470 by those standards, since it's not that much slower than a 460.
 
http://www.techpowerup.com/reviews/Point_Of_View/GeForce_GTX_460_TGT_Beast/28.html
average performance of the GTX460 is 82% of a GTX470, and once you up the clocks of the 460 to close that gap, the gap in power draw closes too. There's a clear distance there; otherwise even a 4890 would be able to compete with a GTX470 by those standards, since it's not that much slower than a 460.

That's one situation.
Computerbase.de uses Crysis for measuring power consumption (whole system): http://www.computerbase.de/artikel/...-gtx-460-hawk/4/#abschnitt_sonstige_messungen ("Leistungsaufnahme" - power consumption).
HT4U.de uses HAWX and Furmark, and they tested nearly all vendor GTX460 versions (numbers for the cards only): http://ht4u.net/reviews/2010/msi_n460_gtx_hawk/index13.php

And the Zotac AMP version has an 810MHz core clock.
 
That's one situation.
Computerbase.de uses Crysis for measuring power consumption (whole system): http://www.computerbase.de/artikel/...-gtx-460-hawk/4/#abschnitt_sonstige_messungen ("Leistungsaufnahme" - power consumption).
HT4U.de uses HAWX and Furmark, and they tested nearly all vendor GTX460 versions (numbers for the cards only): http://ht4u.net/reviews/2010/msi_n460_gtx_hawk/index13.php

And the Zotac AMP version has an 810MHz core clock.

And when I check the 470 against the 460, the latter has 20% less power usage in Crysis and ~20% less performance (AA/noAA).

HT4U isn't even benchmarking HAWX, so why would they take the time to measure the power draw with it?
If I check against TPU's benchmark numbers, the 460 has 84% of the 470's framerate under HAWX, and the load power of the 460 is... guess what, 81%.

The Zotac AMP 460 has dead-on GTX470 performance in Crysis, but all measurements say the power draw of those OC cards is at most 10W lower. It's not like an OC'd 460 magically drops to Evergreen-like power numbers.

As far as I see it, the 460 is not efficient enough to produce GF100-class numbers with Evergreen-like power draw.
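The "84% of the framerate for 81% of the power" observation above is just a perf-per-watt ratio. A quick check using the thread's own quoted figures (these are the figures from the posts, not fresh measurements):

```python
# Perf-per-watt comparison using the rough figures quoted in the thread:
# GTX 460 at ~84% of the GTX 470's framerate under HAWX for ~81% of its
# power draw. Thread figures, not new measurements.

def perf_per_watt_ratio(perf_frac: float, power_frac: float) -> float:
    """Efficiency of card B relative to card A, given B's performance
    and power expressed as fractions of A's."""
    return perf_frac / power_frac

r = perf_per_watt_ratio(0.84, 0.81)
print(f"GTX 460 vs GTX 470 perf/W ratio: {r:.3f}")
```

The ratio comes out around 1.04, i.e. only a few percent better perf/W, which is the point being made: the efficiency advantage nearly disappears once you scale for performance.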
 
Last edited by a moderator:
HT4U isn't even benchmarking HAWX, so why would they take the time to measure the power draw with it?

Because they used the game for cards before the GTX460. It's only meant to show the consumption in a game.

As far as I see it, the 460 is not efficient enough to produce GF100-class numbers with Evergreen-like power draw.

Why should GF104 consume the same? They are different designs. But the two sites show that a GTX460 at 800MHz is as fast as a GTX470 and doesn't even need the same power.

Tomshardware.com has a round-up, and the Palit version at 800MHz doesn't really use more than a lot of other GTX460 cards with lower clocks: http://www.tomshardware.com/reviews/geforce-gtx-460-roundup-gf104,2714-18.html

The design doesn't really react until 850MHz at the standard vcore.
 
Last edited by a moderator:
I don't get what you mean. Of course GF104 isn't even close, either in power draw or in performance.

But it should be! (if they wish to become competitive)

Because they used the game for cards before the GTX460. It's only meant to show the consumption in a game.

Read the edited text. The 460's consumption in HAWX looks impressive, but it scales nearly linearly with its performance deficit.
 
Last edited by a moderator:
But it should be! (if they wish to become competitive)
I don't get what you mean. Of course GF104 isn't even close, either in power draw or in performance.
Current GF100? GF104 isn't even close.
Define "performance" and you'll have your answer. :)
Why doesn't Nvidia just make a <400mm2 die that uses less power and offers the same performance as the current >500mm2 die? :p

Requires a lot less magic, methinks.
I still don't get it. Competitive is not only a question of power draw, but of performance in the market-relevant areas and - for gamers - image quality, and for some maybe even PhysX.
 
Sontin: Power efficiency of the GTX 460 when boosted to GTX 470 levels isn't particularly impressive. From what Neliz is showing, it's single-digit percent less. As he states, it's nowhere remotely close to Cypress and very, very close to GF100 when clocked high enough to provide similar performance.

Neliz: Power efficiency of GF104 may be approximately the same as GF100, but when clocked to similar performance levels it does hold a rather large perf/mm^2 advantage.

All that said, beefing up GF104 until it was similar in size to GF100 would quite likely still keep power use approximately equal to, or slightly less than, GF100; it's not like adding components to the GPU would magically use no additional power. Either way you go about it, adding units or increasing clocks, you're going to end up with increased power use.

GF104 was a great adjustment by Nvidia, but it's no magic bullet against the Evergreen lineup. What I'm more interested in seeing is whether GF104 might be the start of Nvidia abandoning the monolithic huge-die approach, similar to how R600 -> RV670 made AMD rethink how it designs GPUs.

Regards,
SB
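The "either way you pay in power" point can be put in numbers with the usual first-order CMOS model: dynamic power scales roughly with units × frequency × voltage², and higher clocks typically need more voltage. The constant and the voltage/clock pairs below are arbitrary illustration values, not real GF104 electricals:

```python
# First-order dynamic-power model: P ~ k * units * f * V^2.
# k, clocks and voltages are arbitrary illustration values.

def dyn_power(units: float, freq_ghz: float, volts: float, k: float = 100.0) -> float:
    """Rough dynamic power in arbitrary watts for a block of shader units."""
    return k * units * freq_ghz * volts ** 2

base = dyn_power(1.0, 0.675, 1.00)          # baseline GF104-like config

# Option A: +30% units at unchanged clock/voltage -> power grows ~linearly.
more_units = dyn_power(1.3, 0.675, 1.00)

# Option B: +30% clock, assuming it needs ~8% more voltage to be stable.
more_clock = dyn_power(1.0, 0.675 * 1.3, 1.08)

print(f"baseline:   {base:.1f}")
print(f"+30% units: {more_units:.1f} ({more_units / base:.2f}x)")
print(f"+30% clock: {more_clock:.1f} ({more_clock / base:.2f}x)")
```

Under these assumptions the wider chip pays ~1.3x power for the same nominal throughput gain, while the higher-clocked one pays ~1.52x because the voltage term is squared, which is why "add units" is usually the cheaper route in watts (leakage and utilization complicate this in practice).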
 
Sontin: Power efficiency of the GTX 460 when boosted to GTX 470 levels isn't particularly impressive. From what Neliz is showing, it's single-digit percent less. As he states, it's nowhere remotely close to Cypress and very, very close to GF100 when clocked high enough to provide similar performance.

Check the links. The Zotac AMP card uses only 4% more power in Crysis but is 20% faster than a reference GTX460. The GTX470 needs 17% more power than the Zotac card, but without AA/AF it is only 3% faster, and with AA/AF it's 2% slower...
You won't be able to reach GTX480 level with a GF104 card, but you can come very close to it with maybe 50 watts less power consumption.
 
It's not possible to accurately predict the power consumption of a theoretical "big GF104". There are more factors that could affect the outcome than a simplistic "GF104 power draw * (1 + GF100 perf advantage)" calculation. It could go either way.
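The simplistic calculation being dismissed here looks like this in code; the whole point of the post is that it ignores leakage spread, voltage changes and binning, so treat its output as a rough anchor at best. The input numbers are purely illustrative:

```python
# The naive "big GF104" estimate the post warns against: scale GF104's
# measured power linearly by the performance gap to GF100.
# Illustrative inputs only; real silicon could land well above or below.

def naive_big_gf104_power(gf104_power_w: float, gf100_perf_advantage: float) -> float:
    """gf104_power * (1 + perf advantage) -- ignores voltage, leakage, binning."""
    return gf104_power_w * (1.0 + gf100_perf_advantage)

# E.g. a 160 W GF104 card with GF100 assumed 30% faster:
est = naive_big_gf104_power(160.0, 0.30)
print(f"naive estimate: {est:.0f} W")
```

With those inputs the formula gives 208 W, but as the post says, the real result could go either way, which is exactly why a one-line linear extrapolation isn't a prediction.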
 
Check the links. The Zotac AMP card uses only 4% more power in Crysis but is 20% faster than a reference GTX460. The GTX470 needs 17% more power than the Zotac card, but without AA/AF it is only 3% faster, and with AA/AF it's 2% slower...
You won't be able to reach GTX480 level with a GF104 card, but you can come very close to it with maybe 50 watts less power consumption.
This generalization is a bit senseless. GF104 seems to be very variable in terms of leakage. Some reviews show the default 1GB model consuming less power than an HD5850; other reviews show power consumption at the HD5870's level. The same goes for OC boards... some of them consume significantly over 200W.

The same would apply to a full-clocked GF104. Some boards would draw less power (and offer better performance per watt than a GTX470); others would be more power-hungry and end up with worse power efficiency.
 
The same goes for OC boards... some of them consume significantly over 200W.

Not really. That only happens if they increase the vcore - HT4U did it with the HAWK (and that card has a higher default vcore than other OC versions): http://ht4u.net/reviews/2010/msi_n460_gtx_hawk/index17.php.

The difference between a default GTX460 card (and the versions at 700-730MHz) and a reference GTX470 is 70 watts in games and 80 watts in Furmark.

And here is a list of a few overclocked cards: http://ht4u.net/reviews/2010/msi_n460_gtx_hawk/index17.php

The same would apply to a full-clocked GF104. Some boards would draw less power (and offer better performance per watt than a GTX470); others would be more power-hungry and end up with worse power efficiency.
That would mean a "full-clocked GF104" consumes 70 watts, or 52%, more than a reference GTX460. That will not happen.
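As a sanity check on the "70 watts or 52%" figure above: if the 70 W gap to a reference GTX 470 really equals 52% of a reference GTX 460's game power draw, the implied baseline for the 460 follows directly. This only checks the arithmetic in the post, not any measurement:

```python
# Back out the implied reference GTX 460 game power draw from the post's
# claim that the 70 W gap to a GTX 470 equals 52% of the 460's draw.

gap_w = 70.0        # games power gap quoted above (W)
gap_frac = 0.52     # same gap expressed as a fraction of the 460's draw

baseline_w = gap_w / gap_frac
print(f"implied GTX 460 game draw: {baseline_w:.0f} W")
print(f"implied GTX 470 game draw: {baseline_w + gap_w:.0f} W")
```

That works out to roughly 135 W for the 460 and about 205 W for the 470, so the two ways of stating the gap are at least internally consistent.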
 
Last edited by a moderator: