NVIDIA Kepler speculation thread

GTX 560 Ti 820MHz @170W -> GTX 580M 620MHz @ 100W : 25% clock reduction
GTX 670 ~1000MHz @170W -> GTX 680M 720MHz @ 100W : 28% clock reduction
It has a much more significant (imho surprisingly large) reduction in memory clock, though.

btw.
Is there any information that the GTX 680M is a 100W part?
What else could it be?
 
GTX 560 Ti 820MHz @170W -> GTX 580M 620MHz @ 100W : 25% clock reduction
GTX 670 ~1000MHz @170W -> GTX 680M 720MHz @ 100W : 28% clock reduction

The GTX 670 base clock is 915MHz, and the GTX 680M does not implement GPU Boost, so the difference is 195MHz, or 21%, not 28%.
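For reference, the corrected arithmetic (using the 915MHz base clock, since the 680M lacks boost, per the post above) works out like this:

```python
# Sanity check of the clock-reduction percentage discussed above.
# Figures are the ones quoted in this thread, not independently verified.
desktop_base = 915   # GTX 670 base clock, MHz (boost ignored: 680M has none)
mobile = 720         # GTX 680M core clock, MHz

delta = desktop_base - mobile
reduction = delta / desktop_base * 100
print(f"{delta}MHz, {reduction:.0f}% reduction")  # 195MHz, 21% reduction
```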

btw.
Is there any information that the GTX 680M is a 100W part?
First off, the power rating is for the whole MXM module (GPU plus memory), not the GPU alone, and yes, it is 100W.

http://www.mxm-sig.org
http://en.wikipedia.org/wiki/Mobile_PCI_Express_Module

Do you really expect Nvidia to release their top line mobile GPU at 55W and leave performance on the table?
 
Do you really expect Nvidia to release their top line mobile GPU at 55W and leave performance on the table?

Theoretically there is no problem at all. It all depends on the efficiency of the current architecture plus pressure from the competition.
That's exactly what Intel does to AMD with its 77W Ivies against poor-performing FXs.
 
Theoretically there is no problem at all. It all depends on the efficiency of the current architecture plus pressure from the competition.
That's exactly what Intel does to AMD with its 77W Ivies against poor-performing FXs.

The difference is that GK104 isn't winning against Pitcairn on perf/watt or mm²/watt, and AMD's mobile high end is Pitcairn.
 
Really? The performance delta between the HD 7970 and GTX 680 is significantly smaller than the delta between the HD 6970 and GTX 580 (or HD 5870 vs. GTX 480, HD 4870 vs. GTX 280, etc.), yet the GTX 680 had the best reception.
 
Performance is what matters period.

And when it's not beating it in perf/W on the desktop, what makes you think they could squeeze it into the same power envelope on mobile and outperform Pitcairn there?
(Though the 7970M, even as a full module, apparently isn't that close to 100W yet, leaving room for a 7990M.)
 
Kaotik said:
And when it's not beating it in perf/W on the desktop, what makes you think they could squeeze it into the same power envelope on mobile and outperform Pitcairn there
Well, the lower clocks (and, I presume, voltage) will certainly help. The 7970M is at 850MHz, after all, with 4.8Gbps memory.

I'd expect overall power consumption to be fairly close under typical circumstances.
 
Well, the lower clocks (and, I presume, voltage) will certainly help. The 7970M is at 850MHz, after all, with 4.8Gbps memory.

I'd expect overall power consumption to be fairly close under typical circumstances.

Yes, and GK104 is a much bigger chip; it's not just MHz that matters.
 
http://www.anandtech.com/show/5818/nvidia-geforce-gtx-670-review-feat-evga/17

Look at the load power for Metro 2033 (an actual game). A 7-watt difference, and the 670 was averaging clocks of 1050MHz with 6Gbps memory. So a bigger chip, clocked higher, with higher-clocked memory, pulling 7 additional watts. The 680M, on the other hand, will be clocked significantly lower (than the 7970M), with slower memory.

Kaotik said:
it's not just MHz that matters.
Funny, I don't recall saying that it was...
 
I don't know about Anand's methods or tests, but TPU measures only the actual card's consumption. In Crysis 2 the 670 consumes nearly 40% more than the 7870 on average, and a bit over 32% at peak. In pure watts that's a 37W (peak) and 41W (average) difference, and the desktop 7870 barely even goes over 100W as it is (in gaming).
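As a rough cross-check (taking the percentages and watt deltas quoted above at face value), the 7870 baseline those numbers imply can be backed out, since delta = percentage × baseline:

```python
# Back-of-the-envelope: the 7870 power draw implied by the quoted deltas.
# Percentages/deltas are as stated in the post above; baselines are derived.
avg_baseline = 41 / 0.40     # 41W delta is "nearly 40%" more
peak_baseline = 37 / 0.32    # 37W delta is "a bit over 32%" more
print(f"implied 7870 average draw: {avg_baseline:.1f}W")  # ~102.5W
print(f"implied 7870 peak draw:    {peak_baseline:.1f}W")  # ~115.6W
```

The ~102.5W average is consistent with the claim that the desktop 7870 barely goes over 100W in gaming.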
 
OK, you win: you will be able to play for 1 hour with the 7970M vs. 45 minutes with the 680M on a fully charged battery... maybe, if you are right. I guess that is a huge issue.

But if you are really concerned about the 680M's power consumption, why don't you just wait for the benchmarks to come out before buying one, instead of spewing FUD...
 
http://ht4u.net/reviews/2012/gigabyte_geforce_gtx_670_oc_windforce_3_im_test/index14.php - 55W difference in hawk
http://www.techpowerup.com/reviews/Powercolor/HD_7850_PCS_Plus/26.html - 40W (both avg and max) in crysis 2

Btw, those power figures look like they could keep a high-binned, full 1GHz 7870 below 100W most of the time, without PowerTune kicking in too much.

Ok, you win, you will be able to play for 1 hour with the 7970m vs 45 minutes with the 680m on a fully charged battery...

Hard to predict without knowing the power limiter / actual frequency range on both cards. But perf/watt (at around 100W) IS the interesting metric here.
 
Psycho said:
But perf/watt (at around 100W) IS the interesting metric here.
Why?

I mean, I may be way out of touch with reality, but it was my understanding that people who buy these machines generally game on them while they are plugged in. Certainly any serious gaming is going to be done on a stable platform.

Again, the 680M is clocked substantially lower (both memory and core) than the 7970M, whereas this is not the case for the 670 and 7870. I really don't see why it is necessary to speculate baselessly on this point when the hardware and real-world numbers will be out soon enough.
 
Is this when we start discussing potentially unannounced AMD hardware that doesn't officially exist? Because I think it might be the wrong thread.

/done.
 
Why?

I mean, I may be way out of touch with reality, but it was my understanding that people who buy these machines generally game on them while they are plugged in. Certainly any serious gaming is going to be done on a stable platform.

Again, the 680M is clocked substantially lower (both memory and core) than the 7970M, whereas this is not the case for the 670 and 7870. I really don't see why it is necessary to speculate baselessly on this point when the hardware and real-world numbers will be out soon enough.
If GPU A has twice the perf/watt of B, then, at any given target power consumption (this may be a little simplistic), A is going to be twice as fast as B. In this case, 100W.
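A minimal sketch of that argument, with made-up numbers (the 2x efficiency ratio is illustrative, not a measured figure):

```python
# At a fixed power budget, performance scales with perf/watt.
# All numbers below are illustrative assumptions, not measurements.
power_budget = 100.0       # W, the mobile envelope discussed in this thread

perf_per_watt_a = 2.0      # GPU A, arbitrary performance units per watt
perf_per_watt_b = 1.0      # GPU B, half the efficiency

perf_a = perf_per_watt_a * power_budget
perf_b = perf_per_watt_b * power_budget
print(perf_a / perf_b)  # 2.0 -> A is twice as fast at the same 100W
```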

Although I think this case comes down to how each manufacturer measures its own TDP. Case in point: the HD 7970 has a 25% higher TDP than the GTX 680, but it hardly uses more power (per TPU, Hardware.fr, and ht4u.net; speaking of which, I find it quite funny that they don't benchmark the game they use for power testing anymore).
 