NVIDIA Kepler speculation thread

If NVIDIA doesn't have anything like ZeroCore, I would argue that their power advantage under load essentially vanishes, except maybe for people who only turn their computer on for gaming.

ZeroCore kicks in when the computer has been idle for some time, correct? Anyway, I would argue that people who buy a performance graphics card NOT to use it LOSE out.
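For a rough sense of the math, here's a back-of-the-envelope sketch; every wattage and hour count below is a made-up assumption, just to show the shape of the argument:

```python
# Back-of-the-envelope daily energy use for two hypothetical cards.
# All wattages and hours are illustrative assumptions, not measurements.

HOURS_GAMING = 2   # hours/day under load
HOURS_IDLE   = 6   # hours/day the PC sits on with the display asleep

# Card A: lower load draw, but no ZeroCore-style long-idle state.
# Card B: higher load draw, but drops to ~3 W when the display sleeps.
cards = {
    "A (no ZeroCore)":   {"load": 170, "long_idle": 15},
    "B (ZeroCore-like)": {"load": 195, "long_idle": 3},
}

for name, c in cards.items():
    wh = c["load"] * HOURS_GAMING + c["long_idle"] * HOURS_IDLE
    print(f"card {name}: {wh} Wh/day")
# -> card A: 430 Wh/day, card B: 408 Wh/day
# With these assumptions the long-idle savings more than offset a
# 25 W load deficit; only a "gaming-only" usage pattern avoids that.
```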
 
Speaking of which... where is the GK100 anyway? Do we have any presumed launch/availability dates?

There's really no point upgrading now only to find your GTX 6XX part being replaced by a GTX 7XX part a few months later! :S

Although the way this is going, NVIDIA is going to ask around $800 for the 780 anyway.

But haven't we seen this before?
GTX 280 aka GT200 (June 17, 2008) > 285 aka GT200b (January 15, 2009)
GTX 480 aka GF100 (April 12, 2010) > 580 aka GF110 (November 9, 2010)
GTX 680 aka GK104 (March 22, 2012 ?) > ...
 
Speaking of which... where is the GK100 anyway? Do we have any presumed launch/availability dates?

There's really no point upgrading now only to find your GTX 6XX part being replaced by a GTX 7XX part a few months later! :S

Although the way this is going, NVIDIA is going to ask around $800 for the 780 anyway.

Our favorite mole/troll Charlie said it taped out in February. So if you apply GF100's supposed tape-out-to-release gap (six months), we are looking at an August time frame. Then again, Charlie has been consistently wrong about what GK104 was bringing to the table ever since his famous "wins in every metric" article (350 mm², dedicated PhysX logic, sometimes slower than Pitcairn, paper launch: wrong, wrong, wrong, wrong).
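For what it's worth, that projection is just month arithmetic; here's a tiny sketch (the February tape-out is Charlie's claim and the six-month gap is GF100's, so the output is only as good as those inputs):

```python
# Project a launch month from a claimed tape-out plus an assumed gap.
tapeout_year, tapeout_month = 2012, 2   # February tape-out (Charlie's claim)
GAP_MONTHS = 6                          # GF100's rough tape-out-to-release gap

total = tapeout_month - 1 + GAP_MONTHS
year, month = tapeout_year + total // 12, total % 12 + 1
print(f"projected launch: {year}-{month:02d}")   # -> 2012-08 (August)
```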
 
If I'm not mistaken their performance lead just went up dramatically. GF114 wasn't faster than Cayman. The fact that GK104 is selling at GF110 prices should make that obvious :)

Do you really think that a 1-1.1 GHz / 1792 SP "Pitcairn-like" chip (i.e. without 1/2-rate DP capability) with a 256-bit bus, based on GCN, would have been much different from GK104 in die size, power consumption and performance?
 
Do you really think that a 1-1.1 GHz / 1792 SP "Pitcairn-like" chip (i.e. without 1/2-rate DP capability) with a 256-bit bus, based on GCN, would have been much different from GK104 in die size, power consumption and performance?

It would probably be identical; too bad it doesn't exist yet.
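On the raw-throughput side at least, the comparison is easy to sketch (the 1792 SP part is the hypothetical chip from the quote above; GK104 and Pitcairn numbers are their launch specs, and this says nothing about die size or power):

```python
# Theoretical single-precision throughput: ALUs x 2 FLOPs/clock x clock.
def tflops(alus: int, mhz: float) -> float:
    return alus * 2 * mhz / 1e6

print(f"GK104 / GTX 680 (1536 ALUs @ 1006 MHz):   {tflops(1536, 1006):.2f} TFLOPS")
print(f"Pitcairn / HD 7870 (1280 SPs @ 1000 MHz): {tflops(1280, 1000):.2f} TFLOPS")
# The hypothetical 1792 SP GCN chip from the quote above:
print(f"1792 SPs @ 1000-1100 MHz: {tflops(1792, 1000):.2f}-{tflops(1792, 1100):.2f} TFLOPS")
```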
 
If I'm not mistaken their performance lead just went up dramatically. GF114 wasn't faster than Cayman. The fact that GK104 is selling at GF110 prices should make that obvious :)

Their performance lead just went up dramatically with what? An invisible chip?
 
But haven't we seen this before?
GTX 280 aka GT200 (June 17, 2008) > 285 aka GT200b (January 15, 2009)
GTX 480 aka GF100 (April 12, 2010) > 580 aka GF110 (November 9, 2010)
GTX 680 aka GK104 (March 22, 2012 ?) > ...

Of course we have and that's why I'm asking for the probable launch date.

I've been using my GF110 card for a comfortable 16 months now. I'd like to repeat that for my next upgrade. :S

I've come to think that it's not a very wise move to buy the first products that come out of a new fab process. Of course AMD proved this theory wrong with the 58XX series, but proved it right again with their 79XX series. Go figure...
 
Of course we have and that's why I'm asking for the probable launch date.

I've been using my GF110 card for a comfortable 16 months now. I'd like to repeat that for my next upgrade. :S

I've come to think that it's not a very wise move to buy the first products that come out of a new fab process. Of course AMD proved this theory wrong with the 58XX series, but proved it right again with their 79XX series. Go figure...

What's interesting about this is that both the GTX 280 to GTX 285 and the GTX 480 to GTX 580 transitions saw a 7-month gap. But those were moves from GT200 to GT200b and from GF100 to GF110. Setting aside the refresh-vs-respin debate over the latter, what's different now is that some believe we already have a fully functional GK110.
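The 7-month figure checks out against the dates quoted above; a quick check (dates copied straight from that list):

```python
# Month gaps between each big chip and its refresh.
from datetime import date

pairs = [
    ("GT200 -> GT200b (GTX 280 -> 285)", date(2008, 6, 17), date(2009, 1, 15)),
    ("GF100 -> GF110 (GTX 480 -> 580)",  date(2010, 4, 12), date(2010, 11, 9)),
]
for name, a, b in pairs:
    months = (b.year - a.year) * 12 + (b.month - a.month)
    print(f"{name}: ~{months} months")   # both come out to ~7
```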
 
I would expect that they moved the slider.

Exactly. I found the thread on XS; basically he said he had optimised the tessellation (moved the slider to 16x, I think), not left it on the "AMD optimized" setting (which has no effect; I tested it just now, and I actually get faster performance with "Tessellation set by applications" than with "AMD optimized" checked: 1-2 fps, a normal error margin with Unigine).
 
Mianca said:
Many review sites actually achieved memory speeds >1500 MHz when overclocking HD7870 ... so the memory controller is certainly up to the task.
Oops.
Well, it's still possible that the MC wasn't designed for those clocks and that you're crossing the boundaries of reliability there, but, yeah... ;)
 
Probably would be identical, too bad it's non-existent as of yet.

That's fine; I only wanted to point out that the differences, as far as we know, are not that large, and come down more to the actual implementation of each architecture than to one being a "better architecture". Once the whole lineup has launched, we'll see the full picture.
 
nothing there now

Yup, but according to them:

- 18% faster overall than the HD 7970
- 172% of Fermi's perf/watt; the HD 7970 has 117%
- load noise 39.6 dB; the 7970 is 43.7 dB
- In BF3 @ Ultra 1080p it's not only faster than the HD 7970 but also the GTX 590
- In BF3 @ Ultra 1600p it's 2 fps faster than the HD 7970
- It smokes the HD 7970 in Crysis 2 @ 1080p: 69.7 vs 55.6 fps
- In Dirt 3 the performance gap is the same as in Crysis 2, even at 1600p, where it's faster than both the GTX 590 and the HD 6990

... and then the link stopped working :(
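Taking those leaked figures at face value (and assuming the "% of Fermi" baseline means GTX 580 perf/watt = 100%, which is my guess), the implied power draw is easy to back out:

```python
# Back out a power ratio from the leaked figures (leak taken at face value).
perf_ratio = 1.18               # GTX 680 vs HD 7970, "18% faster overall"
ppw_680, ppw_7970 = 1.72, 1.17  # perf/watt relative to the Fermi baseline

ppw_ratio = ppw_680 / ppw_7970          # ~1.47x the 7970's perf/watt
power_ratio = perf_ratio / ppw_ratio    # watts = perf / (perf per watt)
print(f"perf/watt advantage: {ppw_ratio:.2f}x")         # -> 1.47x
print(f"implied power vs HD 7970: {power_ratio:.2f}x")  # -> 0.80x
```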
 