NVIDIA Kepler speculation thread

This is NVidia's "RV770 moment", though this time AMD has nowhere to hide, whereas NVidia back then still had the absolute performance crown.

I wouldn't go that far. RV770 was considerably smaller than GT200 and opened up a huge perf/mm² lead. This is just Nvidia taking the lead by a small margin.

Even then, it's doubtful they'll retain that lead against Pitcairn once their competitor in that segment shows up.
 
There's a missing power phase yet again on GTX 680 boards, just like on the HD 7970. Maybe that's why Fuad and the other green PR guys are screaming "buy it now": the price will come down, and you might get a GTX 685 for the same price they're asking for the GTX 680 now.
 
We'll never know why G80 got a hot clock in preference to a single clock, or why NVidia rejected the single-clock approach.
Maybe it was inevitable? G80 was already big enough for the aging 90nm process of the time, and putting more SIMD lanes per multiprocessor at the base clock wasn't an option if they wanted to meet their GFLOPS target. :???:
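Rough arithmetic makes the trade-off concrete. A minimal sketch (Python), using G80's published figures; the 575 MHz base-clock alternative is hypothetical:

[code]
# Back-of-the-envelope GFLOPS: hot-clock design vs. a hypothetical
# base-clock design that has to match it with wider SIMDs.

def gflops(lanes, clock_ghz, flops_per_lane=2):  # 2 = one MAD per cycle
    return lanes * clock_ghz * flops_per_lane

g80_hot = gflops(128, 1.35)   # 128 SPs at the ~1.35 GHz shader clock
print(f"G80 hot-clock: {g80_hot:.0f} GFLOPS")  # ~346 GFLOPS

# Matching that at the ~575 MHz base clock takes ~2.3x the lanes,
# i.e. correspondingly more ALU area on an already-large 90nm die.
lanes_needed = g80_hot / gflops(1, 0.575)
print(f"Lanes needed at base clock: {lanes_needed:.0f}")  # ~300
[/code]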
 
Just woke up and have to make breakfast and lunches for the kids... most of the links are down...
Has anyone benchmarked 3x 1920x1200 or higher setups? I see some 2560x1600 results, but nothing higher.
 
I don't see how anyone can argue that a hot-clocked design is easier than a single-clock design.
Who, exactly, are you arguing with? :LOL:
As I've already said, the only element I can think of here is that compilation complexity may be dramatically higher with Kepler, and compilation complexity was what NVidia was running away from back then.

So the theory is that nVidia ran away from software complexity into the arms of architectural complexity and kept that up for GT200 and Fermi?

Plausible, but we haven't seen anything indicating that Kepler's instruction issue is more compiler-dependent than Fermi's. There are four independent schedulers for six SIMDs, so it still seems very much hardware-focused.
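Worth noting what the published SMX layout implies, though. A quick sketch (Python) of the issue-rate arithmetic, assuming the widely reported GK104 configuration of 4 schedulers with 2 dispatch units each feeding 192 lanes:

[code]
# Issue-rate arithmetic for one GK104 SMX (published configuration assumed):
# 192 ALU lanes in 6 groups of 32, fed by 4 warp schedulers with dual dispatch.
simds, lanes_per_simd = 6, 32
schedulers, dispatch_per_scheduler = 4, 2

issues_needed = simds                              # 6 warp instructions/cycle
from_distinct_warps = schedulers                   # 4/cycle without dual-issue
peak_issue = schedulers * dispatch_per_scheduler   # 8/cycle with dual-issue

# Keeping all 6 SIMDs busy requires dual-issuing from some warps, i.e. a
# second independent instruction has to be found -- which is where the
# compiler could matter more than it did on Fermi.
print(issues_needed, from_distinct_warps, peak_issue)  # 6 4 8
[/code]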
 
I wouldn't go that far. RV770 was considerably smaller than GT200 and opened up a huge perf/mm² lead. This is just Nvidia taking the lead by a small margin.

Well, given that they had a MASSIVE deficit there and managed to outgun AMD in just one architectural generation, that's pretty impressive.

Even then, it's doubtful they'll retain that lead against Pitcairn once their competitor in that segment shows up.

How do you know this? Sounds more like wishful thinking to me.
 
Well, given that they had a MASSIVE deficit there and managed to outgun AMD in just one architectural generation, that's pretty impressive.

While AMD switched to a compute chip and Nvidia ditched theirs?

How do you know this? Sounds more like wishful thinking to me.

Because Pitcairn is hugely more efficient than Tahiti in mainstream gaming?
 

512 KB of L2, but faster: 512 bytes/cycle vs. 384 bytes/cycle on GF110.

[image: pipeline comparison chart]
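Assuming the L2 runs at the core clock (Nvidia doesn't document the L2 clock, so that's an assumption), the per-cycle figures translate like this (Python):

[code]
# L2 bandwidth from bytes/cycle, assuming the L2 runs at the core clock.
def l2_bandwidth_gbs(bytes_per_cycle, core_clock_mhz):
    return bytes_per_cycle * core_clock_mhz / 1000.0   # GB/s

gk104 = l2_bandwidth_gbs(512, 1006)   # GTX 680 base clock
gf110 = l2_bandwidth_gbs(384, 772)    # GTX 580 core clock
print(f"GK104: {gk104:.0f} GB/s, GF110: {gf110:.0f} GB/s, "
      f"ratio {gk104 / gf110:.2f}x")  # ~515 vs ~296 GB/s, ~1.74x
[/code]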
 
While AMD switched to a compute chip and Nvidia ditched theirs?

Isn't it too soon to know that? If you're basing it on the Sandra numbers, I think it's advisable to wait for others. Besides, those numbers were produced on a driver version that didn't seem to run the right clocks (706 MHz vs 1006 MHz).
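For scale, assuming a compute-bound score tracks clock roughly linearly (a simplification), running at 706 MHz instead of 1006 MHz would understate it by about 30%:

[code]
# How much a clock-bound score is understated at the wrong clock.
reported_mhz, rated_mhz = 706, 1006
print(f"scaling {rated_mhz / reported_mhz:.2f}x, "
      f"understated by {1 - reported_mhz / rated_mhz:.0%}")  # 1.42x, 30%
[/code]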

Because Pitcairn is hugely more efficient than Tahiti in mainstream gaming?

Then you're merely comparing AMD GPUs. That says absolutely nothing about upcoming nVIDIA GPUs in that segment :rolleyes:
 
The fillrate tests are interesting..

Overall roundup: the difference shrinks at higher resolutions (not sure whether it carries the same disease over from Fermi or it's just a lack of bandwidth), though it still manages to come out on top.

[image: overall performance roundup]
 
Power consumption isn't as distinctive as the NV PR slides dictate; both cards act similarly under load. The 680 is faster, though, hence the better perf/W ratio.
Idle consumption:
GTX 680: 14.8 W avg
HD 7970: 13.5 W avg
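Which is just to say: with load power roughly equal, the perf/W gap is basically the performance gap. A sketch with placeholder numbers (the ~10% lead and the 190 W load figure are illustrative, not measurements):

[code]
# With similar load power, perf/W advantage == performance advantage.
load_watts = 190.0                 # placeholder: assume both draw about this
perf_680, perf_7970 = 1.10, 1.00   # placeholder: ~10% average lead

ppw_680 = perf_680 / load_watts
ppw_7970 = perf_7970 / load_watts
print(f"GTX 680 perf/W advantage: {ppw_680 / ppw_7970 - 1:.0%}")  # 10%
[/code]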


 