Nvidia BigK GK110 Kepler Speculation Thread

Yes, easily. The Jaguar cores in the upcoming consoles are far, far weaker than desktop CPU cores.

Well, they're not that much weaker than Bulldozer/Piledriver cores, on a per clock basis. But yes, still far slower than Sandy/Ivy/Haswell.

Also, the PS4 uses GDDR5, which means acceptable bandwidth, but crappy latency.
 
A Kepler SMX is drastically different from a Fermi SM. It runs instructions from 4 warps every cycle instead of two, and is two-way superscalar. This means it can in theory issue 8 instructions per cycle (=8x32=256 MADs), but only has 6 execution ports, giving you the 192 (=6x32) figure.
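A quick back-of-envelope check of those numbers (a minimal sketch; the per-SMX figures are the ones quoted above, and 32 is simply the warp width):

```c
#include <stdio.h>

int main(void) {
    /* Kepler SMX figures from the paragraph above */
    int schedulers          = 4;  /* warp schedulers per SMX              */
    int issue_per_scheduler = 2;  /* dual-issue: 2 instructions/scheduler */
    int exec_ports          = 6;  /* ports that can accept a MAD          */
    int warp_width          = 32; /* threads (lanes) per warp             */

    /* Theoretical issue limit: 4 x 2 warp-instructions -> 256 MADs/cycle */
    printf("issue limit: %d MADs/cycle\n", schedulers * issue_per_scheduler * warp_width);

    /* Execution limit: only 6 x 32 = 192 ALU lanes, hence the 192 figure */
    printf("exec limit : %d MADs/cycle\n", exec_ports * warp_width);
    return 0;
}
```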

The bandwidth number is misleading in this regard since you also have to take into account the clock rate being lowered.

Wait, in summary:

CUDA compute capability 3.x devices have 4 warp schedulers. This means that each SM can execute 4 warps per clock. Each of these schedulers can also issue 2 instructions at once.
This means, as you said, 256 MADs. But I don't get what those execution ports are...??
 
That's what I'm running currently :(

Carsten, what did the CPU usage look like during those tests?

Here's a CPU usage graph of my Q9550 @ 4 GHz running Crysis 3 public MP. The GPUs were two 5850s @ 850 MHz. The settings were at max, bar AA.



The game was running at about 40-45 fps. It's one of the very rare moments that I have seen such CPU usage from my CPUs. The 5850s were CPU limited.

Luckily my 2500K @ 4 GHz was running the game at a solid 60 and still had another 15% spare. Didn't have the chance to check my 860 yet, since it's waiting for my next GPU upgrade to get some GPUs of its own! :)

I'd like a mod to transfer all Crysis 3 posts to an appropriate thread. I find the whole Crysis 3 resource usage very interesting. I believe it's the pinnacle of gaming.
 
OK, seriously: the consoles have 8 cores running at 1.6 GHz. Can 4 cores running at 3.2 GHz match them in performance?
I used to think they could, but after seeing Crysis 3 I don't anymore!

I think that yes, a 4-core CPU at 3.2 GHz can match the console CPU, certainly at least if you're looking at a 2500K. But on a console you have much less / no driver call overhead and less API overhead, and importantly, the settings are tweaked so it will actually display less stuff than an "all max" PC game.

What effect the settings have on CPU load remains to be seen. What if moving one or two sliders from "extreme" to "high" makes the game run 20% faster, and on top of that you overclock your CPU by 20%? Then you could be playing fine, I think.
(Or "almost fine", which, granted, is very frustrating :D)
 
Looks like I was close with my 30% prediction.

http://www.overclock.net/t/1362327/various-gtx-titan-previews/1580#post_19344015

 
Depending on benchmark config, it looks like a 40-50% average difference compared to a GTX 680 and 30-40% compared to a 7970 GE. That's definitely not outside any reasonable expectations so far.

Besides, average values have that idiotic tendency to be made up of higher and lower individual results.
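For example (purely made-up per-game ratios, just to illustrate how an average hides the spread):

```c
#include <stdio.h>

/* Hypothetical per-game speedups of card A over card B; illustrative only */
int main(void) {
    double speedup[] = { 1.60, 1.21, 1.45, 1.30 };
    int n = sizeof(speedup) / sizeof(speedup[0]);

    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += speedup[i];

    /* The ~39% average hides a swing from +21% to +60% across titles */
    printf("average speedup: %.0f%%\n", (sum / n - 1.0) * 100.0);
    return 0;
}
```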
 
[Titan benchmark charts at 1600p]


Far Cry 3 seems to be the new BF3 with its wildly different results: Titan 60% faster on Guru3D and 21% faster on Linus.

Linus also has it 29% faster than the 7970 GHz edition overall. 41% faster than the 680. Video was pulled.

Starting to think Nvidia has made a mistake here. They are drawing attention to the GHz Edition being far better value not only than Titan but also its real competition, the 680.
 

SKYMTL (Hardware Canucks) will be really happy to see part of his review in the wild before he posts it. (I don't say that about you, but about the guys who pushed it out.) (Anyway, something strange: is 690 SLI scaling really that bad?) Also, I know the latest AMD driver has worked some miracles, but I find the gap to the 680 a bit large. Or did Nvidia deliberately use games where the 600 series wasn't so strong (Sleeping Dogs, Hitman, Crysis 3, many Gaming Evolved titles)?
 
Yeah, the Eyefinity benchmarks give a massive lead for the GHz Edition over the 680. This was part of Nvidia's problem here: they basically had to ask for the card to be benchmarked at 1600p and above in order to open a decent gap to the 680, but in doing so they've made the GHz Edition pull away from the 680 as well. This is quite a reversal from when they managed to convince the press that the 680 was the perfect 1080p card (I believe [H] didn't even bench it at 1600p initially).

I guess they had no option, but anyone in the market for a $500 card (and I'd imagine the market for $500 cards is at least 5x the market for $1000 cards) can't be in much doubt about which one is the better choice currently.
 
Better results at Guru3D; however, they didn't appear to include the AvP score, which wasn't a good one...

http://www.overclock.net/t/1362327/various-gtx-titan-previews/1560#post_19343859



I guess we'll be looking at anything between 25% as a low point and possibly as high as 45% (faster than the GHz Edition) as a high point. Pretty crazy when you think about it: 45% is reasonable and 25% is terrible; that's quite a difference.

The charts from overclock.net clearly show Titan 60.61% faster than the 7970 GE in Far Cry 3 and 65.71% faster in Hitman, so your "possibly as high as 45%" is completely wrong from the start. What cherry-picked titles did you use to base your conclusions on?
 
The date of release shows today's date, 2013.02.21.

Did reviewers have this already, or can we expect today's reviews to come out using older drivers?

It's the WHQL release, but the driver has been ready for a while (it's the same as the last beta anyway), and in general Nvidia gives reviewers access to all the material for the review, including drivers, in case they have a new one.
Don't worry: if they have a driver that increases performance, or even a new BIOS, they will quickly send it to reviewers.
 