NVIDIA Kepler speculation thread

I count 5 wins, 2 losses, and 2 ties in those guru benches.

Or, more negatively, you could look at it as 5 wins versus 4 losses or ties.

I thought Skyrim was all CPU-limited, though?

Regardless, it looks like it should force AMD to drop the 7970 to at LEAST $499, if not $449.
 
Rangers said:
I count 5 wins, 2 losses, and 2 ties in those guru benches.

Or, more negatively, you could look at it as 5 wins versus 4 losses or ties.
You (haha, just kidding) could also have remarked that in 7 out of 9 cases, the chip with a 256-bit bus beats or equals the one with a 384-bit bus. That's really quite astonishing.
 
I'm not sure one of those losses is even really a loss, though. Since they flipped the colors on that graph relative to every other one, it's just as likely a typo. Never mind that they flipped them at least three times. They really should keep a consistent color scheme; it makes the graphs a pain to read when the colors flip at random like that.
 
You (haha, just kidding) could also have remarked that in 7 out of 9 cases, the chip with a 256-bit bus beats or equals the one with a 384-bit bus. That's really quite astonishing.

No, not really. It just means bandwidth wasn't that much of a bottleneck in those tests. Also, Kepler has faster-clocked RAM, which makes up some of that difference.
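To put rough numbers on that, here's a back-of-the-envelope sketch; the bus widths and data rates below are the commonly reported reference specs, so treat them as assumptions until the official figures land:

```python
# Peak memory bandwidth in GB/s: bus width (bits) / 8 * effective data rate (GT/s).
# Specs below are the commonly reported reference figures (assumed, not official).
def bandwidth_gbs(bus_bits: int, data_rate_gtps: float) -> float:
    return bus_bits / 8 * data_rate_gtps

print(f"GTX 680 (256-bit @ 6.0 GT/s): {bandwidth_gbs(256, 6.0):.0f} GB/s")  # ~192 GB/s
print(f"HD 7970 (384-bit @ 5.5 GT/s): {bandwidth_gbs(384, 5.5):.0f} GB/s")  # ~264 GB/s
```

So the faster GDDR5 claws some of it back, but Tahiti still has roughly 37% more raw bandwidth; winning or tying 7 of 9 tests anyway just reinforces that those tests weren't bandwidth-bound.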

Looks like AMD got pretty seriously out-engineered here, though. For the first time in forever, I'm kind of shocked. Not a good sign at all.

Though I guess it was concluded that most of the die-size difference is down to the bus difference (which actually probably makes Tahiti more forward-looking, and thus not necessarily a "bad" decision).

Another problem is that there are just too many games now that are "Nvidia games". When there get to be so many of them, they make up half of any benchmark suite, no matter how you slice it.
 
Muropaketti at least will be running Eyefinity/Surround tests in addition to single-screen tests, which should show how much the bandwidth and memory amount limit performance.
 
You said "the same can be said for AMD" in response to my comment that nVidia's base clock is now much higher on 28nm. Since AMD's base clock has been pretty high for several generations already, I don't know what sameness you're referring to...

You're completely missing his point.

The clocks for GK104 are the same as the clocks for Pitcairn/Tahiti, for all intents and purposes.

What Jawed has been saying is that if Nvidia hadn't gone with hot clocks, they would have had roughly the same clock speeds on each node, just like it was before G80.

Hence, it's nothing new or exciting that GK104 can hit the same clocks AMD has been able to, now that they have ditched the hot clock.

So basically...

Pre-G80, clocks between Nvidia and ATI were roughly the same at the same nodes.

From G80 through GF1xx, Nvidia had lower base clocks due to the hot clock, but in exchange had a higher shader clock.

GK1xx+, we're back to pre-G80 days where there is roughly clock parity between Nvidia and AMD.

Hence, it's the same once again. Nothing about 28nm suddenly allowed GK104's base clock to skyrocket, at least no more than it allowed AMD's base clock to go up. It was the change in architecture that accounts for the majority of the clock increase, while the process node basically allowed for the same clock headroom as AMD's. Just as Jawed has been saying, i.e., the same as it was pre-G80, before Nvidia went to a hot clock.

And had Nvidia not gone with the hot clock, they likely would have had similar base clocks to AMD the past few years. Nothing new...

Regards,
SB
 
It would be interesting to know why Tahiti is so limited in some scenarios. Of course there is a 79-106 MHz clock difference, but Tahiti also has more shading (and bandwidth) resources. Texturing capability per clock is the same, but as I recall FP16 filtering is full speed on NV chips.
Geometry? "Magic ROPs"? FP16 filtering rate?
A clock-per-clock analysis would be useful.
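The crudest version of that analysis is just dividing each benchmark result by core clock. A minimal sketch, assuming the reported reference clocks (1006 MHz base for GK104, 925 MHz for Tahiti); the fps figures are placeholders to show the method, not real measurements:

```python
# Normalize benchmark results by core clock to compare per-clock throughput.
# Clocks are the reported reference specs; the fps values are illustrative only.
CLOCKS_MHZ = {"GTX 680": 1006, "HD 7970": 925}

def fps_per_mhz(fps: float, chip: str) -> float:
    return fps / CLOCKS_MHZ[chip]

# Hypothetical test result: 60 fps vs 55 fps.
ratio = fps_per_mhz(60.0, "GTX 680") / fps_per_mhz(55.0, "HD 7970")
print(f"GK104 per-clock advantage in this (made-up) test: {ratio:.2f}x")
```

It ignores GPU Boost, of course, which muddies any fixed-clock normalization for GK104.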
 
What Jawed has been saying is that if Nvidia hadn't gone with hot clocks, they would have had roughly the same clock speeds on each node, just like it was before G80.

And he knows this how?

Hence, it's nothing new or exciting that GK104 can hit the same clocks AMD has been able to, now that they have ditched the hot clock.

Ah, so this is mostly about pooh-poohing Kepler in response to the hype. Hint: if people are excited about the chip, it's not because of the clock speed.

Nothing about 28nm suddenly allowed GK104's base clock to skyrocket.

That's great, because nobody made that claim. Jawed asked why they hadn't taken the same approach with G80. The obvious answer is that they weighed options for each architecture and node and chose accordingly. They also obviously learned from 6 years of experience. Or is there something more sinister afoot?
 
The power draw numbers from tweaktown are not what we were led to believe...

When it comes to the power draw side of things we can see that overall it's pretty good, and load numbers sit around 50-60 watts higher than some of our reference clocked cards from AMD. While that might sound like a lot, overall we're still dealing with power draw that sits around the 450 watt mark.

On the idle side of things, though, you can see we're not as aggressive as the HD 7970 which can sit at around the 130 watt mark while the GTX 680 sits closer to the 200 watt mark.

It draws more power than a stock 7970 in their tests.

http://cdn5.tweaktown.com/content/4...pler_2gb_reference_card_video_card_review.png

In Guru3D's test, though, it draws less than the 7970, though not by a ton: http://i.imgur.com/Fkmm0.png

Given Tahiti's vast edge at idle, where these cards will spend most of their time, I think the overall power edge currently goes to Tahiti, based on the few leaks so far.
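That intuition is easy to sanity-check with a duty-cycle-weighted average. A sketch using TweakTown's total-system figures quoted above; the 7970's load number (~395 W) is inferred from their "50-60 watts higher" remark, and the 90% idle / 10% load split is purely my assumption:

```python
# Duty-cycle-weighted average system power draw.
# Idle/load watts follow TweakTown's total-system figures quoted above;
# the 90% idle / 10% load split is an assumed usage pattern.
def avg_power_w(idle_w: float, load_w: float, idle_frac: float = 0.9) -> float:
    return idle_frac * idle_w + (1 - idle_frac) * load_w

print(f"HD 7970: {avg_power_w(130, 395):.1f} W")  # ~156.5 W
print(f"GTX 680: {avg_power_w(200, 450):.1f} W")  # ~225.0 W
```

On those numbers the idle gap dominates, as long as the card really does sit idle most of the time.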

Overall I see few games where the 7970 actually beats the 680, though. I think AMD might have to go all the way to $449 to maintain decent sales.

And it makes the 7950 practically redundant, though Pitcairn is still a good chip.

It also means that while AMD is going to spit out the 7990 in a couple of weeks and retake the performance lead, a dual-GK104 card should eventually be the trump card.

I'd also like to see the official GK104 die size from an unbiased source like AnandTech, to give us a clearer picture.
 
And had Nvidia not gone with the hot clock, they likely would have had similar base clocks to AMD the past few years. Nothing new...

Was it not similar? You shouldn't compare Nvidia's >500 mm² monsters with AMD's ~300 mm² performance chips.

If you compare similarly sized chips:
G92b vs RV770: 738 vs 750 MHz
GF114 vs Cayman: 820 MHz (most vendors over 900 MHz) vs 880 MHz
 
But any numbers? ;) If you're saying those numbers are wrong, you must know whether it will have more or less than those specs :D

Wait a second. What's the difference between me saying it isn't so and someone who puts the contradicting statement in the form of a screenshotted spreadsheet and uploads it to some website?
 
It looks to me like the 680 might be as much faster than the 7970 as the 580 was than the 6970... the difference being that it now does it with a smaller die, rather than one ~200 mm² larger.

Summary of guru3d benches

[image: table summarizing Guru3D benchmark results]


Though Trinibwoy tells us die size doesn't matter; well, at least it used not to :p
 
From what I can see in the leaked benchmarks, minimum frame rates are better on the HD 7970... and power usage would be similar if the HD 7970 got rid of the DP logic like the 680 did...
As I wrote in another post, I consider the GCN architecture much more elegant than the Fermi/Kepler one, but AMD must launch a Pitcairn-like Tahiti chip to crush the competition, or it will lose a win that is in its hands this generation...
 
As I wrote in another post, I consider the GCN architecture much more elegant than the Fermi/Kepler one, but AMD must launch a Pitcairn-like Tahiti chip to crush the competition, or it will lose a win that is in its hands this generation...
Does it need to? Are halo chips that important?

AMD could just resurrect the 'sweet spot' strategy and focus on Pitcairn-based cards and dual-Pitcairn SKUs, and allow board makers to develop, with their own money, the likes of the 'Asus Mars' cards based on Tahiti for those who want the highest single-card performance.

$150-300 cards sell in significantly greater volumes than $300-600 cards.
 
And had Nvidia not gone with the hot clock, they likely would have had similar base clocks to AMD the past few years. Nothing new...
Thanks, it was getting tedious.

We'll never know why G80 has a hot clock in preference to a single clock, i.e. why NVidia rejected the single-clock approach.

I don't see how anyone can argue that a hot-clocked design is easier than a single clock design. Otherwise GPUs would have been running at G80's hot-clock back in the day. GPUs have historically been "wide and slow", which most definitely points in the direction of what is easier to do as a fab customer in the GPU chip world. G80 etc. clearly bucks that received wisdom and compromises by running only the ALUs at hot clock, keeping the rest at considerably lower clocks.
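The arithmetic behind that trade-off is straightforward: peak shader throughput is just ALU count × shader clock, so a hot clock buys throughput with fewer (but harder-to-implement) ALUs. A quick sketch using the commonly cited unit counts and clocks for GTX 580 and GTX 680 (treat the GK104 figures as assumptions until confirmed):

```python
# Peak FP32 throughput = ALUs * shader clock * 2 (an FMA counts as 2 FLOPs).
# Unit counts and clocks are the commonly cited specs for each card.
def peak_gflops(alus: int, shader_clock_mhz: float) -> float:
    return alus * shader_clock_mhz * 2 / 1000

print(f"GTX 580: {peak_gflops(512, 1544):.0f} GFLOPS")   # 512 ALUs @ 1544 MHz hot clock
print(f"GTX 680: {peak_gflops(1536, 1006):.0f} GFLOPS")  # 1536 ALUs @ 1006 MHz base clock
```

Kepler triples the ALU count at roughly two-thirds of Fermi's shader clock, which is exactly the "wide and slow" direction GPUs historically favored.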

As I've already said, the only element I can think of here is that compilation complexity may be dramatically higher with Kepler, and compilation complexity was what NVidia was running away from back then. We'll have to wait and see.

None of this changes the fact that Kepler looks extremely tasty. The big one coming later this year should be genuinely great, a massive step up from GF110 (a true generational jump, something for GTX 580 owners to relish). This is NVidia's "RV770 moment", though this time AMD has nowhere to hide, whereas NVidia back then still had the absolute performance crown.
 