NVIDIA Kepler speculation thread

The only thing this "leaked" review did was stir the pot. I agree - some of the games used are questionable and some of the settings are worthless to bench $500 cards at. When pcperspective, anandtech, techpower, techreport, hardocp, hardwareheaven, and hardwarecanucks get their reviews online, those will be the only sites I visit to draw my conclusions from.
 
There are results from 2560x1600 too, you know.
Still, look at their own, older benchmarks.

Same settings, same game, same everything.

Even the scores of the other cards (apart from HD7970 and HD7950, curiously) line up really nicely.

Only HD7970 and HD7950 suddenly got 10%+ slower :LOL:
 
What he is trying to say is that HD7970 and 7950 should be 10% higher on those graphs.
Somehow only those two cards lost 10% since the last Tom's benchmark.

But I never really check Tom's reviews anyways so whatever.
 
[attached image: leaked GTX 670 Ti benchmark chart]


670 Ti at default 7xx MHz frequency
"680" at default 11xx MHz frequency
680 at default 100x MHz frequency
There is also an overclocked-edition 670 Ti.
Maybe the simulated 670 Ti is 1536 CCs @ 706 MHz and the normal 670 Ti is 1344 CCs @ 780 MHz. They also share the 300.99 driver, with some kind of promotion codes.

http://www.redquasar.com/forum.php?mod=viewthread&tid=7763
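
If those leaked configurations are accurate, the raw shader throughput of the two 670 Ti variants works out almost identical. A quick sketch in Python (core counts and clocks taken from the leak above; purely illustrative):

# Rough throughput comparison of the two rumored 670 Ti configs.
configs = {
    "simulated 670 Ti": (1536, 706),  # (CUDA cores, core clock in MHz)
    "normal 670 Ti":    (1344, 780),
}

for name, (cores, mhz) in configs.items():
    # Peak single-precision GFLOPS = cores * clock * 2 FLOPs (FMA) / 1000
    gflops = cores * mhz * 2 / 1000
    print(f"{name}: {cores} CCs @ {mhz} MHz -> ~{gflops:.0f} GFLOPS peak SP")

That puts the two configs within roughly 3% of each other in theoretical shader throughput (~2169 vs ~2097 GFLOPS), which would explain why both could plausibly be floated as the same SKU.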
 
There are Metro runs at higher resolutions, but what's with the 16x12-ish resolution? Who does that anymore?
Where's my 3600x1920 benchies!?

I'm sure Widescreengaming will get those out. It'll be interesting to see how it compares in situations that will actually push an enthusiast class card.

Will Nvidia be releasing a 670 on the 22nd?

Previously they were released in pairs: 480/470, 580/570.

22nd 680/670?

Rumor is that 670 is launching later. How much later? Who knows...

Regards,
SB
 
Another one to show just how ridiculous that "six games average" is - even compared to their own benchmark suite (from outside the special GTX 680 review):

[attached image: benchmark comparison chart]


Given the HD 7870's performance in that second benchmark, I suppose they just copied launch-driver results for the HD 7950.

Be that as it may: The same GTX 580 card that was almost trumped by a tiny little HD 7870 a few weeks ago now wins by 8% over HD 7950. :rolleyes:
 
Why don't you apply the "woulda-coulda-shoulda" logic to GK104 as well? You know, to balance the fairy tale equation :)

I think I explained it quite well in a later post: the architecture is one thing, the implementation is another. I don't think that GCN as an architecture is so different from Kepler in terms of performance and power consumption (as Pitcairn seems to suggest), but of course the real cards can be another matter, depending on the choices made in the design. Of course, the GTX 680 seems to be quite a good card compared to the 7970.

So your sarcasm is completely wasted; if you want to troll, please choose another target.

And, BTW, the translation is "the heart cannot be commanded", but I assure you that my heart is quite neutral here.
 
I don't think that GCN as an architecture is so different from Kepler in terms of performance and power consumption (as Pitcairn seems to suggest), but of course the real cards can be another matter, depending on the choices made in the design.

Yet you're basing your evaluation of Kepler's performance and power consumption on the implementation of one chip/card. Do you not see the flaw in your logic? You can't compare a hypothetical GCN based chip to a real Kepler based one and arrive at any useful conclusion.
 
Yet you're basing your evaluation of Kepler's performance and power consumption on the implementation of one chip/card. Do you not see the flaw in your logic? You can't compare a hypothetical GCN based chip to a real Kepler based one and arrive at any useful conclusion.

The flaw in my logic is the same flaw in YOUR logic, as it's you who first declared that with Kepler Nvidia increased its performance lead, while we don't know IF they will have a GK100/110 or whatever, how it will perform, or what kind of response AMD could have in hand by the time GK100/110 launches. Also, you based this conclusion on comparing Tahiti and GK104 only, leaving Pitcairn out of the discussion, even though its efficiency is quite a bit higher than Tahiti's, even if they are based on the same architecture.
And GK110 or whatever will surely have to spend part of its transistor budget on boosting DP capability, which of course will not help efficiency in gaming scenarios...
 
It's easier to grasp what I'm saying if we stay away from the land of make-believe.

GF114: ~360mm^2
Cayman: ~390mm^2
Cayman wins by ~15%

GK104: ~300mm^2
Tahiti: ~365mm^2
GK104 (reportedly) wins by ~10%

Pitcairn: ~212mm^2
Cayman: ~390mm^2
Pitcairn wins by ~8%

If the obvious isn't obvious then I can't help you further :) Pitcairn is highly efficient, of course, and nobody's doubting that. However, nVidia now needs far less die area to achieve equal or better positioning than in previous generations. How that can be construed as anything but an improvement is beyond me.

I understand the "Tahiti is overburdened with compute" angle, but that card has been played many times for GF110 already. GF114 was also stripped of compute, just as Pitcairn and GK104 are now, but was nowhere near this level of relative performance.
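
A back-of-the-envelope sketch of what those numbers imply for perf/area, treating the quoted win percentages as straight performance ratios (a simplification, but it makes the positioning argument concrete):

# Perf-per-area advantage for each matchup quoted above:
# divide the performance ratio by the die-area ratio.
matchups = [
    # (winner, loser, perf ratio, winner mm^2, loser mm^2)
    ("Cayman",   "GF114",  1.15, 390, 360),
    ("GK104",    "Tahiti", 1.10, 300, 365),
    ("Pitcairn", "Cayman", 1.08, 212, 390),
]

for winner, loser, perf, area_w, area_l in matchups:
    advantage = perf / (area_w / area_l)
    print(f"{winner} vs {loser}: ~{advantage:.2f}x perf per mm^2")

Cayman's edge over GF114 comes out around 1.06x perf/mm^2, while GK104's edge over Tahiti is around 1.34x. That gap is exactly the generational improvement being argued here.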
 
The reference to SKU positioning issues as a yardstick of architecture is rather perplexing, correct percentages or not.


Plus, keep in mind that Cayman didn't always win: it still had trouble with compute and tessellation at extreme factors, even if you count those as corner cases.
 
<rant>
As a consumer, my give-a-fuck-o-meter is at dead ZERO when talking about how much perf per millimeter you can gimme. When we start talking about millimeters, I start thinking of dick-waving contests.

You know what interests me? Performance in the games I play as a function of how much money comes out of my wallet -- with an obvious minimum performance level that must be achieved to be considered. Even as an enthusiast, when I go online to NewEgg or Amazon to order either a GTX 670 Ti or a 7950 in the next 30-ish days, the item I place in my cart will not be decided by total die size.

Die size is an interesting metric for noodling, but it's a useless metric for performance and, as history has proven, unrelated to my purchase price.

</rant> Carry on.
 
Nvidia is better positioned than its previous generations, of course. Nobody denies that.

It's the sentence "extends their performance lead" that is, IMO, quite misleading.
Now, AMD went with GCN for an architecture that is similar, in the general concept, to what Nvidia had since G80.
But, to define if Nvidia has a "performance lead" we should define what a "performance lead" is.
Nvidia had an absolute performance lead with the Tesla/Fermi lines simply because they went for a very big chip and AMD did not. But AMD was winning on the perf/area metric, thanks to their VLIW architecture, which of course has some drawbacks. Perf/area should affect pricing, so it is somewhat interesting.

Now AMD's architecture has changed, and it is more similar to what Nvidia has. If we want to compare the architectures, then we have to look at the whole lineup, not only the case most favourable to Nvidia. And if we look at chips other than Tahiti, we see that Pitcairn, which has the same DP limitations as GK104, shows, even in these "preliminary" benchmarks, a perf/area similar to the Nvidia chip's.

So, it's all about the design targets. AMD could have aimed for the same die sizes as Nvidia's chips, with a similar number of SPs and similar performance. But of course they chose a different strategy, which could be better or worse than Nvidia's; who knows. They could perhaps go for a "GF100/5870" situation in reverse by releasing a higher-clocked Tahiti, increasing power consumption but reclaiming the absolute performance lead until GK100/110 or whatever is released.
 
Albuquerque said:
<rant>
As a consumer, my give-a-fuck-o-meter is at dead ZERO when talking about how much perf per millimeter you can gimme. When we start talking about millimeters, I start thinking of dick-waving contests.
...
</rant> Carry on.
If you don't care about millimeters, what are you doing posting in a forum about architecture (and, tangentially, its consequences on consumer pricing)?
 
If you don't care about millimeters, what are you doing posting in a forum about architecture (and, tangentially, its consequences on consumer pricing)?

Maybe I should return the rhetorical question? Why do millimeters matter when talking about architecture and pricing? An architecture has no specific size, and by pure virtue of history, which we can now see with 20/20 hindsight, die size has absolutely no direct correlation to how much the end product (i.e., the video card I will insert into my machine) costs.

Sure, you can make up fantastical "what ifs" by breaking your own rule and constructing a strawman about how an eleventy-brazillion-millimeter die would drain me of untold fortune in order to possess it, but we both know that's completely disconnected from reality.

So, why do millimeters matter in architecture and pricing? Don't give me the strawman "what if" scenario; give me something tangible that has historical relevance.
 
Given equal performance, the more efficient process will save you money. You could look at some of the Bitcoin math that people do for real-world scenarios. On this site you should certainly expect all aspects to be covered. If you don't care, I suggest you skip over posts with that content. I don't think it's imperative that others know that you don't care.
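
For what it's worth, a minimal sketch of that kind of math; the wattage gap, duty cycle, and electricity rate below are all assumptions for illustration, not measured figures for any particular card:

# What a power-draw gap costs over a year of heavy use (illustrative).
watt_delta = 50        # assumed extra draw of the less efficient card, in W
hours_per_day = 8      # assumed heavy gaming / mining duty cycle
usd_per_kwh = 0.12     # assumed electricity rate; varies by region

extra_kwh = watt_delta * hours_per_day * 365 / 1000
print(f"Extra energy: {extra_kwh:.0f} kWh/year")
print(f"Extra cost:   ${extra_kwh * usd_per_kwh:.2f}/year")

Under those assumptions it's on the order of $17-18 per year, which may or may not matter depending on how long you keep the card.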
 
Given equal performance, the more efficient process will save you money. You could look at some of the Bitcoin math that people do for real-world scenarios. On this site you should certainly expect all aspects to be covered. If you don't care, I suggest you skip over posts with that content. I don't think it's imperative that others know that you don't care.

I never said anything pro or con about power efficiency (which is what you're describing; efficiency can be measured in many ways). Nevertheless, you can (and obviously have) made a very plain case for why power efficiency is something we should care about. Even for those not interested in Bitcoin (I am not), you can obviously make the case that lower thermal output means less noise, or that lower power draw while gaming is great for mobile platforms.

So, again, power efficiency is something that's plainly important for multiple reasons. Physical size of the die? I think even that could be important, when talking about pad space for wiring, or perhaps total surface area for thermal transfer under load. But performance per millimeter? That metric, insofar as I can discern, has no bearing on anything tangible.

Do you have examples of why performance per millimeter matters? We know it isn't related to the cost of the video card, because we have multiple cases in history where large and "inefficient" (in terms of perf/mm) dies sold cheaply. We also have cases of the opposite.

Edit:
Here's where I'm going with this: the last umpteen pages have had like four people talking about how perf/mm affects the video device of their chosen deity. But I see absolutely NOTHING regarding why anyone should care. Hell, even the "perf" part of that perf/mm metric is too abstract to begin with. Perf in what? Double float adds? Integer adds? MULs? Bogos? Do we go by actual measured output or theoretical maximums? What application would be used to generate the measurement, if we aren't using theoreticals?

Sure, the whole measurement aspect would be a cool discussion -- and has been the source of many pretty awesome brainstorming threads on this forum over the last several years.

What the hell does performance per millimeter give us, in terms of why we should care more or less about Kepler? It doesn't affect price, power consumption, performance, heat, driver flaws, gaming results, or anything else that I can conceive of.
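
To make the "perf in what?" ambiguity concrete, here is one possible reading -- theoretical peak single-precision FLOPS -- computed from the public launch specs of the two flagships (a sketch for illustration, not a claim that this is the right metric):

# Theoretical peak SP throughput from public launch specs.
cards = {
    "GTX 680 (GK104)":  (1536, 1006),  # (shaders, base clock in MHz)
    "HD 7970 (Tahiti)": (2048, 925),
}

for name, (shaders, mhz) in cards.items():
    # One FMA per shader per clock = 2 FLOPs
    gflops = shaders * mhz * 2 / 1000
    print(f"{name}: ~{gflops:.0f} GFLOPS peak SP")

On paper Tahiti "wins" (~3789 vs ~3090 GFLOPS), yet the gaming benchmarks discussed in this thread have GK104 ahead -- a neat example of theoretical maximums and measured output pointing in opposite directions.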
 