NVIDIA Kepler speculation thread

What color is the sky in your world?

Show me a video card where perf per mm of die space materially affected the price of the video card.

Yes, I know "big" dies are more expensive than "small" dies, when all else is exactly equal. But all else is never exactly equal; NVIDIA and AMD get different rates on their silicon. They have different contracts on how they pay for yields. And when you're talking the difference between a $22 die and a $31 die on a card that costs $599 retail, the nine dollars is fundamentally rounding error.

You obviously could make an argument for why a 400mm^2 die doesn't make sense on a sub-$100 part, because now you're talking about a price target that materially (in a non-rounding-error way) impacts the bottom line of that card. But we're talking about Kepler, the GTX680, the cream of the crop, the Big Daddy that's going to be a quarter (or more) of the price of a pretty hardcore i7-2700K, SSD-equipped, 16GB of DDR3-2133MHz RAM gaming rig.
 
Albuquerque said:
the nine dollars is fundamentally rounding error.
Not to AMD or Nvidia (or their partners).

As for how that affects you the consumer, well if they gave them away for free they wouldn't be in business very long...

FWIW, I agree perf/mm^2 isn't the most important metric. Perf/watt, absolute performance, and perf/dollar are all more important to me as a consumer, but that is a far cry from saying perf/mm^2 is meaningless (and that it doesn't indirectly impact other metrics).
 
GM% = ((sale price) - (COGS))/(sale price)

So if that $599 card has $275 in materials/labor, then GM% = 54%.
A $9 increase in COGS drops that to GM% = 53%.

Not huge. 1%.

Over sales of 1M cards it's $9M in incremental margin lost on $315M GM$. Again, not amazing, but sure they care.
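
For anyone who wants to poke at that arithmetic, here's a minimal sketch; every input is just the hypothetical number from above ($599 price, $275 COGS, the $9 die delta, 1M units), not real BOM data.

```python
# Minimal sketch of the gross-margin arithmetic above.
# All inputs are the hypothetical figures from this post, not real BOM data.

def gross_margin_pct(sale_price, cogs):
    """Gross margin as a fraction of the sale price: (price - COGS) / price."""
    return (sale_price - cogs) / sale_price

SALE_PRICE = 599.0        # hypothetical retail price
BASE_COGS  = 275.0        # hypothetical materials/labor cost
DIE_DELTA  = 9.0          # the disputed $9 die-cost difference
UNITS      = 1_000_000    # hypothetical sales volume

base_gm   = gross_margin_pct(SALE_PRICE, BASE_COGS)              # ~54%
bumped_gm = gross_margin_pct(SALE_PRICE, BASE_COGS + DIE_DELTA)  # ~53%

print(f"GM% at ${BASE_COGS:.0f} COGS: {base_gm:.1%}")
print(f"GM% at ${BASE_COGS + DIE_DELTA:.0f} COGS: {bumped_gm:.1%}")
print(f"GM$ lost over {UNITS:,} cards: ${DIE_DELTA * UNITS:,.0f}")
```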
 
Not to AMD or Nvidia (or their partners).

As for how that affects you the consumer, well if they gave them away for free they wouldn't be in business very long...
Strawman. Nobody is giving anything away for free.

Gross margins: Yes, I get that.

Why does performance per millimeter matter one iota on either of these metrics? Even as a pure thought experiment, we cannot directly measure the size, and there are several hundred ways (or more) that we could measure performance. We can pull strawmen all day about how SI is the winner in INT OPS / mm^2, oh but wait, GK is now the winner in DP FLOPS / mm^2. Oh but wait, INT is more worthwhile because herp derp, no way because DP FLOPS is way more important because of derp herp.

Seriously, why is anyone defending Kepler (or arguing against it) based on perf/mm? On a die whose true size we don't know, with performance that is (at best) speculation? Who was it that had that quote? "You're taking apples plus oranges to compute grapes, and then comparing it to apples." Or something. That's what this whole diatribe feels like.

What am I missing?
 
How is perf/mm^2 not a factor if some GPUs have die sizes that are close to what's technically feasible?

I find it fascinating how one architecture performs the same workload with more or less logic. Where else do you suggest we should discuss this to avoid boring you to death?
 
I have to agree with ABQ's point. If buyers don't care about perf/mm (and I don't think they do) but instead care only about outright performance then the manufacturer with the $9 higher COGS might sell 10x the volume and laugh all the way to the bank. Remember GM$ is always more important than GM%.
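
A rough sketch to put hypothetical numbers on that, reusing the $599 / $275 / $9 figures from above (none of them real BOM data):

```python
# Hypothetical continuation of the $599 / $275 / $9 example above.
gm_small_die = 599 - 275        # $324 gross margin per card
gm_big_die   = 599 - (275 + 9)  # $315 per card for the vendor eating the $9

# If the pricier die buys 10x the volume, total GM$ dwarfs the GM% hit.
print(f"1M cards  @ ${gm_small_die}/card -> ${gm_small_die * 1_000_000:,} GM$")
print(f"10M cards @ ${gm_big_die}/card -> ${gm_big_die * 10_000_000:,} GM$")
```

Ten times the volume at roughly a point less margin is still nearly ten times the gross margin dollars.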
 
Mize said:
I have to agree with ABQ's point. If buyers don't care about perf/mm (and I don't think they do) but instead care only about outright performance then the manufacturer with the $9 higher COGS might sell 10x the volume and laugh all the way to the bank. Remember GM$ is always more important than GM%.
I'm pretty sure buyers don't care about the differences between VLIW4, VLIW5 and GCN either. So that's also out of place in an architecture forum then? If not, what exactly is your point in bringing this up?
 
I'm pretty sure buyers don't care about the differences between VLIW4, VLIW5 and GCN either. So that's also out of place in an architecture forum then? If not, what exactly is your point in bringing this up?

LOL. I have been owned. :)


Edit: Hey wait a minute!!! It's not an architecture thread!!!
 
How is perf/mm^2 not a factor if some GPUs have die sizes that are close to what's technically feasible?
Based on what performance metric? Based on what measurement of die space, since we can't know what it is? We're talking about an unknown number that varies based on how you decide to measure it, divided by another unknown number that is at best only somewhat better constrained. To give us what, exactly?

I find it fascinating how one architecture performs the same workload with more or less logic. Where else do you suggest we should discuss this to avoid boring you to death?
Then wouldn't we be talking about perf / transistor count? I get it, performance per ROP is an interesting metric. Performance per TMU is too, and so is performance per amount of register space. I understand why those are interesting questions for architecture, and they too belong here. But performance per total surface area of the physical die, when density is A: unknown and B: nonlinear, makes no sense.

You just claimed a 9 dollar price difference on a 21 dollar die was irrelevant. /facepalm
No, you misread. It's plainly up there for you to see, and for everyone else to see how you spun it. I quite clearly stated that $9 on a $599 market price is rounding error. And it is.

I'm pretty sure buyers don't care about the differences between VLIW4, VLIW5 and GCN either. So that's also out of place in an architecture forum then? If not, what exactly is your point in bringing this up?
No argument here.
 
Yes, I know "big" dies are more expensive than "small" dies, when all else is exactly equal. But all else is never exactly equal; NVIDIA and AMD get different rates on their silicon. They have different contracts on how they pay for yields. And when you're talking the difference between a $22 die and a $31 die on a card that costs $599 retail, the nine dollars is fundamentally rounding error.
FYI: while it doesn't affect your argument, you're quite far off on the die cost of a $599 retail card. Outside of the workstation market no one has that kind of margin.
 
FYI: while it doesn't affect your argument, you're quite far off on the die cost of a $599 retail card. Outside of the workstation market no one has that kind of margin.

Fair enough. Last I recall seeing any die costs, they were pretty low-digit stuff. I made up a horrible hypothetical based purely on what may be my poor memory. Nevertheless, even if I'm off by a factor of two on my die pricing, I'm uncertain how it would generally affect my question of why perf/mm matters.

Nevertheless, I'd also be interested in knowing more factual numbers on die costs if you know of any. I guess I should go Googling :)
 
Then wouldn't we be talking about perf / transistor count? I get it, performance per ROP is an interesting metric. Performance per TMU is too, and so is performance per amount of register space. I understand why those are interesting questions for architecture, and they too belong here. But performance per total surface area of the physical die, when density is A: unknown and B: nonlinear, makes no sense.
perf/mm^2 is more useful than perf/transistor as density plays into architectural decisions. perf/mm^2 is also relevant to a technical discussion because it dictates the part cost for an IHV even though it only sets a minimum cost for the consumer.
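
A toy illustration of the density point (all numbers invented, not real chip data): two designs with the same transistor budget and the same performance land at very different perf/mm^2 if one architecture packs its transistors more densely.

```python
# Toy numbers only; meant to show why density matters, not to model real chips.

def die_area_mm2(transistors_millions, density_mtr_per_mm2):
    """Die area implied by a transistor count and a packing density (MTr/mm^2)."""
    return transistors_millions / density_mtr_per_mm2

PERF = 100.0          # hypothetical benchmark score, identical for both chips
TRANSISTORS_M = 3500  # identical hypothetical transistor budget (in millions)

area_a = die_area_mm2(TRANSISTORS_M, density_mtr_per_mm2=10.0)  # 350 mm^2
area_b = die_area_mm2(TRANSISTORS_M, density_mtr_per_mm2=12.0)  # ~292 mm^2

print(f"Chip A: {area_a:.0f} mm^2 -> {PERF / area_a:.2f} perf/mm^2")
print(f"Chip B: {area_b:.0f} mm^2 -> {PERF / area_b:.2f} perf/mm^2")
# Same perf/transistor for both, but chip B wins on perf/mm^2 -- and on die cost.
```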
 

Haha, I said make-believe, not "rumour to be verified in less than 48 hours".

<rant>
As a consumer, my give-a-fuck-o-meter is at dead ZERO when talking about how much perf per millimeter you can gimme.

As a consumer I completely agree but this is Beyond3D's architecture forum, not Newegg ;)

But to decide whether Nvidia has a "performance lead" we should first define what a "performance lead" is. Nvidia had an absolute performance lead with the Tesla/Fermi lines simply because they went for a very big chip and AMD did not.

nVidia is still going for a big chip; that hasn't changed.

See, you can't assume that they have no big chip unless you assume that GK104 was always meant to occupy the flagship role. Now ask yourself how in the world nVidia could have assumed that a < 300mm^2 chip could beat AMD's best when they've been so far behind the perf/mm curve for several generations in a row? If we want to assume stuff then the assumptions should at least make sense!

It's a rather pointless argument anyhow, I think the facts (will) speak for themselves.
 
perf/mm^2 is more useful than perf/transistor as density plays into architectural decisions. perf/mm^2 is also relevant to a technical discussion because it dictates the part cost for an IHV even though it only sets a minimum cost for the consumer.

I know that architecture can drive part of the transistor density, but we have no way to know the total transistor count, as you're already well aware. So density itself is only partially useful in terms of how much actual data we have to work with. And then "performance" is yet another cloudy metric; what performance are we even talking about? Floating-point ops? Integer ops? Texture addressing rate? Primitives rate? Raster rate? At what clock speed? Pure theoretical output or observed? Which application are we using to derive the measurement, and is that app capable of deriving a pure result that isn't otherwise bottlenecked?

We're taking a purposefully inaccurate transistor count (i.e., a PR number) to derive density that fully ignores process capability outside of the architecture, and then dividing by a single dimension of observed performance that likely cannot be wholly separated from the other two dozen (or more) observable dimensions that any singular GPU might have. Hell, comparing the individual performance dimensions against each other is far more useful and interesting (cycles being the most typical dimension -- flops per cycle, ROPs per cycle, blah-de-blah). I get all of this, except why square millimeters has any useful link to performance.
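
For what it's worth, the per-cycle framing is at least easy to make concrete. A sketch with entirely hypothetical unit counts and clock: the usual theoretical peaks fall straight out of units × rate-per-clock × clock, which is exactly the kind of apples-to-apples comparison that doesn't need a die measurement.

```python
# Entirely hypothetical unit counts and clock, just to show the per-cycle arithmetic.
ALUS, TMUS, ROPS = 1536, 128, 32
CLOCK_HZ = 1.0e9  # 1 GHz

peak_flops = ALUS * 2 * CLOCK_HZ  # assuming 2 FLOPs per ALU per cycle (FMA)
texel_rate = TMUS * CLOCK_HZ      # assuming 1 texel per TMU per cycle
pixel_rate = ROPS * CLOCK_HZ      # assuming 1 pixel per ROP per cycle

print(f"Peak: {peak_flops / 1e12:.2f} TFLOPS, "
      f"{texel_rate / 1e9:.0f} GTexel/s, {pixel_rate / 1e9:.0f} GPixel/s")
```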

The part cost for the IHV / minimum cost to consumer would be driven by total die size, not perf/mm^2, correct? Unless of course we're going to stretch the example to the absolute maximum die size and then find the area under the curve to generate some concept of maximal performance that could ever be obtained. Except that's not entirely true, because a simple speed change would then move the goal posts once more.

Guess I just am not going to get it; performance per die area seems like so much handwaving to me. That doesn't mean I have any bearing on reality or this conversation / thread, but since so many people were absolutely ready to hand-wave me into the ground on why I must be a moron not to understand, I at least thank one or two of you for actually articulating rather than just doing facepalms and asking about the colors of the sky.
 
The part cost for the IHV / minimum cost to consumer would be driven by total die size, not perf/mm^2, correct?
Correct, though perf/mm^2 can dictate margins. The perf part of perf/mm^2 is no more accurate than perf/dollar and other hand wavy metrics. However, it's more interesting to people now because that status quo has seemingly changed.
 
Guess I just am not going to get it; performance per die area seems like so much handwaving to me.

It has a direct bearing on absolute performance when competitors have considerably different die-sizes.

Correct, though perf/mm^2 can dictate margins. The perf part of perf/mm^2 is no more accurate than perf/dollar and other hand wavy metrics. However, it's more interesting to people now because that status quo has seemingly changed.

Exactly. Perf/mm on its own has always been interesting but what makes it especially relevant with Kepler is how much perf/mm has increased and what that means for the 28nm competitive landscape.
 
Correct, though perf/mm^2 can dictate margins. The perf part of perf/mm^2 is no more accurate than perf/dollar and other hand wavy metrics. However, it's more interesting to people now because that status quo has seemingly changed.

Even dollars seem like something more tangible: even when the cost moves (as it always does), at least it has some bearing on reality. Nevertheless, I thank you for your time, and now vow to return the focus of the thread to the people who actually know WTF is going on, and to the fanboys from both sides who are here to decry or encourage their chosen eyeglass color :cool:

I'm still pretty damned excited for the 680, and if the 670Ti is going to be as cool as the hints suggest, then I might finally find myself back on Team Green for the first time since the 7900GT-on-AGP days...
 
It has a direct bearing on absolute performance when competitors have considerably different die-sizes.



Exactly. Perf/mm on its own has always been interesting but what makes it especially relevant with Kepler is how much perf/mm has increased and what that means for the 28nm competitive landscape.


Oh yes, die size is VERY important NOW...predictably enough.

Could have sworn not long ago you were arguing with me that perf/mm was irrelevant and that you didn't understand why it was discussed so much... don't feel like looking up those posts, though.
 