NVIDIA Kepler speculation thread

I have this great idea for a website: write a whole bunch of nonsense breaking news about company XYZ. Then a bit later, I write more breaking news that my earlier breaking news is probably too hard for company XYZ because of whatever other nonsense reason. Then even later, I write that my first breaking news won't happen after all, because, wouldn't you know it, it was indeed too hard for company XYZ. And then I write "I told you so." And nobody will ever be able to prove me wrong.

Rinse, lather, repeat.
++1
 
I don't consider 300 vs 350 all that relevant a die size difference.
If it's indeed 300mm2, then a die size difference of 16% sounds a lot less than a difference of 21%, doesn't it? Hint: you accidentally used 350 instead of 365.
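For what it's worth, a quick sanity check of the arithmetic (a throwaway Python snippet, using the rumored 300mm2 for GK104 and the published 365mm2 for Tahiti):

```python
# Relative die size difference, rumored GK104 vs. published Tahiti.
gk104 = 300.0   # mm^2 (rumored)
tahiti = 365.0  # mm^2

print(f"{(tahiti - gk104) / gk104:.1%}")  # ~21.7% with the correct 365
print(f"{(350.0 - gk104) / gk104:.1%}")   # ~16.7% with the mistaken 350
```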

I also suspect there's some fanboy number fudging going on in those Kepler measurements, as I understand it's actually pretty hard to get an exact die measurement (you'd have to rip the die out, which nobody will ever do).
Like fanboy Charlie, who floated a price of $299? :LOL: (Dear Godwin, where were you when the word 'fanboy' was invented?)

All die size guesses are done in the open: somebody posts a picture on the web, others bring out Photoshop and do whatever they think reasonable to come up with an estimate. These estimates tend to vary wildly due to the lack of calibration references or low-resolution source images. Feel free to point out all the inherent pro-Nvidia biases in that process if you like; I'd attribute it more to uncertainty.

And, BTW, you really don't need to rip out a die to measure it. A vernier caliper should get the job done just fine, give or take a few hundred microns to account for the glue around the die.
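To make the Photoshop exercise concrete, here's a minimal sketch of how such an estimate works; the reference dimension and pixel counts below are invented purely for illustration:

```python
# Photo-based die size estimate: scale pixel measurements by a visible
# calibration reference of known size (e.g. the package substrate).
# All numbers below are made up for illustration.
ref_width_mm = 37.5              # assumed known package width
ref_width_px = 1500              # package width measured in the photo
die_w_px, die_h_px = 700, 690    # die edges measured in pixels

mm_per_px = ref_width_mm / ref_width_px
area = (die_w_px * mm_per_px) * (die_h_px * mm_per_px)
print(f"estimated die area: {area:.0f} mm^2")  # ~302 mm^2
```

An error of a few percent in either pixel measurement moves the result by 10mm2 or more, which is exactly why the guesses vary so wildly.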

Still funny how die size is really important now, when all those years AMD led it was irrelevant...
You never get tired of this slam dunk argument, do you?

If anything I'd compliment AMD on their larger die, as I think it's a better idea to have the performance lead with a larger die than not (especially when it's 300 vs 350, rather than the 350 vs 500+ that AMD has typically been at relative to Nvidia in the past). I've personally been wishing AMD would start pushing their die sizes up for a while. Although they didn't, at least this time it is larger than the competitor's.
Tahiti has a size of 365mm2. Cayman had a size of 384mm2. I don't quite see the upward trend here, but, hey, compliments to them!
Also, I don't follow you: you're complimenting AMD for making a larger chip that has the same performance? Where were you when Nvidia was in that position? Why don't you simply make the argument I made earlier? There's no shame in having a bigger die if you can monetize that difference by opening up new markets. And it won't make you look like a tool.

Bottom line: this is the first time in a long time, ages honestly, that Nvidia apparently won't have clear single-GPU performance leadership, for an indefinite period, and that is unequivocally a step back for them.
What if they simply share the GPU performance leadership with a die that's more than 20% cheaper to produce, on a cheaper PCB with less RAM and a less powerful power supply circuit? I think they'll like that bottom line much more.
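To put a rough number on "cheaper to produce", here's the classic candidate-dies-per-wafer approximation (yield and wafer pricing ignored, and the GK104 area is still just the rumor):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_d_mm=300):
    """Textbook candidate-dies-per-wafer approximation (ignores yield)."""
    r = wafer_d_mm / 2.0
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_d_mm / math.sqrt(2.0 * die_area_mm2))

print(dies_per_wafer(300.0))  # ~197 candidates for a 300mm2 die
print(dies_per_wafer(365.0))  # ~159 candidates for a 365mm2 die, ~24% fewer
```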
 
GK104's die is shrinking every day. I'm afraid it will simply vanish into oblivion before it's even released!

And it's not released yet, right? So right now Nvidia isn't sharing much of anything.
 

We have an alleged die shot.

[Image: nvidia-k104-core-2.jpg]
 
R600 was an epic fail, especially because its shelf life was less than 6 months. Tahiti is far from that. Once we see a die shot of Pitcairn, it should be relatively easy to estimate the cost of DP on the size of the shaders. If it's on the order of 20% (as I expect), then that would bring a non-DP Tahiti in at a size similar to or smaller than GK104. What really seems to be happening this generation is that Nvidia is closing the area efficiency gap that's been there for years. With two players close to maximum efficiency, going forward the one thing that will determine performance will not be technical competency but a game of marketing bluff: how to trade off die size vs. performance vs. features (DP/ECC or not) vs. what your competitor will do.
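A sketch of that back-of-the-envelope, where the 20% DP overhead is the figure speculated above and the shader-area fraction is my own guess, purely for illustration:

```python
# Hypothetical non-DP Tahiti size. The 20% DP overhead on shader area
# is the speculation above; the shader fraction of the die is a guess.
tahiti = 365.0           # mm^2
shader_fraction = 0.55   # assumed: ~55% of the die is shader array
dp_overhead = 0.20       # speculated: DP makes the shaders ~20% bigger

shader_area = tahiti * shader_fraction
dp_area = shader_area * dp_overhead / (1.0 + dp_overhead)
print(f"non-DP Tahiti: ~{tahiti - dp_area:.0f} mm^2")  # ~332 mm^2
```

That lands in the same ballpark as the GK104 guesses floating around, which is the point.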

I'd say that another determining factor, especially going forward, will be software, namely the combination of the stack that an IHV offers and its developer-centric efforts. One need only look at the pro market, or at where CUDA sits versus CL, to see that NV has been stellar on that front, whereas AMD is... hmm... let us be indulgent and say laggard.
 
Err... maybe I'm missing something, but how does allowing a 52MHz boost make any sense whatsoever? I mean, are people supposed to fist-pump when they get a 5% speed bump (if that)?
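For reference, the arithmetic, assuming the rumored ~1006MHz base clock (hypothetical until the launch numbers are out):

```python
# Boost as a fraction of base clock; the 1006 MHz base is the rumor.
base_mhz, boost_mhz = 1006.0, 52.0
print(f"{boost_mhz / base_mhz:.1%}")  # ~5.2%
```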
 
No hot-clock? *snif*

Or the "base clock" should imply there's actually a shader domain?

I didn't see anything in the translation that contradicted the existence of the hot-clock, but one would suppose that if there were a higher clock in the system, it would be on the slide. One can't expect the shader count to have risen so dramatically unless "no hot-clock", "changes to 'core'", "dramatic increase in shader area use", or "faerie dust". If there is any faerie dust, I'd expect it to be aimed at the thermal cost of data movement, not at core counts...
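To illustrate why dropping the hot-clock demands a big jump in shader count, a quick peak-throughput comparison; the GTX 580 figures are published, the GK104 figures are the rumored ones:

```python
# Peak FP32 rate = shaders * 2 ops (FMA) * clock.
def gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000.0

print(gflops(512, 1544))   # GF110 with hot-clock: ~1581 GFLOPS
print(gflops(1536, 1006))  # rumored GK104, no hot-clock: ~3090 GFLOPS
```

Triple the shaders at roughly two-thirds the clock is about a 2x raw throughput gain, no faerie dust required.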
 