NVIDIA Kepler speculation thread

Kaotik said:
it's not the first time nVidia is the only one having so many problems; it happened with 40nm too: once TSMC fixed the initial problems, everyone else was happy, but nVidia continued to get bad yields.
I wonder what your sources are, because I know for a fact that your statement is 100% BS. The yield problems lasted much longer than 'initial problems'. More like 'a year or so'.

As for the yields continuing to be bad for Nvidia after that: read their financial statements. They say just the opposite.
 
If it's really just NV, then I stand corrected.

I guess one thing that I find a little opaque is how, if so much design automation / standard cell libraries / etc... are in use, one company can get significantly worse results than another (assuming both have functional designs, i.e. excluding logical design bugs like the Pentium FDIV fiasco).

Of course, the amount and granularity of redundancy included on the chip is a factor, as is die size. But your die size is known, and redundancy is presumably designed in based on (at least) projected defect densities. I would assume all fab customers get the same numbers from TSMC. Since TSMC claims they are accurate (at least for this metric), what else could cause a yield estimate to be too optimistic?
 
psurge said:
But your die size is known, and redundancy is presumably designed in based on (at least) projected defect densities. I would assume all fab customers get the same numbers from TSMC.
This is pretty much how it is in real life.

Since TSMC claims they are accurate (at least for this metric), what other things could cause a yield estimate to be too good?
Even TSMC can only predict, based on prior experience, what the defect rate is going to be going forward. Sometimes it's too optimistic, sometimes the opposite.

You will get variation between different companies: there are differences in metal stackup (how many metal layers, their characteristics, which layers to use for power, etc.), there are different cell libraries with different rail widths, many RAMs are custom designed, and so on. There is obviously the case where AMD explicitly designed around the via issue on 40nm, which is a nice example of design for manufacturability.

But despite all that, in broad lines, similar designs from different companies will have similar yields.
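To illustrate why die size and projected defect density dominate these estimates, here's a minimal sketch of the classic Poisson yield model. The defect density and die areas below are made-up numbers for illustration, not actual TSMC figures:

```python
import math

def poisson_yield(defect_density_per_cm2, die_area_mm2):
    """Classic Poisson yield model: Y = exp(-D * A).

    Assumes defects are uniformly distributed and that any
    single defect kills the die (i.e. no redundancy/repair).
    """
    area_cm2 = die_area_mm2 / 100.0  # 100 mm^2 = 1 cm^2
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Hypothetical numbers: the same projected defect density
# hits a large die much harder than a small one.
d0 = 0.1  # defects per cm^2 (made up)
print(f"200 mm^2 die: {poisson_yield(d0, 200):.1%}")  # ~81.9%
print(f"500 mm^2 die: {poisson_yield(d0, 500):.1%}")  # ~60.7%
```

On-die redundancy (spare shader clusters, repairable RAMs) makes effective yield better than this simple model predicts, and real defects tend to cluster rather than fall uniformly, which is why foundries often use negative-binomial variants instead; but the basic area sensitivity is the same, and it's why two companies fed the same defect-density numbers can still land on different yields if their die sizes and redundancy strategies differ.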
 
dnavas said:
Aggressive assumptions about yield benefits for removing hot-clock or other changes to internal architecture?
A CPU die is dramatically different from a GPU so architecture will play a role, but the parameters that matter for yield should be pretty similar for GPUs by different companies. A hot clocked design may have a bit more variability, but I don't think it will be material enough to matter. (Let's not forget the lower area, which should improve yield.)

I mean: if a hot clock would have lower power but also lower yield, I'm sure you'd still go for the lower power.
 
IMHO, I am done with the yield-rumor crap. For every generation NVIDIA has produced since the GT200, Charlie comes out and says NVIDIA has a yield problem. Frankly, this is getting tedious, boring, and ridiculous. Every time, he comes out and says stuff like that with no substantial evidence; we don't even see a noticeable impact on the company's financials if something like that is true.

No one cares if NVIDIA, Intel, or AMD has a yield problem; every company suffers from such things one way or another, and apparently it's Charlie's job to downplay the problem for the other companies and to magnify it for NVIDIA. What we care about is the final product. Charlie can inflate such things to get some site traffic as much as he likes; I am not falling for his BS once again.
 
To be fair, he did reference a statement from a financial call. That said... sorry if I dragged down SNR (though I do think silent_guy has compensated for a net positive).

Wish there were something concrete to discuss/compare with GCN!
 
I wonder what your sources are, because I know for a fact that your statement is 100% BS. The yield problems lasted much longer than 'initial problems'. More like 'a year or so'.
Sorry, I was tired last night and didn't think about how long 40nm was actually in use before the HD 5000 and GTX 4xx series. Regardless, I meant the phase where they supposedly "fixed the problems" around a year ago.
 
Can somebody clue me in on how this "double confirmed" joke was born? :)

Some random post on a Chinese forum or something with crazy rumors about an upcoming product line (maybe Southern Islands, I can't remember). It didn't make much sense, but some of the "information" was boldly marked DOUBLE CONFIRMED.

Edit: Dr Evil beat me to it, but the original post has apparently been deleted. If I recall correctly, it had quite a bit of stuff in it, with a table and comments along the lines of "A, B and C are just rumors, I'm not sure, D, E and F are probably solid but not double confirmed, G, H and I are double confirmed".
 
Somebody correct me if I'm wrong (the Southern Islands thread is a monster), but I think it's not even clear which 28nm process AMD is using.

Could they be using the 28LP process at TSMC? Would that explain why they were able to release 28nm products so much earlier than everybody else?

I also remember reading about TSMC having problems with the 28HP process (high-k metal gate), which could be the process nVIDIA opted to use, but I'm not sure how that turned out.
 
Could they be using the 28LP process at TSMC? Would that explain why they were able to release 28nm products so much earlier than everybody else?

I also remember reading about TSMC having problems with the 28HP process (high-k metal gate), which could be the process nVIDIA opted to use, but I'm not sure how that turned out.

28HPL anyone?
;)
 
IMHO, I am done with the yield-rumor crap. For every generation NVIDIA has produced since the GT200, Charlie comes out and says NVIDIA has a yield problem. Frankly, this is getting tedious, boring, and ridiculous. Every time, he comes out and says stuff like that with no substantial evidence; we don't even see a noticeable impact on the company's financials if something like that is true.

You don't really need to look very far for evidence when Nvidia is screaming about poor yields. They did it on 40nm, they are now doing it on 28nm - http://www.electronicsweekly.com/Articles/17/02/2012/52998/nvidia-cites-poor-28nm-yields.htm

No one cares if NVIDIA, Intel, or AMD has a yield problem; every company suffers from such things one way or another, and apparently it's Charlie's job to downplay the problem for the other companies and to magnify it for NVIDIA. What we care about is the final product. Charlie can inflate such things to get some site traffic as much as he likes; I am not falling for his BS once again.
I dunno how you can say it's BS when Nvidia themselves are saying it's a problem. As for not impacting their financials, it was the main reason for their low Q1 outlook. You could probably do with forgetting about Charlie for a bit and just looking at the pretty overwhelming evidence.
 
All expectations were dashed at PDXLAN: all nVidia demonstrated with Gearbox was Borderlands 2 running on Tegra 3, not a single word about Kepler.
 