Isn't it about 550mm²?
It's not reticle limited - it's cost limited. It certainly couldn't have been 800mm² or something crazy like that, but it could have been a fair bit bigger without hitting the reticle limit (the reticle field is roughly 26mm x 33mm, i.e. ~850mm²).
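For intuition on why cost bites long before the reticle does, here's a toy die-cost sketch (Python, all numbers made up - the wafer price is purely illustrative and the die-per-wafer formula is the standard edge-loss approximation, nothing NVIDIA-specific):

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Classic estimate: wafer area over die area, minus an edge-loss
    # term proportional to circumference over die diagonal.
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

WAFER_COST = 5000  # USD - hypothetical; real 40nm wafer pricing was never public

for area in (350, 550, 700, 850):  # ~850mm² is roughly the reticle field
    dies = gross_dies_per_wafer(area)
    print(f"{area}mm²: {dies} gross dies/wafer, ~${WAFER_COST / dies:.0f} per die")
```

Even before yield enters the picture, the per-die cost more than doubles going from ~350mm² to ~700mm², which is why the economics give out well before the 26x33mm field does.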
This whole reticle limit thing started when David Kanter asked John Nickolls about it for GT200 at the Tesla Editor Day. I was in the same room (along with Damien Triolet and, iirc, Theo Valich - I'm not kidding) and he basically didn't deny it, so David reasonably interpreted that as confirming it was indeed reticle limited. It wasn't - presumably we just misinterpreted him, and frankly John wasn't really the right person to ask anyway. Nobody, no matter how smart, can know everything at a company the size of NVIDIA or in a field as broad as 3D graphics. And he was definitely very nice in person. RIP
(see RecessionCone's post below; here's an article link)
But let's not forget the other side of the case: there is no question that TSMC fixed the issues completely in early 2010, and 40nm quickly became a very stable, high-yielding process without any via-doubling monkey business.
Ding ding. Thank you! I'm tired of people forgetting about that - TSMC clearly said so at the time, but people didn't believe them, so it apparently slipped out of everyone's mind. It happened in January 2010 according to one of my sources (not sure if that's when they applied the fix or when they got the first fixed wafers back - presumably the latter).
The 40nm process is of exceptional quality right now, without monkey business.
Exactly - and the best proof of that is its massive commercial success. TSMC 40nm is now one of the most successful processes in the entire history of the foundry industry. It is TSMC's fastest process ever to surpass its predecessor in volume - and in fact their fastest ever to become their largest revenue contributor!
rpg.314 said:
People's focus on yields. They see considerable disparity and latch onto it.
NVIDIA probably had fairly low margins on GT21x before that, but it is noteworthy that they shipped it in high volume as early as Q2/Q3 2009, when AMD was producing far fewer RV740s (if any at all, for all I know). So the idea that they had 40nm-specific problems on those chips that AMD did not is rather absurd. Yes, GT21x probably yielded worse than RV8xx before TSMC fixed the process, but it didn't turn out as badly as some people think it did.
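To make the small-die vs large-die yield point concrete, a minimal Poisson yield sketch (the defect densities are invented for illustration, and the die areas are just ballpark figures for a GT21x-class chip vs a GF100-class chip):

```python
import math

def poisson_yield(die_area_mm2, d0_per_cm2):
    # Poisson yield model: Y = exp(-A * D0), with area converted to cm²
    return math.exp(-(die_area_mm2 / 100) * d0_per_cm2)

# Invented defect densities: a broken early-40nm process vs the fixed one
for d0 in (0.8, 0.2):
    small = poisson_yield(140, d0)  # GT21x-class die, ballpark
    large = poisson_yield(530, d0)  # GF100-class die, ballpark
    print(f"D0 = {d0}/cm²: small die {small:.0%}, large die {large:.0%}")
```

The asymmetry is the whole story: at high defect density the small die limps along while the large die is nearly unmanufacturable, and the same process fix that takes the small die from ~33% to ~76% takes the large die from ~1% to ~35%.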
However there is one more thing: NVIDIA did have one 40nm-specific problem with GF100 and the metal-heavy 'fabric' around the chip (as explained in that golem.de article). They were more than 75% responsible for that problem and admitted as much - while the tools did not properly simulate the fabric, NVIDIA should have known they couldn't be sure it would work, and they're the ones who took the risk anyway. What they should have done, given the low maturity of the tools at the time, is either 1) do something more conservative, or better, 2) make a test chip for that specific aspect to verify it in advance.
In the end, they fixed the fabric, and at about the same time TSMC fixed the process. GF100's mass-production yields and margins were fine. It was behind schedule, which hurt them somewhat, but they apparently survived. And ironically, if they hadn't had the fabric problem, they'd have gone to mass production before TSMC fixed the process and would have had much lower initial yields - so they'd have been screwed either way. Then later they made incremental improvements with GF11x, which had relatively little to do with process problems. End of story.
(BTW, just to make the cynics happy given how little there is for them to like here otherwise: GF10x uses two transistor Vt (threshold voltage) levels whereas GF11x uses three. That allows NVIDIA to reduce power consumption, but it also increases wafer price slightly. It's still a good trade-off, and AMD might actually be doing the same, but it's not entirely free.)
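For anyone wondering why a third Vt level helps: subthreshold leakage falls roughly exponentially with threshold voltage, so an extra high-Vt device lets you park non-critical paths on transistors that leak an order of magnitude less. A toy sketch (the Vt values and the 100 mV/decade subthreshold swing are generic textbook numbers, not NVIDIA's or TSMC's):

```python
def relative_leakage(vt_mv, swing_mv_per_decade=100):
    # Subthreshold leakage scales ~10^(-Vt/S), where S is the subthreshold swing
    return 10 ** (-vt_mv / swing_mv_per_decade)

base = relative_leakage(250)  # normalize to the fastest, leakiest device
for label, vt_mv in [("low-Vt", 250), ("std-Vt", 350), ("high-Vt", 450)]:
    print(f"{label} ({vt_mv} mV): {relative_leakage(vt_mv) / base:.3f}x leakage")
```

Each extra ~100 mV of Vt costs speed but buys roughly a 10x leakage reduction on the paths that can afford it - hence lower power, at the cost of an extra implant/mask step, which is presumably where the slightly higher wafer price comes from.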