This would probably be better for us, the consumers. As much as I'd like AMD to master cheap MCMs with massive sideports, it wouldn't be good for competition at the moment.

I'm willing to give GT3xx a bit of breathing room as well if it's going up against an AFR solution from AMD. Either a modest price premium, a small performance disadvantage, or even a small combination of both would be acceptable to me if it means no input lag and more consistent performance.
> As much as I'd like AMD to master cheap MCMs with massive sideports it wouldn't be good for competition at the moment.

You've said this before. Why?
> You've said this before. Why?

Because NVIDIA is months behind the curve; without an architectural advantage in at least one consumer segment they would be in for major hurt in margins, IMO. I don't think scientific computing alone is going to bring in the bread.
> even if demand might make it hard to buy one for another month or two.

They'd have to pretty much go with the A1 silicon to pull that off, wouldn't they?
SemiAccurate said:
NVIDIA KEEPS HOWLING that it will have GT300 out this year, so we did a little digging on the topic. All you can say is that it seems willing to fritter away large swaths of its and TSMC's cash to meet a PR goal.

The short story is that GT300 taped out in week 3 or 4 of July 2009. It is a huge chip, about 23mm x 23mm, but we are now hearing it will be 23-something x 23-something millimetres, so 530mm^2 might be a slight underestimate. In any case, TSMC runs about 8 weeks for hot lots, 2 more weeks for bring-up, debug, and new masks, and finally another 10-12 for production silicon.

GT300 wafers went in at TSMC on August 3, give or take a couple of days, so if you add 22 (8 + 2 + 12) weeks to that, you are basically into 2010 before TSMC gives you the wafers back with pretty pictures inscribed on them. Like babies, you just can't rush the process with more, um, hands.

While it is not rushable, you can do some of it in parallel, and that uses what are called risk wafers. Those wafers are put in at the same time the first silicon hot lots are, so they have been in the oven about 2 weeks now. Just before the final layers are put on, the risk wafers are parked, unfinished, off to the side of the TSMC line.

The idea is that with the hot lots, Nvidia gets cards back and debugs them. Any changes that are necessary can hopefully be retrofitted to the risk wafers, and then they are finalized. Basically, the risk wafers are a bet that there won't be anything major wrong with the GT300 design, in that any changes are minor enough to be literally patched over.

Risk wafers are called risk wafers for a reason. If you do need to make a big enough change, say a metal layer spin, the risk wafers then become what are called scrap wafers. You need to make damn sure your design is perfect, or nearly perfect, before you risk it. Given Nvidia's abysmal execution recently, that is one gutsy move.

If Charlie is right, I'd say it's not "gutsy" so much as "insanity driven by desperation"!
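For what it's worth, the article's schedule and die-size arithmetic checks out. A quick sketch, using only the figures quoted above (8 + 2 + 12 weeks, an August 3 wafer start, and a 23mm x 23mm die) as assumptions:

```python
from datetime import date, timedelta

# Schedule figures quoted in the article (assumptions, not real fab data):
hot_lot_weeks = 8       # hot lots through the fab
debug_weeks = 2         # bring-up, debug, and new masks
production_weeks = 12   # production silicon (article says 10-12; worst case)

wafers_in = date(2009, 8, 3)  # GT300 production wafers reportedly started
wafers_out = wafers_in + timedelta(
    weeks=hot_lot_weeks + debug_weeks + production_weeks)
print(wafers_out)       # 2010-01-04, i.e. "basically into 2010"

# Die-size sanity check: 23mm x 23mm is 529mm^2, so "23.something x
# 23.something" does push the die past the 530mm^2 estimate.
print(23 * 23)          # 529
```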
> If Charlie is right, I'd say it's not "gutsy" so much as "insanity driven by desperation"!

He claims that a metal spin will result in scrapping the whole risk lot. It's just the opposite: the reason you park the wafers unfinished is that it allows you to do a metal spin later on without having to scrap them. Since the vast majority of bugs are fixable in metal only, you get the best of both worlds: the chance to make a quick fix that's less costly (because base masks are more expensive than metal masks) and a way to reduce the fab delay significantly (because base processing takes roughly half of the total processing time through the fab). There is not a single company that doesn't park wafers before metal for exactly this reason.
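The scheduling advantage being described here is easy to quantify with a toy model. Both numbers below (a ~12-week production flow, base layers taking roughly half of it) are assumptions taken from this thread, not real fab data:

```python
# Toy schedule model: restarting wafers from scratch after a metal-only
# respin vs. finishing wafers that were parked just before metal.

production_cycle_weeks = 12
base_fraction = 0.5  # base-layer processing ~ half the total fab time

# No parked wafers: new wafers pay the full cycle after the metal respin.
fresh_start_weeks = production_cycle_weeks

# Parked risk wafers: base layers are already done, only metal remains.
parked_finish_weeks = production_cycle_weeks * (1 - base_fraction)

print(f"fresh start after respin: {fresh_start_weeks} weeks")      # 12
print(f"parked risk wafers:       {parked_finish_weeks:.0f} weeks")  # 6
```

Under those assumptions, parking before metal roughly halves the time from "fixed masks arrive" to "finished wafers out", which is exactly the point being made.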
Dunno - if it's A1, I'm doubtful that was the first tape-out in July. Regardless, it depends how desperate NVidia is to have a spoiler on the market, like GTX295 or 7800GTX, and a halo product gets extra shine when you can say "we can't keep up with the demand!"
Jawed
The beauty of this article is not in the actual information content but in the way it shows directly how the emperor has no clothes.
> I'm curious whether there's any room to be even more aggressive though. Obviously you could mass produce a few hundred wafers on A1 all the way to the last stages and hope for the best... But is there any intermediate step? I guess what Charlie would be describing, if it made any sense (but I suspect it doesn't), is a scheme where you went through most metal layers but not all. I can't see what the point of that would be in practice though.

silent guy seems to be saying that wafers are parked before the majority of metal layers are constructed.
I have been saying they were doing exactly that ever since Charlie claimed it was impossible for them to meet their 2009 target (feel free to dig up the forum posts if you don't believe me) - so it's good to see he's finally hedging his bets.
SemiAccurate said:
Which way will it go? We will know in Q1. There will either be an updated spec card with a higher stepping or there won't be. Nvidia will have cards out in real, not PR, quantities or it won't. Nvidia will either have a hidden charge on its balance sheet or TSMC will.
> Woah, so NVidia's going to try to produce say 600,000 GT300s (say TSMC guarantees 60% yield) between now and Christmas. Awesome, they're going to make TSMC pay! Wow, TSMC must be desperate. (Sure, NVidia will then have to bin those GPUs and maybe only 80% of those will work as GTXs and GTSs.)

As I said above, let's not be ridiculous. This is perfectly standard procedure. NVIDIA is *already* paying for GT21x per good chip, and Morris Chang has explicitly said 40nm gross margins weren't as high as 65nm right now but they should improve to the same level as yields improve in the next few months. Selling per good chip doesn't require a "write-off" as Charlie claims; it's just standard gross margin accounting. And TSMC doesn't take the design risk if A2 doesn't work out because of design bugs, obviously.
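For a sense of scale on the numbers being thrown around here, a back-of-the-envelope sketch. Every figure (~530mm^2 die, 300mm wafers, 60% yield, 80% binning) is an assumption from this thread, and the gross-die estimate crudely ignores edge loss and scribe lines:

```python
import math

die_area_mm2 = 530            # roughly the article's GT300 estimate
wafer_diameter_mm = 300
target_good_chips = 600_000   # the figure in the quote above
yield_rate = 0.60             # hypothetical guaranteed yield
bin_rate = 0.80               # fraction binning as GTX/GTS

wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2   # ~70,686 mm^2
gross_dies = wafer_area // die_area_mm2               # ~133 candidates/wafer
good_dies = gross_dies * yield_rate                   # ~80 good chips/wafer

wafers_needed = math.ceil(target_good_chips / good_dies)
sellable = target_good_chips * bin_rate

print(f"~{gross_dies:.0f} gross dies/wafer, ~{good_dies:.0f} good")
print(f"~{wafers_needed} wafers for {target_good_chips} good chips")
print(f"~{sellable:.0f} sellable as GTX/GTS after binning")
```

That works out to several thousand wafers for the hypothetical 600,000-chip run, which is why the question of who eats the cost of the bad dies matters so much.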
> GT216 is A2, which is one metal spin it seems. So by cherry-picking from the first lot (50 wafers?), NVidia can get a few hundred review samples out for W7 launch on A1. Then worry about successive metal spins to fix up the remaining risk production (thousands of wafers) so that they have enough to sell.

That's assuming A1 is sufficiently bug-free to go to reviewers. That's a big assumption. The fact of the matter is that if they're willing to pay a bit extra every step of the way and take a bit extra risk, they can easily hard launch this thing in late November.
> silent guy seems to be saying that wafers are parked before the majority of metal layers are constructed.

Actually, he's saying they are parked before ANY of the metal layers are constructed. My question is whether there is any reason to construct one, a few, or most of the metal layers while not actually finishing the whole process. Based on your assumed ordering, I think it wouldn't make much sense, but I'd appreciate a more expert opinion.
> Really the question is about the meaning of "metal spin", what kind of faults you can fix.

Well, you apparently can't fix borked ROPs. Silicon-layer changes being required for a product to work at all are relatively rare, although they do happen (MCP79/Ion, for example).
> As I said above, let's not be ridiculous. This is perfectly standard procedure. NVIDIA is *already* paying for GT21x per good chip and Morris Chang has explicitly said 40nm gross margins weren't as high as 65nm right now but they should improve to the same level as yields improve in the next few months.

I've not seen any statement that NVidia is paying per good GT21x, so quote, please?
> That's assuming A1 is sufficiently bug-free to go to reviewers. That's a big assumption. The fact of the matter is that if they're willing to pay a bit extra every step of the way and take a bit extra risk, they can easily hard launch this thing in late November.
> Given how screwed up in so many ways they'll be if they need an A3 though, I'm not even sure that's their primary worry.

A3 was required for GT200b, and that's on a process that has been in full production since July 2007.
Like GTX295 was a hard launch?
Jawed
And before that he said "we hear Nvidia is paying a premium for each of the risk wafers" - so is it per wafer or per good die after all? Unless "risk" refers strictly to hot lots, which would contradict the rest of the article? And why would I believe they're being charged a premium when NV themselves claimed to have preferential pricing, and Jen-Hsun and Morris Chang have been good personal friends for a very long time? This is patently absurd.
As for Charlie's obsession with Global Foundries - here's a hint: it won't matter for a single discrete GPU product coming out of either NVIDIA or AMD for a full *two* years. 32G is dead, and both GF and TSMC now claim 28HP tape-outs will happen in 4Q10 - y'know, like 40nm tape-outs apparently happened in 4Q08?