Let the beating begin!

JoshMST

Regular
What the hell, my hide needs toughening anyway.

http://www.penstarsys.com/editor/so3d/2006_04/index.html

Constructive criticism welcome. Of course, since I don't have access to what ATI and NVIDIA actually pay for each raw die, I had to let some of the numbers speak for themselves instead of attaching a monetary value.

Enjoy... or not.

Edit: Silly me, the title should be "Let the Beating Begin"

Edit 2: Thanks for changing the title of the subject, my fingers don't always do well at 2am MST.
 
One big problem - AFAIK almost all chips made today use some form of redundancy.
ATI claimed recently that although their chips are much bigger, through the use of redundancy they get very good yields. True, this can't easily be taken into account...
Also note the claims that G73 has 4 quads internally, with only 3 "visible" (btw, has anyone tried running RivaTuner on several cards to see if the "quad-mask" changes?).
This could mean NV "increases" G73 yields by as much as 20%... and ATI is presumably using something similar.
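
To illustrate the point, here's a back-of-envelope sketch in Python of how a spare quad lifts the fraction of sellable dies. The 0.5 defects/cm² is the figure from the article; the die size and the even split of defect risk across the quads are assumptions, so treat the outputs as illustrative only:

```python
import math

defect_density = 0.5   # defects/cm^2, the figure used in the article
die_area_cm2 = 1.25    # assumed G73-class die (~125 mm^2)
lam = defect_density * die_area_cm2     # expected defects per die

# Poisson model: a quad is "good" if its quarter of the die is clean.
p_quad_good = math.exp(-lam / 4)

# No spare: all 4 quads must be defect-free.
yield_all_four = p_quad_good ** 4

# One spare quad: die is sellable if at least 3 of 4 quads are good.
yield_three_of_four = (p_quad_good ** 4
                       + 4 * p_quad_good ** 3 * (1 - p_quad_good))

print(f"all 4 quads good:      {yield_all_four:.1%}")
print(f"at least 3 of 4 good:  {yield_three_of_four:.1%}")
```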
 
Well, for starters, I'm thinking that (according to Josh's prices and yield calcs) ATI are not selling R580 dies for $555 per working unit! ;)

There are those who joke that the R580 is the die equivalent of a dinner plate. It is a huge die, and with only 36 good ones coming off a wafer with adequate yields, we can see that it is a very expensive die to produce.

Presumably this applied to G70 as well, given that G70's die size is not actually that different.
 
Dave Baumann said:
Presumably this applied to G70 as well, given that G70's die size is not actually that different.

On another process though, so it's a bit different.
 
Well, that's the kicker: unless you know the actual yield of good die per wafer, as well as how they are buying them (per wafer ordered or per good die), you really don't know.

So I used the only real metrics I could find to get a "decent" idea of what a mature process could possibly sustain: die size, wafer size, edge exclusion, and 0.5 defects/cm² was the best I could do.
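
To make that concrete, here is a minimal sketch of that calculation in Python: gross die per wafer from the usual approximation, then a simple Poisson yield model at 0.5 defects/cm². The ~315 mm² R580-class die size is an assumption for illustration, not a figure from the article:

```python
import math

def gross_die_per_wafer(die_area_mm2, wafer_diameter_mm=300.0, edge_exclusion_mm=3.0):
    """Standard approximation: usable wafer area / die area, minus a
    correction for partial dies lost around the circumference."""
    d = wafer_diameter_mm - 2 * edge_exclusion_mm  # usable diameter
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2=0.5):
    """Fraction of dies with zero defects under a Poisson model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

die_area = 315.0  # mm^2 -- assumed R580-class die size
gross = gross_die_per_wafer(die_area)
good = gross * poisson_yield(die_area)
print(f"gross die per wafer: {gross}, estimated good die: {good:.0f}")
```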

I would of course be very interested if anyone had more concrete information on how NVIDIA and ATI deal with TSMC (at this point, is it per wafer or per good die?), what kind of yields of good die per wafer they are getting, and what percentage of those good die fall within the different speed bins. Once I had all that info, we could get a much more accurate account of what each company pays per good die for each SKU! Spill it, people!

Dear God Dave, who coached you with that bit of constructive criticism? Are you saying that 90 nm Low-K wafers are less expensive than 110 nm? Heh, well maybe I took "dinner plate" too far. Still, damn big die though.
 
The problem with the analysis, so far as I can see, is that it doesn't seem to translate very well at all to the public margin numbers we have from the IHVs.

You'd be easily justified in looking at these numbers and thinking that NV has a 50%+ margin advantage in the GPU business over ATI.

But what do we know about actual margins? We know that ATI, excluding the chipset business (which is awful margins for them, we all know) has stated that the rest of the business is in the 34-38% range. Let's be conservative and give ATI the middle of that range at 36% for the GPU business, which is still something like 80% of their business.

NV claims to be headed to 45%. Let's even give them that, on the assumption that they see the move from their current 40% to 45% to be driven by ramping the new 90nm chips into volume and retiring the older ones.

I tend to think that part of the analysis, 36% vs 45%, is giving NV the better of it on assumptions, but that's fine for the point I'm trying to make.

So, what does that tell us? That looks like, at max, a margins advantage to NV in the GPU business of 25% ((45-36)/36).
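
In code form, for anyone checking the arithmetic (the numbers are the ones quoted above):

```python
ati_margin = 0.36   # midpoint of ATI's stated 34-38% ex-chipset range
nv_margin = 0.45    # NV's stated target
print(f"{(nv_margin - ati_margin) / ati_margin:.0%}")  # -> 25%
```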

But that's the best possible case for NV, and when you start applying some other realities to it you have to start subtracting.

1). ATI has said, and I haven't heard NV dispute, that NV's very lucrative workstation business is a 60% margins business. How much does that add to NV's margins advantage having nothing to do with die size and yields?

2). NV has said repeatedly that their margins success story is not all about manufacturing. They are quick to point at internal company improvements from top to bottom. They bring it up anytime the question comes up, in fact. How much does that subtract from that 25% "max" advantage, if we're trying to calculate just the part that's down to manufacturing?

3). I pulled a fast one in just giving NV their 45%. . .while that's not inappropriate, it is a future consideration --they are talking about that for 4Q, I believe. What will ATI be doing between here and there? 80nm, of course, which should increase their margins as well, also reducing that 25% "max" manufacturing advantage.

How much do the three combined factors above reduce the picture from that max 25%? Well, that's a lot harder. . .but any way I look at it, it seems to me that Josh's analysis doesn't map as well as it ought to onto what we know about the financial realities.
 
JoshMST said:
Dear God Dave, who coached you with that bit of constructive criticism? Are you saying that 90 nm Low-K wafers are less expensive than 110 nm? Heh, well maybe I took "dinner plate" too far. Still, damn big die though.

Again, the die size itself isn't significantly different from G70's, so I'm not really sure why this type of comment comes now, when the die size doesn't really appear to be particularly out of the ordinary relative to previous die growth - R580's die size is in line with previous trends, and G71 is currently the abnormality; unless we see some further high end parts go in the same direction as G71, R580 isn't really out of the ordinary.

However, even a quick look at two figures would tell you that there is something seriously wrong with the calculations. Your costing for a high end wafer was $20K (while I don't have anything definite, that's a figure I have had floating around in my head for a while now as well, so I'll bite) and there are 164 cores per wafer for R580. This puts the cost per core at around $121; applying a 35% margin (IIRC this is what they said for desktop without integrated, and there is likely to be a premium for parts like R580 over that, but we don't know how quantifiable that is) puts the selling price per core at around $164 - and that's assuming all chips on a wafer come out, at the profit margins they are stating. That's not unreasonable: roadmaps from last year suggested that R580's die cost was ~$200, so allowing for some yield drop-off we are probably in sane territory. However, that's not supported by a yield as low as 36 cores per wafer, as the selling price would then need to be significantly greater.

If we do consider that the fully workable yield is that low, then that would also probably suggest that they actually have significantly more chip inventory at fewer quads, and yet it has now been 3 months since the release of R580 and no products with these configurations have been released - by contrast, the 7800 GT was announced a few weeks after the GTX. ATI said recently that they sold 50,000 X1900s, which would equate to ~1,400 wafers yielding 36 full cores each, so they have this many wafers' worth of potential product just lying around after the first 5 weeks?
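
For clarity, the arithmetic above, with the 35% margin applied as a simple markup on cost (which is how the ~$164 figure falls out):

```python
wafer_cost = 20_000
gross_cores = 164
cost_per_core = wafer_cost / gross_cores       # ~$122
selling_price = cost_per_core * 1.35           # ~$165, margin as markup

x1900_units = 50_000
full_cores_per_wafer = 36
wafers_implied = x1900_units / full_cores_per_wafer   # ~1,389, i.e. ~1,400

print(f"${cost_per_core:.0f}/core, ~${selling_price:.0f} selling, "
      f"~{wafers_implied:.0f} wafers")
```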

The fact that the X1900 "GTO" isn't here yet, and the rumours that there is a specific RV570 chip to fill this spot, suggest that the yield drop-off for this configuration perhaps isn't as great as may have been thought, and perhaps they have a redundancy mechanism that isn't the same as seen with other configurations.

Also, the comparisons to the R520 products are a little out of place, given that the lifecycles are at opposing ends - or, alternatively, why not compare NV42 to RV530?
 
Excellent points Geo.

A couple of things to consider as well... NVIDIA's margins on their chipset products (both integrated and standard nForce 4) are still much lower than on the pure GPU side. Still, NVIDIA is not pushing their chipsets as hard as ATI is, so their margins have got to be better than the competition's in that space.

We truly do not know very much about the actual yields on these products, and that is a major point of contention for this article. We see supply of the current 7900 GT and GTX as tight; is that because it is only 3 weeks after launch and the product is still ramping? Or did the transistor count cuts that NVIDIA made reduce the redundancy of the G71 chip and make it harder to produce in good numbers? ATI had a lot of time to work on R520 due to the soft ground error, and as such it came out as a very well designed chip and a very solid basis for R580. So with all that extra work, should we expect better yields on R580 than on G71, even though it is a significantly larger chip?

Many questions here, and hopefully it will spark some good debate.
 
The $20,000 per wafer, from what I understand, is really the top end, used for very specific, low-run products. From what I am hearing, a product like G73 or RV515 usually comes in at around $2,000 per wafer, while the higher end consumer products (depending on metal layers) can be between $4,000 and $10,000. At the really over-the-top end, test wafers run at breakneck speed to get working samples back quickly can approach $100,000 per wafer.

So I would imagine that for G71 and R580 production the wafer costs are probably closer to $4,000. Otherwise, just as you pointed out, we would be looking at dies that cost $555 apiece.
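
To put those tiers in perspective, a quick sketch reusing the 164 gross / 36 fully-working figures from earlier in the thread (the $20K case is where the ~$555-per-die number comes from):

```python
gross_per_wafer, full_per_wafer = 164, 36  # figures from earlier in the thread
for wafer_cost in (4_000, 10_000, 20_000):
    print(f"${wafer_cost:,} per wafer: "
          f"${wafer_cost / gross_per_wafer:.0f} per gross die, "
          f"${wafer_cost / full_per_wafer:.0f} per fully-working die")
```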
 
Products like R580/G71 are probably going to be on the highest performance process there is. However, that still doesn't address the fact that, according to this analysis, a majority of R580 dies should have failed full-spec, which would tend to indicate a significant portion around that would fit into a lower end SKU; yet that product isn't here after a considerable time on the market.
 
Dave Baumann said:
Products like R580/G71 are probably going to be on the highest performance process there is. However, that still doesn't address the fact that, according to this analysis, a majority of R580 dies should have failed full-spec, which would tend to indicate a significant portion around that would fit into a lower end SKU; yet that product isn't here after a considerable time on the market.

I'm not disagreeing with you there, Dave, but the fact of the matter is that these things are considered trade secrets, and guys like me don't have access to this information. So the best we can do is put out a generalized model and comment on it. How much redundancy is there in each of these designs? How many defects can a chip absorb and still be considered good? Are there post-manufacturing processes that can fix certain defects? Obviously these companies have looked into many possibilities, because there is no use throwing away dies that can be salvaged in one way or another, and they cannot simply wish away defects on a wafer. There will always be defects, but who has the better strategy for working around them?

Again, NV and ATI are not paying $20K per wafer on the high end. As I mentioned above, the figure is usually bandied about for high end products at low volumes. G71 and R580 are not exactly low volume ;)
 
I found it interesting that Josh is of the opinion that NV is going to skip 80nm for G7x. Feeling pretty good about that, Josh? Or is that one of those "51%" predictions? :smile:

A famous jurist once said that the trick of being a judge was sounding 100% certain when in fact you were only 51% certain. . .:LOL:
 
geo said:
I found it interesting that Josh is of the opinion that NV is going to skip 80nm for G7x. Feeling pretty good about that, Josh? Or is that one of those "51%" predictions? :smile:

A famous jurist once said that the trick of being a judge was sounding 100% certain when in fact you were only 51% certain. . .:LOL:

Heh, I am about 75% certain. I mean, did NVIDIA ever port NV40 to 110 nm? Sure, they did NV42 on 110 nm, but given the number of GF 6800 GTs and Ultras sold from August 2004 through June 2005, I think we can see that NVIDIA might not feel the need to make such a shrink. Considering it may only be 5 to 7 months before we see G8x products released, would NVIDIA be better off porting their current designs to 80 nm? My gut feeling here is no.
 
JoshMST said:
Heh, I am about 75% certain. I mean, did NVIDIA ever port NV40 to 110 nm? Sure, they did NV42 on 110 nm, but given the number of GF 6800 GTs and Ultras sold from August 2004 through June 2005, I think we can see that NVIDIA might not feel the need to make such a shrink. Considering it may only be 5 to 7 months before we see G8x products released, would NVIDIA be better off porting their current designs to 80 nm? My gut feeling here is no.

Thanks. I'm thinking 7-9 months on G8x, btw. NV seemed pointed at November internally in a recent conference call (in fact they referred to it as mostly a 2007 product).

Edit: Tho Xbit seems of the opinion that R6x0 is definitely a 2007 product now, so maybe Jen-Hsun will still get his "first to market" bragging rights that he loves so well. ;)
 
JoshMST said:
Again, NV and ATI are not paying $20K per wafer on the high end. As I mentioned above, the figure is usually bandied about for high end products at low volumes. G71 and R580 are not exactly low volume ;)

Yet they are the highest-end, lowest-volume products designed by both IHVs.
 
kemosabe said:
Yet they are the highest-end, lowest-volume products designed by both IHVs.

That is true, but NVDA and ATI have much higher volumes for those types of products than pretty much everyone else out there. $20K is usually reserved for fabless semis that order between 20 and 100 wafers for specialized products; $100K per wafer is for runs of 10 to 20 wafers that are fast-tracked through the system.
 
JoshMST said:
Are you saying that 90 nm Low-K wafers are less expensive than 110 nm? Heh, well maybe I took "dinner plate" too far. Still, damn big die though.

I thought that 90nm Low-K wafers were 4 inches larger than 110nm wafers, so you can't really compare the two.
 