My editorial is finally out.

2 months for this? :p j/k

Very well written. Things I have never even thought about before are now swirling through my head.
 
Interesting insights, and a worthy read. I'll post something more constructive when I've read it for a second time.

Thanks for taking the time to not only write this article, but for running the site for however many months it's been. It's been invaluable!

EDIT: Read it again, and whilst there's nothing startlingly new there, it's always useful to look at things from a different perspective. If nothing else, it serves as a useful collection of many things which have gone on over the last few months... maybe you should forward Jen Hsun a link to the article? :LOL:
 
The Dig continues his standing ovation from nVnews and fumbles in his pocket for a lighter to hold up.

An outstanding editorial Uttar, truly! Thanks for sharing a unique look inside the rumor mill. 8)

BTW-Have you shared this with anyone from nVidia yet? If so, any fun reactions? :LOL:
 
First page: I can't tell if you're calling NVIDIA evil, or saying that people say that NVIDIA is evil.

Page six: where did the 347M go? Full metal tapeouts do not cost 10M apiece, even in .13u. 1M is closer to the mark. Also, your math doesn't work out on the cost per chip and the number of good chips. A .13u wafer costs somewhere between $1500 and $2500, depending on how much the fab loves you. I haven't bothered to do the math (so I might be wrong), but you're going to get more than 150 dies off a 10" wafer, and (I believe) all .13u fab lines use 12" wafers.
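Russ's die-count claim can be sanity-checked with the standard first-order dies-per-wafer approximation. This is an illustrative sketch only, not anything from the editorial: the ~200 mm² die size is an assumed, hypothetical NV30-class figure, and the formula ignores yield, scribe lines, and edge exclusion.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order estimate: wafer area / die area, minus a correction
    term for the partial dies lost around the wafer edge."""
    r = wafer_diameter_mm / 2
    gross = math.pi * r ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# Assuming a ~200 mm^2 die (hypothetical size, for illustration):
print(dies_per_wafer(200, 200))  # 8" (200 mm) wafer  -> 125
print(dies_per_wafer(300, 200))  # 12" (300 mm) wafer -> 306
```

Even before any yield loss, a 300 mm wafer holds roughly 300 gross candidates for a die of that size, which is consistent with "more than 150" good chips at reasonable yields.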
 
RussSchultz said:
First page: I can't tell if you're calling NVIDIA evil, or saying that people say that NVIDIA is evil.

I'm not calling anyone anything. I'm saying that a common opinion is to consider NVIDIA, as a whole, to be evil.

Page six: where did the 347M go? Full metal tapeouts do not cost 10M apiece, even in .13u. 1M is closer to the mark. Also, your math doesn't work out on the cost per chip and the number of good chips. A .13u wafer costs somewhere between $1500 and $2500, depending on how much the fab loves you. I haven't bothered to do the math (so I might be wrong), but you're going to get more than 150 dies off a 10" wafer, and (I believe) all .13u fab lines use 12" wafers.

I admit that's certainly the least reliable part of the whole editorial. It's mostly based on information from one or two sources, both of pretty darn good reliability, but both have gotten major things wrong in the past.

I'm wondering why you say you'd get more than 150 dies per wafer, though. Looking at the numbers Tom's, for example, gives for the Athlon 64 (122 CPUs per wafer, accounting for 18% waste), I'm intrigued. I admit that type of thing was never my specialization.

I'm sure you know those things better than me, so if you could actually show me how wrong I am about these things, I'd appreciate it.


Uttar
 
Isn't the $10M the cost of NVIDIA's resources (hardware testing, verification, etc.), whilst $1M is the cost of manufacture?
 
The fab charges something like 500k to 1M for a full mask set. Generally, on any given project you pay this once. A full mask set includes some 20-30 different masks, one for each layer/step of the fabrication process. The first half or so are involved in building up the transistors & capacitors (and inductors, if you're feeling silly). The other half are referred to as the 'metal layers', which define the routing between the elements in silicon. These metal layers are typically where changes are centered in minor revisions of the chip. They'll cost anywhere between 10k and 50k per layer (depending on which one it is).
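The cost structure Russ describes can be roughed out in a few lines. The dollar ranges below are the ones from his post; the metal-layer count is an assumed, illustrative number, not a figure from the thread.

```python
# Ranges quoted above (USD); the layer count is an assumption for illustration.
FULL_MASK_SET_USD = (500_000, 1_000_000)  # ~20-30 masks, paid once per project
METAL_LAYER_USD = (10_000, 50_000)        # per metal-layer mask
ASSUMED_METAL_LAYERS = 8                  # hypothetical count for a .13u chip

# A metal-only respin replaces just the metal masks:
respin_low = METAL_LAYER_USD[0] * ASSUMED_METAL_LAYERS
respin_high = METAL_LAYER_USD[1] * ASSUMED_METAL_LAYERS
print(f"Metal-only respin: ${respin_low:,} - ${respin_high:,}")
print(f"Full mask set:     ${FULL_MASK_SET_USD[0]:,} - ${FULL_MASK_SET_USD[1]:,}")
```

Either way, mask costs land somewhere around $0.1M-$1M per revision, an order of magnitude short of the $10M-per-tapeout figure under discussion.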

There's engineering behind each tapeout, but not $9M worth (excluding the overall engineering design effort, of course). The engineers will run sims, the layout team will make changes to the netlist, validation will check whatever, but it's generally done in about two weeks, you're likely paying those engineers to work anyway, and they're doing it on equipment you've already bought.

As for the number of chips on a wafer, getting 150 per wafer may be "about right", assuming they're approximately the same size as Athlon 64s, but getting 7-10 with a cost of 60 dollars just isn't right. (10*$60 = $600, which is much less than the cost of a wafer.)
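The inconsistency Russ points out falls straight out of the division. A quick sketch using his $1500-$2500 wafer-cost range; note this is raw silicon cost only and ignores packaging, test, and NRE.

```python
def cost_per_good_chip(wafer_cost_usd: float, good_dies: int) -> float:
    """Raw silicon cost per working chip (ignores packaging, test, NRE)."""
    return wafer_cost_usd / good_dies

# The yields mentioned in the thread: 7-10 good dies vs. ~150 good dies.
for good in (7, 10, 150):
    low = cost_per_good_chip(1500, good)
    high = cost_per_good_chip(2500, good)
    print(f"{good:3d} good dies/wafer -> ${low:,.0f} - ${high:,.0f} per chip")
```

At 7-10 good dies per wafer the chip cannot cost only $60; conversely, ~150 good dies would put it well below $60. The two quoted numbers presumably describe different tape-outs.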
 
RussSchultz said:
These metal layers are typically where changes are centered in minor revisions of the chip.

Remember the chip simply didn't work AFAIK. It's not like they did a few minor changes here and there. Between tape-outs, quite a few things were cut.
Plus, as I think you noted yourself, it's impossible those $10M went only into the tape-out costs. There are a few possibilities, should this number have any kind of real origin:

a) Hardware bought between tape-outs, requested by engineers for additional debugging power, is included.
b) Some of the risk production runs are included in the tape-out costs.
c) NVIDIA, being used to getting things working on the first try, always produced a lot of chips at most tape-outs.

Or it could be a combination of these factors. Or simply unfounded.

As for the number of chips on a wafer, getting 150 per wafer may be "about right", assuming they're approximately the same size as Athlon 64s, but getting 7-10 with a cost of 60 dollars just isn't right.

Good point. Actually, I think I can explain that.
Those quotes come from the same person, but there were things said in between. I suspect the 7-10 chips number corresponds to one of the first tape-outs that actually worked, where they rushed production so they actually had something to show.
And then $60/chip would be the cost they had in their last tape-out, the one with the highest yields.

I'll have to put a small correction at the bottom of Part 6 to make that clearer. Thanks for pointing out that mistake to me :)

PaulS:
EDIT: Read it again, and whilst there's nothing startlingly new there, it's always useful to look at things from a different perspective. If nothing else, it serves as a useful collection of many things which have gone on over the last few months... maybe you should forward Jen Hsun a link to the article? :LOL:

I agree that besides the different perspective (you do agree it's a different perspective, at least? I've yet to see anyone with a similar one), there's not a LOT of new info in there.
And much of the stuff I had already hinted at in posts on these forums. So even though it's new for 95% of the readers, I suspect it might not be new to you.

Although I'd like to highlight this part in the 3DMark03 debacle section:
Compounding the error, when the debacle started, pretty much everyone in the company—including many driver developers!—was taken by surprise. Weeks later, many still had no clue what the optimization/cheating issue was all about.

AFAIK, pretty much EVERYONE in the company was taken by surprise. While everyone was shouting about how evil NVIDIA was, (nearly) nobody knew WTF was happening or why people were suddenly so angry at them.
Many driver developers and certain well-placed guys still had no idea about the whole thing even a few weeks after the articles went public.

Was this public information? I don't think so, but please feel free to prove me wrong :)


Uttar
 
Uttar said:
I agree that besides the different perspective (you do agree it's a different perspective, at least? I've yet to see anyone with a similar one), there's not a LOT of new info in there.
And much of the stuff I had already hinted at in posts on these forums. So even though it's new for 95% of the readers, I suspect it might not be new to you.

Although I'd like to highlight this part in the 3DMark03 debacle section:
Compounding the error, when the debacle started, pretty much everyone in the company—including many driver developers!—was taken by surprise. Weeks later, many still had no clue what the optimization/cheating issue was all about.

AFAIK, pretty much EVERYONE in the company was taken by surprise. While everyone was shouting about how evil NVIDIA was, (nearly) nobody knew WTF was happening or why people were suddenly so angry at them.
Many driver developers and certain well-placed guys still had no idea about the whole thing even a few weeks after the articles went public.

Was this public information? I don't think so, but please feel free to prove me wrong :)

Yes, I agree it's a different perspective - hence I made note of that in my post. Information doesn't have to be 100% "new" to be interesting or of use :)

Thinking about it, however, I think I may have been a bit hasty in saying that there's not much information in there that wasn't already known - I forget that a lot of people a) don't keep abreast of developments in communities such as this B3D one, and b) won't necessarily have heard things which were never put on boards like this. Coupled with me having come to some of the conclusions in the article based on my own thoughts/research, I'm sure there probably IS new information in there. Just not necessarily for everyone.

Should have made that clearer to start with! Good job, regardless :D
 
( Correction, 5 hours after release: It seems the 7-10 chips and $60 numbers are not for the same timeframe; 7-10 chips per wafer would bring the costs significantly beyond $60 per chip. That would imply NVIDIA did some production at 7-10 chips, just to have something to show. And then they improved yields, but didn't manage to get costs lower than $60 per chip. Also, the $10M/tape-out number most likely includes other expenses related to the tape-outs, since a normal tape-out should cost around $1M. Thanks to Russ Schultz for pointing that out. )

Added this to Part 6, just after the quotes.
Does that resolve the problems you noted, Russ? :)


Uttar
 
I still don't know where anybody can come up with $10M per tapeout, without inventing costs.

1) I would be absolutely, completely surprised if the NV30 had 6 full layer revisions. Maybe one or two. They are pretty rare.
2) Even if there were 7 full mask sets made, I still don't know where you're getting that extra ~$9M. Maybe if you amortize ALL of the development costs for the entire project over those 7 revisions, you'd be in that ball park.
 
RussSchultz said:
I still don't know where anybody can come up with $10M per tapeout, without inventing costs.

1) I would be absolutely, completely surprised if the NV30 had 6 full layer revisions. Maybe one or two. They are pretty rare.
2) Even if there were 7 full mask sets made, I still don't know where you're getting that extra ~$9M. Maybe if you amortize ALL of the development costs for the entire project over those 7 revisions, you'd be in that ball park.

Actually, I'd personally bet on 3 full layer revisions.
As I said below the quotes, there were either 4 or 7 tape-outs AFAIK.
What would make sense is 1 original tape-out + 3 full layer revisions + 3 respins.

Regarding 2) - It could very well be that $1M became $10M (an extra zero, a typo?) somewhere along the way, since I know there are at least two intermediaries.

Would making these two points clearer resolve the concern now? I've got to go to bed, but I could make that change tomorrow.

Also, besides that, what's your opinion on the editorial as a whole? I didn't see you talking about anything but Part 6 (and Part 1 to a lesser extent) :p

Thanks for explaining these errors to me though :)


Uttar
 
RussSchultz said:
I still don't know where anybody can come up with $10M per tapeout, without inventing costs.

Telling consumers costs like this would be a "great reason" for fans to put another $400 into a card that they A) don't necessarily need, or that B) would have almost zero impact on gaming performance.

So claiming bigger development costs than the chip actually had can nowadays be used as a marketing weapon. (Sounds unbelievable, but it's true. Costs like this answer the question most consumers have before buying a product: "Why does it have to be so expensive?")
 
Uttar said:
The emperor has no clothes, but since he shoots the messenger, nobody's gonna tell him! They give him the message he wants to hear: "Yes, Jen Hsun, next generation will be even better!"

Although this may very well be the truth, I don't think this situation is as rare in other corporations as your statement, IMO, suggests. It may start from the top, but middle management isn't blameless. There are also advantages to saying yes.
 
Fred da Roza said:
Uttar said:
The emperor has no clothes, but since he shoots the messenger, nobody's gonna tell him! They give him the message he wants to hear: "Yes, Jen Hsun, next generation will be even better!"

Although this may very well be the truth, I don't think this situation is as rare in other corporations as your statement, IMO, suggests. It may start from the top, but middle management isn't blameless. There are also advantages to saying yes.

For some reason I was reminded of stories coming out of MS in the mid-90's, when they were, or at least believed themselves to be, under severe competitive pressures.
 
micron said:
An eight page article on why nVidia sucks.......

Ah, a glass-half-empty man. I thought it was an eight-page article on what nVidia needs to do to quit sucking.
 
;) I'm only a little bit disappointed. I thought Uttar was going to cover more of the industry in general instead of dedicating 98% of the article to nVidia's dirty laundry. Uttar, why do you use an nVidia avatar on the forums you hang out at? I used to think you were a person who liked their products...
 
micron said:
;) I'm only a little bit disappointed. I thought Uttar was going to cover more of the industry in general instead of dedicating 98% of the article to nVidia's dirty laundry. Uttar, why do you use an nVidia avatar on the forums you hang out at? I used to think you were a person who liked their products...

The truth hurts eh :)
 