Cost of advanced chip manufacture

That's not true. The cost per transistor stopped dropping; that's not the same thing as an increase. It has remained stable for four generations now, and you didn't have the sort of prices you have now on 20nm and 16nm. What you have is AMD not being able to compete. Drop it; this argument has already led to the forum being closed and you are bringing it up again.
I am sorry, but this is simply not true.
These numbers are vendor agnostic:
[attached chart: chip design cost by process node]

Even wafers are getting more expensive.
[attached chart: wafer cost by process node]

I am sorry if these numbers offend you, but asking me to stop posting because you feel I should stop is not a valid request from my perspective.
Please counter with numbers, if you feel that the numbers are false.
(No offence meant :))
 
Just wanted to add this:

Code:
GP102 = 11,800 Mil  Die: 471 mm²  MSRP $1199  Transistors per $ = 9,841,534
TU102 = 18,600 Mil  Die: 754 mm²  MSRP  $999
GA102 = 28,300 Mil  Die: 628 mm²  MSRP $1999
AD102 = 76,300 Mil  Die: 609 mm²  MSRP $1599
GB102 ~ 90,180 Mil  Die: 744 mm²  MSRP $2000 *)

Another way to look at this is the increase in transistors per generation:
GP102 -> TU102 =  57.6271% increase
TU102 -> GA102 =  52.1505% increase
GA102 -> AD102 = 169.611% increase
AD102 -> GB102 =  18.191% increase
(Numbers for GB102 are pure guesswork as I have no insider information)

The GA102 -> AD102 increase is mind-boggling (most of it is due to the larger cache, I suspect).
The MSRP did not increase by 169% from the 3090 Ti to the 4090, so even with a flat price per transistor, the AD102 stands out.
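For anyone who wants to double-check, the percentages are just the ratios of successive transistor counts; here is a minimal Python sketch (counts in millions, GB102 still guesswork):

Code:
# Generation-over-generation increase in transistor count.
# Counts are in millions, taken from the table above; GB102 is guesswork.
counts = {"GP102": 11_800, "TU102": 18_600, "GA102": 28_300,
          "AD102": 76_300, "GB102": 90_180}

names = list(counts)
for prev, cur in zip(names, names[1:]):
    increase = (counts[cur] / counts[prev] - 1) * 100
    print(f"{prev} -> {cur}: {increase:.1f}% more transistors")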
 
Did you leave some information off? Cost/transistor is only listed for GP102. Also using the total price of the graphics card isn't the best way to do it. Need to know the cost NVIDIA pays for the GPU.

Any way you slice it, cost/mm² has gone through the roof in the last 10 years. I'm not really sure about cost/transistor, but it's crazy if even that is increasing. There will soon come a point where consumer hardware no longer gets any benefit from new nodes; a price limit will be reached for ordinary people.
 
Ah, yes, bad copy/paste from my side, will update later, sorry about that.
 
I am sorry, but this is simply not true.
These numbers are vendor agnostic:
[attached chart: chip design cost by process node]

Even wafers are getting more expensive.
[attached chart: wafer cost by process node]

I am sorry if these numbers offend you, but asking me to stop posting because you feel I should stop is not a valid request from my perspective.
Please counter with numbers, if you feel that the numbers are false.
(No offence meant :))
What you showed here is not the same as cost per transistor, which was what I was replying to. Your own original chart had the cost pretty much stagnant. Wafers are more expensive because they hold more transistors, so even if the cost per transistor stayed the same, they would still be more expensive.

Furthermore, if you look at the first chart above, the largest increases in cost are in software and validation, which have nothing to do with transistors. Software costs will probably not stay that high forever, especially with the help of AI, for example.
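To illustrate that point with purely made-up round numbers (these are not real wafer prices or densities), a small Python sketch:

Code:
# Illustration only: invented round numbers showing how a wafer can double in
# price while the cost per transistor stays flat, as long as density doubles too.
WAFER_AREA_MM2 = 70_685  # 300 mm wafer, ignoring edge loss

nodes = [
    # (label, wafer price in $, density in million transistors per mm²) - all invented
    ("older node", 7_000, 30),
    ("newer node", 14_000, 60),
]

for label, price, density in nodes:
    transistors = WAFER_AREA_MM2 * density * 1e6
    print(f"{label}: ${price:,} per wafer, ${price / (transistors / 1e9):.2f} per billion transistors")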
 
Yes. We need industry pricing data. Consumer facing data isn't usable for this discussion.
I doubt anyone in the industry will break their NDAs to answer our forum questions; some of that information is (just guessing here, though) vital business information not available to the public 🤷‍♂️
It is hard to find exact numbers, but you can find stuff like this:
[attached chart]

I did, however, make a small modification to the chart:
Code:
GP102 = 11,800 Mil  Die: 471 mm²  MSRP $1199  Transistors per $ (for the consumer) =  9,841,534
TU102 = 18,600 Mil  Die: 754 mm²  MSRP  $999  Transistors per $ (for the consumer) = 18,618,618
GA102 = 28,300 Mil  Die: 628 mm²  MSRP $1999  Transistors per $ (for the consumer) = 14,157,078
AD102 = 76,300 Mil  Die: 609 mm²  MSRP $1599  Transistors per $ (for the consumer) = 47,717,323
GB102 ~ 90,180 Mil  Die: 744 mm²  MSRP $2000  Transistors per $ (for the consumer) ~ 45,090,000

Since GP102 was 16nm, this covers the period where cost per transistor either stagnated or increased for the vendors. One could argue that, as a consumer, you get more transistors per $ than ever before when looking at transistors vs. MSRP.
Again, AD102 is really a big outlier there, with a ~170% jump in transistors.
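For reference, the per-$ column is just the transistor count divided by MSRP; a minimal Python sketch with the numbers above:

Code:
# The "Transistors per $ (for the consumer)" column above, recomputed:
# transistor count (in millions) divided by launch MSRP. GB102 is guesswork.
dies = [
    ("GP102", 11_800, 1199),
    ("TU102", 18_600, 999),
    ("GA102", 28_300, 1999),
    ("AD102", 76_300, 1599),
    ("GB102", 90_180, 2000),  # guessed count and guessed price
]

for name, millions, msrp in dies:
    print(f"{name}: {millions * 1_000_000 / msrp:,.0f} transistors per MSRP dollar")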
 
What you showed here is not the same as cost per transistor, which was what I was replying to. Your own original chart had the cost pretty much stagnant. Wafers are more expensive because they hold more transistors, so even if the cost per transistor stayed the same, they would still be more expensive.
See post above.
If you have better numbers, please share.
 
We can only estimate the cost to NVIDIA of the actual dies. At least I've never seen this information.

You could fit at maximum 240 AD104 dies (294mm²) on a 300mm (70,685mm²) 4N wafer. Obviously in reality it's less than that because you can't use the ones on the edges, but I dunno how many that is. Let's say you get 200 dies out of a $20,000 wafer (accounting for edge and defects, total guess). So an AD104 costs NVIDIA around a hundred bucks. That smells about right. Please someone check me on this.
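The same napkin math as a small Python sketch (the $20,000 wafer price and the 200 usable dies are assumptions, not known figures):

Code:
import math

# Area-only upper bound on AD104 dies per wafer, then cost per die.
# The $20,000 wafer price and 200 usable dies are assumptions, not known values.
WAFER_DIAMETER_MM = 300
WAFER_AREA_MM2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2   # ~70,686 mm²
AD104_AREA_MM2 = 294
WAFER_COST_USD = 20_000       # assumed wafer price
USABLE_DIES_GUESS = 200       # assumed, after edge loss and defects

print(f"Area-only upper bound: {WAFER_AREA_MM2 / AD104_AREA_MM2:.0f} dies")    # ~240
print(f"Cost per die at {USABLE_DIES_GUESS} usable dies: ${WAFER_COST_USD / USABLE_DIES_GUESS:.0f}")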
 
BTW that puts AD102 at just over $200. But it could be more since more of the wafer is wasted on the edge (maybe yields are lower too). I'm sure the edge waste math is doable but not by me. And I don't know the yields.
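For the edge-waste part, a commonly used first-order approximation subtracts an edge-loss term from the plain area ratio; it still ignores scribe lanes, edge exclusion, and die aspect ratio, so dedicated calculators (like the ones below) come out lower. A sketch for AD102, still assuming a $20,000 wafer:

Code:
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> float:
    # Common approximation: wafer area / die area, minus an edge-loss term.
    d = wafer_diameter_mm
    return (math.pi * d ** 2 / 4) / die_area_mm2 - (math.pi * d) / math.sqrt(2 * die_area_mm2)

AD102_AREA_MM2 = 609
WAFER_COST_USD = 20_000   # still the assumed wafer price from above

gross = dies_per_wafer(AD102_AREA_MM2)
print(f"AD102 gross dies (approximation): {gross:.0f}")        # ~89
print(f"Cost per gross die: ${WAFER_COST_USD / gross:.0f}")    # ~$225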
 
We can only estimate the cost to NVIDIA of the actual dies. At least I've never seen this information.

You could fit at maximum 240 AD104 dies (294mm²) on a 300mm (70,685mm²) 4N wafer. Obviously in reality it's less than that because you can't use the ones on the edges, but I dunno how many that is. Let's say you get 200 dies out of a $20,000 wafer. So an AD104 costs NVIDIA around a hundred bucks. That smells about right. Please someone check me on this.
I think your number of dies is too high:

For example, let’s take two die sizes, 20mm x 16mm and 10mm x 8mm. Using a 300mm wafer, the former produces 127 good chips out of 173 total, meaning a yield of 73.24 per cent.

Which matches (more or less) my ideal number of dies per wafer (excluding any defects):
[attached screenshot: die-per-wafer calculation]
 
BTW that puts AD102 at just over $200. But it could be more since more of the wafer is wasted on the edge (maybe yields are lower too). I'm sure the edge waste math is doable but not by me. And I don't know the yields.
I get (with no defects) a maximum of 69 dies per wafer:
[attached screenshot: die-per-wafer calculation for AD102]
 
I think your number of dies is too high:

Which matches (more or less) my ideal number of dies per wafer (excluding any defects):
[attached screenshot: die-per-wafer calculation]
I'm not accounting for the geometry of the chips so it's just ballpark. But they (you?) came to 181 on a 20x16mm (320mm²) die. My guess was 200 for a 294mm² die. Not too far off.
 
Yes, I agree...we are basically doing "napkin" math without knowing a lot of the variables (defects, wafer cost, lithography cost and other expenses) for a die ;)
 
I did my "math" (educated guessing) with basically no consideration for yields, since that is unknown. But yeah, the true cost of an AD104 is probably between $100 and $150 if you account for that.

Also, they can put defective AD104s into the 4070, and I've no idea how to account for that.
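A rough sketch of how different yield assumptions move that estimate (wafer price, die count, and yields are all guesses):

Code:
# Rough yield sensitivity for the AD104 napkin estimate above. Wafer price,
# gross die count and the yield values are all assumptions.
WAFER_COST_USD = 20_000
GROSS_DIES = 200   # from the earlier napkin estimate

for yield_fraction in (1.0, 0.8, 0.65):
    cost = WAFER_COST_USD / (GROSS_DIES * yield_fraction)
    print(f"yield {yield_fraction:.0%}: ~${cost:.0f} per fully good die")

# If partially defective dies can be harvested for a cut-down SKU (e.g. the 4070),
# the cost per *sellable* die moves back toward the gross figure.
SELLABLE_FRACTION = 0.95   # assumed: most defective dies still usable in some SKU
print(f"~${WAFER_COST_USD / (GROSS_DIES * SELLABLE_FRACTION):.0f} per sellable die")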
 
I actually think it is even more complicated than that:

[attached chart: additional cost items]

I presume these come on top of the "bare" wafer cost.
 
I think we're just talking about the cost of the die. Well that's what I'm talking about.
What I think a lot of people might not appreciate is the scale of those costs in relation to the amount of hardware sold, and the fact that all of those costs have to be spread across the number of products sold.
None of the IHVs really go into detail about this in their financials that I'm aware of, but just as a thought experiment...

Think about the complexity of the driver stack, the compiler, and then validation of the same, for something like a Riva TNT, an 8800GTX, and a 4090, and then think about the number of person-hours you'd need to develop, validate, and support the same.

Back in the early days, there wasn't even hardware video encode/decode to worry about, let alone GPGPU programming stacks, DLSS, all the effort put into making sure that shader compilation a) works and b) is performant, etc.

That all costs a ton of money at a scale that just wasn't the case back in the day. Even in a theoretical world where the cost of the silicon die was zero, or at least stayed the same each generation, all of those other costs have increased dramatically and would result in the product necessarily being more expensive to the consumer in order to have a viable business and keep the lights on.

There's certainly some argument to be made about the shape of that trend line, whether it's mostly linear, exponential, or somewhere in between, but you'd be hard pressed to argue that it does anything but increase each generation.
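As a purely illustrative sketch with invented numbers, the amortization point looks like this:

Code:
# Toy illustration (every number here is invented): spreading fixed software,
# validation and support costs over units sold puts a floor under the price,
# even if the silicon itself were free or stayed flat.
FIXED_COSTS_USD = 500_000_000   # hypothetical R&D + software + validation spend
UNITS_SOLD = 5_000_000          # hypothetical lifetime unit sales
DIE_COST_USD = 150              # hypothetical silicon cost per unit

per_unit_fixed = FIXED_COSTS_USD / UNITS_SOLD
print(f"Fixed-cost burden per unit: ${per_unit_fixed:.0f}")
print(f"Cost floor per unit (fixed + silicon): ${per_unit_fixed + DIE_COST_USD:.0f}")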
 