28nm @ TSMC: Very expensive or just wafer-limited?

Indeed, but as far as the consumer is concerned the actual performance is what matters. The point being that given X performance/transistor, constant cost/transistor is not going to allow anyone to increase the performance/dollar ratio as they have historically. To pretend it is only a problem for Nvidia is nonsensical.
 
The latest MSRPs of 28nm products indicate that 28nm could be four times as expensive as 40nm in cost per mm^2, if margins are equal to those on 40nm products.

Could this be real, or is 28nm just wafer-limited, with prices this high only to secure supply and to shift margins toward the manufacturers instead of retail?

What would be a good estimate of how expensive 28nm really is?
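For a sense of the arithmetic, here's a back-of-the-envelope way to back an implied silicon cost per mm^2 out of an MSRP. Every number in it (prices, die sizes, board cost, the margin multiplier) is a made-up placeholder, not real data:

```python
# Back-of-the-envelope: implied silicon cost per mm^2 from MSRP,
# assuming the whole margin chain (retail, board partner, IHV) is
# unchanged between nodes. All numbers are illustrative placeholders.

def implied_cost_per_mm2(msrp, die_mm2, non_silicon_cost, margin_chain):
    """Work backwards from MSRP to the implied cost of the die per mm^2."""
    bom_cost = msrp / margin_chain          # total cost before margins
    die_cost = bom_cost - non_silicon_cost  # what's left for the silicon
    return die_cost / die_mm2

# Hypothetical 40nm card: $250 MSRP, 330 mm^2 die
old = implied_cost_per_mm2(250, 330, non_silicon_cost=60, margin_chain=1.8)
# Hypothetical 28nm card: $550 MSRP, 350 mm^2 die
new = implied_cost_per_mm2(550, 350, non_silicon_cost=60, margin_chain=1.8)

print(f"40nm: ${old:.2f}/mm^2, 28nm: ${new:.2f}/mm^2, ratio {new/old:.1f}x")
```

With placeholders like these the ratio lands around 3x; plug in your own guesses and see where it ends up.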

In what universe won't the latter lead to the former?
 
But these two chips do not perform the same.

Look at the history. Nvidia has been struggling to compete with their larger chips for years now - but they still went ahead and did at 28nm what they always do.

They really, really needed GK104 to be as good as it is, and got lucky that Tahiti is so mediocre.

But really, how much money is made on $350+ chips anyway? Nvidia will probably soon be competing against Pitcairn and Cape Verde below $250 at a big perf/mm2 disadvantage. They just aren't willing to accept being beaten even if it's financially more sound, so they go with bigger chips at each price point and rely on their special deal with TSMC to fund it. Now that they're finding they aren't getting such a great deal anymore, they complain about it.

Their one advantage - bigger dies and cheaper wafers - is no longer valid. TSMC gets nothing out of giving Nvidia any favouritism - if they don't want the wafers you can be sure AMD will take every one. Nvidia won't accept that they need to meet AMD on a similar die size, so they instead cry about not getting cheap wafers any more.

That is what these slides are about - TSMC has probably dropped Nvidia's favourable pricing and Nvidia has nobody else to blame but themselves.
 
Actually, when it comes to complaining about foundries, NVIDIA seems to be, by far, the most vocal company out there.

They are just generally vocal, or it should be said, just plain defensive, and foundry issues expose this on a regular basis. Take their chips being called out as slower by competitors (whether it's true or not), or statements like...

"Kepler in super phones" (and presumably tablets) and
"Graphics are still far away from photo realistic and can go beyond that"

It's just cringeworthy stuff.
 
jimbo75 said:
That is what these slides are about - TSMC has probably dropped Nvidia's favourable pricing and Nvidia has nobody else to blame but themselves.
No, that isn't what the slides show at all. At this point it is clear any further conversation with you on this topic would be meaningless. Peace.
 
No, that isn't what the slides show at all. At this point it is clear any further conversation with you on this topic would be meaningless. Peace.

That's up to you. :smile:

The logic says that Nvidia has a worse deal than they used to have. You should ask yourself: if things are so bad for Nvidia, how bad must they be for AMD?

If nothing else has changed, AMD would be in an even worse position. It is highly likely that Nvidia has lost their cheap wafer agreement and now finds themselves paying the same as AMD and everyone else.

Again you have to ask yourself - what does TSMC get out of giving Nvidia a better deal? 40nm proved to TSMC that Nvidia cares nothing about relationships - they were happy to put the boot in while TSMC was under threat from GF and I'm sure TSMC didn't forget that.
 
Who is Nvidia addressing with these slides exactly? Some of those slides are about as sophisticated as marketing blather for gaming benches.

Yes, they are kind of vocal; customers (like us) are vocal too, because obviously we need fair (and low) prices.
My question: is TSMC really responsible for this pricing explosion? I mean, they get their tools and machines from somewhere else, right? Those are the guilty ones. ;)

:???:

One noteworthy debate that has bubbled to the surface now and again is the push for 450mm wafers. After all, if you can't make transistors half the size, you can try using wafers with twice the area.

There are possible cost savings, but there are a lot of conflicting motivations for the equipment manufacturers here.
The transition to 300mm wafers was very painful for the tool makers, and it winnowed the customer pool significantly. 450mm is more expensive, and the number of customers even smaller.

TSMC, if we are to follow Nvidia's numbers, would have a strong motivation to pursue bigger wafers since its smaller transistors don't look compelling.
Intel has also been a strong pusher for 450mm wafers.
We have seen further motions on this transition, but it still looks like the currently in-progress fabs are starting with 300mm wafers, with the intent to someday move to 450mm.
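For a rough sense of what 450mm buys, the standard dies-per-wafer approximation (wafer area over die area, minus an edge-loss term) suggests the gain is actually a bit better than the raw 2.25x area ratio, especially for large dies:

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Standard approximation: wafer area over die area, minus a
    correction for partial dies lost around the circular edge."""
    d, s = wafer_diameter_mm, die_area_mm2
    return math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s)

for die in (100, 200, 350):
    d300 = gross_dies_per_wafer(300, die)
    d450 = gross_dies_per_wafer(450, die)
    print(f"{die} mm^2 die: {d300:.0f} on 300mm, {d450:.0f} on 450mm "
          f"({d450 / d300:.2f}x)")
```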
 
Yeah, if the projected trend continues you'll have to pay the same price for 100mm^2 at 20nm as for 200mm^2 at 28nm. And it'll keep doubling for the next nodes.
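That quip is just the cost-per-transistor arithmetic: if cost per mm^2 doubles each node while density also doubles (ideal scaling, assumed here), cost per transistor goes nowhere:

```python
# If cost per mm^2 doubles each node while transistor density also
# doubles, cost per transistor stays flat -- which is the whole problem.
cost_per_mm2 = 1.0   # arbitrary units at 28nm
density      = 1.0   # transistors per mm^2, normalized

for node in ("28nm", "20nm", "14nm"):
    print(f"{node}: cost/transistor = {cost_per_mm2 / density:.2f}")
    cost_per_mm2 *= 2.0  # the projected trend from the slides
    density      *= 2.0  # assuming ideal density scaling
```

Flat cost per transistor means every extra bit of performance has to be paid for, which is the end of the historical perf/$ curve.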

It's that, and it can be so much more:

[attached slide: beckley-slide-5.jpg]


If you're looking at additional numbers like these, you've got to be asking: why would I even want to go to the next node? Because of the partnership I have with TSMC? The same partner who raised prices 45 days before going into production, just because they suddenly became popular?

What if they skip the next node? What happens to the others? Do they pay more because a big volume first adopter is sitting out? Do they sit out too? Who then pays for the capex of that new node? The fruit company? Wanna bet?
 
jimbo75 said:
That is what these slides are about - TSMC has probably dropped Nvidia's favourable pricing and Nvidia has nobody else to blame but themselves.

No, that isn't what the slides show at all. At this point it is clear any further conversation with you on this topic would be meaningless. Peace.

Didn't Nvidia say in their conference call that they are now paying per wafer, and not per good die?
 
Sinistar said:
Didn't Nvidia say in their conference call that they are now paying per wafer, and not per good die?
Who gives a damn. Nvidia's motivation for showing the data does not interest me in the least. It is not important. What is important is whether or not the data presented is accurate, and I have seen no reasons thus far to doubt that is the case.

Now, if the data is accurate, then it is a problem for everyone using TSMC (and ultimately the end users), at least if you care about qualitative improvements in the end user experience continuing at the same pace as historical levels (without drastic price inflation). If you don't care about such things, for whatever reason, then perhaps this is not the thread for you.
 
Who gives a damn. Nvidia's motivation for showing the data does not interest me in the least. It is not important. What is important is whether or not the data presented is accurate, and I have seen no reasons thus far to doubt that is the case.

Now, if the data is accurate, then it is a problem for everyone using TSMC (and ultimately the end users), at least if you care about qualitative improvements in the end user experience continuing at the same pace as historical levels (without drastic price inflation). If you don't care about such things, for whatever reason, then perhaps this is not the thread for you.

One thing to keep in mind is that NVIDIA presents transistor cost as a function of yield(t), scaling factor and wafer cost. The latter two are presumably (roughly) the same for everyone, but different chips from different companies may have different yields at any given time, and yield curves that grow more or less quickly.

So while the chart presented by NVIDIA may be accurate when it comes to their own designs, it may not entirely reflect what other users of TSMC's services are seeing and expecting to see in the future.
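To make that concrete, here's a minimal sketch of that kind of model, using the classic Poisson yield formula; the wafer cost, defect density and die parameters are made-up illustrations, not NVIDIA's actual inputs:

```python
import math

def poisson_yield(die_area_cm2, defect_density):
    """Classic Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defect_density)

def cost_per_transistor(wafer_cost, wafer_diameter_mm, die_area_mm2,
                        transistors_per_die, defect_density):
    """Wafer cost spread over the *good* transistors on the wafer."""
    d, s = wafer_diameter_mm, die_area_mm2
    # Standard dies-per-wafer approximation with an edge-loss term.
    gross_dies = math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s)
    good_dies = gross_dies * poisson_yield(s / 100.0, defect_density)
    return wafer_cost / (good_dies * transistors_per_die)

# Two hypothetical designs, same wafer price and same defect density:
for area, xtors in ((120, 1.5e9), (350, 4.3e9)):
    c = cost_per_transistor(wafer_cost=5000, wafer_diameter_mm=300,
                            die_area_mm2=area, transistors_per_die=xtors,
                            defect_density=0.5)  # defects/cm^2, assumed
    print(f"{area} mm^2 die: ${c * 1e9:.2f} per billion transistors")
```

Same wafer price, same defect density, yet the bigger die comes out several times worse per transistor, which is exactly why one company's cost curve doesn't automatically describe everyone else's.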
 
Alexko said:
One thing to keep in mind is that NVIDIA presents transistor cost as a function of yield(t), scaling factor and wafer cost. The latter two are presumably (roughly) the same for everyone, but different chips from different companies may have different yields at any given time, and yield curves that grow more or less quickly.
If scaling factors and wafer cost qualify for a rough treatment in calling them the same for everybody, then defect density does too. There is really no reason to think that a fab is going to be much better in quality for one but not for the other, except for you-know-which-kind-of-cases where one needs to work around a bug in the process.
 
AMD suffered with its transition to the 5xx0 architecture even though they ran "test" runs on the 4770 cards. There were similar yield issues with the 6xx0 architecture. We don't know how it is working out with the latest cards. NVIDIA has always complained about yield issues ever since the 40nm node, so we know they have had issues.

I guess the question is: should AMD/NVIDIA and other partners pay for testing? When there is only one horse in the race, unfortunately, they do. What's needed is more competition in the semiconductor field. It is a shame that GlobalFoundries is stumbling as well. Another large competitor would certainly have benefited the end user.

Of course there is a blindingly obvious solution to all this... unfortunately it means the IHVs can no longer compete based on node shrinks :(

Is the silicon free lunch coming to an end?
 
Some specificity would be required to determine what the "free lunch" is.
It's never been truly free.
The assumed scaling from an optical shrink ended years ago.
Intel and AMD ran out of "cheap-ish lunch" territory somewhere around 130nm and 90nm.

When web sites only just now start editorializing about how we can't assume things will improve with a shrink, it just shows they haven't been paying attention.
Everyone has had to work harder for years. If the general tech press is picking up on it, it just means that the foundries and fabless companies can't hide how hard it is anymore.
 
Some specificity would be required to determine what the "free lunch" is.
It's never been truly free.
The assumed scaling from an optical shrink ended years ago.
Intel and AMD ran out of "cheap-ish lunch" territory somewhere around 130nm and 90nm.

If it carries on going up as in the NVIDIA slides, it means it isn't worth being first anymore. Let others mature the process, and have longer cycles between refreshes and new architectures. This is something that has already been happening, but it may become more pronounced, as will higher prices and lower availability.

Apple, for instance, did not jump onto 28nm for its A5X processor like some had assumed. And I agree it has been getting harder for the IHVs to deliver, but still they have been delivering [new, faster-performing products at approximately the same price, with more features and approximately the same power consumption - what a mouthful] to a large extent. Perhaps that time is coming to an end as we hit the economics barrier of no return rather than "just" the physics one.
 
It's not worth being first if your situation matches Nvidia's, and the odds are good that many fabless companies' situations do.

There are players (or at least one) in a better position, so they will reap the monetary benefits.
 
Maybe the "free lunch" is getting harder to catch but since about 90nm whole, "Cut power by half, double density, increase clocks" has not held. As 3dilettante notes this has been going on for years--and it is often right in the fab press releases on new nodes. When you double your density but only improve power efficiency by 25-30% something has to give: If power limited you chip can only grow by 25-30% or if area limited your power consumption is going to nearly double for the same area chip (with ~ 1.9x as many transistors). And this has held true as GPUs, even with more power efficient designs, have seen their TDP steadily increase since the early 2000s.

It is unfortunate that TSMC is the only big player right now for leading-edge IHV fabbing. Hopefully GF gets their ducks in a row, but even then, as others noted, the fact that you just cannot pick up your GPU design at TSMC and seamlessly plug it in over at GF pretty much says everyone is stuck.
 
As others have pointed out, the slides that were released could be due to NVIDIA-specific issues.
AMD certainly gets its graphics cards out sooner on new nodes, at least.
 