jimbo75 said:
No I'm saying that Nvidia's planned performance chip was 296mm2 compared to AMD's planned performance chip at 212mm2.

But these two chips do not perform the same.
So you are saying 300 > 365 ???
The latest MSRPs of 28nm products indicate that 28nm could be four times as expensive as 40nm in cost per mm^2, assuming margins equal to those on 40nm products.
Could this be real, or is 28nm just wafer-limited, with prices this high only to secure supply and to shift margin toward the manufacturers instead of retail?
What would be a good estimate of how expensive 28nm really is?
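For a rough sanity check on that 4x figure, here's a back-of-the-envelope sketch (all wafer prices and yields below are made-up illustrative numbers, not actual TSMC pricing):

```python
import math

WAFER_DIAMETER_MM = 300
WAFER_AREA_MM2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2

def cost_per_good_mm2(wafer_cost_usd, yield_fraction):
    """Effective cost per mm^2 of *good* silicon: the wafer cost
    spread over only the area that yields working dice."""
    return wafer_cost_usd / (WAFER_AREA_MM2 * yield_fraction)

# Hypothetical inputs, chosen only to show how a 4x could arise:
c40 = cost_per_good_mm2(wafer_cost_usd=3000, yield_fraction=0.85)  # mature 40nm
c28 = cost_per_good_mm2(wafer_cost_usd=6000, yield_fraction=0.45)  # early 28nm

print(f"40nm: ${c40:.3f} per good mm^2")
print(f"28nm: ${c28:.3f} per good mm^2")
print(f"ratio: {c28 / c40:.1f}x")  # ~3.8x with these assumptions
```

The point being that a doubled wafer price combined with immature early-node yields can plausibly land near the 4x the MSRPs suggest; once 28nm yields mature, the effective ratio should fall back toward the raw wafer-price ratio.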
Actually, when it comes to complaining about foundries, NVIDIA seems to be, by far, the most vocal company out there.
Yes, they are kind of vocal, but customers (like us) are vocal too, because obviously we want fair (and low) prices.
My question: is TSMC really responsible for this pricing explosion? I mean, they get the tools and machines from somewhere else, right? Those suppliers are the guilty ones.
Yeah, if the projected trend continues you'll have to pay the same price for 100mm2 at 20nm as for 200mm2 at 28nm. And it'll keep doubling for the next nodes.
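As a toy illustration of that doubling (the node names are real, but the starting cost is an arbitrary assumption):

```python
# Toy extrapolation of the "cost per mm^2 doubles each node" claim.
# The cost figures are hypothetical; only the doubling is the input.
nodes = ["28nm", "20nm", "14/16nm", "10nm"]
cost_per_mm2 = 0.05  # assumed $/mm^2 at 28nm, for illustration only
budget_mm2 = 200     # die area affordable at 28nm for a fixed price

for node in nodes:
    print(f"{node}: ${cost_per_mm2:.2f}/mm^2 -> "
          f"{budget_mm2:.0f}mm^2 for the same fixed budget")
    cost_per_mm2 *= 2
    budget_mm2 /= 2
```

Which reproduces the 200mm2-at-28nm equals 100mm2-at-20nm equivalence above, and keeps halving from there.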
jimbo75 said:
That is what these slides are about - TSMC has probably dropped Nvidia's favourable pricing and Nvidia has nobody else to blame but themselves.
No, that isn't what the slides show at all. At this point it is clear any further conversation with you on this topic would be meaningless. Peace.
Sinistar said:
Didn't Nvidia say in their conference call that they are now paying per wafer, and not by yield?

Who gives a damn. Nvidia's motivation for showing the data does not interest me in the least. It is not important. What is important is whether or not the data presented is accurate, and I have seen no reason thus far to doubt that it is.
Now, if the data is accurate, then it is a problem for everyone using TSMC (and ultimately the end users), at least if you care about qualitative improvements in the end user experience continuing at the same pace as historical levels (without drastic price inflation). If you don't care about such things, for whatever reason, then perhaps this is not the thread for you.
Alexko said:
One thing to keep in mind is that NVIDIA presents transistor cost as a function of yield(t), scaling factor and wafer cost. The latter two are presumably (roughly) the same for everyone, but different chips from different companies may have different yields at any given time, and yield curves that grow more or less quickly.

If scaling factors and wafer cost qualify for a rough treatment in calling them the same for everybody, then defect density does too. There is really no reason to think that a fab is going to be much better in quality for one but not for the other, except for you-know-which-kind-of-cases where one needs to work around a bug in the process.
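For what it's worth, the relationship in those slides can be sketched with a toy model (my own sketch using the classic Poisson yield approximation Y = exp(-D0 * A); the wafer cost, die sizes, transistor counts and defect density below are all assumed numbers, not NVIDIA's or TSMC's):

```python
import math

def cost_per_transistor(wafer_cost, die_area_mm2, transistors_per_die,
                        defect_density_per_mm2):
    """Toy transistor-cost model: wafer cost divided across the good
    dice on a 300mm wafer, with yield from the Poisson approximation
    Y = exp(-D0 * A). Real yield curves also improve over time."""
    wafer_area = math.pi * 150**2               # 300mm wafer, in mm^2
    dice_per_wafer = wafer_area / die_area_mm2  # ignores edge losses
    yield_ = math.exp(-defect_density_per_mm2 * die_area_mm2)
    good_dice = dice_per_wafer * yield_
    return wafer_cost / (good_dice * transistors_per_die)

# Same wafer cost and defect density, two different die sizes:
small = cost_per_transistor(6000, 212, 2.5e9, 0.002)
large = cost_per_transistor(6000, 296, 3.5e9, 0.002)
print(f"212mm^2 die: ${small:.2e} per transistor")
print(f"296mm^2 die: ${large:.2e} per transistor")
```

Even with identical wafer cost and defect density, the larger die yields worse, which is exactly why different chips sit at different points on the cost curve at any given time.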
Some specificity would be required to determine what the "free lunch" is.
It's never been truly free.
The era of assumed scaling benefits from a simple optical shrink ended years ago.
Intel and AMD ran out of "cheap-ish lunch" territory somewhere around 130nm and 90nm.