Old 25-Mar-2012, 00:30   #26
Duck Dodgers
Junior Member
 
Join Date: Apr 2006
Posts: 30

Quote:
Originally Posted by Alexko View Post
Actually, when it comes to complaining about foundries, NVIDIA seems to be, by far, the most vocal company out there.
They are just generally vocal, or, it should be said, just plain defensive, and foundry issues expose this on a regular basis. Whether their chips are called out as slower by competitors (true or not) or they make statements like...

"Kepler in super phones" (and presumably tablets) and
"Graphics are still far away from photo realistic and can go beyond that"

It's just cringe-worthy stuff.
Old 25-Mar-2012, 00:39   #27
ninelven
PM
 
Join Date: Dec 2002
Posts: 1,459

Quote:
Originally Posted by jimbo75
That is what these slides are about - TSMC has probably dropped Nvidia's favourable pricing and Nvidia has nobody else to blame but themselves.
No, that isn't what the slides show at all. At this point it is clear any further conversation with you on this topic would be meaningless. Peace.
__________________
//
Old 25-Mar-2012, 02:07   #28
jimbo75
Senior Member
 
Join Date: Jan 2010
Posts: 1,211

Quote:
Originally Posted by ninelven View Post
No, that isn't what the slides show at all. At this point it is clear any further conversation with you on this topic would be meaningless. Peace.
That's up to you.

The logic says that Nvidia has a worse deal than they used to have. But ask yourself: if things are so bad for Nvidia, how bad must they be for AMD?

If nothing else has changed, AMD would be in an even worse position. It is highly likely that Nvidia has lost their cheap wafer agreement and now finds themselves paying the same as AMD and everyone else.

Again you have to ask yourself - what does TSMC get out of giving Nvidia a better deal? 40nm proved to TSMC that Nvidia cares nothing about relationships - they were happy to put the boot in while TSMC was under threat from GF and I'm sure TSMC didn't forget that.
Old 25-Mar-2012, 02:09   #29
3dilettante
Regular
 
Join Date: Sep 2003
Location: Well within 3d
Posts: 5,486

Who exactly is Nvidia addressing with these slides? Some of them are about as sophisticated as marketing blather for gaming benchmarks.

Quote:
Originally Posted by UniversalTruth View Post
Yes, they are kind of vocal, customers (like us) are vocal too, because it's obvious we need fair (and low) prices.
My question: is TSMC really responsible for this pricing explosion? I mean, they get their tools and machines from somewhere else, right? Those are the guilty ones.

One noteworthy debate that has bubbled to the surface now and again is the push for 450mm wafers. After all, if you can't make transistors half the size, you can try using wafers with more than twice the area.

There are possible cost savings, but there are a lot of conflicting motivations for the equipment manufacturers here.
The transition to 300mm wafers was very painful for the tool makers, and it winnowed the customer pool significantly. 450mm is more expensive, and the number of customers even smaller.

TSMC, if we are to follow Nvidia's numbers, would have a strong motivation to pursue bigger wafers since its smaller transistors don't look compelling.
Intel has also been a strong pusher for 450mm wafers.
We have seen further movement on this transition, but the fabs currently in progress still look to be starting with 300mm wafers, with the intent to move to 450mm someday.
__________________
Dreaming of a .065 micron etch-a-sketch.
Old 25-Mar-2012, 02:30   #30
upnorthsox
Senior Member
 
Join Date: May 2008
Posts: 1,429

Quote:
Originally Posted by Man from Atlantis View Post
Yeah, if the projected trend continues you'll have to pay the same price for 100mm² at 20nm as for 200mm² at 28nm... and it'll keep doubling for the nodes after that.
It's that, and it can be so much more:

[chart omitted]
If you're looking at additional numbers like these you've got to be asking why would I even want to go to the next node? Because of the partnership I have with TSMC? The same partner who raised prices 45 days before going into production just because he suddenly became popular?

What if they skip the next node? What happens to the others? Do they pay more because a big volume first adopter is sitting out? Do they sit out too? Who then pays for the capex of that new node? The fruit company? Wanna bet?
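The doubling trend quoted above is easy to project forward. A quick sketch (all numbers hypothetical, normalised to 28nm) shows what a fixed die budget buys per node if cost per mm² really does double each time:

```python
# Hypothetical projection: cost per mm2 doubles at each node transition.
cost_per_mm2 = {"28nm": 1.0}  # normalized to 28nm
nodes = ["20nm", "14nm", "10nm"]
for prev, node in zip(["28nm"] + nodes, nodes):
    cost_per_mm2[node] = cost_per_mm2[prev] * 2.0

# A budget that bought a 200 mm2 die at 28nm buys half the area each node.
budget = 200.0  # normalized units (what 200 mm2 cost at 28nm)
affordable_area = {n: budget / c for n, c in cost_per_mm2.items()}
print(affordable_area)  # 200 mm2 at 28nm, 100 at 20nm, 50 at 14nm, 25 at 10nm
```

This reproduces the "100mm² at 20nm for the price of 200mm² at 28nm" figure from the quote; whether the doubling actually continues past one node is exactly what was being argued about.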
Old 25-Mar-2012, 03:34   #31
Sinistar
Member
 
Join Date: Aug 2004
Location: Indiana
Posts: 523

Quote:
Originally Posted by jimbo75
That is what these slides are about - TSMC has probably dropped Nvidia's favourable pricing and Nvidia has nobody else to blame but themselves.
Quote:
Originally Posted by ninelven View Post
No, that isn't what the slides show at all. At this point it is clear any further conversation with you on this topic would be meaningless. Peace.
Didn't Nvidia say in their conference call that they are now paying per wafer, and not by yield?
Old 25-Mar-2012, 04:01   #32
ninelven
PM
 
Join Date: Dec 2002
Posts: 1,459

Quote:
Originally Posted by Sinistar
Didn't Nvidia say in their conference call that they are now paying per wafer, and not by yield?
Who gives a damn. Nvidia's motivation for showing the data does not interest me in the least. It is not important. What is important is whether or not the data presented is accurate, and I have seen no reasons thus far to doubt that is the case.

Now, if the data is accurate, then it is a problem for everyone using TSMC (and ultimately the end users), at least if you care about qualitative improvements in the end user experience continuing at the same pace as historical levels (without drastic price inflation). If you don't care about such things, for whatever reason, then perhaps this is not the thread for you.
__________________
//
Old 25-Mar-2012, 13:46   #33
Alexko
Senior Member
 
Join Date: Aug 2009
Posts: 2,945

Quote:
Originally Posted by ninelven View Post
Who gives a damn. Nvidia's motivation for showing the data does not interest me in the least. It is not important. What is important is whether or not the data presented is accurate, and I have seen no reasons thus far to doubt that is the case.

Now, if the data is accurate, then it is a problem for everyone using TSMC (and ultimately the end users), at least if you care about qualitative improvements in the end user experience continuing at the same pace as historical levels (without drastic price inflation). If you don't care about such things, for whatever reason, then perhaps this is not the thread for you.
One thing to keep in mind is that NVIDIA presents transistor cost as a function of yield(t), scaling factor and wafer cost. The latter two are presumably (roughly) the same for everyone, but different chips from different companies may have different yields at any given time, and yield curves that grow more or less quickly.

So while the chart presented by NVIDIA may be accurate when it comes to their own designs, it may not entirely reflect what other users of TSMC's services are seeing and expecting to see in the future.
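For illustration only (this is not NVIDIA's actual model, and every number below is made up), here is roughly what "transistor cost as a function of yield, scaling factor and wafer cost" looks like, with a simple Poisson defect model standing in for yield(t). It shows why two designs paying for the same wafers can see very different per-transistor costs:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of good dies under a simple Poisson defect model."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

def cost_per_mtransistor(wafer_cost, die_area_mm2, mtransistors_per_die,
                         defects_per_cm2, wafer_area_mm2=math.pi * 150**2):
    """Cost per million transistors = wafer cost / good transistors per wafer."""
    dies_per_wafer = wafer_area_mm2 / die_area_mm2  # ignores edge loss
    good_dies = dies_per_wafer * poisson_yield(die_area_mm2, defects_per_cm2)
    return wafer_cost / (good_dies * mtransistors_per_die)

# Same process, same transistor density, same (hypothetical) $5000 wafer:
# the large die rides the defect density far worse than the small one.
small = cost_per_mtransistor(5000, 100, 1000, 0.5)
large = cost_per_mtransistor(5000, 400, 4000, 0.5)
print(small, large)  # the 400 mm2 die costs several times more per transistor
```

As the defect density falls over time, the large die's cost per transistor improves much faster, so the "crossover" date against the previous node really is design-dependent.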
__________________
"Well, you mentioned Disneyland, I thought of this porn site, and then bam! A blue Hulk." —The Creature
My (currently dormant) blog: Teχlog
Old 25-Mar-2012, 16:23   #34
silent_guy
Senior Member
 
Join Date: Mar 2006
Posts: 2,323

Quote:
Originally Posted by Alexko
One thing to keep in mind is that NVIDIA presents transistor cost as a function of yield(t), scaling factor and wafer cost. The latter two are presumably (roughly) the same for everyone, but different chips from different companies may have different yields at any given time, and yield curves that grow more or less quickly.
If scaling factors and wafer cost qualify for the rough treatment of being called the same for everybody, then defect density does too. There is really no reason to think a fab is going to be much better for one customer than for another, except in you-know-which-kind-of cases where one needs to work around a bug in the process.
Old 26-Mar-2012, 18:28   #35
Tahir2
Itchy
 
Join Date: Feb 2002
Location: United Queendom
Posts: 2,873

AMD suffered with its transition to the 5xx0 architecture even though they ran "test" runs with the 4770 cards. There were similar yield issues with the 6xx0 architecture. We don't know how it is working out with the latest cards. NVIDIA has complained about yield issues ever since the 40nm node, so we know they have had issues too.

I guess the question is: should AMD/NVIDIA and other partners pay for testing? When there is only one horse in the race, unfortunately, they do. What's needed is more competition in the semiconductor field. It is a shame that Global Foundries is stumbling as well. Another large competitor would certainly have benefited the end user.

Of course there is a blindingly obvious solution to all this... unfortunately it means the IHVs can no longer compete based on node shrinks.

Is the silicon free lunch coming to an end?
__________________
"Unless I am very mistaken… and yes, I am very much mistaken." - The Legend M Walker
Old 26-Mar-2012, 18:59   #36
3dilettante
Regular
 
Join Date: Sep 2003
Location: Well within 3d
Posts: 5,486

Some specificity would be required to determine what the "free lunch" is.
It's never been truly free.
The assumed scaling from an optical shrink ended years ago.
Intel and AMD ran out of "cheap-ish lunch" territory somewhere around 130nm and 90nm.

When I see web sites editorializing just now about how we can't assume things will improve with a shrink, it only shows they haven't been paying attention.
Everyone has had to work harder for years. If the general tech press is picking up on it, it just means that the foundries and fabless companies can't hide how hard it is anymore.
__________________
Dreaming of a .065 micron etch-a-sketch.

Last edited by 3dilettante; 26-Mar-2012 at 19:14. Reason: removed confused text
Old 26-Mar-2012, 19:17   #37
Tahir2
Itchy
 
Join Date: Feb 2002
Location: United Queendom
Posts: 2,873

Quote:
Originally Posted by 3dilettante View Post
Some specificity would be required to determine what the "free lunch" is.
It's never been truly free.
The assumed scaling from an optical shrink ended years ago.
Intel and AMD ran out of "cheap-ish lunch" territory somewhere around 130nm and 90nm.
If costs carry on going up as in the NVIDIA slides, it means it isn't worth being first anymore. Let others mature the process, and have longer cycles between refreshes and new architectures. This has already been happening, but it may become more pronounced, as will higher prices and lower availability.

Apple, for instance, did not jump onto 28nm for its A5X processor like some had assumed. And I agree it has been becoming harder for the IHVs to deliver, but they have still been delivering [new, faster-performing products at approximately the same price, with more features and approximately the same power consumption - wattamouthful] to a large extent. Perhaps that time is coming to an end as we hit the economics barrier of no return rather than "just" the physics one.
__________________
"Unless I am very mistaken… and yes, I am very much mistaken." - The Legend M Walker
Old 26-Mar-2012, 19:27   #38
3dilettante
Regular
 
Join Date: Sep 2003
Location: Well within 3d
Posts: 5,486

It's not worth being first if your situation matches Nvidia's, and odds are good that many fabless companies' situations would.

There are players (or at least one) in a better position, so they will reap the monetary benefits.
__________________
Dreaming of a .065 micron etch-a-sketch.
Old 26-Mar-2012, 19:29   #39
Acert93
Artist formerly known as Acert93
 
Join Date: Dec 2004
Location: Seattle
Posts: 7,812

Maybe the "free lunch" is just getting harder to catch, but since about 90nm the whole "cut power by half, double density, increase clocks" promise has not held. As 3dilettante notes, this has been going on for years - and it is often right there in the fab press releases on new nodes. When you double your density but only improve power efficiency by 25-30%, something has to give: if power limited, your chip can only grow by 25-30%; if area limited, your power consumption is going to climb sharply for the same-area chip (with ~1.9x as many transistors). And this has held true: GPUs, even with more power-efficient designs, have seen their TDPs steadily increase since the early 2000s.
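A back-of-envelope version of that trade-off, with illustrative numbers (a hypothetical node that doubles density while cutting power per transistor by 25%; with these inputs the same-area chip lands at about 1.5x the old TDP rather than a full doubling, but the squeeze is the same):

```python
# Hypothetical node transition: 2x density, 25% lower power per transistor.
density_gain = 2.0
relative_power_per_transistor = 0.75

# Power-limited design: same TDP budget, so the transistor
# count can only grow by 1 / 0.75 = ~1.33x.
power_limited_growth = 1.0 / relative_power_per_transistor

# Area-limited design: same die size doubles the transistor
# count, so TDP climbs to 2.0 * 0.75 = 1.5x the old budget.
area_limited_power = density_gain * relative_power_per_transistor

print(power_limited_growth, area_limited_power)
```

Either way the shrink pays for less than the density figure alone suggests, which is the "something has to give" above.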

It is unfortunate that TSMC is the only big player right now for leading-edge IHV fabbing. Hopefully GF gets their ducks in order, but even then, as others noted, the fact that you cannot just pick up your GPU design at TSMC and seamlessly plug it in over at GF pretty much says everyone is stuck.
__________________
"In games I don't like, there is no such thing as "tradeoffs," only "downgrades" or "lazy devs" or "bugs" or "design failures." Neither do tradeoffs exist in games I'm a rabid fan of, and just shut up if you're going to point them out." -- fearsomepirate
Old 26-Mar-2012, 19:33   #40
Tahir2
Itchy
 
Join Date: Feb 2002
Location: United Queendom
Posts: 2,873

As others have pointed out - the slides that were released could be due to NVIDIA-specific issues.
AMD certainly get their graphics cards out sooner on new nodes, at least.
__________________
"Unless I am very mistaken… and yes, I am very much mistaken." - The Legend M Walker
Old 26-Mar-2012, 19:38   #41
3dilettante
Regular
 
Join Date: Sep 2003
Location: Well within 3d
Posts: 5,486

There may be some wrinkles specific to Nvidia, but the overall trends drawn are true if you are a fabless company that needs to compete on the bleeding edge of manufacturing.

GF, a foundry, has shown unsustainable cost growth in its own presentations.

The move for tighter integration between design and manufacturing steps is something all the foundries are struggling with, and they aren't all doing this for Nvidia.
__________________
Dreaming of a .065 micron etch-a-sketch.
Old 26-Mar-2012, 19:48   #42
Tahir2
Itchy
 
Join Date: Feb 2002
Location: United Queendom
Posts: 2,873

Do you think, then, that IHVs asking to be treated as IDMs by TSMC is reasonable? I can't see that working. Have AMD released themselves from that sort of arrangement with Global Foundries?

Interesting to see the contrast in reaction - AMD talks about changing its strategic goals and not competing at the very high end against Intel (albeit they were mainly talking about CPUs rather than discrete GPUs at the time), while NVIDIA seemingly goes for TSMC's jugular with these somewhat childish slides - asking to be treated as an IDM and for rough fair justice (what does that even mean?!)
__________________
"Unless I am very mistaken… and yes, I am very much mistaken." - The Legend M Walker
Old 26-Mar-2012, 19:56   #43
3dilettante
Regular
 
Join Date: Sep 2003
Location: Well within 3d
Posts: 5,486

AMD's CPU side probably gets the laugh track for hopping off the model just before the entire fabless industry realizes it has to become more like an IDM.

edit: To note, years ago when the idea of a spin-off was rumored, I said that if AMD did this it might as well give up competing in x86, and it increasingly looks like it has.

AMD didn't stop being an IDM because being an IDM wasn't the best way forward for manufacturing on leading-edge nodes; they did it because they were on the verge of imploding.
What Nvidia says is necessary is not unique to them, and fabs like GF have been talking about or offering some of these steps already.

This is also something I don't quite get about Nvidia's presentation, since it's not something newly discovered or a secret. What are they trying to do with these slides?
__________________
Dreaming of a .065 micron etch-a-sketch.
Old 26-Mar-2012, 21:01   #44
Tahir2
Itchy
 
Join Date: Feb 2002
Location: United Queendom
Posts: 2,873

Quote:
What are they trying to do with these slides?
Stating the obvious here:
To change the status quo. At a guess (and a complete stab in the dark): they are unhappy with the deal they have with TSMC, and when negotiations didn't go their way they went looking for hearts and minds (the interweb) to rally against the tyranny of the evil semi fab.
__________________
"Unless I am very mistaken… and yes, I am very much mistaken." - The Legend M Walker
Old 27-Mar-2012, 00:12   #45
silent_guy
Senior Member
 
Join Date: Mar 2006
Posts: 2,323

Quote:
Originally Posted by Tahir2
As others have pointed out - the slides that were released could be due to NVIDIA-specific issues.
Not very likely.
Quote:
AMD certainly get their graphics cards out sooner on new nodes, at least.
A whopping 2 1/2 months faster, no less. That explains everything.
Old 27-Mar-2012, 01:22   #46
Tahir2
Itchy
 
Join Date: Feb 2002
Location: United Queendom
Posts: 2,873

Quote:
A whopping 2 1/2 months faster, no less.
In a world of 12-month life cycles (or 18, if you like), it is impressive. Glad you agree!
__________________
"Unless I am very mistaken… and yes, I am very much mistaken." - The Legend M Walker
Old 27-Mar-2012, 02:53   #47
3dilettante
Regular
 
Join Date: Sep 2003
Location: Well within 3d
Posts: 5,486

Quote:
Originally Posted by Tahir2 View Post
Stating the obvious here:
To change the status quo. At a guess (and a complete stab in the dark): they are unhappy with the deal they have with TSMC, and when negotiations didn't go their way they went looking for hearts and minds (the interweb) to rally against the tyranny of the evil semi fab.
I don't think tech nerds have much pull with TSMC.
__________________
Dreaming of a .065 micron etch-a-sketch.
Old 27-Mar-2012, 03:46   #48
Ninjaprime
Member
 
Join Date: Jun 2008
Posts: 337

Personally, I think it's more of a "why 294mm² now costs the same as 550mm² did, and we're just as mad as you guys", which is sort of damage control after the viral marketing BS they put out about AMD overcharging for the 7970.
Old 27-Mar-2012, 04:12   #49
silent_guy
Senior Member
 
Join Date: Mar 2006
Posts: 2,323

Quote:
Originally Posted by Tahir2 View Post
In the world of 12 month life cycles (or 18 if u like) it is impressive. Glad you agree!
I don't doubt that Nvidia would love to be as fast as or faster than AMD, but if you know the number of things that can (and do) slip in typical projects, the fact that there is only a 2 1/2 month difference is really quite remarkable (and totally unlike most of the rest of the semiconductor industry).

In addition, you'll surely agree that it's a bit of a stretch for different companies to have dramatically different results on an identical process. After all, like I said before, the D0 doesn't change. Also, (successful) tech companies are highly reactive when things go wrong: if you see that somebody does better, you'd better fix it. That's what project post-mortems are for. In the case of Kepler, it's obvious that Nvidia spent a lot of effort fixing earlier wrongs. The visible ones are things like perf/mm² and perf/W, but it's foolish to think that they only looked at the visible aspects.
Old 01-Apr-2012, 00:10   #50
Arun
Unknown.
 
Join Date: Aug 2002
Location: UK
Posts: 4,934

[Apologies for the hilariously long post; I haven't written anything this long on Beyond3D since I started working, so I got a bit carried away! Obviously these are strictly my own opinions and I'm not speaking for anyone else here.]

Frankly NVIDIA's slides don't make much sense to me, likely because they're intentionally making things look worse than they really are. First of all, I somewhat suspect they're somehow including the cost of the I/O elements that don't scale (i.e. their 'scaling factor' is less than the actual transistor scaling). But more importantly, if the cost of 40nm transistors only became cheaper than 55nm transistors in Q4 2010, then why does their other graph show that they manufactured ~4x more wafers on 40nm than on 55nm for the full year of 2010? Some of that can be attributed to their DX11 architecture being 40nm-only, but then why did they manufacture nearly a third as many 40nm wafers as 55nm wafers in 2009, when the costs were so much higher?

The answer is very likely power consumption (and performance to a lesser extent but these are somewhat interlinked if you can adjust your voltage). You pay more per transistor but each transistor is more power efficient. This also implies that NVIDIA's entire argument is rather dubious: as there is a fundamental trade-off between area and power at several different design points (architecture, synthesis, voltage, etc.) then it would make more sense to compare cost for identical power consumption (roughly speaking as this is highly non-linear). And when you do that, the new process node will become more cost efficient much faster than it does when only comparing transistor cost.

It's certainly true that both cost per transistor *and* power consumption scaled down faster several process generations ago. That doesn't mean process scaling is anywhere near dead yet and I'm sure that NVIDIA realises that. I suspect that what they're really trying to do there is pressure TSMC to accept lower gross margins on new processes to essentially subsidise early adopters (i.e. NVIDIA but also AMD/Qualcomm). They already do that but obviously NVIDIA would like them to do so even more.

By the way, we do have some actual cost targets from TSMC for 28nm vs 40nm and for 20nm vs 28nm. They are quite revealing:
Quote:
Originally Posted by Lora Ho, CFO, April 2010
We do have an internal goal for 28 nanometer cost. It is a parity to our 40 nanometer. We are working hard to achieve that goal. In the same time, we believe the value we bring to the customer in 28 nanometer. On the pricing side, we should be able to get a reasonable price so that the SGM for 28 nanometer will not be lower than the prior node.
Quote:
Originally Posted by Morris Chang, CEO, January 2012
It's about a ratio of 1.45 I think. 1.45 per 1000 wafers per month capacity. If that costs $1 capital in on the 28 it will cost $1.45 on 20. Did I explain myself?
It's important to realise that this is strictly capital expenditure and that early capacity is more expensive to build than mature capacity. The actual wafer cost also depends on the materials, so for example the cost of High-K is mostly not part of that 'parity' estimate for 28nm. On 20nm, where the main cost increase is on the lithography side, it makes sense for a larger percentage of the total increase to be visible on the capital side. Also remember that various other expenses have been increasing very significantly, including process R&D (TSMC has been hiring very aggressively in the last few years, for example, and non-salary expenses are also increasing a lot for everyone).
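One way to read the Morris Chang quote (strictly my own back-of-envelope, using an idealised 2x density gain that real designs won't hit, since I/O doesn't shrink): even at 1.45x capital cost per unit of wafer capacity, capital cost per transistor still falls going to 20nm, just far more slowly than a classic transition:

```python
# Capex per 1000 wafers/month of capacity: 20nm is 1.45x 28nm (CEO quote above).
capex_ratio = 1.45
# Idealised full-node transistor density gain (real chips scale worse).
density_gain = 2.0

# Relative capital cost per transistor, 20nm vs 28nm.
capital_cost_per_transistor = capex_ratio / density_gain
print(capital_cost_per_transistor)  # 0.725: a ~27.5% drop vs the classic ~50%
```

And that is the capital component only; materials and lithography costs per wafer move on top of it, which is why the per-transistor picture looks worse than this toy number.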

Anyway, it's pretty clear that a significant part of the wafer price increase on 28nm and the expected increase on 20nm is just TSMC trying to make more money before the process is amortised, and NVIDIA is fighting back to try to get TSMC to subsidise early adopters more. They'll probably just meet in the middle like they always do, and this whole thing will amount to nothing (especially as it dates back to November, and 28nm yields have increased significantly since then).

I don't think there's any chance of NVIDIA leaving TSMC as long as Jen-Hsun and Morris Chang are both CEOs anyway since they're good personal friends. Morris won't be CEO forever though, he already came back from retirement when 40nm was becoming a huge problem in order to convince customers, e.g. Jen-Hsun, to stick with them rather than switch to GlobalFoundries.

On the other hand, it's also very clear that 20nm cost will increase significantly more than 28nm cost did, and wafer prices will increase accordingly. This is despite 28nm introducing High-K and 20nm not using FinFETs. I think TSMC would really, really like to use either e-beam or EUV at 14nm rather than keep using more and more expensive variants of immersion lithography. (BTW, isn't it ironic how EUV was seen by some as likely to be forever too expensive five years ago, and now the alternatives have become so expensive that lots of people have started looking at it as a way to reduce costs? TSMC still prefers e-beam, which is interesting; I actually hope that gets to market because it would have some interesting consequences.)
__________________
"[...]; the kind of variation which ensues depending in most cases in a far higher degree on the nature or constitution of the being, than on the nature of the changed conditions."