R500/NV50

The 90 nm node question is still very much up in the air. Intel went to great lengths (and many millions of dollars) to get their 90 nm process up and running, and AMD is only now starting serious production of 90 nm parts, which will show up in late Fall of this year. If history is any guide, TSMC has reached full-scale production on each new node about 6 to 8 months after AMD.

Now, TSMC does not have the budgets for process tech advancement that Intel and AMD do, even though being a foundry IS their business. Overall product margins for AMD and Intel chips are much higher than the margins TSMC gets producing a 3rd party chip. Spring of 2005 is when we can expect to see the NV50/R520, but many in the industry are skeptical that the commercial foundries will have a solid 90 nm process up and running for full-scale production by then. I have talked to Intel fab engineers, and they have stated that the jump from 130 nm to 90 nm was far, far more costly in both time and money than the jump from 180 nm to 130 nm. We all know the problems everyone had getting 130 nm up and running; now multiply those problems by a factor of 3 and you can understand the jump to 90 nm.

Oddly enough, the commercial foundry that will probably have a full-scale 90 nm line up and running first is IBM. They already have 90 nm parts in full-scale production. While yields and speed bins are not good for the process right now, in 9 months IBM's 90 nm should be solid enough to produce high-end GPUs at a good price with decent yields and speed bins. NVIDIA has the advantage here by working with IBM. I have heard very little about TSMC's 90 nm node, but what I have heard is that it is nowhere near production quality (and very far away from what IBM is able to do now).

If TSMC has considerable problems with its 90 nm node, most likely it will continue to work on improving the 110 nm node. Currently the transistor performance of the 110 nm node is nearly identical to their 130 nm FSG line, and since 110 nm is essentially an optical shrink of that line, yields and speed bins should be nearly identical to the 130 nm FSG as well. Now, if TSMC wants a stopgap measure to please certain of its clients, it may integrate Low-K into the 110 nm line, which would decrease power draw and heat production and would allow a chip to run faster (just as it does with the 130 nm Low-K line vs. the 130 nm FSG).
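To put rough numbers on the margin side of that argument, here is a quick back-of-envelope sketch of gross dies per wafer, using the standard edge-loss approximation. The die size and wafer diameter are illustrative assumptions, not actual TSMC figures:

```python
import math

# Crude gross-die estimate: usable wafer area / die area,
# minus a rough correction for partial dies lost at the wafer edge.
def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

die_130 = 280.0                        # mm^2, hypothetical high-end GPU on 130 nm FSG
die_110 = die_130 * (110 / 130) ** 2   # ~0.72x area, assuming a full linear shrink

for node, area in (("130 nm FSG", die_130), ("110 nm shrink", die_110)):
    print(f"{node}: {area:.0f} mm^2 -> ~{dies_per_wafer(300, area)} dies per 300 mm wafer")
# Roughly 196 vs 286 dies: the shrink alone buys ~45% more candidate dies per wafer.
```

Same wafer cost, substantially more sellable chips per wafer, which is why a cost-focused node like 110 nm is attractive even without a transistor performance gain.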

IBM is the wild card here. If IBM can get its 90 nm commercial line up to spec and running as well as Intel's or AMD's lines, then NVIDIA could have a leg up on ATI, unless ATI also decides to look into IBM. Still, a large sticking point to working with IBM is that it is a large company, and it has its own products that it wants to produce on its own lines. So if IBM wanted to dedicate most of its 90 nm line to its PowerPC products, then all other 3rd party products would have to fight for space on that line. IBM's own products get priority here.

If you want to get an exact answer, I would suggest asking Sireric (though he will not tell you a thing). Other than that, just take a look at the foundry scene and do your best to imagine what is going through the engineers' and bean counters' heads. Going 90 nm is a sticky proposition at this point in time.
 
Very good post, JoshMST. That's exactly what I'm skeptical about myself, but it's not the only thing that has me wondering at the moment.

All the roadmaps we know about state that R480 should be announced in the Q3/Q4 '04 timeframe. Since 130nm Low-K is up and running at TSMC and 110nm has been ramping since about June/July, what improvements can ATI actually stick into it?

Samsung already changed their GDDR3 product page to "max. 700MHz" from the "max. 800MHz" they had before. Availability of parts >600MHz seems to be almost nil at the moment.
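For perspective, the usual GDDR3 bandwidth arithmetic (a 256-bit bus is assumed here, as on the high-end cards of the day; the figures are just illustration):

```python
# Peak bandwidth in GB/s = clock (MHz) x 2 (DDR) x bus width (bits) / 8 / 1000.
def bandwidth_gbs(clock_mhz: float, bus_bits: int = 256) -> float:
    return clock_mhz * 2 * bus_bits / 8 / 1000

for mhz in (600, 700, 800):
    print(f"{mhz} MHz GDDR3, 256-bit bus: {bandwidth_gbs(mhz):.1f} GB/s")
# 600 -> 38.4 GB/s, 700 -> 44.8 GB/s, 800 -> 51.2 GB/s
```

So dropping the top bin from 800 MHz to 700 MHz costs about 6.4 GB/s of peak bandwidth on a 256-bit part.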

ATI surely can pump the clocks up some more, but... is it "worth" it? Not just for us, but for ATI.

If it's on 110nm, maybe they get better margins and can sell it in the same price range as their current XT-PE part, but is that all we can expect from them?

The availability of the XT-PE, especially in Europe, is not very good. Is that already a sign of a product "cancellation", to be replaced by a 110nm part with better margins? Just my thoughts, but they come up every time I read about 110nm and 90nm.

PS: In recent news, Samsung moved to 90nm for their DDR2 memory; maybe they will use it for GDDR3 too, but that's just a wild guess.
 
I am curious: how do you know TSMC has 90 nm in the bag? From what I gather, they aren't even close to having test wafers out. IBM, on the other hand, is producing 90 nm PowerPC parts (though of course not at the 3 GHz they had promised).
 
90nm is up and running, though probably not on particularly complex ASICs at the moment. They are generating about 1% of their revenue from the 90nm node.

Edit: Evidently that 1% was UMC, not TSMC.

ATI had TSMC at their R420 launch presentation, and I asked when they expected to see "complex ASICs, such as those used for graphics" on the 90nm node. They replied: in the first half of '05.
 
And the first half of 2005 is when these new parts will come, right? So it makes sense then.
110nm with Low-K sounds interesting, but didn't many here point out that 110nm is meant as an economical node? I guess they could change that, though.
 
Josh, I have a problem with your post. If we replace 0.09 with 0.13, it is exactly what we heard last year. And we know how that turned out.
 
JoshMST said:
IBM is the wild card here. If IBM can get its 90 nm commercial line up to spec and running as well as Intel's or AMD's lines, then NVIDIA could have a leg up on ATI, unless ATI also decides to look into IBM. Still, a large sticking point to working with IBM is that it is a large company, and it has its own products that it wants to produce on its own lines. So if IBM wanted to dedicate most of its 90 nm line to its PowerPC products, then all other 3rd party products would have to fight for space on that line. IBM's own products get priority here.
I could have sworn that IBM had installed a competition-style operation, so if the fabs can make more money producing ATI chips, they would run those instead of IBM chips. Maybe my memory is going and I'm thinking of something else.

epic
 
Patrick: your post is somewhat confusing, mostly because the process landscape is pretty complex and each individual manufacturer has had their own share of problems. I really don't understand the thrust of your statement. Anyway, last year at this time, TSMC's 130 nm process was running full bore, and their Low-K was already in full production. So I guess I am confused.

Epic: you could well be right; I will have to look into this. One thing that is well understood is that the East Fishkill fab is still not at full capacity in terms of wafer starts. Perhaps we will get a better idea of IBM's stance if and when that fab ever reaches full productivity.
 
JoshMST said:
One thing that is well understood is that the East Fishkill fab is still not at full capacity in terms of wafer starts. Perhaps we will get a better idea of IBM's stance if and when that fab ever reaches full productivity.

Josh, just curious how you came to this conclusion. I realize that East Fishkill has dedicated lines (e.g. STI's 65nm, IBM's internal use), but I've never seen statistics on which lines, or on how they balance the fab business.

And 2005 for 90nm at TSMC? Yikes! And, IIRC, their process uses a mean gate length of 50nm instead of Intel's or Sony's 45nm, which is already in production and being refined.
 
I believe during many of the past IBM conference calls they have mentioned that they are still looking for customers for their East Fishkill fab, but that their situation is improving.

As for specifics, I think you have to work at IBM to know those things. Typically these companies hold this information close to their chest for a variety of reasons. This is why my article is under Editorials and is mainly speculation based on what facts the industry gives us.
 
Key words: "complex ASICs". A CPU is not a GPU for IBM. TSMC had test wafers in June (same as AMD)... but I could be all wrong; I have never been in their plant.
 
R500 (now R520) and NV50 have been in development for a long time: since 2002 and 2003 respectively, as far as I know.

R600 and NV60 are also in development.

R700 and NV70 are in development as well, or at least in the planning stages.

On the ATI side, Dave Orton himself has talked about R500, R600, R700 and even R800.


It's all about concurrent development: having 2 or more GPU/VPU design teams.
 
Most consumers don't know much more than "it has a higher number so it must be better" (I still meet people who think that the GeForce4 MX is better than the GeForce3 -- I suspect they're the majority).


Well, I know the GeForce4 MX is a weaker/lesser part than the GeForce3 (NV20).

How does the GeForce4 MX (NV17) compare to the GeForce2 Ultra (NV16)?

What I remember is... I believe the NV17 has 2 active pipelines, whereas the NV16 has 4 pipes @ 240~250 MHz. The NV17 is almost certainly clocked faster.
 
Megadrive1988 said:
How does the GeForce4 MX (NV17) compare to the GeForce2 Ultra (NV16)?

Off of NVIDIA and B3D tables:

GeForce4 MX 460: 300 MHz core, 38 million triangles/s, 8.8 GB/s (2x2 pipes)

GeForce2 Ultra: 250 MHz core, 31 million triangles/s, 7.36 GB/s (4x2 pipes)
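Working the theoretical fillrates out of those pipe configs (a quick sketch; pipes x clock ignores real-world efficiency, bandwidth limits, and so on):

```python
# Theoretical fill from the configs quoted above:
# pixel fill = pipes x clock, texel fill = pipes x TMUs x clock.
cards = {
    "GeForce4 MX 460 (NV17)": {"pipes": 2, "tmus": 2, "mhz": 300},
    "GeForce2 Ultra (NV16)":  {"pipes": 4, "tmus": 2, "mhz": 250},
}
for name, c in cards.items():
    pixel = c["pipes"] * c["mhz"]               # Mpixels/s
    texel = c["pipes"] * c["tmus"] * c["mhz"]   # Mtexels/s
    print(f"{name}: {pixel} Mpixels/s, {texel} Mtexels/s")
# MX 460:    600 Mpixels/s, 1200 Mtexels/s
# GF2 Ultra: 1000 Mpixels/s, 2000 Mtexels/s
```

So even with the clock advantage, the 2-pipe NV17 gives up a lot of raw fill to the 4-pipe NV16; per the table above, what the MX 460 has going for it is mostly the extra memory bandwidth.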
 