ATI to delay 80nm GPU migration?

It also mentions Nvidia.

I think we can pretty much expect at least some minor issues on the processes going forward. The smaller the process gets, the more issues surface and need attention: things that worked before may no longer be valid on these newer processes, or they may simply not be optimal. Given that these are fabless companies with a lot of different types of designs, I don't think we can expect the foundries to supply them with tools that are optimal for their particular applications of the process. Expect the fabless companies to put more effort into combating the issues their particular designs face on a given process. This will likely lead to longer delays between the time a process is ready and the time they can bring out products with acceptable performance and yield.

EDIT: Formatting.
 
Currently, each production batch produces hundreds of units for Radeon x1900, Radeon x1600 and Nvidia GeForce7900 series production, while production for the GeForce 7600 series outputs thousands of chips, the makers said. All of these graphics chips are manufactured on 90nm node.

In comparison, when ATI first began volume shipping its Radeon x1800 to first tier vendors last year, output was less than a hundred units per batch, while output for second-tier vendors was even lower.
Wow. If R520 cores cost $120 each, as Fudo said, can we derive X1900 and X1600 costs from that--possibly even 7900 and 7600, assuming similar yields? Also, how the heck is X1600 in the "hundreds" while 7600 is in the "thousands?" Or are we talking high hundreds vs. low thousands, roughly equivalent to the die size differential (with an allowance for slightly different yields)?
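Just for fun, here's a back-of-envelope sketch of how die area and yield drive per-die cost. Every number in it (wafer cost, die sizes, yields) is a hypothetical placeholder, not an actual TSMC figure:

```python
# Back-of-envelope die cost sketch. All numbers below are hypothetical
# placeholders, not actual TSMC figures.
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Rough dies-per-wafer estimate with a simple edge-loss correction."""
    r = wafer_diameter_mm / 2
    wafer_area = math.pi * r * r
    # Subtract a perimeter term to account for partial dies at the wafer edge.
    return int(wafer_area / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost, wafer_diameter_mm, die_area_mm2, yield_frac):
    good = dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_frac
    return wafer_cost / good

# Hypothetical: $5000 per 300mm wafer, 350 mm^2 high-end die at 60% yield
# vs. a 150 mm^2 mainstream die at a better 75% yield.
big = cost_per_good_die(5000, 300, 350, 0.60)
small = cost_per_good_die(5000, 300, 150, 0.75)
print(round(big), round(small))
```

With these made-up numbers, halving-and-a-bit the die area roughly triples the count of good dies, since smaller dies also tend to yield better.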
 
This tends to go somewhat against the conference call Nvidia did the other day that UTTAR transcribed, in which they were pretty gung-ho on 80nm. That was a conference call for money people, though.

I thought the change from 90nm to 80nm was fairly straightforward - why would this be causing so many problems? Or is the story just not true?

There is also another story on Digitimes about TSMC losing some 90nm orders, so maybe they have reduced the 90nm price and are tempting Nvidia and ATI to stay on it longer? That sounds more reasonable to me.
 
Is 80nm that big of a deal? Why don't they simply cancel their 80nm plans and wait for 65nm instead? I don't think it's that far off.
 
phenix said:
Is 80nm that big of a deal? Why don't they simply cancel their 80nm plans and wait for 65nm instead? I don't think it's that far off.

It's about like the switch from 110nm to 90nm. And it's also much too expensive / not mature enough for mass production atm.
 
Pete said:
Also, how the heck is X1600 in the "hundreds" while 7600 is in the "thousands?" Or are we talking high hundreds vs. low thousand, roughly equivalent to the die size differential (with an allowance for slightly different yields)?

Isn't the X1600 smaller than the 7600, too?
 
kemosabe said:
Actually, if you listen closely......I think the yodeling has already begun. :LOL:

I developed a very bad allergy to yodeling in my childhood (oh, and yes, that is OT).
 
chavvdarrr said:
7600 is smaller than X1600

Also, 0.11 had no low-k; both 0.09 and 0.08 are low-k, with the latter being only an optical shrink.


Does someone know why?
After all, the X1600 has 157M transistors and the 7600 has 177M :?:
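A chip with more transistors can still end up smaller if it packs them more densely (different cell libraries, cache-vs-logic ratio, layout rules). A quick illustration, using approximate die sizes as reported at the time - treat the areas as illustrative, not exact:

```python
# Transistor density sketch: a chip with more transistors can still be
# smaller if its layout is denser. Die areas are approximate figures
# reported at the time, used only for illustration.

def density_m_per_mm2(transistors_m, die_area_mm2):
    """Millions of transistors per square millimetre."""
    return transistors_m / die_area_mm2

x1600 = density_m_per_mm2(157, 150)  # RV530: ~157M transistors, ~150 mm^2
g7600 = density_m_per_mm2(177, 125)  # G73:   ~177M transistors, ~125 mm^2
print(round(x1600, 2), round(g7600, 2))
```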
 
Do either TSMC or UMC have viable working 65nm nodes? I thought only Intel could use 65nm for large-scale production ATM.
 
Are you suggesting that AMD would take a role as a foundry for Nvidia? I think that's very unlikely. By the time AMD would be able to set aside capacity for foundry services, the manufacturing process at Fab 30 would probably not be attractive to Nvidia.

Or are you referring to Chartered ? Or IBM ?
 
Does AMD have their 65nm up and running OK? Isn't that quite new for them? And I don't think they have the ability to act as a foundry... As for IBM, that didn't work out too well last time, and I haven't read anything about their 65nm process. As I stated above, Intel are the only ones using 65nm on a large scale AFAIK, and they're certainly not going to give foundry space to anyone but themselves (almost). So 80nm does make sense, seeing as 65nm doesn't seem to be ready in any usable way ATM, and won't be for a while.
 
So what are we gonna be looking at? Early 2007 for TSMC's 65nm process? That would make it a little over a year behind Intel's production.

Morgoth the Dark Enemy, that's a sick and very funny sig. For the record, so do I! ;-) lol
 
(Excuse the double post in a row)

So basically I speculate the following :

g80 (nv50) Aug. 2006 80nm.
g90 (nv55) March 2007 65nm. (I HOPE)
g100 (nv60) fall 2007 65nm (mature)

It's the NV60 I'm most looking forward to. And does anybody have any idea when we're going to get a 512-bit memory bus? I mean, memory just seems to be crawling along compared to the GPUs at the mo.

Just for fun, does anybody wanna guess at transistor counts for these things!
 
There's probably nothing wrong with AMD's 65nm process (shared with IBM and Toshiba - though Tosh is 'new' and not part of the 65nm work - and AMD has not signed up for 45/35nm yet AFAIK). The yields are probably not as high as on the 90nm process yet, but that's not the reason for the wait, imo. They simply have not installed enough of the manufacturing equipment to make a 65nm launch of anything worthwhile (remember, it costs a lot of money to make the masks for a new chip, and then there's testing etc.) and keep up with demand. Right now they are able to compete with Intel with their 90nm chips, which they have two fabs capable of producing. Why would they want to rush the introduction of 65nm products when they will be able to keep up, or at least be able to sell their current 90nm products well into fall?

As for the rumour, perhaps it refers to SNAP (Strategic Nvidia AMD Partnership), which has more to do with chipsets and platform development, I think.

kyetech said:
And does anybody have any idea when we're going to get a 512-bit memory bus? I mean, memory just seems to be crawling along compared to the GPUs at the mo.
From what I've heard, 512-bit memory buses get harder to implement on smaller manufacturing processes (or rather, as cores become smaller). Apparently there is a big problem with having enough space on the core for all the pads that connect to the pins (or umm... balls) on the chip package. And then there is the increased cost of the PCB (more waste/lower yield with more layers), since the number of traces for the memory doubles.
GDDR4 is in volume production right now :p and will offer roughly double the bandwidth of GDDR3, though at first not quite twice as much.
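To put rough numbers on it: peak bandwidth is just bus width (in bytes) times the effective per-pin data rate, so a faster GDDR4 pin buys much of what a wider bus would, without the pad and PCB headaches. The per-pin rates here are illustrative, not exact product specs:

```python
# Memory bandwidth sketch: bandwidth = bus width (bytes) x effective data
# rate per pin. The per-pin rates below are illustrative, not product specs.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

gddr3 = bandwidth_gb_s(256, 1.6)   # e.g. a 256-bit GDDR3 setup
gddr4 = bandwidth_gb_s(256, 2.8)   # same bus width, faster GDDR4 pins
wide  = bandwidth_gb_s(512, 1.6)   # or keep GDDR3 and double the bus
print(gddr3, gddr4, wide)
```

Doubling the bus width doubles bandwidth at the same pin speed, but so does (nearly) doubling the per-pin rate - which is why faster memory keeps postponing the 512-bit bus.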

AFAIK ATI's high-end cores (R520/R580) are capable of working with GDDR4 with little or no change in design. However, since Hynix and Samsung do not adhere to a strict standard, it may take a lot of engineering effort to get both running on the same cards. Also, I think Nvidia's current cores do not support GDDR4; that probably comes with the G80.
 