NVIDIA: Beyond G80...

G90 in September?
http://diy.yesky.com/vga/365/3329865.shtml
... battle against R650.

translation:
A few days ago, foreign media reported that in the third quarter of 2007 AMD will introduce the ATI Radeon HD 2950XTX, based on the 65nm R650 core. Today came news that AMD will also introduce the ATI Radeon HD 2950XT on the same 65nm R650 core; its clocks will be lower than the Radeon HD 2950XTX's, but its features and interfaces are otherwise identical. According to the company's current plan, both of these 65nm top-end DX10 graphics cards will be introduced in September. But given the R600's repeated delays, some doubt they will ship on time. Nvidia's competing G90 is expected to be released at the same time; regrettably, there is no information on the G90 chip yet.


But I think just raising the clock won't help ATi. :???:
 

I don't want to rain on your info, but wasn't that "Yesky" site involved in releasing a fake set of pictures or slides a few months ago?
I seem to recall something about that name, but it's hard to keep track of these quasi-anonymous sources that pop up in the far east from time to time... ;)
 

You mean these slides?
 
They save on costs. The saved die space is not used for anything else; the chip is simply physically smaller.

Larger dies have higher production costs, which hurts profitability.

I think it is flawed to automatically assume "larger dies" have higher production costs for Nvidia.

Nvidia doesn't own their chip plant. It buys its chip wafers from TSMC.

TSMC will pretty definitely charge Nvidia quite a bit more for a 65nm wafer than for a 90nm wafer, as it costs them more to produce once you add in the cost of amortizing the equipment required for 65nm wafer production.

TSMC will definitely charge Nvidia 40% to 50% more for a 65nm wafer than for a 90nm one.
 
I think it is flawed to automatically assume "larger dies" have higher production costs for Nvidia.
I'd bet on it being the case.
TSMC doesn't have to charge the same amount per good 200 mm² chip as it does per good 400 mm² chip.

Nvidia doesn't own their chip plant. It buys its chip wafers from TSMC.
I think large customers like Nvidia can pay per good die.
I'm sure if a design is high-yielding, Nvidia can negotiate the price per chip down.
If yields are poor, TSMC has less incentive to do so.

TSMC will definitely charge Nvidia 40% to 50% more for a 65nm wafer than for a 90nm one.
All the more reason to have smaller chips with higher yields.

TSMC absorbs a lot of the risk and overhead involved in manufacturing chips, but the overall trend with regards to large die size costing more filters through.
 
Sure, large dies = lower yields and fewer candidate dies per wafer = fewer chips to amortize wafer and manufacturing costs over.
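That arithmetic can be sketched with a toy Poisson yield model, Y = exp(-A·D0). All numbers below (wafer cost, defect density, die sizes) are hypothetical illustrations, not actual TSMC figures:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Rough gross die count: wafer area / die area, minus an edge-loss term."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-A * D0), with A converted to cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

def cost_per_good_die(wafer_cost: float, die_area_mm2: float, d0: float) -> float:
    good_dies = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, d0)
    return wafer_cost / good_dies

# Hypothetical numbers: $5000 wafer, defect density 0.5 per cm^2
small = cost_per_good_die(5000.0, 200.0, 0.5)
large = cost_per_good_die(5000.0, 400.0, 0.5)
print(f"200 mm^2 die: ${small:.0f} per good die")
print(f"400 mm^2 die: ${large:.0f} per good die")
```

With these made-up inputs the 400 mm² die ends up several times more expensive per good die than the 200 mm² one, since it loses on both counts: fewer candidate dies per wafer and a lower yield on each.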
 
Sure, large dies = lower yields and fewer candidate dies per wafer = fewer chips to amortize wafer and manufacturing costs over.

Yield is not directly affected by die size alone; it also depends on the process. If you can get more working dies out of a larger chip, then it might be cheaper than a smaller chip that has lower yields.
 
Yield is not directly affected by die size alone; it also depends on the process. If you can get more working dies out of a larger chip, then it might be cheaper than a smaller chip that has lower yields.

Then you must be willing to cede thermal and clock advantages to the competitors that transition to the next process node. It would be a temporary cost advantage that would be gone in a few months.

Once a new process has matured, the yields improve enough to bring them in line with the older processes. At that point, the cost for a smaller process wafer start would have to be double that of previous generations for a break-even.
On top of that, the price premium that a next-gen product can command would also have to fail to materialize.
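As a rough illustration of that break-even point (wafer prices hypothetical): a full-node shrink roughly halves die area, so at equal mature yields the gross die count per wafer roughly doubles, and the new node's wafer start can cost up to about twice the old one's before the per-die cost gets worse:

```python
# Hypothetical break-even check for a process shrink (e.g. 90nm -> 65nm).
old_die_area = 400.0             # mm^2 on the old node (made-up figure)
new_die_area = old_die_area / 2  # same design after a full-node shrink

old_wafer_cost = 4000.0          # made-up old-node wafer price
gross_ratio = old_die_area / new_die_area  # ~2x more candidate dies per wafer

# At equal yields, per-die cost matches when the new wafer costs this much:
break_even_new_wafer_cost = old_wafer_cost * gross_ratio
print(f"Break-even new-node wafer price: ${break_even_new_wafer_cost:.0f}")
```

This ignores edge effects and the yield difference between an immature and a mature process, which is exactly the point of contention in the posts above: early in a node's life, the yield term dominates.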
 
Which could easily be the case, considering that, for example, Intel is still using 130nm for their chipsets, while they surely have the technology for 65nm and even smaller.

For a GPU it all depends on how mature a process is. If it is a new process, it might be the safer bet to go with the old one. And then there is the question of production capacity: can you get enough chips on the modern process?
 
Which could easily be the case, considering that, for example, Intel is still using 130nm for their chipsets, while they surely have the technology for 65nm and even smaller.
From what I understand, chipsets these days are typically pad-limited, such that they wouldn't be capable of reducing the size of the chip. Thus it makes no sense to go for smaller processes unless they have more functionality to add to the chipset.
 
Yeah, but then I doubt it would be called Geforce 8xxxM or even NB8E, as they said at the Inq.

... and it does not really make sense to make a chip just for notebooks when G92 is also near.

I still puzzle over the 8800M, since I read months ago that it would be used in SLI setups in notebooks. :???:

edit:

hmmm...

NVIDIA_G84.DEV_0409.1 = "NVIDIA NB8E-SE"
http://forum.ixbt.com/topic.cgi?id=10:52777-47

Inq's 8800M GS?

edit²:
NVIDIA_G84.DEV_0409.1 = "NVIDIA GeForce 8800M GS"
http://www.driverheaven.net/windows.../134673-forceware-165-01-xp-english-only.html
 