Next Gen depends on?

I have been thinking about the next-gen GPUs, and I see a clear relation between the silicon process and the ability to move forward.
The main questions I have are: what are the foundries offering at the 90nm node (differences, timeframes, etc.)?
I could make this longer, but I'd rather have a discussion that can be filled in later on. So please tell me what you know and predict so far.
 
I think a large emerging problem is that at 90nm and below, the leakage current of the transistors begins to become comparable to the active switching current, which means mucho heat and power consumption.

Prescott anyone?
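To make the worry concrete, here is a rough sketch of dynamic versus static power using the standard formulas P_dyn = a·C·V²·f and P_leak = V·I_leak. All numbers below (transistor count, voltage, capacitance, leakage per transistor) are illustrative assumptions, not vendor data:

```python
# Illustrative sketch (assumed numbers) of why leakage starts to rival
# switching power around the 90nm node.

def dynamic_power(c_eff, vdd, freq, activity):
    """Classic switching power: P = a * C * V^2 * f (watts)."""
    return activity * c_eff * vdd**2 * freq

def leakage_power(i_leak_per_transistor, n_transistors, vdd):
    """Static power: P = V * total leakage current (watts)."""
    return vdd * i_leak_per_transistor * n_transistors

# Hypothetical GPU-class chip: 150M transistors, 1.2 V, 400 MHz,
# ~500 nF effective switched capacitance, 15% activity factor.
vdd, freq, n = 1.2, 400e6, 150e6
p_dyn = dynamic_power(c_eff=5e-7, vdd=vdd, freq=freq, activity=0.15)

# Per-transistor leakage grows steeply with each shrink (values assumed).
for node, i_leak in [("130nm", 5e-9), ("90nm", 50e-9)]:
    p_leak = leakage_power(i_leak, n, vdd)
    print(f"{node}: dynamic ~{p_dyn:.1f} W, leakage ~{p_leak:.1f} W")
```

With these made-up inputs, leakage goes from a rounding error at .13 to a meaningful slice of the total budget at .09, which is the Prescott story in miniature.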
 
Yes, I think it's going to be the hardest transition so far.
Still, there are major differences between CPUs and GPUs.
When do the semiconductor companies expect to have the capacity to run lines at this node, and does anyone have info on what silicon process they will use and the effect this might have, for better or worse?
I know it's hard to predict, as changes probably will happen, but there are PLENTY of good people here, so I have trust in you guys. ;)

Edit
Spelling
 
I think things are definitely going to slow down, but perhaps that is not a bad thing.

We can already run current games at insane resolutions with all the features on.

Hopefully they will work on greatly increasing the shader and vertex performance.
 
They are running tests on .09 now. I predict .09 will come in better than .13 for GPUs. Look to AMD's ability to get .09 out in mass as your gauge.
 
karlotta said:
They are running tests on .09 now. I predict .09 will come in better than .13 for GPUs. Look to AMD's ability to get .09 out in mass as your gauge.

Or Intel's inability?
 
karlotta said:
I predict .09 will come in better than .13 for GPUs. Look to AMD's ability to get .09 out in mass as your gauge.

Huh? They say that yields are good and that they are on track for shipment later this year, but where can we gauge AMD's ability to get .09 out in mass now? :?:
 
Why is it that CPUs can run at roughly 3-4X the frequency of GPUs for a given process? Whatever that reason is, can GPUs start to employ some of those techniques to up MHz for future gens?
ERK
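One likely piece of the answer is pipeline depth: on a given process, the clock period is bounded by how many gate delays of logic sit between pipeline registers (often quoted in FO4 inverter delays). CPUs spend relatively few FO4 delays per stage; GPU pipeline stages are much deeper. A back-of-the-envelope sketch, with all numbers assumed purely for illustration:

```python
# Assumed-numbers sketch of how logic depth per pipeline stage bounds
# clock frequency on the same silicon process.

def max_freq(fo4_delay_ps, fo4_per_stage):
    """Clock period ~ (FO4 delays of logic per stage) * (one FO4 delay).
    Returns the resulting maximum frequency in MHz."""
    period_ps = fo4_delay_ps * fo4_per_stage
    return 1e12 / period_ps / 1e6

fo4 = 45.0  # ps; rough FO4 inverter delay assumed for a 130nm process
print(f"CPU-style pipeline (~15 FO4/stage): {max_freq(fo4, 15):.0f} MHz")
print(f"GPU-style pipeline (~60 FO4/stage): {max_freq(fo4, 60):.0f} MHz")
```

With these guesses the ratio comes out to exactly 4x, in the ballpark of the 3-4X gap asked about. Deeper pipelining buys MHz at the cost of more registers, more branch/stall penalty, and a harder design effort, which is part of why GPUs have historically not chased clocks the way CPUs do.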
 
LeStoffer said:
karlotta said:
I predict .09 will come in better than .13 for GPUs. Look to AMD's ability to get .09 out in mass as your gauge.

Huh? They say that yields are good and that they are on track for shipment later this year, but where can we gauge AMD's ability to get .09 out in mass now? :?:
Their Black Diamond process seems to have the most success at .13, and where AMD goes, so goes TSMC.
 
Cut and paste of a previous post of mine....
nelg said:
Here is a story about IBM's woes and the difficulties with the move to .09 micron.
"Working with UMC, we have leveraged the benefits of triple-oxide technology on 90-nm to break the industry trend of increased power consumption when moving from 130-nm to 90-nm," said Erich Goetting, vice president and general manager of the Advanced Products Division at Xilinx.
Xilinx said last week it was dropping IBM Microelectronics as a foundry for producing 90-nm FPGAs and would rely only on UMC.

LINK
 
Dave B(TotalVR) said:
I think a large emerging problem is that at 90nm and below, the leakage current of the transistors begins to become comparable to the active switching current, which means mucho heat and power consumption.

Prescott anyone?

Isn't SOI supposed to take care of this problem? Granted, Intel didn't implement SOI, but AMD is supposed to be doing so in their "next gen" A64s, which are due to appear later this year.
 
Isn't SOI supposed to take care of this problem? Granted, Intel didn't implement SOI, but AMD is supposed to be doing so in their "next gen" A64s, which are due to appear later this year.

GPUs aren't running as high clocks as CPUs, but they would still benefit, or?
Good point, as AMD/IBM seem to have made the right decisions (especially AMD) by maturing the process instead of being first, like Intel.
 