Fujitsu at 90 nm tells us something about 65 nm CPUs

Olivier: do not mix up Sun's SPARC CPUs with Fujitsu's SPARC CPUs: one is still an in-order machine and the other is a complex super-scalar OOOe beast ;)

Nonamer, it will not be a problem when they shrink to 45 nm (guess why they are so anxious?) with a full SOI manufacturing process (I say full, as it seems that the current 65 nm manufacturing process should basically be using a mix of SOI and bulk CMOS [bulk CMOS would be used for the e-DRAM]; yes, there is a Toshiba PR that talked about this mixed process)... yes, capacitor-less e-DRAM cells.

New SOI wafer is first in world to allow DRAM to achieve same electrical characteristics as DRAM on bulk wafers

Tokyo--Toshiba Corporation today announced a breakthrough in embedding DRAM on silicon-on-insulator (SOI) wafers that ends the DRAM performance degradation typical of such integration. The new technology will be applied to high-performance system-on-chip (SoC) applications.

Performance enhancements of logic LSIs for future broadband applications require integration of DRAM cells and a high-performance processor on a single chip. Such a move will support high-speed, wide-bandwidth data transfers and improve overall system performance. However, embedding DRAM cells in an SOI wafer results in electrical characteristics inferior to those of DRAM on bulk wafers, as increased leakage current triggered by the SOI structure degrades data retention in the DRAM.

Toshiba's breakthrough hybrid wafer fuses the electrical characteristics of SOI and bulk wafers. The wafer's structure is achieved by removing part of the SOI layer and the buried oxide layer on the SOI wafer and replacing them with conventional silicon by selective epitaxial silicon growth (SEG) technology.

The electrical characteristics of DRAMs embedded in the new hybrid wafers match those of DRAMs produced on bulk wafers with 180-nanometer (nm) CMOS process technology. Toshiba will introduce the hybrid wafer with 65nm process technology, currently targeted for 2005.

http://www.toshiba.co.jp/about/press/2002_06/pr1201.htm

About the 45 nm generation and the capacitor-less e-DRAM cell:

Toshiba Develops the World's First Embedded DRAM Memory Cell Technology on Silicon-on-Insulator Wafer

13 June, 2003

TOKYO -- Toshiba Corporation today announced that Toshiba has developed and verified the operability of the world's first memory cell technology for embedded DRAM system LSIs on silicon-on-insulator (SOI) wafers. Toshiba aims to apply the new technology to mass production of system LSIs for broadband network applications in 2006.

The move to ubiquitous computing -- total connectivity at all times -- relies on high-performance equipment. This in turn requires advanced system LSIs integrating ultra-high performance transistors and embedded high-density memory. One promising measure to dramatically raise transistor processing speed is fabrication of system LSI on a new-generation silicon substrate, silicon-on-insulator (SOI). However, the conventional DRAM cell structure is designed for conventional bulk wafers and it is difficult to produce embedded DRAM on SOI wafer.

Toshiba has experimentally fabricated a 96kbit cell array and verified the practical operability of the advanced cell structure with sufficient characteristics required for embedded DRAM system LSIs on SOI.

Full details of the new technology were presented on June 11 and 12 at the VLSI Symposium in Kyoto, Japan.

What is SOI?

Unlike a conventional bulk wafer, the SOI wafer comprises three layers: a single-crystal layer of silicon; a base silicon substrate; and a thin insulator, 1/1,000 the thickness of a human hair, that electrically insulates the single-crystal layer from the substrate, inhibiting wasteful electrical leakage to the substrate. The result is lower power consumption and higher processing speeds.

Toshiba succeeded in forming an embedded DRAM system LSI on an SOI wafer by developing a new DRAM memory cell technology that makes use of the characteristics of the SOI wafer itself, eliminating the need for the capacitor in which a current DRAM cell stores data. The new memory cell technology, dubbed floating body cell (FBC), will be used for embedded DRAM system LSIs from the 45-nanometer generation on.

Principle of Operation and cell structure

A conventional DRAM cell consists of a capacitor, where electric charge is stored, and a transistor that functions as a switch. The newly developed FBC does not have a capacitor and memorizes data by storing electric charge in its transistor. Since the transistor works as both capacitor and electric switch, the cell area is half that of a conventional DRAM cell.

Manufacturing process

Compatibility between the manufacturing processes of DRAM cells and logic ICs is a crucial issue for the development of embedded DRAM cell technology for SOI-based system LSIs. Toshiba's new process achieves full compatibility without any degradation in the performance of the system LSI. In order to ensure compatibility, a poly-Si plug, a buffer layer of poly-silicon, is formed in the contact area of the memory cell.

Verified Operability

Toshiba's experimental 96Kbit cell array achieved successful operation in all bits, a 36-nanosecond access time, 30-nanosecond data switching time, and 500-millisecond data retention time (at 85 degrees C). The results demonstrate that the new FBC technology can be applied to system LSI integrating DRAM cells with megabit or greater memory capacity.
Note:

1 nanometer = one billionth of a meter

<Reference> Comparison of Cell Structures



http://www.toshiba.co.jp/about/press/2003_06/pr1301.htm



This should help them reduce the chip size even further than what simply going from 65 nm to 45 nm technology would suggest.

Somehow I doubt that Intel's 90 nm SRAM cell (which I bet is a 6T design: lower latency and faster access times) is denser than SCE and Toshiba's 65 nm e-DRAM cell.

Even the size of their SRAM cell was pretty small, record-small to tell the truth.

2) Embedded DRAM cell:
High-speed data processing requires a single-chip solution integrating a microprocessor and embedded large volume memory. Toshiba is the only semiconductor vendor able to offer commercial trench-capacitor DRAM technology for 90nm-generation DRAM-embedded System LSI. Toshiba and Sony have utilized 65nm process to technology to fabricate an embedded DRAM with a cell size of 0.11um2, the world's smallest, which will allow DRAM with a capacity of more than 256Mbit to be integrated on a single chip.

3) Embedded SRAM cell:
SRAM is sometimes used as cache memory in SoC systems. High-NA 193nm lithography with an alternating phase-shift mask, combined with a slimming process and a non-slimming trim mask process, will achieve the world's smallest embedded SRAM cell in the 65nm generation, with an area of only 0.6um2.

http://www.toshiba.co.jp/about/press/2002_12/pr0301.htm
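As a rough sanity check on that 0.11um2 figure (my own back-of-envelope arithmetic, not from the press release; this counts cell area only and ignores sense amps, decoders and redundancy):

```python
# How much raw die area a 256 Mbit e-DRAM array needs at a 0.11 um^2 cell.
CELL_AREA_UM2 = 0.11        # Toshiba/Sony 65 nm e-DRAM cell size (quoted above)
BITS = 256 * 2**20          # 256 Mbit
array_mm2 = BITS * CELL_AREA_UM2 / 1e6   # 1 mm^2 = 1e6 um^2
print(f"raw cell array for 256 Mbit: {array_mm2:.1f} mm^2")  # ~29.5 mm^2
```

So a 256 Mbit macro fits in roughly 30 mm^2 of raw cell array before overhead, which is why integrating that much DRAM on a single chip is plausible.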
 
ban25 said:
V3 said:
I want to know the size and wattage of that thing.

It's 388 sqmm.

Very interesting. Consider that we would put only 4 MB of SRAM, not 6 MB, for the LS memories of the APUs, plus the very tiny SRAM cell size SCE and Toshiba have developed for their 65 nm process; that we will see e-DRAM, yes, but very dense e-DRAM; and that we will not need OOOe and all the rename logic and the Re-order Buffer for the APUs (which take space). Given that we can expect a 280+ mm^2 CELL CPU realized with 65 nm technology, these numbers are not depressing considering this is a 90 nm processor.

A possible 280+ mm^2 CELL CPU would be about 28% smaller than this 388 mm^2 chip (and even when their manufacturing capability was lower, they still sold PlayStation 2 consoles with 279 mm^2 GS chips).

We know that a 28% total area reduction is comfortably within what a 90 nm to 65 nm shrink can deliver, but we do not know yet: they might even be able to shrink it further than that.
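For reference, the ideal geometry-only scaling behind that claim can be sketched as follows (an optimistic estimate: real layouts rarely achieve the full linear shrink):

```python
# Ideal area factor for a 90 nm -> 65 nm shrink, assuming everything
# scales linearly with the feature size.
scale = (65 / 90) ** 2                # ~0.52, i.e. up to ~48% area reduction
chip_90nm_mm2 = 388.0                 # the 90 nm die size quoted above
print(f"area factor: {scale:.2f}")
print(f"ideal 65 nm shrink of a 388 mm^2 die: {chip_90nm_mm2 * scale:.0f} mm^2")
```

A 28% reduction therefore sits well inside the ideal ~48% ceiling.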
 
jvd said:
No offense, but I have little faith that ATI can deliver either. The R500 (the supposed GPU for the XB2) is not even a .09u part; it looks to be a .11u part, or even .13u if Dave's theory on ATI process upgrading is correct. Even with major upgrades it's not going to be all that impressive given its very lowly origins. Hopefully the R500 thing is more of a rumor than a fact. I would guess that the PS3 will win the performance wars by default from simply having so much money spent on it, no matter how inefficient it is or how much it falls short.

Well, let's see: Hyper-Z, their SmoothVision, their AA, their pixel shaders, their extensive knowledge in building GPUs. Compare that to Sony, who made the GS. I would put my money on Sony being a bit faster but ATI having the image quality.

Not only that, but an R500 should be more than on par with what the Cell chip can actually put out. Not only that, but they can just put two R500s in the Xbox 2 if need be, even two on one die. They also have experience with on-die RAM, so that too can bring up the speed of the R500. There are lots of things they can do. Sony isn't the only one that can spout off big numbers.

&

Josiah said:
:rolleyes:

you remind me of the people who judge hardware based on how many "bits" it has

consider this:
R350 (Radeon 9800): .15u
NV35 (GeforceFX 5900): .13u
now which one is better? NV35? nope.

more examples: the Radeon 9600 has about 1/2 the transistor count of the 5900, yet in many circumstances it is faster. Geforce 4 has about 1/3 the transistor count, same deal.

I'm not making any claims that XB2 will be better than PS3 or anything like that. but to say R500 is "not to be all that impressive" is nonsense. it will be ridiculously amazing compared to any PC or console we currently have. will it be better than PS3? who knows, only a fool would compare two things that don't exist.

No matter how good a GPU is, it's not going to bridge the monstrous gap between .13u/.11u and .065u. Any neat feature will be crushed by overwhelming brute power on the PS3, even if it doesn't get close to 1 TFLOP. While the 9600/GF4 (they have about the same # of transistors, FYI) may come close to the 5900 in certain situations, you'll be hard pressed to say they are even in the same league performance-wise overall. Here, you're looking at a 4x-6x difference in transistors. Might as well compare the GF2 to the 5900. The only way you're going to get comparable performance out of the R5x0 is to do a dual core on a more advanced process, but if you're going to do that you might as well get a real DX10 part and look better in the process.
 
It's 388 sqmm.

Thanks. That's as big as those Itanium 2s. Got the wattage?

I think 4PE + 64MB eDRAM Cell will be around that size too, give or take (most probably larger).
 
No matter how good a GPU is, it's not going to bridge the monstrous gap between .13u/.11u and .065u. Any neat feature will be crushed by overwhelming brute power on the PS3, even if it doesn't get close to 1 TFLOP. While the 9600/GF4 (they have about the same # of transistors, FYI) may come close to the 5900 in certain situations, you'll be hard pressed to say they are even in the same league performance-wise overall. Here, you're looking at a 4x-6x difference in transistors. Might as well compare the GF2 to the 5900. The only way you're going to get comparable performance out of the R5x0 is to do a dual core on a more advanced process, but if you're going to do that you might as well get a real DX10 part and look better in the process.

This is getting off topic. But when comparing the 9600 Pro doing DX9 against the 5900 Ultra doing DX9 (full DX9, that is), the 9600 Pro is the same speed.

Pushed forward to .11 or .09, an R500 will go up against a Cell chip nicely.

If the Cell chip is doing all the work and rendering, then it will be completely software-based for effects, which take a very long time to master and take a lot of speed away from the system. The R300 right now does a lot of things in hardware that help it keep its speed up. The R500 should be a leap at least as big as the R200-to-R300 one, if not bigger, with many more features and more money thrown at its design. Also, ATI doesn't need to worry about fabbing the chips; that's MS's problem. So if MS wants .09, they will get it. If they want a chip that costs $200 or so from ATI, they will get that too. Sony isn't the only one that can invest billions into a system.
 
This is getting off topic. But when comparing the 9600 Pro doing DX9 against the 5900 Ultra doing DX9 (full DX9, that is), the 9600 Pro is the same speed.

Really ?

NVIDIA should have stuck with 96-bit internal precision like ATI instead of going for 128-bit precision. NVIDIA was always good with features and those things, but they were never good at getting those features to run fast.

If the Cell chip is doing all the work and rendering, then it will be completely software-based for effects, which take a very long time to master and take a lot of speed away from the system. The R300 right now does a lot of things in hardware that help it keep its speed up.

No, it's not like that; on that level it's a bit more similar, actually. The R300 was made to run vertex and pixel shaders fast, and Cell is made to run software cells fast too.
 
V3 said:
It's 388 sqmm.

Thanks. That's as big as those Itanium 2s. Got the wattage?

I think 4PE + 64MB eDRAM Cell will be around that size too, give or take (most probably larger).

Except that we would probably see 32 MB or less of e-DRAM, and we are talking about 65 nm technology and not 45 nm, but perhaps you were already counting that in your calculation.

If the CELL chip were 388 mm^2 using 65 nm technology we would be looking at almost 1.4 Billion Transistors which is a bit too much if you ask me.

I think that with below 1 Billion Transistors they can implement what they are planning for.

I think 280+ mm^2 will be a good approximation of the CELL chip's size (around 600-800+ Million Transistors).
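The densities implied by those two estimates are easy to check (plain arithmetic on the numbers in this post; it also assumes a uniform transistor density across the die, which a mixed logic + e-DRAM chip won't really have):

```python
# Transistor density implied by "388 mm^2 ~ 1.4 billion transistors at 65 nm",
# and the die sizes the 600-800 M estimates would need at that same density.
density = 1.4e9 / 388                       # transistors per mm^2
print(f"implied density: {density / 1e6:.1f} M transistors/mm^2")
for mt in (600, 800):
    print(f"{mt} M transistors -> {mt * 1e6 / density:.0f} mm^2")
```

At that density, 600-800 M transistors would only need roughly 166-222 mm^2, so a 280+ mm^2 estimate leaves headroom for sparser logic and routing.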
 
Except that we would probably see 32 MB or less of e-DRAM, and we are talking about 65 nm technology and not 45 nm, but perhaps you were already counting that in your calculation.

Yep, on 65nm it'll be that big. My guess is that today's processes are actually good enough for the big chips that are starting to appear these days.

If the CELL chip were 388 mm^2 using 65 nm technology we would be looking at almost 1.4 Billion Transistors which is a bit too much if you ask me.

Well, if you're going for maximum density. But it can be bigger and less dense; that way heat will be easier to remove.

I think that with below 1 Billion Transistors they can implement what they are planning for.

Yes, around there, give or take.

I think 280+ mm^2 will be a good approximation of the CELL chip's size (around 600-800+ Million Transistors).

Any guess at the size of the PSP CPU? It'll probably be large too.
 
V3 said:
This is getting off topic. But when comparing the 9600 Pro doing DX9 against the 5900 Ultra doing DX9 (full DX9, that is), the 9600 Pro is the same speed.

Really ?

NVIDIA should have stuck with 96-bit internal precision like ATI instead of going for 128-bit precision. NVIDIA was always good with features and those things, but they were never good at getting those features to run fast.

If the Cell chip is doing all the work and rendering, then it will be completely software-based for effects, which take a very long time to master and take a lot of speed away from the system. The R300 right now does a lot of things in hardware that help it keep its speed up.

No, it's not like that; on that level it's a bit more similar, actually. The R300 was made to run vertex and pixel shaders fast, and Cell is made to run software cells fast too.

Yes, but there are tons of things that are hardcoded into the R300. There is also tons of bandwidth-saving tech, FSAA tech, aniso tech, bump mapping, displacement mapping and other stuff that is all hardcoded into the chip. It makes designing games faster, and many times the performance of those effects is better in hardware than in software.

If the Cell chip in the PS3 can sustain 1 TFLOP, it might be able to keep up with the R500, IMHO.
 
nonamer said:
No matter how good a GPU is, it's not going to bridge the monstrous gap between .13u/.11u and .065u. Any neat feature will be crushed by overwhelming brute power on the PS3, even if it doesn't get close to 1 TFLOP. While the 9600/GF4 (they have about the same # of transistors, FYI) may come close to the 5900 in certain situations, you'll be hard pressed to say they are even in the same league performance-wise overall. Here, you're looking at a 4x-6x difference in transistors. Might as well compare the GF2 to the 5900. The only way you're going to get comparable performance out of the R5x0 is to do a dual core on a more advanced process, but if you're going to do that you might as well get a real DX10 part and look better in the process.

You're making an assumption based on assumptions and stating it as fact. Again, I'm not prepared to directly compare R500 and PS3, as it's foolish; both are unfinished, unannounced, unfinalized products that we know little about.

This is what we do know: According to this and this and this R500 will be 90nm and have 300 million transistors. We know Cell will be 65nm, and the amount of ram that's on the chip will take up over 300 million transistors.

Speculation on this board (quoting Panajev here) is that Cell will be 500-800 million transistors. Which means it would be 200-500 million transistors of actual logic (again this is based on speculation). If we are to compare these processors based purely on the number of transistors they contain (which as I've said is ridiculously stupid) it's not clear which one is better. Depending on what speculation you believe R500 might have 1.5 times as much logic, or Cell might have 1.6 times as much logic, or anywhere in between (but Cell certainly won't have 4x-6x more, not even 2x more, maybe not even as much!).

My opinion is that the technology will be roughly on par. Both R500 and PS3 will be insanely powerful compared to what we have today. However I don't think there will be any PC games to really take advantage of R500 at first, except maybe some tech demos. But then again, if PS3 launches in early 2006 R500 will have been here for a year already, and we'll be talking about R600...
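The logic-transistor ratios quoted above work out as follows (pure arithmetic on the speculated figures; none of these counts are confirmed):

```python
# Comparing speculated logic-transistor budgets: R500 vs Cell.
R500_TOTAL = 300e6                   # rumored R500 transistor count
CELL_TOTAL = (500e6, 800e6)          # speculated Cell total range
CELL_RAM = 300e6                     # on-chip memory's share of Cell's budget
cell_logic = [t - CELL_RAM for t in CELL_TOTAL]      # 200 M .. 500 M of logic
print(f"R500 could have {R500_TOTAL / cell_logic[0]:.1f}x as much logic")
print(f"or Cell could have {cell_logic[1] / R500_TOTAL:.2f}x as much logic")
```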
 
R420 looks to be a .13-to-.11 transitional part, so though R500 looks to start at .11, more than likely it is both designed for .09 and at least one of its iterations will be fabbed at .09. (Depends on how long they stretch the architecture.) For Xbox2 itself, of course, the process depends on how much money Microsoft is willing to devote to the GPU and which fab they will work with.

As for the overall outlook, I'm torn. I figure we're going to hit different walls in different places at different times, as more emphasis is placed on programming skill and developers. CELL will be doing much in software, and modern GPUs are moving toward more and more programmability, so number-crunching in all areas won't necessarily reign supreme if the programmers can't keep pace with it. Almost DEFINITELY we will see more advancement within each of the platforms than we have before as they mature... How fast will art assets grow? How constraining will title budgets be on the rate of application and growth? It could be a whole different ballgame.
 
Josiah said:
You're making an assumption based on assumptions and stating it as fact. Again, I'm not prepared to directly compare R500 and PS3, as it's foolish; both are unfinished, unannounced, unfinalized products that we know little about.

This is what we do know: According to this and this and this R500 will be 90nm and have 300 million transistors. We know Cell will be 65nm, and the amount of ram that's on the chip will take up over 300 million transistors.

Speculation on this board (quoting Panajev here) is that Cell will be 500-800 million transistors. Which means it would be 200-500 million transistors of actual logic (again this is based on speculation). If we are to compare these processors based purely on the number of transistors they contain (which as I've said is ridiculously stupid) it's not clear which one is better. Depending on what speculation you believe R500 might have 1.5 times as much logic, or Cell might have 1.6 times as much logic, or anywhere in between (but Cell certainly won't have 4x-6x more, not even 2x more, maybe not even as much!).

My opinion is that the technology will be roughly on par. Both R500 and PS3 will be insanely powerful compared to what we have today. However I don't think there will be any PC games to really take advantage of R500 at first, except maybe some tech demos. But then again, if PS3 launches in early 2006 R500 will have been here for a year already, and we'll be talking about R600...


I mostly agree, Jos, especially the last part.
One of the aspects people seem to forget is that PS3 games will be played by the VAST majority of people at 480i (I'd say 90% even in 2005, maybe fewer in 2006, and of course as time goes by more people will enjoy hi-res displays... but not that many; in Europe there is next to NO ONE even using 480p, given that you can only get it from PS2s, and even then very few people can afford plasma TVs, and those who do might not have a PS2). And it's VERY VERY much easier to make a 640x480 game look photorealistic (or as close to DVD playback as possible) than to go all the way up to 1600x1200... The resources saved from running games at a lower resolution will put the PS3 in a somewhat advantageous position compared to PC GPUs. I was thinking about AA solutions on PS3, which could make this resource-saving gap smaller, but we'll have to wait and see what happens.
 
London-boy, an 8x1 design running at 1 GHz is more than enough to do 8x FSAA and 16-tap aniso at 640x480 running at a constant 60fps. I highly doubt the R500 will just be an R300 clocked higher; that is what the R420 is going to be (with more vertex shaders and pixel shaders). I suspect the R500 will be a 12x2 design running somewhere around 800 MHz with a small amount of on-die 1T-SRAM, maybe 16 megs or so. Add that to the compression and current forms of aniso and FSAA by ATI, and I think we will have a part easily on par with a PS3. Not only that, but I suspect MS will go with a dual-core version of the R500.

But hey, I'm also hoping that the R420 is two R350 cores on a single die at .11 micron next year.
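The 480p claim is easy to check with naive fill-rate arithmetic (this ignores overdraw, texture bandwidth, and how FSAA and aniso actually cost cycles on real hardware, so treat the headroom as an upper bound):

```python
# Peak pixel fill rate of a hypothetical 8-pipe, 1 GHz part versus the
# sample throughput 640x480 @ 60 fps with 8x FSAA would demand.
pipes, clock_hz = 8, 1e9
fill_rate = pipes * clock_hz                  # 8 Gpixels/s peak
samples_needed = 640 * 480 * 60 * 8           # pixels/frame * fps * AA samples
print(f"peak fill: {fill_rate / 1e9:.0f} Gpixels/s")
print(f"needed:    {samples_needed / 1e6:.1f} Msamples/s")
print(f"headroom:  {fill_rate / samples_needed:.0f}x")
```

Even with 8x the samples per pixel, 480p60 uses well under 2% of the peak fill rate, which is why the real limits would be bandwidth and shading, not raw fill.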
 
jvd:

> An 8x1 design running at 1 GHz is more than enough to do 8x FSAA and 16-
> tap aniso at 640x480

480p won't cut it next gen and I certainly don't see M$ holding back in that area with Xbox already supporting HDTV.
 
cybamerc said:
jvd:

> An 8x1 design running at 1 GHz is more than enough to do 8x FSAA and 16-
> tap aniso at 640x480

480p won't cut it next gen and I certainly don't see M$ holding back in that area with Xbox already supporting HDTV.


Of course, next gen, 480p-only will suck. But just think about how many people will even KNOW what 480p (and all the HDTV resolutions) are...
98% of the people (I'm talking about Europe) will be playing at 480i with insane amounts of AA, which will look decent enough for Mr. Joe "let's-kick-some-ass-in-DOA5" Average... while you and I will be geeking it away at 720p...
That's the way it's gonna be, and no one can change that.
 
Still no real reason to go beyond HDTV resolutions, though, which PC GPUs and games continually push past.

But yes, though the vast majority of the public will likely still be non-HDTV and even non-progressive in 2005, I can't see any of the next consoles or developers ignoring it. Its share will be higher and growing, the devices will be cheaper and more varied, and they'll be riding out any number of years in which it will only increase.
 
480i will be enough in the later half of this decade? :oops:
HDTV resolution support is a must for all next gen consoles. Personally, I consider it a "must have" for current gen consoles. However, the 55 inch screen in the living room might be skewing my perspective. :LOL:
The idea that 480i support will be enough for new consoles introduced circa 2005 or later is ridiculous. This is the 21st century... it's time to move beyond 1960s-era display technology.

If a next gen console doesn't support HDTV by default, then I question how next gen it really is.
 
Of course they will SUPPORT those resolutions, although all i'm saying is: think about how many people will actually even KNOW what "HDTV resolutions" are....
 
cybamerc said:
jvd:

> An 8x1 design running at 1 GHz is more than enough to do 8x FSAA and 16-
> tap aniso at 640x480

480p won't cut it next gen and I certainly don't see M$ holding back in that area with Xbox already supporting HDTV.
Why not? Sony isn't even doing that in some games right now. But honestly, do you really think that more than 10% of the installed TV base is HDTV? I highly doubt it. They are still selling normal TVs for at least $200 less than an HDTV that's also smaller. I think this gen most games will come with the option of less aniso and higher res, or more aniso and lower res, but most will play at 640x480. The gen following this one will be the first true HDTV gen.
 