SCEI & Toshiba unveil 65nm process with eDRAM

Vince

Veteran
65nm CMOS Technology (CMOS5) with High Density Embedded Memories for Broadband Microprocessor Applications
http://www.sony.co.jp/SonyInfo/News/Press/200212/02-1203/

TOKYO--(BUSINESS WIRE)--Dec. 3, 2002--Toshiba Corporation and Sony Corporation today announced the world's first 65-nanometer (nm) CMOS process technology for embedded DRAM system LSIs -- a major breakthrough in process technology for highly advanced, compact, single-chip system LSIs that will be only one-fourth the size of current devices while offering higher levels of performance and functionality...

The move to ubiquitous computing -- total connectivity at all times -- relies on high-performance equipment. This in turn requires advanced SoC (system on chip) LSIs integrating ultra-high performance transistors and embedded high-density DRAM. In such devices, size and performance levels are directly related to process technology: finer lithography results in smaller devices that offer higher levels of performance. The new process technology announced by Toshiba and Sony takes integration to a new level, allowing bandwidths to be scaled up and system performance to be maximized.

The new SoC technologies for the 65nm process generation include: 1) a high-performance transistor with the world's fastest switching speed; 2) the world's smallest cell for embedded DRAM; and 3) the world's smallest cell for embedded SRAM.

The new process technology is the result of the joint development by Toshiba Corporation and Sony Corporation of 90nm and 65nm CMOS process technology, initiated in May 2001. Full details will be presented at the December 9-11 International Electron Devices Meeting (IEDM) in San Francisco.

Outline of new technology

1) High-performance transistor with 30nm gate length:

Transistors in this technology have high nitrogen concentration plasma nitrided oxide-gate dielectrics to suppress gate leakage current. This optimization reduces leakage current approximately 50-fold compared with conventional SiO2 film and allows formation of an oxide with an effective thickness of only 1nm. Furthermore, Ni silicide is applied in the gate electrodes and source/drain regions to attain low resistance and to reduce junction leakage current. Shallow extension formation optimizing ultra-low energy ion implantation, spike RTA and an offset spacer process successfully suppresses the short channel effect of the MOSFET and achieves superior roll-off characteristics. Excellent switching speeds of 0.72psec for the NMOSFET and 1.41psec for the PMOSFET at 0.85V (Ioff=100nA/um) were obtained. Currently available high-NA 193nm lithography with an alternating phase shift mask and a slimming process provides 30nm gate lengths.

2) Embedded DRAM cell:

High-speed data processing requires a single-chip solution integrating a microprocessor and embedded large volume memory. Toshiba is the only semiconductor vendor able to offer commercial trench-capacitor DRAM technology for 90nm-generation DRAM-embedded system LSI. Toshiba and Sony have utilized 65nm process technology to fabricate an embedded DRAM with a cell size of 0.11um2, the world's smallest, which will allow DRAM with a capacity of more than 256Mbit to be integrated on a single chip.

3) Embedded SRAM cell:

SRAM is sometimes used as cache memory in SoC systems. High-NA 193nm lithography with an alternating phase shift mask and the slimming process, combined with the non-slimming trim mask process, achieves the world's smallest embedded SRAM cell in the 65nm generation, with an area of only 0.6um2.

4) 180nm multi-layer wiring:

In order to reduce the chip size, it is important to reduce the pitch of the first (lowest-layer) metal. The new technology has a 180nm pitch, 75% of that of the 90nm generation. To reduce wiring propagation delay and power dissipation, a low-k dielectric material is adopted. The target effective dielectric constant of the interlayer dielectric is around 2.7.
http://www.nyse.com
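As a quick sanity check on the release's "one-fourth the size of current devices" claim, here is a minimal sketch assuming ideal area scaling (linear dimensions scale with feature size, so area scales with its square) and assuming "current devices" means the 130nm generation:

```python
# Sanity check: ideal area scaling between process generations.
# A device's linear dimensions scale with the feature size, so area
# scales with its square. Assumption: "current devices" = 130nm node.
def area_ratio(new_nm, old_nm):
    """Fraction of the old area occupied after an ideal shrink."""
    return (new_nm / old_nm) ** 2

print(area_ratio(65, 130))  # 0.25 -> "one-fourth the size"
print(area_ratio(65, 90))   # ~0.52 vs. the immediately preceding node
```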

Wow, I've heard that before.... :rolleyes: This is going to be so much fun watching people finally STFU.

PS. Hey Ben, ever hear that comment in bold about performance being directly proportional to lithography before? I could have sworn I did, but then someone said it was wrong... I dunno...
 
So maybe there is some ground being made in the "every appliance will be a computer" project that Sony is working on.
 
Vince, do you work for Sony?

Honest question.

This isn't some kind of breakthrough, it's an expected development.

Sony will most likely be on 0.09micron chip/0.06 micron DRAM by 2005, and so will Intel. :rolleyes:

The way you're trumpeting this around like a major breakthrough or that Sony's going to own everyone is laughable at best...
 
Vince said:
Wow, I've heard that before.... :rolleyes: This is going to be so much fun watching people finally STFU.

PS. Hey Ben, ever hear that comment in bold about performance being directly proportional to lithography before? I could have sworn I did, but then someone said it was wrong... I dunno...
I'm not sure what you and Ben argued about, but it seems to me the article is worded poorly. The performance increase does not come directly from the lithography process - it comes from the higher clocks and the greater transistor density the smaller process allows.
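That split between clocks and density can be put in rough numbers. A minimal sketch under classic first-order (Dennard-style) scaling assumptions - density improves as 1/s² and achievable clock as roughly 1/s - which real processes only approximate:

```python
# Illustrative first-order (Dennard-style) scaling: with feature size
# scaled by s, transistor density improves by 1/s^2 and achievable
# clock frequency by roughly 1/s. Real processes deviate from this.
def scaling_gains(old_nm, new_nm):
    s = new_nm / old_nm
    density_gain = 1 / s**2   # more transistors in the same area
    clock_gain = 1 / s        # shorter gates switch faster
    return density_gain, clock_gain

density, clock = scaling_gains(90, 65)
print(f"density x{density:.2f}, clock x{clock:.2f}")  # ~x1.92, ~x1.38
```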

Toshiba only recently began offering their .13um process to third parties. I wonder if they plan on ramping up R&D spending to stay competitive with the other titans (IBM, TSMC, UMC, Chartered).


As a side note, Intel will be at .09um when the Prescott core bows late next year, and should be at .065um in 2005. Truth be told, if anybody beats Intel to market with either of the 2 processes I'll be incredibly surprised.
 
As a side note, Intel will be at .09um when the Prescott core bows late next year, and should be at .065um in 2005. Truth be told, if anybody beats Intel to market with either of the 2 processes I'll be incredibly surprised.
I agree, Intel has been incredibly aggressive in process technology the past couple years.

0.09 micron will be in mass production by the 2nd half of 2003, and Toshiba probably won't do that until 2004 sometime...
 
Glonk said:
This isn't some kind of breakthrough, it's an expected development.

I realise, but after arguing over and over the exact same points listed here, only 6 months ago - I'm entitled to be a bitch. No, I don't work for Sony, but these things are pretty cut and dry, yet people argue them to death and it pisses me off. That is all.

The performance increase does not directly come from the lithography process - it comes from the higher clocks and greater gate concentration the smaller process allows.

True, but as stated, the increase in performance will come from the increased concurrency/parallelization (is that a word?) of the architecture that's possible with a smaller lithography process. Look historically at specialized hardware (3D works well): even counting the NV30, the clockspeed increase has been perhaps 5-7X since the Riva128, yet the performance is several orders of magnitude greater.
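The point above is just arithmetic: raw throughput scales with clock times parallel units, so performance can grow far faster than clock alone. A sketch with made-up figures (these are illustrative, not real chip specs):

```python
# Hypothetical illustration: throughput = clock * units * ops-per-clock,
# so a modest clock bump combined with wider parallelism compounds.
# All numbers below are invented for illustration only.
def throughput(clock_mhz, parallel_units, ops_per_unit_per_clock):
    return clock_mhz * parallel_units * ops_per_unit_per_clock

old = throughput(100, 1, 1)    # one-pipeline part at 100 MHz
new = throughput(500, 8, 2)    # 5x the clock, 16x the work per clock
print(new / old)  # 80.0 -> far beyond the 5x clock increase alone
```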

Sony will most likely be on 0.09micron chip/0.06 micron DRAM by 2005, and so will Intel.

They will be farther along, IMHO. The target specs for MS and SCE both seem to necessitate sub-90nm.

I agree, Intel has been incredibly aggressive in process technology the past couple years

Nobody will argue this with you.


The NYSE link is for the mainbody quote, taken from a PR release on NYSE.com
 
Vince said:
PS. Hey Ben, ever hear that comment in bold about preformance being directly proportional to lithograph before?

Didn't Gordon Moore say that? I'm no engineer, but I was under the distinct impression that that is the basis of Moore's Law. Increases in transistor density depend in large part upon advancements in lithography. No? Or is it the "directly proportional" part that you're questioning?
 
Are you still not understanding what I have been trying to tell you for quite some time now? Increasing transistor density in a general purpose processor in no way whatsoever indicates that it will be able to compete with dedicated hardware at the same tasks. A P4 still can't compete with a TNT1 in rendering real time 3D graphics. Transistor density allows you to do more things, a point I have never argued. The issue has always been dedicated hardware versus software.

If Sony relies on software rendering they will not be able to compete with dedicated hardware. For that matter, I think it's still iffy for them to push out 6.6TFLOPS based on .065u in a general purpose CPU.

The best I can figure you either work for Sony or have an investment in them. You are honestly quoting a press release as 'evidence' in a discussion here?
 
BenSkywalker said:
If Sony relies on software rendering they will not be able to compete with dedicated hardware.

It's not like they are trying to do "software rendering" on some x86 CPU (had they been, you would certainly be indisputably correct). If it is an array of rapid-execution vector units (albeit governed by software), that pretty much blurs the line with "dedicated hardware". It just happens to not be what nVidia is up to, IMO.
 
The issue has always been dedicated hardware versus software.

I wouldn't call PS2 VUs and GS a software route. They are pretty dedicated hardware.

A P4 still can't compete with a TNT1 in rendering real time 3D graphics

Hmm, don't know about that. Those P4s are getting pretty fast.
 
Are you still not understanding what I have been trying to tell you for quite some time now? Increasing transistor density in a general purpose processor in no way whatsoever indicates that it will be able to compete with dedicated hardware at the same tasks.

But see, this is what pisses me off: your argument is fundamentally flawed, yet you keep arguing it.

What's the difference between a VU and a VS? Explain to me how a VU is 'general computing'. What's the fundamental difference between a VU and the NV3x's new TCL front-end?

Explain to me why a 'general processor' like the EE or SH-4 can outpace the 'hardwired' solutions you speak of that nVidia produced at the same time. Why does the EE utterly destroy the NV1x's TCL front-end?

ANSWER: Because it's throwing more transistors at the problem. You [programmable] can maintain parity with a hardwired solution if you devote more resources [read: logic] to the problem. This is simple. This is my point. Yet you fight it, over and over.

V3 said:
I wouldn't call PS2 VUs and GS a software route. They are pretty dedicated hardware.

Thank you, Lord!! Look at OGL and DX10+. The merger of the PS and VS is coming; architectures like the P10/9 are going to be the future.

Didn't Gordon Moore say that? I'm no engineer, but I was under the distinct impression that that is the basis of Moore's Law.

Yep. What I'm advocating is that through advanced lithography, you can increase the programmability of an architecture by a large amount while still maintaining performance parity with a comparable hardwired design. But in order to do this, you must 'beat' Moore's Law - and thus, to equal a hardwired design while maintaining flexibility, you must increase the usable transistor count. This can be done through: (a) more advanced lithography, (b) multichip, (c) GRID/cluster/pervasive/or otherwise computing, (d) architectural advance.
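The "beat Moore's Law" argument above can be sketched as a simple inequality: a programmable design carrying some logic overhead over a hardwired one reaches parity when the combined transistor-budget multipliers (lithography shrink, extra chips, architectural gain) cover that overhead. Purely illustrative numbers:

```python
# Sketch of the parity condition: a programmable design needing
# `overhead` times the logic of a hardwired one breaks even when the
# product of its extra transistor-budget sources covers that overhead.
# All figures are hypothetical.
def reaches_parity(overhead, litho_gain, chips, arch_gain):
    return litho_gain * chips * arch_gain >= overhead

# e.g. a 4x logic overhead, covered by a full-node shrink (~2x density),
# two chips, and a modest architectural improvement:
print(reaches_parity(overhead=4.0, litho_gain=2.0, chips=2, arch_gain=1.1))
```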

The best I can figure you either work for Sony or have an investment in them. You are honestly quoting a press release as 'evidence' in a discussion here?

Heh, neither... I just know that I'm not wrong. Honestly... no. I was posting it anyway and it had the part that supported my position. That is all. Hey, I had nothing better to do on here other than act out my ambitions to be a contemporary Patrick Henry and start an argument. And you're always ready to argue back ;)

If Sony relies on software rendering they will not be able to compete with dedicated hardware. For that matter, I think it's still iffy for them to push out 6.6TFLOPS based on .065u in a general purpose CPU

There is no way they will yield a true 6.6TFLOPS in one console... period. I'll be impressed if they can output a true TFLOP and sustain it, but I'm not so sure.

I too wonder if SCE will use a full software rendering approach with PS3, with the idea that the hardware will be more of a 'VU'-like, scientific-computing approach [not like the traditional CPU].

The upside, if they can pull off a true TFLOP of sustained compute power that's fully programmable, would be huge. Some very interesting things could be done by a developer who takes the initiative and designs a custom pipeline for their work... at least IMHO. SCE would definitely need some nVidia-caliber dev rel.

I doubt it would be near an nVidia-powered solution, but does it have to be? Interesting questions emerge, such as: with 1TFLOP [which is well over the GSCube, IIRC, which rendered FF:TSW and Antz at 60fps], isn't that sufficient? How much of a visual difference would be seen?

The biggest question I have is, if a developer had full control and could tailor the entire 3D pipeline for his title, how much is gained in efficiency? I mean, they could literally do anything... hell, banish triangles. All the petty arguments about the nV2A's PS and the TEV's features would disappear.

But I bet there will be some sort of rasterizer/GSx.
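The TFLOP figures being thrown around can be sanity-checked with peak = units × FLOPs-per-unit-per-cycle × clock. The unit counts and clocks below are hypothetical, just to show the scale a sustained TFLOP (let alone 6.6) would demand:

```python
# Back-of-envelope peak-FLOPS check for the numbers in the thread.
# Peak = units * FLOPs-per-unit-per-cycle * clock (GHz) / 1000 -> TFLOPS.
# All configurations below are hypothetical illustrations.
def peak_tflops(units, flops_per_unit_per_cycle, clock_ghz):
    return units * flops_per_unit_per_cycle * clock_ghz / 1000.0

# e.g. 32 vector units each doing 8 FLOPs/cycle at 4 GHz:
print(peak_tflops(32, 8, 4.0))   # ~1.02 TFLOPS peak
# 6.6 TFLOPS with the same per-unit width and clock would need:
print(6.6 * 1000 / (8 * 4.0))    # ~206 such units
```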
 
MfA said:
As long as it is not economically viable it is irrelevant.

I've got a prototype quantum computer in my backyard that works, the only problem is it doesn't work at any temperature above absolute zero. :LOL:
 
If Sony goes for the all-software approach in PS3, they will lose. While it's true that the pixel/vertex (and soon-to-be unified) shaders are just small SIMD processors, a lot of operations are sunk into other tasks, such as early Z rejection, Z comparisons for MSAA, texture filtering, etc. These are all operations that can be solved with great efficiency in hardware.

Just imagine the amount of work a general purpose processor would have to do to take 6 Z-samples per pixel (like the R300) for MSAA, then shade each fragment based on numerous anisotropically filtered textures, with say 2-8 bilinear samples per texture.

Can you say *ouch* ?

Cheers
Gubbi
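Gubbi's scenario can be roughed out by counting just the memory touches per pixel (each bilinear texture sample needs 4 texel reads, before any of the blending math). A sketch with illustrative figures, not a real GPU's costs:

```python
# Rough per-pixel workload for the scenario above, counting only Z tests
# and texel reads (each bilinear sample = 4 texel reads; the filtering
# and shading arithmetic comes on top). Figures are illustrative.
def samples_per_pixel(z_samples, textures, bilinear_per_texture):
    z_ops = z_samples                       # one Z test per MSAA sample
    texel_reads = textures * bilinear_per_texture * 4
    return z_ops + texel_reads

# R300-style 6x MSAA, 4 textures, 8 bilinear samples each (high aniso):
print(samples_per_pixel(6, 4, 8))  # 134 reads/tests per pixel, every pixel
```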
 
PS3 will rule with CELL!
Xbox2 or GC2 will not stand up to PS3 if they are released within the same time frame. :oops:
 