Official announcement: Toshiba starts PS3 memory production.

Whoa... where did you get this from? If you're referring to what I believe you are, the entire industry is moving to the 90-65-45 steppings. The Toshiba eDRAM is better than the industry mean for the 65nm node - so I really don't see how this is relevant when you consider that it's highly probable that any IC in the PS3 was designed with the 65nm node in mind; it's intrinsically tied to the capabilities of that process.

I was just referring to Mr. Davari's comment, which was made at the beginning of 2001:

Though the version of Cell aimed at the 100-nm process technology would not include embedded DRAM on the main processor chip, other chips, including a graphics processor, would include eDRAM, Davari said. Then, as process technology moves to 70-nm and 50-nm design rules, embedded DRAM likely would become practical for the main processing elements, he said.

From that comment, he was sure that at the 100 nm process the Cell chip couldn't include embedded DRAM on the main processor chip, only in the graphics processor. (It's interesting that he was already talking about a graphics processor.)

For the practicality of eDRAM in Cell, he was relying on the projections for the 70nm and 50nm design rules.

Among people with authority, he was the only one who mentioned eDRAM for the main memory.

So I am pretty sure that, from the beginning of their design (when they started in 2000, on 100nm), they haven't relied on eDRAM for the main memory. If it's practical, I'm sure they'll use it; but if it's not, it's not the end of the world for them, despite what DMGA often suggested.

This is in contrast to how crucial eDRAM is to the graphics processor he briefly mentioned. It seems the graphics processor was always going to have eDRAM. And with Sony aiming the PS3 at high definition, I wouldn't be surprised if each of those pixel engines in each Visualiser comes with 32 MB of eDRAM.
 
V3 said:
I was just referring to Mr. Davari's comment

I know. His comment is totally aligned with what SCE has said on several occasions, GDC included. Namely, that the 90nm process can support their logic requirements, but it would take 65nm for them to integrate eDRAM effectively.

What I've questioned, and quite correctly, is your stance, which you again stated here:

V3 said:
For the practicality of eDRAM in Cell, he was relying on the projections for the 70nm and 50nm design rules.

The projection is correct and analogous to what SCE has more recently stated with the corrected, industry-standard 90-65-45 steppings that have evolved since 2000/2001, which is evident from the later 2002 (IIRC) agreement between the STI partners to work on process technology.

Basically, everything he said is applicable in practical terms to what we've been discussing regarding Cell the architecture and how it'll be utilized in the BE. Which is why I questioned you as this has been the status quo condition, with the exception of you stating that 50nm and 65nm would lead to practical differences - which is more likely than not incorrect.

PS. Also, I don't think he ever explicitly stated the embedded DRAM was to be the "main" memory [pool] as you said.

PPS. I find the IBM VP's (whose division led Cell's development) comments particularly humorous in light of my recent 'conversation' with the Baumann, who kept stating that Cell's R&D isn't relative to that invested in PS3. :LOL:
 
Yeah, IBM announced Blue Gene years before the PS3 was forming. Most of the money being spent was intended for bioinformatics research.
 
DemoCoder said:
Yeah, IBM announced Blue Gene years before the PS3 was forming. Most of the money being spent was intended for bioinformatics research.

And I'd suppose that if Cell had something tangible to do with BlueGene outside of Deadmeat's mumblings, it would be a valid comment. It's quite clear that the Cell architecture* has been designed for its role in PS3 from the beginning - something Davari's comments only reinforce. Anyways, this is the wrong thread for the discussion.

* Which is distinct from the generalized Cellular Computing Ideal. What you're stating is analogous to me stating that Church-Turing announced the architecture decades before the Pentium4 was forming, as if it in some tangible way described the exact architectural manifestation or related tasks of the P4. Or, better yet, that all the money pumped into early computing by the Defense Department in the mid-20th century is related to the R&D money spent on the PPC970 implementation...
 
Please, Vince, you're not so dumb as to make such a pathetic analogy.

We're talking about one company, IBM, a company I used to work for at T.J. Watson in NY in the massively parallel/distributed computing division under Steve White. When I was there, they were working on Deep Blue and the IBM SP2. SP2 was the core project; Deep Blue was the demo. I attended meetings where they were talking about multi-core (and aspect-oriented programming, years before it hit the mainstream).

Here's how it works: the Director sets a goal -- build a 1000 teraflop computer -- and he allocates a budget to several competing research teams. These teams then go off and come up with several different ways of reaching 1 petaflop. Cell computing was one of them. There were also teams working on DNA, RSFQ, and quantum computing. Cell (under another name) was the most near-term realistic, and got the most funding. But you need a showcase for the technology, and that's where Blue Gene came along. Blue Gene is an application project, that is, a project funded to DO SOMETHING with 1 petaflop. It's run by a separate IBM division -- computational biology. Actually procuring and manufacturing the full Blue Gene is eating up most of the money.


So yes, "CELL" as you know it, benefitted tremendously from money being spent at IBM on pervasive/mas-par/distributed computing. And money being spent today is not solely for the PS3, but also for IBM's HPF projects.

Sorry, it's just not the same as money spent on Alan Turing == Pentium4. We're talking about IBM taking a design that was already researched and proven, and tweaking it in Austin for PS3 application. The same as they did with the PPC for GameCube. Most of the cell RESEARCH had nothing to do with the PS3 application.

IBM does basic research before applications are ready, and money spent on that research ends up in multiple applications. Even money being spent on CELL at Austin is not solely intended for PS3, since IBM is reusing that work for HPF projects at Livermore.

I can tell you that as far back as 1996 when I worked there, they were talking about cell architectures as the "next big thing" and had already started spending money on it. Unless you can tell me how the Austin cell differs "radically" from the cell design at Yorktown, I'm going to claim that the vast majority of the money spent on "CELL" is not solely for PS3.
By that I mean, well before the IBM/Sony partnership, IBM had fully designed a cell chip on paper, then later taped out a "cut down" version with 4 cells. Do you have any information to show that the Austin CELL version is a radical re-do of this work, or is it merely another cut-down or tweak of research already completed?
 
DemoCoder said:
So yes, "CELL" as you know it, benefitted tremendously from money being spent at IBM on pervasive/mas-par/distributed computing. And money being spent today is not solely for the PS3, but also for IBM's HPF projects.

Exactly, which is irrelevant to the money invested by the STI partners. Democoder, you're correct, but you're stating something that's the inverse of the discussion Dave and I were having. Please, if you wish to get into a discussion I'll always be there - but don't support and argue something that's perverse in relation to the original topic I was discussing.

We're talking about how the money spent by STI under the 2001 agreement (e.g. 400M on R&D infrastructure and the Cell project), under the expanded 2002 agreement (e.g. Sony + STI) on process technology, and the >4 billion on manufacturing infrastructure under an expanded OTSS agreement all represents investment in a project whose intention is focused squarely on a specific implementation - PS3.

What you're discussing is irrelevant, as I have never debated prior art's influence (given its futility in an argument), which is why I brought up the parallel of the DoD's prior investments and their irrelevance to the PPC970's actual design.

I can tell you that as far back as 1996 when I worked there, they were talking about cell architectures as the "next big thing" and had already started spending money on it. Unless you can tell me how the Austin cell differs "radically" from the cell design at Yorktown, I'm going to claim that the vast majority of the money spent on "CELL" is not solely for PS3.

Then what's it for? The Broadband Engine is highly differentiated from the Blue-Gene projects being worked on that I'm aware of. AFAIK, TJ Watson-Yorktown was working on a BG program based on reductionist SMT cores which are utilized in high concurrency (bounded by 10^N) with the intention of masking latencies by way of massive parallelism - put into praxis by keeping as many threads in flight as possible vis-a-vis concurrent clusters of TUs surrounding sparse computational resources such as FPUs or FXUs. The concept being that there is a region where die area, performance and power requirements intersect for the mean supercomputing tasks that is best solved by the above design.
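
To make that latency-masking idea concrete, here's a toy sketch in C (purely my own illustration - the thread counts, latencies and names are made up, not taken from any IBM design): spawn far more lean threads than there are execution resources, so that while any one of them is stalled on a "memory access" the others keep the machine busy.

```c
/* Toy illustration of latency hiding via massive thread-level parallelism.
 * Each "thread unit" spends most of its time stalled (simulated memory
 * latency) and only a little time computing; with many such threads in
 * flight the stalls overlap, and total runtime approaches the latency of
 * ONE thread's requests rather than the sum of all of them.
 * All numbers here are placeholders, not figures from any real design. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define N_THREADS  64     /* many lean threads in flight         */
#define N_REQUESTS 16     /* "memory accesses" per thread        */
#define STALL_US   1000   /* simulated miss latency (1 ms)       */

static void *thread_unit(void *arg)
{
    long sum = 0;
    for (int i = 0; i < N_REQUESTS; i++) {
        usleep(STALL_US);     /* stalled, waiting on "memory"     */
        sum += i;             /* tiny burst of actual computation */
    }
    *(long *)arg = sum;
    return NULL;
}

int main(void)
{
    pthread_t tid[N_THREADS];
    long results[N_THREADS];

    /* Launch every thread unit; while one is stalled, others run. */
    for (int i = 0; i < N_THREADS; i++)
        pthread_create(&tid[i], NULL, thread_unit, &results[i]);
    for (int i = 0; i < N_THREADS; i++)
        pthread_join(tid[i], NULL);

    printf("%d thread units done; the stalls were overlapped, not shortened\n",
           N_THREADS);
    return 0;
}
```

The point being only that throughput comes from how many requests you keep in flight, not from making any individual access faster.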

The Broadband Engine (as an architecture) looks to be more akin to an extension of the EmotionEngine, or an advanced design bred largely from post-contemporary PC 3D accelerator ideology (such as unified constructs) with some features taken from Cellular Computing (namely the memory hierarchy and forms of interconnection), where you have a vastly smaller absolute number of basic "cells"/cores, each composed of computation-heavy constructs with high-bandwidth hierarchical interconnections that are at some level 'overseen' by a separate construct. IIRC, didn't nVidia patent something vaguely similar with a 'Gatekeeper' controlling underlying resources?
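
To contrast that with the organization I'm describing for the BE, here's a second toy sketch (again purely my own illustration - cell_t, LOCAL_WORDS and the "overseer" function are hypothetical names, not anything lifted from the Suzuoki or IBM patents): a handful of computation-heavy cells, each with private local storage that an overseeing element stages data into before the cell crunches it locally.

```c
/* Toy contrast: a few computation-heavy "cells", each with private local
 * storage, fed by a single overseeing element that stages work in.
 * Every name and size here is a placeholder for the sake of the sketch. */
#include <stdio.h>
#include <string.h>

#define N_CELLS     4
#define LOCAL_WORDS 256               /* stand-in for a cell's local store */

typedef struct {
    float local_store[LOCAL_WORDS];   /* private, explicitly managed memory */
} cell_t;

/* The cell's kernel only ever touches its own local store. */
static float cell_kernel(const cell_t *c, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += c->local_store[i] * c->local_store[i];
    return acc;
}

/* Overseeing element: partitions the shared buffer, stages each slice into
 * a cell's local store (memcpy standing in for a DMA transfer), then runs
 * that cell's kernel on its private copy. */
static void oversee_and_dispatch(const float *shared, int total, cell_t *cells)
{
    int slice = total / N_CELLS;
    for (int c = 0; c < N_CELLS; c++) {
        memcpy(cells[c].local_store, shared + c * slice,
               slice * sizeof(float));
        printf("cell %d -> %f\n", c, cell_kernel(&cells[c], slice));
    }
}

int main(void)
{
    static float shared[N_CELLS * LOCAL_WORDS];
    static cell_t cells[N_CELLS];

    for (int i = 0; i < N_CELLS * LOCAL_WORDS; i++)
        shared[i] = (float)i / LOCAL_WORDS;

    oversee_and_dispatch(shared, N_CELLS * LOCAL_WORDS, cells);
    return 0;
}
```

Fewer, fatter cells with explicitly staged local memory versus many lean threads sharing everything - that's the architectural distinction I'm drawing, however crudely.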

The architectural differences are obvious on a per-IC level. If you wish to theorize about how the concept of non-local cellular processing is analogous to how the IBM projects work on a macroscopic scale, then I'd agree, based on what Kutaragi has stated. The way in which interconnection and processing tasks look to be distributed is very much akin to the BG projects, which, arguably, are biologically inspired. Yet this isn't an architectural feature - and as you very well know, analogous tasks don't imply similar constructs underlying them. And, frankly, this is a spill-over benefit for STI - not an example of STI's investment in a set-piece architecture spilling over elsewhere.

Yet, if you wish to continue debating how the Broadband Engine (as envisioned in the Suzuoki or Kahle/Gschwind et al. IBM patents) is rooted architecturally in a BlueGene project, I'll disagree. Again, just because there is a body of research in the general direction (e.g. Church-Turing) doesn't mean it described every microarchitectural detail of a specific manifestation (e.g. Pentium4). So, look through the patents by Sony or STI-Austin and you tell me how they relate to the BG programs architecturally... or rather, how they relate to a closer degree than, say, the EmotionEngine as a forebear.

PS. Were they working on any RANN projects during your time? You didn't say when you departed.

PPS. I'm out for a while, so if you respond quickly it's not that I'm avoiding you.
 
I left at the beginning of 1997; I was there for two years. My project was unrelated to mas-par. It was a tamper-resistant trusted computing project: to build a PCMCIA card which could run digital rights, cash, and privacy computation and be resistant to reverse engineering. The result was a co-processor enmeshed in a fine web of wires, with crystals to detect ultrasonic drills, and a light detector, all sealed in epoxy. If you tried to drill the thing, you'd either break one of the micro-wires in the mesh, set off the light detector, or trip the vibration (ultrasound) sensor.

The problem that killed the project was heat. I worked on software for a breadboard version, but the real device had problems overheating. They may have fixed them after I left (unrelated: it was the dot-com boom, and I wanted to work at a startup).


What you say may be correct; the only information I have is about Watson/Yorktown, since that is where I worked, and I still know people there. But on a gut level, I highly doubt that money spent on the BE, if it is truly different, won't be repurposed for other projects. IBM had two major thrusts at Watson: a bunch of guys who favor centralized massively parallel designs, and a bunch of guys who favor massively distributed designs. One guy would routinely evangelize at meetings on a future with billions of embedded teraflop devices in homes across the world (pervasive/ubiquitous). Another guy would regale us with tales of megaprojects solving the core questions of the universe, designing new generations of materials, vehicles, and drugs.

The pervasive guy was more consumerist in tone and imagined these processors doing more than just games. He wanted home security, automation, voice command, etc., much of what the EE was supposed to do with synthesis. He wanted scenes of networked devices in homes interacting over the net, like the futuristic video phones you see in movies like Minority Report. So, although IBM is working with Sony on this project, I would bet they are also looking forward to reusing this work in a future generation of home electronics, cell phones, PDAs, etc. per the pervasive vision, once they can reduce power consumption and size further.
 
...

So DemoCoder, you should be aware of how hard it is to actually code for any cellular architecture. Care to share with us????
 
...

One guy would routinely evangelize at meetings on a future with billions of embedded teraflop devices in homes across the world (pervasive/ubiquitous). Another guy would regale us with tales of megaprojects solving the core questions of the universe, designing new generations of materials, vehicles, and drugs.
Any chance Kutaragi Ken made a visit while you were there????
 
No, I don't have any experience with IBM mas-par. I was part of a trusted computing group, but I did attend meetings and demos of other groups. The division I worked in handled many distributed/parallel projects; for example, video-on-demand video servers were supposed to be this big thing in 1996, so one project that had a lot of popularity was a parallel real-time filesystem implementation for video serving (code-named Shark, then Tiger Shark).

My project was mostly centered around libraries and Mach kernel device drivers for using the trusted co-processor.

The closest I've come to actually working with lots of parallel chips in a realtime machine was doing Amiga software. :)

I'm just trying to point out in this thread that IBM T.J. Watson is focused on "basic" non-commercial research. It is a separate entity from the rest of IBM that doesn't have to be profitable. Watson does true academic-type research. IBM recoups the costs by taking discoveries from the research division in computer science, mathematics, physics, and so on, such as IBM's discovery of the giant magneto-resistance (GMR) effect, and engineering real software/hardware around them in separate application divisions.

I still receive IBM Systems Journal as a result of being a former employee, and I can see the groundwork for much of what is being talked about today with CELL, BlueGene, MAJC, etc. in old IBM papers from years ago. That, and I know that IBM has already taped out multicore designs, thanks to a discussion with a friend who works there and sent me one of those "isn't this cool" emails.
 
Demo, was this the man at the meetings who would have crazy visions of teraflop devices in homes? (Millions of homes)

[image: tokusyu_int_5d.jpg]


Or someone that looks like him.
 
The closest I've come to actually working with lots of parallel chips in a realtime machine was doing Amiga software.
Hey, I'd be really interested to know what Amiga software you were working on. A game? What part of the software were you making?
 
Which is why I questioned you as this has been the status quo condition

Their projection for the time schedule is spot on; however, the Toshiba eDRAM, even if it has the smallest cell size available in the world, isn't as dense as what was projected in 2000, when the STI Cell project began. And Mr. Davari's comment could be based on that earlier projection.

However, since we know Cell was initially designed for the 100nm process, eDRAM was not crucial to its performance. Had Microsoft released an Xbox2 around this year, Sony could have started building their Broadband Engine on their 90nm process.

Though, eDRAM is still crucial for their graphics processor.

, with the exception of you stating that 50nm and 65nm would lead to practical differences - which is more likely than not incorrect.

With the amount of logic and memory they are planning, it's possible that has some practical implications for the way the Broadband Engine is implemented.

They have options. One is to put the entire BE on one die and put that into a package. Another option is to put only one processor element on a die and then put however many processor elements are required into a package. At first it's likely they'd go with the first option, but after Sony's restructuring, who knows what they need; they might opt for the flexibility along the lines of the second option. Maybe they are going to do both, since the PS3 is such a big piece for them.

PS. Also, I don't think he ever explicitly stated the embedded DRAM was to be the "main" memory [pool] as you said.

He said embedded DRAM is possible for the main processing element. That's the "main" memory pool, which all the APUs can access too. The patent suggests another level of memory hierarchy off the I/O processor; is that what you referred to as the "main" memory pool?
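
Just to spell out the hierarchy I have in mind, here's a rough sketch (the level names and their ordering are my reading of the patent, not something the patent states in these terms):

```c
/* Sketch of the memory hierarchy as I'm reading the patent. The names and
 * the three-level split are my own labels, not taken from the patent text. */
#include <stdio.h>

enum mem_level {
    APU_LOCAL_STORE,   /* private to a single APU                         */
    SHARED_EDRAM,      /* the pool every APU on the chip can access -
                          the "main" memory I'm talking about             */
    EXTERNAL_MEMORY    /* the further level hanging off the I/O processor */
};

static const char *level_name(enum mem_level lvl)
{
    switch (lvl) {
    case APU_LOCAL_STORE: return "APU local store";
    case SHARED_EDRAM:    return "shared eDRAM (the 'main' pool I mean)";
    case EXTERNAL_MEMORY: return "external memory behind the I/O processor";
    }
    return "unknown";
}

int main(void)
{
    /* An APU's view in this reading: local store first, then the shared
     * on-chip pool, then whatever sits behind the I/O processor. */
    enum mem_level path[] = { APU_LOCAL_STORE, SHARED_EDRAM, EXTERNAL_MEMORY };
    for (int i = 0; i < 3; i++)
        printf("level %d: %s\n", i, level_name(path[i]));
    return 0;
}
```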

PPS. I find the IBM VP's (whose division led Cell's development) comments particularly humorous in light of my recent 'conversation' with the Baumann, who kept stating that Cell's R&D isn't relative to that invested in PS3.

Well arguing about nothing is the idea of fun around here. :D
 
Vince said:
DemoCoder said:
So yes, "CELL" as you know it, benefitted tremendously from money being spent at IBM on pervasive/mas-par/distributed computing. And money being spent today is not solely for the PS3, but also for IBM's HPF projects.

Exactly, which is irrelevant to the money invested by the STI partners. Democoder, you're correct, but you're stating something that's the inverse of the discussion Dave and I were having. Please, if you wish to get into a discussion I'll always be there - but don't support and argue something that's perverse in relation to the original topic I was discussing.

Seems to me that Democoder already said exactly what we were talking about:

DemoCoder said:
IBM does basic research before applications are ready, and money spent on that research ends up in multiple applications. Even money being spent on CELL at Austin is not solely intended for PS3, since IBM is reusing that work for HPF projects at Livermore.
 
V3 said:
Which is why I questioned you as this has been the status quo condition

Their projection for the time schedule is spot on; however, the Toshiba eDRAM, even if it has the smallest cell size available in the world, isn't as dense as what was projected in 2000, when the STI Cell project began. And Mr. Davari's comment could be based on that earlier projection.

This is blatantly wrong. I do apologize, as I've furthered this incorrect statement by responding to it as I did, which was my mistake.

  • In 2000, they projected the lithography nodes they'd utilize as 100nm, 70nm, 50nm.
  • In early 2002 they amended this to 90nm, 65nm, 45nm, and announced the world's smallest eDRAM cell @ 65nm, along with plans to follow their dual-loading 65nm process with a fully SOI 45nm one.
If anything, Toshiba and Sony are ahead of the game - not behind, which is what you're stating. Although the differential between the 2001 projections and the 2003 actuals is very small, as 65nm & 70nm are basically equivalent in practical terms.

However, since we know Cell was initially designed for the 100nm process, eDRAM was not crucial to its performance. Had Microsoft released an Xbox2 around this year, Sony could have started building their Broadband Engine on their 90nm process.

Again, we've already made the distinction between Cell the architecture, which may well have 90nm parts for some niches, and the Broadband Engine, which is 95% probable to be based on the 65nm process. You can speculate all you want, but it flies in the face of what's known.

V3 said:
Vince said:
, with the exception of you stating that 50nm and 65nm would lead to practical differences - which is more likely than not incorrect.

With the amount of logic and memory they are planning, it's possible that has some practical implications for the way the Broadband Engine is implemented.

Ok, fine, then it's a "practical" net gain for STI: they projected 100nm and got 90nm; they projected 70nm, they got 65nm; they projected 50nm, they got 45nm. Thus, if anything, their process advancements have exceeded their projections, albeit by a small margin in praxis.

Again, I'm sorry for not thinking and just listening to you when I assumed it was a net loss when they switched node sizes.

V3 said:
Vince said:
PS. Also, I don't think he ever explicitly stated the embedded DRAM was to be the "main" memory [pool] as you said.

He said embedded DRAM is possible for the main processing element. That's the "main" memory pool, which all the APUs can access too. The patent suggests another level of memory hierarchy off the I/O processor; is that what you referred to as the "main" memory pool?

You're avoiding the statement. He never stated that the eDRAM is the only and main system memory - which is what you're implying. All he said is that it's possible for use within a PE at sub-100nm scales -- something which you're extrapolating to mean it's the "main" memory. There is no evidence to support this, and if anything the known data suggests that there will be XDR in the system at a lower, larger level. A lot of speculation based on very little evidence.
 