So, 1 PE after-all or is this just for GDC 2005 ?

  • No, this is only the CPU they are describing at GDC: the final CPU of PlayStation 3 will have more P

    Votes: 0 0.0%
  • "Eh scusate... ma Io sono Io evvoi...e voi non siete un cazzo" --Il Marchese del Grillo.

    Votes: 0 0.0%
  • This, as the last option is a joke option... do not choose it.

    Votes: 0 0.0%

  • Total voters
    185
DaveBaumann said:
With all the talk about inexpensive design, going with only 3 separate chips is just silly.

Where is this talk coming from?
In this thread you seem to indicate that MS is doing Xbox/Xbox2 business as a tremendous PR campaign to gain mindshare or a so-called "foot in the door of the living room" from which they expect no concrete return :)

If they are serious about Xbox PC and are interested in the hardware business like Apple, then they may be able to continue their expensive experiment for a while, but if not, and they still use 3 separate chips as you suggest, I can't believe their sanity.

BTW you often apply GPU development patterns to CPUs, but it's misleading, as a typical new GPU architecture development cycle (2 years) is far shorter than a CPU's (4 years).
 
one said:
Oh, that's a very interesting argument. What you suggest is an exact description of an off-the-shelf technology solution, isn't it? :LOL:

So if Cell is related to any off-the-shelf parts, then Cell isn't a clean sheet?

Because I always thought "clean sheet" meant a blank bit of paper where you stick what's needed to reach the goal; where you got those bits from didn't matter. If an existing part did the job, you would just use it, rather than rebuilding something from scratch just so you could claim clean-sheet-ness.
 
V3 said:
Based on the current info that's floating around. You yourself have commented on them.

The exact information I had on that was highly ambiguous as to the actual design.

V3 said:
The bit about turning profit next generation.

They already went with IP licensing this time; why ruin that with a 3-chip design just for the CPU?

As I was commenting earlier in this thread: the next generation will last ~4-5 years for the console product alone, and there may be other complementary products as well (we just don't know). They need to turn a profit over the course of the next generation's lifetime; they don't need to do it the second the hardware hits the market.

The problem with XBox was not that it was expensive to begin with – successful consoles nearly always are – but that it has a bunch of fairly fixed-cost items in there (CPU, Graphics, MPC, HD) that are not getting further cost reductions over the lifetime of the console. I wouldn't be surprised to find that XBox hardware is losing more money now than it was at introduction.

Just "IP licensing" is not a magic bullet that suddenly solves all of your cost issues, since you still have to have someone implement that silicon and do something with it, meaning you either pay someone else to do it or you do it in house – this could end up costlier, in the initial stages, than just paying for complete chips; you also have to pay the fab to make the chips, which will be a relatively fixed cost. You IP license for a reason, and one of the reasons for doing it is to have control over the silicon, to make changes over the course of the lifetime of the console and gain the cost advantages of newer silicon processes that become available and are viable (cost wise) over that lifetime.

Yes, a 3-CPU-core system would be expensive initially, but then so will the console be initially. As the console's life goes on it will have price drops, but changes to the internals (on top of basic economies of scale), such as moving more CPU cores onto a single die, can mean that the costs scale, more or less, with the price reductions. That gives a roughly similar cost-to-sale ratio across the course of the console's lifetime, rather than one that widens as the console price is forced down while relatively fixed chip/component costs give you no corresponding drops.
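Dave's cost-scaling argument can be put in rough numbers. A minimal sketch in which every figure (retail prices, chip cost, shrink rate, other component cost) is a made-up illustrative assumption, not real console economics:

```python
# Illustrative sketch (all numbers hypothetical): compare the cost-to-sale
# ratio over a console's lifetime when chip costs shrink with process
# nodes versus staying roughly fixed.

def cost_to_sale(console_price, chip_cost, other_cost):
    """Total build cost as a fraction of retail price."""
    return (chip_cost + other_cost) / console_price

# Retail price drops at each yearly revision (hypothetical).
prices = [399, 349, 299, 249, 199]

# Case A: die shrinks / integration cut chip costs ~25% per revision.
shrinking = [180 * 0.75**year for year in range(5)]

# Case B: bought-in chips at a relatively fixed price.
fixed = [180] * 5

for year, price in enumerate(prices):
    a = cost_to_sale(price, shrinking[year], other_cost=120)
    b = cost_to_sale(price, fixed[year], other_cost=120)
    print(f"year {year}: shrinking {a:.2f}  fixed {b:.2f}")
```

With these assumed numbers the fixed-chip-cost case ends the lifetime with a build cost above the retail price, while the shrinking case keeps the ratio roughly level, which is the widening-gap effect described above.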
 
DeanoC said:
I would love to have a good discussion of the advantages and disadvantages the PS3 and Xenon architecture poses.

Still waiting for you to kick this one off. :)
Although, to truly do that, I guess NDAs would have to be broken.
It would be very interesting though - I'm too ignorant as far as gfx code is concerned (present and future) to possibly have a good handle on that part of the equation, even if I held all the cards. Which I don't. So an educated discussion that relates the architectures to the target application area would be really helpful.

PS. If it would be too hamstrung at this date due to lack of public data, just pen down your thoughts, and let it loose once the NDAs are lifted, so that we have a good foundation for discussions at that point.
 
one said:
In this thread you seem to indicate that MS is doing Xbox/Xbox2 business as a tremendous PR campaign to gain mindshare or a so-called "foot in the door of the living room" from which they expect no concrete return :)

If they are serious about Xbox PC and are interested in the hardware business like Apple, then they may be able to continue their expensive experiment for a while, but if not, and they still use 3 separate chips as you suggest, I can't believe their sanity.

I don't know any of these things; I'm making suggestions of things that could happen, because at present there seem to be a lot of conclusions based on an absence of information. The CPU discussion is a hypothetical to illustrate the point that the patent outlines a flexible design that can be altered over the course of the console's lifetime.

BTW you often apply GPU development patterns to CPUs, but it's misleading, as a typical new GPU architecture development cycle (2 years) is far shorter than a CPU's (4 years).

Well, think about it – if a graphics processor takes on the order of 2 years to build, they must have requested tenders from the graphics vendors before then...
 
Sorry to extend these pointless Cell arguments, but I just want to point out that the EETimes article One quoted actually points out that Cell isn't a clean-sheet design either.

Halfway through, it changes its mind. I quote: "If Kahle at IBM started with a nearly clean sheet". My emphasis...

So now we can endlessly argue over how near "nearly clean sheet" is. A few smudges on the paper, or a complete processor core for the PUs?
 
DeanoC said:
one said:
Oh, that's a very interesting argument. What you suggest is an exact description of an off-the-shelf technology solution, isn't it? :LOL:

So if Cell is related to any off-the-shelf parts, then Cell isn't a clean sheet?

Because I always thought "clean sheet" meant a blank bit of paper where you stick what's needed to reach the goal; where you got those bits from didn't matter. If an existing part did the job, you would just use it, rather than rebuilding something from scratch just so you could claim clean-sheet-ness.

I'd like to ask you: what is "architecture" in a CPU?

When the Synergistic Processor, which will occupy most of the silicon in the Cell processor, is a non-von-Neumann dataflow architecture, such hybridity makes Cell seem, to me, very different from the expected Xenon CPU.

7.4 A Streaming Processing Unit for a CELL Processor
3:15 PM
B. Flachs1, S. Asano2, S. Dhong1, P. Hofstee1, G. Gervais1, R. Kim1, T. Le1, P. Liu1, J. Leenstra3, J. Liberty1, B. Michael, S. Mueller3, O. Takahashi1, Y. Watanabe2, A. Hatakeyama4, H. Oh1, N. Yano2
1IBM, Austin, TX
2Toshiba, Austin, TX
3IBM, Boeblingen, Germany
4Sony, Austin, TX

The design of a 4-way SIMD streaming data processor emphasizes achievable performance in area and power. Software controls data movement and instruction flow, and improves data bandwidth and pipeline utilization. The micro-architecture minimizes instruction latency and provides fine-grain clock control to reduce power.
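The "software controls data movement" line is the key difference from a conventional cached CPU. A toy Python sketch (pure illustration, not actual SPU code; `fetch` and `process` are hypothetical stand-ins) of the double-buffering pattern such a design implies, where the next chunk is fetched while the current one is processed:

```python
# Hypothetical sketch of software-managed data movement: with no hardware
# cache, the program itself double-buffers, fetching the next chunk into
# one local buffer while it works on the other, so transfer and compute
# can overlap on real hardware.

def fetch(main_memory, offset, size):
    """Stand-in for a DMA transfer from main memory to local store."""
    return main_memory[offset:offset + size]

def process(buffer):
    """Stand-in for SIMD work on a local buffer."""
    return [x * 2 for x in buffer]

def stream(main_memory, chunk=4):
    results = []
    # Prime the first buffer.
    current = fetch(main_memory, 0, chunk)
    for offset in range(chunk, len(main_memory) + chunk, chunk):
        # On real hardware the next fetch would run concurrently with
        # processing; here the ordering just shows the pattern.
        nxt = fetch(main_memory, offset, chunk)
        results.extend(process(current))
        current = nxt
    return results

print(stream(list(range(8))))  # → [0, 2, 4, 6, 8, 10, 12, 14]
```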

DeanoC said:
Halfway through, it changes its mind. I quote: "If Kahle at IBM started with a nearly clean sheet". My emphasis...

So now we can endlessly argue over how near "nearly clean sheet" is. A few smudges on the paper, or a complete processor core for the PUs?

From the same page...
Then the trio went to that proverbial clean sheet, drawing upon the symmetric-multiprocessing experience within IBM.
Need to say more? ;)
 
Overall this was an interesting thread. That is, if you can ignore the "Cell is bigger & badder than CPU X" stuff that is currently happening. Why Vince continues to argue with a few people in here who clearly know more about CPU design and/or Xenon & Cell hardware is really beyond me.

One, can you please drop the "Look, it says here it's a clean sheet design.." so the thread can get back to the interesting discussion?
 
Qroach said:
Overall this was an interesting thread.
Oh, really? According to my 1st post in this thread, I voted "This, as the last option is a joke option... do not choose it." It's nice to see you too are having fun in this joke thread. :rolleyes: If you took the risk of expressing your own view on the subject without preaching at others, it'd be even more fun, I promise :p
 
I said "overall". Not everything in this thread was interesting. More specifically, the need for some to see Sony portrayed in the best light "always". No company deserves that.
 
Qroach said:
I said "overall". Not everything in this thread was interesting. More specifically, the need for some to see Sony portrayed in the best light "always".
Honestly why you think so is beyond me. I'd like to quote V3 in this thread...
V3 said:
Besides, a real clean-sheet design doesn't always turn out to be good either. So why people here are making a big deal out of clean-sheet design is beyond me.
 
You're enjoying this, aren't you, one? :p

Now I've got a question for Deano, if it's not under NDA. Simply put, we all know the common conceptions of how Cell is built. Is it possible that they could have doubled the number of S/APUs per PE, or increased the per-clock performance of each APU, so that they would require fewer PEs to reach their target performance goal, which is still at the moment unknown?
 
one said:
I'd like to ask you, what is architecture in CPU?
Well, in Cell's case we are talking about the PUs; they are the central processing units. Its APUs aren't CPUs in any conventional sense, in just the same way that the VUs (and GS these days) being on the CPU die doesn't make them CPUs.

The number of units on a die is largely irrelevant to the architecture overview. Cell is a linked-orbit configuration (sorry for the quick diagram, but it should do enough to explain; the numbers of things are not indicative of anything except how lazy I am).

cell.png


Now, if you're not concerned with FLOPS counts, the PUs are clearly very interesting. They control what the APUs are doing; we know they are 64-bit PowerPC cores, but we have little other (public) information.

So let's pose some questions and my views; feel free to disagree. Hopefully this will steer discussion towards meaningful disagreements rather than semantics.

Given the APUs are fairly small (we assume this from the number of them and their uses), are they PowerPC? More precisely, is there any advantage to making APUlets use the PowerPC ISA?

I'll start the bidding with a no: they are designed as vector processors, so they will likely have a custom ISA for the job.

Are there any advantages to having the PU use a new PowerPC core?
Would they operate any worse as APU schedulers if they used (for example) a PPC970 core?

I'll offer a no: the scheduling will likely want a fairly normal core with good branching. It may, however, favor a multi-threaded core, so perhaps this would be a good change from existing designs.

Where is 'standard' game code going to run?

Given the speciality of the APUs and the fact that game code is often highly unoptimised, I will suggest on the PU. This again points to multi-threading (perhaps one thread for scheduling, one for game code), but again with a fairly standard execution engine.
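Deano's suggested split (one hardware thread scheduling APUlets, one running ordinary game code) can be illustrated loosely with software threads. A toy analogy only, not the actual PU design; all names here are hypothetical:

```python
# Loose illustration of the two-thread PU idea: one thread acts as the
# APU scheduler dispatching queued work packets ("APUlets"), while the
# other runs ordinary, branchy game code and submits work as it goes.
import queue
import threading

apu_queue = queue.Queue()
completed = []

def apu_scheduler():
    """Dispatch queued 'APUlets' until a stop marker (None) arrives."""
    while True:
        job = apu_queue.get()
        if job is None:
            break
        completed.append(job())  # pretend an APU ran the packet

def game_code():
    """Ordinary control-heavy code, submitting work packets as it runs."""
    for n in range(3):
        apu_queue.put(lambda n=n: n * n)  # bind n now, not at call time
    apu_queue.put(None)  # tell the scheduler to stop

sched = threading.Thread(target=apu_scheduler)
sched.start()
game_code()
sched.join()
print(sorted(completed))  # → [0, 1, 4]
```

The design point the sketch echoes: the scheduler thread needs almost no compute of its own, it just keeps the queue of specialist work moving while the other thread runs general-purpose code.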

Do IBM have any PowerPC cores that could, with a few modifications, do the job of a PU?

Yes, the PowerPC 300 cores seem an ideal fit: multi-threading with a multi-processing link, high clock speed, designed for 65nm, and a simple embedded architecture (no OOOE etc.).

Now, I'm not saying that a PU isn't based on the PowerPC 300 (or vice versa...), but the whole argument that Cell is so revolutionary that everything must be custom doesn't hold if you look at the PUs, which to my mind are the "real" CPUs of the chip. The APUs still look like vector processors doing very specialist jobs, controlled by some "normal" CPUs.
 
Xenus said:
You're enjoying this, aren't you, one? :p

Now I've got a question for Deano, if it's not under NDA. Simply put, we all know the common conceptions of how Cell is built. Is it possible that they could have doubled the number of S/APUs per PE, or increased the per-clock performance of each APU, so that they would require fewer PEs to reach their target performance goal, which is still at the moment unknown?

Who said their performance goal wasn't simply the best they could get? Or even the best in real-world circumstances?
But if you're talking about the TFLOP figures often quoted, AFAIK there is no problem with adjusting the number of things to reach your goal; IMO that's the whole point of Cell.

But I suspect that random TFLOP figure was either not serious or was given up a long time ago...
 
Just "IP licensing" is not a magic bullet that suddenly solves all of your cost issues, since you still have to have someone implement that silicon and do something with it, meaning you either pay someone else to do it or you do it in house – this could end up costlier, in the initial stages, than just paying for complete chips;

Complete chips? Why didn't MS just ask IBM for what they want from the get-go? If they want 6 cores, why not ask IBM for 6 cores on a single chip? Why go the 3-separate-chips route and downsize later?

Even with a single chip, they can still downsize it when a better process is available.

Yes, a 3-CPU-core system would be expensive initially, but then so will the console be initially. As the console's life goes on it will have price drops, but changes to the internals (on top of basic economies of scale), such as moving more CPU cores onto a single die, can mean that the costs scale, more or less, with the price reductions. That gives a roughly similar cost-to-sale ratio across the course of the console's lifetime, rather than one that widens as the console price is forced down while relatively fixed chip/component costs give you no corresponding drops.

Dave, a console with a single-chip high-end CPU and a single-chip high-end GPU is already expensive initially. It'll be tough for them to control cost with 3 separate chips just for the CPU.

Like you said, the redesign of 3 separate chips into a single chip is going to cost them later on, so why not cut cost from the beginning and just have a single chip? A single chip gives better integration, maybe even better performance, from the get-go.

They need to integrate both the CPU and GPU onto a single die, about halfway into the lifetime of the next-gen consoles, to really cut cost.
 
I think the point Dave was getting at is that putting six cores on the same die may not be feasible at that time due to yield issues. By splitting them into separate chips you get rid of the yield issue, and you put six cores on the same die when it is actually feasible.
 
V3 said:
Complete chips ? Why don't MS asked IBM to do what they want from the get go ? If they want 6 cores, why don't they ask IBM for 6 cores on a single chip. Why go the 3 seperate chips route and downsize later ?

It all depends on the processing power you are targeting, the process you are using, and how that relates to die size. With an effective multi-CPU design that won't "look"/perform too differently between multiple cores across dies and multiple cores on a single die, you can yield more usable chips per wafer by reducing the die space – if you have a very large CPU with 4 cores on it and a defect on the wafer hits it, then you have lost 4 cores (unless you can reuse the working ones elsewhere); have a CPU with two cores for about half the die cost and you lose half the cores per defect. Shrink more cores onto a single die later as the process scales down, whilst keeping the die area within similar bounds, and hence the defect loss within similar tolerances.
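Dave's defect argument is essentially the classic Poisson yield model. A back-of-envelope sketch, with the wafer area, die areas, and defect density all made-up numbers chosen only to show the shape of the trade-off:

```python
# Back-of-envelope Poisson yield model (all numbers hypothetical) for the
# trade-off above: one big 4-core die versus two half-size 2-core dies
# occupying the same total wafer area.
import math

def die_yield(defect_density, die_area):
    """Classic Poisson model: fraction of dies with zero defects."""
    return math.exp(-defect_density * die_area)

def good_cores_per_wafer(wafer_area, die_area, cores_per_die, defects):
    """Expected defect-free cores from one wafer."""
    dies = wafer_area // die_area
    return dies * cores_per_die * die_yield(defects, die_area)

WAFER = 60000  # mm^2 of usable wafer area, hypothetical
D = 0.005      # defects per mm^2, hypothetical

big = good_cores_per_wafer(WAFER, die_area=200, cores_per_die=4, defects=D)
small = good_cores_per_wafer(WAFER, die_area=100, cores_per_die=2, defects=D)
print(f"4-core die: {big:.0f} good cores, 2-core dies: {small:.0f}")
```

With these assumptions the half-size dies deliver noticeably more good cores from the same wafer area, because a defect on a small die wastes fewer cores – which is the whole point of splitting until the process shrinks.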

Dave, a console, with a single chip high end CPU and a single chip high end GPU, is already expensive initially. It'll be tough for them to control cost with 3 seperate chips, just for the CPU.

Like I said, this is merely an example of something that could be done. Assuming the patent is what ends up being the case, its description leaves room for more than one CPU in the design and isn't fixed to a set number of cores per chip.

They need to integrate both the CPU and GPU onto a single die, about half way, into the life time of next gen consoles, to really cut cost.

No, this is one area I would be very surprised to ever see happen since, by the looks of things at the moment, ATI are doing the layout and process management via TSMC for the graphics, whilst IBM are doing the CPU – if they are on different fabs to begin with (which probably means different libraries as well), then it would be very costly/difficult to do in the future.
 
I think the point Dave was getting at is that putting six cores on the same die may not be feasible at that time due to yield issues. By splitting them into separate chips you get rid of the yield issue, and you put six cores on the same die when it is actually feasible.

Wait, how big do you expect each of these dual-core chips to be? That six cores on a chip isn't feasible?

BTW who will be manufacturing these CPU chips for MS anyway ?
 
It all depends on the processing power you are targeting, the process you are using, and how that relates to die size. With an effective multi-CPU design that won't "look"/perform too differently between multiple cores across dies and multiple cores on a single die, you can yield more usable chips per wafer by reducing the die space – if you have a very large CPU with 4 cores on it and a defect on the wafer hits it, then you have lost 4 cores (unless you can reuse the working ones elsewhere); have a CPU with two cores for about half the die cost and you lose half the cores per defect. Shrink more cores onto a single die later as the process scales down, whilst keeping the die area within similar bounds.

What you've said is not alien to me, since I suggested the same thing when I was discussing the Broadband Engine on this board.

The problem with a 3-chip design is that it's expensive to keep performance at the same level as if all the cores were on a single die. Besides, 6 cores should be feasible next gen. It's better to eat a bad yield than to have to deal with a high-performance multi-chip configuration.

No, this is one area I would be very surprised to ever see happen since, by the looks of things at the moment ATI are doing the layout and process management via TSMC for the graphics, whilst IBM are doing the CPU – if they are on different fabs to begin with (which probably means different libraries as well) then it would be every costly / difficult to do in the future.

If Sony can fab Cell and the NV chip at their fab, why can't MS do something similar, maybe at IBM or TSMC or elsewhere? I am sure the redesign will be worth it. Sony went through a lot of redesign this gen; I expect MS to do the same.
 