So, 1 PE after all, or is this just for GDC 2005?

  • No, this is only the CPU they are describing at GDC: the final CPU of the PlayStation 3 will have more PEs

    Votes: 0 0.0%
  • "Eh, excuse me... but I am me, and you... you are nothing" --Il Marchese del Grillo.

    Votes: 0 0.0%
  • This, like the last option, is a joke option... do not choose it.

    Votes: 0 0.0%

  • Total voters
    185
function said:
Both the PS3 and Xenon CPUs are currently rumoured to be new designs, share the same process size, and run at similar clock speeds. If this is the case, where is a 16-, 8- or even 4-times increase in performance for the PS3 CPU (over Xenon) going to come from?
Do they share the same process size? I doubt it, if Xenon is scheduled to be released in Fall 2005.
Do they run at similar clock speeds? I doubt it, as the Xenon CPU doesn't have Synergistic Processors.
Where is an X-times increase in performance for the PS3 CPU (over Xenon) going to come from? I guess it comes from the Synergistic Processors and Redwood.

I already expect the ISSCC 2005 presentations will disclose the 1st-gen Cell performance as 256 GFLOPS or less, and people will scream "sub-1TFlops waaaaa!!!!1111" all over the internet, but until "Broadband Engine" (this name comes from the Rambus-Sony-Toshiba agreement in 2003, which lacks IBM, not from the patents) is disclosed in March, I'll just keep my fingers crossed.
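For context, a 256 GFLOPS per-chip figure would be consistent with straightforward peak arithmetic; a minimal sketch, where every parameter (SPE count, SIMD width, FMA, clock) is an assumption for illustration rather than anything disclosed:

```python
# Hypothetical peak-FLOPS arithmetic for a single Cell-style chip.
# All parameters are illustrative assumptions, not disclosed figures.
spes         = 8      # assumed number of Synergistic Processing Elements
simd_width   = 4      # assumed 32-bit lanes per SIMD instruction
ops_per_lane = 2      # fused multiply-add counted as two flops
clock_ghz    = 4.0    # assumed clock frequency

peak_gflops = spes * simd_width * ops_per_lane * clock_ghz
print(f"Peak: {peak_gflops:.0f} GFLOPS per chip")   # -> 256 GFLOPS
```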

By the way, no Xbox 2 announcement at GDC 2005, right? Or will they have a separate event near the GDC conference halls?

function said:
Almost all of the talk around CPU power seems to be based around GFlops, with little to no consideration going to integer processing, thread handling and switching, size and effectiveness of caches, or any of the other things I obviously don't really understand the impact of.

In other words, according to the PR a 1-rack Cell workstation can do 16 TFLOPS (with a customized benchmark which can exploit parallelism to the max), but can a 1-rack Xenon CPU workstation do 16 TFLOPS under the same conditions?
 
one said:
function said:
Almost all of the talk around CPU power seems to be based around GFlops, with little to no consideration going to integer processing, thread handling and switching, size and effectiveness of caches, or any of the other things I obviously don't really understand the impact of.

In other words, according to the PR a 1-rack Cell workstation can do 16 TFLOPS (with a customized benchmark which can exploit parallelism to the max), but can a 1-rack Xenon CPU workstation do 16 TFLOPS under the same conditions?

The Xenon CPU is designed to run video game code as efficiently as possible, not to run scientific benchmarks and do things like protein folding. I'm sure Microsoft brought some interesting ideas to the table, since they are the most powerful software company on the planet.
 
function said:
Almost all of the talk around CPU power seems to be based around GFlops, with little to no consideration going to integer processing, thread handling and switching, size and effectiveness of caches, or any of the other things I obviously don't really understand the impact of. ;)
IMO all of the compute-intensive code in a game can be parallelized well, and a lot of it benefits from SIMD floating point. I think good programmers will be able to code around small caches/lower IPC per core/etc. in the end (how well they do at the start, or how long it will take for them to get up to speed, I'm not so sure about). There is no way around having low peak performance, though.
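As a rough illustration of the kind of compute-intensive game code that parallelizes and vectorizes well, here is a data-parallel particle update; the workload and the NumPy formulation are my own example, not anything from the thread:

```python
import numpy as np

# A data-parallel particle update: the same independent arithmetic is applied
# to every element, so it maps naturally onto SIMD units and multiple cores.
n = 100_000
pos = np.zeros((n, 3), dtype=np.float32)
vel = np.random.rand(n, 3).astype(np.float32)
gravity = np.array([0.0, -9.8, 0.0], dtype=np.float32)
dt = 1.0 / 60.0

vel += gravity * dt   # every particle updated independently
pos += vel * dt       # no branches, no cross-element dependencies
```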
 
Brimstone said:
The Xenon CPU is designed to run video game code as efficiently as possible, not to run scientific benchmarks and do things like protein folding.

The 16Tflops rating doesn't come from scientific benchmarks such as LINPACK according to the Sony rep.

Brimstone said:
I'm sure Microsoft brought some interesting ideas to the table, since they are the most powerful software company on the planet.

Eh... financially powerful != technically powerful.
I respect the recent achievements of Microsoft Research, looking at the number of their papers, but not more than IBM + Sony + Toshiba in silicon matters.
 
I wouldn't underestimate the pool of 3D graphics talent within MS, as I keep getting reminded by developers. MS are also not doing the silicon; they have other partners on that side (IBM + ATI + SiS) who also have a very good understanding of the respective elements they have been drawn in for!
 
DaveBaumann said:
I wouldn't underestimate the pool of 3D graphics talent within MS, as I keep getting reminded by developers. MS are also not doing the silicon; they have other partners on that side (IBM + ATI + SiS) who also have a very good understanding of the respective elements they have been drawn in for!
That reminded me of one thing... Intel and IBM have always been eager to present their latest achievements at ISSCC (IBM has topped the number of accepted papers in recent years), but where's IBM's presentation on the "clean-sheet design Xenon CPU"? And also, how much has Microsoft poured into the CPU development so far? The deal between Nintendo and IBM over Gekko in 1999 was over 10 billion dollars, but that includes all manufacturing costs at IBM fabs.
 
one said:
That reminded me of one thing... Intel and IBM have always been eager to present their latest achievements at ISSCC (IBM has topped the number of accepted papers in recent years), but where's IBM's presentation on the "clean-sheet design Xenon CPU"?

Probably buried under a ton of confidentiality agreements, much like ATI's work on the graphics processor. I very much doubt that the contract allows any partners to speak about individual elements publicly prior to MS making detailed announcements about the platform.

And also, how much has Microsoft poured into the CPU development so far?

There has, as yet, been no disclosure of any of the development costs by either MS or their partners. The one thing that I do think is that they had a fair indication of the type of structure they had in mind for the CPU even before they went to the graphics IHVs, indicating that they may have been working with IBM for quite a long time already.
 
passerby said:
Anyway, good to know that ISSCC is less than a month away.
Since I'm not into this ISSCC thing, I'd like to know if mortals like us will have some access to the ISSCC papers just after the conference; otherwise I'm not that interested in ISSCC.
Usually GDC papers (GDC is just one month after ISSCC) appear online shortly after the end of the conference.

ciao,
Marco
 
function said:
Anyway, just a general thought going round my head is this: if the PS3's CPU were to be several times more powerful than Xenon's CPU, where would this power come from?
Since the details of neither Xenon nor the PS3 are out in the wild, this is an impossible question to answer authoritatively.

But there are two areas that can make large differences.
* The memory subsystem/hierarchy.
* Computational resource balancing. (Catch-all bullet point.)
Augmented by a trickier third:
* Internal communication overhead.

OK then, point by point: the PS3 has elected to go with a memory type that has both advantages and problems. The console environment, however, emphasizes the benefit, which is very high potential bandwidth (while avoiding the problems incurred when you try to package this memory into DIMMs of arbitrary size and in arbitrary numbers).
Depending on implementation details, this could give the PS3 a huge bandwidth advantage.
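To put the bandwidth point in rough numbers, peak memory bandwidth is just signalling rate times interface width; a minimal sketch, where both the per-pin rate and the bus width are assumptions for illustration rather than known PS3 figures:

```python
# Rough peak-bandwidth arithmetic for a high-speed memory interface.
# Both numbers below are assumptions for illustration only.
data_rate_gbps_per_pin = 3.2    # assumed per-pin data rate
bus_width_bits         = 64     # assumed interface width

peak_gbytes_per_s = data_rate_gbps_per_pin * bus_width_bits / 8
print(f"Peak bandwidth: {peak_gbytes_per_s:.1f} GB/s")   # -> 25.6 GB/s
```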

As far as computational resource balancing goes, the PS3 Cell processor is a vector floating-point monster. Comparing it for a moment with an x86 processor (or the PPC 970) points towards a PE that is less sophisticated in terms of extracting maximum single-thread performance, as far as superscalarity and particularly OOO execution go. Conversely, however, it has easily an order of magnitude greater resources for vector floating-point math. And that's per PE.
There was no way in hell any x86 processor could come close to that without ditching SSE for something better, multiplying it, adding control logic for these vector processors, trying to squeeze all this onto a die, and then writing new tools for accessing the new capabilities (since existing x86 tools wouldn't be useful), invalidating any rational reason to stick with x86. So Microsoft did the reasonable thing and looked for something better geared towards the needs of this kind of product. We still don't know what IBM offered them, only that it was sufficiently better that it made sense to abandon x86.

It may well be that the PS3 Cell is still far more capable in terms of vector math, but that this will be somewhat countered by the Xenon GPU taking over some of the tasks that the Cell APUs will handle, and that offloading this to the GPU will allow other types of code (AI and other typically branch-heavy stuff) to be handled better by the Xenon CPU.

The third point is about how hard it is to apply parallel execution resources to a particular problem. This depends strongly on both software tools and basic hardware capabilities: latencies for accessing data at various levels and locations. About this, we know next to nothing. It would make sense to assume that the Cell processor is exceedingly good at this, since parallel execution is a fundamental idea behind the Cell concept. Implementation is everything, though. It could be speculated that the Xenon CPU will be best harnessed using rather coarsely parallel code, something that might or might not map well to game code.
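How much of all that parallel hardware turns into actual speedup depends on the fraction of the code that can run in parallel; Amdahl's law gives the usual rough bound (the 90% parallel fraction below is purely an assumed number):

```python
# Amdahl's law: speedup from n parallel units when a fraction p of the
# work can be parallelized. Even modest serial fractions cap the gain.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90                                 # assumed parallel fraction
for n in (1, 2, 4, 8):
    print(n, round(speedup(p, n), 2))    # 1.0, 1.82, 3.08, 4.71
```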

Those are some issues that could make a big difference. But even knowing a lot more details than we do, it would be very very difficult to go from schematic description to real world performance projection. The proof of these puddings will be in the eating.

And perhaps both of these consoles will be sidestepped in the marketplace by a small, cheap, and quiet power-miser of a Nintendo, designed and priced such that people other than the tech geeks would like to have it in their living room. :D
 
passerby said:
Anyway, good to know that ISSCC is less than a month away.
I've noticed that the next, best show is always "a month away" and that we invariably get less information than we think we will. :p
 
nAo said:
Since I'm not into this ISSCC thing I'd like to know if mortals like us will have some access to ISSCC papers just after the conference
I'm counting on sources such as EETimes. At least, EETimes appears to have gotten interested.
 
Just a little Reality Check Engine:

Did anyone notice how the only place people hear about the 1Tflop thing is...... here...? Or other internet forums?

It seems to me that there was one single comment YEARS ago from Sony about targeting 1 Tflop, then nothing. Ages ago.

If Sony unveils something that doesn't have 1Tflop written on it, the only people to bitch about it will be... us. Like this: (frustrated smiley)
 
Also, it may be that the PE is more efficient at scheduling tasks than they thought, so it could be a case of doubling the S/APUs per PE and halving the number of PEs.
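For what it's worth, doubling the APUs per PE while halving the PE count leaves the raw peak unchanged, since the total number of units is just the product; a trivial check with hypothetical counts:

```python
# PEs * APUs_per_PE: 4 x 8 and 2 x 16 both give 32 APUs in total,
# so raw peak throughput is the same; only scheduling/layout differ.
assert 4 * 8 == 2 * 16
```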
 
one said:
The 16Tflops rating doesn't come from scientific benchmarks such as LINPACK according to the Sony rep.

No, it comes from multiplication: A x B x C = 16 Tflops.

In other words, when dealing with companies quoting flops numbers, you have two choices: chuckle and play along, or laugh and walk away. In either case, you are recognizing that it's all just BS.

Aaron Spink
speaking for myself inc.
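For illustration, the A x B x C multiplication being mocked here would look something like the sketch below; the per-chip peak, chips per board, and boards per rack are all made-up numbers, not figures from any press release:

```python
# A x B x C: how a headline "per rack" flops number is typically assembled.
# Every factor below is a hypothetical assumption for illustration.
gflops_per_chip = 256      # assumed per-chip peak
chips_per_board = 8        # assumed
boards_per_rack = 8        # assumed

rack_tflops = gflops_per_chip * chips_per_board * boards_per_rack / 1000
print(f"{rack_tflops:.1f} TFLOPS per rack")   # -> 16.4 TFLOPS
```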
 
one said:
That reminded me of one thing... Intel and IBM have always been eager to present their latest achievements at ISSCC (IBM has topped the number of accepted papers in recent years), but where's IBM's presentation on the "clean-sheet design Xenon CPU"?

Difference in ownership. MS owns the Xenon design. I think MS and Sony are operating on slightly different PR strategies.


And also, how much has Microsoft poured into the CPU development so far? The deal between Nintendo and IBM over Gekko in 1999 was over 10 billion dollars, but that includes all manufacturing costs at IBM fabs.

10 billion? Right....

For 10 billion, Nintendo could have built 2 separate mega-fabs, produced enough chips for 150-200 million GCs, and had enough design teams to redesign the GC silicon every year.

Aaron Spink
speaking for myself inc.
 
aaronspink said:
In other words, when dealing with companies quoting flops numbers, you have two choices: chuckle and play along, or laugh and walk away. In either case, you are recognizing that it's all just BS.

16 Tflops comes straight from IBM's press release.

Also, Cell has economies of scale over whatever CPU will go into Xenon, and can possibly reach higher specs, because Cell will go into STI usage.

Maybe the Xenon CPU will make an appearance at ISSCC 2005:

"
Another approach, which will be described by IBM (BlueGene/L), uses low-cost, small, power-efficient processors in a massively parallel fashion. This complex SOC ASIC includes two processor cores, embedded DRAM, SRAM, and custom logic, achieving a high-power/cost-performance trade-off, suited to its role as a building block of IBM’s BlueGene/L supercomputer."

Source: www.ieee.org


 
aaronspink said:
And also, how much has Microsoft poured into the CPU development so far? The deal between Nintendo and IBM over Gekko in 1999 was over 10 billion dollars, but that includes all manufacturing costs at IBM fabs.

10 billion? Right....

For 10 billion, Nintendo could have built 2 separate mega-fabs, produced enough chips for 150-200 million GCs, and had enough design teams to redesign the GC silicon every year.

OK, it's a 1 billion dollar deal :oops:

http://www.eet.com/article/showArticle.jhtml?articleId=18301817
LOS ANGELES — In a stunning coup for its new Pervasive Computing strategy, IBM Microelectronics has won the design for the next-generation Nintendo game console ICs. The estimated $1 billion win, snatched from the jaws of long-time Nintendo partner NEC Corp., has the potential to make IBM the dominant figure in ASICs at the 0.18-micron generation, and to establish the PowerPC as the highest-volume RISC processor.
 
That includes money for R&D and the cost of the chips, I assume.

Over 4 years that's not a large sum of money.
 