PS3 vs. Xenon CPU performance

Shifty Geezer

Assuming one Cell processor at 256 Gflops in PS3, and the rumours of a dual-core Xenon processor at 60-80 Gflops are true, what kind of differences can we really expect between the systems?

As that 256 Gflops in PS3 seems rather specialized, and without good branching, how would you expect these processors to perform on AI, physics, and other non-graphics-related activities?

I know all the details of the machines aren't in (bandwidth limits, whether PS3's PE has to vertex process, etc.), but as speculation on the merits of the processors, let's assume both have ideal situations to run in. As an exercise to inform ignorants like me as to the differences between SIMD and conventional architectures, what would the described PE not be good at that the Xenon CPU would, and vice versa, especially in the contexts of a games console and media hub?
 
Actually, what is the difference between the EE and the Pentium in the Xbox?

I thought the EE was better in floating point than any Pentium at the time.

So the Xbox's superior graphics must be attributable to the GPU and the RAM?

The higher bandwidth and the SIMD HW of the PS2 didn't trump the Xbox's better processing of textures?
 
wco81 said:
Actually, what is the difference between the EE and the Pentium in the Xbox?

I thought the EE was better in floating point than any Pentium at the time.

So the Xbox's superior graphics must be attributable to the GPU and the RAM?

The higher bandwidth and the SIMD HW of the PS2 didn't trump the Xbox's better processing of textures?

The EE takes care of the "vertex shading" (if we can call it that) on PS2, which is done on the GPU on the Xbox.
The amount of RAM on the Xbox, plus better texture compression/handling and IQ, helps Xbox games look better than many PS2 games.
So yes, on the Xbox, it's all down to the GPU's merits.
The EE is much better at floating-point performance than the Pentium in the Xbox, however it needs to do things that the Xbox does on the GPU's vertex shaders, so we're back to square one.
 
At this point it's all guesswork.

If the CPU on PS2 is performing vertex work, does that mean the GPU has more transistors to dedicate to pixels? Or did they want EDRAM on the GPU, or etc., etc.? So I think graphics is difficult to quantify without details of the entire system.

Cell-like architectures are good at streaming problems or complex maths on small data sets. To me that's basically graphics, physics to a large extent as long as the data set can be partitioned efficiently, and audio.

Where Cell will not do as well is on algorithms that require random access to large data sets. Most non-trivial AI falls into this category. Conventional processors with well-structured L2 caches have a significant advantage here.

Having said all of that, parallel systems are really difficult to optimise for (especially non-symmetric ones). In a single-CPU system the fastest overall solution is the combination of the most optimal parts. In a parallel system it is often advantageous to run something on another processor even if that processor doesn't run it efficiently, because overall balance is an issue.
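To put that distinction in concrete terms, here's a minimal C sketch (hypothetical code, not from any real engine): the first loop is the kind of streaming, partitionable work a Cell-style SPU eats up, while the second chases pointers through a large working set, which is exactly where a big L2 cache on a conventional core earns its keep.

```c
#include <stddef.h>

/* Streaming-friendly: contiguous data, predictable access, trivial to cut
   into fixed-size chunks that fit an SPU-style local store. */
void integrate(float *pos, const float *vel, size_t n, float dt)
{
    for (size_t i = 0; i < n; i++)
        pos[i] += vel[i] * dt;          /* same op over a long stream */
}

/* Cache-bound: each step depends on data reached through a pointer, so
   the working set can't be pre-packaged and a conventional CPU's L2
   cache does most of the heavy lifting. */
struct node { struct node *next; int value; };

int sum_chain(const struct node *p)
{
    int total = 0;
    for (; p != NULL; p = p->next)      /* unpredictable addresses */
        total += p->value;
    return total;
}
```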
 
AI might not benefit much from SIMD, and may have a hard time fitting the batched data model ... but it is massively parallel (or at least as massively as the number of entities which need AI).

I wonder how hard it would be to do software-driven vertical multithreading on an SPU, to make it more useful for when data won't fit in 256 KB.
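For what it's worth, the usual way to soften the 256 KB limit is double buffering: kick off an asynchronous transfer of chunk N+1 while computing on chunk N, so the fetch latency hides behind the maths. A rough C sketch below; dma_get_async(), dma_wait() and process_chunk() are placeholder names standing in for whatever the real SPU DMA and work routines turn out to be, and the byte count is assumed to be a multiple of the chunk size.

```c
#include <stddef.h>
#include <stdint.h>

#define CHUNK 16384   /* slice of the 256 KB local store per buffer */

/* Hypothetical async-transfer primitives, illustrative names only. */
void dma_get_async(void *local, uint64_t main_mem_addr, size_t bytes, int tag);
void dma_wait(int tag);
void process_chunk(float *data, size_t count);

void stream_from_main_memory(uint64_t src, size_t total_bytes)
{
    static float buf[2][CHUNK / sizeof(float)];
    int cur = 0;

    dma_get_async(buf[cur], src, CHUNK, cur);         /* prime first buffer */

    for (size_t off = 0; off < total_bytes; off += CHUNK) {
        int next = cur ^ 1;
        if (off + CHUNK < total_bytes)                /* prefetch next chunk */
            dma_get_async(buf[next], src + off + CHUNK, CHUNK, next);

        dma_wait(cur);                                /* current chunk ready */
        process_chunk(buf[cur], CHUNK / sizeof(float));
        cur = next;
    }
}
```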
 
I think we are missing the key info on both, which is the graphics card. PS3's 256 Gflops is just the Cell, right?

So we are left with both ATI and Nvidia telling us they are throwing their best tech into their respective consoles. Very little is known about these parts.
 
flick556 said:
I think we are missing the key info on both, which is the graphics card. PS3's 256 Gflops is just the Cell, right?

So we are left with both ATI and Nvidia telling us they are throwing their best tech into their respective consoles. Very little is known about these parts.

Assuming that PS3 will come nearly a year after XB2, it is easy to answer this question: Nvidia's GPU will be faster. IMO, of course.
 
The cards are definitely in Sony's hands for now. What we hear today is just the first variant of the Cell processor line. They can definitely afford to wait until MS releases more info on their next-gen console, since they have nothing to lose. MS can buy time to find new technologies to incorporate into their console, but the more time they waste, the more they allow Sony to lower the PS3 launch price instead. How quiet can they be? The Cell is the work of 3 mega companies - IBM, Sony and Toshiba. It took them some time to come out with this processor.

As for the GPU part, Nvidia is developing it with input from Sony this time. There'll be engineers from both sides working on it. What does ATI have? They have the ArtX team, which integrated itself with their development team and gave Nvidia some headaches for the past few rounds.
 
hugo said:
They have the ArtX team, which integrated itself with their development team and gave Nvidia some headaches for the past few rounds.

Not really, AFAIK. I'm not a big 3D card expert, but I'm fairly sure the latest generation of cards has seen things pretty evenly balanced, or even tipped in NVidia's favour - despite them getting a kicking in previous generations. IIRC, the 6800 Ultra was out for quite some time before ATi released something which outperformed it (and not by much - in some benches the nVidia was still faster). I'm open to correction on that; I'm working off none-too-perfect memory here.
 
Evenly balanced, yes... Next round we'll be anticipating Nvidia to launch a card which utilizes technologies they might have learned from their partnership with SCEI, with some modifications. Will they have access to Rambus's XDR, IBM's Cell microprocessor line and GPU processes?

I do see a trend with ATI today that they are busy buying companies out there to acquire new technologies/IPs in order to keep themselves ahead of the competition. They ought to be cautious with their budget, because that was one of the very reasons 3DFX went down.
 
hugo said:
As for the GPU part, Nvidia is developing it with input from Sony this time. There'll be engineers from both sides working on it. What does ATI have? They have the ArtX team, which integrated itself with their development team and gave Nvidia some headaches for the past few rounds.

Wrong, nVidia already developed it by themselves and are working with Sony to match it up to the PS3. Sony didn't have any part in developing nVidia's part. In fact, it will be similar to the next-generation part that nVidia will release on the PC.
 
hugo said:
Evenly balanced, yes... Next round we'll be anticipating Nvidia to launch a card which utilizes technologies they might have learned from their partnership with SCEI, with some modifications. Will they have access to Rambus's XDR, IBM's Cell microprocessor line and GPU processes?

I do see a trend with ATI today that they are busy buying companies out there to acquire new technologies/IPs in order to keep themselves ahead of the competition. They ought to be cautious with their budget, because that was one of the very reasons 3DFX went down.

Wrong, both nVidia and ATI buy other companies. It's not a defensive move, it's an aggressive move, and both companies do it.
 
MfA said:
AI might not benefit much from SIMD, and may have a hard time fitting the batched data model ... but it is massively parallel (or at least as massively as the number of entities which need AI).

I wonder how hard it would be to do software-driven vertical multithreading on an SPU, to make it more useful for when data won't fit in 256 KB.

You're right, it is; every entity could be considered a thread. However, pretty much any non-trivial system I've seen requires random access to large data structures. Trivial example: what an AI entity can do at any point is potentially changed by the state of any entity with which it can interact. Now either you duplicate that data per entity or it's non-trivial to partition.

L2 cache is a really large win here simply because the entities you care about (the memory access patterns) are temporally coherent.

Not saying it's impossible, just more difficult to come up with efficient ways to do this. One AI solution we were experimenting with has a very math-heavy section which could trivially be run on Cell, but it's probably not the performance-limiting factor.

The other issue, which is by comparison minor, is that a lot of AI in games is script driven, and most scripting languages are not conducive to running in 256K on any processor. Never mind splitting the memory to allow double buffering.
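A trivial sketch of why this is hard to partition: even a dumb interaction check reads the current state of arbitrary other entities, so either each 256 KB slice carries its own copy of the relevant world state or every lookup is a random access back into main memory. The structures here are hypothetical, just to show the access pattern.

```c
#include <stddef.h>

#define MAX_CONTACTS 8

struct entity {
    float x, y;
    int   state;                  /* e.g. 1 = hostile, arbitrary encoding */
    int   num_contacts;
    int   contacts[MAX_CONTACTS]; /* indices of entities it can interact with */
};

/* Whether this entity may act depends on the *current* state of the
   entities it can see; those live anywhere in the world array, so the
   reads are effectively random and the whole array must stay reachable. */
int sees_hostile(const struct entity *world, int self)
{
    const struct entity *e = &world[self];
    for (int i = 0; i < e->num_contacts; i++) {
        const struct entity *other = &world[e->contacts[i]];
        if (other->state == 1)
            return 1;
    }
    return 0;
}
```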
 
a688 said:
hugo said:
As for the GPU part, Nvidia is developing it with input from Sony this time. There'll be engineers from both sides working on it. What does ATI have? They have the ArtX team, which integrated itself with their development team and gave Nvidia some headaches for the past few rounds.

Wrong, nVidia already developed it by themselves and are working with Sony to match it up to the PS3. Sony didn't have any part in developing nVidia's part. In fact, it will be similar to the next-generation part that nVidia will release on the PC.

I don't think I'm totally wrong. Nvidia might use XDR in their future cards, and they can familiarize themselves with the Cell technologies from their collaboration with Sony. Oh yeah, if they have already developed it, then tell me why they scrapped the NV50 and went back to redesigning their next-generation GPU? How about the involvement of Sony's engineers in the GPU work?

You mentioned that the GPU nVidia will be using will be similar to their PC offering. How do you suppose that? Pairing it with a CPU which has at least 256 GFlops is not an easy task. If they were to use a GPU design which they've been working on by themselves, don't you think it will be quite a bit of a bottleneck with regards to the Cell?
 
a688 said:
hugo said:
Evenly balanced, yes... Next round we'll be anticipating Nvidia to launch a card which utilizes technologies they might have learned from their partnership with SCEI, with some modifications. Will they have access to Rambus's XDR, IBM's Cell microprocessor line and GPU processes?

I do see a trend with ATI today that they are busy buying companies out there to acquire new technologies/IPs in order to keep themselves ahead of the competition. They ought to be cautious with their budget, because that was one of the very reasons 3DFX went down.

Wrong, both nVidia and ATI buy other companies. It's not a defensive move, it's an aggressive move, and both companies do it.

Nvidia isn't as aggressive as ATi in this aspect.
 
3dfx's primary purchase was STB, which resulted in them "going vertical" at exactly the wrong time (i.e. when they had a technology screw-up meaning they had no new products to "go vertical" with), which meant the board end of the business just dragged them down. In other words, don't confuse what 3dfx did with what ATI and NVIDIA are doing now.
 
hugo said:
a688 said:
hugo said:
Evenly balanced, yes... Next round we'll be anticipating Nvidia to launch a card which utilizes technologies they might have learned from their partnership with SCEI, with some modifications. Will they have access to Rambus's XDR, IBM's Cell microprocessor line and GPU processes?

I do see a trend with ATI today that they are busy buying companies out there to acquire new technologies/IPs in order to keep themselves ahead of the competition. They ought to be cautious with their budget, because that was one of the very reasons 3DFX went down.

Wrong, both nVidia and ATI buy other companies. It's not a defensive move, it's an aggressive move, and both companies do it.

Nvidia isn't as aggressive as ATi in this aspect.

Name the companies that either has purchased as of now and who they are planning to purchase. Ignore purchases in other markets besides graphics:

nVidia: 3dfx (who purchased Gigapixel)
ATI: ArtX, and somebody else relatively soon.
 
I'd also like to mention the fact that ATI is not badly off economically. It has been around far longer than Nvidia or 3dfx back in the day, so I doubt they'll be going anywhere anytime soon.

With that being said, and in regards to the differences in the CPUs for Xbox 2 and PS3, I am still waiting to see just what will power the Xbox 2, because right now I haven't a clue except for very strange rumors. If it is a tri-core CPU, then why doesn't Nintendo go for the same thing? After all, it would be the Triforce.
 
a688 said:
hugo said:
a688 said:
hugo said:
Evenly balanced, yes... Next round we'll be anticipating Nvidia to launch a card which utilizes technologies they might have learned from their partnership with SCEI, with some modifications. Will they have access to Rambus's XDR, IBM's Cell microprocessor line and GPU processes?

I do see a trend with ATI today that they are busy buying companies out there to acquire new technologies/IPs in order to keep themselves ahead of the competition. They ought to be cautious with their budget, because that was one of the very reasons 3DFX went down.

Wrong, both nVidia and ATI buy other companies. It's not a defensive move, it's an aggressive move, and both companies do it.

Nvidia isn't as aggressive as ATi in this aspect.

Name the companies that either has purchased as of now and who they are planning to purchase. Ignore purchases in other markets besides graphics:

nVidia: 3dfx (who purchased Gigapixel)
ATI: ArtX, and somebody else relatively soon.

http://www.beyond3d.com/forum/viewtopic.php?t=19979
The somebody else you were referring to will probably not be some small company. They're hoping it will let them stay ahead of Nvidia. Another point: didn't Nvidia buy only the patents/technologies of 3DFX/Gigapixel? They did not merge with it.
 