Do you expect the PS3 will be the most powerful system?

xbdestroya said:
Fox5 said:
xbdestroya said:
Fox5 said:
I think Xbox 2 could be more powerful than the PS3 because the PS3 is relying heavily on Cell. I don't think Cell will be a flop, but I think it will fall well short of expectations. If the PS3 relies significantly on Cell for graphics then, at least until developers become more skilled at programming for it, Xbox 2's greater reliance on a GPU will pay off. I think Cell will be more powerful than any computer CPU now, but will be crushed by the dual-core CPUs available when the PS3 arrives.

I think you are very much overestimating what the PC world is going to be receiving in terms of dual cores in the near future. I know that for all practical purposes, anyone with a top-of-the-line processor now will more or less be taking a step backwards by buying a Smithfield processor.

Though I give more respect to AMD and their Toledo implementation, it'll be some time before that offers better performance than their FX-55 as well.

Well, I'm assuming optimized code will be available for dual cores. Considering the PS3 is coming out in 2006, I think dual-core 2.5GHz Athlon 64s could best it (though only slightly, once developers have fully optimized for the PS3). If Cell were x86-based, it's not like it would give great performance if plopped into a PC right now; it would likely perform much worse.

I think what you're writing about is very much a point-of-view situation; I agree that as far as x86 goes, a dual-core chip would be able to dominate Cell, but then again so could a single core. If we're talking about flops, then obviously Cell would humiliate dual-, quad-, octal-core and so on down the line of multi-core x86 chips. That assumes a frequency wall has been hit for a while.

And just looking at things from a practical perspective, not x86 or flops performance, I don't see multi-core x86 chips catching Cell when it comes to media performance, and I don't see Cell knocking the x86s off their own perch. Still, if you can get Cell emulating x86 well enough to send and receive email and do some word processing, for a lot of people that negates the need for an Intel or AMD machine.

Not anyone on this forum presumably, but no small part of the populace.

Since when has a FLOPs monster been crucial for games? I think the randomness of user interaction makes the ability to efficiently handle branching code the most important part of a CPU that's handling the physics, AI, and other assorted game code.
Intel's Pentium M CPUs are far from powerhouses when compared to Athlons and Pentium 4s, yet they can easily best both architectures when it comes to games.
Athlon 64s and P4s are similar in raw performance, yet the Athlon's integrated memory controller lets it easily best the P4 in games.
And back in the day the Athlon was much more powerful than the Pentium 3, yet the limitations of SDR RAM and the Athlon's cache kept it from ever gaining a significant performance advantage, and in many cases the P3 was still faster.

Now, if software rendering were still common, I think the P4 would rule all x86 CPUs as the gaming CPU to have, and Cell would have its place in the gaming world, but GPUs do graphics much better.
 
MfA said:
Just because code is branch heavy is not to say it isn't parallelizable.

So you think the parallelization abilities of Cell will outweigh any problems it might have with branching code?

Eh, it will also depend on how big Cell's memory bus is. Can it be fed enough data to make full use of it?

Cell may turn out to be practically the perfect combo: incredibly fast on some parts of the code, and fast enough on others. However, I think that branching is what should have been focused on to improve game performance; nearly every significant increase in game performance on PCs has been related to a better CPU/memory system for handling branching rather than making a more powerful CPU.
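A minimal sketch of what "branch-heavy but parallelizable" can look like in practice, assuming an invented Entity type and update_entity routine purely for illustration: each entity's update is full of data-dependent branches, yet no entity depends on another, so the loop still splits cleanly across cores (or SPEs).

[code]
/* Illustration only: branch-heavy per-entity game AI that is still
 * embarrassingly parallel. Entity/update_entity are made-up names. */
#include <stddef.h>

typedef struct {
    float x, y, hp;
    int   state;              /* 0 = idle, 1 = chase, 2 = flee */
} Entity;

static void update_entity(Entity *e, float px, float py)
{
    float dx = px - e->x, dy = py - e->y;
    float dist2 = dx * dx + dy * dy;

    /* Plenty of data-dependent branches per entity... */
    if (e->hp < 10.0f)
        e->state = 2;
    else if (dist2 < 100.0f)
        e->state = 1;
    else
        e->state = 0;

    if (e->state == 1) { e->x += dx * 0.01f; e->y += dy * 0.01f; }
    if (e->state == 2) { e->x -= dx * 0.01f; e->y -= dy * 0.01f; }
}

void update_all(Entity *ents, size_t n, float px, float py)
{
    /* ...but entities are independent, so the loop parallelizes
     * trivially (shown with OpenMP on a PC; on Cell each SPE could
     * take a slice of the array instead). */
    #pragma omp parallel for
    for (long i = 0; i < (long)n; i++)
        update_entity(&ents[i], px, py);
}
[/code]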
 
I presume PS3 will be technically better for the following reasons (but that does not necessarily mean far better graphics):
Cell
BD
Sony’s specialty is high-end consumer electronics + Sony innovation
+6 months (or more) later debut

I didn't mention some other points as they are present in both cases (PS3/Xenon), so they cancel each other out, e.g.:
Collada/XNA
nVidia/ATI

Having said that, we all know how things can change when being brought from paper to reality.
 
Fox5 said:
However, I think that branching is what should have been focused on to improve game performance; nearly every significant increase in game performance on PCs has been related to a better CPU/memory system for handling branching rather than making a more powerful CPU.

You think a 30+ stage CPU based on a 25+ year old x86 ISA is the poster-child for branch handling in games? The irony here is that, boiling things down to the essence, you realize the entire x86 paradigm was hardly designed to run game code as its raison d'etre. It's a general purpose solution that manages to do the task reasonably well, but only by the blessing of brute clock rate (or alternatively, pushing hard and wringing as much clock scalability as you can out of an aging architecture). It's also arguable that the nature of x86 proliferation has "dragged" software practices to make game code more amenable to a general purpose CPU, rather than vice versa. More simply, countless measures have been blessed upon the x86 scene to bring them up to the task of running game code, rather than the processors themselves having particular aptitude for doing so. In another round of irony, that they are heralded as so ideal for running the next big game is more a testament to [gasp] marketing and hype. That would also completely ignore the pivotal impact that external GPUs have had on the platform, as well. Had it not been for hardware GPU technology, x86 would have been abandoned long ago for playing games. That much is true.
 
HEHEHEHE! I just made that all up (obviously). I call bullshit on anyone trying to say one of those systems is more powerful than another at this point in time.
 
randycat99 said:
Fox5 said:
However, I think that branching is what should have been focused on to improve game performance; nearly every significant increase in game performance on PCs has been related to a better CPU/memory system for handling branching rather than making a more powerful CPU.

You think a 30+ stage CPU based on a 25+ year old x86 ISA is the poster-child for branch handling in games? The irony here is that, boiling things down to the essence, you realize the entire x86 paradigm was hardly designed to run game code as its raison d'etre. It's a general purpose solution that manages to do the task reasonably well, but only by the blessing of brute clock rate (or alternatively, pushing hard and wringing as much clock scalability as you can out of an aging architecture). It's also arguable that the nature of x86 proliferation has "dragged" software practices to make game code more amenable to a general purpose CPU, rather than vice versa. More simply, countless measures have been blessed upon the x86 scene to bring them up to the task of running game code, rather than the processors themselves having particular aptitude for doing so. In another round of irony, that they are heralded as so ideal for running the next big game is more a testament to [gasp] marketing and hype. That would also completely ignore the pivotal impact that external GPUs have had on the platform, as well. Had it not been for hardware GPU technology, x86 would have been abandoned long ago for playing games. That much is true.

And GPUs handle the graphics, something which Cell would destroy an x86 chip in. Can Cell be good at graphics, AI, and physics? They don't seem to be particularly related functions. (Or does Cell have APUs for each task?) Also, x86 may not be meant for handling branching, but Cell doesn't seem much better; the POWER core should do quite well, but the APUs themselves don't seem fit for it.
Do you really think that Cell will destroy anything that's available in 2006 on the PC? I'd agree with a statement that it would destroy anything now, and maybe anything for up to 6 months after its release, but it doesn't seem to be the world-changing CPU that the hype has made it out to be. (It's not like consoles haven't had better CPUs/graphics at launch than PCs before, but it never lasts long, even back when PCs didn't have 3D cards or a modern post-Pentium architecture.)
 
Fox5 said:
Since when has a FLOPs monster been crucial for games? I think the randomness of user interaction makes the ability to efficiently handle branching code the most important part of a CPU that's handling the physics, AI, and other assorted game code.
Intel's Pentium M CPUs are far from powerhouses when compared to Athlons and Pentium 4s, yet they can easily best both architectures when it comes to games.
Athlon 64s and P4s are similar in raw performance, yet the Athlon's integrated memory controller lets it easily best the P4 in games.
And back in the day the Athlon was much more powerful than the Pentium 3, yet the limitations of SDR RAM and the Athlon's cache kept it from ever gaining a significant performance advantage, and in many cases the P3 was still faster.

Now, if software rendering were still common, I think the P4 would rule all x86 CPUs as the gaming CPU to have, and Cell would have its place in the gaming world, but GPUs do graphics much better.

I'm not sure where you are coming from here - the Athlon 64 beats the Pentium M AND the Pentium 4 in gaming, and not just the 4. I'm not just talking as a processor in general either, but on a GHz for GHz level. Every 'Pentium M on the desktop' review out there I have read has indicated the same thing.

And I'm not saying it's not possible for the desktop to surpass the PS3 (and Xbox 2) shortly after their respective releases, but the designs being implemented this go-around are enough of a departure, in my mind, from the standard PC way of doing things that, as far as gaming performance goes, it's going to take the PC a bit longer to catch up this time. The PC world is just a bigger ship to turn. Granted, by the time the PS3 launches, that ship will already have been turning for a year.

Well, we'll see how it goes. For cost-cutting reasons, I don't think we'll see the mind-numbing power in the PS3 that Sony was originally looking to bring, but I have high hopes for the Cell architecture in the future, both inside and outside of gaming.

That being said, I'm not looking for x86 to die, or stop progressing. I build my own computers and I'm not looking to have that taken away from me.
 
Fox, it's not a matter of what will "destroy" what in xyz years. If it does, great. If not, hey, it was still a cool idea. It's just exciting to see some really cool ideas and new approaches being tried out. Neither you nor I has even seen one running in real life, which makes asserting as fact that it will or will not "destroy" something in 2006 and beyond pretty ridiculous. Attributing some x86 advantage to "branching" pretty much jumps the shark, imo. I'm still trying to figure out exactly what you are referring to when you cite "branching". Whether you are speaking about the capabilities of a branch prediction unit (BPU) or the indeterminate branching that happens in the execution of the actual code, I still think the point is pretty specious.
 
Fox5 said:
Since when has a FLOPs monster been crucial for games?

Ever since people started using GPUs to accelerate graphics rendering? :)

I think the randomness of user interaction makes the ability to efficiently handle branching code the most important part of a CPU that's handling the physics, AI, and other assorted game code.

User interaction has very little influence when you're dealing with time-slices as small as the time used to render a frame or perform IK transformations on a character, for example. There's a whole lot of FLOP processing to be done in games which today's x86 CPUs are not equipped to handle, and which I hope FLOP powerhouses like CELL and Xenon will bring new life to. For example: character animation, physics, procedural texturing and some of the more CPU-intensive elements of AI.
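A minimal sketch of the kind of FLOP-bound inner loop being described, with an invented structure-of-arrays particle layout (Particles, integrate and the field names are illustration only): almost pure multiply-add arithmetic and no data-dependent branches, which is exactly the shape of work a SIMD unit (SSE, AltiVec, or an SPE) chews through efficiently.

[code]
/* Illustration only: a branch-free, FLOP-heavy physics step. */
#include <stddef.h>

typedef struct {
    float *x, *y, *z;      /* positions  */
    float *vx, *vy, *vz;   /* velocities */
} Particles;

void integrate(Particles *p, size_t n, float dt, float gravity)
{
    for (size_t i = 0; i < n; i++) {
        /* Pure multiply-add work; no branching on the data. */
        p->vz[i] += gravity * dt;
        p->x[i]  += p->vx[i] * dt;
        p->y[i]  += p->vy[i] * dt;
        p->z[i]  += p->vz[i] * dt;
    }
}
[/code]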
 
Fox5 said:
And GPUs handle the graphics, something which Cell would destroy an x86 chip in. Can Cell be good at graphics, AI, and physics? They don't seem to be particularly related functions. (Or does Cell have APUs for each task?) Also, x86 may not be meant for handling branching, but Cell doesn't seem much better; the POWER core should do quite well, but the APUs themselves don't seem fit for it.

Hey, which console will adopt x86 in the next-gen...??












Oh well, Phantom :LOL:
 
Dual-core CPUs won't crush a CELL for games processing, because the P4 or Opteron cores, as they exist now, have pretty poor SIMD processing power, and simply doubling them up won't make SIMD streaming workloads execute faster than a CELL. They might win in integer performance, though.

CELL has 8 SPEs that run at 4.8GHz. Two Intel/AMD SSE units at <4GHz are going to be significantly slower.
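For a sense of scale, here is a back-of-the-envelope comparison of the gap being described. Every per-cycle throughput and clock figure below is an assumption chosen for illustration, not a measured or official spec.

[code]
/* Rough, assumed peak single-precision FLOPS comparison. */
#include <stdio.h>

int main(void)
{
    /* Assumed: each SPE issues a 4-wide fused multiply-add per cycle
     * (8 flops/cycle), 8 SPEs, ~4 GHz. */
    double cell_peak = 8 * 8.0 * 4.0e9;

    /* Assumed: a dual-core x86 sustaining ~4 SSE flops/cycle per core
     * at ~2.5 GHz. */
    double x86_peak = 2 * 4.0 * 2.5e9;

    printf("Cell, rough peak:      %.0f GFLOPS\n", cell_peak / 1e9);
    printf("Dual-core x86, rough:  %.0f GFLOPS\n", x86_peak / 1e9);
    return 0;
}
[/code]

Even if the x86 figures here are generous, the streaming-FLOPS gap is an order of magnitude; integer and branchy code is a different story, as noted above.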
 
nAo said:
nelg said:
In the lab yes, and that was only the SPEs not the PPE.
PPE and SPEs should be running at the same clock rate.
Yes, but looking at this thermal image it looks like the PPE might be the limiting factor. According to Jaws, these >4GHz numbers are for the SPEs only. So it is presumptuous to rely on clock speeds, obtained in a laboratory setting for only a portion of a chip, to guesstimate final clock speeds.

[Thermal image of the Cell chip: kaigai001.jpg]
 
Fox5 said:
Why can't the SPEs run faster than the PPE? Areas of the Pentium 4 run faster than the core speed.
I am sure it is possible, but would it be necessary? Part of the job of the PPE is to manage the workloads of the SPEs. If the SPEs are underutilized, then there would be no benefit in raising their clock separately from the PPE.
 
nelg said:
DemoCoder said:
CELL has 8 SPEs that run at 4.8GHz.
In the lab yes, and that was only the SPEs not the PPE.
FYI, it's confirmed the whole CELL prototype (DD1) could run at 5.2GHz. However, since the PPE in DD2 is fatter now, it may only be able to run at a lower clock speed. IIRC, an independent SPE could run at up to 5.6GHz according to the ISSCC presentations.

http://neasia.nikkeibp.com/neasia/001090

The disclosed specs for the prototype chip were not maxed-out data created for the conference. The development team has confirmed operation at up to 5.2GHz on the first prototype chip obtained in April 2004, but the ISSCC presentations on Cell merely stated "4GHz or higher". More than likely, the companies are expecting to use about 4GHz in actual equipment for reasons of higher IC yield, lower dissipation and simplified board design. The initial chip exhibited no problems with logical operations, and was able to boot the operating system (OS). Dissipation, however, was a major issue. Masakazu Suzuoki, VP, Microprocessor Development Dept, Semiconductor Business Div at SCE, feels that this has been resolved: "We had a difficult time reducing dissipation at the start, but finally found the solution in the second half of 2004."
 
nelg said:
Fox5 said:
Why can't the SPEs run faster than the PPE? Areas of the Pentium 4 run faster than the core speed.
I am sure it is possible, but would it be necessary? Part of the job of the PPE is to manage the workloads of the SPEs. If the SPEs are underutilized, then there would be no benefit in raising their clock separately from the PPE.

The PPE doesn't have to handhold the SPEs every cycle. Remember, the SPEs can fetch their own data and talk to each other and main memory via DMA. That said, I think the SPEs and PPE will run at the same clock speed, and that speed will be over 4GHz. 4GHz seems like a speed-of-light barrier to some, but the fact is the PPE/SPE are significantly simpler than the G5/P4/Opteron, and that is what allows them to run at much higher speeds.
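A sketch of the double-buffered DMA pattern that lets an SPE feed itself without PPE handholding. dma_get() and dma_wait() are hypothetical stand-ins for whatever the real SPE runtime exposes, and the code assumes total_bytes is a multiple of CHUNK; the point is only that the next transfer overlaps with compute on the current chunk.

[code]
/* Illustration only: double-buffered streaming into local store. */
#include <stdint.h>
#include <stddef.h>

#define CHUNK 4096

/* Hypothetical DMA helpers, not a real SDK API. */
extern void dma_get(void *local, uint64_t remote, size_t bytes, int tag);
extern void dma_wait(int tag);
extern void process(float *data, size_t count);

void stream_process(uint64_t src, size_t total_bytes)
{
    static float buf[2][CHUNK / sizeof(float)];
    int cur = 0;

    dma_get(buf[cur], src, CHUNK, cur);            /* prefetch first chunk */

    for (size_t off = 0; off < total_bytes; off += CHUNK) {
        int next = cur ^ 1;
        if (off + CHUNK < total_bytes)             /* kick off next transfer */
            dma_get(buf[next], src + off + CHUNK, CHUNK, next);

        dma_wait(cur);                             /* current chunk has landed */
        process(buf[cur], CHUNK / sizeof(float));  /* compute overlaps the DMA */
        cur = next;
    }
}
[/code]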

The XB360 has heat issues too; that's why they use water cooling. Maybe Sony can clock higher by going with phase-change cooling. :)
 