blakjedi said:
Bingo. Sony has been using those G5 Mac towers to stream their frames to for the last six months...

Jedi, is that true? Is there some evidence behind that, or is that just your opinion?
Lysander said:
"The client-side of the application receives the streams of data coming from one or more Cell processors, reconstructs the mesh and generates the display." That does not sound dumb to me. You say the Mac has nothing to do with the performance benchmark.

The rendering of the mesh visuals has no impact on the experiment, which is a cloth simulation solver. Whether they connect to a Mac, a PC, or a Silicon Graphics workstation, it's still only the Cell that's processing the simulation.
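Purely as illustration of the split being described (solver on the Cell, mesh reconstruction on the client), here is a minimal C++ sketch. Everything in it, the Vertex layout, the count header, the reconstruct_mesh name, is an assumption made for the sketch, not the actual wire format from the paper.

```cpp
// Minimal sketch of the solver/client split: the Cell-side solver streams
// updated vertex positions each frame; the client only reconstructs the
// mesh and draws it. The wire format below (a uint32 vertex count followed
// by packed float3 positions) is hypothetical.
#include <cstdint>
#include <cstring>
#include <vector>

struct Vertex { float x, y, z; };

// Rebuild a frame's worth of vertex positions from a raw byte stream.
// Returns false if the buffer is too short for the advertised vertex count.
bool reconstruct_mesh(const uint8_t* buf, size_t len, std::vector<Vertex>& out) {
    if (len < sizeof(uint32_t)) return false;
    uint32_t count;
    std::memcpy(&count, buf, sizeof(count));          // vertex-count header
    size_t payload = static_cast<size_t>(count) * sizeof(Vertex);
    if (len < sizeof(count) + payload) return false;  // truncated frame
    out.resize(count);
    std::memcpy(out.data(), buf + sizeof(count), payload);
    return true;  // topology (triangle indices) would be static, sent once at startup
}
```

The point of the split is that per-frame traffic is just positions; none of the physics runs on the displaying machine, which is why the choice of Mac, PC, or SGI box as the client shouldn't affect the benchmark.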
Shifty Geezer said:
The rendering of the mesh visuals has no impact on the experiment, which is a cloth simulation solver.

Isn't that similar to procedural synthesis for X2?
"That'll take maybe 5 minutes on a 3.62 GHz P4, whereas if they were working on a Cell workstation, or had a Cell server that they could get to do the work, the preview would be done in 1 minute. Or maybe the difference is a 10 frames-per-second preview on the P4 and a 50 fps preview with a Cell instead."

All the same, 40,000 simulation points is not that lightweight, especially when you're actually dealing with self-intersection. Hell, if it's achieving the hypothetical 50 fps you pose on that many points, that's actually approaching the performance of AGEIA's PPU. I'd like to know more about how it actually performed. Collision against other objects is the easy part, self-collision is so-so, but getting it all to look good means doing it all on a hell of a lot of points, and that's the killer.
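For a sense of scale on those 40,000 points: the integration step itself is cheap and linear in the point count, so the cost lives almost entirely in the self-collision pass. A minimal Verlet mass-spring sketch, with made-up constants and none of the demo's actual code:

```cpp
// Why 40,000 simulation points is non-trivial: this plain Verlet update is
// not the solver from the paper, just an illustration that integration is
// cheap per point while self-collision (see the comment at the end) is what
// blows the budget. All constants are invented for the sketch.
#include <vector>

struct Point {
    float x, y, z;     // current position
    float px, py, pz;  // previous position (Verlet state)
};

void integrate(std::vector<Point>& pts, float dt) {
    const float gy = -9.8f * dt * dt;  // gravity term, units assumed
    for (Point& p : pts) {
        // Verlet: next = 2*current - previous + accel*dt^2
        float nx = 2.0f * p.x - p.px;
        float ny = 2.0f * p.y - p.py + gy;
        float nz = 2.0f * p.z - p.pz;
        p.px = p.x; p.py = p.y; p.pz = p.z;
        p.x = nx;  p.y = ny;  p.z = nz;
    }
    // This loop is O(n) and vectorizes trivially, which is why raw point
    // count alone isn't the killer. Self-collision is: a naive pairwise test
    // over 40,000 points is roughly 800 million pairs per step, so a real
    // solver needs a spatial hash or BVH to prune it, and that's where an
    // SPE-friendly data layout starts to matter.
}
```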
ShootMyMonkey said:
But if the performance is decent with that kind of granularity on not-so-well-optimized code, I think that bodes well.