"Cell Technology for Graphics and Visualization"

scatteh316 said:
So it looks like that Cell could actually do the FSAA in PS3. Hmmmmm it just keeps on getting better and better :)

Unfortunately, using RSX to rasterise might complicate things ;)

Also, MSAA != FSAA.
 
I'd like to know how their adaptive sampling worked. Did they sample more in areas that needed more sampling (higher-contrast edges), or did it scale down with more complex images, or what? I don't know if there's scope for edge-only AA functions. RSX could render the relevant pixels in higher fidelity and have an SPE sample them, or something weird like that. Maybe.
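For what it's worth, one way an edge-only AA pass could work is a contrast-driven second pass: shade every pixel once, then re-shade only the pixels that differ sharply from a neighbour. Here's a toy Python sketch of that general idea (purely speculative, not the demo's actual method; `shade` is a made-up stand-in for a real renderer):

```python
# Edge-adaptive supersampling sketch: extra samples are spent only
# where neighbouring pixels differ strongly. Purely illustrative.

def shade(x, y):
    """Toy scene: a hard diagonal edge between two flat colours."""
    return 1.0 if x > y else 0.0

def render(width, height, threshold=0.5):
    # First pass: one sample per pixel, at the pixel centre.
    base = [[shade(x + 0.5, y + 0.5) for x in range(width)]
            for y in range(height)]
    image = [row[:] for row in base]
    resampled = 0
    for y in range(height):
        for x in range(width):
            # Contrast test against the 4-neighbourhood.
            neighbours = [base[ny][nx]
                          for nx, ny in ((x - 1, y), (x + 1, y),
                                         (x, y - 1), (x, y + 1))
                          if 0 <= nx < width and 0 <= ny < height]
            if any(abs(base[y][x] - n) > threshold for n in neighbours):
                # Edge pixel: average four jittered sub-samples instead.
                offsets = ((0.25, 0.25), (0.75, 0.25),
                           (0.25, 0.75), (0.75, 0.75))
                image[y][x] = sum(shade(x + ox, y + oy)
                                  for ox, oy in offsets) / len(offsets)
                resampled += 1
    return image, resampled
```

On the toy diagonal-edge scene, only the pixels straddling the edge get the extra samples, so the cost scales with edge length rather than screen area.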
 
The raycaster just renders a landscape defined by a heightfield and texture, that is all it does ... the multisampling is not what you think it is.
 
It doesn't matter ... it can just render pretty anti-aliased landscapes, let's just stick to that.
 
MfA said:
The raycaster just renders a landscape defined by a heightfield and texture, that is all it does ... the multisampling is not what you think it is.

Hi MfA,
I thought it sounded too good to be the first raytracing CPU. How does the complexity of what they demonstrated compare to, say, a typical room in an FPS? Reckon Cell + RSX can do raytracing for a typical scene in an FPS?
 
I would say that the data for objects was always readily available in the demo, as it was created on the fly. Raytracing an actual scene would need far more memory accesses, and that would be the limiting factor, I suppose. Was there no info from Siggraph on true scene raytracing on Cell?
 
How does the complexity of what they demonstrated compare to, say, a typical room in an FPS?
It has nothing to do with scene complexity. Suffice to say that performance implications of this demo do not apply to a general purpose raycaster.
 
Anybody remember this? :
http://firingsquad.com/games/outcastpreview/

Raycast voxel terrain on PCs about 6 years ago.

Still a big difference in resolution of both display and source content, plus having what is most likely HDR probes for lighting, and a more extensive illumination model in general. But overall, the cost of first-hit raycasts on what you know is necessarily a heightfield is really not that expensive. It's nice to know that we can do it, but using it in practice is another matter.

BTW, for those interested --
http://www.flipcode.com/voxtut/
 
I'd never heard of that game. My last encounter with terrain generation was things like Terragen at up to several minutes a frame. :oops:
 
Nite_Hawk said:
Hi NAP,

Welcome to the board. :) I think we all were expecting Cell to be good at doing physics and possibly AI. Seeing graphics demos on it like this is just amazing in my opinion though.

Nite_Hawk
Thanks Nite_Hawk. In my opinion, though, the collaboration between Cell and RSX will make the difference in the next console war from a graphics point of view. The Cell processor is able to generate a high level of graphics, and thanks to that collaboration the difference between the X360 and PS3 will be marked enough, even if the X360 can rely on its EDRAM, which is very good for free antialiasing at 720p. This is obviously just IMHO.


P.S. Forgive me if there are any mistakes.
 
Shifty Geezer said:
I would say that the data for objects was always readily available in the demo, as it was created on the fly. Raytracing an actual scene would need far more memory accesses, and that would be the limiting factor, I suppose. Was there no info from Siggraph on true scene raytracing on Cell?
Since the Cell raytracing demo was from the same people who develop OpenRT, I assume the scene was static. The biggest problem right now with real-time ray tracing is handling dynamic objects. Their raytracer works with a KD-tree, and it takes minutes to build the tree: too long to do every frame. There are probably some shortcuts they can take, but nothing general-purpose enough for games at the moment.
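To illustrate why the build is the bottleneck: even a bare-bones median-split KD-tree (a much simpler affair than OpenRT's real builder, which partitions triangles with cost heuristics, so treat this as a sketch of the general technique only) does a sort at every level of the recursion, so a full rebuild is on the order of n log² n work over the whole scene every time anything moves:

```python
# Bare-bones median-split KD-tree over 3-D points. A sketch of the
# general technique only; a production builder over triangles with an
# SAH-style heuristic is far more expensive still.

def build_kdtree(points, depth=0):
    """Recursively split the point set at the median of one axis."""
    if len(points) <= 2:
        return {"leaf": points}
    axis = depth % 3                              # cycle x, y, z
    pts = sorted(points, key=lambda p: p[axis])   # a sort at every level
    mid = len(pts) // 2
    return {
        "axis": axis,
        "split": pts[mid][axis],
        "left": build_kdtree(pts[:mid], depth + 1),
        "right": build_kdtree(pts[mid:], depth + 1),
    }

def tree_depth(node):
    """Depth of the tree, counting the leaf level."""
    if "leaf" in node:
        return 1
    return 1 + max(tree_depth(node["left"]), tree_depth(node["right"]))
```

Doing that over a full game scene every frame is exactly what takes too long; the shortcuts would have to trade tree quality for build time.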
 
I thought it sounded too good to be the first raytracing CPU. How does the complexity of what they demonstrated compare to, say, a typical room in an FPS? Reckon Cell + RSX can do raytracing for a typical scene in an FPS?

Just to elaborate on what Faf said, this is a very simple demonstration of "ray-tracing" (I hesitate to use the term, because it can barely be applied here). All they've done is take height and texture data and combine them. This is easy to parallelize, because it's very easy to partition the data: simply give each SPU one radial slice starting from the camera position. The only thing they have to worry about is a foreground terrain feature blocking a background one, and those features are defined entirely by the height data. (In fact, this is a pretty standard exercise in intro computer graphics classes.)
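To make the foreground-blocks-background point concrete, here's the classic heightfield column march in toy Python form (an illustration of the general technique, not IBM's actual code): walk the ray outward from the camera, project each height sample to a screen row, and only draw the rows that nearer terrain hasn't already covered. Each SPU would simply get its own wedge of such columns.

```python
# Toy heightfield raycast for one screen column. Marches away from the
# camera; a far sample is drawn only where it rises above everything
# already drawn, which is the entire occlusion test for a heightfield.

def raycast_column(heightfield, cam_x, cam_height, screen_height,
                   max_dist=100, scale=40.0):
    """Return, per pixel row, the index of the terrain sample drawn there."""
    column = [None] * screen_height      # None = sky
    horizon = screen_height              # lowest row not yet covered
    for dist in range(1, max_dist):
        hx = (cam_x + dist) % len(heightfield)
        terrain = heightfield[hx]
        # Perspective-project the height sample to a screen row.
        row = int(screen_height / 2 - (terrain - cam_height) * scale / dist)
        row = max(0, min(screen_height, row))
        # Fill only the rows not already covered by nearer terrain.
        for y in range(row, horizon):
            column[y] = hx               # "colour" = sample index
        horizon = min(horizon, row)
        if horizon == 0:
            break                        # column fully covered, stop early
    return column
```

Note that each column is completely independent of every other column, which is what makes the radial-slice partitioning across SPUs so clean.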

This is a far cry from any 3-D game, which has moving objects, reflective surfaces and shadow projection, making it much more difficult to efficiently partition the work across the SPUs.

While the IBM team definitely did a good job of getting high utilization out of CELL for this application, this is definitely a special case, and doesn't provide a very good model for real 3-D games.
 
Yes, when you look at it more closely and get past the raytracing buzzword, it certainly does have its limits for realtime (read: game) situations.
 