Cell and RSX: their relationship

Fafalada said:
ShootMyMonkey said:
In theory, you could create this entirely on the SPE SRAMs and send it off a packet at a time that way, but then footprint becomes the issue. Character packets are small, but not, for instance, terrain packets.
Erhm - there's no 'theory' here, it's been done this way for the last 5 years on PS2, with a measly 16KB of available memory on VU1.
Heck, terrain renderers are probably one of the best showcases of this type of processing on VU1, and SPEs are only more capable.

Damn. Faf is like the Lord Elrond of B3D :p Teach on, man!
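For anyone following along, here is a minimal C++ sketch of the double-buffered packet streaming Faf describes: a tiny local store (16KB on VU1, 256KB on an SPE) holds two packet buffers, one being transformed while the next is transferred in. This is only an illustration, not real SDK code; memcpy stands in for the asynchronous DMA engine, and kick_to_gpu is a hypothetical hand-off.

```cpp
#include <cstddef>
#include <cstring>

constexpr std::size_t PACKET_VERTS = 256;   // sized so two buffers fit in local memory

struct Vertex { float x, y, z, w; };

// Stand-in for the real transform/lighting work done on a packet.
static void transform(Vertex* v, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) v[i].w = 1.0f;
}

// Stream geometry through the small local store, packet by packet.
void stream_terrain(const Vertex* main_mem, std::size_t total) {
    Vertex local[2][PACKET_VERTS];           // double buffer in "local store"
    int cur = 0;
    std::size_t done = 0;
    std::size_t first = total < PACKET_VERTS ? total : PACKET_VERTS;
    std::memcpy(local[cur], main_mem, first * sizeof(Vertex));   // prime buffer 0
    while (done < total) {
        std::size_t n = (total - done < PACKET_VERTS) ? total - done : PACKET_VERTS;
        int nxt = cur ^ 1;
        if (done + n < total) {              // kick the "DMA" for the next packet
            std::size_t m = (total - done - n < PACKET_VERTS) ? total - done - n
                                                              : PACKET_VERTS;
            std::memcpy(local[nxt], main_mem + done + n, m * sizeof(Vertex));
        }
        transform(local[cur], n);            // on real HW this overlaps the transfer
        // kick_to_gpu(local[cur], n);       // hypothetical hand-off to GS/RSX
        done += n;
        cur = nxt;
    }
}
```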
 
So wasn't VU1 mostly used as a T&L engine?

Assuming that task is given to the SPEs in parallel, would RSX be able to keep up with 20GB/s worth of data?

You guys think there's any benefit to this?
 
seismologist said:
So wasn't VU1 mostly used as a T&L engine?

Assuming that task is given to the SPEs in parallel, would RSX be able to keep up with 20GB/s worth of data?

You guys think there's any benefit to this?

I'd imagine the biggest issue would be synchronising the data.

Remember that graphics chips are state machines; all submitting processors have to synchronise in such a way as to guarantee a valid rendering state for each of them.

It's not impossible, but it is potentially difficult to get right and efficient.
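One hedged sketch of what that synchronisation could look like, using made-up types rather than any real PS3 API: every packet carries the complete render state it expects, so work submitted by any producer is self-contained, and a single consumer only touches the GPU's state machine when the state actually changes.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical, self-contained render state; not any real RSX structure.
struct RenderState {
    std::uint32_t shader_id;
    std::uint32_t blend_mode;
    std::uint32_t texture_id;
    bool operator!=(const RenderState& o) const {
        return shader_id != o.shader_id || blend_mode != o.blend_mode ||
               texture_id != o.texture_id;
    }
};

struct Packet {
    RenderState state;       // full state, so packets from any SPE are self-contained
    const void* geometry;
    std::size_t bytes;
};

// Drain the merged queue while keeping the GPU state machine valid.
void consume(const std::vector<Packet>& queue) {
    RenderState current{0xFFFFFFFFu, 0xFFFFFFFFu, 0xFFFFFFFFu};
    for (const Packet& p : queue) {
        if (p.state != current) {
            // set_gpu_state(p.state);   // hypothetical; this is the slow part
            current = p.state;
        }
        // draw(p.geometry, p.bytes);    // hypothetical submit
    }
}
```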
 
Any possibility RSX has been designed with exactly this in mind? nVidia (David Kirk) has been talking up their relationship with Sony, saying it's more than just a business deal but a shared vision. If that's not all hot-air PR, perhaps it points to RSX including some circuitry to help with Cell synching?
 
seismologist said:
So wasn't VU1 mostly used as a T&L engine?
In PC/DirectX terms: VU1 = vertex shader + programmable primitive/topology processor + programmable triangle clipper.
These are the common roles it performs.

ERP said:
I'd imagine the biggest issue would be synchronising the Data.
Well RSX could include multiple render contexts to facilitate this better - and obviously it also needs a nice data arbiter there to even have a chance of working.
That said, until I know more about the ways in which RSX can consume data from SPEs, it's hard to even argue about this.
 
What if you wanted realistically modelled clouds that morph and billow as they pass overhead? The "passing overhead" bit at least wouldn't be too intensive - just recalculate things every few frames or so since the sky is probably slow moving - but if you wanted instantaneous response (for example, when I wave this wand, I want a thunderstorm pronto!), modelling that would require more regular update.
You could do all that simply by modifying the textures going into the skydome rather than the dome itself. And actually, that's one thing I can think of as possible within a pixel shader. You start with a base HDR Perlin noise texture and apply an asymptotic exponential cutoff to it. This affects mainly alpha, but you'd apply a sort of light-occlusion approach based on the alpha values. To go from a primarily sunny sky to a thunderstorm, you simply steepen the exponential cutoff and lerp from a bright blue sky texture to a dark sky texture -- 1 float pixel shader constant to change.
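As a rough illustration of that per-pixel math (on hardware it would be a handful of pixel-shader instructions), here is a CPU-side C++ sketch. The exact cutoff curve, the occlusion factor, and the "storminess" constant are illustrative assumptions, not anything from an actual implementation.

```cpp
#include <algorithm>
#include <cmath>

struct Color { float r, g, b, a; };

static Color lerp(const Color& a, const Color& b, float t) {
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

// noise:      sample from the base HDR Perlin texture at this pixel
// storminess: the single float shader constant; 0 = sunny, 1 = thunderstorm
Color cloud_pixel(float noise, const Color& sunny_sky, const Color& storm_sky,
                  float storminess) {
    // Asymptotic exponential cutoff: steepening it as storminess rises makes
    // thin noise collapse toward clear sky while dense noise saturates.
    float k = 2.0f + 10.0f * storminess;
    float alpha = 1.0f - std::exp(-k * noise);
    // Crude stand-in for the light-occlusion idea: denser cloud darkens the sky.
    float occlusion = 1.0f - 0.6f * alpha * storminess;
    Color sky = lerp(sunny_sky, storm_sky, storminess);
    return { sky.r * occlusion, sky.g * occlusion, sky.b * occlusion,
             std::clamp(alpha, 0.0f, 1.0f) };
}
```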

I'd imagine the biggest issue would be synchronising the data.

Remember that graphics chips are state machines; all submitting processors have to synchronise in such a way as to guarantee a valid rendering state for each of them.
The best thought I can pose to that effect is to just take your packets and sort by shader. Changing shaders and the corresponding states is a pretty disgustingly slow operation. I wouldn't be surprised if UE3 does this, considering they seem to have a "devil-may-care" attitude about the potential explosion of shader variants.
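A minimal sketch of that shader sort, with an assumed 64-bit key (shader id in the high bits, cheaper state below) so one std::sort puts all packets sharing a shader next to each other; the bind/draw calls are hypothetical placeholders.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawPacket {
    std::uint64_t sort_key;   // shader id in the high 32 bits, textures etc. below
    // ... vertex ranges, constants ...
};

void submit_sorted(std::vector<DrawPacket>& packets) {
    std::sort(packets.begin(), packets.end(),
              [](const DrawPacket& a, const DrawPacket& b) {
                  return a.sort_key < b.sort_key;
              });
    std::uint32_t bound_shader = 0xFFFFFFFFu;
    for (const DrawPacket& p : packets) {
        std::uint32_t shader = std::uint32_t(p.sort_key >> 32);
        if (shader != bound_shader) {
            // bind_shader(shader);   // hypothetical; the disgustingly slow part
            bound_shader = shader;
        }
        // draw(p);                   // hypothetical submit
    }
}
```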
 
ShootMyMonkey said:
You could do all that simply by modifying the textures going into the skydome rather than the dome itself.

Quite. I was thinking only in terms of texture manipulation, not of the dome. Although on a side note you could perhaps make those clouds a little more 3D with some geometry too.

ShootMyMonkey said:
And actually, that's one thing I can think of as possible within a pixel shader. You start with a base HDR Perlin noise texture and apply an asymptotic exponential cutoff to it. This affects mainly alpha, but you'd apply a sort of light-occlusion approach based on the alpha values. To go from a primarily sunny sky to a thunderstorm, you simply steepen the exponential cutoff and lerp from a bright blue sky texture to a dark sky texture -- 1 float pixel shader constant to change.

Well yes, it's not exactly the toughest of challenges at a certain level as mentioned, and yes you could do it in a pixel shader. But if you were feeling particularly ambitious about your clouds, you may over-run what is practical within SM3.0 ;) (not that Cell would necessarily provide you with all you could need, but the possibility might be worth pondering).

If cycles were spare on the CPU side and less free on the GPU, one might consider moving even trivial tasks that could be done on the GPU over to the CPU, to free the GPU for "other" stuff. Although if feeding the GPU remains non-trivial, which is possible, it may not be worth it unless you have more ambitious designs. I guess we'll have to wait and see.

(In general, though, I think this is definitely something Sony wants to encourage. Why, for example, allow for the synchronisation of rounding and cutoff between Cell FP work and RSX if such sharing of data and results is not more broadly facilitated? Presumably they feel it's very possible, and it seemed to be a common point of discussion between the Kutaragi/Kirk interviews on the hardware.)
 
The things I would really like to see involve volumetric effects. Indirect lighting, realistic smoke & fire with physics, caustics, and many more: these things are often done using voxels (3D textures in GPU world). I think the speed of these effects can be quite good, provided you maintain a lot of state from previous frames; a good bit of pre-computed data can help too.

In many ways, using more 3D-based effects should allow us to mix ray-tracing techniques with scanline ones.

It's very easy for these effects to start consuming insane amounts of memory, so we can often trade math performance for memory by not preserving as much state and instead recalculating more information on a per-frame basis.

I believe memory capacity has grown significantly; a GPU & CPU combo with 2+ gigs of UMA memory, increased bandwidth, and a flexible API should be able to perform many effects that have until now been exclusive to software rendering.

Xenos seems to be headed in this direction, and Nvidia will likely go this way also. Cell also seems a good fit for this type of work. This generation will see a huge increase in graphical quality, but it only scratches the surface of introducing traditional software-rendering methods. The shader performance seems strong enough, but memory and bandwidth have always been a huge factor in these things.
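To ground that memory-vs-math trade in something concrete, here's a toy C++ sketch of a persistent voxel density grid (the CPU analogue of a 3D texture); the grid size, the upward advection, and the decay rate are all illustrative assumptions. Keeping last frame's grid costs memory but makes each frame an incremental update; dropping it would mean rebuilding the density from scratch every frame.

```cpp
#include <cstddef>
#include <vector>

constexpr int N = 64;   // 64^3 floats = 1 MB per field; doubles with the scratch copy

inline std::size_t idx(int x, int y, int z) {
    return (std::size_t(z) * N + std::size_t(y)) * N + std::size_t(x);
}

struct SmokeVolume {
    std::vector<float> density = std::vector<float>(std::size_t(N) * N * N, 0.0f);

    // One incremental frame step, reusing the previous frame's state:
    // density drifts up one cell and decays slightly.
    void step(float dt) {
        std::vector<float> next(density.size(), 0.0f);
        for (int z = 0; z < N; ++z)
            for (int y = 1; y < N; ++y)
                for (int x = 0; x < N; ++x)
                    next[idx(x, y, z)] = density[idx(x, y - 1, z)] * (1.0f - 0.1f * dt);
        density.swap(next);
    }
};
```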
 
Shrouded in mystery

I think that the relationship between Cell and RSX will be quite interesting. The graphics processing will rely mainly on Cell and not on RSX.

It has been mentioned often on these forums that Sony wanted a Cell-only system and they probably couldn't do it, so they decided to ditch the second Cell processor and slap on a GPU. I think what probably happened is that they succeeded in making a Cell-based system; however, Sony wanted to make cross-platform games much easier to port. So they replaced one Cell with a GPU.

Cross-platform games will be easy to port because developers can ignore all of Cell except the PPE, which is the same as the X360's. So developers, if they wish, can focus on a single GPU-and-CPU setup. This will also allow small developers to easily make decent games.

The games that are PS3 exclusives will rely more on Cell for graphics than on the GPU. I think these games will treat the GPU as another SPE that gets special attention. That, to me, explains why there is a 35GB/s direct connection between the GPU and CPU.

Also, I think Sony chose Nvidia instead of ATI because Nvidia has good OpenGL performance and can give them the tools they need to make programming for Cell a cinch.

Basically, the methodology for the X360 is that the GPU is the center of attention and will "get 'er done," and the methodology for the PS3 is that Cell is the center of attention and will "get 'er done."

my 2 cents
 
flick556 said:
The things I would really like to see involve volumetric effects. Indirect lighting, realistic smoke & fire with physics, caustics, and many more: these things are often done using voxels (3D textures in GPU world). I think the speed of these effects can be quite good, provided you maintain a lot of state from previous frames; a good bit of pre-computed data can help too.

could procedural geometry fit this bill?
 
blakjedi said:
flick556 said:
The things I would really like to see involve volumetric effects. Indirect lighting, realistic smoke & fire with physics, caustics, and many more: these things are often done using voxels (3D textures in GPU world). I think the speed of these effects can be quite good, provided you maintain a lot of state from previous frames; a good bit of pre-computed data can help too.

could procedural geometry fit this bill?

Yes, if the performance is high enough.
 
Re: Shrouded in mystery

NaMo4184 said:
It has been mentioned often on these forums that Sony wanted a Cell-only system and they probably couldn't do it, so they decided to ditch the second Cell processor and slap on a GPU.
It has been mentioned a lot of times, but is it true? IMHO, NO! :)
At this time, no one can forgo a specialized IC devoted to graphics if they want to stay competitive.
Since CELL was not designed to be competitive with a GPU, I can't see how Sony could have thought of employing another CELL chip instead of a GPU; it doesn't make sense.
 
flick556 said:
The things I would really like to see involve volumetric effects. Indirect lighting, realistic smoke & fire with physics, caustics, and many more: these things are often done using voxels
As well as this, expect the Sony platform to offer raytraced reflections. After all, we keep hearing how what they produce is all smoke and mirrors :p
 
Re: Shrouded in mystery

nAo said:
NaMo4184 said:
It has been mentioned often on these forums that Sony wanted a Cell-only system and they probably couldn't do it, so they decided to ditch the second Cell processor and slap on a GPU.
It has been mentioned a lot of times, but is it true? IMHO, NO! :)
At this time, no one can forgo a specialized IC devoted to graphics if they want to stay competitive.
Since CELL was not designed to be competitive with a GPU, I can't see how Sony could have thought of employing another CELL chip instead of a GPU; it doesn't make sense.
nAo, thank you for your reply! This forum is kind of intimidating, so I was expecting to be ignored or torn apart for my post.

I've lurked on these forums for a while, so everything I know about graphics hardware is basically from this forum. I read here that many developers claim they are doing entire game demos on Cell, with HDR and everything. But everyone thinks they aren't telling the truth? There isn't that much information on how Cell works, so I don't see why devs who have used it are discounted so easily. Maybe Cell is $3 billion well spent.

They even said it was easy.
 
Re: Shrouded in mystery

NaMo4184 said:
Maybe Cell is $3 billion well spent.

Have to emphasize: ~$600 million on R&D, ~$3 billion on fabs. The fabs will have some utility regardless of what chip they are put to work on, or in the worst-case scenario provide a means of recouping cash through their sale. ;)

So I personally think the investment in Cell has been a bold, but sound move for Sony.
 
[Attached image: rsx.JPG]
 
Re: Shrouded in mystery

NaMo4184 said:
It has been mentioned often on these forums that Sony wanted a Cell-only system and they probably couldn't do it, so they decided to ditch the second Cell processor and slap on a GPU. I think what probably happened is that they succeeded in making a Cell-based system; however, Sony wanted to make cross-platform games much easier to port. So they replaced one Cell with a GPU.

No, what happened was that Cell was originally designed to push polygons, but wasn't nearly as good as PC chipsets at pixel-shading-type operations. They had to go to Nvidia to be competitive with the ATI-equipped 360 in all areas, instead of just blowing them away with geometry alone.

But I'm sure that Nvidia's expertise in memory controllers, APIs, and shader languages was a major factor in why Sony went with them instead of ATI. I think Sony actually got more than they expected from the Nvidia deal.
 
Just to point out, what Powderkeg has written is generally accepted as the course of events, BUT it's not confirmed or official. It's speculation as to why Sony went with nVidia instead of their own system. For all we know (though it's not likely), their Cell-based GPU solution was on par with anything from ATi or nVidia, but Sony chose nVidia's tools strength, having learned from the PS2 that power is useless if it can't be tapped.
 
Shifty Geezer said:
For all we know (though it's not likely), their Cell-based GPU solution was on par with anything from ATi or nVidia, but Sony chose nVidia's tools strength, having learned from the PS2 that power is useless if it can't be tapped.
A CELL-based GPU solution has never existed.
 