GPU<->CPU interconnect...what's possible?

DeanoC said:
And we're still talking about 2D audio; add in a 3D sound system with a decent HRTF and watch those FLOPs disappear.

Do you think there is room for more advanced sound algorithms on PS3 than what is available on the PC via (Creative) sound cards? Do you think that might be common?
 
DeanoC said:
And equally don't overestimate an SPE's power or underestimate how hard good audio is.

A psychoacoustic decompressor and Fourier mixer with a few simple DSP effects (basic reverb; if you're going to try a large windowed impulse reverb, for example, expect a lot more processing power to be sucked out) is going to take a fair bit of processing power per channel. Now repeat for, say, a hundred audio channels and see how much change you get from a single SPE...

And we're still talking about 2D audio; add in a 3D sound system with a decent HRTF and watch those FLOPs disappear.

Why would you want to apply independent DSP effects to each of 100 channels individually? Surely a reverb would most likely be applied to the final output once all 100 channels had been mixed. You might want several different outputs with different effects on them (say 4, 8, or maybe 16 tops), but 100? Would anyone even be able to distinguish 100 effects mixed down to a pair of speakers?
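For what it's worth, the cost difference can be sketched in a few lines. This is a toy comb filter standing in for a real reverb, with made-up channel counts and buffer sizes, just to show why mixing first is so much cheaper (and, for a linear effect like this, identical in output):

```python
import numpy as np

def comb_reverb(x, delay=2205, decay=0.4):
    """Toy comb-filter 'reverb': feed back a delayed, attenuated copy."""
    y = x.copy()
    for i in range(delay, len(y)):
        y[i] += decay * y[i - delay]
    return y

frames = 4410                            # 100 ms at 44.1 kHz
channels = np.random.randn(100, frames)  # 100 independent audio channels

# Effect per channel: 100 reverb passes, then sum.
expensive = sum(comb_reverb(c) for c in channels)

# Mix first, then one reverb pass: roughly 1/100th of the effect cost,
# and because a comb filter is linear the result is identical.
cheap = comb_reverb(channels.sum(axis=0))
```

Non-linear or genuinely per-source effects (a distinct HRTF per voice, for instance) can't be hoisted past the mix like this, which is exactly why 3D audio eats so many more FLOPs.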
 
Nerve-Damage said:
That shouldn't be a problem for PS3, since it's not bound by a dedicated sound chip. PS3 can offer many different sound media (formats) via firmware or OS updates with the latest audio codecs.

Another great aspect of the Cell processor that I'm looking forward to is the true real-time representation of individual sounds that can be applied to each individual object or multi-faceted environment setting. "The Cell leaves demo," anyone?...

I doubt it will be a problem; after all, PS2 does DTS 4.1 in GTA:SA. So if the EE can do at least 4.1 while running a game as taxing as GTA, then one SPE should do full 6.1 with no problem at all.
 
_phil_ said:
Rather than tessellation or displacement mapping (which isn't really good unless you go to micropolygons), I'd want to see material-based destructible environments, where chunks are cleverly created on the fly in relation to the material they're made of.

This idea sounds really cool.

I've been thinking about this "procedural" stuff and maybe some developers are already having a go at this.

Looking at Evolution Studios' MotorStorm... a moment please...

Real or not, it's the target of their efforts, so let's assume they can hit or come very close to it... or assume they're liars and read on for the sake of argument... moving on...

When looking at MotorStorm, I'm thinking all that mud slinging around is, or would be, procedurally generated sprites/alpha textures/whatever with some simplistic physics on them for proper trajectory (the mud "clips" the yellow truck in the pic, so collision detection isn't or wasn't there, as is typical for particles... I think). I also wonder if this could be used to create all that fine debris when the house exploded. It seems the regular geometry blows up like normal and then some procedural stuff fills in the fine details.

A method like this could finally make for a proper "splash" when running through a puddle or swimming across a body of water... if I'm on to something, that is.

--------------

MotorStorm again and the "tracks" the cars leave behind in the mud...

I've been wanting to ask for a while whether this was possible via displacement mapping. I was thinking you have a displacement map of a "trough", and as a wheel runs over the textured low-poly "ground", the geometry of this trough is tessellated in behind where it runs. An algorithm handles slightly altering just how the trough is tessellated in every frame, or perhaps alters it only once every few frames, but the result is troughs in the ground behind the vehicles.

Impossible/Possible?
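Whether or not any real game works this way, the trough idea is easy to picture as a heightfield edit: keep the terrain as a grid of heights and stamp a trough-shaped displacement patch into it behind each wheel; the renderer then tessellates the modified heightfield as usual. A hypothetical sketch, with the shapes and sizes invented:

```python
import numpy as np

terrain = np.zeros((256, 256), dtype=np.float32)  # ground as a heightfield

# Trough profile: a 5x5 depression, deepest in the middle (made-up shape).
trough = (-0.1 * np.outer(np.hanning(5), np.hanning(5))).astype(np.float32)

def stamp_trough(heights, row, col):
    """Lower the terrain under the wheel, keeping the deepest rut so far."""
    h, w = trough.shape
    patch = heights[row:row + h, col:col + w]
    heights[row:row + h, col:col + w] = np.minimum(patch, trough)

# A wheel driving along row 100 leaves a persistent rut behind it.
for col in range(0, 200, 2):
    stamp_trough(terrain, 100, col)
```

Because the stamp only ever deepens the heightfield, nothing needs re-evaluating every frame, which sidesteps the cost of continually re-tessellating.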

-----------------------------------------------

The cloth sim in NBA 2K6 and DOA4 on the X360...

I think this is a product of the CPU<->GPU interconnect between Xenon and Xenos. Xenon can handle the physics, and Xenos will have no problems with the mesh density of the cloth, but is it right to think it is the bandwidth between Xenon and Xenos that makes it come together here? I'm thinking that without the bandwidth, Xenon and Xenos would be "waiting" on each other a lot, and the resulting cloth sim would be nowhere near as smooth (not that it can't and won't get better in other games).

Try again?
 
!eVo!-X Ant UK said:
I just want something to finally make my Denon 3805 shine, as I'm sick of 5.1 and PLII.
But technically speaking, there should be very little (read: no) advantage to using DTS (ES or not) over DD in games. Sure, we all want it for the movies, but that is something else and will most likely (and hopefully) be a pass-through function (i.e. the PS3 does not decode the DTS stream; it passes it to your receiver for decoding).

I just want to throw this in, but there may be some limitations I am not aware of. The main reasons why DTS sounds better on DVD are:

1. DTS is the native format of many movies. Therefore, we can imagine that the re-encoding (digital theater DTS is much higher bandwidth) may be better.

2. DTS on DVD uses a much higher bitrate than Dolby Digital. DD is part of the DVD spec (AC3) and DTS is not. That means that DD is limited in the bitrate it can use while DTS is technically an unrestrained add-on.

So, if you are talking about gaming, then there really should not be a huge difference, because DD can use a higher bit-rate and still be properly decoded by an external receiver. The limit we are used to, and may find fault with, is imposed by the DVD standard (although it is true that DD is less capable of making good use of high bandwidth; it is optimized for low bandwidth). I would think raw PCM would be the desired format for games.
 
wireframe said:
But technically speaking, there should be very little (read: no) advantage to using DTS (ES or not) over DD in games. Sure, we all want it for the movies, but that is something else and will most likely (and hopefully) be a pass-through function (i.e. the PS3 does not decode the DTS stream; it passes it to your receiver for decoding).

I just want to throw this in, but there may be some limitations I am not aware of. The main reasons why DTS sounds better on DVD are:

1. DTS is the native format of many movies. Therefore, we can imagine that the re-encoding (digital theater DTS is much higher bandwidth) may be better.

2. DTS on DVD uses a much higher bitrate than Dolby Digital. DD is part of the DVD spec (AC3) and DTS is not. That means that DD is limited in the bitrate it can use while DTS is technically an unrestrained add-on.

So, if you are talking about gaming, then there really should not be a huge difference, because DD can use a higher bit-rate and still be properly decoded by an external receiver. The limit we are used to, and may find fault with, is imposed by the DVD standard (although it is true that DD is less capable of making good use of high bandwidth; it is optimized for low bandwidth). I would think raw PCM would be the desired format for games.

It's true that DTS does indeed use a higher bit-rate, resulting in the LFE channel being so much more controlled in DTS tracks compared to the DD version. But the main reason I mentioned DTS-ES is that, unlike DD-EX, it is the first truly discrete 6.1 sound format. And I don't know if you have the facilities for 6.1, but the amount of extra atmosphere that one extra channel creates is amazing. Take Blade 2, for example: watch the DD 5.1 version, then listen to the DTS-ES track, and you'll hear how much of a difference the extra discrete channel makes.

Games, particularly ones with a great music score, and first-person shooters would greatly benefit from the extra rear channel.
 
scificube said:
When looking at MotorStorm, I'm thinking all that mud slinging around is, or would be, procedurally generated sprites/alpha textures/whatever with some simplistic physics on them for proper trajectory (the mud "clips" the yellow truck in the pic, so collision detection isn't or wasn't there, as is typical for particles... I think).

I'd guess particle systems with some fluid simulation could make the effects in the trailer possible. I was slightly reminded of the AGEIA fluid demos with the car - imagine a more viscous fluid that just didn't roll off the car so easily, and imagine a decent mud look ;) A mix of sprites and geometric particles might work well. Since Cell is supposed to be so (relatively) good at fluid simulation, a game like this would be screaming out to use it in such a manner.

Just a layman's observation though ;)
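For the curious, the guess above amounts to very little code per particle; "mud" versus "spray" is mostly a matter of how much gravity and drag you apply. A layman-level sketch with invented numbers, nothing like anyone's actual engine:

```python
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])

class MudEmitter:
    """Spray particles off a wheel; heavy drag makes them behave 'muddy'."""
    def __init__(self, n=700, drag=0.9):
        self.pos = np.zeros((n, 3))
        # Random initial velocities, biased upward and backward off the tire.
        self.vel = np.random.randn(n, 3) * 2.0 + np.array([0.0, 5.0, -3.0])
        self.drag = drag

    def step(self, dt=1 / 60):
        self.vel += GRAVITY * dt
        self.vel *= self.drag ** dt      # viscous drag, frame-rate independent
        self.pos += self.vel * dt
        # Crude ground collision: clamp to y=0 and kill the bounce.
        below = self.pos[:, 1] < 0.0
        self.pos[below, 1] = 0.0
        self.vel[below] = 0.0

emitter = MudEmitter()
for _ in range(120):                     # simulate two seconds at 60 fps
    emitter.step()
```

Real mud would also want sprite rendering, collision against the vehicle, and probably SPH-style fluid forces; this is only the skeleton such systems share.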

scificube said:
I also wonder if this could be used to create all that fine debris when the house exploded. It seems the regular geometry blows up like normal and then some procedural stuff fills in the fine details.

Probably particle systems again, for the finer bits.
 
Titanio:

Good that you should mention Ageia :)

I've been trying to upload the Ageia interview Nvnews did back at E3.

Anyway, in the vid the Ageia fellow spoke about the fluid demos many of us have seen in the bink videos.

The fellow stated that at that time at E3 the emitters were only spitting out 700 physically interactive particles. It still looked good to me, though, and there were several emitters. He went on to state that they would improve their drivers so that it would be possible to have 20-30 thousand particles per emitter. He again used the tone that their PPU is very comparable to Cell, although he does speak as if Cell is more powerful.

Ok...

Back to MotorStorm (it's a good example...find another I can use if it bothers anyone)...

If each tire can be thought of as an emitter, then it would seem Cell has enough gas to push the particles in the scene. As I don't think there are anywhere near 20 thousand particles spraying off any individual tire in the vid, it doesn't seem unfair to say Cell has some gas left over to burn elsewhere.

I'm just saying this may be some indirect evidence that the particle-driven mud can be pulled off in a next-gen game in real time (with a little back-of-the-hand math and common practices with respect to visuals).
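That back-of-the-hand math, written out with the figures quoted from the interview (all of these numbers are assumptions for illustration, not measurements):

```python
# Back-of-envelope particle budget using the figures quoted above.
particles_per_emitter = 20_000      # low end of the claimed driver improvement
tires_per_vehicle = 4               # one emitter per tire (assumption)
vehicles = 8                        # invented field size

budget = particles_per_emitter * tires_per_vehicle * vehicles
print(budget)                       # total particles the claim would allow

# If visible mud spray only needs, say, ~2,000 particles per tire...
needed = 2_000 * tires_per_vehicle * vehicles
print(f"headroom: {budget // needed}x")
```

On those assumptions the claimed capacity outruns the visible need by an order of magnitude, which is all the "indirect evidence" amounts to.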

-------------------------------------

In the same interview, the fellow showed a game in development by Artificial Studios (the Reality Engine fellows) where the particle-driven smoke from explosions was made physically interactable. He describes how a player or NPC would run through the smoke and it would whip around them as they disturbed the air. The video, of course, was with the same beta driver that handicapped the HW, so the gameplay does chug a bit... ok... it chugged a lot, but hopefully that was back then and there is a new driver available now. It'd be nice to know how good PhysX currently is relative to both next-gen consoles.

Seems like evidence of physics-based particles to me.

------------------------------------

You can find the bink videos and anything else PPU-related here:

http://personal.inet.fi/atk/kjh2348fs/ageia_physx.html

and the video I'm referring to is Ageia.wmv, an interview with an Ageia fellow at E3... won't upload... can't find it again easily... me = lazy.
 
Jawed said:
It's arguable, for example, that Xenos's two-pass tessellation functionality is a bit of a hack (since it requires data be written to memory and then re-read shortly afterwards). A one-pass Cell-implemented tessellation algorithm could work more efficiently. Who knows, eh?

I wouldn't consider that one-pass either, because you'd have to take the HOS geometry, load it into the SPE, generate tessellated meshes, write them out to memory, then load them into RSX and process them like any other bit of geometry. Even if you could send it from the SPE directly to RSX, it'd just as well defeat the main purpose of displacement mapping, which is to 'compress' geometry. Unless you'd also do the skinning on the SPE, but that'd be complicated, and you'd still have to transform a lot of the geometry on the RSX.

Ideally, displacement should work like this:
- apply deformations like skinning, morphing etc.
- transform geometry into screen space
- occlusion and backface culling, hidden surface removal whatever etc.
(these might be in a different order, I'm no coder... and PRMan also performs shading before hidden surface removal because displacement could move unseen vertices as well)
- perform view-dependent tessellation and then displacement
- perform shading and rasterization
So the GPU should do as much as possible with the pre-tessellation low-res geometry, including moving it around on its external buses. This is what would allow rendering a few orders of magnitude more detail. Once you tessellate on the CPU, you'll be limited by the external bus of the GPU, which seems to be the case on both the X360 and PS3. Then again, this whole approach would require a radically different hardware pipeline that could deal with a sudden increase of vertex data partway through the rendering process... It'd probably have to resort to bucket (tile) rendering, for example.
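The ordering argument above can be condensed into pseudocode (every function name here is invented; this is the idealized pipeline, not any real API):

```
def render_displaced(mesh, disp_map, camera):
    mesh = apply_skinning_and_morphs(mesh)           # deform the low-res control mesh
    mesh = transform_to_screen_space(mesh, camera)
    mesh = cull_hidden(mesh)                         # still cheap: low-res geometry
    dense = tessellate_view_dependent(mesh, camera)  # vertex count explodes only now
    dense = displace(dense, disp_map)                # displacement after tessellation
    return shade_and_rasterize(dense)
```

The whole point is that everything crossing the GPU's external bus happens before the tessellation step.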

edit: I also have to add that the quality of the displacement is heavily dependent on the number of vertices; it could look very ugly on meshes under ~50,000 triangles.

Still, even the foreseeable implementations of displacement could add a lot of silhouette detail to next-gen game geometry. It should also be used with parallax mapping on all the surfaces that don't have visible edges, to keep the geometry load efficient, which means even more work for developers and artists; so don't expect wide use of it before second- or even third-generation games...
 
!eVo!-X Ant UK said:
RSX shades a texture and applies the standard bump maps, parallax maps, etc... then whips it straight to VRAM/main RAM. Then Cell adds further shading plus any post-processing you can think of.

In a fighting game, RSX renders the scene, then whips it straight to Cell for some lighting and post-process effects. Just imagine the lighting on the next-gen Tekken...

I think you should read up on how 3D graphics actually work before you get into speculation...
 
scificube said:
I've been wanting to ask for a while whether this was possible via displacement mapping. I was thinking you have a displacement map of a "trough", and as a wheel runs over the textured low-poly "ground", the geometry of this trough is tessellated in behind where it runs. An algorithm handles slightly altering just how the trough is tessellated in every frame, or perhaps alters it only once every few frames, but the result is troughs in the ground behind the vehicles.

Impossible/Possible?

With a gazillion vertices only... and changing tessellation parameters on the fly would cause the terrain to flicker all around...
 
scificube said:
Back to MotorStorm (it's a good example...find another I can use if it bothers anyone)...

Another interesting example to look at might be that updated Mobile Suit Gundam "target video" shown at TGS. There's some pretty amazing-looking deformation and particle effects when the robot crashes into the side of the building. Granted, it's a cutscene, but it seems to be based on the same assets used in the real-time demo they had earlier in the year.

http://games.kikizo.com/news/200510/059.asp
(Second from the top, you have to click the link and then right click/save as on the bottom)
 
Laa-Yosh said:
I wouldn't consider that one-pass either, because you'd have to take the HOS geometry, load it into the SPE, generate tessellated meshes, write them out to memory, then load them into RSX and process them like any other bit of geometry. Even if you could send it from the SPE directly to RSX, it'd just as well defeat the main purpose of displacement mapping, which is to 'compress' geometry. Unless you'd also do the skinning on the SPE, but that'd be complicated, and you'd still have to transform a lot of the geometry on the RSX.
I'll be honest and say I don't understand what skinning consists of and whether there's any fixed-function hardware on a GPU that's involved.

In terms of tessellation, I was specifically thinking of an adaptive process - tweaking model LOD according to distance from camera, deleting polys as well as adding them.

I think the key point with tessellation is not to "get more polys" rendered, but to get more of the right density of polys in the scene, according to perspective. So I would imagine that a game using adaptive tessellation would be rendering the same number of polys per frame as a game without. But the game with this would look far far better because all the detail is in the right places, closer to the camera.

With PS3, if we presume that RSX~G70, then there's no real support for adaptive tessellation, so that particular process seems destined for Cell.

I'm still learning about this stuff.

Jawed
 
Laa-Yosh said:
With a gazillion of vertices only... and changing tesselation parameters on the fly would cause the terrain to flicker all around...

Once the deformation had been made, though, why would you need to keep updating it?

I guess that's a roundabout way of asking - could the CPU not deform the "mud" geometry appropriately, be it through the method discussed above or otherwise?
 
Laa-Yosh said:
I think you should read up on how 3D graphics actually work before you get into speculation...

It's already been stated by Sony that it could indeed work like that on PS3.

David Kirk: SPE and RSX can work together. SPE can preprocess graphics data in the main memory or postprocess rendering results sent from RSX.

David Kirk: Post-effects such as motion blur, simulation for depth of field, bloom effect in HDR rendering, can be done by SPE processing RSX-rendered results.

Nishikawa's speculation: RSX renders a scene in the main RAM then SPEs add effects to frames in it. Or, you can synthesize SPE-created frames with an RSX-rendered frame.
 
Jawed said:
I'll be honest and say I don't understand what skinning consists of and whether there's any fixed-function hardware on a GPU that's involved.

You have a 'skeleton' built from 'bones' that moves the model's vertices individually. Each bone has a transformation matrix and passes that info to the vertices. You generally have some vertices deformed by only one bone in places like the head; then you have vertices that blend multiple transformations, near the joints of a character. You generally wish to have the ability to work with at least 4 bones per vertex, but sometimes more is needed.
You normally need bones for each limb, each finger joint, and a few for the back. However, simple skinning won't be convincing enough for detailed characters (think fighting games), so you'll need at least some helper bones to correct deformations; that might be called 'muscle simulation', although it's far from that... Complex facial animation systems may use dozens of bones for the face alone as well.

Any coder here should be able to give insight into the number of extra vertex shader instructions involved in skinning. The key thing is that if you skin a tessellated model, these extra instructions beyond regular T&L will be multiplied as well, which means a huge increase in processing time compared to tessellating an already-skinned model. It'd also affect the weighting process (setting bone weights for each vertex): artists would kill if they had to work with a very dense model, and the weights also add to the memory and bandwidth requirements of the model. You'd not be able to skin an adaptively tessellated model this way either, unless the tessellator also interpolated the bone weights.
Bottom line is: you absolutely want to skin before tessellation.
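To make the per-vertex cost concrete, here is plain linear blend skinning, the standard "up to 4 bones per vertex" scheme described above (a CPU-side sketch, not PS3- or 360-specific code):

```python
import numpy as np

def skin_vertex(position, bone_matrices, bone_indices, weights):
    """Linear blend skinning: blend up to 4 bone transforms per vertex."""
    p = np.append(position, 1.0)            # homogeneous coordinate
    out = np.zeros(4)
    for idx, w in zip(bone_indices, weights):
        out += w * (bone_matrices[idx] @ p)  # one matrix multiply per bone
    return out[:3]

# Toy rig with two bones: identity, and a translation of +1 along x.
translate_x = np.eye(4)
translate_x[0, 3] = 1.0
bones = np.stack([np.eye(4), translate_x])

# A vertex at the origin near a joint, weighted half-and-half:
v = skin_vertex(np.array([0.0, 0.0, 0.0]), bones, [0, 1], [0.5, 0.5])
```

Every tessellated vertex repeats those per-bone matrix multiplies, which is exactly the multiplication of work described above; skinning the low-res mesh first pays that cost only once per control vertex.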


In terms of tessellation, I was specifically thinking of an adaptive process - tweaking model LOD according to distance from camera, deleting polys as well as adding them.

The obvious problem with adaptive tessellation is that as the model moves, the tessellation changes, and it'd be very, very noticeable. Either you increase the levels globally (i.e. tessellate each poly into n, 2n, 3n polys), which is somewhat easier to hide (based on distance from the camera) but inefficient; or you use a complex algorithm depending on edge length etc. (like PRMan). The latter is very tricky to get right, and changing the number of polys without it being noticed is hard; I doubt it could be used this gen.
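The edge-length criterion usually boils down to: project each edge into screen space and split until no piece is longer than some pixel threshold. A rough sketch of that decision, where the projection constant and threshold are made up:

```python
import math

def tess_level(p0, p1, edge_depth, focal=1000.0, max_px=8.0):
    """How many segments an edge needs so no piece exceeds max_px on screen.

    p0, p1: edge endpoints in 3D; focal: invented pinhole-projection constant.
    """
    length = math.dist(p0, p1)
    # Approximate projected size: world length scaled by focal / depth.
    projected_px = length * focal / max(edge_depth, 1e-6)
    return max(1, math.ceil(projected_px / max_px))

# The same 2-unit edge needs many segments up close...
near = tess_level((0, 0, 10), (2, 0, 10), edge_depth=10)
# ...and almost none far away.
far = tess_level((0, 0, 500), (2, 0, 500), edge_depth=500)
```

Since the returned level jumps as depth changes from frame to frame, hiding those transitions is exactly the hard part being described.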

I think the key point with tessellation is not to "get more polys" rendered, but to get more of the right density of polys in the scene, according to perspective. So I would imagine that a game using adaptive tessellation would be rendering the same number of polys per frame as a game without. But the game with this would look far far better because all the detail is in the right places, closer to the camera.

You will need more polygons for more detail... tessellation can only smooth your existing curves; it won't create extra detail on its own. You can't build a face from a cube with tessellation only, and you especially can't animate it... the detail has to be there.
 