GRAW2's Ageia Island video

The main problem with the AGEIA hardware is that you have to move the whole scene across the slow PCI bus twice, and process it twice as well. That's where quad-core CPUs with improved vector support (SSE4) and DX10 GPUs (geometry shaders) will score big time.


You don't have to move everything twice, just the stuff that's moving.
Why does it need to be processed twice?
 
No, they weren't.
Agreed. Software renderers had no filtering at all, and typically rendered only in 8-bit format using palettized textures. They also typically topped out at VGA-or-less resolutions regardless of performance. Hardware rendering had at least bilinear filtering, usually rendered in at least 16-bit format, and offered many more (and typically higher) resolutions than the software option.
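
For anyone who hasn't seen the difference in code, here is roughly what point sampling versus bilinear filtering looks like. This is only an illustrative sketch; the texture layout and names are made up, and u/v are assumed to be in [0,1).

    #include <cmath>
    #include <cstdint>

    struct Texture {
        int width, height;
        const uint8_t* luma; // one 8-bit channel, row-major, for simplicity
    };

    // Software-style point sampling: pick the single nearest texel.
    uint8_t sample_nearest(const Texture& t, float u, float v) {
        int x = (int)(u * t.width) % t.width;
        int y = (int)(v * t.height) % t.height;
        return t.luma[y * t.width + x];
    }

    // Hardware-style bilinear filtering: blend the four surrounding
    // texels by their fractional distances, smoothing out blockiness.
    uint8_t sample_bilinear(const Texture& t, float u, float v) {
        float fx = u * t.width - 0.5f;
        float fy = v * t.height - 0.5f;
        int x0 = (int)std::floor(fx), y0 = (int)std::floor(fy);
        float ax = fx - x0, ay = fy - y0;
        auto texel = [&](int x, int y) {
            x = ((x % t.width) + t.width) % t.width;    // wrap addressing
            y = ((y % t.height) + t.height) % t.height;
            return (float)t.luma[y * t.width + x];
        };
        float top = texel(x0, y0)     * (1 - ax) + texel(x0 + 1, y0)     * ax;
        float bot = texel(x0, y0 + 1) * (1 - ax) + texel(x0 + 1, y0 + 1) * ax;
        return (uint8_t)(top * (1 - ay) + bot * ay);
    }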

You don't have to move everything twice, just the stuff that's moving.
Why does it need to be processed twice?
Also agreed. You pre-load the card with vertex data and feed updates as needed (which are likely quite small). Only when vertices need to change (i.e. after a collision) does any data come back, and only the data that actually changed since the last inter-frame calculation.
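
A rough sketch of that upload pattern (the Device API here is made up; real driver calls would DMA the data across the bus): static vertex data goes up once, and per frame only the vertices that actually moved are sent as (index, new value) deltas.

    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct VertexDelta {
        size_t index;    // which vertex changed
        Vec3   position; // its new value
    };

    struct Device {
        // Stubs standing in for the real driver calls.
        void uploadAll(const std::vector<Vec3>&) {}       // once, at load
        void uploadDeltas(const std::vector<VertexDelta>&) {} // per frame
    };

    void frame(Device& dev, const std::vector<Vec3>& verts,
               const std::vector<size_t>& movedThisFrame) {
        std::vector<VertexDelta> deltas;
        deltas.reserve(movedThisFrame.size());
        for (size_t i : movedThisFrame)
            deltas.push_back({ i, verts[i] });
        // Bus traffic is proportional to what moved,
        // not to the size of the whole scene.
        dev.uploadDeltas(deltas);
    }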
 
The main problem with the AGEIA hardware is that you have to move the whole scene across the slow PCI bus twice, and process it twice as well. That's where quad-core CPUs with improved vector support (SSE4) and DX10 GPUs (geometry shaders) will score big time.

I don't think GPU-based physics will ever score big time. Multi-core CPUs, though, will probably kill the PPU, simply because more people will have one, and developers like to code for the largest installed base. OTOH, AGEIA's physics API is not half bad, and I'd like them to keep evolving it.
 
You don't have to move everything twice, just the stuff that's moving.
Why does it need to be processed twice?
Because physics doesn't merely change the locations of objects; it also changes the objects themselves (for example, when they are hit or explode, or in the case of vegetation, hair, clothing and fluids), and it triggers events that have to be handled.

For example, if someone gets hit, gets wet (rain or splashes), falls into the shadow of a piece of debris, or ends up in the blast radius of an explosion, you have to reskin them. And of course, you have to calculate all the lighting and texture data afterwards.

So, you first calculate the new locations and statuses of all your objects according to your game logic, then you send them to the physics coprocessor, which handles collision detection and physics and returns all the changed objects and states. That can be very little when not much is happening in the scene (no wind, vegetation, water or explosions), or most of the scene when you have many of those.
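
In rough pseudo-C++ that frame flow would look something like this (all the types and names are invented for illustration):

    #include <vector>

    struct ObjectState {
        int   id;
        float pos[3];
        bool  deformed; // e.g. mesh changed by a hit or explosion
    };

    // Stub physics step: returns the ids of the objects it modified.
    std::vector<int> physicsStep(std::vector<ObjectState>& objects, float dt) {
        std::vector<int> changed;
        // ... collision detection + integration would go here ...
        return changed; // empty on a calm frame, large during an explosion
    }

    void frame(std::vector<ObjectState>& objects, float dt) {
        // 1. Game logic decides intended positions/states.
        // 2. Physics resolves collisions and secondary motion,
        //    handing back only what it touched.
        std::vector<int> changed = physicsStep(objects, dt);
        // 3. Only the changed objects need reskinning, relighting
        //    and re-upload; a quiet scene costs almost nothing, a
        //    scene full of wind, water and debris touches most of it.
        for (int id : changed) {
            (void)id; // reskin / relight this object
        }
    }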
 
Also agreed. You pre-load the card with vertex data and feed updates as needed (which are likely quite small). Only when vertices need to change (i.e. after a collision) does any data come back, and only the data that actually changed since the last inter-frame calculation.
Then again, most of the vertices are in mobile actors: you can build a lot of houses with the number of vertices you need for a single NPC, or for a small field of vegetation.

Vegetation and fluids move, and you want to update the locations of an NPC's limbs before you do collision detection. Or the location of an object in the water before you create ripples. Or the location of a mobile object in a field of vegetation.
 
I don't think GPU-based physics will ever score big time. Multi-core CPUs, though, will probably kill the PPU, simply because more people will have one, and developers like to code for the largest installed base. OTOH, AGEIA's physics API is not half bad, and I'd like them to keep evolving it.
But then again, if you can create geometry on the GPU, it becomes much easier to do most of those effects there.
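
To make that concrete: a DX10 geometry shader can emit new primitives per input primitive, e.g. expanding one particle point into a camera-facing quad so the CPU never touches those four vertices. Here is the same expansion written on the CPU, purely as an illustration of what that hardware stage does for every input point:

    #include <vector>

    struct Vec3 { float x, y, z; };

    Vec3 add(Vec3 a, Vec3 b)     { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3 scale(Vec3 v, float s)  { return { v.x * s, v.y * s, v.z * s }; }

    // One point in, four corners out -- the work a geometry shader
    // would do in parallel on the GPU for every particle.
    void expandPoint(Vec3 center, Vec3 camRight, Vec3 camUp,
                     float halfSize, std::vector<Vec3>& out) {
        Vec3 r = scale(camRight, halfSize);
        Vec3 u = scale(camUp, halfSize);
        out.push_back(add(add(center, scale(r, -1)), scale(u, -1))); // bottom-left
        out.push_back(add(add(center, r), scale(u, -1)));            // bottom-right
        out.push_back(add(add(center, scale(r, -1)), u));            // top-left
        out.push_back(add(add(center, r), u));                       // top-right
    }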
 