Polyphony are aliens...

I remember the particle demo was announced as running at 600 frames/s (presumably with the 10x accumulation, then), and this is visible in the screenshot

particle-Mblur.jpg
 
V3 said:
Yeah, that makes sense I guess - you only need motion blur for things that move fast relative to the camera. So I suppose you can use a different number of samples N for different parts of the scene?
That's the idea - for more visual/dramatic impact this is often preferred (like in movies) - you want to emphasize some things while blurring others more, etc. Of course it's more work to manage it all too...

I never actually noticed that with TTT, so that's an interesting detail. It does look smoother than the arcade version, but the animations are still as clumsy. Who would have thought it's rendering the fighters at double the frame rate. Do you know of any other games using a similar approach? Do the PS2 GTs do motion blur of some sort?
I don't think it's used all the time anyhow - just for certain really fast actions during which the fighters (or their limbs) blur.
As for GT, their screen blur is the aforementioned trail method. Don't know if they use anything else.
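To make the trail method concrete, here's a minimal sketch of how I'd assume a feedback/trail blur works (my illustration, not GT's actual code): each output frame is blended with the previous *output* frame, so anything bright and fast-moving leaves a fading trail behind it.

```python
# Sketch of a feedback "trail" blur (an assumption of how GT-style
# screen blur works): blend each new frame with the previous blended
# result, so moving highlights leave an exponentially fading trail.

def trail_blur(frames, feedback=0.6):
    """Blend each frame with the previous blended output frame."""
    out = []
    prev = None
    for frame in frames:
        if prev is None:
            blended = list(frame)  # first frame passes through unchanged
        else:
            blended = [f * (1.0 - feedback) + p * feedback
                       for f, p in zip(frame, prev)]
        out.append(blended)
        prev = blended
    return out

# A single bright flash keeps a fading after-image on later frames:
frames = [[1.0], [0.0], [0.0]]
print(trail_blur(frames))
```

The nice part is the cost: one full-screen blend per frame, regardless of how many objects move, which fits a fillrate-heavy machine like the PS2.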

How do you generate the local vector efficiently?
Off the top of my head: every frame, calculate a camera-relative motion vector for every object on screen and store that vector with every rendered pixel in an extra framebuffer (if needed, with screen-space projection correction so the vector varies across the object's surface when it's large enough).
Once you finish rendering the scene you'll have each object's velocity stored for all of its visible pixels. Then it's a simple postprocess step where you 'blur extend' those pixels along their vectors.
You don't have to stop there, of course - by storing more complex path functions (or multiple motion values) you could represent longer, non-linear trails, still per pixel, without rendering the scene multiple times.
And that's still just one approach, there are others yet.
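The velocity-buffer idea above can be sketched in a few lines. This is my own 1D toy version, not anyone's shipping code: alongside the color buffer we keep each pixel's screen-space motion vector, then a postprocess averages samples taken back along that vector.

```python
# Sketch of per-pixel velocity-buffer motion blur, reduced to 1D for
# brevity. `color` is the rendered frame, `velocity` the per-pixel
# motion vectors stored during rendering; the postprocess averages
# `samples` taps stepped back along each pixel's vector.

def velocity_blur_1d(color, velocity, samples=4):
    """Average `samples` taps along each pixel's stored motion vector."""
    n = len(color)
    out = []
    for i in range(n):
        total = 0.0
        for s in range(samples):
            # step back along the motion vector in equal fractions
            offset = int(round(velocity[i] * s / samples))
            j = min(max(i - offset, 0), n - 1)  # clamp to the buffer
            total += color[j]
        out.append(total / samples)
    return out

# A bright pixel moving 2 pixels/frame smears back along its path:
color    = [0.0, 0.0, 1.0, 0.0]
velocity = [0.0, 0.0, 2.0, 2.0]
print(velocity_blur_1d(color, velocity))
```

Note the whole scene is still rendered only once; the blur cost is a fixed number of taps per pixel in the postprocess, which is why it scales so much better than re-rendering with an accumulation buffer.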

Interesting you mentioned the PS2 demo btw.
It's essentially doing exactly the same thing I described above - the difference is that the process is performed per particle instead of per pixel.
Each particle's motion vector is used to generate blurred after-images of it (by re-rendering the particle multiple times - 10 times at the default setting).
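As a rough illustration of that per-particle scheme (the function name and the 1/N additive weighting are my assumptions, not the demo's actual VU code): each particle is emitted several times, stepped back along its motion vector, producing the chain of after-images visible in the screenshot.

```python
# Sketch of per-particle motion blur as described: re-render each
# particle `copies` times (10 in the demo's default), stepped back
# along its per-particle motion vector, so fast particles leave a
# chain of after-images. Weighting each copy at 1/N is an assumed
# detail so the accumulated brightness stays constant.

def particle_after_images(pos, vel, copies=10):
    """Return (position, weight) pairs for one particle's blur copies."""
    images = []
    for k in range(copies):
        t = k / copies                    # fraction of a frame back in time
        p = (pos[0] - vel[0] * t, pos[1] - vel[1] * t)
        images.append((p, 1.0 / copies))  # accumulate each copy at 1/N
    return images

# A particle at (10, 0) moving +5 in x per frame:
for p, w in particle_after_images((10.0, 0.0), (5.0, 0.0), copies=5):
    print(p, w)
```

Since the copies are discrete, fast particles show the distinct "multiple after-images" look rather than a continuous streak - exactly the artifact discussed below.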
 
Fafalada said:
Interesting you mentioned the PS2 demo btw.
It's essentially doing exactly the same thing I described above - the difference is that the process is performed per particle instead of per pixel.
Each particle's motion vector is used to generate blurred after-images of it (by re-rendering the particle multiple times - 10 times at the default setting).

I doubt that the particle demo used motion vectors for blurring as that would have allowed them to stretch the particles to avoid the "multiple after-images" look that is so apparent in the screenshot and symptomatic of brute-force accumulation buffer techniques.
 
I doubt that the particle demo used motion vectors for blurring as that would have allowed them to stretch the particles to avoid the "multiple after-images" look that is so apparent in the screenshot and symptomatic of brute-force accumulation buffer techniques.
Source code doesn't lie ;)
This was one of the developer demos; its purpose was to illustrate interesting VU1 uses - just using stretching would have made the example less interesting - this way it runs multiple loops, generates polys, etc., all on the VU.
 
Off the top of my head: every frame, calculate a camera-relative motion vector for every object on screen and store that vector with every rendered pixel in an extra framebuffer (if needed, with screen-space projection correction so the vector varies across the object's surface when it's large enough).
Once you finish rendering the scene you'll have each object's velocity stored for all of its visible pixels. Then it's a simple postprocess step where you 'blur extend' those pixels along their vectors.
You don't have to stop there, of course - by storing more complex path functions (or multiple motion values) you could represent longer, non-linear trails, still per pixel, without rendering the scene multiple times.

Thanks for the explanation. This doesn't sound like too much of a workload; I wonder why developers haven't gotten around to using it.

Interesting you mentioned the PS2 demo btw.
It's essentially doing exactly the same thing I described above - the difference is that the process is performed per particle instead of per pixel.
Each particle's motion vector is used to generate blurred after-images of it (by re-rendering the particle multiple times - 10 times at the default setting).

So that's how it was done. GPUs should have just stuck with vector processors instead of the vertex shader units we've had for the last several years.
 