Hardware Motion Blur - yay or nay ?

Mintmaster said:
Well, for high speed rotating objects developers are supposed to do (and indeed they do) the blurring, like with car wheels in racing games.

For slower rotations, note that I mentioned acceleration, and this includes centripetal acceleration. This will give you a quadratic sampling curve as opposed to linear, so even a blur of, say, +-0.5 radians should look acceptable.

Finally, DOF can't be done correctly in image space, because you don't have any data for what's behind an object, i.e. the framebuffer is single valued.

However, I still have some hope for an image space technique, because:
A) offline renderers can do it pretty well
B) DOF is not correct, but looks acceptable, so even if this isn't correct, it may look good enough, like pretty much everything in realtime 3D graphics.
I should have written 'visually convincing' instead of 'correctly', sorry for that. So IMO your proposal wouldn't be visually convincing on objects rotating at high speed about axes nearly perpendicular to the view vector (because you simply don't have the data for what's on the other side of an object). Imagine stuff exploding, pieces flying away and rotating at high speed.
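For reference, a tiny numerical sketch of the quoted quadratic-sampling idea (made-up numbers, plain Python/numpy, nothing engine-specific): extrapolating a rotating point with velocity plus centripetal acceleration tracks the true arc much more closely than a straight line does over half a radian of rotation.

Code:
import numpy as np

# Made-up numbers: a point at radius 1 rotating about the origin at 1 rad/s,
# blurred over a 0.5 s exposure (i.e. 0.5 radians of rotation).
omega, radius, exposure = 1.0, 1.0, 0.5

p0 = np.array([radius, 0.0])            # position at the start of the exposure
v0 = omega * np.array([0.0, radius])    # tangential velocity at t = 0
a0 = -omega**2 * p0                     # centripetal acceleration at t = 0

for t in np.linspace(0.0, exposure, 6):
    true_pos  = radius * np.array([np.cos(omega * t), np.sin(omega * t)])
    linear    = p0 + v0 * t                        # straight-line extrapolation
    quadratic = p0 + v0 * t + 0.5 * a0 * t**2      # quadratic extrapolation
    print(f"t={t:.2f}  linear err={np.linalg.norm(linear - true_pos):.4f}"
          f"  quadratic err={np.linalg.norm(quadratic - true_pos):.4f}")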
 
Mintmaster said:
Writing the velocity texture would need input from the physics engine of a game, so it can't be done transparently (unless a graphics driver tries to trace where each object was last frame, which wouldn't be very robust IMO).
Er, physics information wouldn't do it. What you need is the location of each vertex from the previous frame, as well as the transformation matrix applied to the previous frame.

I recall image space motion blur looking pretty good in offline 3D rendering programs. I'm not sure what algorithm they use, though.
Yeah, I don't know. It'd definitely be a challenging thing to implement.
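For reference, here's roughly what that "previous frame's matrix" idea looks like as a minimal CPU-side sketch (Python/numpy, made-up matrices and vertex); a real implementation would do this per vertex in a shader and write the result to a velocity texture.

Code:
import numpy as np

def project(mvp, v):
    """Apply a 4x4 model-view-projection matrix and do the perspective divide."""
    clip = mvp @ np.append(v, 1.0)
    return clip[:2] / clip[3]                     # (x, y) in normalized device coords

def screen_space_velocity(v_obj, mvp_curr, mvp_prev):
    """Screen-space displacement of one vertex between the previous and current frame.

    This is the quantity a vertex shader would write to a velocity buffer: the same
    object-space vertex transformed by this frame's and last frame's matrices, with
    the difference taken after projection.
    """
    return project(mvp_curr, v_obj) - project(mvp_prev, v_obj)

# Made-up example: an object that translated +0.1 along x since the last frame
# (identity in place of a real projection, to keep the numbers obvious).
mvp_prev = np.eye(4)
mvp_curr = np.eye(4)
mvp_curr[0, 3] = 0.1
print(screen_space_velocity(np.array([0.0, 0.0, -2.0]), mvp_curr, mvp_prev))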
 
Mate Kovacs said:
Objects rotating with high speed would cause heavy artifacting.

DOF is something that can be done in image space correctly, unlike motion blur.
Nah, I don't think it'd be much of an issue. Instead of a curved arc between frames, you'd have a straight line over each between-frame interval (unless you can also find a way to include the acceleration by differencing across the two previous frames, though that'd be nontrivial, and would at the very least require two extra frames of geometry information instead of just one for linear image-space motion blur).

Of course it won't be perfect, but it'll look much better with linear image-space motion blur than without. Just remember that you wouldn't really be doing the image-space blur based upon world-space instantaneous velocities, but rather based on the change in position between frames, which amounts to the average velocity over the time between the frames.
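Spelled out with toy numbers (Python, purely illustrative): one stored previous position gives you the average velocity between frames, and a second one lets you estimate acceleration as well.

Code:
import numpy as np

# Toy numbers: positions of one point stored for the last three frames.
dt = 1.0 / 60.0
p_curr  = np.array([1.00, 0.0])
p_prev  = np.array([0.95, 0.0])
p_prev2 = np.array([0.92, 0.0])

avg_velocity = (p_curr - p_prev) / dt                     # needs one extra frame of data
acceleration = (p_curr - 2.0 * p_prev + p_prev2) / dt**2  # needs two extra frames of data
print(avg_velocity, acceleration)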
 
I don't see any use for motion trail blurring, as for that you'd have to guess how long the frame will take to render. And since we can't, it would probably screw up people's aim.

However, a driver-based solution which 'lies' to the game about its refresh rate and just accumulates rendered frames until the next v-sync would be a great feature. It's transparent for older games and would only need more video memory to work. And that probably won't be a problem for older games on newer video cards, which can run in excess of 100 fps, for example. ;)
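Conceptually something like this, sketched on the CPU with made-up sub-frames (an actual driver would have to intercept present calls and blend in video memory instead):

Code:
import numpy as np

def accumulate_subframes(render_subframe, n_subframes, height=4, width=4):
    """Average several rendered sub-frames into the image that gets displayed.

    render_subframe(i) stands in for 'render the scene at sub-frame time i'.
    A real driver-level version would have to intercept Present() calls and
    blend in video memory instead of doing this on the CPU.
    """
    accum = np.zeros((height, width, 3))
    for i in range(n_subframes):
        accum += render_subframe(i)
    return accum / n_subframes

# Toy usage: each "sub-frame" is a flat image whose brightness changes over time.
frame = accumulate_subframes(lambda i: np.full((4, 4, 3), i / 10.0), n_subframes=4)
print(frame[0, 0])   # the temporal average of the four sub-frames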

Humus, can you try something in the ATi drivers? :p
 
sonix666 said:
I don't see any use for motion trail blurring, as for that you'd have to guess how long the frame will take to render. And since we can't, it would probably screw up people's aim.
Oh, no, not at all. As I've been saying, you just use the previous frame's position as well as the current frame's, interpolating between the two. There's no need for any velocity or time information at all, just change-in-position information.
 
Xmas said:
I don't see how you could improve efficiency that way. No doubt however that such settings should generally be exposed by the API so it can be controlled per application, not just globally.

Chalnoth's comment a few posts back is getting pretty close to what I was thinking of. The trouble is that today's APIs assume everything is static.
 
Chalnoth said:
Er, physics information wouldn't do it. What you need is the location of each vertex from the previous frame, as well as the transformation matrix applied to the previous frame.
Well yeah, and you can figure this out through physics. You can throw your velocity matrix (translational & rotational) into a vertex shader, along with the global camera velocity matrix, and figure out the velocity.

Keeping track of every vertex from the previous frame would take up gobs of memory. Chances are you've already got the velocity matrices for each object (they are moving, after all), so it should be easy. The Tomohide Fur Demo does a similar thing to calculate velocity and acceleration for the fur.
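For reference, a rough sketch of what that velocity-matrix approach amounts to per vertex (Python/numpy, made-up numbers; a real version would live in the vertex shader and would also account for camera rotation):

Code:
import numpy as np

def vertex_velocity(p_world, obj_linear_vel, obj_angular_vel, obj_center, cam_linear_vel):
    """Instantaneous world-space velocity of a vertex on a rigid object,
    relative to a translating camera (camera rotation left out for brevity):

        v = v_object + omega x (p - center) - v_camera

    i.e. the kind of thing a vertex shader could evaluate from a handful of
    per-object constants instead of storing last frame's vertex positions.
    """
    spin = np.cross(obj_angular_vel, p_world - obj_center)
    return obj_linear_vel + spin - cam_linear_vel

# Made-up numbers: the top of a wheel (radius 0.5) rolling along +x at 10 units/s,
# viewed from a camera moving at the same speed.
v = vertex_velocity(p_world=np.array([1.0, 0.5, 0.0]),
                    obj_linear_vel=np.array([10.0, 0.0, 0.0]),
                    obj_angular_vel=np.array([0.0, 0.0, -20.0]),
                    obj_center=np.array([1.0, 0.0, 0.0]),
                    cam_linear_vel=np.array([10.0, 0.0, 0.0]))
print(v)   # [10. 0. 0.] -- the top of the wheel still moves relative to the camera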
 
Mintmaster said:
Well yeah, and you can figure this out through physics. You can throw your velocity matrix (translational & rotational) into a vertex shader, along with the global camera velocity matrix, and figure out the velocity.

Keeping track of every vertex from the previous frame would take up gobs of memory. Chances are you've already got the velocity matrices for each object (they are moving, after all), so it should be easy. The Tomohide Fur Demo does a similar thing to calculate velocity and acceleration for the fur.
Well, it's just the old question of store it, or recompute it, isn't it?

Just remember that there won't be any need to retain static frame information, and I still contend that even if you're going to be recalculating, it'd be better to do it from position information rather than velocity information.
 
I don't see how the image space technique of linear extrapolation is going to deal with rotating objects like propeller blades.
 
DemoCoder said:
I don't see how the image space technique of linear extrapolation is going to deal with rotating objects like propeller blades.
Well, it's clearly not a perfect approximation. What you'd get, if looking from the top, is what looks like a many-sided polygon making up the outer edge of the propeller blades' path, as opposed to a circle. This isn't perfect, but I think it'd be better than no motion blur, still assuming you have a high framerate.

A better solution would obviously be to take the acceleration into account as well, but I think that might be a bit harder to deal with.
 
DemoCoder said:
I don't see how the image space technique of linear extrapolation is going to deal with rotating objects like propeller blades.
I think linear blur will be good enough when put into practice, but it doesn't have to be linear, and that's why I said store an acceleration vector as well. This will give you a quadratic curve, and that's plenty. It'll be a bit harder to deal with, but not by much: ds = v*dt + 0.5*a*dt^2, evaluated for several dt values, gives you your sampling offsets (though this is an oversimplification, as mentioned below). Calculating the acceleration vector in the VS just requires a bit of math effort from the coder, nothing complicated.

Very high rotation speeds should be handled separately, just as they are in current games. Clamping the blur to some metric based on object screen size should reduce extrapolation artifacts.
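Roughly like this, with made-up per-pixel values; the clamp stands in for the screen-size metric mentioned above:

Code:
import numpy as np

def blur_sample_offsets(velocity, acceleration, n_samples, max_radius):
    """Screen-space sampling offsets along a quadratic path.

    ds = v*dt + 0.5*a*dt^2, evaluated at several dt values across the exposure,
    then clamped to a radius (e.g. derived from the object's screen size) to
    keep extrapolation artifacts in check.
    """
    offsets = []
    for dt in np.linspace(0.0, 1.0, n_samples):
        ds = velocity * dt + 0.5 * acceleration * dt**2
        length = np.linalg.norm(ds)
        if length > max_radius:              # clamp the blur extent
            ds *= max_radius / length
        offsets.append(ds)
    return np.array(offsets)

# Made-up per-pixel values, in normalized screen units.
print(blur_sample_offsets(velocity=np.array([0.04, 0.0]),
                          acceleration=np.array([0.0, 0.08]),
                          n_samples=5, max_radius=0.05))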

--------------------
Chalnoth, I see what you're saying. Just use the previous frame's transformation matrices. My point about physics was more related to how impossible it would be to make this driver-based. Still, I think that if you have the velocity matrix, you might as well use it instead of tracking previous matrices. Matter of preference, I suppose.

--------------------
I think the bigger problem is figuring out how to do it without making a mess. You can't just blur the image directionally, because a fast moving pixel needs to smear out to its neighbours, not accumulate from its neighbours. Sort of like scatter versus gather. Sometimes this is an equivalent condition, but if it's not then you'll get an ugly effect. It's similar to making good DOF (sample and test before accumulating, or you get halos), but with another dimension, i.e. a 2D vector in MB versus a 1D depth in DOF.

You need to make sure a neighbouring pixel's smear path lies on top of the pixel being rendered before you add it to this pixel's average. You could search for appropriate neighbours much like raytracing searches for an intersection, but that'll cost a lot. Alternatively, you could use geometry fins (silhouettes to the viewer) to achieve the scatter. I saw a presentation by NVidia doing something like this, I think. This was one of the early suggested uses of vertex shaders by both IHVs.
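A toy 1-D sketch of that gather-with-a-test idea (Python, made-up values); it deliberately ignores background weighting and occlusion ordering, which are the genuinely hard parts:

Code:
import numpy as np

def gather_blur_1d(color, velocity, search_radius=4):
    """Toy 1-D 'gather with a test' blur.

    For each output pixel x, look at nearby pixels sx and add color[sx] only if
    sx's own smear segment [sx, sx + velocity[sx]] actually covers x, with its
    energy divided over the length of that smear. Background weighting and
    occlusion ordering (the genuinely hard parts) are ignored here.
    """
    n = len(color)
    out = np.zeros(n)
    for x in range(n):
        for sx in range(max(0, x - search_radius), min(n, x + search_radius + 1)):
            lo, hi = sorted((sx, sx + velocity[sx]))
            if lo <= x <= hi:                              # does sx's smear cover x?
                out[x] += color[sx] / (abs(velocity[sx]) + 1.0)
    return out

# One bright pixel moving 3 pixels to the right smears over its static neighbours.
color    = np.array([0., 0., 0., 0., 1., 0., 0., 0.])
velocity = np.array([0,  0,  0,  0,  3,  0,  0,  0])
print(gather_blur_1d(color, velocity))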

Anyway, beyond just using an accumulation buffer or the multisample mask with discrete time slices of each frame, which simply multiplies your scene rendering time by the number of temporal samples, I'm not sure what else you could do besides image space MB. The former solution is, as someone here put it, just a solution to the monitor's limitations, and not any better than just getting more FPS.
 
Mintmaster said:
Chalnoth, I see what you're saying. Just use the previous frame's transformation matrices. My point about physics was more related to how impossible it would be to make this driver-based. Still, I think that if you have the velocity matrix, you might as well use it instead of tracking previous matrices. Matter of preference, I suppose.
Well, except if you use the instantaneous velocity, you'll have the artifact of blur lines not meeting up between frames for accelerating objects.

I think the bigger problem is figuring out how to do it without making a mess. You can't just blur the image directionally, because a fast moving pixel needs to smear out to its neighbours, not accumulate from its neighbours. Sort of like scatter versus gather. Sometimes this is an equivalent condition, but if it's not then you'll get an ugly effect. It's similar to making good DOF (sample and test before accumulating, or you get halos), but with another dimension, i.e. a 2D vector in MB versus a 1D depth in DOF.
And, what's more, you need to have the luminosity of the pixel decrease the further it is smeared. That shouldn't be too hard to calculate, though.

I think the easiest way to implement image-space motion blurring would be to first render the current frame to one texture and the displacement data to a second. For each pixel in the current frame, spawn a quad and set one position to be at its position in the current frame, the other position to be where it was in the previous frame. Divide the color by the length of the quad and add to output. To prevent artifacts, this may only work if the displacement is calculated in pre-perspective-transform space (i.e. simply translated/rotated world space), and the perspective transform then applied to the quad so that neighboring pixels' blurs match up properly.
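In toy form it would look something like this (software scatter in Python rather than actual quads, dividing by sample count rather than quad length, but the same energy-conserving idea; all values made up):

Code:
import numpy as np

def scatter_blur(colors, pos_curr, pos_prev, height, width, samples_per_pixel=8):
    """Toy software version of the 'one smear quad per source pixel' idea.

    Each source pixel's colour is deposited at several points along the segment
    from its previous to its current screen position, divided by the number of
    deposits so the total energy is preserved. The GPU version would instead
    draw an alpha-blended quad (or line) per pixel and divide by its length.
    """
    out = np.zeros((height, width))
    for c, p1, p0 in zip(colors, pos_curr, pos_prev):
        for t in np.linspace(0.0, 1.0, samples_per_pixel):
            x, y = np.round(p0 + (p1 - p0) * t).astype(int)
            if 0 <= y < height and 0 <= x < width:
                out[y, x] += c / samples_per_pixel
    return out

# Made-up input: one bright pixel that moved from (1, 2) to (5, 2) since last frame.
print(scatter_blur(colors=[1.0],
                   pos_curr=[np.array([5.0, 2.0])],
                   pos_prev=[np.array([1.0, 2.0])],
                   height=4, width=8))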
 
AFAICS Mintmaster's approach has the following pros:

- it allows for "screen-space bounded" blur, which at this point seems to me to be pretty essential to getting real-time performance.

- unless I'm misunderstanding what you guys are saying, you'd have to render to a buffer with a guard band around the sides of the actually displayed pixels (otherwise you'd get some pretty awful temporal artifacts as pixels pop into the viewport and suddenly smear). If the blur isn't bounded, I'm not sure how this would work.

- having a lower frame-rate shouldn't affect shutter speed past a certain point. If some other part of the rendering pipeline slows the frame rate down to, say, 10 fps, does it make sense for everything to turn into a blurry mess? Also, I think you'd need more than lerps to handle shutter speeds as long as 1/10th of a second, not to mention the fact that computing the blur gets slower the larger its extent. One could imagine a situation where blurring gets slow enough to cause the frame-rate to steadily diminish or never recover...

On the other hand, Chalnoth's approach could be tweaked so that it didn't actually use the previous frame's transformation matrices, but some kind of intermediate matrices instead (maybe the physics system could be made to run at some time-step not tied to the frame-rate?)
 
Chalnoth said:
I think the easiest way to implement image-space motion blurring would be to first render the current frame to one texture and the displacement data to a second. For each pixel in the current frame, spawn a quad and set one position to be at its position in the current frame, the other position to be where it was in the previous frame. Divide the color by the length of the quad and add to output. To prevent artifacts, this may only work if the displacement is calculated in pre-perspective-transform space (i.e. simply translated/rotated world space), and the perspective transform then applied to the quad so that neighboring pixels' blurs match up properly.
I was thinking of the same straightforward approach, but that is very expensive! It's like the scatter approach I was talking about. Kinking the quads to take acceleration/rotation into account might be nice too.

You'd get quality results, but even if your blur texture is quarter res (say 800x600), then you have a half million quads to render, and very heavy overdraw with pure alpha blending. You need VTF or R2VB as well, though VTF would be dog slow with so many points on current hardware. I'm not sure how well modern video cards can render thin lines. Since they're quad (the other kind) based, don't they lose a lot of fillrate? Hopefully no more than half.

If we can get 5MP fillrate with blending, though, then devoting 50% of GPU time at 60fps leaves us with the ability to blend 8 pixels on top of one another on average. In practice, unfortunately, it'll probably be much lower due to cache incoherency from such a crazy write pattern.

Not a bad idea, but I'd love to find a faster solution, even if it doesn't look as good.


psurge, regarding the physics, Chalnoth and I are really just discussing moot points about the same approach. The challenge is getting the blur to look right once you know how fast every pixel is moving.
 
Mintmaster said:
Not a bad idea, but I'd love to find a faster solution, even if it doesn't look as good.
Oh, yeah, definitely.

But since the "correct" method of doing motion blur would be to take one pixel in the rendered texture, and spread it out to multiple pixels in the output, you just have to do that by creating quads.

If you could find an efficient and decent-looking way of looking at one pixel of the output, and asking yourself how many pixels of the input you need to blend together to get the right result, then that would potentially have much better performance. I just don't know how you'd do it, though.
 
First of all, I already have hardware motion blur: it's called a Samsung 243t ;-)

There is one thing that I don't get. WTF is it with everyone trying to emulate camera and film artifacts? You get a tool that can do a perfect, sharp rendering, and then you burn processing power to make it worse? Why? Because it makes it more like film? But it is not a film, it is a fricking computer game! And I can bet that most directors curse the limitations of lenses and film every day.
 
Motion blur isn't about approximating film (at least, not as we're discussing it). It's about removing temporal aliasing.

Temporal aliasing is an artifact of the fact that frames in computer graphics are rendered at discrete steps in time. I and a couple of other people have already discussed how temporal aliasing manifests itself in 3D rendering.

Motion blur, if done well, would be a good thing all around.
 
dominikbehr said:
I'd rather have faster frame/refresh rates and let my eyes do the rest.
That's not enough, though. To see why, just wave your hand between your eyes and your monitor. You won't see the smooth blur you'd get waving it in front of some non-flickering object; no matter what your refresh rate, you'll instead see a large number of discrete copies of your hand as your monitor flickers (this requires a CRT to work). That's one example of temporal aliasing, and of how rendering discrete frames is not realistic.
 
Chalnoth, Mintmaster - I have a question for you guys - how does this method handle visibility changes? Even with linear (velocity only displacements) blur, it seems to me that the order in which the generated quads are "added to output" matters...
 