Hardware Motion Blur - yay or nay ?

ophirv

After reading some of the posts in my previous thread I've noticed the following debate:

"What is better? Motion blur or higher framerates?"

It reminds me of the debate some time ago about antialiasing:

"What is better? FSAA or higher resolution?"

The answer to that debate was that with good hardware support for FSAA, you can have both FSAA and higher resolution.

I think that ATI and Nvidia should put a lot more effort into developing good hardware motion blur that works on any game (just like you can force FSAA on any game through the driver), without requiring the game developers to implement motion blur at the software level (which is the situation right now).

In my opinion they can make great hardware motion blur without sacrificing framerates by lowering the quality of every frame while motion blur is applied (the human eye can't notice details in fast scenes).

I think motion blur can be used as successfully in the game industry as it is in the movie industry.

What do you think?
 
ophirv said:
In my opinion they can make great hardware motion blur without sacrificing framerates by lowering the quality of every frame while motion blur is applied (the human eye can't notice details in fast scenes).
That's impossible, because you would need to know beforehand how long it takes to render a scene at a given quality.

What you could do, however, is accumulate "excess frames", i.e. the number of frames per second that are beyond the refresh rate.
This requires triple buffering and VSync. As long as the frame rate is below 2x the refresh rate, you just do normal rendering. But if the GPU manages to finish both backbuffer scenes before vertical retrace, it starts accumulating the two frames, and if it finishes accumulating before the vertical retrace finishes, the accumulated image is displayed instead of the last one rendered. This is not limited to accumulating two frames and might not even need more memory space.
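In rough C-style pseudocode (all the buffer and sync functions here are placeholders, just to illustrate the idea):
Code:
/* Accumulate "excess" frames: keep rendering and averaging new frames
   into the accumulation target until the vertical retrace arrives,
   then show the blend instead of the last frame rendered. */
render_scene(accum);                    /* first frame of this refresh    */
int n = 1;
while (!vertical_retrace_started())
{
    render_scene(scratch);              /* an extra frame we had time for */
    blend_into(accum, scratch, 1.0 / (n + 1));  /* running average        */
    n++;
}
present(accum);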
 
That was my point in the other thread exactly. :)

BTW I think a little more tweaking is necessary, so that the distribution of the samples within the timeframe stays (near) uniform. It's like a breadth-first traversal of a binary tree.

pseudo C-code:
Code:
/* Breadth-first order over [0, 1): sample at 1/2, then 1/4 and 3/4,
   then 1/8, 3/8, 5/8, 7/8, ... until rendering time runs out.
   Powers of two are exact in binary floating point, so double works
   here in place of an exact rational type. */
double step = 1.0;
while (time_for_rendering())
{
    for (double t = step / 2; time_for_rendering() && t < 1; t += step)
        sample_at(t);
    step /= 2;
}
EDIT: Or maybe the engine should just measure how many samples it can take on average, and just divide the timeframe according to that.
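Something like this, I suppose (n_samples being a running estimate from previous frames; illustrative only):
Code:
/* Uniform alternative: if past frames suggest we can afford n_samples,
   just space them evenly across the frame's time window [0, 1). */
for (int i = 0; i < n_samples; ++i)
    sample_at((i + 0.5) / n_samples);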
 
IMHO, it would have to be an addition to the APIs (DX & OGL), for it to be done efficiently.
 
Simon F said:
IMHO, it would have to be an addition to the APIs (DX & OGL), for it to be done efficiently.

Besides that, wouldn't a stochastic or semi-stochastic algorithm make far more sense (and yes, my mind goes to Pixar every time I think about it)?
 
Simon F said:
IMHO, it would have to be an addition to the APIs (DX & OGL), for it to be done efficiently.
I don't see how you could improve efficiency that way. No doubt, however, that such settings should generally be exposed by the API so they can be controlled per application, not just globally.
 
Before we try to discuss why MB should be used in games (and that's what this topic is about, right, for it to be useful in games?), why don't we first discuss what matters most when, for the current and foreseeable future, we are looking at a monitor displaying what the programmers want us to see?

We are talking about being told what to see and what to focus on. That's not how it is.
 
3D rendering already uses the concept of a camera, and your display device represents the film, with the vertical refresh acting as the shutter. You are already being told what to focus on: what the camera is focused on. The situation where motion blur is at odds with what the user expects is incredibly rare. More often than not it will yield better imagery for the user than the cases where it fails, and the situations where it fails in games are contrived, pathological examples.

The concept of using rendering capability above the monitor's refresh rate to accumulate exposure between refreshes will lead to higher quality and simply will not violate user expectations of focus.

Rev, you focus on scenarios where the user tracks a fast-moving object not with the mouse/camera but with his eye, and thus sees something blurred that shouldn't be. I claim that is a much rarer circumstance than the opposite:

No motion blur, 60Hz refresh. Eye focused on stationary object, fast moving object flies by. What the eye sees is an image that jumps great distances, which is a worse artifact in and of itself.

So you see artifacts either way: either temporal aliasing (the vast majority of the time) or inappropriate blur (a minority of the time).

I claim that temporal antialiasing reduces the overall number of expected artifacts: it slightly increases inappropriate blur, but vastly reduces temporal aliasing.
 
How effective can motion blur be if we simply tack simple blend-type effects onto current rendering? Is it viable as a readily tangible, problem-free improvement? More to current thoughts, how about as an SLI/CrossFire feature for CPU- and/or refresh-rate-limited displays? Seems a natural fit.

With the scan-out blending nVidia uses, and ATI's hints at hidden flexibility in their compositing chip, how feasible would a "toggle on" feature of time-weighted frame blending be, and how much benefit would it bring on each architecture? Some sort of weighted blending seems like it could come essentially for free (for frames that would be wasted anyway) on CrossFire, under relatively conservative assumptions about compositing functionality, and some sort of blending seems like it should be a viable drop-in alternative to nVidia's "SLI SS" option (I'm not sure about weighting possibilities in scan-out blending, or whether the SLI implementation offers an alternative; the same goes for ATI's bus-transferred AA without the compositing chip).
 
Reverend said:
I'm focussing on reality, not realism DC.

Huh? The reality is, if you were looking at the world through the lens of a camera (and this is exactly what you are doing with 3D rendering, which has historically been modeled using a pinhole camera), using either film or CCD, you would see motion blur.

The reality is, fast moving objects don't "jump" in the real world.

The reality is, in the real world, when our eyes and head move, the scene changes. You don't get this today, with or without motion blur, unless you have head tracking coupled with eye tracking.

Motion blur *is* reality, and what you see today in 3D neither represents reality, nor realism.
 
Some time ago I saw a few IMAX 3D movies. Motion blur (and depth of field) was somewhat distracting, but without motion blur it would have been even more distracting, running at 24 fps (only IMAX HD is 48 fps). I do think motion blur at higher framerates is more of a gain (smooth motion) than it is a loss (blurring when tracking objects).
 
Yes, that's what I'm getting at.

E(mb) = expected benefit of motion blur
= W1 * p_alias - W2 * p_track_blur
= (weight of temporal aliasing) times the probability that temporal aliasing occurs, minus
(weight of inappropriate blur) times the probability that the user tracks an object which gets blurred.

My assertion:

In normal scenes,

p_alias >> p_track_blur

W1 > W2 (jerky movement is more annoying than blurred objects moving fast but tracked by the eye)

so E(mb) comes out clearly positive.
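To make that concrete with some invented numbers (the weights and probabilities below are purely illustrative, not measured):
Code:
/* Illustrative only: aliasing visible in ~90% of frames, eye-tracked
   blur errors in ~5%, aliasing weighted twice as annoying. */
double W1 = 2.0, W2 = 1.0;
double p_alias = 0.90, p_track_blur = 0.05;
double E_mb = W1 * p_alias - W2 * p_track_blur;   /* = 1.75, a net win */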
 
I'd say that some form of motion blur in games is inevitable, at some point or another.

After all, temporal aliasing is just another form of aliasing, just like texture or edge aliasing. Temporal aliasing's pathological cases are typically related to high framerates and objects moving quickly with respect to the screen. This will result in ghosting: the edges of the moving objects will appear to have many copies.

This case typically deals with objects that are moving linearly (an easy place to see this issue is to play an old FPS where you can sustain 100+ fps and just strafe next to a corner). But another pathological case involves rotating objects. This is typically worse (though also more easily solvable by the game developer), and is akin to regular high-contrast textures or regular high-contrast geometry edges. Quickly rotating objects will frequently appear to either be moving in the wrong direction, or just jumping all over the place. The original Half-Life's helicopters show this problem.

So, at some point in time, game developers will clearly decide that motion blur needs to be handled in their games. One way that this can be done has already been discussed: render multiple frames and blend them together. This is really akin to ordered-grid supersampling and has a correspondingly high performance cost and low visual quality improvement: its only purpose is to remove the limitation of the monitor's refresh rate on performance.
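As a sketch of that brute-force version (N sub-frames averaged per displayed frame; the helper names are made up):
Code:
/* Temporal supersampling: N evenly spaced sub-frames, equal weights.
   Simple, but costs roughly N times the rendering work. */
clear(accum);
for (int i = 0; i < N; ++i)
{
    render_scene_at(t_frame + (i + 0.5) / N * dt_frame);
    add_weighted(accum, current_frame, 1.0 / N);
}
present(accum);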

Similar to how we chose to make the move to rotated-grid or sparse-grid multisampling AA for edge AA, and anisotropic filtering for texture AA, there will be better ways to do temporal AA than simply supersampling.

One method that has been discussed would be random sampling: for each frame displayed, render 10, taking 10% of each frame, randomly-selected, to compose the final frame. The performance for this technique would obviously be very low, but it would be a better technique than simple temporal supersampling, as it will break up temporal aliasing artifacts, instead of just reducing them (though one might opt to select a combination of random temporal and supersampling temporal in order to reduce the graininess of the output).
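A sketch of that idea (10 sub-frames, each final pixel picked at random from one of them; names are invented):
Code:
/* Stochastic temporal sampling: per pixel, pick one of the 10 sub-frames
   at random, so aliasing turns into grain instead of ghost copies. */
for (int i = 0; i < 10; ++i)
    render_subframe(sub[i], t_frame + i * dt_frame / 10);
for (int p = 0; p < num_pixels; ++p)
    final_image[p] = sub[rand() % 10][p];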

Another method might be to use some sort of motion tracing technique to attempt analytical temporal anti-aliasing. Imagine a rendering algorithm that supplies the displacement of each vertex rendered in the scene. These displacement vectors are transformed to screen space, and interpolated across the triangles to form a displacement buffer. The displacement buffer would then be combined with a normally-rendered frame buffer to blur pixels across a distance on the screen. There will clearly be artifacts for objects with high accelerations or objects moving behind other objects, so this technique should only be used in conjunction with high framerates, but I think it would be quite effective as long as the final blurring could be done efficiently.
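A very rough sketch of that final blurring pass (the displacement buffer is assumed to already hold per-pixel screen-space motion; the types and helpers are invented):
Code:
/* Smear each pixel along its screen-space displacement vector by
   averaging a fixed number of taps taken along that vector. */
for (int p = 0; p < num_pixels; ++p)
{
    vec2  d   = displacement[p];        /* motion of this pixel, in pixels */
    color sum = color_buffer[p];
    for (int i = 1; i < TAPS; ++i)
        sum = color_add(sum, sample_along(color_buffer, p, d, (float)i / TAPS));
    blurred[p] = color_scale(sum, 1.0f / TAPS);
}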
 
I've seen way too much confusion in the userbase with games claiming to have "motion blur". Someone correct me if I'm wrong, but I'd rather call that motion-trail blurring than actual antialiasing in the temporal dimension, which is what real motion blur/temporal AA amounts to.

Motion trails might not cost as much in performance as real motion blur probably would, but I haven't seen an implementation to this day that doesn't look like ass(tm).

As for motion blur, I have the feeling that 4x samples won't be enough either, hence my question about stochastic or semi-stochastic implementations. It would be interesting if some of the talent around here would code a small demo with a simple ball bouncing up and down on a surface, where the user can pick different sample densities from 1x up to 16x or even beyond if possible.
 
How good do you guys think an image space motion blur effect might look? You'd have to do another pass to a buffer that writes a velocity vector (w/ acceleration?), and then you'd need a final compositing pass. Writing the velocity texture would need input from the physics engine of a game, so it can't be done transparently (unless a graphics driver tries to trace where each object was last frame, which couldn't be very robust IMO).

I know there are potential artifacts at places on the screen where objects moving at different velocities intersect, but the depth of field effect looks pretty good when done in image space -- when done properly. With DOF you have to make sure that when you blur you don't sample an in-focus pixel, and likewise with MB you'd have to make sure you only sample points whose velocity vectors point the right way. Hmm, sounds expensive...

I recall image space motion blur looking pretty good in offline 3D rendering programs. I'm not sure what algorithm they use, though.
 
Objects rotating with high speed would cause heavy artifacting.

DOF is something that can be done in image space correctly, unlike motion blur.
 
Mate Kovacs said:
Objects rotating with high speed would cause heavy artifacts.

DoF is something that can be done in image space correctly, unlike motion blur.
Well, for high-speed rotating objects developers are supposed to do (and indeed do) the blurring themselves, like with car wheels in racing games.

For slower rotations, note that I mentioned acceleration, and that includes centripetal acceleration. This gives you a quadratic sampling curve instead of a linear one, so even a blur of, say, ±0.5 radians should look acceptable.
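In other words, each point would be swept along a quadratic path rather than a straight line; a minimal sketch (vec2 and the helper functions are assumed):
Code:
/* Quadratic motion path: position, velocity and acceleration per point,
   with t in [0, 1) covering the simulated exposure time. */
vec2 position_at(vec2 p0, vec2 v, vec2 a, float t)
{
    /* p(t) = p0 + v*t + 0.5*a*t^2 */
    return vec2_add(p0, vec2_add(vec2_scale(v, t), vec2_scale(a, 0.5f * t * t)));
}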

Finally, DOF can't be done correctly in image space, because you don't have any data for what's behind an object, i.e. the framebuffer is single valued.

However, I still have some hope for an image space technique, because:
A) offline renderers can do it pretty well
B) DOF is not correct, but looks acceptable, so even if this isn't correct, it may look good enough, like pretty much everything in realtime 3D graphics.
 
DemoCoder said:
Huh? The reality is, if you were looking at the world through the lens of a camera (and this is exactly what you are doing with 3D rendering, which has historically been modeled using a pinhole camera),
And that is what I was getting at -- the reality is that we're looking at games through a camera. Until we have technology advanced enough to permit "viewers" to dictate what objects should have motion blur while an entire scene is projected on a relatively flat 2D screen a few feet in front of our eyes, motion blur is nothing more than a controlled technology.

The "reality" part is that motion blur is something you look out for in 3D. The "realism" part is that motion blur is something you don't look out for. Thois may sound contradictory to what I previously posted, but what I meant is that programmers ensures we are focussing on "reality".
 