Motion Blur - Temporal Antialiasing

ophirv

Hello everyone !

This is my first post in this forum, although I have been visiting it for some time.

I've done some reading on motion blur and was quite surprised that this feature wasn't used in the real-time 3D graphics market until this year.
It is as important as antialiasing and HDR.

Let me explain why I was so excited about this feature:

When you watch a movie (or play a game, for that matter) we all know that we see it at 24 still frames per second or more.
In real life the human eye breaks every second into 24 segments, each of which is (1/24) 0.0417 seconds long.
Every human eye frame is like exposing a camera for 0.0417 seconds, while a video game frame is just a still frame - 0 seconds.
All video games are temporally aliased because they consist of 0-second frames and not of frames which represent an exposure of 0.0417 seconds.
The human eye can see the difference, especially in fast scenes like a fast-moving car or even a quick head turn.
Motion blur is the effect of temporal (time-based) antialiasing, which makes the moving scene so much more fluid and real.
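
The difference can be put in numbers: a 0-second snapshot samples the object at one instant, while a film-style exposure averages ("box-filters") its position over the shutter interval. A toy Python sketch (purely illustrative; the speed and units are made up):

```python
# Sketch: a 0-second snapshot vs. a 1/24 s "exposure" of a moving object.

def snapshot(speed, t):
    """Position sampled at a single instant (a 0-second game frame)."""
    return speed * t

def exposure(speed, t, shutter=1/24, samples=100):
    """Average position over the shutter interval, approximating the
    smear a camera records during a 1/24 s exposure."""
    dt = shutter / samples
    return sum(speed * (t + i * dt) for i in range(samples)) / samples

v = 240.0                 # made-up speed, units per second
print(snapshot(v, 0.0))   # 0.0 - a razor-sharp instant
print(exposure(v, 0.0))   # ~4.95 - the centre of a 10-unit-wide smear
```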

I hope that new 3D cards and games can do motion blur the right way - then they won't need more than 24 fps.

What do you think ?
 
Well if you've read this forum you know temporal aliasing was discussed a lot.

One thing to consider is that motion blur appears when an object has motion relative to your vision. That is, if you follow a moving object with your eye you will see it sharp, and every non-moving object as motion blurred.

In movies directors decide what they want to focus on so they can use motion blur and depth of field to direct your vision on the things they want you to look at.

In a game you are free to look at whatever you want, and DOF and motion blur can become distractions. For example, motion blurring every moving object is incorrect, since you might just want to follow that object with your eye.
 
I too would be interested in the effects of interpolation between frames. Though I doubt that this would remove the need for a high framerate, it would go some way toward making panning less jerky. Even at 100fps, individual frames can be discerned in fast motion. HDTV programming @ 60fps is more realistic in this regard.

edit: Project Offset has some pretty neat motion blur demos.
 
Panning has a problem if the rendered fps doesn't match the display device's refresh rate.
Matching rendering, e.g. 50Hz/50fps, will look better than a mismatched 100Hz/90fps. (Or the 72Hz/24fps thing you see in movie theaters.)
Of course motion blur can help, but it's not a universal solution.

I don't think trying to interpolate frames is actually a good idea...
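
The mismatch is easy to quantify: count how many display refreshes each rendered frame stays on screen. A throwaway Python sketch (illustrative only, assuming each refresh shows the most recently completed frame):

```python
# Sketch: refreshes per rendered frame at various refresh-rate/fps combos.

def refreshes_per_frame(refresh_hz, fps, seconds=1):
    """For each rendered frame, count the display refreshes showing it."""
    frame_times = [i / fps for i in range(int(fps * seconds))]
    counts = [0] * len(frame_times)
    for r in range(int(refresh_hz * seconds)):
        t = r / refresh_hz
        # latest frame completed at or before this refresh
        latest = max(i for i, ft in enumerate(frame_times) if ft <= t)
        counts[latest] += 1
    return counts

print(set(refreshes_per_frame(50, 50)))    # {1} - perfectly even
print(set(refreshes_per_frame(100, 90)))   # {1, 2} - uneven durations, i.e. judder
print(set(refreshes_per_frame(72, 24)))    # {3} - the cinema triple-flash
```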
 
Hyp-X said:
Well if you've read this forum you know temporal aliasing was discussed a lot.

One thing to consider is that motion blur appears when an object has motion relative to your vision. That is, if you follow a moving object with your eye you will see it sharp, and every non-moving object as motion blurred.

In movies directors decide what they want to focus on so they can use motion blur and depth of field to direct your vision on the things they want you to look at.

In a game you are free to look at whatever you want, and DOF and motion blur can become distractions. For example, motion blurring every moving object is incorrect, since you might just want to follow that object with your eye.

If motion blur is done right it won't distract you at all - the human eye sees motion blur all the time. If you apply motion blur over more than 0.0417 seconds it won't look natural and it will be distracting.

As I said, true motion blur should always be turned on, not just in fast scenes (although in fast scenes it is more noticeable).

Playing a game at 100 fps makes fast scenes look more natural, but it is still not close to true motion blur.
 
First let's throw out the stuff about "the human eye" and its supposed 24 fps tick. It's not true. I am quite able to perceive jerkiness during fast (but steady) camera sweeps in movie theaters and I know I'm not alone. And I'm not talking about cheaply made home videos and/or CGI but natural scenes from high production quality stuff like sweeps across the forest near the end of PJ's Lord Of The Rings part 1.

Besides, in interactive software there are more reasons for high framerates than just visuals, namely input turnaround latency.

Now.

Interpolating between frames is not motion blur. Your interpolation endpoints are still your "0 seconds" snapshots.
If you choose to interpolate geometrically, objects will move. But if you want to do this right you ultimately render another snapshot between the endpoints. You might as well run at higher fps. In fact there will be no difference in the work done or the implementation.

If instead you interpolate singular pixels, which is very cheap and hence practical to do, you won't achieve much. Objects will not move relative to the interpolation endpoints, they'll just fade in place which is very unnatural and doesn't help your vision system tracking motion speed etc. It might be an interesting special effect but that's all.
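
The "fade in place" problem is easy to demonstrate: cross-fading pixels between two snapshots never produces an intermediate position, only a ghostly mix of both endpoints. A tiny 1-D Python sketch (not a real renderer):

```python
# Sketch: per-pixel cross-fade vs. a true intermediate position, in 1-D.
# A bright one-pixel object moves from cell 2 to cell 6.

frame_a = [0, 0, 1, 0, 0, 0, 0, 0]   # object at cell 2
frame_b = [0, 0, 0, 0, 0, 0, 1, 0]   # object at cell 6

# Per-pixel interpolation at the halfway point: the object never
# appears at cell 4 - it half-fades at BOTH endpoints instead.
halfway_fade = [(a + b) / 2 for a, b in zip(frame_a, frame_b)]
print(halfway_fade)       # 0.5 at cells 2 and 6, zero everywhere else

# Geometric interpolation: actually moving the object is just another
# snapshot - i.e. equivalent to rendering at double the frame rate.
halfway_render = [0] * 8
halfway_render[(2 + 6) // 2] = 1
print(halfway_render)     # the object really is at cell 4
```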

Many claimed "motion blur" implementations used in games are really motion trail implementations. Unworthy.

And finally real motion blur.
I doubt you can do it properly with the rendering hardware we have. You can approximate it pretty nicely by producing multiple scene snapshots at multiple points in time, spread out across the time interval of the whole frame. The problem is that rendering such a subframe is every bit as expensive as rendering a normal snapshot frame, plus you need more memory to store the subframes, plus you need a composition pass.

In a nutshell you end up with these choices:
a) Scene without motion blur at 50fps
b) Scene with a crude approximation of motion blur (2 temporal samples) at (slightly) <25 fps

And A is clearly preferable to B IMO.
Things will start to get interesting if you can hit 50+ fps with 3 or 4 temporal samples, but don't hold your breath in the PC space. People are very used to burning any extra performance they may have by just upping the res and applying spatial AA and AF. And frankly, IMO that's a better investment than doing real-time motion blur.
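
The subframe approach can be sketched as a plain accumulation: render N snapshots spread across the frame interval and average them. Illustrative Python only - on real hardware each render() call below would be a full scene pass, and the final averaging would be the composition pass:

```python
# Sketch: motion blur via temporal supersampling of a toy 1-D "renderer".

def render(t, width=12, speed=96.0):
    """Toy renderer: a one-pixel object moving left to right."""
    img = [0.0] * width
    img[round(speed * t) % width] = 1.0
    return img

def motion_blurred_frame(t0, frame_dt, subsamples):
    """Average `subsamples` snapshots spread across [t0, t0 + frame_dt)."""
    accum = [0.0] * len(render(t0))
    for i in range(subsamples):
        sub = render(t0 + frame_dt * i / subsamples)
        accum = [a + s for a, s in zip(accum, sub)]
    return [a / subsamples for a in accum]   # the composition pass

# One 1/24 s frame, 4 temporal samples: the object's energy spreads over
# the 4 pixels it crossed instead of landing as a single sharp dot.
print(motion_blurred_frame(0.0, 1 / 24, 4))
```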
 
zeckensack said:
First let's throw out the stuff about "the human eye" and its supposed 24 fps tick. It's not true. I am quite able to perceive jerkiness during fast (but steady) camera sweeps in movie theaters and I know I'm not alone. And I'm not talking about cheaply made home videos and/or CGI but natural scenes from high production quality stuff like sweeps across the forest near the end of PJ's Lord Of The Rings part 1.

Besides, in interactive software there are more reasons for high framerates than just visuals, namely input turnaround latency.

Now.

Interpolating between frames is not motion blur. Your interpolation endpoints are still your "0 seconds" snapshots.
If you choose to interpolate geometrically, objects will move. But if you want to do this right you ultimately render another snapshot between the endpoints. You might as well run at higher fps. In fact there will be no difference in the work done or the implementation.

If instead you interpolate singular pixels, which is very cheap and hence practical to do, you won't achieve much. Objects will not move relative to the interpolation endpoints, they'll just fade in place which is very unnatural and doesn't help your vision system tracking motion speed etc. It might be an interesting special effect but that's all.

Many claimed "motion blur" implementations used in games are really motion trail implementations. Unworthy.

And finally real motion blur.
I doubt you can do it properly with the rendering hardware we have. You can approximate it pretty nicely by producing multiple scene snapshots at multiple points in time, spread out across the time interval of the whole frame. The problem is that rendering such a subframe is every bit as expensive as rendering a normal snapshot frame, plus you need more memory to store the subframes, plus you need a composition pass.

In a nutshell you end up with these choices:
a) Scene without motion blur at 50fps
b) Scene with a crude approximation of motion blur (2 temporal samples) at (slightly) <25 fps

And A is clearly preferable to B IMO.
Things will start to get interesting if you can hit 50+ fps with 3 or 4 temporal samples, but don't hold your breath in the PC space. People are very used to burning any extra performance they may have by just upping the res and applying spatial AA and AF. And frankly, IMO that's a better investment than doing real-time motion blur.

Ah, thanks for the explanation. I was assuming that interpolation between frames and motion blur were one and the same.
 
zeckensack said:
I am quite able to perceive jerkiness during fast (but steady) camera sweeps in movie theaters and I know I'm not alone.

Yes, but the reason is not the low framerate (24), it's how it's presented.
Every frame is flashed 3 times, so it's kind of the equivalent of a CRT running at 72Hz showing 24fps content.

When you follow something moving with your eye you get triple vision because of this. In the case of camera sweeps, that's practically the entire screen.

If they presented every frame just once it wouldn't have that effect - but then of course it would flicker at 24Hz, which would be extremely annoying.

Of course the good solution would be to have the movie at high framerates...
 
Hyp-X said:
Yes, but the reason is not the low framerate (24), it's how it's presented.
Every frame is flashed 3 times, so it's kind of the equivalent of a CRT running at 72Hz showing 24fps content.
I have some doubts that this is the case for all movie theaters ...
Anyhow, if this flashing is done, it does not change the argument IMO. It would be done to keep the illusion that every image is displayed constantly for 1/24s (which I think many theaters actually do, more or less; I might be totally wrong). And 72Hz would be a good enough illusion for me, judging from the 75Hz desktop refresh rate I used for a year or two.
Hyp-X said:
When you follow something moving with your eye you get triple vision because of this. In the case of camera sweeps, that's practically the entire screen.
Isn't this proof that 24 fps isn't good enough?
If this explains the jerkiness, it means the eye can "track ahead" to the expected position of a moving object with greater than 24fps resolution, and the jerkiness is the manifestation of the image not being there in time, not moving "constantly" enough.
Hyp-X said:
If they presented every frame just once it wouldn't have that effect - but then of course it would flicker at 24Hz, which would be extremely annoying.
You could hold the frame in place and keep the lamp on for as long as possible before moving on to the next frame. Kinda like an HDD servo can keep the head stack over a track for a while and then very quickly bump it over to the next track, on a bigger scale.

Digital projection systems shouldn't have any problem with holding an image at all.
Hyp-X said:
Of course the good solution would be to have the movie at high framerates...
Yes. Unfortunately IMAX et al never made it out of their niches. It's really much more immersive than regular cinema.
 
zeckensack: is T-buffer motion blur "the real" motion blur (temporal downsampling) or just a similar hack to the ones you mentioned?
 
zeckensack said:
If this explains the jerkiness, it means the eye can "track ahead" to the expected position of a moving object with greater than 24fps resolution, and the jerkiness is the manifestation of the image not being there in time, not moving "constantly" enough.

Your eye can move in continuous motion, so there's no resolution limit there.

Code:
X - object, ^ - your eye
   X
  ^
   X
   ^
   X
    ^
      X
     ^
      X
      ^
      X
       ^
Perceived flashes relative to your eye:
  x
 x
x
  x
 x
x

Because the eye holds the image for at least 1/24 s (actually even longer), you don't see the object jumping between 3 locations - instead you see 3 objects.
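
The diagram can be turned into numbers: with each 24 fps frame flashed three times at 72Hz and an eye tracking the object's true (continuous) motion, the flashes land at exactly three distinct retina-relative offsets. A small Python sketch of just the geometry (no claims about vision science; speed and units are made up):

```python
# Sketch: triple-flash presentation of 24 fps content, as seen by an
# eye that smoothly tracks the object's continuous motion.

speed = 24.0        # made-up object speed, units per second
flash_hz = 72       # each 24 fps frame is flashed 3 times

offsets = set()
for flash in range(12):                 # a handful of flashes is enough
    t = flash / flash_hz                # when this flash happens
    frame = flash // 3                  # which film frame it shows
    object_pos = speed * (frame / 24)   # position frozen on the film frame
    eye_pos = speed * t                 # where the tracking eye points
    offsets.add(round(object_pos - eye_pos, 6))

print(sorted(offsets))   # three distinct offsets -> "triple vision"
```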

You could hold the frame in place and keep the lamp on for as long as possible before moving on to the next frame. Kinda like an HDD servo can keep the head stack over a track for a while and then very quickly bump it over to the next track, on a bigger scale.

Well,
1. "Very quickly" might not be fast enough.
2. Now instead of 3 instances of the object you'd see a blur when you try to follow the moving stuff.

Digital projection systems shouldn't have any problem with holding an image at all.

See 2. above

Yes. Unfortunately IMAX et al never made it out of their niches. It's really much more immersive than regular cinema.

Agreed.
 
Reading over the Voodoo 5 paper linked, I began to wonder what would've happened if the Voodoo 5 had been released on time. They had a pretty great vision there, and kickstarted the FSAA movement. Gosh, competition is nice :)
 
Motion blur is useless in games. In movies and cut-scenes, maybe, but useless when actually playing games. Until we can control a monitor with just our eyes, it is useless.

Did I say Motion Blur is useless?
 
Motion blur (via temporal antialiasing) isn't any more useless than spatial AA in games. Spatial AA relieves spatial "jaggies" that annoy you, and temporal AA removes temporal "jaggies", which are equally annoying. The result of true temporal AA is a much more pleasant image in motion, which has little to do with whether it is a "cut scene" or not. The motion is just more pleasing to the eye.

This is especially true with wheels and helicopter/turboprop blades.
 
And how do we determine what is too fast for Tom and not for Jerry, when both are at the exact same distance to the object in question AND both have the exact same optical focus?

TAA and/or SAA will never work properly when it comes to vision focus (which is hugely related to motion blur).

If we aren't talking about reality (there's you, me and a billion others) when we are talking about motion blur, what's the purpose?

3dfx hyped this gimmick.
 
It has nothing to do with what is "too fast" for Tom and Jerry, and optical focus has nothing to do with it (or are you referring to "eye tracking"???). It has to do with the maximum refresh rate your setup is capable of.

Just like the arguments over hi-res framebuffers vs AA: if you had something like 300DPI display resolution on your monitor, you might not need AA/hi-filtering anywhere near as much.

Likewise, if you could sustain much higher refresh rates and rock solid FPS, you would not need temporal AA.

Unfortunately, we don't have 300DPI framebuffers capable of 192Hz.


There's a reason why 60fps looks better than 30fps in games. But at those refresh rates, motion artifacts still persist. Thus, if your display can't go above 60-75Hz (the vast majority of flat panels), your only choice is to render more FPS than you can display, and integrate them.

Sure, if Tom could follow rotor blades with his eye at 300RPM, he'd see no blur (he'd see a very artifacted image, perhaps sometimes with blades appearing to move backwards), but why don't we focus 3D image quality enhancements on use cases that are relevant, and not on the hypothetical guy whose eyes whip around at the speed of light in perfect sync with very fast-moving objects.
 
I think the problem here is not whether motion blur is a good or bad thing, but what it is applied to. In the real world, objects that are moving quickly relative to what your eyes are tracking will appear blurred. E.g. if you are watching the grass grow and a car drives quickly between you and the grass, the car will appear blurred. Now if you switched halfway through this example to tracking the car, then the car would no longer appear blurred but the grass would.

This poses a problem for when motion blur should be applied within games. You could apply motion blur relative to what the viewport is doing, but you have no knowledge of what the user's eyes are attempting to track, so you may be applying motion blur where it's not wanted and it may be annoying.

From my experience the most annoying manifestation of motion blur (in films) is when the camera is panning through a room and there is a sign with writing on a wall or door. It is next to impossible to read it while the panning continues.

This kind of argument about frame rates and motion blur has been going on for ages. "I can't tell more than 24fps", "I can see up to 90fps", "Motion blur would/(not) solve all problems", etc. I have wanted to write a little demo to show the differences if I ever had the time (like that's ever going to happen). All you would need is four squares, each with a bit of text in it, bouncing from one side of the window to the other, and something to control the speed. The top square moves at 60fps, the next at 60fps with motion blur, the next at 30fps, and the last at 30fps with motion blur. Any coders with a bit of spare time?
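
The demo doesn't need a full windowing framework to prototype: its core is just per-square position updates at different rates, with the blurred variants blending a few sub-positions per frame. A headless Python sketch of that logic (the square speed, window width and sample counts are all made up for illustration):

```python
# Sketch: core logic of the proposed demo - four squares bouncing
# horizontally at 60/30 fps, with and without motion blur.

def bounce(t, speed=300.0, width=400.0):
    """Horizontal position of a square bouncing between 0 and `width`."""
    x = (speed * t) % (2 * width)
    return x if x <= width else 2 * width - x

def frame_positions(t0, fps, blur_samples):
    """Sub-positions to draw (and blend) for the frame starting at t0.
    blur_samples=1 means a plain, un-blurred snapshot."""
    dt = 1.0 / fps
    return [bounce(t0 + dt * i / blur_samples) for i in range(blur_samples)]

# One frame of each of the demo's four rows, at t = 0.5 s:
for name, fps, samples in [("60fps", 60, 1), ("60fps+blur", 60, 4),
                           ("30fps", 30, 1), ("30fps+blur", 30, 4)]:
    print(name, [round(x, 1) for x in frame_positions(0.5, fps, samples)])
```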
 