Post processing: Motion blur....why?

motion blur occurs (in real life) when your brain cannot take in all the info presented right? (like looking out a car's side window at the landscape whizzing past)....so why artificially blur a game? isn't it sufficient to just let the scenery whiz by and your brain does the rest?
 
That would require that the frame rate be very high.

Motion blur works in real life because in real life fast-moving objects actually go through all the points between A and B.

On a computer screen, an object is at point A in frame 1, and point B in frame 2.
The computer needs to go back and fill in where the object would have been between frames to create the illusion of continuity.
 
I totally agree with you. In NFS Carbon I turned it off the moment I tried the game. I mean, it just looks terrible. It's okay for an effect that lasts a couple of seconds, like in NFS Underground when you use the nitrous, but for it to be constantly present is totally annoying.

Just my 2 cents...
 
motion blur occurs (in real life) when your brain cannot take in all the info presented right? (like looking out a car's side window at the landscape whizzing past)....so why artificially blur a game? isn't it sufficient to just let the scenery whiz by and your brain does the rest?
It's a lot more complicated than that.

"Motion blur" should really be called "temporal antialiasing" (and by this I don't mean that horrible name used by some for spatial AA done over several frames). Just as aliasing in the spatial domain looks wrong to our brain, so does aliasing in the time domain. We expect fast moving objects to be filtered over time otherwise the motion just looks jerky.

An example in real-life (TM) is the effect you get with strobe lighting. It doesn't look correct.
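As a toy illustration of that time-domain aliasing (my own example, not from the post above, and the numbers are hypothetical): sample a wheel spinning at 50 Hz with only 60 discrete frames per second, and the rotation per frame wraps around, so the wheel appears to creep slowly backwards; that's the classic wagon-wheel effect you also get under strobe lighting.

```python
# Toy numbers (hypothetical): a wheel spinning at 50 Hz sampled at 60 fps.
spin_hz = 50.0
fps = 60.0
true_rotation_per_frame = 360.0 * spin_hz / fps           # 300 degrees of real motion
# Discrete frames can only show the rotation modulo 360, folded into [-180, 180):
apparent = (true_rotation_per_frame + 180.0) % 360.0 - 180.0
print(apparent)  # -60.0: the wheel seems to rotate slowly backwards
```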
 
It's only a wild guess, but (IMHO) the biggest problem with MB is the lack of information about which object in the scene the viewer is following with his eyes. Even if an object has a high velocity relative to the screen, it shouldn't be motion blurred when one focuses on it. If that's the case, would it help if the rendering system had data from some eye-tracking system and adjusted the level of MB accordingly?
 
isn't the brain actually just taking snapshots as well? (unable to process all of it). i mean, the whole animation thing is an illusion (like a flip book of 100 drawings can trick the brain into thinking the drawing is animated). so if you're running a game at say 60 fps, isn't that already overwhelming the eye/brain's ability to capture it? i've just never been impressed with the motion blur i've seen and wondered why it was necessary. is a game running at 60fps unable to create 60 accurate frames, say looking sideways at 60 mph? 60 frames that your brain would pick up just as it attempts to take snapshots of the real world? the eye/brain is snapshotting, right? or what? is animation (illusion) completely different from how the eye/brain picks up and processes visual information?
 
The eye is NOT snapshotting; otherwise, indoor lighting would give you a headache and make you want to jump out of the window! How the brain handles that information afterwards is a fair bit more complicated, though...

Uttar
 
motion blur occurs (in real life) when your brain cannot take in all the info presented right?
No. It's the exact opposite, actually. Motion blur occurs because your brain IS taking in all the info presented, but is processing that into images at a slower rate than it is taking it in. If you were not taking in all the information, you'd have no knowledge that motion occurred in between the images that your brain creates. You'd only have a snapshot.

so why artificially blur a game? isn't it sufficient to just let the scenery whiz by and your brain does the rest?
Because when we render one state, we only render information about the absolute positions and orientations at a single given instant in time... not all the information that your brain would collect in a given period of time. Quite simply, there's information missing, and your brain can see that there's something missing. If not for this, animation at 30 fps would not lack smoothness (assuming you could do perfect motion blur, which you really can't). The fact that games inherently only render a single instantaneous snap as opposed to a full time exposure is why, even at 60 fps, it is not as smooth as a film at 24 fps (where the film is capturing a time-exposure).

isn't the brain actually just taking snapshots as well? (unable to process all of it). i mean, the whole animation thing is an illusion (like a flip book of 100 drawings can trick the brain into thinking the drawing is animated).
No... the brain is processing into images at a pretty high latency (about 80 milliseconds), which corresponds to the approximate threshold frame rate at which the brain starts to piece together separate images into a moving animation. The thing is that the brain still takes in and accumulates information into that image at rates well into the hundreds of fps, and until you can provide THAT much information, you can't really completely fool the eye.
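To make the snapshot-vs-exposure distinction concrete, here's a minimal sketch of accumulation-style motion blur (my own example, assuming a hypothetical render_scene(t) that returns the scene at time t as an HxWx3 float array): instead of rendering one instant per frame, average several sub-frame samples spread across the frame interval, which approximates the time exposure a film camera captures.

```python
import numpy as np

def render_frame_with_blur(render_scene, frame_start, frame_duration, samples=8):
    """Approximate a time exposure by averaging sub-frame renders.

    render_scene(t) is assumed to return an HxWx3 float image of the
    scene at absolute time t; 'samples' trades cost for blur quality.
    """
    accum = None
    for i in range(samples):
        # Spread sample times evenly across the open shutter interval.
        t = frame_start + frame_duration * (i + 0.5) / samples
        img = render_scene(t)
        accum = img.copy() if accum is None else accum + img
    return accum / samples  # a simple box filter over time
```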
 
UPO, I agree with you, but to me motion blur is more about simulating a camera than the human eye. It gives games a more cinematic feel to them. The same thing is true with depth of field.

Very few people have seen explosions and gunfire and exotic lands with their own eyes. We see this stuff in movies and photos. That's why things look more "realistic" (or more precisely, "photorealistic") with DOF and MB on.

Personally, I think this approach is just fine. We're unlikely to get 3D displays with 1000Hz refresh rates so that our eyes can do it all for us, so a camera feels like the most realistic alternative.
 
Hmmm, I've always wondered why video cards and monitors don't support interlacing. This provides the equivalent of doubling the FPS and the interlacing provides its own form of motion blur. Any comments on that?

P.S. I know everyone is touting "progressive scan", but to me there really isn't much difference unless you're in the business of screen captures.
 
Hmmm, I've always wondered why video cards and monitors don't support interlacing. This provides the equivalent of doubling the FPS and the interlacing provides its own form of motion blur. Any comments on that?

P.S. I know everyone is touting "progressive scan", but to me there really isn't much difference unless you're in the business of screen captures.

Interlacing is more a compromise made to save signal bandwidth than anything to do with frame rate. Any motion blur it provides is equal to the amount of image fidelity it destroys.
Most TV is motion-blurred thanks to the footage itself being motion-blurred, not the screen's delivery of the image.

edit:
An interlaced screen would only make the edges of some objects fuzzy, it won't somehow interpolate the blur needed for an object that crosses the screen.

Interlacing would probably be very noticeable on higher-resolution monitors that are running graphics output that isn't blurred.

Since the average monitor is much closer to the viewer than a TV is, it becomes more noticeable as well.
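To illustrate the point about fuzzy edges versus real blur, here's a toy sketch (frame_even and frame_odd are hypothetical full-resolution captures taken half a frame apart, standing in for the two field times): weaving two fields just interleaves scanlines from two different instants, so a moving object gets a combed edge rather than a smear along its motion path.

```python
import numpy as np

def weave_fields(frame_even, frame_odd):
    """Build one interlaced picture from two instants in time.

    frame_even and frame_odd are HxW (or HxWx3) arrays captured half a
    frame apart; only half the scanlines of each survive the weave.
    """
    woven = np.empty_like(frame_even)
    woven[0::2] = frame_even[0::2]  # even scanlines from the earlier instant
    woven[1::2] = frame_odd[1::2]   # odd scanlines from the later instant
    # If an object moved between the two instants, its edges now alternate
    # between two positions line by line (combing); nothing here fills in
    # the positions it passed through, so there is no true motion blur.
    return woven
```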
 
Hmmm, I've always wondered why video cards and monitors don't support interlacing. This provides the equivalent of doubling the FPS and the interlacing provides its own form of motion blur. Any comments on that?

P.S. I know everyone is touting "progressive scan", but to me there really isn't much difference unless you're in the business of screen captures.

Back in the early-to-mid 90's, PC monitors that supported interlacing were quite common. They were mostly abandoned because running content with too many sharp edges and small features invariably resulted in very noticeable flickering (the Windows desktop, as well as many modern PC games, are MUCH harsher in this respect than your ordinary TV broadcast or DVD movie).

Interlacing itself doesn't provide any blur; rather, it relies on the TV screen and/or the broadcast content to provide the requisite blur instead.

Also, it is essentially impossible to meaningfully perform interlacing on a non-CRT screen; if you try, the result is massive combing artifacts once you get quick movement. (on Plasma/LCD TVs, the usual workaround for this problem is to run the interlaced frames through a heavy de-interlacing post-processing filter, then display the de-interlaced result at a greatly increased refresh rate; this works most of the time, but most certainly not always.)
 
I think there are two basic biological aspects of the human eye that matter here:

- focus

Your eye contains a lens that can be contracted and expanded to focus the light falling onto your retina for optimum detail. The right contraction depends on the distance of the object. However, although your eye can focus quite quickly, if an object moves too fast, you can't focus on it properly, and it will stay blurry. This is one important part of motion blur, which is then amplified by the way the brain processes this data. But if people employing motion blur concentrate on the focus part, the motion blur effect already gets much better. ;) This is why, imho, it is and should be working closely together with depth-of-field effects and other focus-related effects.

Of course, normally you see with two eyes, which allows the brain to form a more detailed 3D view. That's a wholly different story though, but probably shouldn't be forgotten completely either.

- reception to movement vs reception to detail

The center of your eye's retina is more receptive to detail and less to motion, and the outer rims vice versa. This partly makes sense from the way the lens can focus something better in the center of your retina, and it makes things in your peripheral vision inherently out of focus. Therefore you see a lot of motion blur effects in (racing) games occur on the edges of your screen, so that they cover your peripheral vision and attempt to fool it into fast movement that you cannot focus on.

At least, that's how I understand it.
 
Focus has nothing whatsoever to do with motion blur. Motion blur is most pronounced with objects moving laterally. Lateral motion produces no change in focus.
 
Motion blur in games is way overdone. I often get sick from games that use motion blur.
Yup. Motion blur in games isn't temporal anti-aliasing. It's an overdone effect that can be quite nauseating at times. Just once I'd like to see a game implement motion blur for the express purpose of temporal anti-aliasing, instead of adding a gimmicky effect.
 
Motion blur in games is way overdone.
I wouldn't describe most of the MB occurrences in games as "overdone". I'd just call it mostly ridiculously implemented.

We can all blame 3dfx for that I think!! I mean, remember when 3dfx hacked Quake3? :)
 
Yup. Motion blur in games isn't temporal anti-aliasing. It's an overdone effect that can be quite nauseating at times. Just once I'd like to see a game implement motion blur for the express purpose of temporal anti-aliasing, instead of adding a gimmicky effect.
The typical approach used in PS2 games is just to blend in frontbuffer copies, which is more of a "ghosting" than a motion blur. While it's perfectly suitable for evoking a sense of disorientation, it's not really going to fill in the information gaps.
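For what it's worth, that frontbuffer trick boils down to a single linear blend of the new frame with whatever was displayed last. A rough sketch (the alpha value is arbitrary, and this is not the code of any particular game):

```python
import numpy as np

def ghost_blend(prev_output, current_frame, alpha=0.5):
    """Blend the previous displayed frame into the new one.

    Both inputs are HxWx3 float images. Because old output keeps getting
    mixed back in, bright objects leave exponentially fading trails
    ("ghosts") instead of being blurred along their actual motion.
    """
    return alpha * current_frame + (1.0 - alpha) * prev_output
```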

The shaderized motion blur techniques we're capable of doing now are at least a few notches better and involve multisampling the texture at offset positions based on a render target containing per-pixel velocities. While you basically only have linear data, it's still information between frames, and as long as the framerate is consistent and you're not taking snapshots of fast motion, it's an improvement.
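A rough CPU-side sketch of that velocity-buffer idea, assuming you already have the color buffer and a per-pixel screen-space velocity buffer as numpy arrays; a real implementation does this per fragment in a pixel shader, but the sampling pattern is the same.

```python
import numpy as np

def velocity_motion_blur(color, velocity, num_samples=8):
    """Blur each pixel along its own (linear) screen-space velocity.

    color:    HxWx3 image.
    velocity: HxWx2 per-pixel velocity in pixels per frame.
    Samples are taken at offsets centred on the pixel and averaged,
    mimicking the multisampled texture fetches of the shader version.
    """
    h, w, _ = color.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros(color.shape, dtype=np.float64)
    for i in range(num_samples):
        t = (i + 0.5) / num_samples - 0.5  # offset in [-0.5, 0.5)
        sx = np.clip(np.round(xs + velocity[..., 0] * t).astype(int), 0, w - 1)
        sy = np.clip(np.round(ys + velocity[..., 1] * t).astype(int), 0, h - 1)
        out += color[sy, sx]
    return out / num_samples
```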
 
Hmmm, I've always wondered why video cards and monitors don't support interlacing. This provides the equivalent of doubling the FPS and the interlacing provides its own form of motion blur. Any comments on that?

P.S. I know everyone is touting "progressive scan", but to me there really isn't much difference unless you're in the business of screen captures.

Others have already commented on interlacing. An alternative would be "frameless rendering". There was a paper presented on it just over 10 years ago. I'm not sure, however, how one would make efficient hardware to implement it.
 
Simon, that's awesome. It would seem that anything is better than the full-screen refresh they're doing now. If half the pixels are randomly refreshed every frame, then low FPS would just look more "motion blurred" than high FPS, and the choppiness would be gone forever. Sounds good to me :)
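For the curious, the gist of frameless rendering can be sketched in a few lines (my own toy version, not the algorithm from the paper; render_scene(t) is a hypothetical function returning a full frame at time t): keep a persistent display buffer and, each update, overwrite only a random subset of its pixels with freshly rendered values, so every displayed image is a mix of samples from slightly different times.

```python
import numpy as np

def frameless_update(display, render_scene, t, fraction=0.5):
    """Refresh a random subset of pixels to the scene state at time t.

    display is a persistent HxWx3 buffer that is modified in place.
    Untouched pixels keep older samples, which smears fast motion over
    time instead of showing it as discrete whole-frame jumps.
    """
    h, w, _ = display.shape
    mask = np.random.random((h, w)) < fraction
    fresh = render_scene(t)
    display[mask] = fresh[mask]
    return display
```

Note that rendering a whole frame and masking it is just a shortcut for the sketch; a real frameless renderer would compute only the selected pixels, which is where the efficient-hardware question comes in.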
 