Digital Foundry Article Technical Discussion Archive [2011]

It was pretty clear to me. Especially when I removed motion blur from the options, the animation didn't look nearly as fluid or smooth.

Personally, I'd like to see motion blur in some 60fps games.

Yes, the difference was pretty big - going from no motion blur to motion blur made it look like the game was running at an even higher frame rate... it looked amazing.

It sure as hell would've been awesome to see more games running at 60fps implement object motion blur; the sad thing is that we rarely get games running at 60fps in the first place this gen... seeing a Call of Duty game with an object motion blur option would've been really interesting, though. :p
 
@Shifty: Thank you for the great explanation.

Here is a comparison between 30fps with moblur, 60fps with moblur and 60fps without moblur.
[image: moblurredlpwa.jpg]
 
These are somewhat different things we're talking about here...

What the eye does with the incoming signals is a separate issue. There is something we perceive as blurring, but in reality it is continuous movement. This is what realistic graphics should aim to reproduce.

Old studies have determined that at around 24 discrete frames per second we start to accept the illusion of movement; but if the individual images are completely sharp (i.e. the camera has a fast shutter speed), we still get a sense of strobing. So to help sell the illusion, we use slower shutter speeds and let the lengthy exposure time create blur on the individual still images themselves (with the common 180-degree shutter, each 24fps frame is exposed for 1/48th of a second).
This gives 24fps cinema a certain look or feel, but it also destroys information. Usually, even with the strobing, our eyes could catch details or movement and track an object as it moves across the screen - but the motion blur covers it up completely.

On top of it all, no current video game can deliver realistic motion blur, just some more or less effective fake solutions instead. In most cases objects will keep their normal silhouettes, standing still, and just blur the pixels within those silhouettes. True 3D motion blur is still quite resource intensive even for CGI, but it is absolutely necessary for realistic movie VFX work.
Good game engines are pretty selective with their motion blur, and tune it back significantly compared to what's expected with cinema and usual shutter speeds. And they mostly use it to compensate for the lack of a better frame rate.
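
To make the "fake" concrete - a minimal sketch in plain C, with the buffer layout, resolution and sample count as purely illustrative assumptions rather than any shipping engine's code. Each pixel gathers colour along its own velocity vector, so background pixels with zero velocity never pick up a moving object's colour, which is exactly why the smear stays locked inside the silhouette:

typedef struct { float r, g, b; } Color;
typedef struct { float x, y; } Vec2;

enum { W = 640, H = 360, SAMPLES = 8 }; /* illustrative sizes */

static Color sample_clamped(const Color *src, int x, int y)
{
    if (x < 0) x = 0;
    if (x >= W) x = W - 1;
    if (y < 0) y = 0;
    if (y >= H) y = H - 1;
    return src[y * W + x];
}

/* dst/src: W*H colour buffers; vel: per-pixel screen-space velocity in
   pixels per frame, zero wherever nothing moved. Because each pixel only
   gathers along its OWN velocity, a static background pixel next to a
   moving object stays sharp - hence the hard silhouette. */
void motion_blur_gather(Color *dst, const Color *src, const Vec2 *vel)
{
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec2 v = vel[y * W + x];
            float r = 0.0f, g = 0.0f, b = 0.0f;
            for (int i = 0; i < SAMPLES; ++i) {
                /* spread samples from -0.5 to +0.5 of the velocity */
                float t = (float)i / (SAMPLES - 1) - 0.5f;
                Color c = sample_clamped(src, (int)(x + v.x * t),
                                              (int)(y + v.y * t));
                r += c.r; g += c.g; b += c.b;
            }
            dst[y * W + x] = (Color){ r / SAMPLES, g / SAMPLES, b / SAMPLES };
        }
    }
}

A real engine does this on the GPU against a velocity buffer written during the geometry pass, but the failure mode is the same: nothing smears past the edge.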

So I still believe that 60fps with no motion blur is a better way to deliver visual information in an interactive environment like a video game, compared to either 30fps with motion blur or 60fps with motion blur. The best, of course, would be 100-120fps with no blur, where our own eyesight would "add" the "motion blur" naturally, as with real-life movement. Gameplay shouldn't look and feel like cinema; it's not good for control and reaction times and such, and 3D is going to require smoother frame rates as well. James Cameron is pushing heavily for at least 48fps, because it's evident that 24fps is absolutely not enough even for cinema once you add stereoscopy. For filmmakers it's a more complex question because of the visual language of movies that we've gotten used to, but it's still more likely that they'll abandon 24fps and motion blur in favor of a smoother viewing experience, even though the associated costs will be significant: two eyes at double the frame rate means 4 times the rendering and storage capacity of a mono 24fps movie.

So again, in short - the entire existence of motion blur is there to compensate for a low frame rate, even in the cinema. Games shouldn't try to replicate that and instead leave it to the human vision to interpret and blur the images when necessary, IMHO.
 
In most cases objects will keep their normal silhouettes, standing still, and just blur the pixels within those silhouettes.
True, this recently posted screenshot shows it quite clearly (soldier in the bottom left).
http://img156.imageshack.us/img156/8852/avermediacenter20020207.jpg

Camera motion blur works really well in the Crysis games, though. Now I've rendered an interesting comparison here. To my eyes, the 30fps clip looks more pleasing than either of the 60fps clips :oops::oops:

You can verify for yourself that the clips are named properly, because QuickTime supports frame-by-frame stepping (left and right arrow keys).
http://www.mediafire.com/file/mgtck24i12fu2md/moblur_compare.mov
[image: moblur_smoothieo4.jpg]
 
One small but important question: is this blur applied in post, or are these three completely separate recordings?

Edit: never mind, now I see that it's been rendered out with photo-textured simple geometry.
Still, it'd probably look quite different if it were all fully modeled and lit, and then rendered with PRMan's true 3D motion blur.
 
They are all separate renders, and it's raytraced 3D blur, not done in post. The environment is a simple camera-mapped room.
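
In case it helps, the idea behind raytraced 3D blur boils down to temporal supersampling: average a number of renders spread across the interval the shutter is open. A rough sketch (render_at is a hypothetical stand-in for the renderer, and the resolution and sample count are made up):

#include <stddef.h>

enum { NPIX = 640 * 360 }; /* illustrative resolution */

typedef struct { float r, g, b; } Color;

/* Hypothetical renderer callback: draws the scene as it stands at the
   given time into 'out'. */
void render_at(float time, Color out[NPIX]);

/* Average n renders spread across [frame_time, frame_time + shutter).
   Geometry genuinely occupies different positions in each sample, so
   silhouettes blur correctly - unlike a screen-space smear. */
void render_with_shutter(Color out[NPIX], float frame_time,
                         float shutter, int n)
{
    static Color sub[NPIX];
    for (size_t p = 0; p < NPIX; ++p)
        out[p] = (Color){ 0.0f, 0.0f, 0.0f };

    for (int i = 0; i < n; ++i) {
        render_at(frame_time + shutter * (i + 0.5f) / n, sub);
        for (size_t p = 0; p < NPIX; ++p) {
            out[p].r += sub[p].r / n;
            out[p].g += sub[p].g / n;
            out[p].b += sub[p].b / n;
        }
    }
}

In practice a raytracer usually folds this into the rays themselves, giving each ray a random time within the shutter interval, which is cheaper than whole subframe renders.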
 
Plenty of studies - Google is your friend

Evolution has also taught us many lessons about why stereo vision (and hearing) is better than mono. We now have many studies to help us understand how. We are learning that we have neurons dedicated to many automated processing tasks, like identifying camouflaged objects using the slightly different views of the scene from the left and right eyes. Altogether, the benefits of stereo vision and sound are vast.

Desire is the greatest obstacle in the path to enlightenment.
 
Evolution has also taught us many lessons about why stereo vision (and hearing) is better than mono. We now have many studies to help us understand how. We are learning that we have neurons dedicated to many automated processing tasks, like identifying camouflaged objects using the slightly different views of the scene from the left and right eyes. Altogether, the benefits of stereo vision and sound are vast.

Desire is the greatest obstacle in the path to enlightenment.

I feel like I'm reading from the Ultimate Warrior's blog.
 
I don't know that blog but

I feel like I'm reading from the Ultimate Warrior's blog.

Do you know that your brain does calculus? It is doing calculus right now. It uses integration to decide eye position as you read this text. I hope you are very proud of your brain.
 
There are two different issues here. The brain itself doesn't take discrete snapshots as a camera does. TBH I don't know how its sampling works, but I imagine there's a form of perceptual 'samples per second' as the brain evaluates visual information. Regardless of that, the eye's physiology produces a blurred, unclear image.

The image on the retina is formed by photons breaking down photosensitive pigments, much like classic silver-based photography. This releases energy that sends a signal down the nerves, and the eye has to chemically rebuild the pigments, which takes time, resulting in image retention and a lack of sensitivity to the wavelengths just seen. The process is far from quick, and that's why we don't have amazing 1/2,000th-of-a-second frame definition. This was exploited by Heinz in the colour of their baked-bean tins, which is the negative of the colour of the bean sauce. If you stare at a blue Heinz beans tin for 30 seconds and then look at a white wall, that blue colour becomes desensitized as the pigments need to be rebuilt, and you see an after-image of Heinz baked-bean orange on the wall.

To see blur in action, as others say, wave your spread hand in front of you and the fingers blur. Alternatively, if you wave a firework sparkler around at night you see light trails, whereas a fast-shutter camera would just see a point of light on the sparkler and the sparks.

So eyes do work with light accumulation over time like a camera, with image retention and blur, only without discrete samples. Physiological motion blur is different from movie motion blur: in movies the blur is fragmented into time slices, so you get discrete juddering of blurry objects moving across the screen, whereas in real life the blur is one uniform transition. Potentially, computer games could create a more realistic blur than movies by blurring to points outside the time slice - across frames, as it were.

It's not quite like snapshots or time slices or however one may think of a series of discrete images. It's still an analog process.

What we think of as motion blur in real life, as opposed to motion blur in cinematography, photography, or CGI, is a combination of how quickly one's eyes can focus on an object and the light retention of the retina (as you illustrated quite well). A more extreme example of light retention is the use of flashbangs to clear a room.

You can actually train your eyes to focus more quickly, but there's nothing you can do about the light retention of the retina, which I believe deteriorates over time and with exposure to extremes (similar to gradual hearing loss). Thus each person has their own individual time to focus, and the perception of "motion blur" varies from individual to individual.

Where things differ from, say, cinematography is that each frame of film will capture the "blur" of a moving object from the start of exposure until the end of exposure. If one were able to take a snapshot of what the human eye sees at any given point, it would instead show an out-of-focus object with very little trailing blur. In the case of fast motion, the eye hasn't been exposed to the colour/light/object for a lengthy time as in your example above, nor is it an extremely bright flash as in my flashbang example.

However, since the eye is an analog device, a fast-moving object is just an out-of-focus object moving across your field of view. If you can track the object fast enough, either by eye movement or head movement, you can give your eyes enough time to focus on it, but then they don't have time to focus on the background. That isn't all that similar to a "smear" across a frame of film, a photo, or some forms of computer-generated motion blur. Well, it's similar in that there is a time associated with being able to focus on an object (cameras with fast-exposure film can focus far more quickly, for example), but how each perceives or represents it is quite different.

When things are put into motion, the brain will attempt to fit what it sees in with what it expects to see, and thus we generally get a good illusion of motion. Although I have to say, some forms of computer-generated motion blur are far from convincing.

Regards,
SB
 
Do you know that your brain does calculus? It is doing calculus right now. It uses integration to decide eye position as you read this text.
It bloody well does not!!! (That anger is aimed at scientists, not you. ;)) The brain is not good at numbers. Hell, numbers are a late invention of human language to describe quantities, appearing way later than nature's need to see. Scientists and engineers may be able to model physical behaviour or aspects of it using mathematical models, but those are models, not how the brain (or the universe) actually works. The brain works by making comparisons and best guesses, without the fidelity of perfect number crunching.
 
It's not quite like snapshots or time slices or however one may think of a series of discrete images.
I didn't say it was. Indeed, I explicitly said it isn't taking snapshots! By perception I mean the eye is providing a constant signal stream, but the brain has to 'look' at that stream to work out what it's seeing - it cannot be performing an unlimited number of comparisons or evaluations a second, so there's going to be a limit. Basically, the fact that we can perceive the difference between 24 and 60 fps shows the brain is processing the scene much faster than 24 times a second; whereas if 120 fps and 600 fps look just as smooth as each other, that suggests the rate at which we notice changes is somewhere around 100 times a second. I don't know what that rate is on average; it's going to vary a lot from person to person, as these things always do! While seeing, the eye is accumulating light continuously and feeding that information to the brain, but the brain isn't perceiving changes at the same frequency light is impacting the retina - very far from it.

But that is separate from the issue of blurring, which is a matter of light accumulation. Hence why I described the brain's perception in one paragraph, but went on to describe the eye in a separate one.
 
Do you know that your brain does calculus? It is doing calculus right now. It uses integration to decide eye position as you read this text. I hope you are very proud of your brain.

I believe calculus is just us humans trying to model natural processes and systems. Calculus is man-made. The human brain is... natural. Just because I wrote a flight simulator emulating the flight pattern of a rigid body communications satellite in geosynchronous orbit doesn't mean I know what it's like to be a satellite in space flying around Earth. I'm making best guesses...

Ahh... @ShiftyGeezer already made the same point! Yah, it's all smoke and mirrors.
 
(snip)

So again, in short - the entire existence of motion blur is there to compensate for a low frame rate, even in the cinema. Games shouldn't try to replicate that and instead leave it to the human vision to interpret and blur the images when necessary, IMHO.

The artists at my studio like motion blur because they want the "film" look. Same reason they want depth of field and film-based tonemapping curves. They like the look, and people who play the games like the look (either because they're conditioned to like it, or because they like the artistic style, or who knows why). Surely they're not alone, considering the tremendous amount of R&D going into reproducing film- and camera-based phenomena for real-time graphics.

Also most object-based motion blur implementations I've seen take measures to ensure the blur goes outside of object silhouettes, because it looks pretty terrible if you don't. Some do it by extending the geometry, some do it by blurring the velocity buffers, and some do it through other means.
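
For the velocity-buffer route, a rough sketch of one way it can work (my own illustrative assumptions on sizes, not any particular engine's code): dilate each pixel's velocity to the largest-magnitude one nearby before the gather pass, so pixels just outside a fast mover also sample along its motion and the blur escapes the silhouette.

typedef struct { float x, y; } Vec2;

enum { W = 640, H = 360, RADIUS = 4 }; /* illustrative sizes */

/* Replace each pixel's velocity with the largest-magnitude velocity in
   its (2*RADIUS+1)^2 neighbourhood. After this pass, a static pixel
   bordering a fast-moving object inherits that object's velocity, so a
   subsequent gather blur can pull the object's colour across the
   silhouette edge. */
void dilate_velocity(Vec2 *dst, const Vec2 *src)
{
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            Vec2 best = src[y * W + x];
            float best2 = best.x * best.x + best.y * best.y;
            for (int dy = -RADIUS; dy <= RADIUS; ++dy) {
                for (int dx = -RADIUS; dx <= RADIUS; ++dx) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sx >= W || sy < 0 || sy >= H)
                        continue;
                    Vec2 v = src[sy * W + sx];
                    float m2 = v.x * v.x + v.y * v.y;
                    if (m2 > best2) { best = v; best2 = m2; }
                }
            }
            dst[y * W + x] = best;
        }
    }
}

Production versions typically do this hierarchically - a coarse tile-max pass followed by a neighbour-max pass - to keep it cheap at full resolution.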
 
Also most object-based motion blur implementations I've seen take measures to ensure the blur goes outside of object silhouettes, because it looks pretty terrible if you don't. Some do it by extending the geometry, some do it by blurring the velocity buffers, and some do it through other means.

Heh, I was just noticing at the beginning of the BF3 trailer how the silhouette of the view weapon could still be seen at the starting point. It's bugging me. :p
 