Digital Foundry Article Technical Discussion Archive [2011]

Status
Not open for further replies.
Your mistake is thinking that our brain perceives 3D displays the same way it perceives the natural world. It doesn't. Otherwise we wouldn't have to simulate effects like motion blur.

The brain will interpret the data as 3D, but as blurry 3D ;)

You are mixing up different issues.

Motion blur and DOF are a result of eyeball function, not brain processing. Watching a TV (2D or 3D) means the eye always has all objects in equal focus regardless of virtual distance, because everything sits on a single perpendicular plane, and object movement is very slow because it covers only centimeters on a screen rather than feet or meters in the real world. This is why things like motion blur and DOF have to be simulated. For video games it is a form of exaggerated, artificial selective focusing and blurring, and it also helps mask video jerkiness.
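Games typically approximate that simulated blur with a post-process that smears each pixel along its screen-space motion vector. A minimal 1D sketch of the idea, assuming a flat list of grey values as the "image" and a made-up per-pixel velocity array (neither is any real engine's API):

```python
def velocity_blur_1d(image, velocity, taps=5):
    """Smear each pixel along its motion vector (in pixels per frame),
    averaging `taps` samples centred on the pixel. Illustrative only."""
    out = []
    n = len(image)
    for i in range(n):
        acc = 0.0
        for k in range(taps):
            # sample positions spread from -v/2 to +v/2 around the pixel,
            # clamped to the image bounds
            offset = int(round(velocity[i] * (k / (taps - 1) - 0.5)))
            acc += image[max(0, min(n - 1, i + offset))]
        out.append(acc / taps)
    return out
```

A static pixel (velocity 0) passes through unchanged, while a fast-moving bright pixel gets averaged with its neighbours along the motion direction, which is exactly the selective blurring described above.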
 
Texture detail is better perceived in 3D, and even better perceived are light, material types, and movement. This is why a camouflaged object or person is much easier to see in stereo images than in mono ones. Even the slightest movement is also much easier to spot.

The brain has amazing automated functions. For example, the brain even finds and highlights distant light sources at night.

It looks like you're arguing different points. The brain should not be able to fill in missing details from lower-resolution textures, but it can perceive more information in 3D art given the same resolution, DPI, etc.
 
Watching a TV (2D or 3D) means the eye always has all objects in equal focus regardless of virtual distance because it is all only on single perpendicular plane and object movement is very slow because it moves only centimeters on a screen rather than feet or meters in the real world.
I can see motion blur if I wave my hand a few inches from my head. The reason you don't see motion blur on a screen is that nothing is actually moving; it's just images flashing, and the brain can tell the difference.

Just like it can tell the difference between a natural 3(4)D world and a 3D display.
 
Texture detail is better perceived in 3D, and even better perceived are light, material types, and movement. This is why a camouflaged object or person is much easier to see in stereo images than in mono ones. Even the slightest movement is also much easier to spot.

The brain has amazing automated functions. For example, the brain even finds and highlights distant light sources at night.

Cool, do you have reference material for people to read for themselves?
 
Motion blur in the strict definition only exists on film, where the shutter is open for a certain amount of time and objects are moving within this time frame. We've all grown up on 24fps cinema and have been conditioned to expect it in movies. However, even this can be exploited - most of the beach landing scene in Saving Private Ryan and some of the fights in Gladiator used a very fast shutter speed and thus have almost no motion blur, even on fast-moving dirt particles thrown around by explosions or people fighting in the mud.

Eyesight is analog so theoretically there's no motion blur because - as far as I know - the retina doesn't accumulate light the way film or a digital sensor does. It's all constant, although of course fast moving objects are harder to focus on or follow with head/eye movements.

Nevertheless, motion blur in a video game only makes sense if the time slice represented by a single frame is relatively large, as at 30fps, but even there it can get disturbing if you're trying to track an object moving across the screen with your eyes. And most games don't have proper motion blur anyway.
Ideally a game should run at 60 to 100fps, and thus motion blur wouldn't be necessary; your eyes would treat everything the way they do in real life. And real cinematic motion blur is undesirable because it isn't fit for an interactive environment.
 
Texture detail is better perceived in 3D, and even better perceived are light, material types, and movement. This is why a camouflaged object or person is much easier to see in stereo images than in mono ones. Even the slightest movement is also much easier to spot.

The brain has amazing automated functions. For example, the brain even finds and highlights distant light sources at night.

Wow, you're just throwing darts at a dartboard and hoping something sticks.

A camouflaged person isn't any easier to see in 3D than in 2D. The only way to discover a properly camouflaged person is through motion, which applies in 2D and 3D alike. And 3D doesn't necessarily make motion easier to see than 2D does. In fact, in real life motion is easier to see out of your peripheral vision (which lacks most depth information) than through direct vision, for various reasons (lack of color information, lack of direct focus, etc.).

So many of your other observations are just plain incorrect as well. Motion blur has nothing to do with the focal point of your eye. It has far more to do with an analog progression of motion that cannot be replicated by any medium that reduces motion to a series of static images, without either capturing the blurred motion (as a camera does) or artificially blurring computer-generated images. With faster motion you have a harder time focusing on the object even if your eyes are both focused on the focal plane the object is moving through - a sign hanging above the point the object will be passing, for instance.

The brain also doesn't interlace any images. You perceive both images at all times, although usually one eye is dominant, such that if there is conflicting information the brain will tend to favor one eye's view over the other. In real life the two eyes differ far more in what each perceives than they ever will with simulated 3D (stereoscopic or otherwise).

This is easy to see just by focusing on your finger in front of your face and moving it towards or away from you. You'll notice that anything in the foreground or background doesn't undergo ANY sort of "combining" or interpolation. Each image remains distinct, at the same perceived sharpness (or resolution) as each eye would see when viewing the scene separately.

Or sit about 6 meters from a window and focus on a distant object through the window. Each eye will see a distinct and separate image of the window frame.

When you view an object at the point of focus, both eyes see an image that is virtually identical. If you do not have a dominant eye (like me) you'll actually see two images of each "side" of the object (a finger, for example). If you have a dominant eye, your brain will focus on one image and tend to ignore/fade out the other. I actually drove my optometrist crazy after my eye surgery, as I do not have a dominant eye and all his tests for one had me giving responses he wasn't expecting.

In real life the brain does derive and interpret all sorts of information from the world around us. But key to this is that it is a combination of the point at which an eye focuses and the distance between the eyes. In effect the brain is triangulating the locations of objects in a "3D" environment using three points of reference.

With illusory 3D (stereoscopic or otherwise) you lose one element, so the brain has to work with just two points and doesn't have the information to do accurately what it does in reality.
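The triangulation point can be made concrete with the standard pinhole stereo relation: depth equals focal length times baseline over disparity. The numbers below are purely illustrative, not measured eye parameters:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d.
    Larger disparity between the two views means a nearer object."""
    return focal_px * baseline_m / disparity_px

# Illustrative values: ~6.5 cm "interocular" baseline, 1000 px focal length
near = depth_from_disparity(0.065, 1000.0, 20.0)  # 3.25 m
far = depth_from_disparity(0.065, 1000.0, 2.0)    # 32.5 m
```

On a stereoscopic display the disparity is authored rather than produced by real geometry, and the focus distance stays fixed at the screen, which is the lost element described above.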

And even then nothing in either case allows for the virtual doubling or quadrupling of apparent resolution, clarity, or sharpness over what the actual resolution is. It also, as pointed out by others, cannot recreate information that is completely absent in the first place due to the lack of resolution.

Regards,
SB
 
Wow, you're just throwing darts at a dartboard and hoping something sticks.

A camouflaged person isn't any easier to see in 3D than in 2D. The only way to discover a properly camouflaged person is through motion, which applies in 2D and 3D alike. And 3D doesn't necessarily make motion easier to see than 2D does. In fact, in real life motion is easier to see out of your peripheral vision (which lacks most depth information) than through direct vision, for various reasons (lack of color information, lack of direct focus, etc.).

...

Do you also have reference material?
 
Half res 3D is obviously worse in image quality than full res 2D. Whether or not someone cares about it is a different issue, but there's no point in trying to debate this fact...
 
Do you also have reference material?

With regards to the camouflaged bit? Standard military training. Soldiers in Vietnam especially were trained to detect hidden VC through peripheral vision; actively searching with primary vision actually hindered, and in many cases prevented, finding snipers hidden in trees, for instance.

Regards,
SB
 
Half res 3D is obviously worse in image quality than full res 2D. Whether or not someone cares about it is a different issue, but there's no point in trying to debate this fact...

Yes, but the added dimension may give the gamer a different experience. We probably need more production techniques and experiments to flesh out this added depth… as in, no matter how high the resolution goes, a 2D image is still "flat". How can we use the depth info better?
 
Eyesight is analog so theoretically there's no motion blur because - as far as I know - the retina doesn't accumulate light the way film or a digital sensor does. It's all constant, although of course fast moving objects are harder to focus on or follow with head/eye movements.
The human eye does perceive motion blur. Somebody else mentioned waving your hand quickly. You can also whip your head left and right or look out the side window of a moving car.

I assume that the eye needs to collect light similarly to a sensor. 0ms of "exposure time", which would be needed for there to be no motion blur, cannot yield an image.
 
With regards to the camouflaged bit? Standard military training. Soldiers in Vietnam especially were trained to detect hidden VC through peripheral vision; actively searching with primary vision actually hindered, and in many cases prevented, finding snipers hidden in trees, for instance.

That doesn't tell me 3D primary vision is worse than 2D primary vision for detecting camouflaged objects, though. It could mean peripheral vision is ultra-sensitive to motion. Or that early VC camouflage was relatively primitive: http://www.roggenwolf.com/theory/

The purpose of camouflage is to prevent detection and recognition. This can be done by providing false cues to the human visual system. To do so effectively, a camouflage pattern should include a macropattern, to mislead peripheral vision; a micropattern, to mislead central vision; as well as an appropriate colourway, to mislead colour vision. Such a camouflage pattern might be called an integrated camouflage pattern.

But regardless, some reading material for stereoscopic 3D vision would be nice.
 
Eyesight is still analog - what the brain can do with the input is a different story.

Most people consider the definition of motion blur to be what you see in the cinema, or on a still photograph. This is very different from whatever you see with your own eyes. Reproducing it for movie VFX and game cinematics makes sense, mostly for artistic reasons - but gameplay is a different matter. 60fps with no motion blur should always be better.
 
60fps with no motion blur should always be better.
The excellent motion blur in Crysis at 30fps makes camera movement look MUCH better to me than 60fps games with no motion blur.
[strike]I am a staunch supporter of 24fps in film, so this preference no doubt carries over to games.[/strike]
Edit: Actually, I like higher frame rates in games, but they still have to have motion blur.
 
By any logic, at 60fps you should see about 40% of the motion blur that's there at 24fps, or half of what you see at 30fps.
I'd argue that it's such a small amount that you're better off without it; it isn't worth the effort and the resources, especially when you consider that it also has a bad effect on gameplay and visibility. Most 30fps games only use it to make the game feel smoother, but at 60fps you already get that feel :)
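The frame-time arithmetic behind that is simple enough to sketch; the speed value is arbitrary:

```python
def blur_length_px(speed_px_per_s, fps):
    """Distance in pixels a constant-speed object travels during one
    frame, i.e. the blur streak length if blur spans the whole frame."""
    return speed_px_per_s / fps

speed = 1200.0  # arbitrary: object crossing the screen at 1200 px/s
b24 = blur_length_px(speed, 24)  # 50.0 px
b30 = blur_length_px(speed, 30)  # 40.0 px
b60 = blur_length_px(speed, 60)  # 20.0 px
```

With these numbers the 60fps streak is 24/60 = 0.4 of the 24fps one, and exactly half of the 30fps one.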
 
By any logic, at 60fps you should see about 40% of the motion blur that's there at 24fps, or half of what you see at 30fps.
I'd argue that it's such a small amount that you're better off without it; it isn't worth the effort and the resources, especially when you consider that it also has a bad effect on gameplay and visibility. Most 30fps games only use it to make the game feel smoother, but at 60fps you already get that feel :)

I think Tekken 6 proved for most people that motion blur still really added quite a bit of smoothness even at 60fps. But obviously the gains will decrease with higher framerates in general.
 
Eyesight is still analog - what the brain can do with the input is a different story.
There're two different issues there. The brain itself doesn't take discrete snapshots as a camera does. TBH I don't know how its sampling works, but I imagine there's a form of perceptual 'samples per second' as the brain evaluates visual information. Regardless of that, the eye's physiology produces a blurred and unclear image.

The image on the retina is formed by photons breaking down photosensitive pigments, like classic silver-based photography. This releases energy that sends a signal down the nerves, and the eye has to chemically rebuild the pigments, which takes time, resulting in image retention and a lack of sensitivity to the wavelengths just seen. The process is far from quick, and that's why we don't have amazing 1/2,000th-of-a-second type frame definition.

This was exploited by Heinz in the colour of their baked bean tins, which is the negative of the colour of the baked bean sauce. If you stare at a blue Heinz beans tin for 30 seconds and then look at a white wall, that blue colour becomes desensitized as the pigments need to be rebuilt, and you see an after-image of Heinz baked-bean orange on the wall.

To see blur in action, as others say, wave your spread hand in front of you and the fingers blur. Alternatively, if you wave a firework sparkler around at night you see light trails, but a fast-shutter camera would just see a point of light on the sparkler and the sparks.

So eyes do work with light accumulation over time like a camera, with image retention and blur, only without discrete samples. Physiological motion blur is different to movie motion blur: in movies the blur is fragmented into time slices, and you get discrete juddering of blurry objects moving across the screen, whereas in real life the blur is a uniform transition. Potentially in computer games you can create a more realistic blur than movies by blurring to points outside the time slice, across frames as it were.
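That accumulation idea can be sketched by averaging several sub-frame renders; `render` here is a hypothetical stand-in for the engine's renderer, returning a flat list of grey values for a given time:

```python
def motion_blurred_frame(render, t0, t1, samples=8):
    """Average `samples` renders taken inside the frame's time slice
    [t0, t1], approximating the light a camera or eye would accumulate."""
    dt = (t1 - t0) / samples
    # render at the midpoint of each sub-interval, then average per pixel
    frames = [render(t0 + (i + 0.5) * dt) for i in range(samples)]
    width = len(frames[0])
    return [sum(f[p] for f in frames) / samples for p in range(width)]
```

Real-time engines approximate this with velocity-buffer post-processing rather than brute-force supersampling, but this accumulation model is what both are chasing.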
 
I think Tekken 6 proved for most people that motion blur still really added quite a bit of smoothness even at 60fps. But obviously the gains will decrease with higher framerates in general.

It did? I thought it was almost impossible to notice myself.
 
It did? I thought it was almost impossible to notice myself.

It was pretty clear to me. Especially when I removed motion blur in the options, the animation didn't look as fluid and smooth at all.

Personally I'd like to see motion blur in some 60fps games.
 
There're two different issues there. The brain itself doesn't take discrete snapshots as a camera does. [...]

So eyes do work with light accumulation over time like a camera, with image retention and blur, only without discrete samples.

Cheers Shifty, I was just about to post this myself ;-)

Saved me a job typing, and you explained it more eloquently than I would have.
 