Spatial AA vs. Temporal AA

The question is, do we really want Temporal AA / Motion Blur in games?

When watching a film, if there's a panning shot with a sign in the background or something else to read, you can't read it; it's blurred out by the motion. Now when you go to something like IMAX at 48 fps, the amount of blurring decreases and more detail can be seen in panning shots, and IMO it looks a lot more realistic.

Now the eye has a "sample rate" of around 25 fps I think, but it can detect flickering motion much higher than this, up to around 60 fps IIRC (although I could swear I can tell a difference up to 120 fps when playing Q3 :) ). To render motion blur in realtime, you're basically going to have to render multiple frames and average them, which means you're still going to have discrete images, just displayed as one, and the result will still display temporal aliasing. I think it would be best to render at as high a frame rate as possible and let the eye do the TAA.
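To sketch what that averaging would look like (an accumulation-buffer style resolve; render_at() below is just a made-up stand-in for whatever the engine's per-subframe render call would be):

Code:
# Minimal sketch of accumulation-style temporal supersampling / motion blur.
# render_at(t) is a hypothetical stand-in for the engine's render call; here
# it just draws a 1-pixel-wide object that moves 16 pixels per frame.

FRAME_TIME = 1.0 / 60.0   # one displayed frame
SUBSAMPLES = 8            # discrete renders averaged into that frame
WIDTH = 16

def render_at(t):
    frame = [0.0] * WIDTH
    pos = int(t / FRAME_TIME * WIDTH) % WIDTH
    frame[pos] = 1.0
    return frame

def resolved_frame(frame_start):
    # average SUBSAMPLES discrete renders spread across the frame interval;
    # the result is still built from discrete images, so some temporal
    # aliasing remains -- it is just hidden inside one displayed frame
    accum = [0.0] * WIDTH
    for i in range(SUBSAMPLES):
        sub = render_at(frame_start + (i + 0.5) / SUBSAMPLES * FRAME_TIME)
        accum = [a + s / SUBSAMPLES for a, s in zip(accum, sub)]
    return accum

print(resolved_frame(0.0))   # the object is smeared over the pixels it crossed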

That is unless motion data can be passed to the rendering engine and pixels rendered in an arc describing the motion of the pixel from one frame to the next. Maybe R900/NV80 :LOL:
 
Randell said:
wave a pen, it's a more dramatic effect.
Especially if it's a leaky fountain pen...

Seriously though, 3D games are effectively rendering as if you had a strobe light, and you know how that ruins the perception of smooth motion.
 
Jabbah said:
The question is, do we really want Temporal AA / Motion Blur in games?
In an ideal world, yes we do. The eye/brain expects continuous motion and finds it easier to interpret the data.
 
But, of course, we also want very high framerates, for the simple reason that the player's eye will not always be at the center of the screen. If one tracks an object across the screen, the ideal situation would be one in which that object would be crystal-clear. This requires very high framerates. Motion blur still helps, but not as much as when the player is assumed to be staring at the center of the screen.
 
Jabbah said:
When watching a film, if there's a panning shot with a sign in the background or something else to read, you can't read it; it's blurred out by the motion. Now when you go to something like IMAX at 48 fps, the amount of blurring decreases and more detail can be seen in panning shots, and IMO it looks a lot more realistic.

Correct, because IMAX has a higher sampling frequency (48 FPS) as well as motion blur. You'll need less temporal AA as your sample frequency increases, but unless you have infinite sample frequency, AA will always improve the situation.
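In sampling terms: a frame rate of $f_s$ can only represent motion frequencies up to the Nyquist limit,

$$ f_{\max} = \frac{f_s}{2} $$

so at 24 or 48 fps a fast pan easily exceeds that limit, and whatever lies above it either gets pre-filtered away (motion blur / temporal AA) or shows up as temporal aliasing.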
 
Jabbah said:
The question is, do we really want Temporal AA / Motion Blur in games?
Regarding motion blur, in most cases I'd say no.

A critical thing to remember is that a moving object is blurred when it is moving relative to the retina. When you track a moving object with your eyes, it is stationary relative to your retina. Forcing blurring on it is unnatural. This is the camera=eyes fallacy. On the screen you are free to track any object you wish. The rendering application does not have any information about this. Understanding this is essential.

Motion blur on film is often mentioned only as a positive thing. Sure, it helps to reduce temporal aliasing and makes the scene look more natural, but you lose information too. This isn't a problem for movies, since they are a largely passive experience: the camera directs your attention, camera movements are usually smooth and controlled, and you don't have to extract critical information from arbitrary locations on the screen. For computer games, where you have to focus your attention on arbitrary locations, the situation is completely different. And we are not trying to model film; we are trying to model reality.

Motion blur is beneficial for computer generated animations (as a cinematic effect) but for fast-paced computer games it is problematic.

Jabbah said:
Now the eye has a "sample rate" of around 25 fps I think

There's no such thing. The human visual system does not perceive images as discrete "frames."
 
No, the "lost information" is not the blur, it's the missing individual frames in between the action.

Sure, if we could produce display devices and 3D cards that could run at 1000+ Hz, you might have an argument, but we can't, just like we can't produce 10000x10000 desktop displays. Until that time, we need antialiasing. It *adds* detail, it doesn't remove it.
 
Not necessarily, if the sample pattern is poorly chosen. Nvidia's old Quincunx AA ended up filtering out high frequency details in the original image despite having twice as many samples per pixel.
 
GraphixViolence said:
Not necessarily, if the sample pattern is poorly chosen. Nvidia's old Quincunx AA ended up filtering out high frequency details in the original image despite having twice as many samples per pixel.

That's because Quincunx is not proper AA. It's not a matter of choosing the sample pattern... it's a matter of Quincunx utilizing improper pixels from which to take samples.

Proper AA (spatial or temporal) is a "good thing".
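As a toy illustration of why that hurts: the usual description of Quincunx is that each pixel is resolved as 1/2 of its own centre sample plus 1/8 of each of the four corner samples it shares with its neighbours. The sketch below simplifies the sample positions to exact pixel centres and corners, so it's only a rough model of the real pattern, not how the hardware actually places samples:

Code:
# Toy comparison: one centred sample per pixel vs. a Quincunx-style resolve.
# scene() is a made-up 1-pixel-wide bright feature on a dark background.

WIDTH = 8

def scene(x, y):
    return 1.0 if 2.75 <= x < 3.75 else 0.0   # bright vertical stripe

# no AA: one sample at each pixel centre
no_aa = [scene(px + 0.5, 0.5) for px in range(WIDTH)]

# Quincunx-style resolve: 1/2 * centre sample + 1/8 * each corner sample
# (corner samples sit on pixel corners and are shared with neighbours)
def quincunx(px, py):
    centre = scene(px + 0.5, py + 0.5)
    corners = sum(scene(px + dx, py + dy) for dx in (0.0, 1.0) for dy in (0.0, 1.0))
    return 0.5 * centre + 0.125 * corners

q = [round(quincunx(px, 0), 3) for px in range(WIDTH)]
print("no AA:   ", no_aa)   # the stripe stays a crisp single-pixel feature
print("Quincunx:", q)       # its peak drops and it bleeds into a neighbour

The stripe's total energy is still there, but its peak contrast drops and it spreads into a neighbouring pixel, which is exactly the kind of texture-detail loss being described.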
 
DemoCoder said:
No, the "lost information" is not the blur, it's the missing individual frames in between the action.
Moving objects on individual frames get smeared when you photograph them with film cameras. You lose information about those objects' high-frequency spatial characteristics.

Rendered images on individual frames without applied blur have all their high-frequency spatial information intact.
 
Bolloxoid said:
Moving objects on individual frames get smeared when you photograph them with film cameras. You lose information about those objects' high-frequency spatial characteristics.

Which is bad for a single snapshot case, but not for several snapshots over time.

Rendered images on individual frames without applied blur have all their high-frequency spatial information intact.

You are looking at this strictly from a spatial (non-time-dependent) viewpoint. In a moving situation, you're not necessarily supposed to have said "high frequency spatial information" at any given spatial location over a given period of time.
 
The display device and computer have the same limitations as the camera. They cannot sample the underlying temporal scene at the correct frequency, hence you get aliasing. This is not a matter of debate, it's a matter of mathematics.

There is no detail being lost. It's just going somewhere else. With respect to film cameras with long exposure times, you are trading spatial detail for temporal detail. If I take a photo with 1/4000th exposure, I'll get zero motion blur, but I'll lose all information about the movement of objects in the scene. I'll "freeze time" so to speak.
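To put a rough number on that trade: the streak a moving object leaves on the film is roughly its image-plane speed times the exposure time,

$$ \Delta x \approx v \cdot t_{\text{exp}} $$

so dropping from a 1/30th to a 1/4000th exposure shrinks the streak by more than a hundred times; the frame looks frozen, and the motion information goes with it.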

I expect that, like the "spatial antialiasing causes blurring!" debate back in the 3dfx days, some people will need educating in the future to disabuse them of the notion that temporal AA is just a "blur".
 
Joe DeFuria said:
Which is bad for a single snapshot case, but not for several snapshots over time.
Suppose that you film a car going by with a stationary camera. The car gets blurred on the film. Suppose I stand there next to you and follow the car with my eyes. My perception of the car is not blurred at all. If I afterwards watch your film clip, the car is blurred even if I follow it with my eyes. Hence, lost information, an artifact of the filming process.

You are looking at this strictly from a spatial (non-time-dependent) viewpoint. In a moving situation, you're not necessarily supposed to have said "high frequency spatial information" at any given spatial location over a given period of time.
Any moving real-world object has its spatial information perfectly intact and any observer following it with his eyes is supposed to have all that information available to him.
 
DemoCoder said:
The display device and computer have the same limitations as the camera. They cannot sample the underlying temporal scene at the correct frequency, hence you get aliasing. This is not a matter of debate, it's a matter of mathematics.
Have I somehow contradicted that?

There is no detail being lost. It's just going somewhere else. With respect to film cameras with long exposure times, you are trading spatial detail for temporal detail. If I take a photo with 1/4000th exposure, I'll get zero motion blur, but I'll lose all information about the movement of objects in the scene. I'll "freeze time" so to speak.
Spatial detail is lost, like I said, when filming moving objects with long exposures. It is a basic principle of photography. I don't understand what your objection is.
 
Because you are cherry-picking your example. Sure, with a 1/30th of a second exposure, you'll get spatial loss. On the other hand, if you render with very high temporal AA and "down filter" to the limits of your display device, there is no such loss.

If you have a game running at 120 Hz and no temporal AA vs. a game at 120 Hz but with 16x temporal antialiasing, the temporal AA version will have more detail than the non-TAA version. It's as simple as that.

Your original assertion about TAA not being appropriate for fast paced games is simply wrong.
 
DemoCoder said:
Because you are cherry-picking your example. Sure, with a 1/30th of a second exposure, you'll get spatial loss. On the other hand, if you render with very high temporal AA and "down filter" to the limits of your display device, there is no such loss.
If you read my post, you can see I said that motion blur in the context of films is not purely a good thing, which is relevant because the comparison to films is usually made when discussing rendering in computer games. A filmed sequence of FPS-like action, with the camera flailing around wildly and several fast-moving objects around, would be a mess. And yet people usually mention motion blur in films simply as a positive thing.

Your original assertion about TAA not being appropriate for fast paced games is simply wrong.
I used the term motion blur, which does not imply properly done temporal antialiasing.

The examples of motion blur I have seen in CGI have all mimicked the exposure effect of film and have severely blurred moving objects seen on still frames which brings us again to the fact that the "camera" in rendering is not the same as the eye.
 
Bolloxoid said:
If I afterwards watch your film clip, the car is blurred even if I follow it with my eyes. Hence, lost information, an artifact of the filming process.

No, it's an artifact of temporal aliasing.

Suppose in an FPS I hold my mouse still and watch a car go by really fast. Without temporal AA, I might see the car pop into view in one frame and pop out in another (if it's moving fast enough, I might not see it at all). I don't even have information to tell me which direction it was going, etc.

With temporal AA, I would get that information.

Now suppose I "follow" the car precisely with my mouse and/or cursor movement, matching its speed. Now there's no temporal aliasing on the car, and I see its perfect "spatial" representation.

Where's the problem?
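To put rough numbers on the fast-car case, assuming 16 temporal samples per 1/60 s displayed frame (all figures invented for illustration):

Code:
# Toy model of the fast car: it is only on screen for a small slice of one
# 1/60 s frame interval. All numbers are illustrative.

SUBSAMPLES = 16
FRAME_TIME = 1.0 / 60.0
CAR_ON_SCREEN = (0.20 * FRAME_TIME, 0.30 * FRAME_TIME)   # when it is visible

def car_visible(t):
    return CAR_ON_SCREEN[0] <= t < CAR_ON_SCREEN[1]

# no temporal AA: a single sample at the start of the frame misses the car
print("single sample sees the car:", car_visible(0.0))

# 16x temporal AA: average 16 samples across the frame interval; the car
# shows up at partial intensity, so the information is not simply lost
hits = sum(car_visible((i + 0.5) / SUBSAMPLES * FRAME_TIME)
           for i in range(SUBSAMPLES))
print("16x temporal AA coverage:", hits / SUBSAMPLES)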
 
Anti-aliasing refers to any technique intended to reduce or eliminate aliasing, so I don't know where you guys are getting this "proper" definition from. A blur filter is still doing anti-aliasing because it reduces aliasing, albeit at the expense of detail. Same with the motion blur Bolloxoid is describing. There is no requirement for anti-aliasing to add or even maintain detail.
 