Anti-Aliasing in movies?

Metalmurphy

Newcomer
I know you're probably thinking this is a stupid thread :p but I just need your help with something.

I just got into an argument with a guy who says that the 360 can do AA in movies. Obviously that's just ridiculous. However, he truly believes it can, and now he's not the only one; I'm almost the only one there who knows it's impossible (or is it?).

I'm aware that AA is done by rendering an image at a higher resolution and then displaying it at a lower one. That alone proves it can't be done in movies, right? How can you "render" a movie at a higher resolution? oO You can't. The closest thing you can do is upscale it, but when you show it again at the lower resolution it will just look the same as the original, right?

Then he said, "Well, if the source is at 1080p and the 360 shows it at 720p there will be AA." This is also false, right? I mean, that's not really AA, that's just downscaling. For AA you HAVE to render it at a higher resolution first, right?

Anyway, what I wanted you guys to help me out with (the more the merrier, because if only one guy posts they'll just say he doesn't know anything) is by posting, in technical terms, why AA in movies is simply ridiculous. The more I try to explain it to him, the more he tries to make me sound ridiculous, and everyone else thinks he's the one who's right oO.

Thanks in advance.

(Btw, I wasn't sure if this should be posted here.)
 
It's kind of true, in the sense that you will, in theory, get a better picture from downsampling a 1080p image to 720p than from a plain 720p image. It isn't called AA, though. It's just downsampling.

And most AV equipment won't downsample the way one might think; its concern isn't avoiding jaggies when it downsamples. So even if one considers downsampling to be AA, they'd likely be disappointed with the result.
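To make the distinction concrete, here's a rough sketch (plain Python, purely illustrative; it is not how the 360 or any real scaler works) of downsampling a single 1080-sample row of luminance to 720 samples two ways: a box filter that blends whatever source pixels each output pixel covers, versus a nearest-neighbour resize that just drops pixels and so leaves any hard detail intact.

```python
def downsample_box(row, out_len):
    """Area-average (box filter) resample of a 1D row of luminance values."""
    in_len = len(row)
    out = []
    for i in range(out_len):
        start, end = i * in_len / out_len, (i + 1) * in_len / out_len
        total = weight = 0.0
        for j in range(int(start), min(in_len, int(end) + 1)):
            cover = min(end, j + 1) - max(start, j)  # overlap of the output footprint with source pixel j
            if cover > 0:
                total += row[j] * cover
                weight += cover
        out.append(total / weight)
    return out

def downsample_nearest(row, out_len):
    """Nearest-neighbour resample: keep one source pixel per output pixel."""
    in_len = len(row)
    return [row[int(i * in_len / out_len)] for i in range(out_len)]

# Fine black/white detail, 1080 samples wide (two pixels on, two pixels off).
row_1080 = [float((x // 2) % 2) for x in range(1080)]
print(downsample_box(row_1080, 720)[:6])      # intermediate greys appear where the detail gets blended
print(downsample_nearest(row_1080, 720)[:6])  # only raw 0.0/1.0 values, in a scrambled (aliased) pattern
```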

It sounds like he just doesn't really understand what he is talking about... but in some sense the assumptions he seems to be making aren't too far from the truth.
 
Movies, as in film of real life, do not have aliasing in any common sense of the word. Computer-generated graphics show aliasing because the various precisions we use to determine the color of each pixel are all well below the effectively continuous sampling of reality that film captures. Our games don't have the sharp, smooth detail that CG movies do for exactly this reason; CG movies are rendered on huge networks of high-powered computers at precisions far beyond anything we use in gaming, taking great lengths of time on each frame.

That isn't to say that you won't ever see aliasing in movies, though. Poor-quality resizing and poor deinterlacing can cause very bad aliasing, so a bad setup will show aliasing in various situations, and sometimes a bad copy of the film will have such aliasing issues baked into it. So in that sense, the 360 could be mistaken for removing aliasing in movies when compared to those same movies watched through other means; but in fact, the movies don't tend to have any aliasing in the first place, and the 360 simply avoids introducing any.
 
Talk about getting technical :p

Great explanation kyleb.

So like I said, AA can't be done in movies simply because they're not being rendered in real time, so there's no way you can change the native resolution of the movie to a higher one.


Got one more question. Which do you think looks better: a 1080p movie on a 1080p TV (supposing there's no compression, like when using HDMI), or a 1080p movie on a 720p TV?
 
Metalmurphy said:
Got one more question. Which do you think looks better: a 1080p movie on a 1080p TV (supposing there's no compression, like when using HDMI), or a 1080p movie on a 720p TV?

1080p on 1080p, all things being equal... Oddly, you'll likely find a 720p image/video will look best on a 720p screen too (although that depends wholly on the scaler).

One more thing: AA doesn't necessarily mean rendering at a higher resolution... the most common type of AA used in games and such just takes multiple samples of the image and blends them. Supersampling involves rendering at a higher resolution, but multisampling (easier on performance) works at the stated rendering resolution.
 
I can't see why anyone would think that a movie would need to be anti-aliased. Games need it because the pixels along a polygon edge create a stair-step effect. The pixel color in a movie is already "anti-aliased": when it was sampled, it mixed the colors together.

I think you can authoritatively say that your friend has no idea what he's talking about.
 
OtakingGX said:
I can't see why anyone would think that a movie would need to be anti-aliased. Games need it because the pixels along a polygon edge create a stair-step effect. The pixel color in a movie is already "anti-aliased": when it was sampled, it mixed the colors together.

I think you can authoritatively say that your friend has no idea what he's talking about.

Yeah, real-life people don't need to be anti-aliased. It's weird that anybody would say that. Was that guy talking about CGI movies like Toy Story?
 
mckmas8808 said:
Yeah, real-life people don't need to be anti-aliased. It's weird that anybody would say that. Was that guy talking about CGI movies like Toy Story?
Real movies and CGI movies are no different: they're both already anti-aliased by the time they get recorded.
 
Grr lol he still doesn't get what AA is.


Now his argument is that "HD DVDs are at 1080p so AA is good for them" oO


I give up...
 
Metalmurphy said:
Got one more question. Which do you think looks better: a 1080p movie on a 1080p TV (supposing there's no compression, like when using HDMI), or a 1080p movie on a 720p TV?
A 1080p movie on a 1080p TV can look sharper than one downsampled to a 720p display, but how far you sit from the display plays a large part in how much resolution your eyes can resolve anyway. Beyond that, different TV technologies and designs have inherently different sharpness as well, so the sharpness you see on one display of a given resolution often won't be the same level of sharpness you find on other models of HDTVs.

Regardless, all of that is just sharpness, which is the only thing directly relevant to the resolution of the display. Factors like color saturation, contrast, and a whole host of other things all play important roles in what "looks better" as well, so the simple answer to your question is: it depends on which two TVs you are talking about.
 
The difference between video games and CG in movies is that movies don't need to be done in real time. Isn't it always going to be better for movies to generate the CG at a much higher resolution than you need, rather than using AA, and then transfer it to celluloid?
 
Bobbler said:
One more thing: AA doesn't necessarily mean rendering at a higher resolution... the most common type of AA used in games and such just takes multiple samples of the image and blends them. Supersampling involves rendering at a higher resolution, but multisampling (easier on performance) works at the stated rendering resolution.

Strictly speaking, both MSAA and SSAA render at a higher resolution and average several samples to create the final AA'd pixel; i.e. both are supersampling methods.

The difference is that while SSAA simply renders directly to a high-resolution buffer, MSAA has an *effective* lower rendering resolution. However, MSAA actually writes out to a super-sampled buffer - adjacent pixels will be copies of each other, except on an edge. This makes the assumption that AA only actually matters on an edge, saving pixel shading cost and also bandwidth (identical pixels are obviously easy to compress).

Coverage-based AA, as implemented (brokenly) in the PS2's GS, is an alternative method of AA which does not require a higher-resolution buffer.
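A toy way to picture that difference (pure Python, nothing like a real GPU pipeline; `shade` and `covered` here are hypothetical stand-ins for the pixel shader and the per-sample coverage test):

```python
SAMPLES = 4  # sub-sample positions per pixel

def resolve(samples):
    """Average the stored sub-samples down to the one displayed color."""
    return sum(samples) / len(samples)

def ssaa_pixel(shade, covered, background):
    # Supersampling: run the (expensive) shader at every covered sub-sample.
    samples = [shade(s) if covered(s) else background for s in range(SAMPLES)]
    return resolve(samples)

def msaa_pixel(shade, covered, background):
    # Multisampling: shade once per pixel, then replicate that color into every
    # covered sub-sample. Interior pixels end up holding identical copies; only
    # pixels crossed by an edge store a mix, which is why the buffer compresses well.
    color = shade(0)
    samples = [color if covered(s) else background for s in range(SAMPLES)]
    return resolve(samples)

# Toy case: a pixel half covered by a flat white triangle over a black background.
half_covered = lambda s: s < 2
white = lambda s: 1.0
print(ssaa_pixel(white, half_covered, 0.0))    # 0.5, shader ran per covered sample
print(msaa_pixel(white, half_covered, 0.0))    # 0.5, shader ran once for the pixel
print(msaa_pixel(white, lambda s: True, 0.0))  # interior pixel: four identical samples
```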
 
Aliasing is when high frequency components (above Nyquist frequency) aren't filtered out from the signal. Those components will then reappear folded down to lower frequencies. In games this is visible as jaggies on edges, and shimmer in textures.

When downsampling a movie (or anything else), there are frequency components that won't fit in the new resolution. These must be removed to avoid aliasing, and doing so can correctly be called anti-aliasing. It's not done in the same way as the usual AA in a 3D accelerator, but it's still AA.

So I'd say your friend is right.
(Well, I have no idea if the 360 actually does any filtering when downsampling. But it should, so I take his word for it.)
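For what it's worth, the argument is easy to demonstrate in a few lines (toy Python, not what the 360 or any real scaler actually does): a 480-cycle sinusoid fits in 1080 samples but sits above the Nyquist limit of a 720-sample row, so a correct downsample should remove it. Naive decimation instead folds it down into a bogus lower frequency at full strength, while even a crude low-pass filter before decimation knocks most of it out.

```python
import math

N_IN, N_OUT, CYCLES = 1080, 720, 480   # 480 cycles: fine at 1080 samples, above Nyquist at 720
src = [math.sin(2 * math.pi * CYCLES * x / N_IN) for x in range(N_IN)]

def rms(samples):
    """Root-mean-square amplitude: how much of the signal survives."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Naive decimation: point-sample the source with no filtering at all.
naive = [src[int(i * N_IN / N_OUT)] for i in range(N_OUT)]

# Crude low-pass (3-tap average) before decimating; a real resampler would use a
# better kernel (Lanczos etc.), but the principle is the same.
blur = [(src[max(0, x - 1)] + src[x] + src[min(N_IN - 1, x + 1)]) / 3 for x in range(N_IN)]
filtered = [blur[int(i * N_IN / N_OUT)] for i in range(N_OUT)]

print(round(rms(naive), 3))     # ~0.7: the unrepresentable detail survives, folded down as aliasing
print(round(rms(filtered), 3))  # ~0.2: most of it was filtered out before it could alias
```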
 
SPM said:
The difference between video games and CG in movies is that movies don't need to be done in real time. Isn't always going to be better in movies to generate CG at very much higher resolution than you need rather than using AA, and then transfer it into celluloid?
They do both: render at very high resolution, and use AA (both spatial and temporal...i.e. motion blur).
 
Basic said:
Aliasing is when high frequency components (above Nyquist frequency) aren't filtered out from the signal. Those components will then reappear folded down to lower frequencies. In games this is visible as jaggies on edges, and shimmer in textures.

When downsampling a movie (or anything else), there are frequency components that won't fit in the new resolution. These must be removed to avoid aliasing, and doing so can correctly be called anti-aliasing. It's not done in the same way as the usual AA in a 3D accelerator, but it's still AA.

So I'd say your friend is right.
(Well, I have no idea if the 360 actually does any filtering when downsampling. But it should, so I take his word for it.)
The 360 has a plain-Jane DVD player. Movies play at 480p at best on it, so you would actually be upsampling video to 720p, not downsampling from 1080p. So I don't think the 360 anti-aliases movies, since it never downsamples them.
 
Any non-square resolution change will degrade the quality of the image in this context. 720p content displayed at 720p resolution is likely to look better than 1080p content displayed at 720p resolution.
 
Chalnoth said:
They do both: render at very high resolution, and use AA (both spatial and temporal...i.e. motion blur).

Is the AA applied for effects (i.e. simulating atmospheric dust / softening the image, or for motion effects), or is it there to improve IQ?
 
SPM said:
Is the AA applied for effects (i.e. simulating atmospheric dust / softening the image, or for motion effects), or is it there to improve IQ?
I believe blurring is separate from AA. I've heard that 32x supersampling is the norm for movie-quality rendering... I'm not sure how many frames they blend for temporal AA (but freeze any frame of, say, Shrek, and you'll see what I mean; it's definitely for AA).
 
This thread brings to mind two lingering tangential questions I've always wondered about, and have sort of presumed an answer to, but never got around to researching.

First, when capturing an image digitally (i.e. CCD), does the resulting image contain a form of inherent AA? Does this depend on the field of view of each element? To be more descriptive, and if I wasn't so lazy I guess I'd do this experiment, imagine taking a 320 x 240 CCD image of a diagonal black line against a white background. When you view the image blown up, would you find greyscale pixels along the line edge, or only black and white pixels? And is this behavior dependent on the actual design of the CCD, i.e. do some have a wide enough field of view for each pixel to blend the white and black to grey, while others take more like a point sample of the middle of the pixel? Or are all digital captures more one or the other?

Second, line doublers and quadruplers have been around for ages. What in the hell are they actually doing? This has always nagged at me. I'm pretty sure they can't generate an image with more information than was in the original frame, so what exactly do they do that is of benefit? Are the extra lines a sort of blur of the ones above and below? Is that actually useful? Are any lines left intact, i.e. identical to lines in the original frame, or are all lines actually new lines created using information from the originals? I've always thought of this problem as a sort of "bizarro-AA" problem... how do you make an in-between line when you don't know what was there to begin with? Does a Gaussian blur look sort of like AA in this instance?

As for downsampling, I know that most hardware just does a poor job of this, but in theory shouldn't 1080 displayed at 720 look better than 720 displayed at 720? Shouldn't a good filtering of 1080 to 720 result in an image that has more information about the original scene than one captured natively at 720? And if that assumption is true in theory, is there any good hardware on the market that applies a good Lanczos- or Bezier-type filtering scheme when downsampling?
 
Bigus Dickus said:
First, when capturing an image digitally (i.e. CCD), does the resulting image contain a form of inherent AA?
Yes. The final color of the pixel can be thought of as determined by the sum total of all the photons that hit that pixel. So, in effect, it's something like a massively large number of stochastic samples.

Does this depend on the field of view of each element? To be more descriptive, and if I wasn't so lazy I guess I'd do this experiment, imagine taking a 320 x 240 CCD image of a diagonal black line against a white background. When you view the image blown up, would you find greyscale pixels along the line edge, or only black and white pixels?
Well, I suspect that what will actually happen is that you will get some discoloration of the pixels along the line edge that would look grey from a distance. This could happen if, for example, each CCD pixel is made up of three distinct color receptors that couldn't overlap. But I don't know for certain how color differentiation is done in today's digital cameras.
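If we ignore the color-filter issue and just treat each sensor element as either averaging its whole footprint or point-sampling its centre, the experiment is easy to fake in software. A hypothetical toy simulation (not a model of any real CCD):

```python
SUB = 8          # sub-samples per pixel edge, used to approximate area sampling
W, H = 16, 12    # a tiny sensor; the idea is the same as 320 x 240

def scene(x, y):
    """Continuous scene: white (1.0) everywhere except a thin diagonal black line."""
    return 0.0 if abs(y - x) < 0.4 else 1.0

def point_sampled():
    # Each element reads a single point at its centre.
    return [[scene(px + 0.5, py + 0.5) for px in range(W)] for py in range(H)]

def area_sampled():
    # Each element averages SUB*SUB samples spread over its whole footprint,
    # a stand-in for summing every photon that lands on it.
    img = []
    for py in range(H):
        row = []
        for px in range(W):
            acc = sum(scene(px + (i + 0.5) / SUB, py + (j + 0.5) / SUB)
                      for i in range(SUB) for j in range(SUB))
            row.append(acc / (SUB * SUB))
        img.append(row)
    return img

# Point sampling only ever yields pure black or white (hard jaggies); area
# sampling produces intermediate greys along the line -- the "inherent AA" in question.
print(sorted({v for row in point_sampled() for v in row}))
print(sorted({round(v, 2) for row in area_sampled() for v in row})[:6])
```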

And is this behavior dependent on the actual design of the CCD, i.e. do some have a wide enough field of view for each pixel to blend the white and black to grey, while others take more like a point sample of the middle of the pixel? Or are all digital captures more one or the other?
There may be some differentiation. The arrangement of the color elements may affect how much, or how noticeable, any discoloration is. The quality of the CCD will determine how much unused space there is between neighboring pixels. Unfortunately, I don't know the specifics of how these cameras are engineered, just that it seems likely that there are differences in this regard. I just don't know which aspects would be most noticeable.

As for downsampling, I know that most hardware just does a poor job of this, but in theory shouldn't 1080 displayed at 720 look better than 720 displayed at 720?
No, it never will, not if both come from the same even higher-resolution content. Think about it this way: if you start from some very high resolution (say, 4000p or somesuch), and downsample straight to 720p, it will look better than if you first downsample to 1080p, and then downsample again to 720p.

Also bear in mind that the resolution change between 1080p and 720p is rather small, so you'll have a whole lot of aliasing inherent in the downsampling. Consider, for example, taking one corner of the 720p image. This corner pixel represents 1.5 pixels in each direction from the 1080p image, such that its color, in a simple box filter, would be determined by the color of the corner pixel, 1/2 the color of the pixel to the right, 1/2 the color of the pixel below (if we're on the top left corner), and 1/4 the color of the pixel diagonally adjacent. But the colors of these pixels which go into your final 720p image will also be shared by the neighboring pixels in the final 720p image, which will result in blurring.
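To put numbers on that corner-pixel example, here's a tiny sketch (illustrative Python) computing the box-filter weights for the top-left 720p pixel, whose 1.5 x 1.5 footprint straddles four 1080p pixels; the 1, 1/2, 1/2 and 1/4 contributions above fall straight out of the overlap areas.

```python
SCALE = 1080 / 720   # 1.5 source pixels per output pixel, in each direction

def overlap(a0, a1, b0, b1):
    """Length of the overlap between intervals [a0, a1) and [b0, b1)."""
    return max(0.0, min(a1, b1) - max(a0, b0))

weights = {}
for sy in range(2):          # the footprint touches source pixels (0..1, 0..1)
    for sx in range(2):
        weights[(sx, sy)] = overlap(0, SCALE, sx, sx + 1) * overlap(0, SCALE, sy, sy + 1)

print(weights)
# {(0, 0): 1.0, (1, 0): 0.5, (0, 1): 0.5, (1, 1): 0.25}
# i.e. the 1, 1/2, 1/2, 1/4 weights described above; divide by their sum (2.25)
# to get the averaged output color.
```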

This situation is very similar to attempting to view an image on an LCD at non-native resolutions that aren't integer divisors of the native resolution.
 