Anyone know the level of AA that prerendered CGI movies like Toy Story use?

I believe it changes from title to title, and recently they have an adaptive approach that applies the proper amount of FSAA to each frame.


Anyway, they are also rendered at insanely high resolutions.
 
That's what I heard too: that the clean look of CGI movies has more to do with the insane resolution they're rendered at than with the kind of AA used. I remember reading an article years ago on Toy Story 2, and it said that the native resolution was something like 3000x4000 (approximate numbers), which was then downsampled to 640x480, or whatever resolution they were shooting for.
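Just to illustrate the downsampling idea (this is only a rough sketch in Python, not anything Pixar actually does, and the numbers are made up): rendering at several times the target resolution and then averaging blocks of pixels down is basically brute-force supersampling.

    import numpy as np

    def downsample(image, factor):
        """Box-filter downsample: average each factor x factor block of pixels."""
        h, w, c = image.shape
        # Group the pixels into factor x factor blocks and average each block.
        blocks = image.reshape(h // factor, factor, w // factor, factor, c)
        return blocks.mean(axis=(1, 3))

    # Made-up numbers: render a frame at 3200x2400 and average 5x5 blocks
    # down to 640x480 (the random array just stands in for a rendered frame).
    hires = np.random.rand(2400, 3200, 3)
    final = downsample(hires, 5)          # shape (480, 640, 3)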
 
clem64 said:
That's what I heard too: that the clean look of CGI movies has more to do with the insane resolution they're rendered at than with the kind of AA used. I remember reading an article years ago on Toy Story 2, and it said that the native resolution was something like 3000x4000 (approximate numbers), which was then downsampled to 640x480, or whatever resolution they were shooting for.

Somehow I doubt they used 640x480 on the big screen. ;)

Renderman (what they used for rendering) is incredibly powerful and high-quality when it comes to supersampling.
 
clem64 said:
That's what I heard too: that the clean look of CGI movies has more to do with the insane resolution they're rendered at than with the kind of AA used. I remember reading an article years ago on Toy Story 2, and it said that the native resolution was something like 3000x4000 (approximate numbers), which was then downsampled to 640x480, or whatever resolution they were shooting for.
I think normal movie frames are in the 2K range, but of course they have to downsample to 640x480 for the DVD version.
Renderman (what they used for rendering) is incredibly powerful and high-quality when it comes to supersampling.
I wonder what type of algorithm is used in something like that, and whether it can be run in real time. The quality of AA and lighting is the biggest difference to me between real-time and offline renders; poly counts aren't. I can tell whether even a simple ball is rendered offline or in real time.
 
The usual movie resolution is 2048*1536 pixels, but for a 2.35:1 aspect ratio, a large amount of the screen is cut from both the top and the bottom of the image. Movie VFX studios have to work at full res though, to have their work printed onto film; I dunno about full CG movies.

Sometimes, for special cases, VFX studios can up the resolution from 2K to 4K. IMAX requires high res as well, AFAIK it's 4K too.

Antialiasing is quite a mixed bag and depends heavily on the renderer's implementation. Most renderers use the same settings for both shading and geometry AA; however, a few, like the most commonly used Pixar PhotoRealistic RenderMan, have them decoupled. Also, most offline renderers offer adaptive AA and can increase the number of samples from 1 to as many as you want, depending on arbitrary contrast thresholds. Edges usually require higher sampling values, and lots of small geometry like hair can require huge numbers to get rid of the aliasing artifacts.
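To give a rough idea of how contrast-driven adaptive sampling works in general (just my own sketch in Python, not how any particular renderer implements it): start with a base number of samples per pixel and keep adding samples while the ones taken so far still differ by more than some threshold.

    import random

    def shade(x, y):
        """Toy stand-in for the renderer's shading function: a hard diagonal edge."""
        return (1.0, 1.0, 1.0) if x > y else (0.0, 0.0, 0.0)

    def contrast(colors):
        """Largest per-channel spread among the samples gathered so far."""
        return max(max(c[i] for c in colors) - min(c[i] for c in colors)
                   for i in range(3))

    def sample_pixel(px, py, min_samples=1, max_samples=64, threshold=0.05):
        """Adaptively supersample one pixel, driven by a contrast threshold."""
        colors = [shade(px + random.random(), py + random.random())
                  for _ in range(min_samples)]
        # Keep adding samples while the ones taken so far still disagree too much.
        while len(colors) < max_samples and contrast(colors) > threshold:
            colors.append(shade(px + random.random(), py + random.random()))
        # The final pixel colour is the average of all samples taken.
        n = len(colors)
        return tuple(sum(c[i] for c in colors) / n for i in range(3))

    # A pixel sitting on the edge gets many samples, a flat one stays at the minimum.
    print(sample_pixel(10, 10), sample_pixel(100, 10))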

A few examples...
3D Studio Max has a geometry AA of 8*8 samples, and by default it takes 1 shading sample. Various supersampling methods can be set either globally or per object / per material, with shading sample counts of 5, 9, 25, etc. Max, however, isn't that widely used in VFX production. Its default renderer is also commonly replaced with plug-in renderers like Brazil or V-Ray, which are quite similar to Mental Ray.
Mental Ray is the second most used renderer for VFX; it can work with adaptive undersampling/oversampling. You can set min/max values, where 1 means one sample, 2 means 2*2, etc. Usual sampling rates are 1-2/3-5, but I'm not so sure about what goes into movie resolution renders.

PRMan is the most common of the movie VFX renderers, and I can add some practical experience here. We're usually working with a pixel sampling rate of 4-6/4-6, which means 16-36 samples; the shading rate is 0.5, which means about 4 samples. Since PRMan shades micropolygon grid vertices only, and their tessellation is view-dependent, the actual number of samples can vary. However, we're quite a small studio and haven't produced that much movie VFX, so I'm not sure about the average for movie production. We can get away with small imperfections because game cinematics are always compressed, and that hides stuff like motion blur artifacts. I only remember that my pal working on Harry Potter 3 had to use a shading rate of 0.1 for some shots, which means ~100 shading samples per pixel.

So, that was the long answer. The short one is that the actual AA level is always hand-tuned for the requirements of each scene, and also depends on the capabilities of the renderer, but it always has to be enough to get rid of all the aliasing.
 
It's been some time since I read the Reyes paper by Catmull and co., but I seem to remember something about dicing the geometry to ~4 micropolygons per pixel (to get below the Nyquist frequency), and then stochastically sampling those four polys to make the sampling pattern less visible. So maybe it's 2x AA with stochastic sampling (and, as already said, very high resolution)?
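If I remember the jittering idea right (this is only my own illustration in Python, not code from the paper): the pixel is split into a regular subpixel grid and each sample is placed at a random position inside its cell, so the average density stays uniform but the pattern becomes irregular noise instead of visible aliasing.

    import random

    def jittered_samples(px, py, n=2):
        """Return n*n stochastic sample positions inside pixel (px, py).

        The pixel is divided into an n x n grid and one sample is placed at a
        random (jittered) position inside each cell, so coverage stays roughly
        uniform while the sample pattern is irregular.
        """
        step = 1.0 / n
        return [(px + (i + random.random()) * step,
                 py + (j + random.random()) * step)
                for i in range(n) for j in range(n)]

    # e.g. 2x2 jittered samples for the pixel at (100, 100):
    print(jittered_samples(100, 100, n=2))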
 
Wow, awesome post, Laa-Yosh. Much appreciated.

We're usually working with a pixel sampling rate of 4-6/4-6, which means 16-36 samples; the shading rate is 0.5, which means about 4 samples.
Is that 16-36 samples analogous to the 2X, 4X, etc. we see used in current real-time hardware?
 
No, the shading rate defines how small a micropoly has to be so that it won't get diced again. From Advanced RenderMan:

In the RenderMan Interface, the ShadingRate of an object refers to the frequency with which the primitive must be shaded (actually measured by sample area in pixels) in order to adequately capture its color variations. For example, a typical ShadingRate of 1.0 specifies one shading sample per pixel, or roughly Phong-shading style. In the Reyes algorithm, this constraint translates into micropolygon size. During the dicing phase, an estimate of the raster space size of the primitive is made, and this number is divided by the shading rate to determine the number of micropolygons that must make up the grid. However, the dicing tessellation is always done in such a manner as to create (within a single grid) micropolygons that are of identically-sized rectangles in the parametric space of the primitive. For this reason, it is not possible for the resulting micropolygons in a grid to all be exactly the same size in raster space, and therefore they will only approximate the shading rate requested of the object. Some will be slightly larger, others slightly smaller than desired.

AFAIK PRMan first shades the grids, then does the actual rendering with stochastic sampling. It's an interesting method, and it's what allows for the fast displacement and motion blur that are the trademarks of PRMan...
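Reading the book's description literally, the dicing estimate would be something like the sketch below (my paraphrase in Python, not actual PRMan internals): estimate the primitive's raster-space area, divide it by the shading rate to get the number of micropolygons, and pick a roughly square grid of that many parametric rectangles.

    import math

    def dice_grid(raster_area, shading_rate, aspect=1.0):
        """Estimate micropolygon grid dimensions for a primitive.

        raster_area  -- estimated on-screen size of the primitive, in pixels
        shading_rate -- requested shading sample area in pixels (e.g. 1.0, 0.5)
        aspect       -- rough u:v aspect of the primitive in raster space
        """
        target = raster_area / shading_rate           # micropolygons needed
        nu = max(1, round(math.sqrt(target * aspect)))
        nv = max(1, round(math.sqrt(target / aspect)))
        # The grid is uniform in parametric space, so the micropolygons only
        # approximate the requested shading rate in raster space.
        return nu, nv

    # A primitive covering ~400 pixels at ShadingRate 0.5 -> ~800 micropolygons:
    print(dice_grid(400.0, 0.5))   # (28, 28), i.e. 784 micropolygons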
 
You're welcome :)

ralexand said:
Is that 16-36 samples analogous to the 2X, 4X, etc. we see used in current real-time hardware?

Sort of. As I've said, PRMan has the shading and geometry AA decoupled, so it's actually comparable in this case. It's probably quite similar to ATI's AA, but obviously a lot more flexible.

However, most other renderers take geometry and shading samples at the same time, which is more similar to supersampling AA on the earlier generation of GPUs; but it is usually adaptive as well, so the renderer can up the sampling from the base value when required.
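A back-of-the-envelope way to see why the decoupling matters (illustrative numbers only, not benchmarks of any real renderer): count how many times the shader would have to run per frame in each scheme.

    def shader_invocations(width, height, visibility_samples, shading_samples):
        """Shader cost per frame for coupled vs decoupled sampling.

        Coupled (classic supersampling): the shader runs once per visibility sample.
        Decoupled (shading rate separate): the shader runs at the shading rate and
        each visibility sample reuses the nearest shaded value.
        """
        pixels = width * height
        coupled = pixels * visibility_samples     # shade every sample
        decoupled = pixels * shading_samples      # shade at the shading rate only
        return coupled, decoupled

    # 2048x1536 frame, 6*6 = 36 visibility samples, ~4 shading samples per pixel:
    print(shader_invocations(2048, 1536, 36, 4))
    # -> (113246208, 12582912): far fewer shader runs when shading is decoupled.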
 
Laa-Yosh said:
AFAIK PRMan first shades the grids, then does the actual rendering with stochastic sampling. It's an interesting method, and it's what allows for the fast displacement and motion blur that are the trademarks of PRMan...
I assume this can't be done with a real-time renderer. It would be nice to have that kind of motion blur in real time.
 
at the rate that consoles are gaining anti-aliasing quality & quantity each generation, we are not going to see CGI-level AA in our lifetimes :(
 
Megadrive1988 said:
at the rate that consoles are gaining anti-aliasing quality & quantity each generation, we are not going to see CGI-level AA in our lifetimes :(

Well, it depends on the resolution ;) If the Xbox 360 is going to apply 4x AA at 1280x720 and then scale the image to fit into 640x480 (letterboxed?), that will be a decent amount of anti-aliasing. I would guess the next gen in 2010-2012 will have even more. But I guess the problem then is shader and texture aliasing; those are aliasing issues that seem to be ignored so far. The quality of textures and effects is one of the bigger differences between CGI and a game. But oh well, over time it will get better. Never as good as CGI, but it should be able to fake it well enough eventually that the minor deviations won't matter.
 
I know this topic was comparing Toy Story and Toy Story 2 graphics using RenderMan, but did Shrek, Finding Nemo, and Monsters, Inc. use RenderMan as well?

I remember watching a making-of for the first Shrek, and they showed an example where Shrek and Donkey were in the castle with the Dragon, and it took them around 12-24 hours to render certain scenes (or parts of them). They were using server farms, if I remember correctly. Talk about needing hardware to run the graphics :devilish:
 
DreamWorks uses its own custom-developed renderer for Shrek, Shark Tale, Madagascar and the rest. It has a feature set similar to PRMan, but little is known about the details.
Pixar uses its own internal version of PRMan for all their movies: Toy Story, A Bug's Life, Monsters, Inc., Finding Nemo, The Incredibles - and of course for their shorts as well.

Shader AA is mostly a quality/performance tradeoff. With faster GPUs, it'll be done more often, but I'd say that the current generation will concentrate on performance and features first.
 
Acert93 said:
Megadrive1988 said:
at the rate that consoles are gaining anti-aliasing quality & quantity each generation, we are not going to see CGI-level AA in our lifetimes :(

Well, it depends on the resolution ;) If the Xbox 360 is going to apply 4x AA at 1280x720 and then scale the image to fit into 640x480 (letterboxed?), that will be a decent amount of anti-aliasing. I would guess the next gen in 2010-2012 will have even more. But I guess the problem then is shader and texture aliasing; those are aliasing issues that seem to be ignored so far. The quality of textures and effects is one of the bigger differences between CGI and a game. But oh well, over time it will get better. Never as good as CGI, but it should be able to fake it well enough eventually that the minor deviations won't matter.



No doubt it will improve, of course, but we won't be even close to the current CGI level of AA even next-next gen (Xbox3, PS4). Even with this coming gen (Xbox2, PS3) we are not at the level of the best real-time AA used in applications like commercial & military simulators of the 1990s, much less of this decade.
 
Megadrive1988 said:
No doubt it will improve, of course, but we won't be even close to the current CGI level of AA even next-next gen (Xbox3, PS4). Even with this coming gen (Xbox2, PS3) we are not at the level of the best real-time AA used in applications like commercial & military simulators of the 1990s, much less of this decade.

That's depressing, because that's the biggest difference between real-time and rendered graphics for me.
 