ralexand said: What kind of AA is Toy Story using? Thanks.
london-boy said: Some people mentioned some obscurely named 64x AA.
clem64 said: That's what I heard too: that the clean look of CGI movies has more to do with the insane resolution they're being rendered at than the kind of AA used. I remember reading an article years ago on Toy Story 2, and it said that the native resolution was something like 3000x4000 (approximate numbers), and it was then downsampled to 640x480, or whatever resolution they were shooting for.
I think normal movie frames are in the 2K range, but of course they have to downsample to 640x480 for the DVD version.
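A rough way to see what that kind of downsampling buys: box-filtering a frame rendered at several times the delivery resolution averages many rendered samples into each output pixel, which is effectively brute-force supersampling. A minimal sketch (NumPy, with illustrative resolutions rather than Pixar's actual numbers; production pipelines use better reconstruction filters than a plain box):

```python
import numpy as np

def box_downsample(frame, factor):
    """Average (factor x factor) blocks of rendered pixels into one output pixel.

    Plain box filtering; real pipelines typically use better reconstruction
    filters (Gaussian, windowed sinc, etc.).
    """
    h, w, c = frame.shape
    h, w = h - h % factor, w - w % factor           # crop to a multiple of the factor
    blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

# Illustrative numbers: render 5x wider/taller than the delivery target,
# so every output pixel is the average of 5*5 = 25 rendered samples.
render = np.random.rand(2400, 3200, 3)              # stand-in for a 3200x2400 rendered frame
dvd = box_downsample(render, factor=5)              # -> 640x480 output
print(dvd.shape, "samples per output pixel:", 5 * 5)
```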
I wonder what type of algorithm is used in something like that and whether it can be run in real time. Quality of AA and lighting is the biggest difference for me between real-time and renders; poly counts aren't. I can tell if a simple ball is rendered vs. real-time.
Laa-Yosh said: Renderman (what they used for rendering) is incredibly powerful and high-quality when it comes to supersampling. We're usually working with a pixel sampling rate of 4-6/4-6, which means 16-36 samples; shading rate is 0.5, which means about 4 samples.
In the RenderMan Interface, the ShadingRate of an object refers to the frequency with which the primitive must be shaded (actually measured by sample area in pixels) in order to adequately capture its color variations. For example, a typical ShadingRate of 1.0 specifies one shading sample per pixel, or roughly Phong-shading style. In the Reyes algorithm, this constraint translates into micropolygon size. During the dicing phase, an estimate of the raster space size of the primitive
is made, and this number is divided by the shading rate to determine the number of micropolygons that must make up the grid. However, the dicing tessellation is always done in such a manner as to create (within a single grid) micropolygons that are of identically-sized rectangles in the parametric space of the primitive. For this reason, it is not possible for the resulting micropolygons in a grid to all be exactly the same size in raster space, and therefore they will only approximate the shading
rate requested of the object. Some will be slightly larger, others slightly smaller than desired.
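As a back-of-the-envelope illustration of the numbers above, here is a small sketch using the textbook Reyes relationships: the PixelSamples x/y rates multiply to give supersamples per pixel (4x4 = 16 up to 6x6 = 36), and the dicer divides a primitive's estimated raster-space size by the ShadingRate to decide how many micropolygons to create. The function names and the example primitive size are made up for illustration, and the real dicer only approximates these sizes, as the quoted text notes:

```python
def micropolygons_for_primitive(raster_area_px, shading_rate):
    """Estimate how many micropolygons a primitive is diced into.

    raster_area_px: estimated on-screen size of the primitive, in pixels.
    shading_rate:   target micropolygon area in pixels (1.0 ~= one shading
                    sample per pixel; 0.5 shades finer than once per pixel).
    The real dicer rounds to a rectangular grid in parametric space, so the
    actual micropolygons only approximate this size.
    """
    return raster_area_px / shading_rate

def pixel_supersamples(x_rate, y_rate):
    """PixelSamples x_rate y_rate -> visibility supersamples taken per pixel."""
    return x_rate * y_rate

# A primitive covering ~10,000 pixels at ShadingRate 0.5 needs on the order
# of 20,000 micropolygons (spread over many grids).
print(micropolygons_for_primitive(10_000, 0.5))             # 20000.0

# "pixel sampling rate of 4-6/4-6" -> 16 to 36 samples per pixel.
print(pixel_supersamples(4, 4), pixel_supersamples(6, 6))   # 16 36
```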
ralexand said: Is that 16-36 samples analogous to the 2X, 4X etc. we see used in current real-time hardware? I assume this can't be done with a realtime render. It would be nice to have that nice motion blur in realtime.
Laa-Yosh said: AFAIK PRMan first shades the grids, then it does the actual rendering with stochastic sampling. It's an interesting method, but it allows for the fast displacement and motion blur that are the trademarks of PRMan...
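A toy illustration of the "shade first, then sample stochastically" idea and why motion blur falls out of it almost for free: each visibility sample gets a jittered sub-pixel position and a jittered time within the shutter, and the already-shaded surface is simply evaluated where it sits at that time. This is a deliberately simplified sketch for intuition, not how PRMan is actually implemented; the scene, velocity, and sample counts are invented:

```python
import random

def shade(x):
    """Pretend 'already shaded' color: a bright bar on a dark background."""
    return 1.0 if 10.0 <= x <= 20.0 else 0.1

def render_scanline(width=40, samples=16, velocity=8.0, shutter=1.0):
    """Stochastic pixel sampling with a jittered time per sample.

    Each sample picks a random sub-pixel position and a random time within
    the shutter interval, offsets the (pre-shaded) geometry by its motion at
    that time, and the averaging of samples produces motion blur.
    """
    image = []
    for px in range(width):
        total = 0.0
        for _ in range(samples):
            sx = px + random.random()          # jittered position within the pixel
            t = random.random() * shutter      # jittered time within the shutter
            total += shade(sx - velocity * t)  # object displaced by its motion
        image.append(total / samples)
    return image

print([round(v, 2) for v in render_scanline()])
```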
Megadrive1988 said: At the rate that consoles are gaining anti-aliasing quality & quantity each generation, we are not going to see CGI-level AA in our lifetimes.
Acert93 said: Well, it depends on the resolution. If the Xbox 360 is going to apply 4x AA at 1280x720 and then scale the image to fit into 640x480 (letterboxed?), that will be a decent amount of anti-aliasing. I would guess the next gen in 2010-2012 would be even more. But I guess the problem then is shader and texture aliasing; those are aliasing issues that seem to be ignored so far. The quality of textures and effects is one of the bigger differences between CGI and a game. But oh well, over time it will get better. Never as good as CGI, but it should be able to fake it well enough eventually that the minor deviations won't matter.
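For a sense of scale on that point: assuming an integer-ish box downscale and 4x MSAA, each 640x480 output pixel is fed by about three 1280x720 pixels, each carrying four coverage samples, so roughly 12 samples per displayed pixel. A quick illustrative calculation (ignoring filter shape and the texture/shader aliasing that multisampling doesn't address):

```python
def effective_samples(render_w, render_h, msaa, out_w, out_h):
    """Average number of hardware AA samples contributing to one output pixel
    when a multisampled frame is scaled down to a smaller display resolution
    (ignores filter shape and non-integer scaling artifacts)."""
    pixels_per_output_pixel = (render_w * render_h) / (out_w * out_h)
    return pixels_per_output_pixel * msaa

# Acert93's example: 4x AA at 1280x720 shown at 640x480.
print(effective_samples(1280, 720, 4, 640, 480))   # 12.0
```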
Megadrive1988 said: No doubt it will improve, of course, but we'll not be even close to the current CGI level of AA even next-next gen (Xbox3, PS4). Even with this coming gen (Xbox2, PS3) we are not even at the level of the best realtime AA used in realtime applications like the commercial & military simulators of the 1990s, much less of this decade.