Can Pixar's Rendering techniques be brought to the PC?

Pixar's techniques are most likely optimised to take advantage of offline rendering (ie not having to be realtime) and the massive renderfarms (ie massively parallel processing) they use to do these things. Today's consumer cards have to work in realtime and with very few cores (usually only one), so different techniques will be used.

You're really trying to compare apples and oranges here.
 
Seems to me that there are actually two questions here. The first is "can the calculations that Pixar do when rendering be done in real time on current hardware?" The answer to that is clearly "hell, no".

But the second and more interesting question is "can rendering done on current hardware look as good as Pixar rendering?" The answer to that seems a bit more ambiguous. The original Toy Story really doesn't look all that impressive any more (to my eyes, anyway). Pixar's later films would present more difficult problems: in Monsters, Inc. there's the question of the procedural animation of Sully's fur, and how on Earth you can render a space several miles across; in Finding Nemo there's the issue of rendering a scene underwater; and The Incredibles presents a whole host of problems (procedural animation of human hair and cloth, for instance). But the original Toy Story seems like a target that probably could be achieved fairly soon. We won't be able to produce something that looks the same, but we will (I suspect) be able to produce something that looks just as good (at least in terms of rendering - not in terms of the animation).
 
Could current graphic cards be used to replace render farms? Not in real-time of course, but as a way to accelerate the offline render process? Perhaps by adapting the code for use on GPUs?
 
From what I remember, doesn't RenderMan actually use REYES when the scene is being rendered? I'm unsure of this right now. Laa Yosh, can you clarify this for us?

It certainly has been REYES-based before, but I'm not sure what they've done to it to hack in some proper raytracing. The advent of ambient/reflection occlusion created a strong demand for raytracing in the VFX industry, and raytracing goes against the very nature of REYES...
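To make that concrete: ambient occlusion asks, for every shading point, how much of the hemisphere above it is blocked by other geometry, and the natural way to answer that is to fire rays, which classic REYES doesn't do. A rough Python sketch of the idea against a made-up toy scene (the sphere scene and every name here is just for illustration, nothing from PRMan):

import math, random

# Toy scene: a list of spheres, each as (center, radius).
SPHERES = [((0.0, -1000.0, 0.0), 1000.0),   # huge sphere acting as a ground plane near y = 0
           ((1.5, 0.5, 0.0), 0.5)]          # small sphere resting on the ground

def hit(origin, direction):
    """Return True if the ray from origin along direction hits any sphere."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    for (cx, cy, cz), r in SPHERES:
        lx, ly, lz = ox - cx, oy - cy, oz - cz
        b = 2.0 * (dx * lx + dy * ly + dz * lz)
        c = lx * lx + ly * ly + lz * lz - r * r
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue
        t = (-b - math.sqrt(disc)) / 2.0
        if t > 1e-4:                          # ignore self-intersection at the origin
            return True
    return False

def ambient_occlusion(point, normal, samples=256):
    """Fraction of hemisphere rays that escape the scene (1.0 = fully open)."""
    nx, ny, nz = normal
    unoccluded = 0
    for _ in range(samples):
        # Pick a uniform random direction, then flip it into the hemisphere around the normal.
        while True:
            x = random.uniform(-1.0, 1.0)
            y = random.uniform(-1.0, 1.0)
            z = random.uniform(-1.0, 1.0)
            d2 = x * x + y * y + z * z
            if 1e-6 < d2 <= 1.0:
                break
        inv = 1.0 / math.sqrt(d2)
        dx, dy, dz = x * inv, y * inv, z * inv
        if dx * nx + dy * ny + dz * nz < 0.0:
            dx, dy, dz = -dx, -dy, -dz
        if not hit(point, (dx, dy, dz)):
            unoccluded += 1
    return unoccluded / samples

# Shading point on the ground, half a unit away from the occluding sphere.
print(ambient_occlusion((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))

Hundreds of rays per shading point, each testing the whole scene, is exactly the kind of global query a scanline/micropolygon pipeline isn't built around.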
 
Pixar's techniques are most likely optimised to take advantage of offline rendering (ie not having to be realtime) and the massive renderfarms (ie massively parallel processing) they use to do these things. Today's consumer cards have to work in realtime and with very few cores (usually only one), so different techniques will be used.

Actually, PRMan doesn't really work well on multiple CPUs, especially when they're distributed across multiple computers. Until recently, a single frame has always used a single CPU, although it's more appropriate to call that a single layer of a frame: foreground, background, character #1, particle dust cloud #1, etc. These layers are then composited together in a 2D app to create the final image. That speeds up rendering and iterative tweaking a lot (no need to re-render everything to change an individual element).
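To illustrate that 2D compositing step: each layer comes back as an RGBA image, and the compositing app just stacks them back-to-front with an "over" operation. A minimal numpy sketch (the layer names, sizes and colours are invented for illustration):

import numpy as np

def over(front, back):
    """Porter-Duff 'over' for premultiplied-alpha RGBA images (H x W x 4 float arrays)."""
    alpha_front = front[..., 3:4]
    return front + back * (1.0 - alpha_front)

h, w = 480, 640

# Pretend these came back from the render farm as separately rendered layers.
background = np.zeros((h, w, 4))
background[:, :] = (0.2, 0.3, 0.5, 1.0)                    # opaque set/sky
character = np.zeros((h, w, 4))
character[100:300, 200:400] = (0.5, 0.1, 0.1, 1.0)         # opaque character #1
dust_cloud = np.zeros((h, w, 4))
dust_cloud[50:400, 150:500] = (0.09, 0.09, 0.09, 0.3)      # semi-transparent, premultiplied

# Stack back-to-front: background, then the character, then the dust in front.
frame = over(dust_cloud, over(character, background))

# Tweaking only the dust later means re-rendering just that one layer,
# then repeating this cheap 2D composite instead of the whole 3D render.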
 
Could current graphic cards be used to replace render farms? Not in real-time of course, but as a way to accelerate the offline render process? Perhaps by adapting the code for use on GPUs?

Pixar developed a lighting tool for Cars, where the GPU provides real-time deferred lighting and shading for a static scene. The artist renders out color, normal, reflection, occlusion, depth and other passes, the app uploads them into VRAM, and then a pixel shader composites them together and lights them in image space. I think it can also calculate shadows and evaluate simplified RenderMan shaders.

http://www.vidimce.org/publications/lpics/

It can't provide movie-quality images, but it can considerably accelerate the lighting process.
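To give a rough sense of what lighting "in image space" means: once colour, normal, position/depth and occlusion passes are cached per pixel, moving a light is just per-pixel arithmetic over those arrays, which a GPU chews through in real time. A toy CPU-side numpy sketch of the general idea (this is not Pixar's actual lpics shader; the Lambert-plus-occlusion model and all names are my own simplification):

import numpy as np

def relight(albedo, normal, position, occlusion, light_pos, light_color):
    """Image-space relighting from pre-rendered passes.
    albedo    : H x W x 3  surface colour pass
    normal    : H x W x 3  world-space normal pass (unit length)
    position  : H x W x 3  world-space position pass (or reconstructed from depth)
    occlusion : H x W x 1  ambient occlusion pass
    """
    to_light = light_pos - position
    dist = np.linalg.norm(to_light, axis=-1, keepdims=True)
    to_light = to_light / np.maximum(dist, 1e-6)

    # Lambertian term per pixel; a real tool would evaluate (simplified) surface shaders here.
    n_dot_l = np.clip(np.sum(normal * to_light, axis=-1, keepdims=True), 0.0, 1.0)
    direct = albedo * light_color * n_dot_l / np.maximum(dist * dist, 1e-6)

    ambient = 0.1 * albedo * occlusion
    return direct + ambient

h, w = 480, 640
passes = dict(
    albedo=np.full((h, w, 3), 0.8),
    normal=np.tile([0.0, 1.0, 0.0], (h, w, 1)),
    position=np.zeros((h, w, 3)),
    occlusion=np.full((h, w, 1), 0.9),
)
# Moving the light only re-runs this function over the cached passes,
# instead of sending the shot back through the full offline pipeline.
image = relight(**passes, light_pos=np.array([0.0, 5.0, 0.0]),
                light_color=np.array([1.0, 1.0, 1.0]))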
 