nutball said:
How widely used is ray-tracing (non-realtime of course) in the high-end CGI business, for movies, etc. You know, the Pixar / Renderman / ILM / blah blah blah industry
This is the $10,000 question.
Raytracing is used very, very sparingly in CGI, only for effects that really require it, and even then it typically contributes just 1-3 passes that are combined with dozens of others to get the final composite.
For example, in the case of Gollum, the skin lighting used subsurface scattering that required raytracing, and there was an ambient occlusion pass too. As an interesting side note, the tracing itself was performed using an array of depth maps (from shadow-casting lights) scattered over the surface of a large sphere, all done in the RenderMan shader - so no built-in raytracing features were used. The rest of the passes, including the eye shaders with reflections and so on, used no raytracing at all, as far as I know (I can ask some Weta guys, though, if you want to know for sure about the eyes).
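To make the sphere-of-depth-maps trick a bit more concrete, here's a minimal Python sketch of the underlying idea: occlusion at a surface point is just the fraction of directions (think one shadow-casting light per direction, placed on a big sphere) from which the point is visible. Everything here is hypothetical illustration - `visible` stands in for a shadow-map lookup, and this is Monte Carlo occlusion in general, not Weta's actual shader code.

```python
import math
import random

def sphere_directions(n, seed=0):
    """Generate n roughly uniform directions on the unit sphere."""
    rng = random.Random(seed)
    dirs = []
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(max(0.0, 1.0 - z * z))
        dirs.append((r * math.cos(phi), r * math.sin(phi), z))
    return dirs

def ambient_occlusion(point, normal, visible, n_samples=256):
    """Fraction of the hemisphere around `normal` that is unoccluded.

    `visible(point, direction)` is a stand-in for a depth-map test: it
    returns True if nothing blocks that direction from the point. In the
    trick described above, each direction corresponds to one depth map
    rendered from a light on a large surrounding sphere - no rays fired.
    """
    open_count = 0
    total = 0
    for d in sphere_directions(n_samples):
        # Only directions above the surface count.
        if d[0] * normal[0] + d[1] * normal[1] + d[2] * normal[2] <= 0.0:
            continue
        total += 1
        if visible(point, d):
            open_count += 1
    return open_count / total if total else 0.0
```

A fully unoccluded point returns 1.0 and a fully occluded one returns 0.0; a real pass would bake this per-vertex or per-sample into the occlusion image that gets composited later.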
Other cases for raytracing were the big bottle stuffed with nuts in A Bug's Life, and some scenes in the Matrix sequels (the mega-fight at the end, with all the rain around the CG doubles).
But most of the shadows are depth-mapped shadows, processed in a 2D compositing app to get the area-light look; most of the reflections are simply rendered into textures; and most of the lighting is spotlights, with global illumination contributing only an occlusion pass. It's faster and artists have more control; in other words, it's better.
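The "process a depth-mapped shadow in 2D to get the area-light look" step is basically a blur on the shadow pass. Here's a tiny pure-Python sketch of that idea - a box blur over a hard 0/1 shadow mask to fake a penumbra. The function name and data layout are my own invention, and this is of course a cheap compositing trick, not a physically based soft shadow:

```python
def soften_shadow(shadow, radius=1):
    """Box-blur a hard shadow mask to fake an area light's penumbra.

    `shadow` is a 2D list of floats (1.0 = lit, 0.0 = in shadow),
    standing in for a rendered depth-map shadow pass. The blur turns
    the hard edge into a gradient, which reads as a soft shadow.
    """
    h, w = len(shadow), len(shadow[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += shadow[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out
```

A hard edge like `[1.0, 1.0, 0.0, 0.0]` comes out with intermediate grey values at the boundary - exactly the penumbra gradient an area light would produce, for the cost of a 2D filter instead of hundreds of shadow rays.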
The general problem with this discussion is that it centers only on the technical aspects - how easy it is to code, accelerate, or pre-calculate the effects, what to do with the hardware, etc. What it misses is the artistic part, which in the end is more important, IMHO. After all, most people don't care about how you generate the images they see in games, movies and commercials - they only care about the looks, and that is defined by the artists.
Now, artists do care about the physics of the world around us, but they generally prefer not to let it interfere with their work. That is why actors wear makeup in movies; that's why there is a need for lighting, to add atmosphere and guide the viewer's eye around the scene; that's why you usually can't turn the camera around in movie scenes (the lighting would fall apart). You could say that they paint with light just as they paint with the colors of the clothing, the set, the scenery and so on.
Adding raytracing would only make the life of the CG artist a lot more complicated: instead of painting in the effect they want to achieve, they would first have to fight the realistic results already present in the scene, just like in real life. I can tell you that it would only anger them, especially if they also had to wait longer for their renders.
Just look at the Quake3 stuff - sure, there are shadows, reflections and whatever, but does it really make the game look any better?
So, in my opinion, raytracing in hardware is not the way to go, and I wouldn't expect it to replace today's approach at all. I realize the differences between a movie scene with a defined camera path and a game world with complete freedom, but I still believe that the answer is to develop better tools for the artists (as an example, to allow them to really PAINT lighting), instead of trying to calculate everything as physically correctly as possible...