I don't think you get the point here.
PRMan's standard rendering method (the Reyes pipeline) goes roughly like this (there's a toy code sketch after the list):
- load all the objects in the scene
- split them up into smaller pieces until they're below a preset size limit (the smallest primitive is usually a quadratic B-spline patch; PRMan converts even polygons to these)
- once they're small enough, check whether they're visible and throw them away if not
- for actual rendering, dice (tessellate) each primitive into a grid of micropolygons, each smaller than a pixel (the exact size is set by the shading rate)
- apply displacement where it's used
- shade the vertices of the resulting grid (this includes lighting, shadows etc.)
- recombine everything into pixels using stochastic sampling
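To make the flow concrete, here's a toy Python sketch of that loop. Everything in it (the split limit, the 4-way split, a primitive reduced to a bare screen-space size) is made up for illustration; real primitives are patches, not numbers:

```python
# Toy model of the bound/split/dice/shade flow; all names and numbers
# here are invented for illustration.

SPLIT_LIMIT = 16.0   # max screen-space size before a primitive gets diced
SHADING_RATE = 1.0   # target micropolygon size, in pixels

def reyes(prim_size, visible=True):
    """prim_size: rough screen-space size of a primitive, in pixels."""
    if not visible:
        return 0                       # culled: never diced or shaded
    if prim_size > SPLIT_LIMIT:
        # split into smaller pieces and recurse
        return sum(reyes(prim_size / 2.0) for _ in range(4))
    # dice: tessellate into micropolygons smaller than a pixel
    micropolygons = int((prim_size / SHADING_RATE) ** 2)
    # displacement and shading run per grid vertex here, then the
    # shaded grid is stochastically sampled into the framebuffer
    return micropolygons

print(reyes(256.0))  # micropolygons shaded for one on-screen primitive
```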
The problem is that once you start raytracing, every bounced ray requires PRMan to repeat this same procedure. If an object is visible only in a reflection, it still has to be loaded, bounded and split until it's small enough, then tessellated, displaced, shaded and so on. This makes the process incredibly slow, especially with multiple bounces - think of global illumination with several bounces, or subsurface scattering.
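A back-of-envelope comparison (every number here is invented) of why the shading work explodes:

```python
# Invented numbers only: the point is that shading work now scales
# with rays, not pixels.

pixels = 1920 * 1080
camera_work = pixels * 1            # camera pass: ~1 micropolygon per pixel

gi_rays_per_pixel = 16              # e.g. one GI bounce with 16 samples
shaded_per_ray_hit = 100            # grid re-diced and re-shaded per hit (invented)
ray_work = pixels * gi_rays_per_pixel * shaded_per_ray_hit

print(ray_work // camera_work)      # -> 1600x the camera pass's shading work
```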
The solution is to replace objects with a simplified version - the point cloud I've mentioned. The idea is basically to sample the object's surface on a spatial grid and store just these points, complete with color info, and that's what gets loaded and used for the raytracing calculations. It's a lot less data, and of course it's an approximation, but it's more than enough for GI, SSS and such stuff.
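Here's roughly what baking such a cloud could look like, as a minimal Python sketch. The sphere, the flat `shade` stand-in and the tuple layout are my own inventions, not PRMan's actual point-cloud format:

```python
import math, random

# Minimal point-cloud bake sketch; the surface (a unit sphere), the
# 'shade' callback and the point layout are all invented for illustration.

def bake_sphere_pointcloud(n_points, shade):
    cloud = []
    for _ in range(n_points):
        # uniform random direction on the unit sphere
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        p = (s * math.cos(phi), s * math.sin(phi), z)
        normal = p                              # on a sphere, normal == position
        radius = math.sqrt(4.0 / n_points)      # disc area ~ surface area / N
        cloud.append((p, normal, radius, shade(p, normal)))
    return cloud

# 'shade' stands in for running the full surface shader at bake time
cloud = bake_sphere_pointcloud(10000, shade=lambda p, n: (0.8, 0.2, 0.2))
print(len(cloud), "points")
```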
You basically never load anything that's not visible to the camera; you use the point cloud as a simplified representation of the scene. Every point is treated as a disc facing the ray you're tracing, so there won't be any holes and such. I think it's even good enough for glossy (blurred) reflections.
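The "disc facing the ray" test is simple enough to sketch. This version is my own; the disc's plane is forced perpendicular to the ray, so it can never be seen edge-on:

```python
# Intersect a ray with a disc whose normal is aligned to the ray
# direction: since the disc always faces the ray, the test reduces to
# checking the ray's distance from the disc center against the radius.

def ray_hits_point_disc(ray_org, ray_dir, center, radius):
    # ray_dir assumed normalized
    to_c = tuple(c - o for c, o in zip(center, ray_org))
    t = sum(a * b for a, b in zip(to_c, ray_dir))    # distance along the ray
    if t <= 0.0:
        return None                                  # disc is behind the ray
    # closest point on the ray to the disc center
    closest = tuple(o + t * d for o, d in zip(ray_org, ray_dir))
    dist2 = sum((a - b) ** 2 for a, b in zip(closest, center))
    return t if dist2 <= radius * radius else None

print(ray_hits_point_disc((0, 0, 0), (0, 0, 1), (0.1, 0.0, 5.0), 0.2))
```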
The problem, of course, is that you need to update the point cloud for every frame of animation if there are moving or deforming objects in the scene - which is pretty much guaranteed in an action movie.
Now, this is only a guess, but I think the guys at Weta are using GPU computing to calculate these point clouds for all the objects in the scene.
As for LOTR and raytracing, it was a very ugly hack. Basically you put in a hundred spotlights and render shadow maps for them, which gives you a crude 3D representation of the scene in those 100 shadow (depth) maps. You can then raytrace against this data structure, and it'll be faster - but less accurate - than full-blown raytracing in PRMan. Back in 2002-2003 raytracing wasn't optimized at all, and it was even slower than it is today.
The downside was that this data had no color info, so it could only be used for SSS and ambient occlusion.
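I'd guess the trace looked something like the sketch below: march along the ray and use the depth maps to decide when you've entered geometry. The `DepthMap` interface here is entirely invented; only the idea (compare a sample's depth from each light against the stored depth) comes from the description above:

```python
# Invented DepthMap interface: stored_depth(p) returns the depth-map
# value at p's projected pixel, depth_of(p) returns p's actual distance
# from that light. Only the depth-comparison idea is from the trick above.

def inside_geometry(p, depth_maps):
    # if, from every light's viewpoint, some surface lies in front of p,
    # then p must be inside an object
    return all(dm.stored_depth(p) < dm.depth_of(p) for dm in depth_maps)

def trace(origin, direction, depth_maps, steps=128, step=0.05):
    # march along the ray; the first sample inside geometry is the hit
    for i in range(1, steps + 1):
        p = tuple(o + i * step * d for o, d in zip(origin, direction))
        if inside_geometry(p, depth_maps):
            return p   # approximate hit point - no color info, as noted
    return None
```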
It's worth noting that traditional raytracing renderers are getting a lot of R&D and practical use nowadays. The Arnold renderer is used on all Sony Imageworks productions (we use it too), and it takes a very different approach compared to PRMan - no need to precalculate point clouds, shadow maps and such. It requires far less artist time, but render times are somewhat longer. It seems that offline CG will eventually move to traditional raytracing, although PRMan and Reyes still have some significant advantages.
* PRMan is Pixar's RenderMan, in case that's not clear to anyone