Many thanks! I remember the name Arnold from way back, and I'm surprised they've managed to grow.
I'll read the article later. I also feel somewhat sorry for the Meskanen brothers who created Real3D. Back in the day they were implementing features way ahead of the game, taking a pure maths approach that shunned shortcuts in favor of elegant solutions.
I think there's a big difference, because Arnold isn't really using pure raytracing for the beauty of it; it's more about heavy optimization, and dropping rasterization is just one part of that approach. The interviews and docs mention that they go as far as quantizing and/or compressing the original floating-point geometry data to reduce memory overhead - having to wait for data because it doesn't fit into RAM is usually one of the main causes of slowdowns in a raytracer.
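Just to make the quantization idea concrete, here's a toy sketch (my own illustration, not Arnold's actual scheme): storing vertex positions as 16-bit integers relative to the object's bounding box halves the memory of 32-bit floats, with a small, bounded positional error.

```python
import numpy as np

def quantize_positions(positions, bits=16):
    """Quantize float32 vertex positions to unsigned ints relative to the
    bounding box. Illustrative only -- not Arnold's actual scheme."""
    lo = positions.min(axis=0)
    hi = positions.max(axis=0)
    scale = (2**bits - 1) / np.maximum(hi - lo, 1e-12)
    q = np.round((positions - lo) * scale).astype(np.uint16)
    return q, lo, 1.0 / scale   # keep lo and the step size to dequantize later

def dequantize_positions(q, lo, step):
    return q.astype(np.float32) * step + lo

# 100k vertices: 1.2 MB as float32, 0.6 MB as uint16
verts = np.random.rand(100_000, 3).astype(np.float32) * 10.0
q, lo, step = quantize_positions(verts)
err = np.abs(dequantize_positions(q, lo, step) - verts).max()
print(q.nbytes, verts.nbytes, err)   # half the memory, error bounded by ~step/2
```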
Also note their emphasis on sampling and its efficiency. Offline rendering, particularly for movie VFX, usually requires a completely aliasing-free image - so you're not looking for the best quality within a given time budget, but for the lowest possible time budget at a required quality level.
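To make that "fixed quality, variable cost" idea concrete, here's a toy adaptive sampler (my own sketch, nothing to do with Arnold's actual implementation): it keeps adding samples to a pixel until the estimated noise drops below a target, so quiet pixels finish early and the time goes where it's needed.

```python
import random, statistics

def adaptive_sample(shade, noise_target=0.01, min_spp=16, max_spp=1024):
    """Keep sampling a pixel until the standard error of the mean falls
    below noise_target. Toy illustration of 'fixed quality, variable cost'."""
    samples = [shade() for _ in range(min_spp)]
    while len(samples) < max_spp:
        stderr = statistics.stdev(samples) / len(samples) ** 0.5
        if stderr < noise_target:
            break
        samples.extend(shade() for _ in range(len(samples)))  # double the count
    return statistics.fmean(samples), len(samples)

# A 'quiet' pixel converges with few samples, a noisy one eats the whole budget.
flat = lambda: 0.5 + random.uniform(-0.01, 0.01)
noisy = lambda: random.choice([0.0, 1.0])
print(adaptive_sample(flat))    # e.g. (0.5, 16)
print(adaptive_sample(noisy))   # e.g. (0.49, 1024)
```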
So games are going to be a different case, because you'll want raytracing to produce consistent rendering times, which is pretty hard in itself - but we might get progressive, iterative quality levels, where more complex scenes lower shadow and AA quality to maintain the frame rate.
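Something like this hypothetical feedback loop (purely my own illustration, not how any shipping engine actually does it): measure the last frame's time and step the shadow/AA quality down or up to hold the frame budget.

```python
QUALITY_LEVELS = [
    # (shadow ray count, AA samples per pixel) -- made-up tiers
    (1, 1),
    (2, 2),
    (4, 4),
    (8, 8),
]

class QualityGovernor:
    """Drop or raise quality tiers to hold a target frame time (e.g. 16.6 ms)."""
    def __init__(self, budget_ms=16.6):
        self.budget_ms = budget_ms
        self.level = len(QUALITY_LEVELS) - 1  # start at the highest quality

    def update(self, last_frame_ms):
        if last_frame_ms > self.budget_ms and self.level > 0:
            self.level -= 1          # scene got heavier: shed quality
        elif last_frame_ms < 0.8 * self.budget_ms and self.level < len(QUALITY_LEVELS) - 1:
            self.level += 1          # headroom available: claw quality back
        return QUALITY_LEVELS[self.level]

gov = QualityGovernor()
for frame_ms in [12.0, 19.0, 22.0, 15.0, 11.0]:
    shadow_rays, aa_spp = gov.update(frame_ms)
    print(f"{frame_ms:5.1f} ms -> {shadow_rays} shadow rays, {aa_spp} AA spp")
```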
Also, games don't really need to render 3D DOF and motion blur, and post effects will probably remain good enough in their 2D form. So any wins that an advanced sampling implementation might have here will be worthless for games. See, our movies are 24-30fps and have a lot of motion blur with any fast movement, but we'd like games to be 60 fps, and too much blur is annoying anyway.
But raytracers like Arnold, or even REYES-based renderers, are fast for film production because you have ALL the quality features turned on, so you can render a motion-blurred object with far, far lower quality settings for reflections, shading and such, because it won't be seen clearly anyway. You can then spend the processing power saved on the object itself on better motion blur, instead of adding the two rendering times together.
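Loosely, you let the amount of motion drive the secondary quality settings, something like this (the falloff curve and the names are entirely made up, just to show the trade-off):

```python
def secondary_quality(pixels_of_motion, base_reflection_samples=64, base_shading_rate=1.0):
    """Scale reflection samples down and the shading rate up (coarser) as an
    object blurs more. The curve is arbitrary -- it just shows the budget shift."""
    blur_factor = 1.0 / (1.0 + pixels_of_motion / 4.0)
    reflection_samples = max(1, int(base_reflection_samples * blur_factor))
    shading_rate = base_shading_rate / max(blur_factor, 0.125)  # coarser shading when blurred
    return reflection_samples, shading_rate

for motion in [0, 4, 16, 64]:   # pixels of motion per frame
    print(motion, secondary_quality(motion))
```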
Rendering a few spheres with shadows is usually a lot faster with simpler renderers, but when you have millions of polygons in hundreds of objects with hair, lit with global illumination and moving fast, very few renderers can put out a picture at all. It really is like beating a Formula 1 car in a straight line - it's built to corner at 100mph, not to win simple drag races. So just because raytracing is good for movie-level production, it doesn't mean it's fit for every purpose. For example, Arnold effectively gave up the architectural visualization market when they decided not to implement irradiance caching.
It's also interesting to hear about the reliance on CPUs instead of GPGPU rendering, as offline rendering is one of the high-performance tasks one would assume GPUs would be good at, what with it all being graphics and all!
Well, the issue is more complex than that. Our render farm works not only on the final 3D frames but also on very complex 2D post-processing (compositing in Nuke), and we run all the cloth simulations on the farm for weeks as well. And even within rendering, we use a different renderer for some of the fluid-dynamics-based particles like smoke, fire and water.
Oh, and Arnold is an external renderer anyway, so the render node first has to open Maya, build the scene from an XML file, reference in objects, animation files etc., then calculate any dynamic stuff, and only after all that can it start Arnold, translate the Maya scene and begin rendering.
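So a per-frame farm job really is a chain of stages where the renderer is only the last link; here's an oversimplified sketch with placeholder stage functions and timings (the real pipeline scripts obviously look nothing like this):

```python
import time

# Placeholder stage functions -- in a real pipeline these would be Maya batch
# invocations, cache and reference loads, and the Arnold translation/render itself.
def launch_maya(frame):         time.sleep(0.01)
def build_scene(frame):         time.sleep(0.02)   # XML + object/animation references
def evaluate_dynamics(frame):   time.sleep(0.02)   # cloth, hair, particles
def translate_to_arnold(frame): time.sleep(0.01)
def arnold_render(frame):       time.sleep(0.05)

def run_farm_job(frame):
    """Sketch of a per-frame farm job: the render only starts at the last stage,
    everything before it is scene-assembly overhead."""
    stages = [launch_maya, build_scene, evaluate_dynamics,
              translate_to_arnold, arnold_render]
    for stage in stages:
        t0 = time.perf_counter()
        stage(frame)
        print(f"frame {frame}: {stage.__name__:20s} {time.perf_counter() - t0:.3f}s")

run_farm_job(101)
```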
So we need systems that can run all of these workloads well, as we need to be able to change how many CPUs we allocate to each kind of task depending on the needs of the production (there are no cloth sims but a LOT of 2D in the final days, like right now).
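In terms of the queue, the allocation is basically just a partition of the farm's cores that gets re-weighted as the production moves through its phases; a trivial illustration with made-up numbers:

```python
def allocate_cores(total_cores, weights):
    """Split the farm's cores across task types in proportion to demand.
    The weights are made up; in practice a wrangler or the queue manager sets them."""
    total_weight = sum(weights.values())
    return {task: int(total_cores * w / total_weight) for task, w in weights.items()}

# Mid-production: heavy on 3D rendering and cloth sims.
print(allocate_cores(2048, {"3d_render": 6, "cloth_sim": 3, "nuke_comp": 1}))
# Final weeks: no cloth sims, a LOT of 2D compositing.
print(allocate_cores(2048, {"3d_render": 4, "cloth_sim": 0, "nuke_comp": 6}))
```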
A GPU- or Cell-based farm would only work if it could run every one of the above well. It would also have to fit into a server rack, run cool enough not to break the air conditioning, and be light enough that it won't fall through the floor (I'm told our farm weighs several tons, and it's just three racks wide). We can't afford to split our budget and maintain two separate farms the way ILM does (they've ported their particle renderer to GPUs as far as I know).
How well GPUs or Cell could actually run a raytracing renderer at the scene complexities we deal with is a completely different question on top of all that. I imagine there'd have to be some fairly involved calculations weighing cost in dollars, watts, kilograms and rack space against rendering speed to work out whether it's worth it at all.
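A back-of-envelope version of that calculation might look like this (every number here is invented; the point is only the shape of the comparison, and it still leaves out weight and floor space):

```python
def cost_per_frame(price_usd, watts, frames_per_hour, lifetime_years=3,
                   usd_per_kwh=0.15, utilisation=0.9):
    """Amortised hardware + power cost per rendered frame. All inputs are
    placeholders -- the point is the comparison, not the numbers."""
    hours = lifetime_years * 365 * 24 * utilisation
    hardware = price_usd / (hours * frames_per_hour)
    power = (watts / 1000.0) * usd_per_kwh / frames_per_hour
    return hardware + power

# Hypothetical CPU node vs GPU node: the GPU only wins if its speed-up
# on *our* scenes outweighs its price and power draw.
cpu = cost_per_frame(price_usd=4000, watts=400, frames_per_hour=2)
gpu = cost_per_frame(price_usd=7000, watts=800, frames_per_hour=5)
print(f"CPU node: ${cpu:.3f}/frame   GPU node: ${gpu:.3f}/frame")
```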
On the other hand, Nvidia did purchase Mental Images and they are working on porting it to the GPU, so it seems they see a business opportunity here. I'm told they want to do realtime arch viz, so they can change things in front of the client, which would definitely be a competitive advantage. Too bad it has resulted in less focus on the actual Mental Ray renderer.