Which is justified, as a few years ago I was definitely anti-raytracing here
RealSoft3D was (and is) a true raytracer through and through, but one of its biggest drawbacks is speed. Its GI solution is brute-force tracing of multiple rays, as are its soft shadows, and it's an order of magnitude slower than sophisticated multi-step rasterisers with optimised pipelines for each phase of rendering.
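(To make "brute force" concrete, here's a toy sketch, purely for illustration and not anyone's actual renderer code: a soft shadow is literally just a lot of shadow rays fired at random points on the area light and averaged, so quality scales directly with ray count.)

```cpp
// Toy "brute force" soft shadow: shoot many shadow rays at random points on
// an area light and average the hits. Everything here is illustrative only.
#include <cstdio>
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does the segment from 'from' to 'to' intersect a sphere occluder?
static bool hitsSphere(Vec3 from, Vec3 to, Vec3 center, double radius)
{
    Vec3 d = sub(to, from);
    Vec3 m = sub(from, center);
    double a = dot(d, d), b = 2.0 * dot(m, d), c = dot(m, m) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return false;
    double t = (-b - std::sqrt(disc)) / (2.0 * a);
    return t > 0.0 && t < 1.0;            // blocked between point and light
}

int main()
{
    const Vec3 shadingPoint = { 0.0, 0.0, 0.0 };
    const Vec3 occluder     = { 0.0, 1.0, 0.0 };   // sphere between point and light
    const double occRadius  = 0.4;
    const int numSamples    = 256;                 // the quality-vs-time knob

    std::mt19937 rng(1);
    std::uniform_real_distribution<double> uni(-0.5, 0.5);

    int unoccluded = 0;
    for (int i = 0; i < numSamples; ++i) {
        // Random point on a 1x1 area light hovering at height 2.
        Vec3 lightSample = { uni(rng), 2.0, uni(rng) };
        if (!hitsSphere(shadingPoint, lightSample, occluder, occRadius))
            ++unoccluded;
    }
    // The averaged visibility is the soft-shadow term for this shading point;
    // halving the noise means quadrupling numSamples, hence the slowness.
    std::printf("visibility = %.3f\n", (double)unoccluded / numSamples);
    return 0;
}
```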
We're now using Arnold, which is also the official renderer of Sony Pictures Imageworks, used on "Cloudy with a Chance of Meatballs", and it is a very different breed compared to Real3D. It skips a lot of traditional elements, like the rasterizer, irradiance caches and other related pre-calculated data, and also SIMD optimizations and so on. I'll dig up a few links at the end of my reply.
So why the switch to raytracing? Have the shortcuts become so complicated that a back-to-basics engine is actually faster?
Yeah, exactly. We'd already used a hybrid pipeline for the Warhammer movies, where PRMan did the color and (environment-mapped, so fake) reflection and related passes, while Mental Ray calculated ambient occlusion and reflection occlusion. Keeping the two renderers in sync was complicated (DOF, motion blur, displacements, AA...) and we had to do a LOT of work to fix shadow maps for PRMan. Lighting used HDR images but no bounces, and took a lot of artist time as well.
We soon moved to Mental Ray, where we solved the shadow problem with area shadows; we "solved" reflection occlusion with fully traced reflections (though glossy reflections were very slow and still aliased); and we added global illumination with bounces and color bleeding, plus more complex shaders like subsurface scattering.
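(For the curious, the difference between the "fake" and the traced reflections is roughly this; the names and the jitter model below are made-up illustrations, not actual shader code from either renderer: an environment-mapped reflection is a single lookup into a prebaked image along the mirror direction, while a traced glossy reflection jitters that direction and fires a real ray per sample into the scene, which is where both the render time and the aliasing come from.)

```cpp
// Rough sketch of why "fake" environment-mapped reflections are cheap and
// traced glossy reflections are not. Illustrative assumptions only.
#include <cstdio>
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };

static Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Perfect mirror direction: this is all an environment-map reflection needs,
// one lookup into a prebaked image, no rays into the scene, no occlusion.
static Vec3 reflect(Vec3 incoming, Vec3 normal) {
    double d = 2.0 * dot(incoming, normal);
    return { incoming.x - d * normal.x,
             incoming.y - d * normal.y,
             incoming.z - d * normal.z };
}

int main() {
    Vec3 view   = normalize({ 1.0, -1.0, 0.0 });
    Vec3 normal = { 0.0, 1.0, 0.0 };
    Vec3 mirror = reflect(view, normal);
    std::printf("env-map lookup direction: %.2f %.2f %.2f (1 texture fetch)\n",
                mirror.x, mirror.y, mirror.z);

    // A traced glossy reflection instead jitters the mirror direction and
    // fires a real ray per sample; undersample it and you get noise and
    // aliasing, oversample it and render times explode.
    std::mt19937 rng(42);
    std::normal_distribution<double> jitter(0.0, 0.2);  // "roughness"
    const int numSamples = 16;
    for (int i = 0; i < numSamples; ++i) {
        Vec3 dir = normalize({ mirror.x + jitter(rng),
                               mirror.y + jitter(rng),
                               mirror.z + jitter(rng) });
        // trace(dir) would go here: one full scene intersection per sample.
        std::printf("glossy sample %2d: %.2f %.2f %.2f\n", i, dir.x, dir.y, dir.z);
    }
    return 0;
}
```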
Eventually MR (for Maya) proved to be far too buggy and slow, and Arnold emerged after almost a decade of keeping a very low profile; it turned out to be much faster than MR's hybrid rasterizing/raytracing approach. We're also able to render hair with the scene lights, ambient occlusion and GI, which is an incredible bonus; MR required separate light rigs and lots of comp massaging to get it right.
Assassin's Creed Brotherhood was the first project to use Arnold; look for our second, far more polished movie early next week.
Blur Studio (mostly known for CGI for Star Wars games and, recently, DC Universe Online, as well as the Warhammer Online MMORPG movies) has also used raytracers: first Brazil, then MR as well.
On the other hand, Blizzard's cinematics team managed to break Brazil on the SC2 announcement movie (too many polygons in the armor and machinery to raytrace), so they flushed their 3ds Max / Brazil pipeline, hired a bunch of movie VFX guys and moved to Maya / Renderman. It was a real pain for them (those who have the collector's edition should listen to the director's commentary track), and there's a lot that's noticeably different about Tychus between the intro and outro; the skin shaders and armor reflections are the most important differences. PRMan does have raytracing, but it's inherently slow in a REYES architecture, and most of the stuff that would require it has alternate solutions there using point clouds: SSS, GI with color bleeding, AO etc.
Soft shadows may be pixel-perfect, but they take 20x as long to render at quality. Then again, is the simplicity of not having to tune bias and compensate for artefacts a net time gain in the end?
Exactly. I spent at least two days lighting the 30-second Warhammer teaser, and it wasn't even about placing the lights, just getting the shadows relatively free of artifacts while rendering in a reasonable timeframe. And that was for a single-shot, 30-second trailer; for the full intro it took less time because there was a far longer schedule (almost a year compared to six weeks), but it was a pain nonetheless.
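(To show what "tuning bias" means in practice, here's a made-up toy example, not PRMan's or MR's actual shadow code: a depth-map shadow test needs a magic bias constant to hide the map's limited precision and resolution, and that one number trades self-shadowing acne against detached, leaking shadows, per light and per scene.)

```cpp
// Toy depth-map shadow test, purely illustrative. The point is that a single
// magic constant decides between shadow acne and shadows detaching from
// their casters, and someone has to hand-tune it for every setup.
#include <cstdio>

// Classic shadow-map test: the point is lit if its depth (as seen from the
// light) is not farther than what the shadow map recorded, give or take a
// bias that hides the map's limited precision and resolution.
static bool isLit(float depthFromLight, float shadowMapDepth, float bias) {
    return depthFromLight <= shadowMapDepth + bias;
}

int main() {
    // A surface that should be fully lit, but whose depth was stored in the
    // shadow map slightly nearer than its true depth (quantization error).
    const float trueDepth   = 10.000f;
    const float storedDepth = 9.996f;

    const float biases[] = { 0.0f, 0.002f, 0.01f, 0.1f };
    for (float b : biases) {
        std::printf("bias %.3f -> %s\n", b,
                    isLit(trueDepth, storedDepth, b)
                        ? "lit (bias large enough; too much detaches shadows)"
                        : "shadowed (self-shadowing acne)");
    }
    // A traced area shadow has no such constant to tune: you pay in rays
    // instead of artist time.
    return 0;
}
```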
A shift towards pure raytracing in offline renderers certainly bodes well for RT in future realtime graphics, and I'd like to understand more about why this shift may be occurring.
Serious savings on artist time and simplification of rendering/shader code, coupled with a revival of CPU speed increases. We've added 8-core systems to our render farm and the extra speed was a great help. Still, Arnold is probably going to require more power; it produces some amazingly sharp and aliasing-free images compared to MR, but at those sampling settings it takes a LOT of time too.
So, this is actually a very interesting question for the future of realtime rendering. There's a lot of push for REYES-style approaches with micropolygons, but raytracing is a big simplification, even if it requires huge performance increases. Moore's law guarantees that the power will come, and considering the man-months invested in shadow rendering that's still far from perfect, in SSS that's still buggy, in reflections that are hacks (although they work well) and so on, it will eventually become a reasonable alternative.
I am no programmer, so I can't really evaluate Arnold's approach and how it differs from "traditional" rasterizer/raytracer hybrids, but I can tell that the rendering quality of our stuff has increased dramatically, and it's faster than MR, too. PRMan was really amazing at antialiasing, texture filtering and motion blur/DOF, but Arnold is finally a serious contender.
We still can't displace as much, though, which is a problem; we have to use normal maps on top of displacement because tessellation levels are limited.
So, here are some links, with some discussion:
http://forums.cgsociety.org/showthread.php?f=59&t=908871
And here's the ACB trailer rendered with Arnold (which is, again, a pure raytracer and not a hybrid like Mental Ray)
http://www.youtube.com/watch?v=zzNs4-kRLaE&hd=1