Quakecon 2010

Everything B3D needs to know has been said: no AA on consoles for Rage!


Laa-Yosh: not 10%, 1% of the polys for Rage on iPhone.
 
Heh, I feel for him, having like 6 guys doing the stuff I used to do on my own and knowing that at times I could do a lot better is frustrating - I can't even imagine how it must be for someone with his talent not to be able to do all the stuff himself... ;)
 
Full raytracing is an interesting option. Coincidentally we've moved to a new rendering app that's a pure tracer as well, no rasterizer hybrid.
But it's not yet possible on nextgen hardware - could be for the one after that. It really answers a lot of problems, like shadows - raytraced area shadows are basically fire-and-forget compared to shadow maps, which is one of the reasons we've abandoned PRMan.
I'm curious about that. RealSoft3D was (and is) a true raytracer through and through, but one of its biggest drawbacks is speed. Its GI solution is brute-force tracing of multiple rays, as are its soft shadows, and it's an order of magnitude slower than sophisticated multi-step rasterisers with optimised pipelines for each phase of rendering.

So why the change for raytracing? Have the shortcuts become so complicated that a back-to-basics engine is actually faster? Soft shadows may be pixel-perfect, but they take 20x as long to render at quality. Then again, is the simplicity with no worries about tuning bias and compensating for artefacts a time gain in the end?
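
To put the question another way, here is roughly how I picture the two approaches - a hand-wavy sketch of my own in pseudo-C++, not RealSoft3D's (or any renderer's) actual code:

```cpp
// Rasteriser-style shadow mapping: one depth lookup per pixel, plus fudge factors.
struct Vec3 { float x, y, z; };

struct ShadowMap {
    // Stand-in for a depth buffer pre-rendered from the light's point of view.
    float depthAt(const Vec3& /*lightSpacePos*/) const { return 1.0f; }
};

float shadowMapVisibility(const ShadowMap& map, const Vec3& lightSpacePos, float pixelDepth)
{
    // These constants are the "tuning bias" part: too small and you get shadow acne,
    // too big and shadows peel away from their casters. Every scene and light wants
    // different values, and filtering and map resolution add yet more knobs.
    const float constantBias    = 0.0015f;
    const float slopeScaledBias = 0.004f;   // real renderers scale this by surface slope
    const float bias = constantBias + slopeScaledBias;
    return (pixelDepth - bias <= map.depthAt(lightSpacePos)) ? 1.0f : 0.0f;
}

// Brute-force traced area shadow: no knobs, just N shadow rays per shading point.
bool occluded(const Vec3& /*from*/, const Vec3& /*to*/) { return false; }  // stand-in for a ray trace
Vec3 randomPointOnLight() { return {0.0f, 10.0f, 0.0f}; }                  // stand-in for sampling the area light

float tracedAreaShadow(const Vec3& shadingPoint, int numRays)
{
    int unblocked = 0;
    for (int i = 0; i < numRays; ++i)          // cost grows linearly with the ray count...
        if (!occluded(shadingPoint, randomPointOnLight()))
            ++unblocked;
    return float(unblocked) / float(numRays);  // ...but penumbrae "just work": no bias, no map resolution
}

int main()
{
    const Vec3 p{0.0f, 0.0f, 0.0f};
    // 16-64+ shadow rays per pixel here vs. one (filtered) depth compare above.
    return tracedAreaShadow(p, 64) > 0.5f ? 0 : 1;
}
```

Trading one depth compare per pixel for dozens of shadow rays is presumably where the order-of-magnitude gap comes from, and brute-force GI is the same story with the rays bouncing on.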

A shift towards pure raytracing in offline renderers certainly bodes well for RT in future realtime graphics, and I'd like to understand more why this shift may be occurring.
 
Not passionate about dedicated server support, although he understands people getting upset about it - Rage's probably gonna have it though.

Cloud gaming has interesting stuff going on. He hasn't beta tested it though, and it's unclear when the market can sustain it - but eventually a lot of people will play it; there are a lot of reasons and advantages, even if it's not fit for fast twitch games. That's also the future for Linux, as direct porting doesn't make sense.

Moved to the Doom4 team's floor, but he's not allowed to talk about it. Lots of people working hard on it, wait for next year, Todd's gonna take the flak.
 
I'm curious about that.

Which is justified, as a few years ago I was definitely anti-raytracing here ;)

RealSoft3D was (and is) a true raytracer through and through, but one of its biggest drawbacks is speed. Its GI solution is brute-force tracing of multiple rays, as are its soft shadows, and it's an order of magnitude slower than sophisticated multi-step rasterisers with optimised pipelines for each phase of rendering.

We're now using Arnold, which is also the official renderer of Sony Pictures Imageworks, used on "Cloudy with a Chance of Meatballs", and it is a very different breed compared to RealSoft3D. It skips a lot of the traditional elements, like the rasterizer, irradiance caches and other pre-calculated data, and also SIMD optimizations and stuff. I'll dig up a few links at the end of my reply.

So why the change for raytracing? Have the shortcuts become so complicated that a back-to-basics engine is actually faster?

Yeah, exactly. We'd already used a hybrid pipeline for the Warhammer movies, where PRMan did the color and (environment mapped, so fake) reflection and related passes, while Mental Ray calculated ambient occlusion and reflection occlusion. Keeping the two renderers in sync was complicated (DOF, motion blur, displacements, AA...) and we had to do a LOT of work to fix shadow maps for PRMan. Lighting used HDR images but no bounces, and took a lot of artist time as well.

We soon moved to Mental Ray, where we solved the shadow problem with area shadows; we "solved" reflection occlusion with fully traced reflections (though glossy reflections were very slow and still aliased); we added global illumination with bounces and color bleeding, and also more complex shaders like subsurface scattering.

Eventually MR (for Maya) proved to be far too buggy and slow, and Arnold emerged after almost a decade of keeping a very low profile; it turned out to be much faster than MR's hybrid rasterizing/raytracing approach. We're also able to render hair with the scene lights, ambient occlusion and GI, which is an incredible bonus; MR required separate light rigs and lots of comp massaging to get it right.
Assassin's Creed Brotherhood was the first project to use Arnold; look for our second, far more polished movie early next week ;)

Blur Studio (mostly known for its CGI for Star Wars games and, recently, DC Universe Online, and also the Warhammer Online RPG movies) has also used raytracers, first Brazil, then MR as well.

On the other hand, Blizzard's cinematics team managed to break Brazil on the SC2 announcement movie (too many polygons in the armor and machinery to raytrace), so they flushed their 3ds Max / Brazil pipeline, hired a bunch of movie VFX guys and moved to Maya / RenderMan. It was a real pain for them (those who have the collector's edition should listen to the director's commentary track) and there's a lot of stuff noticeably different on Tychus in the intro and outro; the skin shaders and armor reflections are the most important differences. PRMan does have raytracing, but it's inherently slow in a REYES architecture, and most of the stuff that would require it has alternate solutions there using point clouds: SSS, GI with color bleeding, AO etc.

Soft shadows may be pixel-perfect, but they take 20x as long to render at quality. Then again, is the simplicity with no worries about tuning bias and compensating for artefacts a time gain in the end?

Exactly. I spent at least 2 days lighting the 30-second Warhammer teaser, and it wasn't even about placing the lights, just getting the shadows relatively free of artifacts and rendering in a reasonable timeframe. And that's for a single-shot 30-second trailer; the full intro was less of a problem because it had a far longer schedule (almost a year compared to 6 weeks), but it was a pain nonetheless.

A shift towards pure raytracing in offline renderers certainly bodes well for RT in future realtime graphics, and I'd like to understand more why this shift may be occurring.

Serious savings on artist time and simplification of rendering/shader code, coupled with a revival of CPU speed increases. We've added 8-core systems to our render farm and the extra speed was a great help. Still, Arnold is probably going to require more power; it produces some amazingly sharp and aliasing-free images compared to MR, but at those sampling settings it takes a LOT of time too.
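
As far as I understand the underlying math (rough hand-waving on my part, nothing Arnold-specific), it's the usual brute-force Monte Carlo trade-off: the noise falls off with the square root of the sample count, roughly

noise ~ 1 / sqrt(samples), so half the noise needs about 4x the rays, and a quarter of the noise about 16x

which is why "a bit cleaner" quickly turns into "a lot slower".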


So, this is actually a very interesting question for the future of realtime rendering. There's a lot of push for REYES-style approaches with micropolygons, but raytracing is a big simplification, even if it requires huge performance increases. Moore's law guarantees that the power will come, and considering the man-months invested in shadow rendering that's still far from perfect, in SSS that's still buggy, in reflections that are hacks (although they work well) and so on, it will eventually become a reasonable alternative.

I am no programmer so I can't really evaluate Arnold's approach and the differences from "traditional" rasterizer/raytracer hybrids, but I can tell that the rendering quality of our stuff has increased dramatically and it's faster than MR, too. PRMan was really amazing at antialiasing, texture filtering and motion blur/DOF, but Arnold is finally a serious contender.
We still can't displace as much, though, which is a problem; we have to use normal maps on top of displacement because tessellation levels are limited.


So, here are some links, with some discussion:
http://forums.cgsociety.org/showthread.php?f=59&t=908871

And here's the ACB trailer rendered with Arnold (which is, again, a pure raytracer and not a hybrid like Mental Ray)
http://www.youtube.com/watch?v=zzNs4-kRLaE&hd=1
 
Excellent post, Laa-Yosh! Thanks for all the info.

I wasn't aware that pure raytracing solutions had made so many advances. Will take a look at Arnold.

Lots of interesting stuff in the keynote, Carmack surely knows how to run the show.
 
Carmack's talks are always interesting. Judging by the comments expressing surprise at that, I guess a few of you are new to these? :LOL:

I wish there was a video of the whole talk somewhere though, even if a lot of it has now been "spoiled" by text summaries. I don't understand why somebody doesn't just video the whole talk and upload it pronto every year. Instead it's always haphazard, and some years we don't get one at all.
 
Missed the entire presentation. Will have to catch up over the weekend.

Interesting that multi-core CPUs and ray tracing are coming back.
 
The cynic in me sees this as meaning (not enough people wanted to use our engine) - better to save embarrassment and just alter your story.

It's a very competitive market; look at the long list of games using the CryEngine.

Or they hadn't started offering it yet and their new parent company decided to keep it in house.
 
It's okay. Elder Scrolls with id Tech 5 and Fallout 4 with id Tech 5 would be more than enough to keep me happy.

I just hope they give enough love to the PC version of the engine. I hope it's DX10/11 only on the PC.
 
The cynic in me sees this as meaning (not enough people wanted to use our engine) - better to save embarrassment and just alter your story.

It's possible, but it's also about focusing your resources in the most effective manner (since other companies have already staked their claim on some markets). It may mean that now they also need talent that is extremely good at making game content to achieve their goals.

Btw, how much does Epic make from the Gears franchise compared to their licensing business? The former should be very small compared to their main business, but if we add in MS's share of the Gears earnings, what is the picture? I am just curious.
 