Quakecon 2010

Or they hadn't started offering it yet and their new parent company decided to keep it in-house.
But they had offered it as early as 2007:

GS: When do you expect the first id Tech 5 licensed games to hit retail?
SN: It's hard to say. We expect the first licenses to happen later this year, but expect the normal two- to three-year [development] timelines beyond that. It's a next-next-generation solution.
Also see the id website.

It's not strange that perhaps no one, or very few, wanted it; look at the last id engine:
http://en.wikipedia.org/wiki/Id_Tech_4
Unlike the preceding and widely-used id Tech 3 (Quake III Arena engine) and id Tech 2 (Quake II engine), id Tech 4 has had less success in licensing to third parties.

It's a tough market. Look at CryEngine, which we all agree is near the top of the class, yet:
http://en.wikipedia.org/wiki/CryEngine
It looks like there's been a grand total of a single 'external' game using it!!!
 
Supposedly they demoed all three versions of Rage at the same time on stage. Any video of this out there?
 
Exactly. I've spent at least two days lighting the 30-second Warhammer teaser, and it wasn't even about placing the lights, just getting the shadows relatively free of artifacts and rendering in a reasonable timeframe. And that's for a single-shot 30-second trailer; the full intro was less of a crunch because it had a far longer schedule (almost a year compared to six weeks), but it was a pain nonetheless.

Awesome that I can now thank someone for making the masterpiece movies of the Warhammer: MOC series of games. :) Fabulous work; I go back and rewatch them from time to time, as well as replay the games.

The game still remains a masterpiece of real-time wargaming that I play through every year. :)

Regards,
SB
 
Which is justified, as a few years ago I was definitely anti-raytracing here ;)...
Many thanks! I remember the name Arnold from way back, and I'm surprised they've managed to grow. I'll read the article later. I also feel somewhat sorry for the Mesakannen brothers who created Real3D. Back in the day they were implementing features way ahead of the game, taking a pure-maths approach that shunned shortcuts in favour of elegant solutions. It seems this is the way things are headed, at least for some people, but I dare say Real3D is a distant memory with little appreciation for what it introduced.

It's also interesting about the reliance on processors instead of GPGPU rendering, as offline rendering is one of the high-performance tasks one would assume GPUs would be good at, what with it all being graphics and all! At which point one has to look at the Cell and think it should be applied in offline renderers! A properly optimised Cell engine should offer top-tier efficiency.

And here's the ACB trailer rendered with Arnold (which is, again, a pure raytracer and not a hybrid like Mental Ray):
http://www.youtube.com/watch?v=zzNs4-kRLaE&hd=1
I remember this from the chromatic aberration discussion. Incredibly realistic.
 
It's also interesting about the reliance on processors instead of GPGPU rendering, as offline rendering is one of the high-performance tasks one would assume GPUs would be good at, what with it all being graphics and all!

But it's all been advanced in support of a single rendering pipeline; while it's now trying to regain some flexibility, it enjoys no advantage outside that single pipeline.

Btw Laa-Yosh, great great stuff. Is that the video you hinted at or is there another about to come out?
 
Many thanks! I remember the name Arnold from way back, and I'm surprised they've managed to grow.
I'll read the article later. I also feel somewhat sorry for the Mesakannen brothers who created Real3D. Back in the day they were implementing features way ahead of the game, taking a pure-maths approach that shunned shortcuts in favour of elegant solutions.

I think there is a big difference, because Arnold isn't really using pure raytracing for the beauty of it; it's more about heavy optimizations, and dropping rasterizing is just one part of that approach. The interviews and docs mention that they go as far as quantizing and/or compressing the originally floating-point geometry values to reduce the memory overhead - having to wait for data because it can't fit into RAM is usually one of the main reasons for slowdowns in a raytracer.
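
Just to illustrate the kind of quantization being described (a sketch only - Arnold's actual scheme isn't public), you can store positions as 16-bit offsets within each mesh's bounding box and roughly halve per-vertex memory:

import numpy as np

def quantize_positions(verts):
    """Quantize float32 vertex positions to 16-bit fixed point
    relative to the mesh's bounding box (a sketch; the real
    renderer's scheme is not public)."""
    lo = verts.min(axis=0)
    hi = verts.max(axis=0)
    scale = np.where(hi > lo, hi - lo, 1.0)        # avoid div-by-zero on flat axes
    norm = (verts - lo) / scale                    # map into [0, 1]
    q = np.round(norm * 65535.0).astype(np.uint16) # 16 bits per component
    return q, lo, scale

def dequantize_positions(q, lo, scale):
    """Reconstruct approximate float positions at intersection time."""
    return q.astype(np.float32) / 65535.0 * scale + lo

# 3 floats -> 3 uint16 per vertex: 12 bytes become 6 (plus 24 bytes of
# per-mesh bounds), at the cost of ~1/65535 of the bbox in precision.
verts = np.random.rand(100_000, 3).astype(np.float32) * 100.0
q, lo, scale = quantize_positions(verts)
approx = dequantize_positions(q, lo, scale)
assert np.abs(approx - verts).max() < np.abs(scale).max() / 65535.0 * 1.01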

Also note their emphasis on sampling and its efficiency. Offline rendering, particularly for movie VFX, usually requires a completely aliasing-free image - so you're not looking for the best quality within a given budget, but the lowest possible time budget for a certain quality level.
So games are going to be a different case, because you'll want raytracing to produce consistent rendering times, which is pretty hard in itself - but we might get progressive, iterative quality levels, where more complex scenes lower shadow and AA quality to maintain the frame rate.
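
A minimal sketch of what such a progressive quality controller could look like in a hypothetical real-time raytracer - every name and number here is made up, the point is just the control loop:

# Sketch of a frame-budget controller for a hypothetical real-time
# raytracer; the sample-count knobs and thresholds are assumptions.
TARGET_MS = 16.7  # 60 fps budget

class QualityController:
    def __init__(self):
        self.shadow_samples = 16
        self.aa_samples = 4

    def update(self, last_frame_ms):
        if last_frame_ms > TARGET_MS * 1.05:    # over budget: degrade
            if self.shadow_samples > 1:
                self.shadow_samples //= 2       # drop shadows first, least visible
            elif self.aa_samples > 1:
                self.aa_samples -= 1
        elif last_frame_ms < TARGET_MS * 0.8:   # headroom: recover quality
            if self.aa_samples < 4:
                self.aa_samples += 1
            elif self.shadow_samples < 16:
                self.shadow_samples *= 2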

Also, games don't really need to render 3D DOF and motion blur, and post effects will probably remain good enough in their 2D form. So any wins that an advanced sampling implementation might have here will be worthless for games. See, our movies are 24-30 fps and have a lot of motion blur on any fast movement, but we'd like games to run at 60 fps, and too much blur is annoying anyway.
But a raytracer like Arnold, or even the REYES-based ones, is fast for film production because you have ALL the quality features turned on, so you can render a motion-blurred object with far, far lower quality settings for reflections, shading and such, because it won't be seen clearly. You can then spend the processing power you've saved on the actual object on better motion blur, instead of adding up the rendering times of the two.
Rendering a few spheres with shadows is usually a lot faster with simpler renderers, but when you have millions of polygons in hundreds of objects with hair, lit with global illumination and moving fast, very few renderers can put out a picture at all. It really is like trying to beat a Formula 1 car in a straight line - it's built to corner at 100 mph, not to win simple drag races. So just because raytracing is good for movie-level production, it doesn't mean it'll be fit for every purpose. For example, Arnold effectively gave up the architectural visualization market when they decided not to have irradiance caches.
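
Here's a toy illustration of that trade-off - scale an object's shading budget down by how much it smears, and reinvest the savings in time (motion-blur) samples. Purely illustrative; this is not how Arnold or any particular renderer actually does it:

def sample_budget(base_shading, base_time_samples, blur_pixels):
    """Trade shading samples for motion-blur samples on fast-moving
    objects (illustrative only). blur_pixels is the object's
    screen-space motion over the shutter interval."""
    # The more an object smears, the less its sharp shading detail matters.
    visibility = 1.0 / (1.0 + blur_pixels / 8.0)
    shading = max(1, int(base_shading * visibility))
    # Reinvest the savings in time samples so the smear itself is smooth.
    time_samples = base_time_samples + int(base_shading - shading)
    return shading, time_samples

print(sample_budget(64, 4, 0))   # static object: (64, 4)
print(sample_budget(64, 4, 40))  # fast mover:    (10, 58)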

It's also interesting about the reliance on processors instead of GPGPU rendering, as offline rendering is one of the high-performance tasks one would assume GPUs would be good at, what with it all being graphics and all!

Well, the issue is more complex. Our render farm works not only on the final 3D frames, but also on very complex 2D post-processing (compositing with Nuke), and we run all the cloth simulations on the farm for weeks as well. Even within rendering, we use a different renderer for some of the fluid-dynamics-based particles like smoke, fire and water.
Oh, and Arnold is an external renderer anyway, so the render node has to first open Maya, build the scene from an XML file, reference objects, animation files etc., then calculate any dynamic stuff, and only after all that can it start Arnold and translate the Maya scene to begin rendering.
So we need systems that can run all these renderers well, as we need to be able to change how many CPUs we allocate to each kind of task, depending on the needs of the production (there are no cloth sims, but a LOT of 2D in the final days, like now).
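
To give a feel for that kind of allocation, here's a toy sketch where per-task CPU shares shift with the production phase; the task names, phases and weights are all hypothetical:

# Toy sketch of shifting a farm's CPU allocation by production phase;
# the phases, task types and weights are all hypothetical.
FARM_CPUS = 1200

PHASE_WEIGHTS = {
    "mid_production": {"3d_render": 0.55, "cloth_sim": 0.25,
                       "fluid_sim": 0.10, "2d_comp": 0.10},
    "final_days":     {"3d_render": 0.40, "cloth_sim": 0.00,
                       "fluid_sim": 0.10, "2d_comp": 0.50},
}

def allocate(phase):
    weights = PHASE_WEIGHTS[phase]
    return {task: int(FARM_CPUS * w) for task, w in weights.items()}

print(allocate("final_days"))
# {'3d_render': 480, 'cloth_sim': 0, 'fluid_sim': 120, '2d_comp': 600}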

A GPU- or Cell-based farm would only work if it could run every one of the above well. It would also have to fit into a server rack, run cool enough not to break the air conditioning, and be light enough not to fall through the floor (I'm told our farm weighs several tons, and it's just three racks wide). We can't afford to divide our budget and maintain two separate farms as ILM does (they've ported their particle renderer to GPUs, as far as I know).

How well GPUs or the Cell could actually run a raytracing renderer, with the scene complexities we have, is a completely different issue on top of all that. I imagine there'd have to be some complex calculations weighing cost in dollars, watts, kilograms, rack space and so on against rendering speed to find out whether it's worth it at all.
On the other hand, Nvidia did purchase Mental Images and they are working on a GPU port, so it seems they see a business opportunity here. I'm told they want to do real-time arch viz, so that they can change things in front of the client, which would definitely be a competitive advantage. Too bad it has resulted in less focus on the actual Mental Ray renderer.
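
For a feel of what that comparison might boil down to, here's a back-of-the-envelope sketch folding hardware price, power and rack space into a single cost-per-throughput figure. Every number below is a placeholder, not a measurement:

# Sketch of the $/watts/space-vs-speed comparison mentioned above;
# all figures are placeholders, not benchmarks.
def cost_per_throughput(hw_price, watts, rack_units, rel_speed,
                        years=3, price_per_kwh=0.15,
                        rack_unit_cost_year=300.0):
    hours = years * 365 * 24
    power_cost = watts / 1000.0 * hours * price_per_kwh
    space_cost = rack_units * rack_unit_cost_year * years
    total = hw_price + power_cost + space_cost
    return total / rel_speed  # lower is better

cpu_node = cost_per_throughput(hw_price=5000, watts=400, rack_units=1, rel_speed=1.0)
gpu_node = cost_per_throughput(hw_price=9000, watts=900, rack_units=2, rel_speed=3.0)
print(f"CPU node: {cpu_node:.0f} $/speed, GPU node: {gpu_node:.0f} $/speed")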
 
Btw Laa-Yosh, great great stuff. Is that the video you hinted at or is there another about to come out?

Thanks guys :) AC was for E3, but we also have some new stuff coming up for Gamescom early next week, probably Tuesday.
 
A GPU- or Cell-based farm would only work if it could run every one of the above well. It would also have to fit into a server rack, run cool enough not to break the air conditioning, and be light enough not to fall through the floor (I'm told our farm weighs several tons, and it's just three racks wide). We can't afford to divide our budget and maintain two separate farms as ILM does (they've ported their particle renderer to GPUs, as far as I know).

How well GPUs or the Cell could actually run a raytracing renderer, with the scene complexities we have, is a completely different issue on top of all that. I imagine there'd have to be some complex calculations weighing cost in dollars, watts, kilograms, rack space and so on against rendering speed to find out whether it's worth it at all.

The Cell is designed exactly to solve the above problems. That's why it's in Roadrunner.

But I imagine that since STI didn't continue to improve it, other CPUs may have overtaken it in terms of efficiency? Desktop GPUs are a different kettle of fish in power consumption altogether. ^_^
 
So the recent project with pure raytracing for everything, including hair, that I've been talking about is the Dragon Age 2 trailer; it's available in QuickTime here, for example:
http://na.llnet.bioware.cdn.ea.com/u/f/eagames/bioware/dragonage2/assets/gallery/videos/destiny.720.zip
(It's actually a cut-down version; the full one is hopefully going to be released next week.)

There are a couple of shots where the hair looks quite realistic, and we also get some nice color bleeding from the GI. I thought it would be interesting to see after all the stuff I've talked about ;)
 
Sorry to revive an old thread, but if anyone has spotted a video of Mr. Carmack's keynote in the wild, it would be greatly appreciated if someone would link it here. I always expect to wait a little for a video of it, but I've been waiting a looong time now.
 
Weird decision on their part in that case, considering they live-streamed it... why not go the extra mile and actually save the thing? Now I'm severely disappointed. Thanks for the reply though.
 
If there was a livestream, anybody could have saved it with a program. Someone should have in that case, even one of us.
 