Carmack demos new iD engine at WWDC keynote

Here's the info from id's site:

In a surprise demonstration during Steve Jobs' keynote at the Apple Worldwide Developers Conference today, John Carmack unveiled id's latest revolution in game engine technology with the very first showing of id Tech 5 running live on the Mac with OS X.

The groundbreaking technology unveiled today will power id's new internally developed game and will be available for licensing to third parties. The new id rendering technology practically eliminates the texture memory constraints typically placed on artists and designers and allows for the unique customization of the entire game world at the pixel level, delivering virtually unlimited visual fidelity. Combined with a powerful new suite of tools designed to specifically facilitate and accelerate this content creation process, id Tech 5 will power games that contain vast outdoor landscapes that are completely unique to the horizon, yet have indoor environments with unprecedented artistic detail.

While shown for the very first time running in real time on a Mac, id Tech 5 additionally supports the Xbox 360 and PlayStation 3 console platforms as well as the PC, and will be available for licensing to developers and publishers interested in working with a truly next generation rendering and game development solution. id Software will be showing id Tech 5 to interested developers and publishers by appointment only at the E3 Media & Business Summit from July 11-13, 2007 in Santa Monica, Calif. Companies interested in id Tech licensing information can visit www.idsoftware.com or email licensing@idsoftware.com with an E3 appointment request.
 
Textures have to be stored in RAM/VRAM, and that space is not infinite. Streaming only helps up to a point, and using a large number of huge textures still requires space.

Then what does this mean?

The new id rendering technology practically eliminates the texture memory constraints typically placed on artists and designers and allows for the unique customization of the entire game world at the pixel level, delivering virtually unlimited visual fidelity
 
Here's the info from id's site:

In a surprise demonstration during Steve Jobs' keynote at the Apple Worldwide Developers Conference today, John Carmack unveiled id's latest revolution in game engine technology with the very first showing of id Tech 5 running live on the Mac with OS X.

[...] While shown for the very first time running in real time on a Mac, id Tech 5 additionally supports the Xbox 360 and PlayStation 3 console platforms as well as the PC, and will be available for licensing to developers and publishers interested in working with a truly next generation rendering and game development solution.

Well, the non-portable Macs have at best the X1900 (dunno if it's the XT or not), so that bodes well for "normal" GPUs. Working on it for consoles is a great thing, although UE3 may be so entrenched that I dunno if Tech 5 will get any real use.
 
Richard said:
A single texture means fewer polygons and fewer batches are required to process the same scene.
That's actually quite debatable, and either way, using MT you'll be paying for it with processing power and memory up front.
The benefit would be that the cost becomes largely static and thus more predictable.
 
That's actually quite debatable, and either way, using MT you'll be paying for it with processing power and memory up front.
The benefit would be that the cost becomes largely static and thus more predictable.

It seems to me that this engine is quite well thought out for consoles too: I am under the impression that console game programmers hate non-determinism as far as memory usage and processing power (at the CPU and GPU level) are concerned, and instead love static and predictable things.

You pay an upfront cost of more processor cycles (well, you do have those free SPUs ;)) and a larger amount of base memory you have to set aside, but if you can live with that in the rest of the engine, you give your artists a pretty nice canvas to work on, and maybe you will have to fight a lot less against them (and with your publishers, if this new canvas and toolset mean the artists can be more productive and allow programmers/gameplay designers to get their things done on time, working concurrently with the artists building and coloring the levels). I said maybe; do not send Alex after me :p.
 
I'd still like to know what art application can load a 20 GB texture and have all the needed tools to edit it...
 
Then what does this mean?

Well, how much VRAM do you actually need each frame? In theory, only one texel for each pixel in the destination image is needed for maximum texture resolution, and this (like its predecessor MegaTexture, which only worked for terrain-like objects) tries to get somewhat closer to that ideal. If you read one of Carmack's old .plan files, he talks about the need for "virtual memory" (in the sense it is used on PCs) on graphics cards, where individual texture pages are only fetched as they're needed. This seems to be a software solution to the same problem...
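
To make that concrete, here's a minimal sketch of such a software page cache (all names and numbers are hypothetical, assumed purely for illustration; this is not id's actual implementation). One huge virtual texture is split into fixed-size pages, and only the pages referenced by visible pixels are kept resident, with the least-recently-used pages evicted:

```cpp
// Hypothetical software virtual texturing cache (illustrative sketch).
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>

constexpr int kPageSize   = 128;   // each slot holds kPageSize x kPageSize texels (assumed)
constexpr int kCachePages = 1024;  // physical cache budget in slots (assumed)

struct PageId {
    uint32_t x, y, mip;            // page coordinates within the virtual texture
    bool operator==(const PageId& o) const { return x == o.x && y == o.y && mip == o.mip; }
};
struct PageIdHash {
    size_t operator()(const PageId& p) const {
        return (size_t(p.x) * 73856093u) ^ (size_t(p.y) * 19349663u) ^ (size_t(p.mip) * 83492791u);
    }
};

class VirtualTextureCache {
public:
    // Called for each page the renderer reported it needed this frame.
    int Touch(const PageId& id) {
        auto it = table_.find(id);
        if (it != table_.end()) {              // already resident: refresh LRU position
            lru_.splice(lru_.begin(), lru_, it->second.lruPos);
            return it->second.slot;
        }
        int slot = AllocSlot();                // may evict the least-recently-used page
        LoadPageFromDisk(id, slot);            // stream the compressed page in
        lru_.push_front(id);
        table_[id] = {slot, lru_.begin()};
        return slot;
    }
private:
    struct Entry { int slot; std::list<PageId>::iterator lruPos; };
    int AllocSlot() {
        if (nextSlot_ < kCachePages) return nextSlot_++;
        PageId victim = lru_.back();           // evict the coldest page
        int slot = table_[victim].slot;
        table_.erase(victim);
        lru_.pop_back();
        return slot;
    }
    void LoadPageFromDisk(const PageId&, int /*slot*/) { /* async I/O + decompress */ }
    std::unordered_map<PageId, Entry, PageIdHash> table_;
    std::list<PageId> lru_;
    int nextSlot_ = 0;
};
```

The key property is that the cache size is fixed regardless of how big the virtual texture on disk is, which is exactly the "virtual memory for textures" idea.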
 
So basically what you're saying is that his engine doesn't particularly deal with these mega textures any differently from other renderers, and the real emphasis is more on how the data is organised in external storage (disc, HDD, etc.) for increased productivity in the artistic development of the assets within the content pipeline..?

For increased freedom, but they do have the problem of getting the right (chunks of) textures into memory in time.

Maybe I'm missing something, but it doesn't seem to have that same "wow, look at that! We need to put that good sh** into our next engine!" kind of appeal that Doom3 brought with it when it was first unveiled..

One thing I'd say is that unique assets/texturing across a game world wouldn't really offer much evident gain in just some screenshots. It's probably something you might only appreciate as you play through the game. By my understanding, this approach isn't really boosting the per-frame texture complexity of a scene, and thus wouldn't be easily appreciable in a given screenshot. What it is doing is boosting the 'per-world' variety/uniqueness of data. So this patch of rock over here isn't a repetition of that patch of rock from over there, to take a relatively simple example. There are certainly games where reuse of assets has been, perhaps, annoyingly evident throughout a game, so approaches that allow increased variety, with tools to support that on the asset creation side, should be welcome.
 
One thing I'd say is that unique assets/texturing across a game world wouldn't really offer much evident gain in just some screenshots. It's probably something you might only appreciate as you play through the game. By my understanding, this approach isn't really boosting the per-frame texture complexity of a scene, and thus wouldn't be easily appreciable in a given screenshot. What it is doing is boosting the 'per-world' variety/uniqueness of data. So this patch of rock over here isn't a repetition of that patch of rock from over there, to take a relatively simple example. There are certainly games where reuse of assets has been, perhaps, annoyingly evident throughout a game, so approaches that allow increased variety, with tools to support that on the asset creation side, should be welcome.

True I'd say..

After thinking about it some more, I figure this kind of technology would be really good for games with vast, sprawling open landscapes..

I find repeating textures aren't really all that evident in most games today, where we have close-quarters gameplay in confined spaces. However I'm sure things like flight sims would really benefit a lot, considering they're trying to represent vast sprawling landscapes made up of the same component materials/objects (trees, grassy fields, roads, etc.) but can benefit from the added uniqueness of texturing across the greater distances..

However I'm not quite sure how visible the benefits of this technology can be in any other case.. After all, unique texturing isn't something that's "impossible" via a more conventional content pipeline and widely used practices/technologies/tools.. And above all you're still fundamentally constrained by the production time available to spend on those unique textures in the first place.. (hence I would have imagined this, rather than some technical limitation of the hardware/software, to be the biggest reason texture reuse occurs in content development for games..)
 
I'd still like to know what art application can load a 20 GB texture and have all the needed tools to edit it...

id Software's toolset, id Studio (which itself is a heavily modified version of the tools Splash Damage built for Quake Wars, so if you're curious you might want to check them out).

archangelmorph said:
However I'm not quite sure how visible the benefits of this technology can be in any other case.. After all, unique texturing isn't something that's "impossible" via a more conventional content pipeline and widely used practices/technologies/tools.. And above all you're still fundamentally constrained by the production time available to spend on those unique textures in the first place.. (hence I would have imagined this, rather than some technical limitation of the hardware/software, to be the biggest reason texture reuse occurs in content development for games..)

Texture resolution can also be bumped up significantly. In fact, I suspect this demo has pretty high-res textures; they just aren't visible because of the crappy blurriness of the cam shots (check out UE3 or Crysis cam shots and you'll see the same kind of thing). We'll just have to wait for proper screens to really see.
 
Panajev said:
It seems to me that this engine is quite well thought out for consoles too:
Assuming you can afford the costs associated with reaching your target fidelity, which remains to be seen.
 
That's actually quite debatable

I can only go with what the devs have told me, and they are saying MT saves them polygons and batch counts.

, and either way, using MT you'll be paying for it with processing power and memory up front.
The benefit would be that the cost becomes largely static and thus more predictable.

Yes on the CPU power, no on the memory. You'll need to decompress it, but the entire MT (in Quake Wars) only uses around 30 MB of RAM in total at any given time. So, less than using standard textures.
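
A quick back-of-the-envelope sketch of why a resident footprint like that can stay small and fixed (every number below is assumed purely for illustration; these are not Quake Wars' actual settings):

```cpp
// Illustrative arithmetic only: a page cache sized for what's on screen
// has a bounded footprint no matter how big the texture on disk is.
#include <cstdio>

int main() {
    const int pageSide      = 128;   // texels per page side (assumed)
    const int bytesPerTexel = 1;     // DXT5-style compressed pages (assumed)
    const int cachePages    = 1920;  // resident page budget (assumed)

    const long long bytes =
        1LL * cachePages * pageSide * pageSide * bytesPerTexel;
    std::printf("resident page cache: %.1f MB\n", bytes / (1024.0 * 1024.0));
    // ~30.0 MB with these settings, regardless of whether the full
    // megatexture on disk is 2 GB or 20 GB.
    return 0;
}
```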
 
I find repeating textures aren't really all that evident in most games today, where we have close-quarters gameplay in confined spaces. However I'm sure things like flight sims would really benefit a lot, considering they're trying to represent vast sprawling landscapes made up of the same component materials/objects (trees, grassy fields, roads, etc.) but can benefit from the added uniqueness of texturing across the greater distances..

I'd say a trend toward uniqueness in assets generally, beyond just textures, would be welcome. There've been a few games I've played recently where the recycling of assets in general has been pretty notable.

And above all you're still fundamentally constrained by the production time available to spend on those unique textures in the first place.. (hence I would have imagined this, rather than some technical limitation of the hardware/software, to be the biggest reason texture reuse occurs in content development for games..)

Well, as limits are raised, one might expect developers to start to rise to the challenge. There's obviously a limit to development resources, but with good tools one might hope those resources stretch far beyond where we were last generation, or even where we are currently. A key part of id's approach has been on the tools side.

Yes on the CPU power, no on the memory. You'll need to decompress it, but the entire MT (in Quake Wars) only uses around 30 MB of RAM in total at any given time. So, less than using standard textures.

That's interesting... Carmack had alluded to a reduced memory footprint, but I'd never seen specific figures. I wonder if this is more down to smart/efficient texture streaming than to the idea of using huge textures or uniquely texturing surfaces, though (i.e. whether such data management could be applied to 'standard' textures also..).
 
I can only go with what the devs have told me, and they are saying MT saves them polygons and batch counts.
I am honestly tired of devs saying techniques are "saving them polygons". Unless they are reducing multipassing somehow, or their engine is not aimed at real-time rendering, all known techniques to 'save on polygons' ALWAYS look worse than just having a much higher number of polygons in the first place.

What would be disruptive is not a new technique to save on polygons, but one that actually allows you to render a much higher number of polygons without significant performance drops. The point of massively diminishing returns in polygon counts is not at 5K-10K; it's more along the lines of 25K-50K. Then, in addition to that, you obviously still need normal mapping or basic parallax mapping for what is very high-frequency detail.

P.S.: Unlike some people, I *am* impressed technically by this new engine, and I believe that MegaTexture has a lot of potential. I would argue, however, that it might be one generation too early for it and that some of the design tradeoffs in the engine definitely seem to be suboptimal, because the end result has some clearly lacking parts imo.

They probably sacrificed too many potential 'evolutionary' techniques for the sake of fitting a few 'revolutionary' techniques within their performance goals and development time, imo.
 
I agree with the above, but we must keep another thing in mind (one that has been beaten to death already in this very thread :)): the fact that he's targeting consoles automatically forces some trade-offs.
 
Assuming you can afford the costs associated with reaching your target fidelity, which remains to be seen.

It depends on what your target fidelity is. Surely id producing a game that gets efficiently ported across PCs, Xbox 360, and PLAYSTATION 3, looking great on all platforms, might make people interested in this new engine as an alternative to UE3... and if it does cost less to license than UE3, it might make some fans ;).
 
Arun, it seems to me that the saving-polygons/batch-count statement is relative to very similar geometry using repeating textures: for instance, you might have to introduce extra geometry around the borders of things that use e.g. a rock texture to cut it off from the completely grassy texture right next to it. The alternative would be to blend the two different textures across a single polygon, but I imagine using a single texture is quite a bit faster than that. Also, if there is a crossover from one texture to another at edge (v1,v2), won't you have to duplicate those vertices (since, depending on which side of the edge you are drawing a polygon on, you will want texture coordinates for a different texture)? Presumably this happens a lot less in id's new engine. So those comments might just mean they need fewer polygons to reach a given level of detail, not that the engine will in general use fewer polygons than competing solutions (especially since they don't seem to be using stencil shadows anymore)...
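
To illustrate the seam point (hypothetical data, not engine code): GPU vertex streams carry one UV set per vertex, so an edge shared by two differently-textured polygons has to store its vertices once per material, with the two sides going out in separate batches, whereas a single unique texture lets both sides share vertices and a batch:

```cpp
// Why a texture seam duplicates vertices -- illustrative sketch.
#include <vector>

struct Vertex {
    float px, py, pz;  // position
    float u, v;        // texture coordinates
};

int main() {
    // Conventional texturing: a rock quad and a grass quad meet at the
    // edge (A,B). Each material samples its own texture with its own
    // UVs, so A and B each appear twice, and the quads are drawn in two
    // separate batches (one per texture bind).
    std::vector<Vertex> rockBatch  = { /* ..., */ {1,0,0, 0.9f,0.5f}, {1,1,0, 0.9f,0.6f} };   // A, B in rock UVs
    std::vector<Vertex> grassBatch = { /* ..., */ {1,0,0, 0.1f,0.2f}, {1,1,0, 0.1f,0.3f} };   // A, B again, grass UVs

    // Unique texturing: both quads address one big texture space, so A
    // and B are stored once and both quads can share one batch.
    std::vector<Vertex> singleBatch = { /* ..., */ {1,0,0, 0.31f,0.74f}, {1,1,0, 0.31f,0.75f} };
    (void)rockBatch; (void)grassBatch; (void)singleBatch;
    return 0;
}
```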
 
The alternative would be to blend the two different textures across a single polygon, but I imagine using a single texture is quite a bit faster than that.

This is one of the areas where I expect id Tech 5 (and Quake Wars to some extent) to really perform much better than other engines. Whereas most engines need extensive real-time texture blending in places (especially outdoor areas), id Tech 5 needs none, because it can all be in the texture already. From my experience those real-time blends add up very quickly and bring performance to its knees, so this texture solution allows for unlimited quality in texture blending (i.e. blending based on normal-map details and such) with zero performance hit.
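
A sketch of that trade-off (assumed names and numbers, not id's tool code): the blend loop that a conventional terrain shader runs per pixel every frame can instead be run once at build time, baking the result into the unique texture so the runtime cost is a single fetch:

```cpp
// Bake-time blending vs. runtime blending -- illustrative sketch.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Texel { uint8_t r, g, b, a; };

// Offline (tools side): composite any number of layers into one texel
// of the unique texture. This loop runs once per texel at build time.
Texel BakeTexel(const std::vector<Texel>& layers,
                const std::vector<float>& weights) {
    float r = 0, g = 0, b = 0;
    for (size_t i = 0; i < layers.size(); ++i) {
        r += layers[i].r * weights[i];
        g += layers[i].g * weights[i];
        b += layers[i].b * weights[i];
    }
    return { uint8_t(r), uint8_t(g), uint8_t(b), 255 };
}

int main() {
    // Rock/grass transition texel: 60% rock, 40% grass, decided by an
    // artist mask at bake time rather than in the shader.
    Texel rock{120, 110, 100, 255}, grass{60, 140, 70, 255};
    Texel baked = BakeTexel({rock, grass}, {0.6f, 0.4f});
    std::printf("baked: %u %u %u\n", unsigned(baked.r), unsigned(baked.g), unsigned(baked.b));
    // At runtime the shader does a single fetch of this pre-blended
    // texel from the resident page cache -- no per-frame blend, i.e.
    // the "zero performance hit" described above.
    return 0;
}
```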
 