Carmack demos new id engine at WWDC keynote

Why do characters in PC-type games feel so lifeless? I just saw the MGS4 trailer again and it felt many times better.
I was wondering about this myself. The technology is there, so what's up? Hell, even the Metal Gear Solid 2 scenes on the PS2 were much more lifelike and immersive than anything I've ever seen on the PC to this day! Now, obviously it can't have anything to do with polycounts or the use of technologies (where PCs excel), so what's left? The scripting?
About the id video: looks amazing!!! Not sure how far ahead of the Unreal Engine 3 it is, but they are both spectacular! And the unique texturing is the next step Carmack wanted to take with the MegaTexture technology from Enemy Territory! :)
 
The video was much better, I'll give it that. The outdoor areas are rather amazing and very impressive, especially considering it was apparently done in 10 days. I still must say that the guy it was hovering around towards the end looked very bad, however, and was in extremely stark contrast to the other parts of the world. Hopefully that is simply a flaw of the rush job and it'll be much better later on. I still have some concerns, and some of the surfaces have that Carmack look, but time will tell.
 
The video was much better, I'll give it that. The outdoor areas are rather amazing and very impressive, especially considering it was apparently done in 10 days. I still must say that the guy it was hovering around towards the end looked very bad, however, and was in extremely stark contrast to the other parts of the world. Hopefully that is simply a flaw of the rush job and it'll be much better later on. I still have some concerns, and some of the surfaces have that Carmack look, but time will tell.

It's like the D3 engine: it looked shit in screenshots, yet in motion it was awesome. Anyway, this screen looks like something out of Ninja Theory's game:

apple-wwdc-2007-gal-017.jpg


I think.
 
This might be useful in debates about storage:

Enemy Territory: QUAKE Wars features a tool suite for creating MegaTextures called MegaGen. Once the Artist or Level Designer has completed the artwork, MegaGen outputs two entirely unique 4GB textures - a diffuse map containing colour data, and a normal map - and then combines them into a single 5GB data file. This data file is then split into unique tiles suitable for streaming, and then compressed to reduce disk space usage.

The resulting unique MegaTexture is around 500MB in size. This represents a reasonable tradeoff between ETQW’s visual quality and disk space usage (maintaining a shippable size for the game).
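
Going by that description alone, here's a rough sketch of what the "split into tiles, then compress" step could look like. The tile size, the codec (plain zlib here) and all the names are my own assumptions for illustration, not anything from MegaGen:

```python
import zlib
import numpy as np

TILE = 256  # assumed tile edge length in texels; the real pipeline's value isn't public

def split_into_tiles(texture: np.ndarray, tile: int = TILE):
    """Yield (row, col, tile_pixels) for a huge H x W x C texture."""
    h, w, _ = texture.shape
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            yield ty // tile, tx // tile, texture[ty:ty + tile, tx:tx + tile]

def compress_tiles(texture: np.ndarray):
    """Compress each tile independently so any single tile can be streamed on its own."""
    tiles = {}
    for row, col, pixels in split_into_tiles(texture):
        tiles[(row, col)] = zlib.compress(pixels.tobytes(), level=6)
    return tiles

if __name__ == "__main__":
    # Tiny stand-in for a megatexture: 2048x2048 RGB instead of tens of thousands per side.
    # Random noise compresses poorly; real artwork shrinks far more (hence ~5GB -> ~500MB).
    tex = (np.random.rand(2048, 2048, 3) * 255).astype(np.uint8)
    tiles = compress_tiles(tex)
    packed = sum(len(t) for t in tiles.values())
    print(f"raw: {tex.nbytes / 2**20:.1f} MiB, compressed tiles: {packed / 2**20:.1f} MiB")
```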

This sounds great in theory. But for some reason, for the past couple of months, every screenshot and video I've seen of ET:QW has made the game look like it received a massive downgrade in the texture department. The early screenshots looked great and really piqued my interest in the game. However, when they finally released the first videos, the sheer degree of smudged, blurry texturing put me right off the game (coincidentally, this happened to be around the time they started touting this MT technology).

It seems to me like you'd need a little more than 500MB of disk storage to get your levels looking decent, if that is for a single level at the level of fidelity I've seen of the game currently.

Heck, isn't 500MB for one level a tad extreme already? What, so your game is going to have like 10-12 levels if you're restricting it to a single disc? Also, what if you're not a PC tech junkie and don't have a graphics card with 500MB+ of VRAM? Does this mean performance is going to suffer as a result?

I hate to be critical of Carmack's work on this, because on the surface it really does seem to show some impressive stuff. I'm just a little confused, since even after this official unveiling they still haven't succeeded in convincing me why you'd need one big-ass texture (as opposed to collections of smaller textures, localised to the individual objects and models they relate to) anyway. Even with respect to performance, what are the benefits of this? And finally, how well would this engine deal with things like LOD, texture streaming etc.? Better or worse?

I think I need some more info, to be honest, rather than fancy screenshots and footage which don't particularly scream "the-most-uber-advanced-engine-of-the-current-generation-&-a-technical-milestone" over some of the other cutting-edge solutions we've seen to date (e.g. UE 3.0, Offset Engine, MotorStorm engine, Heavenly Sword engine, Uncharted engine, CryEngine 2 etc.).

:cry:
 
This sounds great in theory. But for some reason, for the past couple of months, every screenshot and video I've seen of ET:QW has made the game look like it received a massive downgrade in the texture department. The early screenshots looked great and really piqued my interest in the game. However, when they finally released the first videos, the sheer degree of smudged, blurry texturing put me right off the game (coincidentally, this happened to be around the time they started touting this MT technology).

Well, the physical size of the texture doesn't necessarily speak to its quality, just that it contains a lot of unique data. There are some cases Carmack discussed where the visual result isn't as pleasing as it could be, but it seemed like he was working on it. They're also going higher resolution with these megatextures in their next project, it seems. But again, that doesn't necessarily mean the texturing will be (super) higher quality than you're used to; it 'just' means that it'll be unique in every part of the level.


Heck, isn't 500MB for one level a tad extreme already? What, so your game is going to have like 10-12 levels if you're restricting it to a single disc? Also, what if you're not a PC tech junkie and don't have a graphics card with 500MB+ of VRAM? Does this mean performance is going to suffer as a result?

500MB in one level doesn't mean you need to keep 500MB of texture data in memory at any one time. You'd be streaming tiles of texture in and out of memory depending on what's currently required. x MB of texture data in a level != x MB of texture data used in the current frame.
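
To make that concrete, here's a minimal sketch of what such tile streaming could look like: a fixed VRAM budget of resident pages, evicted LRU-style as the view changes. The page size, budget and class names are my own assumptions, not id's actual implementation:

```python
from collections import OrderedDict

PAGE_BYTES = 256 * 256 * 4          # assumed 256x256 RGBA pages
BUDGET_BYTES = 64 * 2**20           # assumed 64 MiB of VRAM set aside for megatexture pages

class PageCache:
    """Keep only the most recently used texture pages resident, within a fixed budget."""
    def __init__(self, budget=BUDGET_BYTES):
        self.capacity = budget // PAGE_BYTES
        self.resident = OrderedDict()   # (mip, row, col) -> page data

    def request(self, page_id):
        """Called for every page the current frame needs; loads and evicts as required."""
        if page_id in self.resident:
            self.resident.move_to_end(page_id)      # mark as recently used
            return self.resident[page_id]
        data = self._load_from_disk(page_id)        # stream the compressed tile, decompress
        self.resident[page_id] = data
        while len(self.resident) > self.capacity:
            self.resident.popitem(last=False)       # evict least recently used page
        return data

    def _load_from_disk(self, page_id):
        return bytes(PAGE_BYTES)  # placeholder for real file I/O and decompression

cache = PageCache()
# A frame only touches the pages its visible surfaces actually sample,
# so resident memory tracks the view, not the 500MB total on disk.
for page in [(0, 10, 12), (0, 10, 13), (1, 5, 6)]:
    cache.request(page)
print(f"resident pages: {len(cache.resident)} of {cache.capacity} budgeted")
```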

Further, we already have PC games breaking through 10GB of data on the HDD (as opposed to whatever the size on the shipping disc might be), so that amount of texture data in a given level doesn't seem so outrageous at all. If we're talking 2GB per level, the numbers scale much higher again, but that's hardly unusual. Disc and HDD install sizes have always been going up and up; the surprise would be if they suddenly stopped, right? I know we've had many arguments that this would happen, that we'd hit a plateau, but that never seemed like a reasonable argument to me.


I'm just a little confused, since even after this official unveiling they still haven't succeeded in convincing me why you'd need one big-ass texture (as opposed to collections of smaller textures, localised to the individual objects and models they relate to) anyway.

I don't think they have one big-ass texture that covers every piece of geometry in the game, but they have big-ass textures covering the objects they relate to (i.e. one big-ass texture for the ground, one big-ass texture for that car over there, one big-ass texture for the building on the left, etc.). With large objects, like a landscape, you'd typically use a small set of textures and stitch/blend them together across the object in a manner that hopefully looked reasonably random and natural, but Carmack wants every square inch to be genuinely unique compared to every other. IIRC, those big-ass textures are split into chunks that are streamed in and out as necessary.
 
This sounds great in theory. But for some reason, for the past couple of months, every screenshot and video I've seen of ET:QW has made the game look like it received a massive downgrade in the texture department. The early screenshots looked great and really piqued my interest in the game. However, when they finally released the first videos, the sheer degree of smudged, blurry texturing put me right off the game (coincidentally, this happened to be around the time they started touting this MT technology).

I've actually seen the opposite... looking at the latest screens of the same stuff from the second trailer, there has been a massive improvement in texture quality. You have to remember that it takes a while to render the MegaTexture (Splash Damage has a set of computers linked together just for rendering MegaTextures), and as such they wait to do the detailing until they've worked all of the gameplay stuff out.

Just look at these comparison shots:

Old shot - New shot

Old shot - Old shot - New shot
 
It looks great. And the megatexture technology sounds really promising.

Carmack really likes these general-purpose solutions to problems. And MegaTexture is just his latest and greatest example of that.

I imagine that in the editor, instead of assigning textures to certain faces, you select something like a rock texture and just paint/smear it on there with brush strokes, something like the clone brush in Photoshop (I haven't used PS in a long time; there might be a better way now), and then if you want to add details on top of that you can do it all in the editor. You can probably paint normal maps onto surfaces this way as well to add some relief, and also paint on masks for material types.
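
Speculating along those lines, painting into a megatexture could boil down to splatting a brush stamp into the big texture under the cursor. This is purely a guess at the workflow; the function, the alpha-blend and the sizes are hypothetical:

```python
import numpy as np

def paint_stamp(megatexture: np.ndarray, stamp: np.ndarray, cx: int, cy: int, opacity: float = 1.0):
    """Alpha-blend a small brush stamp into a huge texture at (cx, cy).
    Hypothetical editor operation; real tools would only rewrite the affected tiles."""
    sh, sw, _ = stamp.shape
    y0, x0 = cy - sh // 2, cx - sw // 2
    y1, x1 = y0 + sh, x0 + sw
    # Clip the stamp against the texture bounds.
    ty0, tx0 = max(y0, 0), max(x0, 0)
    ty1, tx1 = min(y1, megatexture.shape[0]), min(x1, megatexture.shape[1])
    if ty0 >= ty1 or tx0 >= tx1:
        return
    src = stamp[ty0 - y0:ty1 - y0, tx0 - x0:tx1 - x0].astype(np.float32)
    dst = megatexture[ty0:ty1, tx0:tx1].astype(np.float32)
    megatexture[ty0:ty1, tx0:tx1] = (dst * (1 - opacity) + src * opacity).astype(np.uint8)

# Paint a 64x64 "rock" stamp onto a (tiny stand-in for a) megatexture.
mega = np.zeros((1024, 1024, 3), dtype=np.uint8)
rock = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
paint_stamp(mega, rock, cx=500, cy=300, opacity=0.8)
```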

I guess it just remains to be seen how efficient this method will be with texture streaming over current solutions. That could make or break this. In this demo the surfaces didn't strike me as really high-res, even though it was probably running on very high-end NVIDIA hardware. But then it was a crappy shaky-cam.
 
I've actually seen the opposite... looking at the latest screens of the same stuff from the second trailer, there has been a massive improvement in texture quality. You have to remember that it takes a while to render the MegaTexture (Splash Damage has a set of computers linked together just for rendering MegaTextures), and as such they wait to do the detailing until they've worked all of the gameplay stuff out.

Just look at these comparison shots:

Old shot - New shot

Old shot - Old shot - New shot
Ah I hadn't seen these before!

Thanks for the heads-up! It looks vastly improved since the previous shots. However, I do remember seeing some of the initial promo shots from when it was first announced, which look just as good as these latest shots. Maybe they were pre-renders? In any case, you can colour me a little more optimistic...

Titanio said:
500MB in one level doesn't mean you need to keep 500MB of texture data in memory at any one time. You'd be streaming tiles of texture in and out of memory depending on what's currently required. x MB of texture data in a level != x MB of texture data used in the current frame.
So basically, what you're saying is that his engine doesn't particularly deal with these megatextures in any different way to other renderers, and the real emphasis is more on how the data is organised in external storage (disc, HDD etc.) for increased productivity with respect to artistic development of the assets within the content pipeline? So it sounds like at run-time it doesn't make much difference whether you store your data as hundreds of small compressed textures or whether you have one huge MT, since the system is going to stream the same-sized chunks across into VRAM anyway. If this is the case, I find it hard to see what performance gains (if any) can be made using this approach.

I'm starting to see more and more that the benefits of his engine relate almost entirely to the content pipeline and productivity with respect to how assets are constructed, as opposed to conquering some fundamental performance bottleneck in how textures are dealt with at run-time (to provide new perf gains). If this is true then I really don't see a problem with that; in fact it sounds great. Except for two things:

- It really wasn't what I was expecting from Carmack's "revolutionary" new technology.

- And lastly, any improvements in productivity a development team may gain from using this technology are fundamentally constrained by a number of factors:

a) available tools and DCC apps to provide the artists/designers with enough flexibility to leverage the technology efficiently
b) the staff's knowledge/experience of these tools, apps and technology, needed to fully benefit from the productivity gains (versus the team deciding to use alternative technology they may already be fully equipped for and have the necessary experience to leverage really well)

Maybe I'm missing something, but it doesn't seem to have that same "wow, look at that! We need to put that good sh** into our next engine!" kind of appeal that Doom3 brought with it when it was first unveiled.
 
IMO it's not THAT impressive, I mean:

ID Tech Shot

Heavenly Sword

UT 2007

Crysis

I really hate contrived screenshot comparisons.

Crysis I MIGHT give you, but can you really say a video of UT3 or Heavenly Sword looks better than the video we just saw? Probably heck no. That's why video>screenshots.

Of course, the character's face was the weakest part of the id game anyway.

Also, I've seen people on other forums talking about how Crysis beats this. Well, news flash: the whole thing about id's game to me is that it runs on the consoles. Carmack seems to be saying it will be equivalent as well (same data sets, he said?). Making something within the limitations of the consoles is a whole other ball game. Put it this way: the Crysis guys are scared to even try. They have repeatedly stated Crysis could not be done as-is on the consoles, and that any console version of Crysis will be a whole different game in scope and scale.

I mean, it's obvious you can pull off a lot more if you just aim at really high-specced PCs.

And even at that, I'm not sure this doesn't beat Crysis in spots.

Shame about the whole dirigible thing though. We're all disappointed it's not an FPS, but I'll be the one brave enough to say it :)
 
Seeing these screenshots makes me wonder whether the PS3 games Warhawk and Motorstorm are already using huge textures as well.
 
I really hate contrived screenshot comparisons.

Crysis I MIGHT give you, but can you really say a video of UT3 or Heavenly Sword looks better than the video we just saw? Probably heck no. That's why video>screenshots.

Of course, the character's face was the weakest part of the id game anyway.

Also, I've seen people on other forums talking about how Crysis beats this. Well, news flash: the whole thing about id's game to me is that it runs on the consoles. Carmack seems to be saying it will be equivalent as well (same data sets, he said?). Making something within the limitations of the consoles is a whole other ball game. Put it this way: the Crysis guys are scared to even try. They have repeatedly stated Crysis could not be done as-is on the consoles, and that any console version of Crysis will be a whole different game in scope and scale.

I mean, it's obvious you can pull off a lot more if you just aim at really high-specced PCs.

And even at that, I'm not sure this doesn't beat Crysis in spots.

Shame about the whole dirigible thing though. We're all disappointed it's not an FPS, but I'll be the one brave enough to say it :)

Good post, reflects my thoughts too.

But still, the screenshots are too small and blurry. I mean, wtf, they even have ghosting adding blurriness, and the ground looks a bit too blurry compared to the rest, but...
 
I'm not disappointed it's not an FPS. I've got good FPSs coming out of my ears as it is, and with plenty more on the way this year, another one is the last thing I need.

Canyon racing ftw!
 
Seeing these screenshots makes me wonder whether the PS3 games Warhawk and Motorstorm are already using huge textures as well.

Textures have to be stored in RAM/VRAM, and the space is not infinite. Streaming is only good up to a certain point, and using a large number of huge textures will require its space. MotorStorm and Warhawk may use large textures, but not many of them. Even old games use them, but only a few, often for the ground/sky, and they're stretched, so the ground is still blurry despite the size of the texture.
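
For a rough sense of why "large textures, but not many of them" is the practical limit, here's a back-of-the-envelope footprint calculation. The sizes and formats are my own illustrative numbers, not anything from either game:

```python
def texture_bytes(width, height, bits_per_texel, mipmaps=True):
    """Approximate VRAM footprint of one texture; a full mip chain adds roughly 1/3."""
    base = width * height * bits_per_texel / 8
    return base * 4 / 3 if mipmaps else base

# Uncompressed 32-bit vs DXT1-compressed (4 bits/texel) 4096x4096 texture:
print(f"4096^2 RGBA8: {texture_bytes(4096, 4096, 32) / 2**20:.0f} MiB")   # ~85 MiB
print(f"4096^2 DXT1:  {texture_bytes(4096, 4096, 4) / 2**20:.0f} MiB")    # ~11 MiB
# Even compressed, a handful of 4096^2 maps eats a big chunk of a 256-512 MB
# video card, so only a few can be fully resident alongside everything else.
```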
 
I'm not disappointed it's not an FPS. I've got good FPSs coming out of my ears as it is, and with plenty more on the way this year, another one is the last thing I need.

Canyon racing ftw!

You are aware that at the end of the demo, the game becomes an FPS? ;)
 
How long has id's new engine/game been in development?
:LOL:

BTW
They can go in and look at the world and, say, change the color of the mountaintop, or carve their name into the rock.
Is this a property of MegaTexture and physically simulated, or some other tech in id Tech 5 that uses the HDD?
 
How long has id's new engine/game been in development?

For the game, less than UT 2007 and longer than Crysis. For the engine, less than UE 3 and less than CryEngine 2. Yes, they've been working on the game longer than the engine. id is trying a novel (for them; Valve partially did this with Half-Life 2) way to develop this next game. They concentrated on nailing the gameplay, game flow dynamics, interactions, story, whatever, and now they're "throwing artists at it", to paraphrase Carmack.

Which is smart, considering graphics tech evolves at a much faster rate, so it pays to only work on the bling-bling later in development, unlike with D3, where most of the graphics features in the retail game were demonstrated three years earlier at MacWorld 2001. (And by the time the game shipped, every graphics programmer and their mother had already implemented normal maps + stencil shadows, so D3 no longer looked "new".)

Is this a property of MegaTexture and physically simulated, or some other tech in id Tech 5 that uses the HDD?

It's one of the benefits of using unique texturing (which id calls MegaTexture). If you're using X MB of textures for all surfaces you see at any given point, instead of X number of textures like current engines do, then it doesn't matter if those pixels are white, red or black, because they will need the same memory and polygons. Currently, if you want to add unique pixels you have to add another texture and more polygons to the scene. So performance-wise it doesn't matter if you have a mile-long brick wall with bricks that look exactly the same, or instead paint that mile-long wall so that each brick is unique, because the MB of texture memory used is the same with MT.
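
A quick back-of-the-envelope way to see the "same memory whether or not the bricks are unique" point, assuming page-based streaming like the tile discussion earlier in the thread (the page size and texel density are my own numbers):

```python
PAGE = 128                      # assumed streamed page size in texels (per side)
TEXELS_PER_METRE = 256          # assumed authoring density for the wall
PAGE_BYTES = PAGE * PAGE * 4    # RGBA8, ignoring compression for simplicity

def resident_bytes(visible_metres_x, visible_metres_y):
    """Memory needed for the pages covering only the part of the wall on screen."""
    pages_x = (visible_metres_x * TEXELS_PER_METRE + PAGE - 1) // PAGE
    pages_y = (visible_metres_y * TEXELS_PER_METRE + PAGE - 1) // PAGE
    return pages_x * pages_y * PAGE_BYTES

# A mile-long (~1609 m), 3 m tall, uniquely painted wall is enormous on disk...
disk = 1609 * 3 * TEXELS_PER_METRE**2 * 4
# ...but if only a 20 m x 3 m stretch is visible, the resident cost is small,
# and it would be exactly the same if every brick were identical.
print(f"on disk:  {disk / 2**30:.1f} GiB")
print(f"resident: {resident_bytes(20, 3) / 2**20:.1f} MiB")
```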

If this is the case, I find it hard to see what performance gains (if any) can be made using this approach.

A single texture means fewer polygons and fewer batches are required to process the same scene.
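
As a toy illustration of the batching side of that claim (nothing engine-specific, just the general idea that geometry is usually split per texture and each split costs a draw call):

```python
from collections import defaultdict

def count_batches(surfaces):
    """Group surfaces by the texture they sample; each group is (roughly) one draw call."""
    groups = defaultdict(list)
    for surface, texture in surfaces:
        groups[texture].append(surface)
    return len(groups)

# Traditional tiling: one wall uses brick, trim, moss and dirt textures -> 4 batches.
tiled = [("wall_a", "brick"), ("wall_b", "brick"), ("trim", "trim"),
         ("moss_patch", "moss"), ("dirt_patch", "dirt")]
# Unique texturing: every surface samples the one megatexture -> 1 batch.
unique = [(name, "megatexture") for name, _ in tiled]

print(count_batches(tiled), count_batches(unique))   # 4 1
```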

- And lastly, any improvements in productivity a development team may gain from using this technology are fundamentally constrained by a number of factors:

a) available tools and DCC apps to provide the artists/designers with enough flexibility to leverage the technology efficiently

id has a new set of editing tools, which they call "id Studio", that deal with MT editing natively. ET: Quake Wars, which uses the first generation of MegaTexture, also features improved editors with MT support (for terrain only in that game).

b) the staff's knowledge/experience of these tools, apps and technology, needed to fully benefit from the productivity gains (versus the team deciding to use alternative technology they may already be fully equipped for and have the necessary experience to leverage really well)

This is a problem common to all workflow changes and is not exclusive to MT. If someone makes an engine using solely procedural textures, you can be sure the artists will need training beforehand.

Maybe I'm missing something, but it doesn't seem to have that same "wow, look at that! We need to put that good sh** into our next engine!" kind of appeal that Doom3 brought with it when it was first unveiled.

Changing your shadow implementation is not as disruptive as changing the way you manage your assets. Like you yourself said, MT needs dedicated tool support. It's not really something a programmer can whip up in an afternoon, because he's going to be a lot more dependent on content for testing than, say, toggling between shadow volumes and shadow maps. Even the normal-mapping implementation was mild compared to this: you only needed a shader and one normal map that any programmer can create. To implement MT you'll need:

- An editor that allows painting huge textures in real time.
- A dedicated shader that handles all the filtering and mip-map chain complexities (see the sketch after this list).
- A proprietary file format to store all this information (MT is more than graphics; it also handles physics interactions).
- An efficient algorithm to stream pages into memory, fill dirty pages, flush, etc.
- An efficient algorithm to decompress the huge texture (this is optional, but even if you get all of the above working correctly, unless you can somehow fit all the huge textures your game will use onto a DVD or two, it's not going to be practical).
- Finally, to see and measure the benefits of reduced polygon usage, you'll need new meshes.
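
To give a flavour of what the shader/streaming items above have to work out, here's a hedged sketch in Python (rather than shader code) of mapping a uv sample plus its screen-space derivative to a mip level and a page index. The virtual texture size and page size are made-up values:

```python
import math

VIRTUAL_SIZE = 32768   # assumed virtual texture size in texels (per side)
PAGE = 128             # assumed page size in texels (per side)

def page_needed(u, v, duv_dx):
    """Which page of which mip a sample at (u, v) needs.
    duv_dx: how much u/v changes per screen pixel (the rasterizer's derivative)."""
    texels_per_pixel = duv_dx * VIRTUAL_SIZE
    # Pick the mip where one texel maps to roughly one screen pixel.
    mip = max(0, int(math.floor(math.log2(max(texels_per_pixel, 1.0)))))
    mip_size = VIRTUAL_SIZE >> mip
    page_x = int(u * mip_size) // PAGE
    page_y = int(v * mip_size) // PAGE
    return mip, page_x, page_y

# A nearby surface (small uv change per pixel) asks for a fine mip;
# a distant one asks for a coarse mip, so far fewer pages need to be resident.
print(page_needed(0.31, 0.72, duv_dx=1 / 32768))   # mip 0
print(page_needed(0.31, 0.72, duv_dx=1 / 2048))    # coarser mip
```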
 