John Carmack PCGamer interview...next engine to have "unique texturing on everything"

Gabrobot

Regular
Interesting tidbits about his next engine:

Are you working on a new rendering engine?

Yeah. For the last year I’ve been working on new rendering technologies. It comes in fits and starts. Our internal project has not been publicly announced. We’re doing simultaneous development on Xbox 360 and PC, and we intend to release on PS3 simultaneously as well, but it’s not a mature enough platform right now for us to be doing much work on.

Game engines have their own certain look to them. Quake 3-era games all have a similar lighting and texture model, as do Doom 3-era games with their high-poly bump maps. Can you predict what the engine is going to look like from the start?

Usually when I set out making the technical decisions I don’t know how it’s going to turn out. A lot of it is working out what works, and what ideas come to you. It is worthwhile mentioning, as you said, that there’s a characteristic look to the new engine, and it’s going to be centred around Unique Texturing.

This is an argument I get into with people year after year. Every generation, someone comes up and says something like, “procedural and synthetic textures and geometry are going to be the hot new thing.” I’ve heard it for the last three console generations – it hasn’t been true, and it’s not going to be true this generation either. It’s because management of massive data-sets is always the better thing to do. That’s what a lot of the technologies we are working on centre around – both the management for the real-time use of it, and the management of the efficient content-creation side of it. I think that’s going to give a dramatically better look than what we’re seeing in this generation.

Can you describe how it will look, in layperson’s terms?

When you start seeing screenshots of games designed like this, it’ll be obvious that they’re of a new generation. I’m not sure how much it comes through, but Quake Wars: Enemy Territory, the game Splash Damage are working on, uses an intermediate, half-way technique: the MegaTexture stuff I did originally. They’ve really gone and run with that. Some of their screenshots are really starting to show the promise of unique texturing on everything. We’ve got an interesting combination of techniques on that – they did an offline procedural synthesis to generate the basis of the terrain, and I built some technology to let artists dynamically stamp things into all the channels in game. We’re starting to see some really, really spectacular results out of this, as everyone climbs up the skill curve of using these new tools. The technology we’re working on here at id takes that a step further, with a terrain texturing system that is applied throughout to everything.

When you create a technology, do you build features specifically for a game, or is it a case of just testing to see what the silicon can do?

It’s somewhere in between. You don’t build technology for technology’s sake. The people who would just build 3D engines without a game attached, those have never been the really successful products. In any case of engineering, you really need to tailor your design to what you’re trying to accomplish. There are always the types of situations when you can say, “we know we want our game to have this type of outdoor stuff, or this type of indoor stuff,” and you start trying to write technology for it, but you find out something might be harder than you expected, or you might get a novel idea, and that might feed back into the game design. We commonly switch gears during our development process when a really good opportunity comes up. We’re not going to be pig-headed about something, and say “this is what our design spec says, so this is what we’re going to do”. We’ll pick targets of opportunity when we get them, but the technology does very much try to build around what we want to accomplish with our games.

Was there one of these “targets of opportunity” in the development of Doom 3?

When we left Quake 3 I had several different directions of technology that I considered potentially useful for next-generation game engines. One of those was uniquely texturing the surfaces, and one of them was this bump-mapped and unified lighting thing that wound up in Doom. The decision I made at the time was that something that made lower-quality screenshots but a more dynamic environment would make a better game, which is why I took that direction. But it’s interesting that now, as the hardware has progressed, I can combine both of the things I wanted to do: the unique texturing and the fully dynamic environments.

Moderator note: The link to the full interview has been removed.
 
Looking over the latest Quake Wars articles, it's interesting to compare with what Carmack says here:
John Carmack said:
The technology we’re working on here at id takes that a step further, with a terrain texturing system that is applied throughout to everything.

In this interview with Paul Wedgwood on Quake Wars, he says this about the terrain system (what Carmack mentions in the text I just quoted):
Paul Wedgwood said:
We took this basic implementation of the technology and then started developing it further so we had it working on a 3D mesh, we introduced a single parallel light source for lighting, the ability to put other models and things on the landscape, foliage, tools like mega-gen which generates the texture, geometric texture distribution, the road tool that lets you just plop roads down along a route.

As a piece of technology, it's really good because it generates really good visuals, and that helps with player immersion. But almost more importantly, it's great for gameplay because you're finally unlocked from polygons. You derive all of your properties - vehicle traction, particles, audio effects - from the MegaTexture, even things like the distribution of debris and foliage placement. All of these things can be derived from texture masks, so you no longer have to have a strip of polygons that separates the road texture from the grass texture. It also helps with performance because we can have huge terrains that use fewer polygons, and this disconnection between the polygons and the MegaTexture means we have more effects and more efficient texture usage as well.

So it sounds like the tool Carmack is talking about is this "mega-gen" tool. It sounds like they take materials (procedurally generated?) and paint them onto the landscape using this tool (and probably make hand tweaks as well...perhaps this system works something like ZBrush?). It also sounds like they can paint various layers ("texture masks") to control different things. I would bet that in Carmack's next engine this works so that you can combine things like shaders as well, like what UE3 does (only this is much more powerful, as you can custom-tweak every single surface...you can have texture layers to control the physical properties of the surface and stick vegetation like moss hanging off the surface).
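To make that last bit concrete, here is a rough C++ sketch of how gameplay properties could be pulled from a painted mask channel. Everything in it (the struct names, the fields, the lookup) is my own guess at the kind of data Wedgwood describes, not actual Quake Wars or id code:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical per-material gameplay data, keyed by a material index that has
// been painted into one channel of the terrain's texture masks.
struct SurfaceMaterial {
    float       traction;       // vehicle grip multiplier
    const char* particleFx;     // dust, mud splash, ...
    const char* footstepSound;
};

// One painted mask covering the whole map, one byte per sample.
struct TerrainMasks {
    int width, height;
    float worldSize;                  // map spans worldSize x worldSize units
    std::vector<uint8_t> materialId;  // width * height samples
};

// Turn a world-space position into a mask lookup and return its material, so
// traction, particles and audio all come from the painted data.
const SurfaceMaterial& lookupSurface(const TerrainMasks& masks,
                                     const std::vector<SurfaceMaterial>& table,
                                     float worldX, float worldZ)
{
    int mx = std::clamp(int(worldX / masks.worldSize * masks.width),  0, masks.width  - 1);
    int mz = std::clamp(int(worldZ / masks.worldSize * masks.height), 0, masks.height - 1);
    uint8_t id = masks.materialId[mz * masks.width + mx];
    return table[id];
}
```

A vehicle or audio system would just call lookupSurface() with its position instead of asking which polygon it is standing on, which is the "unlocked from polygons" point Wedgwood makes.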

If my speculation is accurate, then it's pretty awesome...this is exactly the kind of answer that's needed for the next generation to allow for super-detailed environments while making them much easier for the artists to create. You'd basically be able to move through the environment in this "mega-gen" tool and quickly make custom surfaces for everything, painting properties onto them. :)
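For the tool side, a minimal sketch of that "stamping" idea (painting a decal into every channel of one huge unique texture) might look something like this; the texel layout and the blend are invented purely for illustration and are not how MegaTexture actually stores things:

```cpp
#include <cstdint>
#include <vector>

// Invented texel layout: colour plus a bump value and a material id, all kept
// per-texel in one huge unique texture.
struct Texel {
    uint8_t r, g, b;    // diffuse colour
    uint8_t bump;       // height/bump value
    uint8_t material;   // index into a material table
};

struct UniqueTexture {
    int width, height;
    std::vector<Texel> texels;   // width * height entries
    Texel& at(int x, int y) { return texels[y * width + x]; }
};

// "Stamp" a small decal into every channel at (dstX, dstY): alpha-blend the
// colour and bump, and overwrite the material id where the decal is mostly
// opaque, so gameplay data gets painted along with the visuals.
void stampDecal(UniqueTexture& tex,
                const std::vector<Texel>& decal,
                const std::vector<uint8_t>& alpha,
                int decalW, int decalH, int dstX, int dstY)
{
    for (int y = 0; y < decalH; ++y) {
        for (int x = 0; x < decalW; ++x) {
            int tx = dstX + x, ty = dstY + y;
            if (tx < 0 || ty < 0 || tx >= tex.width || ty >= tex.height)
                continue;                       // clip against the texture edge
            const Texel& s = decal[y * decalW + x];
            uint8_t a = alpha[y * decalW + x];
            Texel& d = tex.at(tx, ty);
            d.r    = (s.r    * a + d.r    * (255 - a)) / 255;
            d.g    = (s.g    * a + d.g    * (255 - a)) / 255;
            d.b    = (s.b    * a + d.b    * (255 - a)) / 255;
            d.bump = (s.bump * a + d.bump * (255 - a)) / 255;
            if (a > 127)
                d.material = s.material;
        }
    }
}
```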
 
Gabrobot said:
What, a 32,000 by 32,000 pixel texture over the whole mesh? :p


Actually, the way the CryEngine works is with multiple layers of masks; pretty much you paint textures onto the terrain like you would with any brush in Photoshop. Quite a bit more flexible than one huge texture, plus you can control the level of detail with your own texture sizes.

You set up the outdoor level with x number of layers of terrain textures, then go in and make masks for each layer :).
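The layered-mask approach boils down to something like the following per-texel blend. This is CPU-side C++ purely to illustrate the idea; the layer/mask structure is a simplification, not anything taken from CryEngine itself:

```cpp
#include <vector>

struct Color { float r, g, b; };

// One terrain layer: a tileable base texture plus a painted coverage mask.
struct TerrainLayer {
    std::vector<Color> tile;      // tileW x tileH tileable texture
    int tileW, tileH;
    std::vector<float> mask;      // maskW x maskH paint weights in [0,1]
    int maskW, maskH;
};

// Blend all layers at terrain coordinate (u, v) in [0,1): each layer's
// tileable texture repeats across the terrain and is weighted by its mask.
Color shadeTerrain(const std::vector<TerrainLayer>& layers,
                   float u, float v, int tilesAcross)
{
    Color out = {0.0f, 0.0f, 0.0f};
    float totalWeight = 0.0f;
    for (const TerrainLayer& layer : layers) {
        // Sample this layer's painted mask at terrain resolution.
        int mx = static_cast<int>(u * layer.maskW) % layer.maskW;
        int my = static_cast<int>(v * layer.maskH) % layer.maskH;
        float w = layer.mask[my * layer.maskW + mx];

        // Sample the tileable texture, which repeats `tilesAcross` times
        // across the terrain (this repetition is where visible tiling
        // comes from).
        int tx = static_cast<int>(u * tilesAcross * layer.tileW) % layer.tileW;
        int ty = static_cast<int>(v * tilesAcross * layer.tileH) % layer.tileH;
        const Color& c = layer.tile[ty * layer.tileW + tx];

        out.r += c.r * w;
        out.g += c.g * w;
        out.b += c.b * w;
        totalWeight += w;
    }
    if (totalWeight > 0.0f) {      // normalise so the weights sum to one
        out.r /= totalWeight;
        out.g /= totalWeight;
        out.b /= totalWeight;
    }
    return out;
}
```

The tiling being argued about in this thread comes from the fact that tile[] repeats tilesAcross times, whereas a unique texture effectively replaces the whole loop with one direct lookup into non-repeating data.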
 
Razor1 said:
Actually, the way the CryEngine works is with multiple layers of masks; pretty much you paint textures onto the terrain like you would with any brush in Photoshop. Quite a bit more flexible than one huge texture, plus you can control the level of detail with your own texture sizes.

You set up the outdoor level with x number of layers of terrain textures, then go in and make masks for each layer :).

One large texture is much more flexible, though, because you aren't forced to blend several textures in different ways to make tiling less apparent. You can have large details like big rocky mountains with bumpmapping and everything. With MegaTextures you can go in and hand-tweak the actual pixels. How do you do a huge open field with CryEngine? Unless you blend between a bunch of different textures, there will be some pretty atrocious tiling. This simply isn't an issue with MegaTexture. How do you do things like twisting roads (not just dirt ruts like in Far Cry) with a tile-based texture system such as CryEngine's? How would you apply it to every surface like in Carmack's next engine? You can't, and that's the beauty of Carmack's tech...it gives complete control to the artist to paint whatever they want on things.

And besides which, CryEngine is limited to 4096 x 4096 pixel masks...a far cry from 32,000 by 32,000. ;)
 
Hmm, you really won't notice the tiling unless the texture originally made wasn't very tileable. The artist still has control per pixel.

The 4096x4096 limit is because of terrain size, not the actual texture size. This is because terrain size has a direct effect on the amount of system RAM usage. Masks are smaller, but there really is no need to go any larger from what we have seen so far.

If you take a 4096x4096 terrain and have 1024x1024 tileable textures (each texture has its own bump map and lightmaps), you end up with a hell of a lot more than 32,000 x 32,000 pixels.

I really don't think there is much use of procedural textures for terrain; maybe I'm reading it wrong. But anyway, even with procedural textures for something like this, they will have to be stored in vram once made, which will nullify the benefits of doing such a thing and increase load times.

Maybe there is something with this engine that will cause it to go too slow due to the number of alpha layers. I know in the CryEngine this is something they have to be careful of (how many alpha blends are visible between different layers).

You are able to paint anything you want in the CryEngine; there is almost no restriction other than vram and what I mentioned above, but these two constraints apply to all engines.
 
Razor1 said:
You are able to paint anything you want in the CryEngine; there is almost no restriction other than vram and what I mentioned above, but these two constraints apply to all engines.

From the interviews, a 32,000 x 32,000 MegaTexture only takes 8 MB in vram. Also, the 4096^2 restriction in FC terrains comes from the hardware (most hardware is restricted to 2048^2, actually), which is completely bypassed with MT.
 
Mordenkainen said:
From the interviews, a 32,000 x 32,000 MegaTexture only takes 8 MB in vram. Also, the 4096^2 restriction in FC terrains comes from the hardware (most hardware is restricted to 2048^2, actually), which is completely bypassed with MT.
Hmm...I really wonder what they're actually doing, then. Because a 32kx32k texture with 32 bits per texel would require 4GB.
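A quick back-of-the-envelope check of the numbers being thrown around here (my arithmetic, treating "32k" as 32768 and assuming uncompressed 32-bit texels):

```cpp
#include <cstdio>

int main() {
    // Full, uncompressed 32k x 32k texture at 32 bits (4 bytes) per texel.
    const double texels    = 32768.0 * 32768.0;       // ~1.07 billion texels
    const double fullBytes = texels * 4.0;            // 4 GiB uncompressed
    printf("full texture: %.1f GiB\n",
           fullBytes / (1024.0 * 1024.0 * 1024.0));

    // The ~8 MB figure quoted above can only be a resident budget, i.e. the
    // small working set actually kept in vram at any one time.
    const double residentBytes = 8.0 * 1024.0 * 1024.0;
    printf("8 MB is %.2f%% of the full texture\n",
           100.0 * residentBytes / fullBytes);
    return 0;
}
```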
 
Chalnoth said:
Hmm...I really wonder what they're actually doing, then. Because a 32kx32k texture with 32 bits per texel would require 4GB.
I guess they keep in vram only the necessary mipmaps for each tile of the texture.
EDIT: I mean miplevels, of course. :)
 
Mate Kovacs said:
I guess they keep in vram only the necessary mipmaps for each tile of the texture.
EDIT: I mean miplevels, of course. :)
Doesn't that sort of defeat the purpose of having a 32k x 32k texture in the first place?
 
Mordenkainen did a great write-up of what he found from Doom 3's MegaTexture stuff that hadn't been ripped out (scroll down about two-thirds of the page):

http://www.doom3world.org/phpbb2/viewtopic.php?t=10673&start=20


Razor1 said:
If you take a 4096x4096 terrain and have 1024x1024 tileable textures (each texture has its own bump map and lightmaps), you end up with a hell of a lot more than 32,000 x 32,000 pixels.

You also end up with very small texture features and a blocky overall pattern. I notice that while tiling is never really noticeable in Far Cry, the terrain is extremely blurry further away. Since there wasn't anything better, that was fine at the time. Now with MegaTexture, however, you can do a proper texture over everything without the limits that using tiles places on it. This is most noticeable in the latest Quake Wars magazine screens, where there are big rocky mountains off in the distance with a clearly detailed, bumpy, rocky look. This isn't possible with the CryEngine method.
 
Chalnoth said:
Hmm...I really wonder what they're actually doing, then. Because a 32kx32k texture with 32 bits per texel would require 4GB.

Yup. And a MegaTexture based off a 32k x 32k .TGA is actually closer to 6 GB, because the process of creating them adds quite a bit of information to the resulting .MEGA file. If you check how the engine loads textures, you see that an MT isn't actually loaded in its entirety (nor could it be - as you said, you'd need gigabytes of RAM to do that - even if the 4096 x 4096 limit could be avoided); instead, parts of it are loaded as four discrete MT levels. From my testing, each of these is a 512 x 512 texture taking 1.3 MB of vram (I'm guessing 512^2 * 4 bytes + 300 KB, the latter probably being the information added when creating the MT).

Also: http://www.bluesnews.com/cgi-bin/finger.pl?id=1&time=20000308010919

JC .plan said:
Given a fairly aggressive six texture passes over the entire screen,
that equates to needing twice as many texels as pixels. At 1024x768
resolution, well under two million texels will be referenced, no matter
what the finest level of detail is. This is the worst case, assuming
completely unique texturing with no repeating.

Emphasis mine, one of the features of MT.
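The arithmetic behind that .plan paragraph, for anyone who wants to see the numbers (mine, not Carmack's, and again using 32768 for "32k"):

```cpp
#include <cstdio>

int main() {
    // 1024x768 screen, and the .plan's "twice as many texels as pixels".
    const double pixels           = 1024.0 * 768.0;     // 786,432 pixels
    const double texelsReferenced = 2.0 * pixels;       // ~1.57 million texels
    printf("worst-case texels referenced per frame: %.2f million\n",
           texelsReferenced / 1e6);                     // "well under two million"

    // Compare against the ~1.07 billion texels in a full 32k x 32k texture:
    // only a tiny fraction of the megatexture is ever needed for one view.
    const double totalTexels = 32768.0 * 32768.0;
    printf("fraction of the megatexture referenced: %.3f%%\n",
           100.0 * texelsReferenced / totalTexels);
    return 0;
}
```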
 
Mordenkainen said:
If you check how the engine loads textures, you see that an MT isn't actually loaded in its entirety
I haven't been following this too closely (so excuse my apparent ignorance) but you're talking about that iforgetthename command in the D3 engine, right?
 
Reverend said:
I haven't been following this too closely (so excuse my apparent ignorance) but you're talking about that iforgetthename command in the D3 engine, right?

I'm referring to "listImages", yes.
 
Well, that cmd wasn't the one I was trying to remember, given what this thread is about. Just checked, and I see that you've been messing with JC's experimental huge-texture cmd in D3.
 
Reverend said:
Well, that cmd wasn't the one I was trying to remember, given what this thread is about. Just checked, and I see that you've been messing with JC's experimental huge-texture cmd in D3.

"makemegatexture"?
 
Mordenkainen said:
From my testing, each of these is a 512 x 512 texture taking 1.3 MB of vram (I'm guessing 512^2 * 4 bytes + 300 KB, the latter probably being the information added when creating the MT).
Ah, I think I understand now. I bet, though, that the 300 KB is just the MIP map tower. The added material information may not ever go to the video card. In fact, one simple way to apply the added material information would just be to encode it into the alpha channel, which would, on a 32-bit texture, end up supporting 256 different types of materials.

Now, here's what I'm going to propose is going on. Let's imagine that you store the full 32k x 32k megatexture in 256 x 256 blocks. It would be relatively easy to pull out the nearest four blocks and pack them into a single 512 x 512 texture. Then, just do the same exact thing for the next three MIP map levels (16k x 16k, 8k x 8k, and 4k x 4k), always ensuring that the nearest four 256 x 256 blocks are stored in video memory as a 512 x 512 texture each.
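A sketch of what that block selection could look like; the block size and the "nearest four blocks per level" rule follow the proposal above, while the function names and everything else are invented for illustration:

```cpp
#include <algorithm>
#include <cstdio>

// Four 256x256 blocks per MIP level, packed into one 512x512 resident texture.
struct BlockSet {
    int level;
    int blockX[4], blockY[4];   // the 2x2 neighbourhood nearest the focus point
};

// For MIP level `level` (0 = 32k x 32k, 1 = 16k x 16k, ...), pick the four
// 256x256 blocks closest to the focus point (u, v) in [0,1) texture space.
BlockSet selectResidentBlocks(float u, float v, int level)
{
    const int levelSize    = 32768 >> level;     // texels on a side at this level
    const int blocksAcross = levelSize / 256;    // 256x256 blocks per row

    // Snap to the 2x2 group of blocks whose shared corner is nearest (u, v).
    int x0 = std::clamp(static_cast<int>(u * blocksAcross - 0.5f), 0, blocksAcross - 2);
    int y0 = std::clamp(static_cast<int>(v * blocksAcross - 0.5f), 0, blocksAcross - 2);

    BlockSet set;
    set.level = level;
    int i = 0;
    for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx, ++i) {
            set.blockX[i] = x0 + dx;             // these four blocks get packed
            set.blockY[i] = y0 + dy;             // into one 512x512 texture
        }
    return set;
}

int main() {
    // Keep four levels resident (32k, 16k, 8k, 4k), one 512x512 texture each.
    for (int level = 0; level < 4; ++level) {
        BlockSet s = selectResidentBlocks(0.37f, 0.61f, level);
        printf("level %d: blocks (%d,%d) .. (%d,%d)\n", s.level,
               s.blockX[0], s.blockY[0], s.blockX[3], s.blockY[3]);
    }
    return 0;
}
```

Four resident 512 x 512 textures at roughly 1.3 MB each comes to about 5 MB, which is at least in the same ballpark as the vram figures mentioned earlier in the thread.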

Then, on the system side, you could encode into an unused channel (say, the alpha channel) material information. For effects calculated by the CPU, this information would be trivial to use.

But a really good implementation would also make use of this same information for telling the GPU what pixel shader to use. With good dynamic branching, one could just write one obscenely-long shader, with 8 if statements at the beginning of the shader to select the proper material.
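As a toy version of that last idea, here is a CPU-side mock of packing a material id into the alpha byte and dispatching on it. A real implementation would do the decode and branch in the pixel shader; the material names and values here are made up:

```cpp
#include <cstdint>
#include <cstdio>

// Pack a material index into the alpha byte of an RGBA texel.
uint32_t packTexel(uint8_t r, uint8_t g, uint8_t b, uint8_t materialId) {
    return (uint32_t(materialId) << 24) | (uint32_t(b) << 16)
         | (uint32_t(g) << 8)  |  uint32_t(r);
}

float shadeDirt(uint32_t)  { return 0.2f; }   // stand-ins for real per-material
float shadeRock(uint32_t)  { return 0.5f; }   // shading paths
float shadeMetal(uint32_t) { return 0.9f; }

float shadePixel(uint32_t texel) {
    int materialId = (texel >> 24) & 0xFF;     // up to 256 materials in alpha

    // In a shader with good dynamic branching this selection could be a short
    // cascade of if statements over the id's bits; a plain switch shows the
    // same dispatch on the CPU.
    switch (materialId) {
        case 0:  return shadeDirt(texel);
        case 1:  return shadeRock(texel);
        case 2:  return shadeMetal(texel);
        default: return shadeDirt(texel);      // unknown ids fall back to dirt
    }
}

int main() {
    uint32_t texel = packTexel(120, 96, 64, /*materialId=*/1);   // a rock texel
    printf("shaded value: %.2f\n", shadePixel(texel));
    return 0;
}
```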
 