John Carmack PCGamer interview...next engine to have "unique texturing on everything"

Chalnoth said:
Ah, I think I understand now. I bet, though, that the 300kb is just the MIP map tower. The added material information may not ever go to the video card. In fact, one simple way to apply the added material information would just be to encode it into the alpha channel, which would, on a 32-bit texture, end up supporting 256 different types of materials.

Yes. What I have doubts about is normal maps/specular map support. From the screenshots/trailer in Enemy Territory: QW MegaTexture terrains do have these but the partial support in D3/Q4 doesn't explicitly mention them. An alternate theory of mine is that the alpha channel stores a heightmap (which could probably also be used as a rough visibility implementation).

Now, here's what I propose is going on. Let's imagine that you store the full 32k x 32k megatexture in 256 x 256 blocks. It would be relatively easy to pull out the nearest four blocks and pack them into a single 512 x 512 texture. Then, just do the same exact thing for the next three MIP map levels (16k x 16k, 8k x 8k, and 4k x 4k), always ensuring that the nearest four 256 x 256 blocks are stored in video memory as a 512 x 512 texture each.
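The packing scheme described above can be sketched in a few lines. This is purely illustrative of the proposal, not anything from id's actual implementation; the block size, level count, and clamping behaviour are all assumptions here.

```python
# Sketch of the proposed block-paging scheme: for each of four mip levels
# of a 32k x 32k megatexture stored as 256 x 256 blocks, find the 2x2
# group of blocks nearest the viewer's (u, v) position. All names and
# constants are illustrative assumptions.

BLOCK = 256    # block size in texels (assumed)
LEVELS = 4     # 32k, 16k, 8k, 4k levels, as in the post above

def blocks_to_resident(u, v, full_size=32768):
    """Return (level, bx, by) indices of the four blocks per level
    that would be packed into one 512 x 512 resident texture."""
    resident = []
    size = full_size
    for level in range(LEVELS):
        blocks_per_side = size // BLOCK
        # block pair straddling (u, v), clamped so a full 2x2 group fits
        bx = min(max(int(u * blocks_per_side - 0.5), 0), blocks_per_side - 2)
        by = min(max(int(v * blocks_per_side - 0.5), 0), blocks_per_side - 2)
        for dy in range(2):
            for dx in range(2):
                resident.append((level, bx + dx, by + dy))
        size //= 2
    return resident

pages = blocks_to_resident(0.5, 0.5)
# 4 levels x 4 blocks = 16 resident 256x256 blocks in total
```

Note how cheap the working set is: sixteen 256 x 256 blocks regardless of how large the source texture grows.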

IIRC makeMegaTexture creates 128 x 128 blocks (and lower). BTW, recent Quake Wars previews talk about the game using 8 MB of VRAM, instead of the 2 MB used in the MT implementation in D3/Q4; the difference between the two could account for the presence of normal maps.
 
I don't think the 32k x 32k texture is used as an actual diffuse texture (i.e. you don't stick it on a polygon); it may only be 2 bits. Also, I imagine you could compress it well.
It's more of a starting point for the engine to create an RGB texture from, or geometry/vegetation, etc. I.e. it's just a nice big array of data.
 
I was just wondering: this technology is already present, but JC is now further refining it for his next engine. Is this right? If so, it is not as ambitious as his previous technological jumps in software.

Edit: Or that shows that the new engine is close to completion and is an evolution of the Doom3 engine.
 
I'm sure JC also refers to the technique used by "not yet announced" product :) And since he prefers XBOX 360 over PS3 you shouldn't have problems figuring that one out :D
 
JC .plan said:
Given a fairly aggressive six texture passes over the entire screen,
that equates to needing twice as many texels as pixels. At 1024x768
resolution, well under two million texels will be referenced, no matter
what the finest level of detail is. This is the worst case, assuming
completely unique texturing with no repeating.
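The arithmetic in the quote checks out, as a quick sanity check:

```python
# Back-of-the-envelope check of the .plan numbers quoted above:
# at 1024x768, "twice as many texels as pixels" is well under two million.
pixels = 1024 * 768        # 786,432 pixels on screen
texels = 2 * pixels        # 1,572,864 texels referenced, worst case
assert texels < 2_000_000  # "well under two million texels"
```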

I wonder how much difference this will make between the R580 and the G71, given that they have a different number of TMUs? I guess that Nvidia will win this benchmark.
 
zed said:
I don't think the 32k x 32k texture is used as an actual diffuse texture (i.e. you don't stick it on a polygon); it may only be 2 bits.

Right now, MT is indeed the diffuse texture you "stick on a polygon".

Tahir2 said:
I was just wondering: this technology is already present, but JC is now further refining it for his next engine. Is this right? If so, it is not as ambitious as his previous technological jumps in software.

Edit: Or that shows that the new engine is close to completion and is an evolution of the Doom3 engine.

Not really. The MT stuff present in the upcoming Enemy Territory: Quake Wars is used for terrains only. JC's next engine will use MT for everything, as in arbitrary geometry. So while in QW every pixel in the terrain will be unique, the man-made structures like factories, bridges, etc. still use "regular texturing".

In JC's next engine you could have a mile long wall with a "brick" texture where every single brick is different from the others, while in other games you'd see a "mossy brick", or a "broken brick" that is actually repeated a couple times along the wall. No more need to have decals to hide repetition, no more trying to make textures "fit" your polygons/world, etc.
 
Tahir2 said:
I was just wondering: this technology is already present, but JC is now further refining it for his next engine. Is this right? If so, it is not as ambitious as his previous technological jumps in software.

Edit: Or that shows that the new engine is close to completion and is an evolution of the Doom3 engine.
IMO, I tend to view all past accomplishments by JC as really clever performance tricks and optimizations. As far (and as little) as I know about MT, I think it will be his most "ambitious" effort to improve graphics quality in his games, if you ignore the dynamic lights in the original Quake and the "shaders" in Q3. He'd been taking advantage of what 3D hardware offered thus far; MT may be quite a departure, yes?
 
BTW (and OT, depending on how you see it), is that PCGamer interview available online? I can't seem to find it on their website. If it's not available online, then even though Gabrobot appears to have cut-and-pasted from the author of that thread on Rage3D, I'm not sure this is what we should be seeing in a B3D forum/thread.

There was no link to PCGamer provided, of course.

Sorry for spoiling the fun, but I still get these funny feelings when it comes to this kind of stuff. B3D staff, is this okay? (And that is basically what's important.)
 
Mordenkainen: I'll fully agree with you (and JC's excitement) that this is a rather kickass technology, even though I'd like some more details on how it is really implemented, but I do question whether this is what we want moving forward.

The goal of any decent programmer should, nowadays, be to lower the art budgets of the game while increasing the overall visual quality at the same time. I don't see how MT does this. Sure, there'll always be $50M+ game projects (and tbh, I doubt the next JC game has such a budget), but I tend to feel that this is not the direction the industry is going. Personally, I'm more impressed by ideas such as Will Wright's "Spore" than by things that complicate proper cost management. I'm probably the only one here who cares about that, but so be it.

Uttar
 
We need more "John Carmack said..." threads. Really. Honest.

MegaTexture is just a dynamic method of streaming texture data. The 32K x 32K texture is tiled in smaller parts that are, in their turn, dynamically loaded into the VRAM. Revolutionary, indeed, and deserves all the fuss made about it...
Reverend said:
Sorry for spoiling the fun, but I still get these funny feelings when it comes to this kind of stuff. B3D staff, is this okay? (And that is basically what's important.)
What has been posted in the forum is not a copy/paste of the full PC Gamer interview, only tidbits, which is tolerated.

The link to the original source contains the full transcription of the interview, which is against the forum's rules. Thus, I edited that part out of Gabrobot's post.
 
MegaTexture isn't a new idea (though I'm sure it's got some clever stuff beyond the basics in it); Purple Heart had 16K x 16K terrain textures with 1K x 1K geometry resolution back in 2003... and that was just based on Ulrich's Chunked LOD stuff.
I posted some shots back in some of my early posts

http://www.beyond3d.com/forum/showpost.php?p=111950&postcount=1

Essentially the 'trick', IIRC, was a mipmapped, streaming engine. It took the Unreal multi-pass terrain (artists painted as many levels as they liked, blending multiple textures, etc.), then a pre-compute pass rendered the high-res version (baking in shadows as well, etc.). The pre-compute pass was interesting because it used more RAM than Win32 allowed, so I had to adapt some out-of-core image techniques to make it work.

The data was stored on disk in lots of multi-resolution DXT1 chunks, and a cache was kept in memory (8 MB, if I remember). A background system streamed everything in. Very fast, and it effectively gave you unlimited texture and geometry resolution for the terrain.

The main issue is more disk size... because it was designed for maximum speed, the DXT1 chunks on disk were uncompressed (the terrain textures were measured in the hundreds of MB)... These days I would add a second compression layer on top and use a second processor to decompress, to reduce disk bandwidth.
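The fixed-budget cache described above can be sketched roughly like this. This is a minimal illustration, not DeanoC's actual code: the chunk size, key scheme, and LRU eviction policy are all assumptions.

```python
# Minimal sketch of a fixed-budget LRU chunk cache: multi-resolution DXT1
# chunks live on disk, an ~8 MB in-memory cache holds the hot set, and
# the least-recently-used chunks are evicted when the budget is exceeded.
from collections import OrderedDict

CHUNK_BYTES = 128 * 128 // 2   # DXT1 is 4 bits/texel: 8 KB per 128x128 chunk
CACHE_BYTES = 8 * 1024 * 1024  # the ~8 MB budget mentioned above

class ChunkCache:
    def __init__(self, load_chunk):
        self.load_chunk = load_chunk   # e.g. a background disk read (stubbed)
        self.chunks = OrderedDict()    # key -> chunk bytes, kept in LRU order

    def get(self, key):
        if key in self.chunks:
            self.chunks.move_to_end(key)   # mark as most recently used
            return self.chunks[key]
        data = self.load_chunk(key)        # cache miss: stream from disk
        self.chunks[key] = data
        while len(self.chunks) * CHUNK_BYTES > CACHE_BYTES:
            self.chunks.popitem(last=False)   # evict least recently used
        return data
```

With an 8 KB chunk and an 8 MB budget, the cache holds at most 1024 chunks, so the resident footprint stays constant no matter how large the on-disk texture is.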
 
Uttar said:
The goal of any decent programmer should, nowadays, be to lower the art budgets of the game while increasing the overall visual quality at the same time. I don't see how MT does this.
Well, it does help in that it frees the artists from having to resort to tricks to apply detail: they just paint it on. This is a great technology for removing the artists one more step from the underlying graphics programming.
 
OK, I had a bit more of a read; basically it's just paged-in textures. This may be of interest:
http://www.graphicshardware.org/previous/www_1999/presentations/v-textures/index.htm

Also, 32k x 32k, while it seems a lot, is nowhere near enough for a flight simulation.
In JC's next engine you could have a mile long wall with a "brick" texture where every single brick is different from the others,
Will not happen. Artists don't have the time to texture each brick individually. They will use some template to create thousands of different bricks and then maybe tweak these.
Of course, the obvious point is to cut out the middle man and just provide the templates.

Also, this technique is not going to solve the problem of things looking worse the closer you get to them. A better solution is to generate the brick 100% procedurally in-game; then, even when you look at the brick from 1 cm away, it's as defined as from 1 meter. With megatextures, at 1 cm it's just going to be a washed-out mess.
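The template idea above can be made concrete with a toy example: derive each brick's variation from its grid position with a hash, so every brick differs without storing any texture data. The hash constants are arbitrary; this is a sketch of the general approach, not any shipping technique.

```python
# Toy procedural brick variation: a deterministic hash of the brick's
# grid coordinates yields a per-brick shade, so an arbitrarily long wall
# needs zero stored texels. Constants are arbitrary illustrative choices.
def brick_shade(bx, by, seed=1234):
    """Deterministic pseudo-random grey value for brick (bx, by)."""
    h = (bx * 73856093) ^ (by * 19349663) ^ seed
    h = (h ^ (h >> 13)) * 0x5bd1e995 & 0xFFFFFFFF   # cheap integer mix
    return 100 + (h % 100)   # shades in the 100..199 range

row = [brick_shade(x, 0) for x in range(4)]  # four neighbouring bricks
```

Because the shade is a pure function of position, it stays sharp at any viewing distance, which is exactly the advantage zed is pointing at.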
 
zed, JC has specifically stated that procedural texturing has always been mentioned as the next big thing for consoles and other generational hardware, but it never works:

Every generation, someone comes up and says something like "procedural and synthetic textures and geometry are going to be the hot new thing". I've heard it for the last three console generations – it's not been true, and it's not going to be true this generation either. It's because management of massive data-sets is always the better thing to do.

Now I am thinking, looking at it from a higher point of view, that what JC does is program an idea to make games look better and more lifelike, and then he sexifies it. Anything JC says regarding 3D software and hardware development is taken extremely seriously, yet I feel that to an extent JC targets one area at a time and moves on. A case in point would be Doom3, which focused on lighting, while the new engine focuses on MegaTexture (both are clever tricks, as mentioned by Rev).

Whilst I find this interesting, I also think other developers are outdoing JC, for example Crytek, because they focus more on the whizbang(TM) effects and seem to be able to add new code in a modular fashion (e.g. 3Dc and HDR).

Now, that quote above is interesting because JC specifically mentions the three console generations, which would be (1) PS1 + Saturn + N64, (2) Dreamcast + PS2 + Xbox + GameCube, and (3) PS3 + Xbox 360 + Revolution. One might conclude that JC is shifting focus away from OpenGL or PC hardware, but in reality he loves consoles, as the Jaguar version of Doom, which he worked on himself, testifies.

My question now is... is JC really that forward looking anymore or is his time coming to an end too?
 
Chalnoth said:
Well, it does help in that it frees the artists from having to resort to tricks to apply detail: they just paint it on. This is a great technology for removing the artists one more step from the underlying graphics programming.
I hate spoiling the fun of one of the proof-of-concept projects I'm working on, but... "Terrain artists?! What artists? Mappers?! What mappers?" And yeah, I'm serious about this; perhaps I'll release a small part of the results if I get the time in half a year or so. And before anyone says I'm talking about procedural geometry/textures: I'm talking about procedural mapping (with some rather nice twists). And things like MegaTexture just don't mix too well with it, sadly.

Uttar
 
zed said:
Will not happen. Artists don't have the time to texture each brick individually. They will use some template to create thousands of different bricks and then maybe tweak these.
Of course, the obvious point is to cut out the middle man and just provide the templates.
Well, I think the idea is that you either take a tiled texture, or some sort of procedural generation tool to generate the initial megatexture. Then you have total freedom to go in and edit this texture at any location. Imagine, for example, a rust stain creeping down a brick wall below an iron lighting fixture.

And from a gameplay standpoint, Megatexture would have the really neat capability of infinite persistence for decals: you'd just edit the megatexture instead of applying the decal on top of it.
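The persistent-decal idea could look something like this in miniature. Purely illustrative: a real page would be compressed blocks in VRAM, not a Python list, and the blend function is an assumption.

```python
# Illustrative sketch of an "infinitely persistent" decal: instead of
# drawing a decal quad over the wall every frame, write the decal's
# texels into the megatexture page once; it then persists for free.
def stamp_decal(page, decal, x, y, blend=0.5):
    """Alpha-blend a small decal (2D list of grey values) into a
    texture page (2D list of grey values) at offset (x, y), in place."""
    for dy, row in enumerate(decal):
        for dx, texel in enumerate(row):
            old = page[y + dy][x + dx]
            page[y + dy][x + dx] = round(old * (1 - blend) + texel * blend)

page = [[128] * 8 for _ in range(8)]        # a toy 8x8 "page", mid grey
stamp_decal(page, [[0, 0], [0, 0]], 3, 3)   # stamp a 2x2 dark scorch mark
```

After the stamp, the page itself carries the mark; no per-frame decal list has to be maintained or capped.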
 
Tahir2 said:
Case in point would be Doom3 focused on lighting and the new engine focuses on MegaTextures (both are clever tricks as mentioned by Rev).
I'm not sure the new engine focusses on MT; just that we're focussing on MT in his new engine!
 
Uttar said:
Mordenkainen: I'll fully agree with you (and JC's excitement) that this is a rather kickass technology, even though I'd like some more details on how it is really implemented, but I do question whether this is what we want moving forward.

The goal of any decent programmer should, nowadays, be to lower the art budgets of the game while increasing the overall visual quality at the same time. I don't see how MT does this. Sure, there'll always be $50M+ game projects (and tbh, I doubt the next JC game has such a budget), but I tend to feel that this is not the direction the industry is going. Personally, I'm more impressed by ideas such as Will Wright's "Spore" than by things that complicate proper cost management. I'm probably the only one here who cares about that, but so be it.

Actually, I quite agree with you on your second point. I'm a part-time mapper and I'm a bit apprehensive that MT is going to make mappers' lives more difficult. I mean, right now I build the geometry for a map and just apply the "pre-made textures" that come with the game. With MT, as the "textures" are unique to each level's specific geometry, along with building the geometry I'll have to paint it as well. Not to mention download sizes are going to go way up.

OTOH I still think MT may be a great idea for terrains. IIRC, BF2 uses up to 4096 terrain masks and I still see seams and tiling (even with max settings - turn off the vegetation, etc. and it's plainly obvious). With terrains you still have to more or less "paint" them, even if not using unique texturing so I don't think MT for terrains would increase workloads by much. But again, for man-made structures, etc. I agree it will put an enormous strain on both mappers and artists. It might be worth it for ISVs (will have to wait until JC's next engine's first screenshot I guess) but even so, modders will suffer.

So anyway, right now I also have doubts that (generalised) MT is really where the industry needs to go.

DeanoC: very impressive for the time period. Shame it didn't get through. Didn't FarCry spur some interest in reviving this?
 
Mordenkainen said:
Actually, I quite agree with you on your second point. I'm a part-time mapper and I'm a bit apprehensive that MT is going to make mappers' lives more difficult. I mean, right now I build the geometry for a map and just apply the "pre-made textures" that come with the game. With MT, as the "textures" are unique to each level's specific geometry, along with building the geometry I'll have to paint it as well. Not to mention download sizes are going to go way up.
Ideally, a full Megatexture toolset would come with very robust features for painting these effects on, such as procedural generation of all sorts of textures both as the base set, and for painting on (imagine just painting a roadway onto the terrain, for instance). Of importance for the road, for example, would be some elevation of the geometry that is under the surface of the road, as well as a realistic blending between the surface of the road and the surrounding terrain. For very hilly terrain, you could even flatten the terrain as the road is painted.

So I think that Megatexture as an idea is a potentially really neat thing, but it needs superb content-generation tools to fully come into its own. In other words, I think it's just like advanced pixel and vertex shaders (i.e. SM2/SM3 and future shaders): you can gain some benefit just by working in the assembly, but to truly get good content out there, you need superb tools for generating the shaders with minimal work and no programming.
 