John Carmack on MegaTexture

That was a very nice read. I appreciate you linking it.

I liked the picture on the second page as well; the conifers looked nice for some reason.
 
Yes, ditto.

JC even with some un-fished-for XB360 love:

I am having a really good time working on the Xbox 360 right now, graphic technology-wise.
 
32k x 32k texture, eh? Who's gonna paint it? ;)
Realistically you'd have those textures generated semi-automatically - a little bit of grass here, a bit more there, and a lot of mud over there - with the rest filled in automatically. As long as the textures are simple enough, the approach of layering multiple textures on top of each other and/or using procedural pixel shaders seems more logical to me.
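
Something like that could be as simple as compositing a few tiling layers through painted coverage masks when the megatexture gets baked. A rough sketch - every name here is made up, it has nothing to do with id's actual tools:

```cpp
#include <cstdint>
#include <vector>

struct RGBA8 { uint8_t r, g, b, a; };

// One tiling source layer (grass, mud, ...) plus a painted coverage
// mask saying how strongly it shows through at each output texel.
struct Layer {
    std::vector<RGBA8>   texels;  // small tiling texture, tileW x tileH
    std::vector<uint8_t> mask;    // painted coverage per output texel, 0..255
    int tileW, tileH;
};

// Bake one output texel by blending the layers bottom-up through
// their masks -- the "a little grass here, more mud there" pass.
RGBA8 bakeTexel(const std::vector<Layer>& layers, int x, int y, int outW)
{
    float r = 0, g = 0, b = 0;
    for (const Layer& l : layers) {
        const RGBA8& s = l.texels[(y % l.tileH) * l.tileW + (x % l.tileW)];
        const float  w = l.mask[y * outW + x] / 255.0f;
        r = r * (1 - w) + s.r * w;   // standard "over" blend per channel
        g = g * (1 - w) + s.g * w;
        b = b * (1 - w) + s.b * w;
    }
    return { uint8_t(r), uint8_t(g), uint8_t(b), 255 };
}
```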

Similarly, a tool could let you deal with textures of any size and automatically chop them up into parts; polygons that span more than one part could use a shader for "merging" (I believe Quake Wars will do something similar). That could be useful for limited things like the brick wall of a house, and it's as far as I see a use for "painted" megatextures. In the end you won't have unique textures for everything, as one of these megatextures will weigh in at ~4 GB, maybe 1 GB with compression - restrictions have to be made anyway, whether it's streaming the tiles fast enough or having artists paint all this stuff. And if you want more interactivity, like grass on the floor turning to mud after an explosion, or even deformable terrain, a dynamically created surface will be better suited.
(And TBH, I'd rather see interactivity than unique but prebaked dirt.)
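
For the record, the ~4 GB figure is just arithmetic - 32768 x 32768 texels at 4 bytes each - and a full mip chain adds another third on top. Quick check:

```cpp
#include <cstdio>

int main()
{
    const long long dim   = 32768;            // 32k x 32k megatexture
    const long long bpp   = 4;                // uncompressed RGBA8
    const long long base  = dim * dim * bpp;  // top mip level alone
    const long long chain = base * 4 / 3;     // full mip chain adds ~1/3
    std::printf("base: %lld MiB, with mips: %lld MiB\n",
                base >> 20, chain >> 20);     // 4096 MiB / 5461 MiB
    return 0;
}
```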
 
Npl said:
32k x 32k texture, eh? Who's gonna paint it? ;)
Realistically you'd have those textures generated semi-automatically - a little bit of grass here, a bit more there, and a lot of mud over there - with the rest filled in automatically. As long as the textures are simple enough, the approach of layering multiple textures on top of each other and/or using procedural pixel shaders seems more logical to me.
That's what's said in the interview: procedural generation, then hand-touched by artists...

Npl said:
Similarly, a tool could let you deal with textures of any size and automatically chop them up into parts; polygons that span more than one part could use a shader for "merging" (I believe Quake Wars will do something similar). That could be useful for limited things like the brick wall of a house, and it's as far as I see a use for "painted" megatextures. In the end you won't have unique textures for everything, as one of these megatextures will weigh in at ~4 GB, maybe 1 GB with compression - restrictions have to be made anyway, whether it's streaming the tiles fast enough or having artists paint all this stuff. And if you want more interactivity, like grass on the floor turning to mud after an explosion, or even deformable terrain, a dynamically created surface will be better suited.
(And TBH, I'd rather see interactivity than unique but prebaked dirt.)
I'm still wondering how it's done; from the file it seems rather obvious that the texture is indeed tiled & mipmapped.
From the interview it seems it's "just" virtual texturing...
Definitely something anyone would want to implement in his/her/its engine.
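
If it really is virtual texturing, the core is presumably just a page-table indirection: chop the huge texture into pages, keep the hot ones in a physical cache texture, and translate virtual texel addresses through a small table. A minimal CPU-side sketch of that lookup (the page size and all the structures are my guesses, not anything id has described):

```cpp
#include <cstdint>

const int PAGE   = 128;            // page size in texels (assumed)
const int VPAGES = 32768 / PAGE;   // 256 x 256 virtual page grid

struct PageEntry {
    bool     resident;             // is this page in the cache right now?
    uint16_t cacheX, cacheY;       // where it sits in the physical texture
};

PageEntry pageTable[VPAGES][VPAGES];   // the indirection table

// Translate a virtual texel address into the physical cache texture.
// A real implementation would fall back to a coarser mip level when
// the page isn't resident; here we just report the miss.
bool translate(int vx, int vy, int& px, int& py)
{
    const PageEntry& e = pageTable[vy / PAGE][vx / PAGE];
    if (!e.resident)
        return false;                     // page fault: queue a tile load
    px = e.cacheX * PAGE + vx % PAGE;     // page origin + offset within page
    py = e.cacheY * PAGE + vy % PAGE;
    return true;
}
```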
 
Ingenu said:
I'm still wondering how it's done; from the file it seems rather obvious that the texture is indeed tiled & mipmapped.
From the interview it seems it's "just" virtual texturing...
Definitely something anyone would want to implement in his/her/its engine.
Have you seen this thread?
Chalnoth's guess might turn out to be pretty close, IMHO. :)
 
If you want to see a game with completely unique-textured levels, look no further than Baldur's Gate: Dark Alliance.

http://media.ps2.ign.com/media/015/015580/imgs_1.html

However, BGDA's unique texturing only works because the game has a fixed, top-down camera, which allows it to stream level geometry and textures in tiny chunks seconds before they become visible. JC is going for unique texturing with a free camera, which is much, much harder.
 
I do think unique texturing is the key for the coming generation.


Isn't that the complete opposite of what everybody else thinks? It's been all shaders/physics/AI for the past couple of months; nobody is talking about texturing.
 
Need to create an account to read it?
That sucks...

Ah... no, it just wants cookies though.
 
In the video interview I saw, Carmack said that MegaTexture wasn't that big a deal, only affecting about 10% of the overall game graphics. Just a piece of the puzzle.
 
Mordenkainen said:
New interview with more details WRT compression and the tools used to create megatextures. (Plus some other generic stuff about consoles, etc.)

http://spong.com//detail/editorial.jsp?eid=10109375&&cb=0.3115325894789901&&cb=0.5813667167945591

So DCT and BTC? I can't see that being mapped onto the default texture compression standard very quickly. Presumably they just decompress a small portion of the "mega-texture" and then put it into graphics memory uncompressed. It doesn't sound very bandwidth-friendly.
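
I'd imagine the path looks roughly like this - decode a tile on the CPU, upload it raw (plain OpenGL here, with a dummy decoder standing in for whatever they actually use):

```cpp
#include <GL/gl.h>
#include <cstdint>
#include <vector>

// Stand-in for the engine's real DCT/BTC decoder; a real one would
// inverse-DCT the data and expand the block-truncation part.
std::vector<uint8_t> decodeTile(const uint8_t*, size_t, int w, int h)
{
    return std::vector<uint8_t>(size_t(w) * h * 4);  // dummy: black tile
}

// Decode one tile of the mega-texture and splat it, uncompressed,
// into the right region of an already-allocated GPU texture.
void uploadTile(GLuint tex, int x, int y, int w, int h,
                const uint8_t* packed, size_t packedBytes)
{
    std::vector<uint8_t> rgba = decodeTile(packed, packedBytes, w, h);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
}
```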
 
Simon F said:
So DCT and BTC? I can't see that being mapped onto the default texture compression standard very quickly. Presumably they just decompress a small portion of the "mega-texture" and then put it into graphics memory uncompressed. It doesn't sound very bandwidth-friendly.

Any suggestion for something more bandwidth-friendly, S3TC?
 
Ingenu said:
Any suggestion for something more bandwidth-friendly, S3TC?
I would think that it could, at least, replace the BTC part, with DCT possibly used to further compress the indices <shrug>
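
For reference, this is the standard DXT1/S3TC block layout - the DCT-over-the-indices part is pure speculation on my part:

```cpp
#include <cstdint>

// A standard DXT1/S3TC block: 16 texels packed into 8 bytes (4 bpp).
struct Dxt1Block {
    uint16_t color0;    // endpoint colour 0, RGB565
    uint16_t color1;    // endpoint colour 1, RGB565
    uint32_t indices;   // 16 x 2-bit selectors between the endpoints
};

// The endpoint pair is the BTC-like part; the 2-bit selector field is
// what a DCT (or any entropy coder) could plausibly squeeze further,
// since neighbouring selectors correlate strongly in smooth regions.
```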
 
I wonder how he's implementing "similar" functionality for all geometry and fixing warping/blurriness on steep slopes?

Would this be a good idea at all:

  • consider geometry at a fairly high level (i.e. an instance of a model, a chunk of terrain, a single building, etc.). UV unwrap it to create a single texture for each chunk of geometry
  • place the unwrapped textures onto mega-textures (it doesn't matter how they are arranged)
  • each big 'chunk' keeps a reference to its section of the mega-texture (see the sketch below)
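
A rough sketch of that bookkeeping - every name here is invented:

```cpp
#include <vector>

// Where one chunk's unwrapped texture landed inside the mega-texture.
struct AtlasRect {
    int x, y, w, h;   // texel rectangle within the mega-texture
};

struct Vertex { float u, v; /* position etc. omitted */ };

// Remap a chunk's 0..1 unwrap UVs into normalised mega-texture space
// once its rectangle has been placed.
void remapChunkUVs(std::vector<Vertex>& verts, const AtlasRect& r,
                   float megaW, float megaH)
{
    for (Vertex& v : verts) {
        v.u = (r.x + v.u * r.w) / megaW;
        v.v = (r.y + v.v * r.h) / megaH;
    }
}
```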

For models, I guess you'd have one or more separate mega-textures into which you place as many copies of a standard "base" texture as there are likely to be model instances. Each model instance could then effectively be detailed differently.

Has there been any mention of compressing and streaming updates to a mega-texture back to disk (so that things like tire tracks, bullet holes, footprints in the snow, bloody shirts and so on can actually persist without taking up crazy amounts of RAM)?
 
psurge said:
Has there been any mention of compressing and streaming updates to a mega-texture back to disk (so that things like tire tracks, bullet holes, footprints in the snow, bloody shirts and so on can actually persist without taking up crazy amounts of RAM)?

Not AFAIK. But that would require making a copy of the megatexture at load time so that the original is never overwritten. On my SATA drive, copying a 500 MB file takes at least 20 seconds, so it could make level loads too long. Also, while I know the compression ratio of each megatexture can be set by the level designer, I don't know how feasible it is to compress one in real time. But like you said, this would be a great feature that would further reduce memory usage.
 
Mordenkainen -
I would proceed as follows: when installing, write out a megatexture swap-file containing 500 MB (say) of zeroes. This would just be reused every time you play, so you wouldn't have such a huge load-time penalty (nor would you run into the situation where someone no longer has enough free disk space to play). In the engine you'd keep track of whether or not a given page of the megatexture has been modified, and hit the swap-file or the original megatexture as appropriate.
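
In code the read/write path would be something like this (a sketch only; the page size and file layout are invented):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

const size_t PAGE_BYTES = 64 * 1024;   // assumed on-disk page size

struct MegaTextureStore {
    std::FILE* original;      // shipped, read-only megatexture
    std::FILE* swap;          // preallocated, zero-filled swap-file
    std::vector<bool> dirty;  // one flag per page: ever modified?

    // Read a page, preferring the swap-file copy if the page has ever
    // been written (tyre tracks, bullet holes, footprints, ...).
    void readPage(size_t page, uint8_t* out) {
        std::FILE* src = dirty[page] ? swap : original;
        std::fseek(src, long(page * PAGE_BYTES), SEEK_SET);
        std::fread(out, 1, PAGE_BYTES, src);
    }

    // Write a modified page to the swap-file and flag it dirty.
    void writePage(size_t page, const uint8_t* data) {
        std::fseek(swap, long(page * PAGE_BYTES), SEEK_SET);
        std::fwrite(data, 1, PAGE_BYTES, swap);
        dirty[page] = true;
    }
};
```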

I have no idea how much of a problem compression would be performance-wise... Simon F, would you say it's feasible (on a high-end PC)?

Another problem is what you'd do on game save/reload... it doesn't seem sane to keep around all the modifications :smile: Maybe just those in the immediate vicinity of the player?
 
Mordenkainen said:
Not AFAIK. But that would require making a copy of the megatexture at load time so that the original is never overwritten. On my SATA drive, copying a 500 MB file takes at least 20 seconds, so it could make level loads too long. Also, while I know the compression ratio of each megatexture can be set by the level designer, I don't know how feasible it is to compress one in real time. But like you said, this would be a great feature that would further reduce memory usage.
I assume you are speaking from experience with the Doom engine. Have you asked Carmack about improvements made since (I am pretty sure there have been improvements post-Doom, at least in this aspect of the MT tech)?
 
psurge said:
I have no idea how much of a problem compression would be performance-wise... Simon F, would you say it's feasible (on a high-end PC)?
Are you asking about the cost of "transcoding" (well, full decompression and recompression, really) from one compression system (e.g. DCT + BTC, as suggested in the posted link) to DXTC? Doing it well is costly, but you can certainly do a cheap-and-cheerful version; it would seem a shame to compromise the quality, though.

I personally feel it would be better to build an additional, CPU-only compression scheme on top of the hardware-supported scheme, probably some sort of transform + entropy encoding method.
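
i.e. store the hardware-format blocks losslessly entropy-coded on disk, inflate them on the CPU, and hand the still-DXT-compressed blocks straight to the GPU. Sketched with zlib as the stand-in entropy stage (zlib is my choice for the example, not anything id has said they use):

```cpp
#include <zlib.h>
#include <cstdint>
#include <vector>

// Inflate an entropy-coded run of DXT blocks back to the raw 8-byte
// blocks the GPU understands. The result is still DXT-compressed, so
// the hardware scheme keeps its bandwidth advantage after upload.
std::vector<uint8_t> inflateDxtTile(const uint8_t* packed, size_t packedBytes,
                                    size_t rawBytes)
{
    std::vector<uint8_t> raw(rawBytes);
    uLongf outLen = rawBytes;
    if (uncompress(raw.data(), &outLen, packed, packedBytes) != Z_OK)
        raw.clear();   // corrupt tile; let the caller decide what to do
    return raw;
}
// The decoded buffer would then go straight to something like
// glCompressedTexSubImage2D with a DXT1 internal format.
```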
 