Mega Meshes - Lionhead

I haven't read the paper.
But I very much doubt they do DXT1 compression in the runtime.
Personally I'd compress the compressed textures; if their compression is lossy, I'd just live with whatever format I processed for the compression.

Decompression is usually comparably expensive to a memory copy; the bulk of the cost is usually cache misses.

A lot of games compress all assets on disc and decompress them on read, because it's faster.
 
Read the paper, there's enough additional information to explain the issue I think. It's not that long.
 
ERP said:
But I very much doubt they do DXT1 compression in the runtime.
They use a PTC derivative according to the paper, and transcode it to DXN. I don't see overhead being an issue, considering the whole point of virtualizing your data is that temporal coherency is your friend, so you won't be doing these operations often relative to what's already there (and the disk reads will cost more regardless).

And of the few other approaches that target texture virtualization - NDD had presentations on using PWC compressors and extensive detail on SPE transcoders to DXT, and id packs their stuff with an HD Photo derivative (and, I presume, transcodes to DXT as well).

Personally I'd compress the compressed textures; if their compression is lossy, I'd just live with whatever format I processed for the compression.
Well, a simple approach that works even better is to split the indices and color data of an existing DXT texture and recompress each with your favourite choice of lossless or lossy algorithm.
It won't quite match the compression ratios of PWC, but you have near-zero overhead to get back a DXT texture, and it will beat just doing an LZW (or some other lossless) pass on the DXT texture as is (which is indeed the most common way to pack data on discs).
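A minimal sketch of that split, assuming raw DXT1 data (8-byte blocks: two RGB565 endpoints followed by sixteen 2-bit indices); the function names are mine, and a real packer would also transform each stream before entropy coding (zlib just stands in for your codec of choice):

```python
import zlib

def split_dxt1_streams(dxt1_data: bytes):
    """Split raw DXT1 blocks (8 bytes each: two 16-bit RGB565 endpoints
    plus a 32-bit field of 2-bit indices) into two separate streams, so
    each can be compressed with whatever codec suits its statistics."""
    endpoints, indices = bytearray(), bytearray()
    for off in range(0, len(dxt1_data), 8):
        endpoints += dxt1_data[off:off + 4]    # color0, color1
        indices += dxt1_data[off + 4:off + 8]  # 4x4 grid of selectors
    return bytes(endpoints), bytes(indices)

def merge_dxt1_streams(endpoints: bytes, indices: bytes) -> bytes:
    """Inverse of the split: re-interleave the streams into DXT1 blocks.
    Near-zero overhead, and the result is GPU-ready as-is."""
    out = bytearray()
    for off in range(0, len(endpoints), 4):
        out += endpoints[off:off + 4] + indices[off:off + 4]
    return bytes(out)

def pack(dxt1_data: bytes):
    """Compress each stream independently."""
    e, i = split_dxt1_streams(dxt1_data)
    return zlib.compress(e), zlib.compress(i)
```

Endpoints and indices have very different statistics (endpoints are image-like, indices are noisy), which is why compressing them separately tends to beat one lossless pass over the interleaved blocks.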

But MS is trying to patent this -_-
 
And virtual texturing needs more effective compression because the megatextures are just too big for DVDs, both in terms of disk space and in reading the actual data.
Virtual texture data streaming during runtime is actually considerably more bandwidth-friendly compared to other streaming techniques. Virtual texturing only loads the surfaces that are visible (object backfaces and occluded parts are not loaded), and it loads only the exact mip level used for each area. Object-based streaming systems load whole mip levels, are more conservative (load bigger mips sooner, because they use approximation) and load pixels that are hidden at the current point of view.

A good ballpark estimate would be that virtual texturing requires around 2-4x less runtime texture streaming bandwidth compared to object-based streaming techniques. But virtual texturing requires more seek operations, as the data loading is more fine-grained.
 
Well, a simple approach that works even better is to split the indices and color data of an existing DXT texture and recompress each with your favourite choice of lossless or lossy algorithm.

But MS is trying to patent this -_-

Yes, that's what I suspected they do for their MCT texture compression that is part of the XDK.
I didn't know they were trying to patent it though. That's lame.
 
But without virtual texturing you're reusing the same maps all around the level, whereas games like Rage will not have that luxury. It could counterbalance the higher efficiency to some extent.

It's only thinking loudly on my part though, so of course if you have actual measurements giving you that figure then simply ignore it ;)
 
That's impressive, but does anybody notice the 2D foliage in the first video? It has a really weird effect when turning around a bush...
 
That's impressive, but does anybody notice the 2D foliage in the first video? It has a really weird effect when turning around a bush...
Yup, standard camera-facing 'particles' for the foliage.
Most likely it would have been way too expensive to use meshes for foliage or to do some sort of layered parallax effect.
 
MfA said:
The odds of Microsoft suing a game developer who develops for their platform are pretty slim ...
Well it's part of XDK (the MCT, as Barbarian mentioned) - so not sued, but likely kindly asked to keep its use exclusive to 360 version only.

To be fair, with modern packaged games the 360 is the one that's always running out of disc space, so every little thing helps. But it's still lame, because there are a lot of platforms and distribution methods out there that could benefit from this outside of "desktop" consoles.

also, everything is patented.
Of course - especially the obvious stuff with prior art.

Laa-Yosh said:
But without virtual texturing you're reusing the same maps all around the level
It's sort of counterintuitive, but streaming bandwidth is only marginally affected by reusing maps (assuming "reasonable" reuse) - basically you end up increasing the frequency of entire mip chains being loaded, while with unique data only the actually used portions get loaded.
 
Well it's part of XDK (the MCT, as Barbarian mentioned) - so not sued, but likely kindly asked to keep its use exclusive to 360 version only.
Shrug, best policy on software patents is ... don't ask, don't tell.

If they don't know you will be using it, they can't ask you not to; if you don't know for certain they patented it, you avoid triple damages ... software patents are just best left ignored altogether by a developer, no good can come from admitting to reading them.
 
It's sort of counterintuitive, but streaming bandwidth is only marginally affected by reusing maps (assuming "reasonable" reuse) - basically you end up increasing the frequency of entire mip chains being loaded, while with unique data only the actually used portions get loaded.
Yes. I tried to explain this to our artists too, when we were discussing reusing some of our decal data. If you want to be sure the reuse works all the time, you have to restrict the artists/designers to certain texture/object sets in certain world areas. Otherwise each single object instance near the camera means all its texture mips must be loaded. If the count of differently textured objects in the area rises too much, the texture cache starts to thrash a lot (and you lose performance). Virtual texturing decreases this problem a lot.

But without virtual texturing you're reusing the same maps all around the level, whereas games like Rage will not have that luxury. It could counterbalance the higher efficiency to some extent.
You don't have to save each unique pixel to the disc/HDD. You can instead save the base textures to the tiles, and blend all the decals on top of the base textures whenever you load a tile from the disc/HDD. This saves a huge amount of space. And it's much faster to blend decals once at virtual texture tile loading than to render the decals on top of the geometry every frame like most engines do.
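To illustrate the idea (my own toy sketch with single-channel pixels, not the actual engine code): the decal is composited into the tile once, when the tile is loaded, and every subsequent frame just samples the cached result.

```python
def blend_decal_into_tile(tile, decal, alpha, x0, y0):
    """Alpha-blend a small decal into a base tile in place. Done once at
    virtual-texture tile load time; per-frame rendering then samples the
    tile directly, paying nothing extra for the decal. Pixels are plain
    grayscale values here to keep the sketch short."""
    for dy, row in enumerate(decal):
        for dx, d in enumerate(row):
            a = alpha[dy][dx]
            ty, tx = y0 + dy, x0 + dx
            tile[ty][tx] = round(d * a + tile[ty][tx] * (1.0 - a))
    return tile
```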
 
Thanks for the explanations, guys!

Also, the decal thing with virtual textures will only work if the content is produced as in id's Tech 5. This Mega Mesh approach also results in really, truly unique textures; no base tiles or decals are present in the editor, because everything is Polypainted in Zbrush.
Polypaint is similar to vertex colors, but it assigns one color per quad polygon instead of per vertex. This is the info that Lionhead's tool can bake into texture maps. I assume they paint color and specular layers separately in Zbrush.
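A toy bake of that idea (my own sketch, assuming a regular grid of quads with a trivial UV layout - the real tool handles arbitrary sculpted meshes): each quad fills its own rectangle of texels with its single polypaint color.

```python
def bake_polypaint(quad_colors, texels_per_quad):
    """Bake one-color-per-quad polypaint into a texture map: quad
    (qx, qy) owns the texel rectangle starting at
    (qx, qy) * texels_per_quad and is filled with its flat color."""
    quads = len(quad_colors)
    size = quads * texels_per_quad
    tex = [[None] * size for _ in range(size)]
    for qy in range(quads):
        for qx in range(quads):
            color = quad_colors[qy][qx]
            for ty in range(texels_per_quad):
                for tx in range(texels_per_quad):
                    tex[qy * texels_per_quad + ty][qx * texels_per_quad + tx] = color
    return tex
```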
 
For simple geometry you might be able to get away with some layers using projected textures (during editing) so you can still have some texture reuse even with sculpted level design?
 
Their entire workflow has been designed to not have any UV projection to do while editing. That doesn't mean that it can't have such features but still, they'll only be able to view the final geometry in Zbrush as far as I can tell and that might make any actual editing problematic.
 
Also, the decal thing with virtual textures will only work if the content is produced as in id's Tech 5. This Mega Mesh approach also results in really, truly unique textures; no base tiles or decals are present in the editor, because everything is Polypainted in Zbrush.
I don't know how big their game world is, but good enough pixel density (for looking down in a first-person game) needs at least 256x256 pixels per square meter (for 720p; you'd need more for PC resolutions).

Assuming they have a 4096x4096 meter terrain (if you have a one-kilometer view range, that's actually pretty small). With acceptable pixel density that would be 256*4096 x 256*4096 (= 1024k * 1024k) pixels. A single uncompressed (8888) texture of that size would take 4096 gigabytes of storage space. Their color compression has a 60:1 ratio, so the color texture would be 68 gigabytes. Add in the normal layer (compressed at a 40:1 ratio), and the total compressed size would be almost 200 gigabytes. This is the reason why we do not save all the baked terrain data to the media (and it seems that Rage doesn't either).
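Spelling the arithmetic out:

```python
# Back-of-the-envelope check of the numbers above: a 4096x4096 m terrain
# at 256x256 texels per square metre, uncompressed 8888 color (4 B/texel).
texels_per_side = 256 * 4096                    # 1,048,576 texels per side
raw_gib = texels_per_side ** 2 * 4 / 2 ** 30    # 4096 GiB uncompressed
color_gib = raw_gib / 60                        # ~68 GiB at 60:1
normal_gib = raw_gib / 40                       # ~102 GiB at 40:1
total_gib = color_gib + normal_gib              # ~171 GiB, i.e. "almost 200"
```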

When using virtual texturing, you do not need to save all your tiles as baked pixel data. Some virtual tiles can point to exactly the same physical tiles (and only have different decals blended on top during loading). And some tiles can be stored as highly compressed information for an algorithm that generates the image of the tile. You can have a set of detail textures and lots of blending shapes and use them with the loaded terrain tile data to generate the tile's uncompressed pixel data. We, for example, only store the upper mips of our terrain as baked pixel data (from 128x128 down to 16k x 16k). All the closeups are generated with an artist-driven texture generator. Each tile can have dozens of polygonal texture area definitions (blended together with an artist-driven importance blending system). In addition to color and normal we also have per-pixel displacement (or parallax mapping) and material properties (specularity and glossiness). It would be impossible to save this much data if all mips had to be stored as baked pixel data.
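As a rough sketch of such an importance-blending generator (my own simplification: per-pixel coverage masks instead of polygonal shapes, single-channel values instead of full material layers):

```python
def generate_tile(areas, size):
    """Generate a tile's pixels from compact area definitions instead of
    stored baked texels. Each area is (detail, mask, importance): a
    detail texture, a coverage mask, and an artist-set importance weight;
    each pixel is the importance-weighted blend of the areas covering it."""
    tile = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            acc = weight = 0.0
            for detail, mask, importance in areas:
                w = mask[y][x] * importance
                acc += detail[y][x] * w
                weight += w
            tile[y][x] = acc / weight if weight else 0.0
    return tile
```

The point is the storage trade: a handful of shapes plus shared detail textures regenerate any mip of the tile on demand, instead of every mip being baked to disc.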

Naturally, if Lionhead's game has only a limited player movement area, they could store only the highest mips of their terrain texture in that area. The virtual texture system would never request higher mips of the further-away terrain. This would save a huge amount of space. As far as I have understood, id is also using a similar optimization in Rage.
 
Actually Lionhead has tools that measure how close the camera gets to each polygon basically, and generate the automatic UV based on this data. This could probably be done without the megamesh tool with just some manual UV editing but that'd still be guesswork, and then there's the obligatory sniper rifle in every FPS game.
Anyway, their texture density is intentionally uneven, and this way they can keep the dataset size at reasonable levels. It's not meant to be a more generalized solution like Tech 5, just as I'm sure your engine is tailored to the needs of your game as well. It's also a very interesting and complex one, it seems ;)
 