The main problem as I see it is that MegaTexture requires longer, more performance-intensive shaders; you need to look up the texel in the atlas yourself, probably perform anisotropic filtering yourself, and so on.
The simplest virtual texture lookup requires just a single extra unfiltered texture read (from a 16-16-16-16 floating point indirection texture) and a single extra mad instruction. The texture read is also very cache friendly (large areas read the same texel).
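A minimal sketch of that lookup, in plain C++ standing in for the shader code; the scale/bias layout of the indirection texel is my assumption (it is one common way to get the mapping down to a single mad), not necessarily the engine's actual format:

```cpp
// Sketch of the per-pixel virtual texture lookup: one unfiltered indirection
// read yields a scale and bias, one mad per component maps the virtual UV
// into the page cache atlas. Layout is illustrative, not the actual shader.
struct IndirectionTexel { float scaleU, scaleV, biasU, biasV; }; // assumed 16-16-16-16F contents

void virtualToPhysicalUV(const IndirectionTexel& t,   // point-sampled indirection texel
                         float virtualU, float virtualV,
                         float& physicalU, float& physicalV)
{
    physicalU = virtualU * t.scaleU + t.biasU;   // the single extra mad
    physicalV = virtualV * t.scaleV + t.biasV;
    // physicalU/V are then used for the ordinary filtered read from the page cache.
}
```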
You can use the hardware trilinear filtering without any problems (no extra performance hit at all) if you include one mip layer in your page cache texture and four-pixel borders around the pages. Two-tap hardware anisotropic filtering is also fully accessible if you need it, but the majority of console games don't even use trilinear for the whole scene, so it's unlikely that current-generation virtual-textured console games will use it. Virtual texturing also gives the developer more options for custom filtering, since it's easy to create your own filtering and mipmapping systems (for example, indirection to non-square mipmaps is cheap and can produce anisotropic-style filtering).
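To make the border idea concrete, here is the arithmetic under the assumption of a 4-texel border on each side of a 128x128 page (the per-side width is my assumption; the post only says "four pixel borders"):

```cpp
// Border arithmetic, assuming a 4-texel border on each side of a 128x128 page.
constexpr int pageSize     = 128;
constexpr int borderTexels = 4;                           // assumed per-side width
constexpr int payloadSize  = pageSize - 2 * borderTexels; // 120 unique texels per axis
static_assert(payloadSize == 120, "128 - 2*4");
// The border texels duplicate data from neighbouring pages, so hardware
// bilinear/trilinear taps near a page edge never filter across a seam; the
// indirection scale/bias are set up to keep virtual UVs inside the payload area.
```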
In our system we optimized the indirection texture first to the 8888 format and then down to 565. I never thought 565 textures would be useful anymore. 565 fits the indirection texture perfectly (we have a 4096x4096 page cache texture that stores 128x128 pages). With a small integer indirection texture you need 2 extra instructions, but the indirection texture becomes really tiny and very bandwidth friendly to sample.
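A hedged sketch of how those 16 bits could be laid out, given a 4096x4096 cache of 128x128 pages (32x32 = 1024 page slots, so 5 bits per page coordinate); using the 6-bit field for a mip/scale value is my assumption:

```cpp
#include <cstdint>

// One possible 5-6-5 layout for the integer indirection texel (an assumption):
// 5 bits page X, 6 bits mip/scale, 5 bits page Y. 4096 / 128 = 32 page slots
// per axis, so 5 bits per page coordinate are enough.
uint16_t packIndirection565(uint32_t pageX, uint32_t mipScale, uint32_t pageY)
{
    return uint16_t(((pageX & 31u) << 11) | ((mipScale & 63u) << 5) | (pageY & 31u));
}
// The matching unpack plus scale/bias reconstruction in the shader is roughly
// the "2 extra instructions" mentioned above.
```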
Basically virtual texturing increases your main shader ALU instruction count by three and your TEX instruction count by one (on PC by one instruction more, since you have no custom texture formats with free range scaling). As our main shaders are around a hundred instructions, this extra performance hit is negligible.
The bigger GPU performance hit definitely comes from rendering the page fault texture, especially if the game has a long view range and hundreds of thousands of visible objects. A naive implementation (with fat 16-16-16-16 buffers and a high resolution) can take a few milliseconds to render, but here as well optimization (and proper multithreading on the CPU side) is the key to keeping the performance hit really low.
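As an illustration of the CPU side of that pass, a sketch of deduplicating the page requests read back from a low-resolution page fault (feedback) buffer; the packed uint32 page-id format and the function name are assumptions for the example:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_set>
#include <vector>

// Collect the unique page requests from a read-back page fault buffer. Many
// pixels request the same page, so deduplication shrinks the work handed to
// the streaming threads dramatically.
std::vector<uint32_t> collectUniquePageRequests(const uint32_t* feedback,
                                                std::size_t pixelCount)
{
    std::unordered_set<uint32_t> unique;
    for (std::size_t i = 0; i < pixelCount; ++i)
        unique.insert(feedback[i]);
    return std::vector<uint32_t>(unique.begin(), unique.end());
}
// The deduplicated list is then prioritized and queued for loading, which is
// where the proper multithreading on the CPU side pays off.
```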
But virtual texturing is definitely not just a performance hit. Simply the fact that all objects sample from the same big texture eliminates a ton of render state changes and allows the game engine to render more objects with a single render call. Hidden surfaces are faster to render, since texture data is not loaded for them (the lower-mip fallback is more cache friendly to sample). Decals can be rendered into the virtual texture pages at runtime (and stay there permanently) instead of being rendered every frame on top of the geometry (a huge saving in heavily decaled areas). Ambient occlusion and lighting can also be baked into the virtual texture pages at runtime (very nice, for example, for further-away geometry that is not animated but still has to work at night and during the day, or if you have an in-game editor and a heavy commitment to user-created content).
It would be really interesting to learn all the technical details and optimizations id Software has implemented for their virtual texturing system. Since they have been using similar technology (terrain-only megatexturing in Quake Wars) for a long time, they have likely bumped into lots of nice ideas. 60 fps virtual texturing on iPhone and their prototype Wii virtual texturing system pretty much prove that their system is very well optimized. It would be especially interesting to learn how much (artist-controlled) procedural generation they have used to keep the virtual texture HDD usage manageable.
They compress the hell out of the megatextures. Let me remind you that it also has normal and specular layers, because characters and objects are included as well.
This is one of the things I have been wondering about, as they use JPG-style compression extensively for the virtual texture pages. Color data is nowadays only a fraction of the whole material data, and it already compresses really well into one DXT RGB triplet. In our tech we have two DXT5 textures per material (encoding color, normal, specular, glossiness and emissive/ambient), so the color is only 25% of the data (an RGB triplet is half of a DXT5). JPG compression of normal vectors and material properties is very destructive, since JPG is designed only to compress color information (only luminance is pixel precise even at the highest settings; chroma is always very imprecise)... I doubt they have no normal maps on their terrain, since that would mean no specular highlights and no dynamic lighting at all.
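For illustration, one plausible way to split those attributes across two DXT5 textures, with the normal stored as xy and z rebuilt in the shader; the exact channel assignment is my assumption, not the actual layout:

```cpp
#include <algorithm>
#include <cmath>

// Assumed channel layout (illustrative only):
//   materialTex0 (DXT5): RGB = color, A = glossiness
//   materialTex1 (DXT5): RG  = normal.xy, B = specular, A = emissive/ambient
// Storing only normal.xy means z is reconstructed per pixel:
float reconstructNormalZ(float nx, float ny)
{
    // Assumes a unit-length tangent-space normal with a positive z component.
    return std::sqrt(std::max(0.0f, 1.0f - nx * nx - ny * ny));
}
```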
Rumors state that there can be more than a single MT at any time, so the limit of 128k can be bypassed even for huge landscapes.
More than one virtual texture is entirely possible. And it seems that id has chosen that route.
A single virtual texture can also be larger than 128k (unless you use 10-10-10 textures for the page request instead of 11-11-10, or pack the bits into 8888).
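The numbers behind that format comment, assuming 128x128 pages as above:

```cpp
// Page-request bit budget with 128-texel pages (assumed, as above).
constexpr int pageSize       = 128;
constexpr int pagesAt10Bits  = 1 << 10;                    // 1024 page coordinates per axis
constexpr int pagesAt11Bits  = 1 << 11;                    // 2048 page coordinates per axis
constexpr int vtSideAt10Bits = pagesAt10Bits * pageSize;   // 131072 texels = 128k
constexpr int vtSideAt11Bits = pagesAt11Bits * pageSize;   // 262144 texels = 256k
// So a 10-10-10 page request caps a single virtual texture at 128k x 128k,
// while 11-11-10 (or packing the bits into an 8888 target) reaches 256k x 256k.
```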
Basically the hard limit for a single virtual texture's size is set by your page size and your indirection texture size. For example, we have 128x128 pages and a 2048x2048 indirection texture, so the virtual texture is 256k * 256k. This might sound really big, but it is not even enough to uniquely map a big terrain (without any tricks). For example, if you have a 16km * 16km (16*1024 m) terrain and planar map the whole VT over it, you will only have 16*16 texels per square meter. By mapping the nearby area and the whole terrain area separately in the virtual texture, you can basically get as high a texel density as you want and as big a world size as you want (as long as you do not need to store all of that data on a HDD).
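The arithmetic behind those numbers:

```cpp
// Hard size limit and resulting texel density for the example above.
constexpr int pageSize        = 128;
constexpr int indirectionSize = 2048;
constexpr int vtSideTexels    = pageSize * indirectionSize;    // 262144 = 256k texels per axis
constexpr int terrainMeters   = 16 * 1024;                     // 16 km
constexpr int texelsPerMeter  = vtSideTexels / terrainMeters;  // 16, i.e. 16x16 texels per m^2
```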