Virtual Texture Demo

Hi Guys,

I've released an implementation of Virtual Texturing (also known as MegaTexture in id Tech 5). It is written in C# and uses Direct3D 10 with the help of SlimDX. I'm releasing it under the MIT license.

Virtual texturing is analogous to virtual memory. A large texture (potentially larger than can fit in memory) is split into blocks of pixels called pages in a preprocessing step. At runtime the scene is rendered to determine which pages are required. That information is read back to the CPU and analyzed, and the required pages are then scheduled to load in the background. Loaded pages are uploaded into a texture atlas. Pages are tracked in a quadtree, which is used to build a page table. When rendering the scene, the page table is used to map from normal texture coordinates to the page stored in the texture atlas.
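
To make the page-table lookup concrete, here's a rough CPU-side sketch of the indirection the pixel shader performs. The entry layout and names are my own for illustration, not the demo's actual code:

[code]
using System;

// Hypothetical page table entry: where a virtual page lives in the atlas.
struct PageTableEntry
{
    public float AtlasU, AtlasV; // top-left corner of the page in the atlas, in [0,1]
    public float Scale;          // fraction of the atlas one page covers
}

static class VirtualTextureLookup
{
    // Map a virtual texture coordinate (u, v) to a physical atlas coordinate,
    // mirroring what the pixel shader does with the page table.
    public static void VirtualToAtlas(PageTableEntry[,] table, int tableSize,
                                      float u, float v,
                                      out float atlasU, out float atlasV)
    {
        int pageX = Math.Min((int)(u * tableSize), tableSize - 1); // page column
        int pageY = Math.Min((int)(v * tableSize), tableSize - 1); // page row
        PageTableEntry e = table[pageY, pageX];

        // Fractional position within the page, remapped into the atlas.
        atlasU = e.AtlasU + (u * tableSize - pageX) * e.Scale;
        atlasV = e.AtlasV + (v * tableSize - pageY) * e.Scale;
    }
}
[/code]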

The maximum texture size is the page table size times the page size, so if your page table is 256x256 and your page size is 128x128, your virtual texture is 32k (256 x 128 = 32,768) pixels wide. You can use a large page table (up to the maximum texture size of your GPU) or even multiple page tables to get huge texture sizes. Also, because you only upload what is required for the current view, you typically use less memory than standard texturing for a whole scene.

This is a simple, unoptimized implementation designed to illustrate the technique. It supports bilinear and trilinear filtering. Trilinear filtering is calculated by doing two lookups of the virtual texture and lerping between them.
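
In pseudocode the trilinear path looks roughly like this (SampleVirtualBilinear is a hypothetical stand-in for the bilinear virtual-texture lookup, not a function from the demo):

[code]
using System;

static class TrilinearSketch
{
    // Hypothetical stand-in for the bilinear virtual-texture lookup.
    static float SampleVirtualBilinear(float u, float v, int mip)
    {
        /* bilinear lookup through the page table would go here */
        return 0.0f;
    }

    // Sketch of the trilinear idea: sample the two nearest mip levels of the
    // virtual texture and lerp by the fractional mip value.
    static float SampleVirtualTrilinear(float u, float v, float mip)
    {
        int mipLo = (int)Math.Floor(mip);
        float t = mip - mipLo;

        float a = SampleVirtualBilinear(u, v, mipLo);     // finer level
        float b = SampleVirtualBilinear(u, v, mipLo + 1); // next coarser level

        return a + (b - a) * t; // lerp between the two lookups
    }
}
[/code]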

id Software and Sean Barrett talk about compressing the pages on disk as well as with DXT in GPU memory. I simply load the pages uncompressed. As a result, the tile caches are huge on disk, and loading may be a little slower than if they were compressed. Defragmenting your hard drive can greatly help with this, and as it stands the implementation benefits from disk caching. It wouldn't be hard to add compression; I left it out for simplicity.

I've only tested it with textures up to 64k. I'm assuming it will work up until the point you reach float precision limits, in which case you could probably use doubles with D3D11, or pack your texture coordinates into multiple values.

This project was built and developed on an x64 machine with 6 GB of RAM. All my memory is used during the tile generation phase, so there may be address space issues on x86. I've tested it on an ATI Radeon HD 4890 and an Nvidia 9600 GSO under Windows Vista x64. I don't expect there to be too many issues, but if you find bugs or fix any problems, please let me know!

You can view a video and screenshots, and download the demo and source, here.

Thanks,
Brad Blanchard
www.linedef.com
 
That is really neat.
I might suggest you do another, simpler demo that procedurally generates some large test textures, so you don't have to download 300+ MB to try it out.
 
So basically I can replace the texture with anything? Can I change the viewing distance from the texture?

I'll have to download this and investigate. Maybe even install a compiler to see if I can make any kind of simple modification.

What I'd like to do is use this for viewing some large Hubble/NASA pictures of space and such.
 
Yes you can. It supports 4-channel, 8-bit Targa and raw formats. I've included a simple plane for viewing the entire texture without excess geometry. I've used it for viewing family photos.
 
Chalk it up to [strike]laziness[/strike] being busy for not checking out the source directly (thanks btw) but what's the size of the buffer you're passing to the CPU for the visual acuity determination?
 
The demo lets you choose, but by default it's 64x64. I found this works well for most things. Sometimes it has problems with small objects. I haven't seen any issues with 128x128 though.
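
For anyone curious what the analysis step might look like, here is a sketch; the RGBA page-ID encoding is an assumption for illustration, not necessarily the format the demo writes:

[code]
using System.Collections.Generic;

// Hypothetical feedback analysis: walk the small readback buffer and record
// every (pageX, pageY, mip) that was touched this frame.
static class FeedbackAnalyzer
{
    public struct PageRequest { public int X, Y, Mip; }

    public static List<PageRequest> Analyze(byte[] rgba, int width, int height)
    {
        var requests = new List<PageRequest>();
        for (int i = 0; i < width * height; ++i)
        {
            if (rgba[i * 4 + 3] == 0)
                continue; // alpha used as a "this pixel was written" flag

            requests.Add(new PageRequest
            {
                X = rgba[i * 4 + 0],   // page x encoded in red
                Y = rgba[i * 4 + 1],   // page y encoded in green
                Mip = rgba[i * 4 + 2], // mip level encoded in blue
            });
        }
        return requests;
    }
}
[/code]

At the default 64x64 that's only 4,096 pixels to walk per frame, which is part of why the readback and analysis stay cheap.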
 
Sorry to bump this thread, noob question...

Can you use the detail/res of mega/virtual textures to create a super-detailed displacement map and then use DX11 tessellation to render that displacement?

Kinda like having an extremely high-detail ZBrush/Mudbox model in realtime. Also, I'm guessing that calculating the skin weights on a rigged character of such detail won't be too much hassle, performance-wise?


Cheers.
 
Sure, it's easy to add more channels to your virtual texture, and there is nothing preventing you from including displacement maps.

In order to get good performance you have to do the tessellation/displacement after animating, but this should be no problem.
 
I think I'm confused about the feedback buffer.

It does not need to cover every pixel, because the less detailed mip levels' pages will always be valid, and small areas on screen will not need the more detailed texels?

Only the feedback buffer can generate page faults (?), so I don't see how this solution converges over time for small triangles which are nearby in screen space, unless these small triangles are guaranteed to be nearby in the MegaTexture.
 
Let's assume a page is 128x128 pixels in size. It should cover (with stretching, etc.) something like 32x32 pixels on screen (at least). So if your feedback buffer is 1/32nd the resolution, you still get one pixel touched per page: your worst-case minified page covers 1x1 px then, and an "ideal" page will still cover 4x4 px in the 1/32nd-resolution buffer.
 
I still don't understand. The feedback buffer, when read back, tells you what texture pages must be loaded, right?
Imagine two triangles that fit inside one feedback buffer element. Only one triangle's 'id' will be written to the feedback buffer. The other triangle's id will not. Is the assumption that the other triangle will be so small that texturing the tiny polygon with low-detail samples copied from the smaller mipmap pages won't be jarring?

Is it even guaranteed that there will be a correct (albeit highly minified) sample available for this other tiny triangle that is never written into the feedback buffer?

It seems weird that the frame doesn't converge towards the highest needed resolution.
 
Your understanding is fundamentally correct: if a triangle covers a sample in the main scene but does not cover a sample in the feedback scene, no page fault occurs, and the page isn't loaded. In practice, it seems to work out pretty well anyway. Once a page is faulted in, it tends to stay resident for a while, and as the scene animates, a triangle that fails to cover any feedback samples in one frame will most likely hit one in the near future. The idea is that you'll get hits often enough for all necessary pages to stay resident.

If you were really concerned about this (say, a static scene), I suppose you could translate the geometry drawn into the feedback buffer by a different random offset every frame.
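
A sketch of that idea (an assumption on my part, not code from the demo), using SlimDX's Matrix type to nudge the feedback pass's projection by up to one feedback texel each frame:

[code]
using System;
using SlimDX;

// Hypothetical feedback jitter: offset the feedback camera's projection by a
// random sub-texel amount so small triangles are eventually sampled.
static class FeedbackJitter
{
    static readonly Random Rng = new Random();

    public static Matrix Jitter(Matrix projection, int fbWidth, int fbHeight)
    {
        // One texel in normalized device coordinates is 2/size wide.
        float dx = ((float)Rng.NextDouble() - 0.5f) * (2.0f / fbWidth);
        float dy = ((float)Rng.NextDouble() - 0.5f) * (2.0f / fbHeight);
        return projection * Matrix.Translation(dx, dy, 0.0f);
    }
}
[/code]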

[quote]Is it even guaranteed that there will be a correct (albeit highly minified) sample available for this other tiny triangle that is never written into the feedback buffer?[/quote]

It is only guaranteed if your page allocator guarantees it. That is entirely up to the app developer.
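
For example, one simple policy (my assumption, not something the demo enforces) is to pin the coarsest mip levels so every virtual texel always has some fallback:

[code]
// Hypothetical allocator policy: pages at or above this mip level are loaded
// up front and never evicted, so any lookup can always fall back to them.
static class PagePinning
{
    const int PinnedMipThreshold = 6; // assumption: the top levels of the pyramid

    public static bool CanEvict(int mip)
    {
        // Fine-detail pages (low mip numbers) may be evicted under memory
        // pressure; the coarse fallback pages never are.
        return mip < PinnedMipThreshold;
    }
}
[/code]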
 
Thank you. The thought of jittering the feedback buffer had occurred to me; have any prominent devs spoken about it?
 
It's possible to write to UAVs from pixel shaders in DX11, right? So with DX11 you could presumably create a pixel-accurate list of page misses during the normal rendering pass?
 
[quote]Thank you. The thought of jittering the feedback buffer had occurred to me; have any prominent devs spoken about it?[/quote]

We jitter our feedback buffer, just a small random viewport jitter each frame (IIRC our feedback buffer is 1/8 the size of our framebuffer), and that ensures most of the small triangles get picked up.
 