BradBlanchard
Newcomer
Hi Guys,
I've released an implementation of Virtual Texturing (also known as MegaTexture in id Tech 5). It is written in C# and uses Direct3D 10 with the help of SlimDX. I'm releasing it under the MIT license.
Virtual texturing is analogous to virtual memory. In a preprocessing step, a large texture (potentially larger than can fit in memory) is split into blocks of pixels called pages. At runtime the scene is first rendered to determine which pages are required; that information is read back to the CPU and analyzed, and the required pages are scheduled to load in the background. Loaded pages are uploaded into a texture atlas. Resident pages are tracked in a quad tree, which is used to build a page table. When rendering the scene, the page table maps ordinary texture coordinates to the corresponding page in the atlas.
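To make the page-table mapping concrete, here's a rough CPU-side sketch of the address translation. The real lookup happens in the pixel shader, and the names here (PageTableEntry, VirtualToAtlas, etc.) are made up for illustration, not taken from the actual source:

using System;

// Hypothetical entry: where a resident page lives in the atlas, and at
// which mip level it was loaded (0 = finest).
struct PageTableEntry
{
    public int AtlasX, AtlasY;
    public int MipLevel;
}

static class VirtualTextureAddress
{
    // Translate a virtual UV in [0,1) into an atlas UV in [0,1).
    public static void VirtualToAtlas(
        float u, float v,
        PageTableEntry[,] pageTable, // one entry per finest-level page
        int pageTableSize,           // pages per side, e.g. 256
        int atlasSizeInPages,        // atlas pages per side, e.g. 32
        out float atlasU, out float atlasV)
    {
        // Which finest-level page does this coordinate fall in?
        int pageX = Math.Min((int)(u * pageTableSize), pageTableSize - 1);
        int pageY = Math.Min((int)(v * pageTableSize), pageTableSize - 1);
        PageTableEntry entry = pageTable[pageY, pageX];

        // A page resident at mip level L covers 2^L finest-level pages,
        // so compute the fractional position inside that coarser page.
        int covered = 1 << entry.MipLevel;
        float inPageU = (u * pageTableSize / covered) % 1.0f;
        float inPageV = (v * pageTableSize / covered) % 1.0f;

        // Offset by the page's slot in the atlas, then by the in-page fraction.
        atlasU = (entry.AtlasX + inPageU) / atlasSizeInPages;
        atlasV = (entry.AtlasY + inPageV) / atlasSizeInPages;
    }
}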
The maximum texture size is the page table size times the page size: with a 256-entry-per-side page table and 128-pixel pages, the virtual texture is 256 × 128 = 32,768 (32k) pixels per side. You can use a larger page table (up to the maximum texture size of your GPU) or even multiple page tables to get huge texture sizes. Also, because you only upload what is required for the current view, you typically use less memory than standard texturing for a whole scene.
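For a rough sense of the memory win, here's a back-of-the-envelope comparison; the atlas size is illustrative, not measured from the demo:

using System;

class SizeCheck
{
    static void Main()
    {
        long pageSize = 128, tableSize = 256;
        long virtualSide = tableSize * pageSize;        // 32768 texels per side

        // If the whole 32k texture were resident as RGBA8 (no mips):
        long fullBytes = virtualSide * virtualSide * 4; // 4096 MB

        // Versus, say, a 32x32-page atlas actually held on the GPU:
        long atlasSide = 32 * pageSize;                 // 4096 texels per side
        long atlasBytes = atlasSide * atlasSide * 4;    // 64 MB

        Console.WriteLine("full: {0} MB, atlas: {1} MB",
                          fullBytes >> 20, atlasBytes >> 20);
    }
}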
This is a simple, non-optimized implementation designed to illustrate the technique. I have support for bilinear and trilinear filtering; trilinear is calculated by doing two lookups of the virtual texture, one at each adjacent mip level, and lerping between them.
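In sketch form it looks like this (SampleBilinear is a stand-in for the full page-table lookup plus bilinear fetch, not a function from the actual demo):

using System;
using SlimDX;

static class Trilinear
{
    // Blend two bilinear virtual-texture samples at adjacent mip levels.
    static Vector4 SampleTrilinear(float u, float v, float mipLevel)
    {
        int lower = (int)Math.Floor(mipLevel);
        float t = mipLevel - lower;        // fractional level drives the blend

        Vector4 a = SampleBilinear(u, v, lower);     // first lookup
        Vector4 b = SampleBilinear(u, v, lower + 1); // second lookup
        return a + (b - a) * t;                      // lerp between levels
    }

    // Placeholder: page-table translation followed by a bilinear fetch.
    static Vector4 SampleBilinear(float u, float v, int mip)
    {
        return new Vector4(); // elided
    }
}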
id Software and Sean Barrett talk about compressing the pages on disk as well as storing them as DXT in GPU memory. I simply load the pages uncompressed, so the tile caches are huge on disk and loading may be a little slower than if they were compressed. Defragmenting your hard drive can help a great deal here, and as it stands the implementation benefits from the OS disk cache. It wouldn't be hard to add compression; I left it out for simplicity.
I've only tested it with textures up to 64k. I assume it will work up to the point where you hit float precision limits, at which point you could probably use doubles with D3D11, or pack your texture coordinates into multiple values.
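The rough arithmetic: a float has an effective 24-bit mantissa, so near u = 1.0 adjacent representable values are about 2^-24 apart. For a 64k (2^16) texture that still leaves 2^8 = 256 representable positions per texel, which is fine for filtering; by around 2^24 texels per side you'd be down to roughly one step per texel. A quick check:

using System;

class PrecisionCheck
{
    static void Main()
    {
        double ulpNearOne = Math.Pow(2, -24); // float spacing just below 1.0
        for (int bits = 16; bits <= 24; bits += 4)
        {
            long texels = 1L << bits;
            double stepsPerTexel = (1.0 / texels) / ulpNearOne;
            Console.WriteLine("{0} texels/side: ~{1} float steps per texel",
                              texels, stepsPerTexel);
        }
    }
}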
This project was built and developed on an x64 machine with 6 GB of RAM, and all of that memory gets used during the tile-generation phase, so there may be address-space issues on x86. I've tested it on an ATI Radeon HD 4890 and an NVIDIA 9600 GSO under Windows Vista x64. I don't expect too many issues, but if you find bugs or fix any problems, please let me know!
You can view a video and screenshots, and download the demo and source, here.
Thanks,
Brad Blanchard
www.linedef.com