Do unused MIP levels take video memory?

brandonm

Newcomer
If an object is only ever drawn far away, do its textures take up video memory only for the mip levels actually needed, or for the highest-resolution ones too? In particular, Oblivion has separate meshes for distant buildings, and I'm wondering whether it would save video memory if those used lower-resolution textures.

I've found various information online, but no answers. Virtual video memory should make it easy to let the high-resolution mip levels page out. OpenGL and DirectX both have calls to restrict rendering to the lower-resolution levels of a mip chain, which implies clamping might actually help on some hardware (especially since the OpenGL docs suggest using it while you load the high-resolution data). The MegaTexture approach sounds like manually managing which mip levels to load, so that just the necessary portions of huge textures fit on the video card; that in turn suggests the hardware doesn't handle this well on its own.
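For reference, a minimal sketch of the clamping calls in question; the helper name is made up, and GL_TEXTURE_BASE_LEVEL assumes OpenGL 1.2+:

```cpp
#include <GL/gl.h>  // GL_TEXTURE_BASE_LEVEL is core since OpenGL 1.2

// Hypothetical helper: tell GL that `baseLevel` is the finest mip it may
// ever sample from this texture. Whether the driver exploits that to keep
// the finer levels out of video memory is entirely up to the driver.
void clampTextureDetail(GLuint tex, GLint baseLevel)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, baseLevel);
}

// Rough D3D9 equivalent for a D3DPOOL_MANAGED texture, where the runtime
// then only pushes mips >= baseLevel into video memory:
//   texture->SetLOD(baseLevel);
```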
 
In general, no, the video hardware doesn't cleverly do any paging for you. As you mention, it's quite possible for this to be handled in hardware, but it implies an MMU with a page walker, which current GPUs do not have.

For now, you basically do this in software (a la megatexture or other "virtual texturing" methods), although Larrabee will allow a lot more options on this front.
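To sketch what doing it in software amounts to (all names here are illustrative, not anyone's actual API): the CPU decides each frame which texture tiles are needed, keeps a fixed budget of them resident in a physical atlas, and evicts cold ones. A minimal LRU residency tracker might look like:

```cpp
#include <cstdint>
#include <list>
#include <unordered_map>

// A tile is identified by (mip level, tile x, tile y) packed into 64 bits.
using TileId = uint64_t;

inline TileId makeTileId(uint32_t level, uint32_t x, uint32_t y)
{
    return (TileId(level) << 48) | (TileId(x) << 24) | TileId(y);
}

class TileCache {
public:
    explicit TileCache(size_t capacity) : capacity_(capacity) {}

    // Mark a tile as needed this frame. Returns true if it was already
    // resident; on false the caller streams it from disk into a free
    // atlas slot and updates the indirection table the shader samples.
    bool touch(TileId id)
    {
        auto it = slots_.find(id);
        if (it != slots_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second); // refresh recency
            return true;
        }
        if (slots_.size() == capacity_) {   // budget full: evict coldest
            slots_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(id);
        slots_[id] = lru_.begin();
        return false;
    }

private:
    size_t capacity_;                       // resident-tile budget
    std::list<TileId> lru_;                 // most recent at the front
    std::unordered_map<TileId, std::list<TileId>::iterator> slots_;
};
```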
 
Yes, unused MIP levels still take video memory. Artists save all the MIP levels in one DDS file, so the whole chain gets uploaded unless you know in advance that some mipmaps will never be needed.

As an object moves from near to far away, the texture unit reads texels from the same DDS texture, just from a different mipmap level. The DDS file always takes the same number of bytes in VRAM, but when the object is far away you may save a lot of bandwidth.
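To put rough numbers on that (my arithmetic, not from the post above): for a square DXT1 texture, mip level 0 alone is about three quarters of the whole chain, so a chain authored to start one level lower costs roughly a quarter of the VRAM:

```cpp
#include <cstdio>

// Bytes for a square DXT1 mip chain starting at `topLevel`.
// DXT1 stores 4x4 texel blocks at 8 bytes each; the 2x2 and 1x1 tail
// levels are ignored here for simplicity.
size_t mipChainBytes(int baseDim, int topLevel)
{
    size_t total = 0;
    for (int dim = baseDim >> topLevel; dim >= 4; dim >>= 1)
        total += (size_t)(dim / 4) * (dim / 4) * 8;
    return total;
}

int main()
{
    printf("full chain:   %zu bytes\n", mipChainBytes(1024, 0)); // ~683 KB
    printf("from level 1: %zu bytes\n", mipChainBytes(1024, 1)); // ~171 KB
    return 0;
}
```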
 

If your maps are huge, the textures can need more memory than your VRAM has. Megatexture may help there.
 
Thanks, that takes care of my question. I've already written some code that walks a set of meshes and calculates how much resolution each of their shared textures needs if they're only ever drawn beyond a given distance. It's good to know it might actually be useful for something beyond a little practice with file formats and geometry.
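For what it's worth, the core of that calculation presumably looks something like the sketch below (my formulation, assuming a simple pinhole camera; all names are made up):

```cpp
#include <cmath>

// Finest mip level worth keeping for a surface `distance` units away:
// the first level whose texels still cover at least one screen pixel.
//
// texelsAcross : level-0 resolution across the surface
// worldSize    : world-space extent the texture is mapped over
// viewportH    : viewport height in pixels
// fovY         : vertical field of view in radians
int requiredMipLevel(float texelsAcross, float worldSize,
                     float distance, float viewportH, float fovY)
{
    // Screen pixels covered by one level-0 texel at this distance.
    float pixelsPerTexel = (worldSize / texelsAcross)
                         * viewportH / (2.0f * distance * std::tan(fovY * 0.5f));
    if (pixelsPerTexel >= 1.0f)
        return 0;                         // full resolution still needed
    // Each mip level doubles the texel footprint on screen.
    return (int)std::ceil(std::log2(1.0f / pixelsPerTexel));
}
```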

Andrew, do you have links to more information about MMUs on GPUs? I hear they have had virtual memory since at least the g80, but I guess I've just been assuming that meant the same kind of granularity and features as on a CPU.
 
To my knowledge, neither G80 nor any current GPU supports "real" virtual memory, at the level of doing page-table translations on texture taps, but I could be wrong.

D3D10's driver model was at one point supposed to virtualize GPU memory, but that got descoped... I believe it's no longer part of D3D11 either, so you'll have to wait until at least D3D12 before there's OS- and driver-level support for virtual memory and paging of GPU memory, IIRC.
 
To my knowledge, neither G80 nor any current GPU supports "real" virtual memory, at the level of doing page-table translations on texture taps, but I could be wrong.

I don't think we could tell whether it does or not, given Vista's driver model (WDDM1.0). From what I know of it, WDDM1 memory management is built around the capabilities of older hardware (R300, NV30), and doesn't give a lot of opportunity for NV and AMD to take advantage of more advanced features (like having surfaces partially resident or physically discontiguous, never mind demand paging).

Intel has been making a big deal about how Larrabee can do all these neat virtual memory tricks. Seeing how much of that Vista actually lets them pull off should be enlightening.
 
I don't think we could tell whether it does or not, given Vista's driver model (WDDM1.0).
Yes, that's certainly true, although I would have expected it to be exposed in CUDA/PTX or CAL/CTM if it were supported. IIRC R600+ can read/write directly from host memory, but I know of no support for virtual memory or paging... please, someone let me know if I'm wrong!
 
If I understand it right, CUDA and CTM are still subject to WDDM. The APIs (DX9, DX10, OGL, CUDA, CTM, etc.) are peers: they're user-mode clients of the WDDM kernel services, and WDDM virtualizes the physical resources among all of them.
 
On the PS2, many games (including one I worked on) determined in advance which mip levels were needed for any given texture. This was usually done by precomputing the maximum deltaU and deltaV for each object over every placement in the scene, plus some not-entirely-accurate assumptions about the maximum and minimum texel-to-pixel ratio. Only the required mip levels were then uploaded from the PS2's main memory to the GPU's embedded memory.
This neat trick saved a lot of bandwidth and rendering time.
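A sketch of that precomputation (illustrative, not the shipped code): per placement, the largest texel step between adjacent screen pixels is the same quantity hardware LOD selection uses, and the finest mip worth uploading is governed by the placement where texels are biggest on screen, i.e. the smallest such step:

```cpp
#include <algorithm>
#include <cmath>

// For one placement of the object: texels stepped per screen pixel,
// taking the worse of the two texture axes (as mip selection does).
float texelsPerPixel(float deltaU, float deltaV)
{
    return std::max(deltaU, deltaV);
}

// Across every placement in the scene, the finest mip ever needed comes
// from the minimum texels-per-pixel ratio the object reaches.
int finestMipNeeded(float minTexelsPerPixel, int mipCount)
{
    if (minTexelsPerPixel <= 1.0f)
        return 0;                                  // level 0 still needed
    int level = (int)std::floor(std::log2(minTexelsPerPixel));
    return std::min(level, mipCount - 1);
}
// Only levels [finestMipNeeded .. mipCount-1] get uploaded to GS memory.
```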
 
I have a question about mipmaps: in the NV control panel I have the option to auto-generate them. Do I want to enable this?
 