Controlling video memory fragmentation

K.I.L.E.R

I frequently delete textures (and VBOs) and create new ones during my program's lifetime.
I have yet to run into any performance issues, due to the nature of my program, but I'm interested in avoiding memory fragmentation (just because!). Naturally someone will say "the driver determines where in video memory things go", and that's all very nice, but I still want to know whether there is any way my program can influence memory fragmentation when it is constantly creating and deleting textures and VBOs.

Assume there is no alternative to creating and deleting textures frequently.
The issue isn't about how I can do things differently, but how to control fragmentation.

I assume that with the new OpenGL 3.0 object model I will have a pointer that is mapped to a part of video memory (I wish I could have a pointer straight from system RAM into VRAM :) )?
Under the new object model, I assume there would be nothing technical stopping me from merging and reusing those pointers to accommodate new data? Something like this:

Code:
struct ObjectPointers {
    ObjectPointers *next; // next mapped region in the chain
    GLvoid *ptr;          // pointer into the mapped storage
};

ObjectPointers a, b;
ObjectPointers newObj;

newObj.next = &a;
newObj.next->next = &b; // done this way for clarity
 
Well, stuff in video memory doesn't just come and go as you please... it's aligned, and the driver will decide when and where to put your textures when they're needed. If you're constantly creating and destroying 256x256 textures, for example, they will probably sit in more or less the same spot in VRAM all the time. Similarly with other resources. That's why knowing how much RAM a card has doesn't help you much: you can run out of it before you actually create, say, 256MB of resources (front, back and Z buffers included).
 
First of all, that approach was dropped some years ago while they were designing the VBO. NVIDIA's *_range extensions work exactly as you want, letting you reserve an area of video memory and map it to a pointer. But this approach was dropped in favor of a more abstract one, so I don't believe they will move backwards. Moreover, I don't think it's something you should worry about :) Still, nothing prevents you from using glMapBuffer...

p.s. The driver already does that sort of management, I'm sure; why would you bother?
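
For reference, this is roughly how data is streamed through glMapBuffer today. Note that nothing guarantees the returned pointer actually points into VRAM; it may just as well be AGP/system memory or a driver-side staging copy (the buffer size and usage hint below are only illustrative):

Code:
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

// Allocate storage once; GL_DYNAMIC_DRAW hints that it will be rewritten often.
glBufferData(GL_ARRAY_BUFFER, 1024 * sizeof(GLfloat), NULL, GL_DYNAMIC_DRAW);

// Map the buffer into client address space and write through the pointer.
GLfloat *p = (GLfloat *) glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
if (p) {
    for (int i = 0; i < 1024; ++i)
        p[i] = 0.0f; // your vertex data goes here
    glUnmapBuffer(GL_ARRAY_BUFFER); // the pointer is invalid after this call
}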
 
The closest thing I've heard to this (which is D3D-centric, so I don't know how applicable it is to OGL) is a resource allocation strategy.

If you structure your application to create the largest and most static resources first (while the VRAM is fresh), they should get the best spots; then you work towards the smaller and more 'variable' resources, as well as managed textures...

The idea, as I understand it, is that by allocating all the static resources up front you reduce fragmentation by not having, for example, a bunch of dynamic 128x128 textures stuck in between two huge static 4096x4096 render targets that never move...
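
In OpenGL terms the ordering might look something like this; purely a sketch, since the sizes are made up and the driver remains free to place things however it likes:

Code:
GLuint MakeTexture(GLsizei w, GLsizei h)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL); // allocate storage, no data yet
    return tex;
}

void InitResources()
{
    // 1. Huge, static render targets first, while memory is unfragmented.
    GLuint shadowMap  = MakeTexture(4096, 4096);
    GLuint reflection = MakeTexture(4096, 4096);

    // 2. Medium, mostly-static world textures next.
    GLuint terrain = MakeTexture(2048, 2048);

    // 3. Small, frequently recycled dynamic textures last.
    GLuint dynamic[16];
    for (int i = 0; i < 16; ++i)
        dynamic[i] = MakeTexture(128, 128);
}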

How effective this is I don't know - I played around with this sort of thing a few years back but I never really got any results worth writing about :smile:

hth
Jack
 
I prefer to have complete control of resources and how I use them.
No you don't, you just think you do. You couldn't manage these resources as well as the driver can.
The entire way you think about video memory may actually be false. Who says video memory can fragment? Who says a texture needs to be one contiguous block in memory? If you had it your way, and that kind of exposure existed, the driver might have to emulate inefficient behaviour just to appease the API definition.

It's better when such details are hidden.

Besides, even if everything behind the scenes works exactly as you think it might, having the optimizations done once, in one place, by a party that can do them properly and has strong incentives to provide them, is far, far more sensible than having every graphics programmer on the planet shoot their own feet off trying.
 
Who says video memory can fragment? Who says a texture needs to be one contiguous block in memory?

I find it quite interesting. Is there some sort of mechanism similar to what we find on x86 (I mean resource -> page -> frame translation)?
I understand it's not as necessary as in a CPU memory system, because it's easier to relocate memory when you have (most of the time) a fixed resource unit size, like a 256x256 texture.

Any info would be highly appreciated :)
 
It is.
However my point is to have more control over it, that's all.
If it is implementation-dependent, asking for more control is asking for more direct access to the specific platform, and thus for reduced compatibility. As such, outside of vendor-specific APIs, your request is unlikely to make much sense, I fear.

Having some idea of the restrictions applying to your target hardware, so as to structure your program properly within your own constraints, is likely your best option at this point. The only way to get access to this kind of information would likely be through the IHVs' developer relations departments.
 
This isn't about a specific program. If I have to go through the IHVs then I'm probably not going to get the information out of them.
I highly doubt they would tell me what their driver devs are doing behind the scenes.

Thanks everyone, looks like a dead end.
 
The question arises: do you control RAM fragmentation?
If so, why don't you try to apply the same rules to VRAM?

Yes, I do influence RAM fragmentation.
And yes, I do apply the same techniques to VRAM where it makes sense, to a degree (VRAM != system RAM).
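
To make "the same techniques" concrete: the classic CPU-side trick is to pool fixed-size blocks, so that any freed block can satisfy any later request and the arena never develops holes. A minimal sketch, with illustrative names:

Code:
#include <cstddef>
#include <vector>

// Fixed-size block pool. blockSize must be at least sizeof(void*)
// and a multiple of the pointer alignment.
class BlockPool {
public:
    BlockPool(std::size_t blockSize, std::size_t count)
        : storage(blockSize * count), freeList(NULL)
    {
        for (std::size_t i = 0; i < count; ++i)
            release(&storage[i * blockSize]); // thread every block onto the free list
    }

    void *acquire()
    {
        if (!freeList) return NULL; // pool exhausted
        Node *n = freeList;
        freeList = n->next;
        return n;
    }

    void release(void *p)
    {
        Node *n = static_cast<Node *>(p); // the free block itself stores the link
        n->next = freeList;
        freeList = n;
    }

private:
    struct Node { Node *next; };
    std::vector<char> storage;
    Node *freeList;
};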
 
Then it should be fine, really; the drivers are going to help you too, and there isn't much more you could do.
 
The advice I've heard from NVIDIA/AMD regarding this is to create your largest and most static resources first (and then in order from largest to smallest), and try to minimize "on-the-fly" resource allocations. This is pretty similar to what one does with standard RAM, so presumably the same techniques apply as much as they can be implemented through the API.
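
One way to act on the "minimize on-the-fly allocations" part, assuming the transient textures share a size and format, is to recycle texture objects instead of deleting them; a rough sketch:

Code:
#include <vector>

// Recycles 256x256 RGBA8 texture objects instead of glDeleteTextures +
// glGenTextures, so the driver never has to re-place their storage.
class TexturePool {
public:
    GLuint acquire()
    {
        if (!freeTextures.empty()) {
            GLuint t = freeTextures.back(); // reuse existing storage
            freeTextures.pop_back();
            return t;
        }
        GLuint t;
        glGenTextures(1, &t);
        glBindTexture(GL_TEXTURE_2D, t);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        return t;
    }

    void release(GLuint t) { freeTextures.push_back(t); } // keep the object alive

private:
    std::vector<GLuint> freeTextures;
};

Refilling a recycled texture with glTexSubImage2D rather than glTexImage2D also avoids reallocating its storage.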
 