Need to know memory requirements for video card settings

K.I.L.E.R

I would love a formula (explained as well :)) so I can work out the theoretical VRAM requirements of a scene.

I would like it to account for texture storage, buffers, the AA mode used, the AF mode used, width, height, colour depth, scene geometry and anything else related.

Thanks.

PS: I know I am asking for a formula with 50 variables. ;)
I am hoping to make a personal program so I can calculate all the bandwidth and memory requirements for my video card settings, mainly so I know which settings are best to use.

My current formula:

bitDepth = bitDepth/8;  // bits per pixel -> bytes per pixel

buffers = 2*antiAliasing+3;  // the +3 is front, back, z

videoMemory = width*height*bitDepth*buffers;

is basically

W * H * 4 * (2N + 3)

With 1600x1200, 32 bpp and N = 6 (6x AA):

1600 * 1200 * 4 * (2*6 + 3) = ~115 MB of VRAM eaten up

Should I divide by 6 because of colour compression (6:1)?

Sorry if this sounds like the math of a mumbling idiot. :LOL:

All I have left is average scene geometry and textures.
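
In case it helps, here is the formula as a tiny C program (the function and variable names are just mine, and it still ignores textures and geometry):

#include <stdio.h>

/* Rough sketch of the formula above: framebuffer memory only. */
double framebufferMB(int width, int height, int bitDepth, int antiAliasing)
{
    int bytesPerPixel = bitDepth / 8;        /* 32 bpp -> 4 bytes */
    int buffers = 2 * antiAliasing + 3;      /* the +3 is front, back, z */
    return (double)width * height * bytesPerPixel * buffers / 1e6;
}

int main(void)
{
    printf("%.1f MB\n", framebufferMB(1600, 1200, 32, 6)); /* ~115 MB */
    return 0;
}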
 
Dunno about the rest, but I do know that you don't divide by 6 for the colour/Z compression. The card has to reserve the maximum anyway: if it didn't, and you hit a frame that turned out to be completely incompressible, you'd have a problem :)
 
Almost right, but this one is wrong:
K.I.L.E.R said:
buffers = 2*antiAliasing+3(front, back, z)
Here you add two AA'ed buffers, then you add them once more unantialiased (maybe I should just say aliased) in the + 3. You should use:
buffers = 2*antiAliasing+1
This is for architectures that downsample to the frontbuffer (and it assumes that all the buffers have the same bitdepth). For situations where the AA samples are combined in the ramdac use:
buffers = 3*antiAliasing
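
As a quick sketch in C (names are just illustrative, and it assumes all buffers share the same bit depth):

#include <stdio.h>

int main(void)
{
    int w = 1600, h = 1200, bytesPerPixel = 4, aa = 6;
    /* downsample in memory: aa colour samples + aa Z samples + 1 plain front buffer */
    int buffersDownsample = 2 * aa + 1;
    /* combine in the ramdac: aa front + aa back + aa Z samples */
    int buffersRamdac = 3 * aa;
    printf("downsample in memory: %.1f MB\n", (double)w * h * bytesPerPixel * buffersDownsample / 1e6);
    printf("combine in ramdac:    %.1f MB\n", (double)w * h * bytesPerPixel * buffersRamdac / 1e6);
    return 0;
}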
 
triple buffering:
buffers = 2*antiAliasing+2;
videoMemory = width*height*bitDepth*buffers;

double buffering:
buffers = 2*antiAliasing+1;
videoMemory = width*height*bitDepth*buffers;

I understand the double buffering one is right, so triple buffering must be right too?

Thanks
 
K.I.L.E.R said:
triple buffering:
buffers = 2*antiAliasing+2;
videoMemory = width*height*bitDepth*buffers;
Yeah, as long as the architecture downsamples the finished images in memory (instead of in the ramdac), that should be right.
 
Thank You.

Nearly 110 MB of VRAM used on my R300 with 1600x1200 32bpp 6x AA and triple buffering.

Nearly 100MB used with same settings + double buffering. Why such a small difference going from DB to TB?
 
It's amazing how much you actually learn when doing the maths. :LOL:
My maths teachers over the last few years always proved that. :)

Thank You again for the help.
 
Eek. . . I guess this shows people that all the video RAM that recent cards have certainly -is- being fully utilized. . .
 
K.I.L.E.R said:
Thank You.

Nearly 110 MB of VRAM used on my R300 with 1600x1200 32bpp 6x AA and triple buffering.

Nearly 100MB used with same settings + double buffering. Why such a small difference going from DB to TB?
Something is wrong with your equations.

1600 * 1200 * 4 * (1 (front buffer) + 1 (back buffer) + 1 (Z buffer) + 6(AA back buffer) + 6 (AA Z buffer)) = 110 MB. When forcing AA, you have to keep the non-AA Z buffer around because there are many applications that need it.

With triple buffering, add another 7 MB to the above. However, I don't think triple buffering is needed with AA on the R300 because the AA buffer itself is like a third buffer.
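
In C form, a sketch of that sum (variable names are just for illustration):

#include <stdio.h>

int main(void)
{
    double w = 1600, h = 1200, bytesPerPixel = 4, aa = 6;
    /* front + back + Z, plus aa AA colour samples and aa AA Z samples */
    double buffers = 1 + 1 + 1 + aa + aa;
    double total = w * h * bytesPerPixel * buffers;
    printf("double buffered: %.1f MB\n", total / (1024.0 * 1024.0));                            /* ~110 MB */
    printf("triple buffered: %.1f MB\n", (total + w * h * bytesPerPixel) / (1024.0 * 1024.0));  /* ~117 MB, i.e. about 7 MB more */
    return 0;
}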
 
OpenGL guy said:
Something is wrong with your equations.

1600 * 1200 * 4 * (1 (front buffer) + 1 (back buffer) + 1 (Z buffer) + 6(AA back buffer) + 6 (AA Z buffer)) = 110 MB. When forcing AA, you have to keep the non-AA Z buffer around because there are many applications that need it.

With triple buffering, add another 7 MB to the above. However, I don't think triple buffering is needed with AA on the R300 because the AA buffer itself is like a third buffer.
Hm, I was wondering whether you need another buffer to downsample to when doing AA, but came to the conclusion that it shouldn't be necessary. Why do you think you need it, and moreso, why would you need a low-res Z-buffer?
 
Xmas said:
OpenGL guy said:
Something is wrong with your equations.

1600 * 1200 * 4 * (1 (front buffer) + 1 (back buffer) + 1 (Z buffer) + 6(AA back buffer) + 6 (AA Z buffer)) = 110 MB. When forcing AA, you have to keep the non-AA Z buffer around because there are many applications that need it.

With triple buffering, add another 7 MB to the above. However, I don't think triple buffering is needed with AA on the R300 because the AA buffer itself is like a third buffer.
Hm, I was wondering whether you need another buffer to downsample to when doing AA, but came to the conclusion that it shouldn't be necessary. Why do you think you need it, and moreso, why would you need a low-res Z-buffer?
I don't understand your first question. You can't downsample directly to the front buffer because of tearing. As far as the low-res Z buffer goes, think about render-to-texture.
 
OpenGL guy said:
I don't understand your first question. You can't downsample directly to the front buffer because of tearing. As far as the low-res Z buffer goes, think about render-to-texture.
As for the downsampling, I thought you could do that in the vertical retrace phase if VSync is on. And if it's off, you'll get tearing anyway.

Additionally, you might not have to downsample to the front buffer, depending on the memory layout. Let's say you have five buffers and use the first four as AA backbuffer and the last one as front buffer. If the frame is finished, downsample it to the first buffer (that works because you only overwrite values that are already downsampled). Then use 1 as front buffer and 2-5 as AA back buffer.

Render-to-texture, ok, but why keep a low-res Z-buffer for 'ordinary' rendering?
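
A toy model of that rotation, just to illustrate the bookkeeping (buffer indices only, nothing else):

#include <stdio.h>

int main(void)
{
    /* five equal buffers in a ring: four hold the 4x AA samples, one is the front buffer */
    int front = 4;                            /* buffers 0..3 = AA samples, 4 = front */
    for (int frame = 0; frame < 4; frame++) {
        int target = (front + 1) % 5;         /* downsample into the first AA buffer...   */
        printf("frame %d: front = %d, downsample to %d\n", frame, front, target);
        front = target;                       /* ...which then becomes the new front buffer */
    }
    return 0;
}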
 
Xmas said:
OpenGL guy said:
I don't understand your first question. You can't downsample directly to the front buffer because of tearing. As far as the low-res Z buffer goes, think about render-to-texture.
As for the downsampling, I thought you could do that in the vertical retrace phase if VSync is on. And if it's off, you'll get tearing anyway.

Additionally, you might not have to downsample to the front buffer, depending on the memory layout. Let's say you have five buffers and use the first four as AA backbuffer and the last one as front buffer. If the frame is finished, downsample it to the first buffer (that works because you only overwrite values that are already downsampled). Then use 1 as front buffer and 2-5 as AA back buffer.
You don't necessarily need 4 buffers for 4x AA; you may need just one large buffer.

In any event, you'll still need the backbuffer anyway because of the way D3D and OpenGL work. I.e. It's not the driver that determines the backbuffer, but the application's flipchain (in D3D). Also, how would you handle the case where the application wants to lock the backbuffer? The driver can't know whether you're going to lock the backbuffer or not.
Render-to-texture, ok, but why keep a low-res Z-buffer for 'ordinary' rendering?
How can the driver possibly know if you are ever going to use it or not?
 
Guys, I am now getting 142MB.

bitDepth = bitDepth/8;  // bits per pixel -> bytes per pixel

buffers = 1+1+1+antiAliasing+antiAliasing;  // front + back + Z + AA colour + AA Z

videoMemory = (width*height*bitDepth*buffers)+textureSize*pow(10, 6);  // textureSize in MB
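
In runnable form (textureSize is in MB here; the 27 MB in main() is just a value that reproduces the 142 MB figure, not a measurement):

#include <stdio.h>
#include <math.h>

double videoMemoryMB(int width, int height, int bitDepth, int antiAliasing, double textureSizeMB)
{
    int bytesPerPixel = bitDepth / 8;
    /* front + back + Z + AA colour samples + AA Z samples */
    int buffers = 1 + 1 + 1 + antiAliasing + antiAliasing;
    double bytes = (double)width * height * bytesPerPixel * buffers + textureSizeMB * pow(10, 6);
    return bytes / 1e6;
}

int main(void)
{
    printf("%.0f MB\n", videoMemoryMB(1600, 1200, 32, 6, 27.0)); /* ~142 MB */
    return 0;
}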
 
OpenGL guy said:
You don't necessarily need 4 buffers for 4x AA; you may need just one large buffer.
I just said 4 buffers because it was easier to explain that way.

In any event, you'll still need the backbuffer anyway because of the way D3D and OpenGL work. I.e. It's not the driver that determines the backbuffer, but the application's flipchain (in D3D). Also, how would you handle the case where the application wants to lock the backbuffer? The driver can't know whether you're going to lock the backbuffer or not.
I don't exactly know how the D3D flip chain works, but in OpenGL you have hardly any influence on that process.
If the application wants to lock the framebuffer, you need to allocate a buffer and downsample to it at that time, but you do not need to have this buffer permanently (although it might be a good idea in some cases).
Render-to-texture, ok, but why keep a low-res Z-buffer for 'ordinary' rendering?
How can the driver possibly know if you are ever going to use it or not?
I was under the impression that you have to do certain API-specific things before you can render to a surface and then take it as a texture. And even if you want a depth texture, wouldn't it be possible to generate this from the high-res Z-buffer if necessary?
 
Xmas said:
OpenGL guy said:
You don't necessarily need 4 buffers for 4x AA; you may need just one large buffer.
I just said 4 buffers because it was easier to explain that way.

In any event, you'll still need the backbuffer anyway because of the way D3D and OpenGL work. I.e. It's not the driver that determines the backbuffer, but the application's flipchain (in D3D). Also, how would you handle the case where the application wants to lock the backbuffer? The driver can't know whether you're going to lock the backbuffer or not.
I don't exactly know how the D3D flip chain works, but in OpenGL you have hardly any influence on that process.
If the application wants to lock the framebuffer, you need to allocate a buffer and downsample to it at that time, but you do not need to have this buffer permanently (although it might be a good idea in some cases).
In DX9, you get calls like SetRenderTarget and SetDepthStencil. Remember, it's the application that allocates these surfaces (backbuffers and depth buffers). If the driver is forcing AA, then this forcing has to be transparent to the application.

When forcing AA, you'll more often need that non-AA Z buffer because the application could use it anytime (there are many AA unfriendly apps out there that make my life hell). With application enabled AA, this isn't a problem because the application should be doing things in an AA friendly way.
Render-to-texture, ok, but why keep a low-res Z-buffer for 'ordinary' rendering?
How can the driver possibly know if you are ever going to use it or not?
I was under the impression that you have to do certain API-specific things before you can render to a surface and then take it as a texture.
This is true. In D3D, for example, you have to allocate a texture with the 3DDEVICE flag set. However, the driver still doesn't know what (if any) Z buffer will be used with this texture.
And even if you want a depth texture, wouldn't it be possible to generate this from the high-res Z-buffer if necessary?
How do you downsample the Z buffer? I mean, it can be tricky to manage different states of the same buffer (using AA Z buffer for non-AA use).
 
How much RAM would I need for 2560x1600 with 16x AA vs 2560x1600 with 8x AA (using 16x AF... not counting textures)?

Just asking because if I do that particular settings change, I go from 60fps to 4 fps in the valve video stress test.

edit: Okay, now I've done it. It's alive. What a resurrection. Well, same excuse again: I was about to make a new thread when this one popped up.
 
2560 * 1600 * 4 * (1 (front buffer) + 1 (back buffer) + 1 (Z buffer) + 16 (AA back buffer) + 16 (AA Z buffer)) = quite a lot (~547 MiByte).
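
And for the 8x AA case from the question, with the same assumptions (32 bpp, no textures):

2560 * 1600 * 4 * (1 (front buffer) + 1 (back buffer) + 1 (Z buffer) + 8 (AA back buffer) + 8 (AA Z buffer)) = ~297 MiByte

So going from 8x to 16x roughly doubles the framebuffer footprint, which would help explain the card running out of local memory and the drop to 4 fps.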
 