Triple buffering in OpenGL

Neeyik

I see it enough times around the Net but I don't think I've ever read an exact explanation anywhere. From what I've read, D3D has triple buffering enabled via the back buffer count in the D3DPRESENT_PARAMETERS structure, but I can't see a similar thing in OGL - what am I missing?
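For reference, here's a rough Direct3D 9-style sketch of what I mean; hWnd and the IDirect3D9 object are assumed to come from the usual window/device setup code:

#include <d3d9.h>
#pragma comment(lib, "d3d9.lib")

// Rough sketch: ask for two back buffers (plus the front buffer = triple buffering).
// hWnd and d3d are assumed to come from the usual setup code.
IDirect3DDevice9* CreateTripleBufferedDevice(IDirect3D9* d3d, HWND hWnd)
{
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed             = TRUE;
    pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat     = D3DFMT_UNKNOWN;           // use the current desktop format
    pp.BackBufferCount      = 2;                        // 2 back buffers + 1 front buffer
    pp.hDeviceWindow        = hWnd;
    pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;  // vsync on

    IDirect3DDevice9* device = NULL;
    if (FAILED(d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                                 D3DCREATE_HARDWARE_VERTEXPROCESSING,
                                 &pp, &device)))
        return NULL;
    return device;
}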
 
In general it's left to the driver to decide. If you request a double-buffered pixel format, you might get a triple-buffered one. OpenGL doesn't handle stuff like buffer allocation and pixel format selection; that is generally done by a small "glue" layer separate from OpenGL: WGL on Windows, GLX on X, AGL on the Mac, etc. It's possible one of these has a built-in way to ask for a triple-buffered format, but there is at least no facility for it in WGL as far as I know.
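To illustrate, this is about all you can ask for through WGL; PFD_DOUBLEBUFFER is the only buffering flag there is, so whether a third buffer appears behind the scenes is entirely up to the driver (hdc is assumed to be the window's device context):

#include <windows.h>
#include <GL/gl.h>
#pragma comment(lib, "opengl32.lib")
#pragma comment(lib, "gdi32.lib")

// Standard WGL setup: the only buffering-related request is PFD_DOUBLEBUFFER.
// There is no flag for a third buffer; the driver decides that on its own.
bool SetupDoubleBufferedContext(HDC hdc)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    int format = ChoosePixelFormat(hdc, &pfd);
    if (format == 0 || !SetPixelFormat(hdc, format, &pfd))
        return false;

    HGLRC rc = wglCreateContext(hdc);
    return rc != NULL && wglMakeCurrent(hdc, rc) != FALSE;
}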

edit: fixed a typo that was very misleading (glu -> glue)
 
In the ATi driver control panel you can turn triple buffering on or off. It seems that triple buffering is used whenever you select this and there is enough memory; otherwise, double buffering is used.
I don't think the application can control this at all.
 
Thanks for the replies - can I take it, then, that OGL offers no direct means of enabling triple buffering? I can partially understand why it's okay to just let the hardware drivers handle this, but considering that just about every modern graphics card I can think of (a) supports triple buffering and (b) has enough local memory to cope with it, it seems odd that it's never been integrated into the core.

On a related topic, is there a specific reason why drivers do not offer the option of overriding the back buffer count in D3D - e.g. use three instead of two? I have a feeling I actually know the reason why, but I have no confidence in it.
 
The more buffers, the laggier the gameplay will be, because the time until a frame you acted on gets on screen becomes longer and longer.
 
Which might be the reason it's not allowed in D3D; MS is very picky about keeping latency low, IIRC. Back in the day, Carmack had to put a glFlush at the Quake III title screen because otherwise the low geometry count (mainly textured quads, I think), coupled with the immense buffers the drivers used, caused mouse lag.

Anyway, this sort of thing isn't in the OpenGL core by design; windowing system stuff like this is kept out to increase portability. Why it isn't in WGL I can't say, but WGL is really a big mess in general - there isn't even an actual spec.
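To make the glFlush trick above concrete, here's a rough sketch of the general idea (not necessarily exactly what id did) - keeping the driver from queuing several frames ahead of the display:

#include <windows.h>
#include <GL/gl.h>
#pragma comment(lib, "opengl32.lib")

// Sketch of the latency-capping idea: stop the driver from buffering several
// frames ahead. glFinish() is the heavy-handed blocking version; glFlush()
// (as in the Quake III anecdote) merely pushes queued commands to the driver.
void PresentWithCappedLatency(HDC hdc)
{
    // ... issue the GL commands for this frame here ...
    SwapBuffers(hdc);  // queue the buffer swap
    glFinish();        // wait until everything has actually executed
}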
 
Anyway, this sort of thing isn't in the OpenGL core by design; windowing system stuff like this is kept out to increase portability. Why it isn't in WGL I can't say, but WGL is really a big mess in general - there isn't even an actual spec.

Doesn't it actually DECREASE portability when you need wgl/glx/etc extensions to do the same thing on different platforms?
You can't recompile a wgl-based application as-is anymore.
With the Direct3D model, there is a function in the API for stuff like swapchain management, setting resolution etc...
That would allow the same source code to recompile as-is for any platform that implements the API (MacDX, for example).

By the way, I seem to recall that in the NVIDIA display driver you could set a maximum number of buffers. So I guess if you set that to 2, you effectively disable triple buffering as well (for Direct3D, that is).

I disapprove of the forced settings in the display driver anyway... forcing AA can cause problems with certain software, and forcing AF can decrease performance considerably, since it is applied to all textures, even ones that never require it or for which it has no meaning (like a LUT). But I guess it's a nice hack for software that doesn't offer the settings itself.
 
Thread edited - thank you for all the responses so far. Scali's comment about forcing settings via the drivers raises a point - if NVIDIA are happy to allow end-users to potentially bork their games with AA/AF, why not offer triple buffering? I know this thread doesn't contain many replies, but I'm getting the impression that there is no API restriction preventing the back buffer count from being overridden in the drivers.
 
I'm getting the impression that there is no API restriction preventing the back buffer count from being overridden in the drivers.

This is correct.
The API can think it's rendering to the backbuffer in a double-buffering scheme, but the driver can just give the API an offscreen texture every time it requests the backbuffer, or at least another buffer in the chain. And the driver also controls the actual swapping, so it can include any number of operations there (like the supersampling hack on older hardware, as mentioned in another thread here).
Just like the driver controls the texture filtering and AA level (and in OpenGL also the texture format), whatever the API thinks it's setting.

If NVIDIA doesn't support triple buffering in OpenGL, they probably have a reason for it. Perhaps they don't do it because it decreases performance (less texture memory available) and makes applications feel more laggy, which may give the impression that the application is running slower. Or perhaps the thought of adding it just never crossed their minds :)
 
Scali said:
Doesn't it actually DECREASE portability when you need wgl/glx/etc extensions to do the same thing on different platforms?
You can't recompile a wgl-based application as-is anymore.
With the Direct3D model, there is a function in the API for stuff like swapchain management, setting resolution etc...
That would allow the same source code to recompile as-is for any platform that implements the API (MacDX, for example).

The idea was that windowing systems are/were diverse enough that any general system for allocating bit depth and attaching GL contexts to windows etc. wouldn't capture the underlying complexity of the OS. In practice I think this wasn't a very good choice; today we often need to allocate auxiliary buffers, and having to do it via the OS is kludgy. Not putting windowing and input functions in OpenGL was a good choice, but they were a little overzealous. Anyway, I was just pointing out why it is designed the way it is, not agreeing with the design choice. FWIW, almost all of the proposed modern additions/amendments to the OpenGL spec change this so that the application can control stuff like this more directly via OpenGL.
 
The idea was that windowing systems are/were diverse enough that any general system for allocating bit depth and attaching GL contexts to windows etc. wouldn't capture the underlying complexity of the OS.

Ah, I see what you mean. I guess this makes OpenGL itself more portable, since it is not limited by a certain kind of window/display format.
They shot themselves in the foot then, because effectively they only moved the issue from OpenGL itself to the application. And they completely cut off access to certain other features, such as this triple buffering: the OS doesn't expose it, and if OpenGL or the extensions/WGL/etc. don't expose it either, you just don't have access at all.

FWIW, almost all of the proposed modern additions/amendments to the OpenGL spec change this so that the application can control stuff like this more directly via OpenGL.

Perhaps there are some extensions to control the number of buffers in the swap chain, then... even if only vendor-specific.
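If nothing else, an application can at least check the WGL extension string for anything swap-chain related, assuming WGL_ARB_extensions_string is available (HasWGLExtension is just an illustrative helper name; I don't know of an actual extension that exposes the buffer count):

#include <windows.h>
#include <GL/gl.h>
#include <string.h>

// Requires a current GL context. WGL_ARB_extensions_string is a real extension;
// whether any vendor exposes a buffer-count control through WGL is exactly the
// open question here.
typedef const char* (WINAPI *PFNWGLGETEXTENSIONSSTRINGARBPROC)(HDC hdc);

bool HasWGLExtension(HDC hdc, const char* name)
{
    PFNWGLGETEXTENSIONSSTRINGARBPROC wglGetExtensionsStringARB =
        (PFNWGLGETEXTENSIONSSTRINGARBPROC)wglGetProcAddress("wglGetExtensionsStringARB");
    if (wglGetExtensionsStringARB == NULL)
        return false;

    const char* extensions = wglGetExtensionsStringARB(hdc);
    return extensions != NULL && strstr(extensions, name) != NULL;
}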
 
Neeyik said:
...considering that just about every modern graphics card I can think of (a) supports triple buffering and (b) has enough local memory to cope with it, it seems odd that it's never been integrated into the core.
I think if a graphics card renders frames at a frame rate higher than the monitor's refresh rate (and you have vertical sync on), it doesn't need triple buffering.
 
davepermen said:
The more buffers, the laggier the gameplay will be, because the time until a frame you acted on gets on screen becomes longer and longer.
Of course, but at, say, a 100 Hz refresh you get on average 5 ms, at most 10 ms of added latency going from double to triple buffering.
In my experience, it's an excellent tradeoff as long as you don't run into memory limits.
 
Actually, with vsync enabled, latency shouldn't change at all, since the graphics card doesn't actually need to have three framebuffers going at the same time. Triple buffering just means that the rest of the system keeps processing while the graphics card waits to display the next frame.

Of course, there is added latency with triple buffering plus vsync compared to double buffering without vsync.
 