Does a D3D app have to specify double or triple buffering?

kyleb

Or can the app just leave the number of backbuffers up in the air for the video drivers to do as they please?
 
DirectX docs:

BackBufferCount
This value can be 0 (or 1, 0 is treated as 1), 2, or 3. If the number of back buffers cannot be created, the runtime will fail the method call and fill this value with the number of back buffers that could be created. As a result, an application can call the method twice with the same D3DPRESENT_PARAMETERS structure and expect it to work the second time.

The method fails if one back buffer cannot be created. The value of BackBufferCount influences what set of swap effects are allowed. Specifically, any D3DSWAPEFFECT_COPY swap effect requires that there be exactly one back buffer.

So by that, it's explicitly set by the application; however, I wouldn't be surprised if certain drivers override this for some 'select' games.

So according to this, no, you can't leave it up to the driver. Personally I just leave it as a parameter in my code when creating a device; I usually use 2.
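
For illustration, here's a minimal sketch (mine, not from the thread) of setting BackBufferCount when creating a D3D9 device, with the retry-on-failure behaviour the docs describe. The 'd3d' pointer and 'hWnd' handle are assumed to have been created earlier:

Code:
// Request triple buffering (2 back buffers) and retry once, since the runtime
// writes back the count it could actually create on failure.
// Assumes: IDirect3D9* d3d and HWND hWnd already exist.
D3DPRESENT_PARAMETERS pp = {};
pp.Windowed             = TRUE;
pp.BackBufferFormat     = D3DFMT_UNKNOWN;          // use current desktop format
pp.BackBufferCount      = 2;                       // 2 back buffers + front buffer
pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;   // COPY would require exactly 1
pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE; // vsync on

IDirect3DDevice9* device = NULL;
HRESULT hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                               D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);
if (FAILED(hr))
{
    // Per the docs, pp.BackBufferCount now holds the number of back buffers the
    // runtime could create, so a second call with the same struct can succeed.
    hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                           D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);
}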
 
Any chance that "0 is treated as 1" is only referring to certain drivers; as in, DirectX simply passes the 0 value on to the drivers and they choose to use 1 back buffer? I am curious because I have come across many D3D games where I get harsh framerate transitions of double buffering on my Geforce, but not with my Radeon. However, a few games give me harsh framerate transitions on both my Geforce and my Radeon; so it would make sense that those games set BackBufferCount to 1, while the many games that don't behave the same on both cards have the setting at 0, eh?
 
kyleb said:
Any chance that "0 is treated as 1" is only referring to certain drivers; as in, DirectX simply passes the 0 value on to the drivers and they choose to use 1 back buffer? I am curious because I have come across many D3D games where I get harsh framerate transitions of double buffering on my Geforce, but not with my Radeon. However, a few games give me harsh framerate transitions on both my Geforce and my Radeon; so it would make sense that those games set BackBufferCount to 1, while the many games that don't behave the same on both cards have the setting at 0, eh?

For triple buffering you need 2 back buffers. Working without a back buffer at all would force the GPU to render into the front buffer, and nobody wants that today. Because of this, a 0 is always treated as 1.
 
kyleb said:
Any chance that "0 is treated as 1" is only referring to certain drivers; as in, DirectX will simply pass the 0 value on to the drivers and they choose to use 1 backbuffer?

This value is never passed to the driver.
The D3D runtime creates the back buffer(s) using DdCreateSurface calls.
Actually those surfaces are created before the 3D device is initialized with D3dContextCreate.
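
A hedged aside: since the runtime rather than the driver owns the back buffers, an app can also ask the runtime how many were actually created. A rough sketch, reusing the hypothetical 'device' from the earlier snippet:

Code:
// Count the back buffers in swap chain 0 by walking them until the call fails.
UINT count = 0;
IDirect3DSurface9* surface = NULL;
while (SUCCEEDED(device->GetBackBuffer(0, count, D3DBACKBUFFER_TYPE_MONO, &surface)))
{
    surface->Release();
    ++count;
}
// 'count' now equals the BackBufferCount the runtime actually honoured.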
 
Fair enough, so it's obviously not triple buffering I am seeing. However, does anyone have any idea why, in so many games, most recently Fable, my Geforce gives me drops from my 60Hz refresh straight to 30fps and then directly to 20fps, yet with my Radeon it feels smoother, and with FRAPS I see the framerate often wind up somewhere in between the usual 1, 1/2, 1/3 of the refresh rate that I would expect with only double buffering?
 
kyleb said:
Fair enough, so it's obviously not triple buffering I am seeing. However, does anyone have any idea why, in so many games, most recently Fable, my Geforce gives me drops from my 60Hz refresh straight to 30fps and then directly to 20fps, yet with my Radeon it feels smoother, and with FRAPS I see the framerate often wind up somewhere in between the usual 1, 1/2, 1/3 of the refresh rate that I would expect with only double buffering?

I suspect that's an nVidia driver bug...
 
How do you mean? Maybe I am confused here, but as I understand it, a single back buffer should always give me a framerate that is 1:1, 1/2:1, 1/3:1, and so on with respect to my refresh rate; so I don't follow how that could be a bug in Nvidia's drivers.
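
As a rough illustration of that reasoning (my numbers, not from the thread): with vsync and a single back buffer, a frame that misses a vblank has to wait for the next one, so the displayed rate quantises to refresh / ceil(frame time / refresh period); with a spare back buffer the GPU can keep rendering instead of stalling, which is how intermediate averages become possible. A small sketch:

Code:
#include <cmath>
#include <cstdio>

int main()
{
    const double refresh = 60.0;              // Hz
    const double period  = 1000.0 / refresh;  // ~16.7 ms per vblank
    const double frames[] = { 15.0, 17.0, 25.0, 34.0 };  // render times in ms

    for (unsigned i = 0; i < sizeof(frames) / sizeof(frames[0]); ++i)
    {
        double ms = frames[i];
        // Double buffering + vsync: locked to refresh, refresh/2, refresh/3, ...
        double doubleBuffered = refresh / std::ceil(ms / period);
        // Idealised triple buffering: GPU never stalls, so ~1000/ms (capped at refresh).
        double tripleBuffered = (ms < period) ? refresh : 1000.0 / ms;
        std::printf("%.0f ms -> double: %.0f fps, triple: ~%.0f fps\n",
                    ms, doubleBuffered, tripleBuffered);
    }
    return 0;
}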
 
I have used both vsync forced in the drivers and application-controlled with it turned on in the game, and as best I could tell, both ways had the same results.
 
If the game uses triple buffering and the frame rate is still locked to (refresh/n) values, then it's a driver problem.

I've seen such a thing on my nVidia card at work, but that was in windowed mode so it might not apply.
(Also, forcing vsync on from the driver doesn't work for windowed apps, but that's probably by design.)
 
Yeah, I have noticed that about vsync and windowed apps; but I am curious about the relationship between triple buffering and the framerate lock that you mention. How does that work?
 
I checked windowed mode on an X800 and found that when the FPS is above the refresh rate it locks into multiples of the refresh rate.
So with a refresh of 60Hz I get 60, 120, 180 etc. FPS.

This is with vsync disabled from the app, and App preference set on the control panel.
 