I am having problems detecting forced multisample antialiasing in DX9.
Background info:
I create the D3D device with an automatic depth buffer and color buffer. Before creating the device, I search for a suitable 24+ bit depth buffer format with at least 8 bits of stencil, then find a compatible color buffer format for it (with the same color depth, using CheckDepthStencilMatch to verify compatibility). These D3DFORMATs are set as AutoDepthStencilFormat and BackBufferFormat in the D3DPRESENT_PARAMETERS structure. No problems so far.
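For reference, here is roughly what that selection logic looks like (the helper name and candidate list are simplified for this post; the API calls are standard IDirect3D9 methods):

```cpp
// Sketch of the depth/stencil format search described above.
// Candidate list covers 24+ bit depth formats with 8 stencil bits.
D3DFORMAT FindDepthStencilFormat(IDirect3D9* d3d, D3DFORMAT adapterFormat,
                                 D3DFORMAT backBufferFormat)
{
    const D3DFORMAT candidates[] = { D3DFMT_D24S8, D3DFMT_D24FS8 };
    for (int i = 0; i < 2; ++i)
    {
        // Can the adapter create this format as a depth/stencil surface?
        if (FAILED(d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                adapterFormat, D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_SURFACE,
                candidates[i])))
            continue;
        // Is it compatible with the chosen backbuffer format?
        if (SUCCEEDED(d3d->CheckDepthStencilMatch(D3DADAPTER_DEFAULT,
                D3DDEVTYPE_HAL, adapterFormat, backBufferFormat,
                candidates[i])))
            return candidates[i];
    }
    return D3DFMT_UNKNOWN; // no suitable match found
}
```

The winning format goes into AutoDepthStencilFormat and the color format into BackBufferFormat before CreateDevice.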
Then I create my four g-buffers (render targets) for the deferred renderer. These are created with the same size and format as the backbuffer, and they share the backbuffer's z-buffer. However, if forced AA is enabled, the driver silently creates the z-buffer as a multisampled buffer, which is incompatible with the non-multisampled g-buffers.
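For context, the g-buffer setup looks roughly like this (variable names are illustrative; I'm assuming texture-based render targets here). Note that D3D9 render target textures are implicitly non-multisampled, which is exactly what clashes with a silently multisampled automatic z-buffer:

```cpp
// Sketch of the g-buffer creation described above (names illustrative).
IDirect3DTexture9* gBuffers[4] = { 0 };
for (int i = 0; i < 4; ++i)
{
    // Render target textures are always D3DMULTISAMPLE_NONE.
    HRESULT hr = device->CreateTexture(
        backBufferWidth, backBufferHeight, 1,
        D3DUSAGE_RENDERTARGET, backBufferFormat,
        D3DPOOL_DEFAULT, &gBuffers[i], NULL);
    if (FAILED(hr))
        { /* handle creation failure as appropriate */ }
}
```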
ATI drivers seem to handle this situation properly: they notice that the z-buffer is being used with a non-multisampled render target and disable the forced driver AA. NVidia drivers, however, do not handle this situation, and the graphics get badly corrupted when forced AA is used.
To fix this issue, I see three options:
A) Make all g-buffers multisampled if the z-buffer is multisampled. However, both NVidia and ATI drivers report incorrect information about the real buffer format: the D3DSURFACE_DESC always claims the z-buffer is not multisampled, making it impossible for me to detect the z-buffer's real format (see the first sketch after this list).
B) Check whether the depth buffer is multisampled, and warn the user that forced MSAA must be disabled to run the application. This is also impossible, because the driver does not reveal the true buffer format to the application.
C) Create a separate non-multisampled z-buffer for the g-buffers (see the second sketch below) and hope the driver is clever enough not to force it to be multisampled. However, with this technique I must do a separate z-only pass for each buffer, which costs a lot of performance (I cannot copy a z-buffer from a multisampled format to a non-multisampled one). Since I cannot detect in any way whether forced driver AA is enabled, this is a really bad fix: the performance drop applies even when antialiasing is disabled.
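This is what the detection for options A and B would look like if the driver reported the truth; in practice, desc.MultiSampleType comes back as D3DMULTISAMPLE_NONE even when AA has been forced:

```cpp
// Attempted detection for options A/B. Under forced driver AA this
// check is useless: the driver hides the real multisample type.
IDirect3DSurface9* zbuffer = NULL;
if (SUCCEEDED(device->GetDepthStencilSurface(&zbuffer)))
{
    D3DSURFACE_DESC desc;
    zbuffer->GetDesc(&desc);
    // Always reports D3DMULTISAMPLE_NONE, even with forced MSAA on.
    BOOL forcedMsaa = (desc.MultiSampleType != D3DMULTISAMPLE_NONE);
    zbuffer->Release();
}
```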
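And option C, creating an explicitly non-multisampled z-buffer for the g-buffer pass (surface and variable names illustrative):

```cpp
// Option C: a dedicated, explicitly non-multisampled depth/stencil
// surface for the g-buffers, hoping the driver honors the request.
IDirect3DSurface9* gBufferZ = NULL;
device->CreateDepthStencilSurface(
    backBufferWidth, backBufferHeight, depthStencilFormat,
    D3DMULTISAMPLE_NONE, 0, // explicitly no multisampling
    TRUE,                   // depth contents may be discarded
    &gBufferZ, NULL);
```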
Is there any way to solve this problem properly, without sacrificing performance? Are other games using deferred rendering handled differently by the graphics card drivers (driver-side application detection, perhaps)?
Update:
Based on my testing, I believe that when forced MSAA is enabled, GeForce drivers silently create two copies of the z-buffer: one multisampled (used with the backbuffer) and one non-multisampled (used with the g-buffers). At the beginning of frame rendering I set the backbuffer and z-buffer as my render target combination and clear the color, depth, and stencil values. This seems to clear only the multisampled, driver-created extra z-buffer; when I later in the frame set my g-buffers as the render target, the driver silently switches to the non-multisampled z-buffer, which still contains last frame's data, so only surfaces with smaller z-values get rendered. I could of course clear the z-buffer twice per frame, but that would cause problems, as I use the z-information with both the g-buffer render targets (geometry rendering) and the backbuffer (light area rendering, particle rendering, etc.).
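To illustrate, here is my frame structure with the behavior I believe the driver exhibits noted in comments (this is my interpretation from testing, not documented driver behavior; names are illustrative):

```cpp
// Start of frame: backbuffer + automatic z-buffer, single clear.
device->SetRenderTarget(0, backBuffer);
device->SetDepthStencilSurface(autoZBuffer);
device->Clear(0, NULL,
    D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER | D3DCLEAR_STENCIL,
    0x00000000, 1.0f, 0);
// Under forced AA, this appears to clear only the driver's hidden
// *multisampled* z-buffer copy.

device->SetRenderTarget(0, gBufferSurface0);
// The driver now silently binds its *non-multisampled* z-buffer copy,
// which still holds last frame's depth values, so only surfaces with
// smaller z-values pass the depth test.
```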