Render to depth texture with EXT_framebuffer_object

Ostsol

Rendering the colour buffer to a texture was relatively easy, but for some reason I can't seem to render to a depth texture. If I follow the example provided in the spec, I get a "framebuffer incomplete" error.

I'm using a Radeon 9700 Pro with the Cat 5.7 drivers. Here are the relevant functions:

Code:
void CreateRenderTarget ()
{
	glGenFramebuffersEXT (1, &g_nFrameBuffer);
	glGenRenderbuffersEXT (1, &g_nDepthBuffer);

	glBindRenderbufferEXT (GL_RENDERBUFFER_EXT, g_nDepthBuffer);
	glRenderbufferStorageEXT (GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, 512, 512);

	CheckFrameBufferStatus ();

	glBindRenderbufferEXT (GL_RENDERBUFFER_EXT, 0);
}

void LoadDepthTarget ()
{
	while (glGetError () != GL_NO_ERROR);

	glGenTextures (1, &g_nDepthTarget);

	glBindTexture (GL_TEXTURE_2D, g_nDepthTarget);

	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
	glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

	glTexImage2D (GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 512, 512, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

	CheckGLErrors ();

	glBindTexture (GL_TEXTURE_2D, 0);
}

void Draw ()
{
	glMatrixMode (GL_MODELVIEW);

	// draw to the texture
	glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, g_nFrameBuffer);

	glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, g_nDepthTarget, 0);

	CheckFrameBufferStatus ();

	glClearColor (0.5f, 0.0f, 0.0f, 1.0f);
	glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	glPushMatrix ();
		glTranslatef (0.0f, 0.0f, -100.0f);

		DrawBackground ();
	glPopMatrix ();

	// draw to the back buffer
	glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, 0);

	glClearColor (0.0f, 0.0f, 0.0f, 1.0f);
	glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	glPushMatrix ();
		glTranslatef (0.0f, 0.0f, -100.0f);

		glEnable (GL_TEXTURE_2D);
		glBindTexture (GL_TEXTURE_2D, g_nDepthTarget);

		DrawBackground ();
	glPopMatrix ();

	glDisable (GL_TEXTURE_2D);

	SDL_GL_SwapBuffers ();
}
 
Ah, crap. Just searched through the OpenGL.org forums and it appears that ATI doesn't support render-to-depth-texture in current hardware. :(
 
I did write a quick test for depth textures and FBOs a while back, and I remember things working properly. The only difference I can see is that I used a 16-bit depth format for the texture. That may be the entire issue, though I might have been testing it on a later driver.

I would suggest quickly trying the 16-bit format.

-Evan
 
Yup, 16-bit is all we support ATM. Another thing is that you seem to have forgotten to call glDrawBuffer(GL_NONE). This is FBO state, so you only need to call it once in your creation code and it will stick to that framebuffer object.
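
For reference, a rough sketch of the setup being suggested, reusing the names from the first post (GL_DEPTH_COMPONENT16 instead of GL_DEPTH_COMPONENT24, plus glDrawBuffer(GL_NONE) on the FBO; the glReadBuffer(GL_NONE) call is my own addition, not mentioned above, but it is the usual companion):

Code:
glBindTexture (GL_TEXTURE_2D, g_nDepthTarget);
glTexImage2D (GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, 512, 512, 0,
              GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glBindTexture (GL_TEXTURE_2D, 0);

glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, g_nFrameBuffer);
glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                           GL_TEXTURE_2D, g_nDepthTarget, 0);

// FBO state: no colour buffer is written or read, so the framebuffer can be
// complete with only a depth attachment.
glDrawBuffer (GL_NONE);
glReadBuffer (GL_NONE);

CheckFrameBufferStatus ();
glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, 0);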
 
Ah. . . that works. (EDIT: Both of those helped. ;)) Is there any hope for 24-bit depth texture support in the future?
 
You should be able to write depth to a single-channel FP32 target. It's a waste of bandwidth, of course, as you still need to read/write the "real" depth buffer for HSR. But it works, and current ATI chips don't have double Z or PCF for shadow mapping anyway.
 
Xmas said:
You should be able to write depth to a single-channel FP32 target. It's a waste of bandwidth, of course, as you still need to read/write the "real" depth buffer for HSR. But it works, and current ATI chips don't have double Z or PCF for shadow mapping anyway.
Unfortunately, EXT_framebuffer_object doesn't seem to allow one to try and render depth to a float texture.
 
You can't use a float texture as a depth attachment. Z can only go into textures with a depth format. But you can render to a float texture as a color attachment, and write depth into it.
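
A minimal sketch of that approach, assuming GL_ATI_texture_float (or GL_ARB_texture_float) is exposed; the texture name and the choice of GL_LUMINANCE_FLOAT32_ATI as the single-channel FP32 format are my own assumptions, not from the posts above:

Code:
GLuint nFloatTarget;
glGenTextures (1, &nFloatTarget);
glBindTexture (GL_TEXTURE_2D, nFloatTarget);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);	// no float filtering on this hardware
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D (GL_TEXTURE_2D, 0, GL_LUMINANCE_FLOAT32_ATI, 512, 512, 0,
              GL_LUMINANCE, GL_FLOAT, NULL);

// Attach it as a colour attachment, not a depth attachment; the fragment
// shader writes its depth/distance value as the fragment colour.  A normal
// depth renderbuffer (g_nDepthBuffer from the first post) still provides the
// real depth buffer for hidden-surface removal.
glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, g_nFrameBuffer);
glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                           GL_TEXTURE_2D, nFloatTarget, 0);
glFramebufferRenderbufferEXT (GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                              GL_RENDERBUFFER_EXT, g_nDepthBuffer);
CheckFrameBufferStatus ();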
 
Ostsol said:
Well, I seem to be having an issue with shadow-popping using 16 bit depth textures.

I'll bet that's not a precision problem....
Unless you're projecting to some huge depth range, it'll almost certainly just be an artifact of the reprojection. Reprojection artifacts will be MUCH worse than you intuitively expect, especially if the light and the camera are a long way from co-linear.

I'd suggest either midpoint depth maps or using the backfaces to generate the depth maps. And even then you'll still need to bias the samples.
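
For instance, rendering back faces into the depth map is just a matter of flipping the cull face during the depth pass. A sketch, reusing g_nFrameBuffer from the first post; DrawShadowCasters is a hypothetical stand-in for whatever draws the scene:

Code:
glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, g_nFrameBuffer);
glClear (GL_DEPTH_BUFFER_BIT);

glEnable (GL_CULL_FACE);
glCullFace (GL_FRONT);		// cull front faces: the depth of back faces is stored
DrawShadowCasters ();		// hypothetical scene-drawing call

glCullFace (GL_BACK);		// restore normal culling for the main pass
glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, 0);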
 
Hmm. . . Maybe I should read up more on these types of shadows. . . I didn't think that my simple test would have such problems. What seems to work best is setting the far clip to the maximum distance at which shadows will be received. Still, even with the far clip set to 10.0, a shadow cast upon an object just 1 unit away will disappear.
 
Got it working with an FP32 texture. Just curious, but are there any suggestions as to how I should go about it?

The "depth" pass in my method used projective texture texcoord generation and output the distance, as stored in the texcoord's z coordinate, into the float buffer. In the render pass I projected the float texture onto the scene and compared the sampled texel with the texcoord's z coordinate (minus 0.1 to avoid shadowing artifacts).

Basically the same as normal depth-buffer shadows, but potentially using the full range of floating-point numbers instead of the [0, 1] range of normal depth buffers. I didn't think that simply outputting the z-depth to the float target would be a good idea, since the fragment pipeline's internal mantissa precision is only 16 bits -- the same as the 16-bit depth textures I was using before.

So, is there a better way of doing this?
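
For reference, a rough GLSL sketch of the two passes described above, kept as C string constants. The names, the assumption that vLightCoord is the light's projective coordinate produced by texgen or a vertex shader, and the shading values are illustrative; the 0.1 bias is the value from the post:

Code:
static const char *g_pszDistanceFrag =
	"varying vec4 vLightCoord;\n"
	"void main ()\n"
	"{\n"
	"    /* store the light-space distance in the FP32 colour target */\n"
	"    gl_FragColor = vec4 (vLightCoord.z);\n"
	"}\n";

static const char *g_pszShadowFrag =
	"uniform sampler2D tShadowMap;\n"
	"varying vec4 vLightCoord;\n"
	"void main ()\n"
	"{\n"
	"    float fStored = texture2DProj (tShadowMap, vLightCoord).x;\n"
	"    /* lit if this fragment is no further from the light than the stored distance */\n"
	"    float fLit = (vLightCoord.z - 0.1 <= fStored) ? 1.0 : 0.25;\n"
	"    gl_FragColor = vec4 (fLit);\n"
	"}\n";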
 
It feels to me like you might have a bug, because, as you mentioned, the Depth16 texture already holds nearly all the precision available. The shadow depth offset smells funny to me as well. A 0.1 offset is just less than 1/8, so I think you are throwing away a significant number of your bits of precision by adding that factor in. Remember that the projective divide does not map eye-space depths linearly into the depth buffer.

-Evan
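
To illustrate how non-linear that mapping is: for a standard perspective projection with near plane n and far plane f, window-space depth is z_w = f(z - n) / (z(f - n)), so with near = 1 and far = 1000 roughly half the depth range is spent on eye-space z between 1 and 2. A standalone sketch (not from the post above):

Code:
#include <stdio.h>

/* Window-space depth for a standard perspective projection with near plane n
 * and far plane f, given a positive eye-space distance z. */
static float WindowDepth (float z, float n, float f)
{
	return (f * (z - n)) / (z * (f - n));
}

int main (void)
{
	const float n = 1.0f, f = 1000.0f;
	const float z[] = { 1.0f, 2.0f, 10.0f, 100.0f, 1000.0f };

	for (int i = 0; i < 5; ++i)
		printf ("eye z = %7.1f  ->  window depth = %f\n", z[i], WindowDepth (z[i], n, f));

	return 0;
}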
 
ehart said:
It feels to me like you might have a bug, because, as you mentioned, the Depth16 texture already holds nearly all the precision available. The shadow depth offset smells funny to me as well. A 0.1 offset is just less than 1/8, so I think you are throwing away a significant number of your bits of precision by adding that factor in.
The 0.1 offset is only used in the float texture method (which appears to be working perfectly). Indeed, without an offset, I get artifacts resembling tearing when the light is moving away from the geometry. That may certainly be due to a bug elsewhere, since the scene renders fine when there is no motion.

Anyway, I had no such offset enabled when I was using a normal depth texture.

Remember that the projective divide does not map eye-space depths linearly into the depth buffer.
Yep, I realise this.
 
You'll need to bias with both float and integer depth.

Personally I'd suggest using linear depth; at least then your offset will behave consistently. Offsets generally have to be quite large, which is why people use methods like midpoint and backface shadows: they largely solve the problem of shadow offsets without unshadowing things that should be shadowed.
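
For the plain integer depth-map path, a common way to apply that bias is polygon offset during the depth pass. A sketch; the factor/units values and DrawShadowCasters are illustrative:

Code:
glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, g_nFrameBuffer);
glClear (GL_DEPTH_BUFFER_BIT);

glEnable (GL_POLYGON_OFFSET_FILL);
glPolygonOffset (2.0f, 4.0f);	// slope-scaled factor plus a constant offset in depth units
DrawShadowCasters ();		// hypothetical scene-drawing call

glDisable (GL_POLYGON_OFFSET_FILL);
glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, 0);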
 
Very nice that GL_EXT_framebuffer_object is now implemented for ATI! There's one problem, though: I try to detect which texture formats can be rendered to with the following function (I let fmt iterate over various formats):

Code:
// Create and attach texture
glGenTextures(1, &tid);
glBindTexture(target, tid);
glTexParameteri(target, GL_TEXTURE_MAX_LEVEL, 0);

// 2D
glTexImage2D(target, 0, fmt, PROBE_SIZE, PROBE_SIZE, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

// Create and attach framebuffer
glGenFramebuffersEXT(1, &fb);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, target, tid, 0);

// Check status
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if(status == GL_FRAMEBUFFER_COMPLETE_EXT)
{
    mProps[x].valid = true;

    // Continue detection for depth stencil etc ...
}

// Delete texture and framebuffer
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fb);
glDeleteTextures(1, &tid);

It crashes with a segmentation fault inside the driver when unbinding the framebuffer if the format is unsupported, for example GL_LUMINANCE8 or GL_LUMINANCE16. The extension document says that trying out formats to see whether they are supported is the way to go.

This happens with Catalyst 5.7. It works fine with NVIDIA. Help!
 
Ostsol,

normal 16-bit z-buffers only give you 16 bits of precision for the 1/z range from 1 to 0.5.

fp24 will give you full precision for the 1/z range from 1 down to 0.0000000000000000001 or something like that.

Besides, with the implied bit, fp24 has 17 bits of precision, so it's a win even for the small 1 to 0.5 part of the range. On the other hand, using only the [0, 1] range wastes just one bit of exponent compared to using the [0, inf] range...
 
Using FBO depth buffers with multiple texture units

Hi,
I started this as a response to this thread (it is closely related), but in the end I thought it was different enough to warrant a separate thread; see
http://www.beyond3d.com/forum/showthread.php?t=25031

I'm certainly getting good results now using FBOs for depth textures; see the thread linked above for more details and a follow-on question.
 