Hi
I am getting great results after reading the post
http://www.beyond3d.com/forum/showthread.php?p=530048#post530048
and setting up my code to work with 16-bit depths. But I have a question about using multiple texture units. Until I started using FBOs to reduce the jagged edges of my shadows (which they have done, excellent), I was using multiple texture units to handle the shadow maps for multiple lights in one render pass. Now that the shadow maps are rendered through an FBO, I am struggling to figure out how to render directly into the different shadow-map textures that the various texture units are meant to sample. In the end, I got it to work by using the FBO to render the depth map into a fixed texture object, then copying that into the texture object used by the other texture unit.
I may just be confused about the semantics of texture objects in the presence of multiple texture units.
Anyhow, here's the code, with my questions in more detail at the bottom:
Code:
// Here's the code, #ifdef'd to show the broken and working states (the working state uses glCopyTexImage2D)
//---- during init only
glGenFramebuffersEXT(1, &m_Framebuffer);
glGenRenderbuffersEXT(1, &m_DepthRenderbuffer);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, m_DepthRenderbuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT16, shadowsize, shadowsize);
//---- rendering shadowmaps only when scene geometry changes, not during view position changes
#ifdef BROKEN
SetCurrentTextureUnit(n+1); // shadow map for light 0,1,2... uses texture unit 1,2,3...
// just calls glActiveTextureARB(GL_TEXTURE3_ARB); for n=2 etc
GLuint tex = ShadowTextureID(n); // previously have done glGenTextures to allocate these
#else
GLuint tex = m_tex; // a fixed texture number allocated during init
#endif
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, shadowsize, shadowsize, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_Framebuffer);
glDrawBuffer(GL_NONE); // no color buffer dest
glReadBuffer(GL_NONE); // no color buffer src
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, tex);
CheckFramebufferStatus();
glBindTexture(GL_TEXTURE_2D, 0); // don't leave this texture bound (?unsure if this is correct but it appears to make no diff)
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // want depths only
DrawScene();
#ifndef BROKEN
SetCurrentTextureUnit(n+1); // shadow map for light 0,1,2... uses texture unit 1,2,3...
ShadowBindTexture(n);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, shadowsize, shadowsize, 0);
#endif
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
//-----
// The scene is then rendered each frame with the shadow-map texture units active (rough per-frame sketch after this code block)
//-----
// ShadowBindTexture(n) is these steps:
glBindTexture(GL_TEXTURE_2D, ShadowTextureID(n));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_INTENSITY);
if (m_bHas_GL_ARB_shadow_ambient)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FAIL_VALUE_ARB, 0.5f);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
glTexGeni(GL_Q, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
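In case it helps, the per-frame binding is roughly this for each light n (paraphrased from my code; the compare-mode and texture-matrix details are from memory, so treat them as approximate):
Code:
// per frame, for each light n: put its shadow map on texture unit n+1
SetCurrentTextureUnit(n+1);          // glActiveTextureARB(GL_TEXTURE1_ARB + n)
ShadowBindTexture(n);                // bind + texgen setup as shown above
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_LEQUAL);
glEnable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);
glEnable(GL_TEXTURE_GEN_Q);
// ... then load the light's projection into this unit's texture matrix and draw the scene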
Questions
---------
1. My main question: how do I render to a texture that lives on a different texture unit from the one that is active while I am generating the shadow map?
That is, I want to get rid of the call to glCopyTexImage2D.
Things work fine (as above) when I render depth into the fixed texture I allocate, then use glCopyTexImage2D
to copy the generated depth into the texture bound on the other texture unit.
But if I try to render directly into the per-light texture 'tex' and just bind it, it doesn't work - the shadows come out as a bunch of stripes.
I may just be confused about the semantics of creating and binding/unbinding texture objects when multiple texture units are involved.
I used the above code, just changing the fixed reference (m_tex) to the per-light ShadowTextureID(n), as in the #ifdef BROKEN path.
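To make the question concrete, this is roughly what I was hoping would work (a sketch of my intent only; I'm guessing glFramebufferTexture2DEXT is the right way to attach a depth texture directly, rather than going through a renderbuffer):
Code:
// sketch: render depth straight into the per-light texture that unit n+1 will later sample
GLuint tex = ShadowTextureID(n);                  // per-light texture allocated at init
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, shadowsize, shadowsize, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_Framebuffer);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, tex, 0); // attach the texture itself, no renderbuffer
glDrawBuffer(GL_NONE);                            // depth only
glReadBuffer(GL_NONE);
CheckFramebufferStatus();
glBindTexture(GL_TEXTURE_2D, 0);                  // unbind before rendering into it
DrawScene();
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
// per frame: SetCurrentTextureUnit(n+1); glBindTexture(GL_TEXTURE_2D, tex); ... no glCopyTexImage2D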
Another question:
2. When defining the texture I'm now using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, shadowsize, shadowsize, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
I was having all sorts of problems until I saw a post in this forum saying that ATI only supports 16-bit depth textures here, whereas I was using 24.
Now that I have changed to 16, it works fine, but I wondered about the effect of the second-to-last parameter, which is now GL_UNSIGNED_INT.
In some other forum discussion I saw this used:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, shadowsize, shadowsize, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
Notice it is non-specific (GL_DEPTH_COMPONENT without a 16 or 24 suffix)... what does that do?
Also it uses GL_UNSIGNED_BYTE ... is that important?
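For context, here is how I currently read those parameters (my understanding from the glTexImage2D documentation; please correct me if I've got it wrong):
Code:
glTexImage2D(GL_TEXTURE_2D,          // target
             0,                      // mip level
             GL_DEPTH_COMPONENT16,   // internal format: how the GL stores the texture (16-bit depth)
             shadowsize, shadowsize, // width, height
             0,                      // border
             GL_DEPTH_COMPONENT,     // format of the client-memory data being uploaded
             GL_UNSIGNED_INT,        // type of that client-memory data
             NULL);                  // no data is uploaded here, so format/type only describe a hypothetical source
My guess is that with NULL data the GL_UNSIGNED_BYTE vs GL_UNSIGNED_INT choice doesn't matter, and that plain GL_DEPTH_COMPONENT lets the driver pick the depth precision, but I'd like confirmation.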
= = =
Footnote: I have an ATI Radeon 9700 and am using OmegaDrivers.net drivers (Catalyst version 5.10a, driver version 6.14.10.6575).
I was getting crashes in the glBindFramebufferEXT call (atioglxx.dll exceptions) until I found a helpful note at
http://www.gamedev.net/community/forums/topic.asp?topic_id=336643
whereupon I masked the floating-point exceptions around these calls, and things are working now.
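In case anyone else hits the same crash, this is roughly what I now wrap around the FBO calls (MSVC-specific sketch using _controlfp from <float.h>; adjust for your compiler):
Code:
#include <float.h>

// mask floating-point exceptions around the EXT_framebuffer_object calls
// (works around the atioglxx.dll exception described above)
unsigned int oldControl = _controlfp(0, 0);   // read the current FP control word
_controlfp(_MCW_EM, _MCW_EM);                 // mask (disable) all FP exceptions
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_Framebuffer);
// ... other FBO setup / rendering calls ...
_controlfp(oldControl, _MCW_EM);              // restore the previous exception mask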