OpenGL texture blit via framebuffer not copying alpha

FoxMcCloud

Newcomer
I have a simple texture blit function in OpenGL that I'm trying to use to copy an RGBA16F texture to another one of 1/4 size:

Code:
void texture_blit(const GLuint SourceTexture, const int SourceX0, const int SourceY0, const int SourceX1, const int SourceY1, 
                  const GLuint DestinationTexture, const int DestinationX0, const int DestinationY0, const int DestinationX1, const int DestinationY1)
{
    using namespace application;

    // Save the current framebuffer bindings so they can be restored afterwards.
    GLint PreviousReadFramebuffer;
    GLint PreviousDrawFramebuffer;
    glGetIntegerv(GL_READ_FRAMEBUFFER_BINDING, &PreviousReadFramebuffer);
    glGetIntegerv(GL_DRAW_FRAMEBUFFER_BINDING, &PreviousDrawFramebuffer);

    // Attach the source and destination textures to the two scratch FBOs
    // (EXT_direct_state_access, so no bind is needed for the attachment calls).
    glNamedFramebufferTexture2DEXT(BlitSourceFramebuffer, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, SourceTexture, 0);
    glNamedFramebufferTexture2DEXT(BlitSinkFramebuffer, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, DestinationTexture, 0);

    // Bind them as the read and draw framebuffers and blit with linear filtering.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, BlitSourceFramebuffer);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, BlitSinkFramebuffer);
    glBlitFramebuffer(SourceX0, SourceY0, SourceX1, SourceY1, 
                      DestinationX0, DestinationY0, DestinationX1, DestinationY1, 
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);

    // Restore whatever was bound before.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, PreviousReadFramebuffer);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, PreviousDrawFramebuffer);
}
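
A typical call downsamples the full source into the quarter-area destination like this (SceneColor, SceneColorQuarter, W and H are just placeholder names for this example, not my real ones):

Code:
// Example call: blit the whole W x H source into a half-width, half-height destination.
texture_blit(SceneColor,        0, 0, W,     H,
             SceneColorQuarter, 0, 0, W / 2, H / 2);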

However, when I use that to copy between two RGBA16F textures, alpha seems to be discarded - the destination texture's RGB is a copy of the source's, but its alpha is a constant 1. Am I doing something wrong, or is this a driver issue?
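
One way to confirm this is to read the destination back after the blit; a quick check along these lines (Width and Height stand in for the destination's dimensions; needs <vector> and <cstdio>) shows the fourth component stuck at 1.0 for me:

Code:
// Hypothetical readback check: dump a few destination texels to inspect their alpha.
std::vector<float> Pixels(Width * Height * 4);
glBindTexture(GL_TEXTURE_2D, DestinationTexture);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, Pixels.data());
for (int i = 0; i < 4; ++i)
    std::printf("texel %d: %f %f %f %f\n", i,
                Pixels[i * 4 + 0], Pixels[i * 4 + 1], Pixels[i * 4 + 2], Pixels[i * 4 + 3]);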
 
What is your hardware?

http://www.atomicmpc.com.au/Feature...and-degrading-game-quality-says-nvidia.aspx/1

However, FP16 Demotion aims to change how HDR lighting is calculated; its full name is the R11G11B10 format, which means the Red and Green channels have 11 bits each, with Blue sitting at 10 bits for a total of 32 bits per pixel - dropping Alpha channel data completely. The end result is a halving in frame buffer size, cutting the amount of memory demanded by the HDR lighting engine by fifty per cent.
 
AMD also has an "Enable Surface Format Optimization" check box (looks like they took it out of Catalyst A.I. and gave it its own control); it replaces FP16 with FP11.
 
That's exactly what I linked to. But I dunno if the optimization works on OGL. Worth a shot though, assuming he's on AMD hardware.
 
The behavior exhibits itself on both my Intel HD 3000 and Radeon HD 6490M GPUs. I'm running Mac OS X 10.7.2, so that Catalyst switch is out. Any ideas?
 
Hang on, you're copying a texture to one that's 1/4 of the size - getting rid of the alpha channel would be the only way (well, not the only way) it would fit.
Try copying the texture to a texture of identical size instead, something like the sketch below.
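
For instance (SceneColor and SceneColorCopy here are hypothetical RGBA16F textures of the same W x H size):

Code:
// Same-size test copy: identical rectangles, so no downsampling is involved.
texture_blit(SceneColor,     0, 0, W, H,
             SceneColorCopy, 0, 0, W, H);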
PS: any help?

No Alpha in the Framebuffer

Be sure you create a double-buffered context and make sure you ask for an alpha component.
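
For example, with GLUT the request would look like this (just an illustration; your actual context-creation code may be different):

Code:
// Hypothetical GLUT setup: request a double-buffered RGBA window with a destination alpha component.
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA);
glutCreateWindow("App");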
 
I'm sure, but I don't think I understand what you said. I'm copying between two RGBA16F textures. I'm using FBOs that I bind the textures to so I can use glBlitFramebuffer() to do the copy, with linear filtering for the downsampling. Is there a better way to do that? My windowing-system framebuffer is RGBA unsigned byte - should that make any difference, though?
 
I have a simple texture blit function in OpenGL that I'm trying to use to copy an RGBA16F texture to another one of 1/4 size ... However, when I use that to copy between two RGBA16F textures Alpha seems to be discarded
Before the blit, add:

glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );

It is not crystal clear from the spec whether the buffer write masks are applied in the blit operation (thread link, another link), but I have seen them applied (with at least one vendor's drivers).
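
If the write mask does turn out to be the issue, a robust variant is to save and restore it around the blit inside texture_blit, roughly like this (a sketch only, assuming something earlier in the frame left alpha writes disabled):

Code:
// Sketch: force all color channels writable for the blit, then restore the caller's mask.
GLboolean PreviousColorMask[4];
glGetBooleanv(GL_COLOR_WRITEMASK, PreviousColorMask);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

glBlitFramebuffer(SourceX0, SourceY0, SourceX1, SourceY1,
                  DestinationX0, DestinationY0, DestinationX1, DestinationY1,
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);

glColorMask(PreviousColorMask[0], PreviousColorMask[1],
            PreviousColorMask[2], PreviousColorMask[3]);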
 