I have an MBX Lite chip mounted on an i.MX31. The display is 16 bpp, so window surfaces and contexts can only be opened in 565 (without an alpha channel). If I create a 32-bit texture with
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels32), the driver seems to upload the pixels into its dedicated texture RAM in a 4444 format, resulting in ugly, low-color block artifacts when the texture is used.
If I upload the pixels in 5551 format instead, like this: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, pixels16), the driver seems to keep the pixels as they are, without downsampling them to an internal 4444 format as in the case above.
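For completeness, here are the two upload paths I am comparing, as a minimal sketch (pixels32 and pixels16 are just my own 128x128 source buffers, and the filter settings are my defaults):

#include <GLES/gl.h>

static GLuint upload_rgba8888(const void *pixels32)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* 8888 source data: the driver appears to store this as 4444 internally */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels32);
    return tex;
}

static GLuint upload_rgba5551(const void *pixels16)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* 5551 source data: this one appears to be kept as-is */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, pixels16);
    return tex;
}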
How can I make the OpenGL|ES driver use a full 32-bit texture? I think the screen-format conversion only has to happen when a pixel is actually written, so if I use a 32-bit texture, the conversion to the framebuffer's 565 format would be done only in the final stage, not when the texture is loaded... am I right?
In addition, the driver reports an extension called GL_IMG_texture_format_BGRA8888, which does not seem to be documented. Is that extension the piece I am missing to make the chip use internal 32-bit textures/calculations? Can someone tell me how that extension is used? Does it refer to an additional texture format (like GL_RGBA, for example)? If so, can someone give me the correct enumerated value corresponding to that format?
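To make the question concrete, this is roughly how I imagine the extension would be used. It is only a guess: GL_BGRA_IMG is not in my headers, the 0x80E1 value is assumed from desktop OpenGL's GL_BGRA token, and passing it as both internal format and format is just my reading of how ES formats usually work. The function name is my own.

#include <string.h>
#include <GLES/gl.h>

#ifndef GL_BGRA_IMG
#define GL_BGRA_IMG 0x80E1   /* assumed value, same as desktop GL's GL_BGRA */
#endif

/* Returns 1 on success; a 2D texture is assumed to be bound already. */
static int upload_bgra8888_if_supported(const void *pixelsBGRA)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (ext == NULL || strstr(ext, "GL_IMG_texture_format_BGRA8888") == NULL)
        return 0;  /* extension not reported by the driver */

    /* Guess: the extension token is used as both internal format and format,
       with byte-ordered BGRA source data. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA_IMG, 128, 128, 0,
                 GL_BGRA_IMG, GL_UNSIGNED_BYTE, pixelsBGRA);
    return glGetError() == GL_NO_ERROR;
}

Is something like this correct, and does it actually give an 8888 internal format on the MBX Lite?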
Thanks in advance.