Simple OpenGL matrix question

Adrian M

Newcomer
I'm kind of confused about how OpenGL matrices work. Say I have a rotation around the Z axis. The _mij are sequential in memory, that is, _m12 follows immediately after _m11 and _m21 follows just after _m14.

Code:
	// Rotation about the Z axis; rows are laid out sequentially in memory (row-major)
	_m11 = cosAngle;  _m12 = sinAngle; _m13 = 0.0f; _m14 = 0.0f;
	_m21 = -sinAngle; _m22 = cosAngle; _m23 = 0.0f; _m24 = 0.0f;
	_m31 = 0.0f;      _m32 = 0.0f;     _m33 = 1.0f; _m34 = 0.0f;
	_m41 = 0.0f;      _m42 = 0.0f;     _m43 = 0.0f; _m44 = 1.0f;

If I load this with glLoadMatrixf I obtain the same results as when calling glRotatef(30,0,0,1). When calling glGetFloatv with GL_MODELVIEW_MATRIX I obtain the same values from both methods (glLoadMatrixf and glRotatef). But if I want to correctly transform a column vector myself and obtain the same result as when OpenGL transforms it with this matrix, I have to post-multiply with my matrix, like this:

Code:
transformedVertex = Transpose(Transpose(transformedVertex) * myMatrix);

but OpenGL premultiplies the vector with the matrix, like this:

Code:
transformedVertex = myMatrix * transformedVertex;
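
To make the two conventions concrete, here is a small standalone C sketch (the helper names are mine, just for illustration) that reads the same 16 floats both ways:

Code:
/* Row vector on the left: out = v * M, reading m[] as row-major,
   i.e. element (row, col) lives at m[row*4 + col]. */
static void mulRowVec(const float m[16], const float v[4], float out[4])
{
    for (int col = 0; col < 4; ++col)
        out[col] = v[0]*m[0*4+col] + v[1]*m[1*4+col]
                 + v[2]*m[2*4+col] + v[3]*m[3*4+col];
}

/* Column vector on the right: out = M * v, reading m[] as column-major,
   the OpenGL convention: element (row, col) lives at m[col*4 + row]. */
static void mulColVec(const float m[16], const float v[4], float out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[0*4+row]*v[0] + m[1*4+row]*v[1]
                 + m[2*4+row]*v[2] + m[3*4+row]*v[3];
}

For the same 16 floats these two produce the same numbers, because transposing the matrix and swapping the side the vector is on cancel out.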

Why is there this difference? Does OpenGL store my matrix inverted in memory? What about retrieval of the matrix from OpenGL?
 
Adrian M said:
The _mij are sequential in memory, that is, _m12 follows immediately after _m11 and _m21 follows just after _m14.

I don't think that is true. IIRC, OpenGL stores its matrices in column-major order.
 
OpenGL expects matrices in column-major order when you call glLoadMatrixf(). The way you describe your memory layout is row-major. Hence you should use glLoadTransposeMatrixf().
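
For reference, a minimal sketch of that call, assuming an OpenGL 1.3+ context (glLoadTransposeMatrixf was promoted from the ARB_transpose_matrix extension):

Code:
/* the same Z rotation, linearized row by row (row-major) */
const GLfloat rowMajor[16] = {
    cosAngle, sinAngle, 0.0f, 0.0f,
   -sinAngle, cosAngle, 0.0f, 0.0f,
    0.0f,     0.0f,     1.0f, 0.0f,
    0.0f,     0.0f,     0.0f, 1.0f
};
glMatrixMode(GL_MODELVIEW);
glLoadTransposeMatrixf(rowMajor); /* transposes on upload */

Note, though, that this particular matrix happens to load correctly with plain glLoadMatrixf as well, for the reason worked out later in the thread.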
 
OpenGL expects matrices in column-major order when you call glLoadMatrixf(). The way you describe your memory layout is row-major. Hence you should use glLoadTransposeMatrixf().

Okay, but why is this so?
Adrian M said:
If I load this with glLoadMatrixf I obtain the same results as when calling glRotatef(30,0,0,1)
 
I suppose glRotatef() simply rotates the other way, which effectively transposes the matrix.
 
I finally understood what was happening. In the hope that this will help someone else, here's a snapshot of a discussion on gamedev.net.

Brother Bob said:
The matrix you show, given the linear order of the elements, is a row-major matrix. Elements are linearized row by row. OpenGL's matrices are column-major, and they are the transpose of what you presented as well.

The transpose and the change of majorness cancel each other, which means you can upload your matrix stored in row-major order and get the same result as OpenGL's rotation matrix stored in column-major order.

There will, however, be a difference if you try to multiply something with the matrices. The matrix you show must be multiplied by a row vector on the left-hand side to get the same result as OpenGL, which multiplies a column vector on the right side.
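
To see the cancellation concretely, here is a minimal standalone sketch (mine, not from the gamedev.net thread) that builds both linearizations and compares the bytes:

Code:
#include <math.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    float a = 30.0f * 3.14159265f / 180.0f;
    float c = cosf(a), s = sinf(a);

    /* my matrix, linearized row by row (row-major) */
    float rowMajor[16] = {  c, s, 0, 0,  -s, c, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1 };

    /* OpenGL's Z rotation, linearized column by column (column-major):
       first column (c, s, 0, 0), second column (-s, c, 0, 0), ... */
    float colMajor[16] = {  c, s, 0, 0,  -s, c, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1 };

    /* same bytes: the transpose and the majorness swap cancel */
    printf("%s\n", memcmp(rowMajor, colMajor, sizeof rowMajor) == 0
                       ? "identical memory layout" : "different");
    return 0;
}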

Brother Bob said:
The matrix you presented in your original post is a row base vector matrix and must be transposed to match the matrices used by OpenGL. Base vectors in OpenGL are columns. So a matrix for a rotation about the Z axis in OpenGL is (s and c mean the sine and cosine of the angle to rotate):

c -s 0 0
s  c 0 0
0  0 1 0
0  0 0 1


This is the transpose of your matrix. Linearize it in column-major order and you get this array:

c s 0 0 -s c 0 0 0 0 1 0 0 0 0 1


If you linearize your matrix in row-major order, you get the same memory layout (which is why you can pass it "double-wrong" to OpenGL and get the correct result), but multiplication with a column vector on the right is different, because the matrices are transposes of each other.
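
And a small sketch (again mine) of the multiplication difference: the same array read under each convention sends a column vector to different places:

Code:
#include <math.h>
#include <stdio.h>

int main(void)
{
    float c = cosf(0.5f), s = sinf(0.5f);
    float m[16] = { c, s, 0, 0,  -s, c, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1 };
    float v[4]  = { 1, 0, 0, 1 };  /* a point on the x axis */
    float colRead[4], rowRead[4];

    for (int i = 0; i < 4; ++i) {
        /* element (i, j) is m[j*4+i] read column-major, m[i*4+j] read row-major */
        colRead[i] = m[0*4+i]*v[0] + m[1*4+i]*v[1] + m[2*4+i]*v[2] + m[3*4+i]*v[3];
        rowRead[i] = m[i*4+0]*v[0] + m[i*4+1]*v[1] + m[i*4+2]*v[2] + m[i*4+3]*v[3];
    }
    printf("column-major M*v: (%.3f, %.3f, %.3f, %.3f)\n",
           colRead[0], colRead[1], colRead[2], colRead[3]);
    printf("row-major    M*v: (%.3f, %.3f, %.3f, %.3f)\n",
           rowRead[0], rowRead[1], rowRead[2], rowRead[3]);
    return 0;
}

The column-major reading rotates v by +angle, the row-major reading by -angle, which is also why the earlier guess that the matrix "rotates the other way" looked plausible.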
 
Adrian M said:
I finally understood what was happening. In the hope that this will help someone else, here's a snapshot of a discussion on gamedev.net.

Thanks for posting the "full solution" to the issue you raised... it is not uncommon to see "solved, thanks!" and to find nothing more than a few hints here and there as to what the solution might be.
 