OpenGL Pipeline - At what stage is lighting calculated?

Toasty

Newcomer
Is it after the modelview transform (eye-space) and before the projection transform? Or is it after both the modelview and projection transformations (clip-space)?

I ask because I've created my own implementation of the OpenGL lighting model in clip space, and I am not always getting results identical to my reference ATI implementation. I would like to know whether some of these bugs are because I'm working in the wrong coordinate space :oops:.

My hope is that it can be performed in clip space, so that I can perform primitive assembly and trivial rejection first, such that I only calculate lighting at the vertices belonging to primitives visible on the screen.

Thanks,
-Toasty
 
If you're talking about vertex lighting, then it occurs before the projection matrix is applied, although you should be able to do it in any space you want prior to the projection.

If you want to do it post-clip, then you'll have to keep the original vertex data around.
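
For reference, the core of the fixed-function diffuse term in eye space is just an N·L against the eye-space light position. A rough sketch (the Vec3 type and helper names here are made up for illustration, not GL code):

[code]
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 normalize(const Vec3& v) {
    const float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// vertexEye/normalEye: position and normal already taken through the
// modelview matrix (the normal via its inverse-transpose). lightPosEye is
// the light position in eye space, which is where GL keeps it.
Vec3 diffuseEye(const Vec3& vertexEye, const Vec3& normalEye,
                const Vec3& lightPosEye, const Vec3& lightDiffuse,
                const Vec3& materialDiffuse) {
    const Vec3 toLight = normalize({ lightPosEye.x - vertexEye.x,
                                     lightPosEye.y - vertexEye.y,
                                     lightPosEye.z - vertexEye.z });
    const float nDotL = std::max(0.0f, dot(normalize(normalEye), toLight));
    return { lightDiffuse.x * materialDiffuse.x * nDotL,
             lightDiffuse.y * materialDiffuse.y * nDotL,
             lightDiffuse.z * materialDiffuse.z * nDotL };
}
[/code]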
 
Thanks for the reply.

I guess for now I will focus on correctness and try moving things to eye-space.

Down the road, though, I would like to try something like the following when GL_LIGHTING is enabled (sketched in code after the list):

1) Modelview transform - write result to temporary array
2) Modelview-Projection transform - write result to position
3) Primitive assembly + clipping - find the clip vertices; do not interpolate colors yet, but store the interpolation factor.
4) Light only the vertices belonging to visible primitives (using temp data)
5) Perform color interpolation to arrive at color at clip vertices.
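
In code, I imagine the flow looking roughly like this (every type and name below is a placeholder, with the real transform, clip, and lighting machinery left as bare declarations):

[code]
#include <cstddef>
#include <vector>

struct Vec4 { float x, y, z, w; };

static Vec4 lerp(const Vec4& p, const Vec4& q, float t) {
    return { p.x + (q.x - p.x) * t, p.y + (q.y - p.y) * t,
             p.z + (q.z - p.z) * t, p.w + (q.w - p.w) * t };
}

// A vertex generated by clipping edge (a, b) at interpolation factor t.
struct ClipVertex {
    std::size_t a, b;
    float t;        // stored in step 3
    Vec4 color;     // filled in at step 5
};

struct Pipeline {
    // Stand-ins for the real machinery:
    Vec4 modelview(const Vec4& v) const;
    Vec4 projection(const Vec4& v) const;
    Vec4 light(const Vec4& eyePos) const;
    void clip(const std::vector<Vec4>& clipPos,
              std::vector<std::size_t>& visible,
              std::vector<ClipVertex>& generated) const;

    void run(const std::vector<Vec4>& objPos) const {
        const std::size_t n = objPos.size();
        std::vector<Vec4> eyePos(n), clipPos(n), color(n);

        for (std::size_t i = 0; i < n; ++i)
            eyePos[i] = modelview(objPos[i]);            // step 1
        for (std::size_t i = 0; i < n; ++i)
            clipPos[i] = projection(eyePos[i]);          // step 2

        std::vector<std::size_t> visible;                // step 3: clip,
        std::vector<ClipVertex> generated;               // storing only t
        clip(clipPos, visible, generated);

        for (std::size_t v : visible)                    // step 4: light
            color[v] = light(eyePos[v]);                 // survivors only

        for (ClipVertex& cv : generated)                 // step 5
            cv.color = lerp(color[cv.a], color[cv.b], cv.t);
    }
};
[/code]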

If anyone sees a fundamental problem with this approach, please let me know.

-Toasty
 
For what it's worth, I calculate lighting in swShader in model space. Lights are just transformed to this space too, so I avoid transforming all vertices more than once.

I also cull back-facing polygons in model space, only sending vertices of front-facing polygons to the T&L pipeline. Early frustum culling is done on batches of 16 polygons by testing the bounding box against the frustum, again in model space.
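
A rough sketch of the light transform (the Mat4/Vec4 types and the inverse/transform/lightVertex helpers are placeholders for illustration, not actual swShader code):

[code]
#include <cstddef>
#include <vector>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[16]; };

// Assumed helpers for the sketch:
Mat4 inverse(const Mat4& m);
Vec4 transform(const Mat4& m, const Vec4& v);
Vec4 lightVertex(const Vec4& pos, const Vec4& nrm, const Vec4& lightPos);

// One matrix inverse per object instead of one transform per vertex.
// Assumes a rigid model matrix; non-uniform scale would need the usual
// inverse-transpose handling for the normals.
void lightInModelSpace(const Mat4& model, const Vec4& lightPosWorld,
                       const std::vector<Vec4>& objPos,
                       const std::vector<Vec4>& objNormal,
                       std::vector<Vec4>& color) {
    const Vec4 lightPosModel = transform(inverse(model), lightPosWorld);
    for (std::size_t i = 0; i < objPos.size(); ++i)
        color[i] = lightVertex(objPos[i], objNormal[i], lightPosModel);
}
[/code]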
 
Be careful that skipping the work you don't want to do doesn't end up costing you more time than simply doing it anyway.
 
Toasty said:
1) Modelview transform - write result to temporary array
2) Modelview-Projection transform - write result to position
3) Primitive assembly + clipping - find the clip vertices; do not interpolate colors yet, but store the interpolation factor.
4) Light only the vertices belonging to visible primitives (using temp data)
5) Perform color interpolation to arrive at color at clip vertices.

So how do you get the color information into step 5? As you'd do your lighting calculation somewhere between object space and view space* (inclusive), you'd then have to separately transform your color vertex attributes to the space of step 5.

* Lighting would be carried out in a 3D homogeneous, object-linear space. Now, clip space (post-projection-transform, pre-perspective-division) is not exactly 3D homogeneous (it's 4D), and anything past and including NDC space (normalized device coordinate space) is not linear with respect to object space.
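
For reference, the factor that clipping itself produces is computed entirely in 4D clip space, before the perspective divide. A sketch against the x = -w plane (Vec4 being a placeholder type):

[code]
struct Vec4 { float x, y, z, w; };

// Factor at which edge a->b crosses the x = -w clip plane; only valid
// when the edge actually straddles the plane.
float clipFactor(const Vec4& a, const Vec4& b) {
    const float da = a.x + a.w;   // signed "distance" of a from the plane
    const float db = b.x + b.w;   // signed "distance" of b
    return da / (da - db);
}
[/code]

Because the projection transform is linear, this t, taken before the perspective divide, parameterizes the same edge in eye space as well.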
 
I believe he intended, in the 5-step process, to forward the data transformed to eye space along with the vertices, and then use that data after he had determined whether those polygons had passed the clip and facing tests.

-Evan
 
ehart said:
I believe he intended, in the 5-step process, to forward the data transformed to eye space along with the vertices, and then use that data after he had determined whether those polygons had passed the clip and facing tests.

OK, using clipping information to know which vertices to light and which not to would generally work. But I'm still not sure where he'd get the interpolation factors in step 3 if he hasn't got the vertices' colors yet.
 
Nick said:
I also cull back-facing polygons in model space, only sending vertices of front-facing polygons to the T&L pipeline.
Facedness can change between before and after the perspective divide. This isn't obvious during most "normal" rendering: with backface culling, closed meshes, and active Z test/write, the front faces will always obscure the false positives that "accidentally" made it through the culling stage. But client applications may do things differently, and that could then lead to glitches.
I hope you take care of this?
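
For illustration, one way to keep the facing decision consistent with the post-divide result is to evaluate the signed screen area in clip coordinates with the w components multiplied through. A sketch (Vec4 is a placeholder type, and this assumes clipping has already guaranteed w > 0):

[code]
struct Vec4 { float x, y, z, w; };

// Twice the signed area of the triangle as it would appear after the
// perspective divide, with the divides multiplied through. The sign
// (and thus the facing) matches the post-divide result while all w > 0.
float signedArea2(const Vec4& a, const Vec4& b, const Vec4& c) {
    return (b.x * a.w - a.x * b.w) * (c.y * a.w - a.y * c.w)
         - (c.x * a.w - a.x * c.w) * (b.y * a.w - a.y * b.w);
}
[/code]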
 