How to combine per-pixel/vertex lighting with DM?

We all know that both per-pixel and per-vertex lighting algorithms require knowing the vertex normal, which is very easy to compute on a static mesh. With displacement mapping, the shape of the object can change from frame to frame, so how can we compute the vertex normal accurately and efficiently without the adjacent vertices' information? I can figure out a few ways to compute the face normal in the vertex shader, but the vertex normal is the average of the face normals, and I don't see a trivial way to compute that.
 
This is not a nice solution (as you say, without access to the local neighbourhood it's a PITA), but how about the following:

I assume you're multiplying the sampled height by the vertex normal to compute the displacement...
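
(That is, the displaced position is p' = p + h(u, v) * n, where h is the sampled height and n is the vertex normal.)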

Store two tangent vectors at each vertex (necessary anyway if you were doing dot3 bump mapping). For convenience, assume these tangents are 'aligned' with the U and V directions in your height map.

Compute a small offset along each tangent from the vertex. Use the equivalent U and V offsets to sample the height map another two times, and add those heights (times the normal direction) to the tangent-offset points to get two points in the local neighbourhood of your displaced vertex. Use these three height-displaced points (taking deltas and a cross product) to get the new normal.

Of course, you should be able to reduce the above workload by eliminating the common offsets/directions, but I hope you get the idea.
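
Something like this, as a rough C++ sketch of the math (the shader version is the same idea; sampleHeight() is a hypothetical stand-in for the height-map fetch, and I'm using the same eps as both the object-space tangent offset and the UV offset, which assumes the tangents are scaled to match the parameterisation):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y,
                a.z * b.x - a.x * b.z,
                a.x * b.y - a.y * b.x};
    }

    Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return v * (1.0f / len);
    }

    // Hypothetical stand-in for the height-map fetch.
    float sampleHeight(float u, float v);

    // p: vertex position, n: vertex normal,
    // tu, tv: tangents aligned with the height map's U and V directions,
    // (u, v): the vertex's texcoords, eps: a small offset.
    Vec3 displacedNormal(Vec3 p, Vec3 n, Vec3 tu, Vec3 tv,
                         float u, float v, float eps)
    {
        Vec3 p0 = p + n * sampleHeight(u, v);                   // displaced vertex
        Vec3 p1 = p + tu * eps + n * sampleHeight(u + eps, v);  // offset along U
        Vec3 p2 = p + tv * eps + n * sampleHeight(u, v + eps);  // offset along V
        // The normal of the little triangle p0-p1-p2 approximates the
        // displaced surface normal.
        return normalize(cross(p1 - p0, p2 - p0));
    }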

Actually, perhaps you could do it ALL in local space.... but it's far too early in the morning for me to work that out :)
 
Obviously this method has no basis in theory, but I feel it'll be better than plainly using the original vertex normal. Nice idea anyway. :LOL:
 
991060 said:
Obviously this method has no basis in theory, but I feel it'll be better than plainly using the original vertex normal. Nice idea anyway. :LOL:
In what sense? Averaging the neighbourhood of triangles isn't necessarily the correct approach either. For example, if your data were a tessellated sphere, you'd be better off computing the exact normal directly from the sphere equation.
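
For instance (reusing the Vec3 helpers from the sketch above), the exact normal at any point of a sphere is just the radial direction, no neighbourhood needed:

    // Exact normal for a point p on a sphere centred at c.
    Vec3 sphereNormal(Vec3 p, Vec3 c) {
        return normalize(p - c);
    }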
 
Of course, averaging neighbouring triangles' normals is not the most accurate approach in certain situations, but it works pretty well on arbitrary surfaces, agree?

If I understand you correctly, you're actually building a virtual triangle and using its normal as the result. How about this approach:

Compute the displaced position for each vertex of the model and output the position data as a texture in this pass. In the next pass, use texcoord offsets (we can store the adjacent vertices' texcoords as vertex shader inputs) to look up the adjacent vertices' positions and do the math, as sketched below. Now what we have is the accurate face normal; if we just use it as the vertex normal and skip the averaging process, will it be good enough?
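
Roughly like this (again just a C++ sketch reusing the Vec3 helpers above; fetchPosition() stands in for the dependent read into the pass-1 position texture, and the extra texcoords would arrive as additional vertex attributes):

    // Hypothetical lookup into the position texture written in pass 1.
    Vec3 fetchPosition(float u, float v);

    // (u0, v0): this vertex's texcoord; (u1, v1), (u2, v2): texcoords of
    // two adjacent vertices, stored as extra vertex shader inputs.
    Vec3 displacedFaceNormal(float u0, float v0,
                             float u1, float v1,
                             float u2, float v2)
    {
        Vec3 p0 = fetchPosition(u0, v0);
        Vec3 p1 = fetchPosition(u1, v1);
        Vec3 p2 = fetchPosition(u2, v2);
        // Exact normal of the displaced face, used directly as the
        // vertex normal, skipping the averaging step.
        return normalize(cross(p1 - p0, p2 - p0));
    }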
 