Hey;
I'm writing a height-/normal-map baker for an orthogonal terrain mesh, so U is along world-x and V is along world-y. The mesh is passed to the shaders with just the vertex normal, and I calculate the TBN matrix in the vertex shader.
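To make that concrete, here's a tiny numpy sketch of how I build the per-vertex TBN (the function name is mine; tangent is world-x Gram-Schmidt-orthogonalized against the normal, as usual for this UV layout):

```python
import numpy as np

def terrain_tbn(normal):
    """Build a TBN basis for a terrain vertex where U runs along
    world-x and V along world-y. The tangent is world-x with the
    normal component projected out; the bitangent completes the basis."""
    n = normal / np.linalg.norm(normal)
    t = np.array([1.0, 0.0, 0.0])        # U direction in world space
    t = t - n * np.dot(t, n)             # Gram-Schmidt against the normal
    t /= np.linalg.norm(t)
    b = np.cross(n, t)                   # V direction, orthogonal to both
    return np.column_stack([t, b, n])    # columns: T, B, N

# For a straight-up normal the basis is the identity:
tbn = terrain_tbn(np.array([0.0, 0.0, 1.0]))
```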
Okay, I'm stuck in a bit of a feedback loop trying to figure out how to convert the object-space height-field and normal map I already have into the same tangent space that's used in the shaders.
In the vertex shader the camera and light directions are transformed into tangent space and then linearly interpolated over the face. Just to make sure I understand: does the interpolated tangent-space light vector cancel out any of the "virtual normal smoothing" over the face, so that the projection of the normal map onto the face is effectively flat (constant along the face normal)? If I instead interpolated the normal directly and modified the interpolated vertex normal with the normal-map normal in the pixel shader, using that to perturb the untransformed light vector in object space, would that be exactly the same?
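At a single vertex I believe the two variants have to agree, because the TBN is a pure rotation and rotations preserve dot products; over the face they can only differ by how the interpolation is done. A quick numpy check of the per-vertex claim (random orthonormal basis, names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random orthonormal TBN (columns T, B, N), a light vector,
# and a tangent-space normal-map normal.
tbn, _ = np.linalg.qr(rng.standard_normal((3, 3)))
light_os = rng.standard_normal(3)
n_ts = rng.standard_normal(3)

# Variant A: bring the light into tangent space, dot with the map normal.
a = np.dot(tbn.T @ light_os, n_ts)
# Variant B: bring the map normal into object space, dot with the raw light.
b = np.dot(light_os, tbn @ n_ts)

same = np.isclose(a, b)   # True: the lighting term is identical per vertex
```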
Sorry if I can't find the right words, I hope it's understandable.
If that is the case, I could calculate the tangent-space normal map simply by converting all normal-map texels on the face surface into the face tangent space (not the vertex tangent space). Is that correct?
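In other words, the bake step per face would just be a rotation of every texel by the transpose of that face's TBN. A minimal sketch of what I mean (function and example values are mine):

```python
import numpy as np

def to_face_tangent_space(normals_os, face_tbn):
    """Rotate object-space normal texels (N, 3 array) into the tangent
    space of one face. face_tbn has columns T, B, N; its transpose is
    the object-space -> tangent-space rotation."""
    return normals_os @ face_tbn        # batched form of face_tbn.T @ n

# Example: a 45-degree slope in the x/z plane.
n = np.array([-1.0, 0.0, 1.0]) / np.sqrt(2.0)   # face normal
t = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)    # U direction on the slope
b = np.array([0.0, 1.0, 0.0])
tbn = np.column_stack([t, b, n])

# A texel aligned with the face normal becomes the "straight up"
# tangent-space normal (0, 0, 1):
out = to_face_tangent_space(n[None, :], tbn)
```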
This leaves me with the height-field. Initially I was thinking of calculating the z-position of each heightmap texel and subtracting the face's z-position at the same coordinates. That would let me do view-space z-axis-aligned reverse heightfield tracing (position = float3(pos.x, pos.y, pos.z + texel)). But I'd like to reuse the POM code I have, so I'm asking myself what exactly the object-space to tangent-space transform for heightmap texels would be.
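The subtraction variant is trivial, but just to pin down what I mean (both grids sampled at the same (u, v) positions; names are mine):

```python
import numpy as np

def bake_height(detail_z, face_z):
    """Per-texel height relative to the face plane, for the
    'pos.z + texel' reverse-tracing idea. Both inputs are z values
    sampled on the same (u, v) grid."""
    return detail_z - face_z

# Toy 2x2 tile: the detail surface sits 0.25 above a flat face.
detail = np.full((2, 2), 1.25)
face   = np.full((2, 2), 1.0)
h = bake_height(detail, face)
```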
If the texture projection onto the face is flat (as with the normals above), I could just cast a ray originating from the texel's position on the face surface, intersect it with the heightfield, and write the length of the resulting vector to the height map. Is that correct?
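Meaning something like this per texel (a crude fixed-step march along the face normal; a real baker would bisect for precision, and the function name is mine):

```python
import numpy as np

def height_along_normal(origin, n, h, step=0.01, max_t=2.0):
    """March from a texel's position on the face along the (unit) face
    normal n and return the distance t at which the ray reaches the
    detail heightfield z = h(x, y). Assumes the detail surface lies
    above the face; returns max_t if nothing is hit."""
    t = 0.0
    while t < max_t:
        p = origin + t * n
        if p[2] >= h(p[0], p[1]):     # ray has reached the detail surface
            return t
        t += step
    return max_t

# Flat face at z = 0, detail surface at constant z = 0.5:
d = height_along_normal(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                        lambda x, y: 0.5)
```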
I think my case is rather simple: the UV coordinates are a top-projected regular square grid over a square terrain mesh. I just struggle to mentally visualize the linearity (or non-linearity) of the texel projection in tangent space.
My biggest doubt is that if I do this, won't I get normal-map discontinuities at the face edges? Because there is a jump from one face's tangent space to the next.
Thanks