I've started to play with the idea of using PTMs (Polynomial Texture Maps - http://www.hpl.hp.com/ptm/) to encode the offsets used in the parallax mapping (called offset mapping by some) technique that's all the rage at the moment.
To start with I stupidly had two PTMs, one for the u offset and one for the v offset. Then it dawned on me (doh) that as you already have the eye's direction you can simply use one PTM function that gives you a scale to apply to the eye vector after it has been projected onto the texture-space plane.
That's really cool as that only takes 6 coefficients per texel, so you only need two three-component textures, for example. It gets even more awesome when you realise that this gives you your self shadowing as well - no need for horizon maps (humus!). This is because, given a direction vector, this function tells you what is _occluding_ the current texel. So replace the eye vector with the light vector, and if your scale offset comes out as zero the pixel is not self shadowed(!).
Pseudo pixel shader code would roughly be:
Colour shade(Vec2 uv, Vec3 tsLight, Vec3 tsEye)
{
    // Offset this texel along the projected eye vector to find the
    // texel that is actually visible from the eye.
    Vec2 occluderUV = PTMFunc(uv, tsEye);

    // Offset that texel again, this time along the light vector.
    Vec2 occluderOccluderUV = PTMFunc(occluderUV, tsLight);

    // Compare within a small tolerance - the fitted polynomial will
    // rarely return an offset of exactly zero.
    if(length(occluderOccluderUV - occluderUV) < 0.0001)
    {
        // Pixel is not self shadowed - it can see the light!
        Colour diffuse = diffuseMap(occluderUV);
        Vec3 normal = normalMap(occluderUV);
        return max(dot(tsLight, normal), 0) * diffuse;
    }
    else return Colour(0); // Pixel is self shadowed
}
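And as a minimal sketch of what PTMFunc itself might look like - assuming the standard biquadratic PTM basis from the HP paper, with coeffMapA/coeffMapB as hypothetical names for lookups into the two three-component coefficient textures mentioned above:

Vec2 PTMFunc(Vec2 uv, Vec3 tsDir)
{
    Vec3 cA = coeffMapA(uv); // a0, a1, a2
    Vec3 cB = coeffMapB(uv); // a3, a4, a5

    // The direction projected onto the texture-space plane.
    float x = tsDir.x;
    float y = tsDir.y;

    // Biquadratic PTM: scale = a0*x^2 + a1*y^2 + a2*x*y + a3*x + a4*y + a5
    float scale = cA.x*x*x + cA.y*y*y + cA.z*x*y
                + cB.x*x   + cB.y*y   + cB.z;

    // Apply the scale to the projected vector to get the occluder's texel.
    return uv + scale * Vec2(x, y);
}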
With the cheap approximation given in the OpenGL.org thread (http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/011292.html) you could of course try to compute self shadowing as well - but I'm not sure how well that would work out.
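For reference, that cheap approximation amounts to a single height map sample scaled along the projected vector - roughly the following (a rough sketch; heightMap, parallaxScale and parallaxBias are placeholder names):

Vec2 cheapOffset(Vec2 uv, Vec3 tsDir)
{
    // One height map sample stands in for the full PTM evaluation.
    float h = heightMap(uv) * parallaxScale + parallaxBias;
    return uv + h * Vec2(tsDir.x, tsDir.y);
}

You would then substitute cheapOffset for PTMFunc in shade() above for both the eye and light offsets.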
Some people may prefer to use Spherical Harmonics instead of PTMs. You could argue that for certain surfaces the PTM approximation wouldn't work very well, so in those cases I suggest the following as a quality improvement:
You have some number of 3D textures containing the coefficients of whatever approximating function you wish to use, with the three axes representing u, v, and phi respectively. Here (u, v) is the current texel coordinate and phi is the azimuthal angle of the eye/light vector in spherical coordinates, running from 0 to 360 degrees. Your approximating function is then a function of theta, the second angle of the eye/light vector's spherical coordinates, running from 0 to 180 degrees. This should give you a better approximation (as your function now approximates a function of fewer parameters and you have introduced more sample data) at the cost of increased memory usage.
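A sketch of how that lookup might work - assuming, say, a quadratic in theta with its three coefficients packed into a single hypothetical coeffVolume 3D texture:

Vec2 PTMFunc3D(Vec2 uv, Vec3 tsDir)
{
    // Spherical angles of the direction in tangent space.
    float phi   = atan2(tsDir.y, tsDir.x); // 0..2*pi around the plane
    float theta = acos(tsDir.z);           // 0..pi away from the normal

    // Sample the coefficient volume at (u, v, phi).
    Vec3 b = coeffVolume(Vec3(uv.x, uv.y, phi / (2.0 * PI)));

    // e.g. a quadratic in theta: scale = b0*theta^2 + b1*theta + b2.
    float scale = b.x*theta*theta + b.y*theta + b.z;

    // Apply the scale along the in-plane direction given by phi.
    return uv + scale * Vec2(cos(phi), sin(phi));
}

More 3D textures let you fit a higher-order function in theta, trading memory for accuracy.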
I was surprised that the original demo mentioned in the opengl.org thread worked so well - but when you think about it, the approximation works because the heights are relatively small (as mentioned in the thread), and also because the texels that occlude will always be of at _least_ the same height as the texel being occluded.
A future where you have some mad application that you feed your real-world sampled data to, which then goes off and decides *per* texel - or per surface, or per vertex for particularly bland, uninteresting surfaces - which approximating function to use is not far away. Indeed, I believe the game STALKER is already leading the way in this kind of direction, though not yet down to such a fine-grained level.
This parallax effect seems like it will be the next-gen lens flare - but more useful.
Warrick Buchanan.