Thoughts on Self Shadowing Parallax Mapping Improvements....

Warrick

I've started to play with the idea of using PTMs (Polynomial Texture Maps - http://www.hpl.hp.com/ptm/) to encode the offsets used in the parallax mapping technique (called offset mapping by some) that's all the rage at the moment :)

To start with I stupidly had two PTMs, one each for the u offset and the v offset. Then it dawned on me (doh) that since you already have the eye's direction you can simply use one PTM function that gives you a scale to apply to the eye vector after it has been projected onto the texture-space plane.

That's really cool, as it only takes 6 coefficients per texel, so you only need two three-component textures, for example. It gets even more awesome when you realise that this gives you your self shadowing as well - no need for horizon maps (Humus!). This is because, given a direction vector, this function is telling you what is _occluding_ the current texel. So replace the eye vector with the light vector, and if your scale offset comes out as zero the pixel is not self shadowed(!).

Pseudo pixel shader code would roughly be:

Colour shade(Vec2 uv, Vec3 tsLight, Vec3 tsEye)
{
    Vec2 occluderUV         = PTMFunc(uv, tsEye);
    Vec2 occluderOccluderUV = PTMFunc(occluderUV, tsLight);

    if (occluderOccluderUV == occluderUV)
    {
        // Pixel is not self shadowed - it can see the light!
        Colour diffuse = diffuseMap(occluderUV);
        Vec3   normal  = normalMap(occluderUV);

        return dot(tsLight, normal) * diffuse;
    }
    else
        return 0; // Pixel self shadowed
}
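
PTMFunc here is just the usual six-coefficient (biquadratic) PTM evaluation applied to the projected direction. A rough HLSL-style sketch, purely for illustration (the texture names are made up and the scale/bias of the stored coefficients is glossed over):

Code:
sampler2D coeffsA; // a0, a1, a2 per texel
sampler2D coeffsB; // a3, a4, a5 per texel

float2 PTMFunc(float2 uv, float3 dir)
{
    float2 d   = normalize(dir).xy;          // direction projected onto the texture-space plane
    float3 abc = tex2D(coeffsA, uv).rgb;
    float3 def = tex2D(coeffsB, uv).rgb;

    // scale = a0*du^2 + a1*dv^2 + a2*du*dv + a3*du + a4*dv + a5
    float scale = dot(abc, float3(d.x * d.x, d.y * d.y, d.x * d.y))
                + dot(def, float3(d.x, d.y, 1.0));

    return uv + scale * d;                   // step towards the occluding texel
}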

With the cheap approximation given in the OpenGL.org thread (http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/011292.html) you could of course try to compute self shadowing as well - but I'm not sure how well that would work out.

Some people may prefer to use Spherical Harmonics instead of PTMs. You could argue that for certain kinds of surfaces the PTM approximation wouldn't work very well. So in those cases I suggest, as a quality improvement, that you try this:

You have some number of 3D textures that contain the coefficients of whatever approximating function you wish to use, with the axes representing u, v, and phi respectively, where (u, v) is the current texel coordinate and phi is the first angle of the eye/light vector's spherical coordinates, going from 0 to 360 degrees. Your approximating function is then a function of theta, the second angle of the eye/light vector's spherical coordinates, which goes from 0 to 180 degrees. This should give you a better approximation (as your function is now approximating a function with fewer parameters and you have introduced more sample data) at the cost of an increase in memory usage.
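
A rough sketch of that lookup, assuming a simple quadratic in theta so that one RGB volume texture holds the three coefficients (more terms would just mean more volumes; all names are illustrative):

Code:
sampler3D coeffVolume; // coefficients as a function of (u, v, phi)

float offsetScale(float2 uv, float3 dir)
{
    float phi   = atan2(dir.y, dir.x) / (2.0 * 3.14159265) + 0.5; // azimuth mapped to [0, 1)
    float theta = acos(clamp(dir.z, -1.0, 1.0));                  // polar angle in radians

    float3 b = tex3D(coeffVolume, float3(uv, phi)).rgb;           // coefficients for this (u, v, phi)
    return b.x + b.y * theta + b.z * theta * theta;               // evaluate the function of theta
}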

I was surprised that the original demo mentioned in the opengl.org thread worked so well - but when you think about it, the approximation works because the heights are relatively small (as mentioned in the thread), and also because the texels that occlude will always be at _least_ the same height as the texel that is occluded.

A future where you have some mad application that you give your real-world sampled data to, which then goes off and decides *per* texel - or per surface, or per vertex for particularly bland, uninteresting surfaces ;) - what approximation function to use, is not far away. Indeed I believe the game STALKER is already leading the way in this kind of direction, though not down to such a fine-grained level yet.

This parallax effect seems like it will be the nextgen lens flare - but more useful ;)

Warrick Buchanan.
 
Re: Thoughts on Self Shadowing Parallax Mapping Improvements

Warrick said:
This parallax effect seems like it will be the nextgen lens flare - but more useful
Great line. That's a brilliant way of expressing exactly what I thought when I first saw it (and when I saw how cheap it was!).

I'm open to suggestions on good names for it; I just culled 'offset mapping' from a one-line throwaway in the original article. Who's writing the paper for Siggraph? :)
 
Wow, Warrick. I don't remember you joining the forum (i.e. your first post), but it's good to have another competent developer in here.

Did you notice my posts in the other "offset mapping" thread?
I was talking about using PTMs in the same way, as well as using the same map for shadowing. Glad to see someone else was thinking on the same wavelength, and even implemented the ideas!

Did you use HP's fitter program? If so, how did you figure out the format they used in the PTM file? If not, what method did you use?

Your lighting algorithm is not quite right, though. A point that is off the ground can still be shadowed by a higher point, yet that first point will return an offset. Your pseudocode seems to describe something slightly different from what you're saying, but that also looks wrong. If you send me a private message with your email, I'll send you a description of the correct method.

Since you're already using PTMs, you might as well just add another two textures for the shading, and then you don't need bump maps either.

I've never tried it, but how many instructions did you use to compute lu^2, lv^2, lulv, and put the quantities into the right spots to allow a dot product with the textures? Or did you use HLSL?

Sorry for the barrage of questions, but I'm really interested in this stuff, as you can tell :)
 
Yes I did notice your suggestion of using PTMs ;) Along with the downloads of the PTM stuff from HP there is a file that describes the file format - which makes using their fitter reasonably easy.

Ages ago when I first started playing with PTMs on a GF3 I used pixel shader asm that was pretty optimal (with per PTM scaling and bias as well - I think it was about 6(?) PS 1.1 instructions) but now I use HLSL with a Radeon 9600 - much easier for prototyping ;)

I'm confused by your comment: "Your lighting algorithm is not quite right, though. A point that is off the ground can still be shadowed by a higher point, yet that first point will return an offset".

The function F(u, v, phi, theta) lets us determine, given a direction vector, which texel (due to the height displacements) is the first one visible along that direction. So if you put the tangent-space light vector into the function instead of the eye vector, it will tell you what, from the light's viewpoint, is first occluding your current texel, if anything - if the light can see your texel you get zero offset in your parallax displacement function.

Of course I may be missing something and be being thick ;)

Using another PTM for the diffuse light intensity is of course a possibility - in fact at some point I want to have something up and running that uses 3 textures for view-dependent parallax displacement map and diffuse light PTMs, and another 3 textures for view-dependent transparency (think clouds/ice) and view-dependent specular gloss PTMs.

It's a pain about specular, as I haven't figured out yet how to do specular decently in combination with this, apart from modulating 'normal' specular lighting by a view-dependent specular gloss map. Ideally you want a 6D texture ;)

I think I'm going to be spending my time soon writing an app to (volume) raytrace heightfields - and allow custom procedural functions for various surface parameters (view-dependent transparency etc.) to generate the data with - it's a pain that one doesn't exist already!

Warrick.
 
Regarding the file format, I'm such an idiot. You're right, a link is there called "Download PTM file format document (.pdf file, 26 KB)". I just looked in the zips and didn't find anything.

The problem with your lighting is as follows. F tells you the height of the occluding pixel, from which you can find out what you should look up, so this is effectively a precomputed raytrace. The ray for your light vector doesn't start at the occluderUV in your code. You need to find a ray that starts at a height of zero at a different coordinate, passes through the point you wound up rendering, and then possibly passes through another occluding object (in which case it is shadowed).

I'll try describing the correct method with pseudocode.
Code:
h1 = PTM(UV, EyeVec)                            // height of the texel the eye actually sees
occluderUV = UV + h1 * EyeVec                   // parallax-corrected lookup coordinate

colour = diffusemap(occluderUV)
h2 = heightmap(occluderUV)                      // height of that visible texel
occluderOccluderUV = occluderUV - h2 * LightVec // height-zero start of the light ray through it
h3 = PTM(occluderOccluderUV, LightVec)          // height first hit along the light direction

if (h3 > h2) pixel is shadowed                  // the light ray hits something higher first

Note that occluderOccluderUV is calculated with a minus sign, as we need to find the coordinates to which this pixel would have been rendered had it been viewed from the light source.

As you can see, this is very difficult for me to explain without a picture, and I probably did a crappy job of it. I don't have any web space right now, but if you let me know your email via PM, I'll send you a diagram. In any case, it doesn't seem to be worth it beyond academic curiosity.

When you say "view dependent specular gloss", do you mean anisotropic gloss + variable fresnel? That's pretty crazy. I don't know how much of an effect that would have over more traditional methods, but it would be interesting to see.

Regarding specular, doesn't the half vector work well with the diffuse PTM? You have to recompute the lu lv products again, and I guess if your diffuse PTM includes object interreflection then it won't be correct either. Specular is a real pain in the ass for SH lighting, too. Guess you can't win 'em all.
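
Something like this sketch is what I have in mind - ptmPoly() just stands in for whatever six-term evaluation you already do for the diffuse PTM, so treat the names as placeholders:

Code:
float specularPTM(float2 uv, float3 tsLight, float3 tsEye)
{
    float3 h    = normalize(normalize(tsLight) + normalize(tsEye)); // tangent-space half vector
    float3 quad = float3(h.x * h.x, h.y * h.y, h.x * h.y);          // hu^2, hv^2, hu*hv
    float3 lin  = float3(h.x, h.y, 1.0);                            // hu, hv, 1
    return ptmPoly(uv, quad, lin);  // same PTM coefficients, evaluated with the half vector
}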

EDIT: clarity problems.
 
Yes, you are right about the self shadowing - it is more complicated, as you say. I'll think more on this before blurting out again ;) - but as you say, use a PTM for the lighting intensity and the problem goes away.

Regarding the half-way vector for specular - doh! It may have been mentioned in the paper, because looking at my old code from a year or so ago, when I first started playing with this stuff, I see I had toyed with it. Digging that code out, I was using this pixel shader at the time for evaluating PTMs:

; a0*lu2 + a1*lv2 + a2*lulv + a3*lu + a4*lv + a5
;
; v1 = lu2, lv2, lulv
; v0 = lu, lv, 1, lz

ps.1.1

tex t0 ; RGB
tex t1 ; ABC (a0, a1, a2)
tex t2 ; DEF (a3, a4, a5)

sub_x2  r0, t1, c0      ; per-PTM bias/scale on the quadratic coefficients
dp3_sat r0, v1_bx2, r0  ; a0*lu2 + a1*lv2 + a2*lulv

sub_x2  r1, t2, c1      ; per-PTM bias/scale on the linear coefficients
dp3_sat r1, v0_bx2, r1  ; a3*lu + a4*lv + a5

add r0, r0, r1          ; full polynomial
mul r0, r0, t0          ; modulate by the RGB map

mul r0, r0, v0.w        ; multiply by lz

(I think I may have even got all or part of that from Adam Moransky)


Warrick
 
I would love to see some screenshots of this, especially compared to offset/parallax mapping... (yes, that was a pretty please :D )

Serge
 
Warrick said:
; v1 = lu2, lv2, lulv

Hmm, I don't think this is a good idea unless you have a highly tessellated mesh. Obviously those three quantities do not vary linearly across the face of a polygon, so you'd get some artifacts similar to, but probably worse than, what you get when you don't normalize per pixel in dp3 lighting. I don't have first-hand experience, however.

I was thinking of something more along the lines of the following, as it only needs one more instruction in the PS.
Code:
; v0 = lu, lv, 1
; v1 = lu, lu, lv
; v2 = lu, lv, lv

mul r1, v1, v2 ; r1 now has lu2, lulv, lv2

Come to think of it, I did this before on paper, so I don't know why I asked you about that. I think I was trying to optimize higher order polynomials, like a*lu4 + b*lu3lv + c*lu2lv2 + d*lulv3 + e*lv4 + f*lu3 + g*lu2lv + h*lulv2 + i*lv3 +j*lu2 + k*lulv + l*lv2 +m*lu +n*lv + o. That was a bit more of a challenge to fit into a Radeon 8500's instruction count.

As you can see, I really liked the idea behind PTMs at the time, and my imagination was running wild, but then I was introduced to spherical harmonics. They are immensely powerful for lighting -- much more so than PTMs, as they allow dynamic light environments rather than point lights. However, "offset mapping" only needs one eye, so it's fine here.

I think higher order polynomials (cubic or quartic) might be worth it though, because the offset amount can change rather suddenly and frequently with viewing direction.
 
Playing around more with the parallax offset hack, it becomes a bit horrible if you rotate an object in more than one direction at once - somewhat 'swimmy'. I also implemented bumpy edges by using texkill if my offset sent the texel out of the texture coordinate range 0-1, but _only_ if the offset was in the direction _away_ from the viewer. Notions of whether or not you have 'wrapping' on the displacement map become interesting from a modelling standpoint. It still looks cool though ;)
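
A minimal sketch of that kill test (names and the 'away from the viewer' sign convention are assumptions, not the actual code) - uv is the original coordinate, newUV is wherever the parallax offset moved it, and tsEye is the tangent-space eye vector:

Code:
void bumpyEdgeKill(float2 uv, float2 newUV, float3 tsEye)
{
    float2 offset  = newUV - uv;                            // how far the parallax lookup moved
    bool   outside = any(newUV < 0.0) || any(newUV > 1.0);  // left the 0-1 range
    bool   away    = dot(offset, tsEye.xy) < 0.0;           // offset points away from the viewer (assumed convention)
    clip((outside && away) ? -1.0 : 1.0);                   // HLSL equivalent of texkill
}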

I'm using a PTM for the lighting and self shadowing (although my app that calculates the self shadowing for the PTM is point sampling at the moment :( ) and soon shall have a PTM to calculate the offset (at which stage I shall put a demo up somewhere). Also I have to 'fix' the lighting for the displaced texel in my shader at some point, as at the moment it's still using the light vector to the occluded texel.

I started to look through the Siggraph 2003 VDM paper, and the whole issue of surface curvature is a mind-bending one. I'm thinking of attempting the 5D offset function approximation F(u, v, phi, theta, c) with a PTM and two 3D textures, where each 3D texture has axes (u, v, c) and gives the coefficients of the PTM. I'm still scratching my head somewhat about actually calculating a measure of curvature in the pixel shader, but could be being thick again ;)
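
Roughly what I have in mind (a sketch only - the names are made up and getting the curvature value c into the shader is the open question):

Code:
sampler3D coeffVolA; // a0, a1, a2 as a function of (u, v, c)
sampler3D coeffVolB; // a3, a4, a5 as a function of (u, v, c)

float curvatureOffsetScale(float2 uv, float curv, float3 dir)
{
    float2 d   = normalize(dir).xy;                       // projected eye/light direction
    float3 abc = tex3D(coeffVolA, float3(uv, curv)).rgb;  // PTM coefficients for this curvature
    float3 def = tex3D(coeffVolB, float3(uv, curv)).rgb;

    return dot(abc, float3(d.x * d.x, d.y * d.y, d.x * d.y))
         + dot(def, float3(d.x, d.y, 1.0));
}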

(Edit: Just noticed in the VDM paper they compute curvature per vertex and interpolate...)
 
A teaser ;) :


PTMWithParallaxHack.jpg
 
Cool it deforms the edges :)

But it seems to me that there must come a point where it's just better to make a high-poly model?
 
arrrse said:
Cool it deforms the edges :)

But it seems to me that there must come a point where it's just better to make a high-poly model?

the point is where you have radical surface changes, a.k.a. overlaps..

these are only "up-downs" on a flat cube. doing those with high-poly would be simply stupid overhead.


btw, the pic looks great, congrats!
 
davepermen said:
arrrse said:
Cool it deforms the edges :)

But it seems to me that there must come a point where it's just better to make a high-poly model?
the point is where you have radical surface changes, a.k.a. overlaps..
Well, you also have to consider z-buffer optimizations...doing a technique like this would destroy those, which may make it better to just go for higher-poly models. However, I don't see any reason why hardware can't be made to work well with silhouette rendering...I just doubt it would today.
 
Chalnoth said:
Well, you also have to consider z-buffer optimizations...doing a technique like this would destroy those, which may make it better to just go for higher-poly models. However, I don't see any reason why hardware can't be made to work well with silhouette rendering...I just doubt it would today.
Because you have to have feedback which also depends on memory accesses and making that fast would be a complete pain in the sphincter. :?
 
davepermen said:
arrrse said:
Cool it deforms the edges :)

But it seems to me that there must come a point where it's just better to make a high-poly model?

the point is where you have radical surface changes, a.k.a. overlaps..

these are only "up-downs" on a flat cube. doing those with high-poly would be simply stupid overhead.

What about modeling the edges but leaving the faces flat, using bump maps to create the bumpiness on the faces?
 
I agree that the technique is cool and probably has a future.

But it doesn't look good as is. The edges have stones cut in the middle, and the adjoining edges don't match.

And I wonder if it would be possible to map it so they match...
 