Offset Mapping

Dio

Veteran
Andy found this today. This is astounding.

offsetmapping.jpg


http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/011292.html

The basics of the technique are to displace the texture coordinates such that the texture occludes itself in accordance with a height map. If you look along the edges of the bumpy bits you can see the slight smearing effect (which doesn't look nasty as it follows the 'grain' of the overlying image).

The results are, I think you'll agree, very impressive. Total cost? 3 instructions in the pixel shader (if you already have a tangent space vector).
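For the curious, what those instructions compute can be sketched on the CPU like this (hypothetical Python; the function and parameter names are illustrative, and the 0.04/-0.02 scale-and-bias matches the shader code posted further down):

```python
def offset_texcoord(u, v, height, view_xy, scale=0.04, bias=-0.02):
    """Displace (u, v) along the tangent-space view direction.

    height  : height-map sample at (u, v), in [0, 1]
    view_xy : (x, y) components of the tangent-space view vector
    """
    h = height * scale + bias                    # scale and bias
    return (u + h * view_xy[0], v + h * view_xy[1])
```

The three shader instructions are exactly: one height-map lookup, one scale-and-bias MAD, and one MAD that adds the scaled view vector to the original texture coordinate.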

<edit: fixed link, thanks to those who pointed it out>
 
Hmmm. I can't get to that link.

Is this the same as Wang et al's "View-Dependent Displacement Mapping" that was at Siggraph 2003? If so, it seemed to a few people at the lecture that there was a chance that holes could accidentally appear in the images.
 
I don't think so - since there is no change to the geometry, but only to the texture sampling locations you shouldn't get any holes.
 
Link doesn't work. Are the developer boards gone?

But from your description, it sounds like an approximate solution, which could mean that the stones will "move" when you change camera position.

I was thinking about an effect like this for the shader competition here, but instead of using the height from the texel that is initially at the pixel position, I wanted to use the height of the texel after displacement. To do this I wanted a Newton-Raphson iteration or two (starting from what I assume the linked technique does), and then use that displacement for all texture reads (normal map, color, whatever).

But since I had no DDX/DDY for the NR steps, I scrapped the idea.

I must admit that the image looks really nice. Maybe it looks ok in motion too, but if it's as simple as you say, then it can't be correct.
It would be nice to see it in action.
 
It's certainly not 'correct' - there's a link to a Windows executable demo at:

http://www.bostream.nu/tunah/offsetmap.zip

Basically I think it'll look ok as an approximation for small displacements - just enough to give a hint of additional depth to brick/stone etc. If you try to make the displacements larger then you'll start getting significant visible anomalies in the shading that will spoil the effect, but it's so cheap to implement that I'm sure there are creative uses of this scheme that will look good.
 
andypski said:
I don't think so - since there is no change to the geometry, but only to the texture sampling locations you shouldn't get any holes.
Ahh. I hadn't noticed that on the silhouette edges there is no displacement.

I wonder how it compares to Polynomial Texture Maps?

I also just looked at the paper by Wang et al. Their system needs about 40 pixel shader instructions to do the displacements, but it does have bumpy edges. For those who're interested, their paper is here
 
andypski said:
If you try to make the displacements larger then you'll start getting significant visible anomalies in the shading that will spoil the effect, but it's so cheap to implement that I'm sure there are creative uses of this scheme that will look good.
Is there no way to similarly displace the shading, for a few extra instructions, to hide that problem?
 
Bigtime DOH! No DDX/DDY needed. Might need more iterations, but each step is simpler.

Replace the displacement step:
Code:
TEX height, fragment.texcoord[0], texture[2], 2D;
MAD height, height, 0.04, -0.02; # scale and bias
MAD newtexcoord, height, eyevects, fragment.texcoord[0];

With multiple steps of the same kind:
Code:
TEX height, fragment.texcoord[0], texture[2], 2D;
MAD height, height, 0.04, -0.02; # scale and bias
MAD newtexcoord, height, eyevects, fragment.texcoord[0];

TEX height, newtexcoord, texture[2], 2D;
MAD height, height, 0.04, -0.02; # scale and bias
MAD newtexcoord, height, eyevects, fragment.texcoord[0];

TEX height, newtexcoord, texture[2], 2D;
MAD height, height, 0.04, -0.02; # scale and bias
MAD newtexcoord, height, eyevects, fragment.texcoord[0];

(But be careful with maximum texture dependency level on R300.)

I think it should give a better result. There are still errors at edges, but the error should at least be smaller.
Maybe it's better to start with a positive LOD bias and reduce it to 0 over the iterations; that could help convergence.

I don't have a computer that I can test it on nearby though.
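In lieu of testing it, the iteration can be sketched on the CPU (hypothetical Python; `height_at` stands in for the height-map lookup, and everything else mirrors the repeated TEX/MAD/MAD blocks above):

```python
def iterated_offset(u, v, height_at, view_xy,
                    steps=3, scale=0.04, bias=-0.02):
    """Fixed-point iteration: re-sample the height at the displaced
    coordinate, but always displace from the original (u, v)."""
    nu, nv = u, v
    for _ in range(steps):
        h = height_at(nu, nv) * scale + bias     # scale and bias
        nu, nv = u + h * view_xy[0], v + h * view_xy[1]
    return nu, nv
```

With a smoothly varying height field, each step moves the sample point closer to the coordinate whose displaced height actually maps back to the pixel, which is exactly the self-consistency the single-step version lacks.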
 
Can this be combined with per-pixel lighting, or does all lighting have to be pre-baked into the underlying texture?
 
It's all about per-pixel operations, but not necessarily per-pixel lighting.

But yes, it's possible to combine it with pretty much any other per-pixel lighting effect. Once you've calculated the displacement, you just use it to displace all texture accesses - base map, normal map (for specular or reflective bump mapping), gloss map, horizon map, or whatever.
 
Simon F, I don't think Polynomial Texture Maps are well suited to solving what this technique does. PTMs are good for approximating a smooth lighting function with more flexibility than a dot product (though I think SH lighting is more effective and robust), but you couldn't really use them to change which part of a texture you're looking at. You could parameterize the colour channels, but it wouldn't look very good if you tried to make them change colour from displacement. You'd probably just wind up with a blur.
 
I think I spoke too soon. PTMs could be used to generate a much better offset coordinate, and should get rid of a lot of the nastiness of this technique. Things might look like they're morphing a bit, but it should look very good.

Instead of a simple MAD to change the coordinate, you could have a precomputed 4D table approximated with PTMs and get a nice view-dependent function to generate the offset.
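For illustration (hypothetical Python, not taken from any paper): a PTM stores six polynomial coefficients per texel and evaluates a biquadratic in a 2D direction - here the tangent-space view direction instead of the usual light direction - so each offset component would come out of something like:

```python
def ptm_eval(coeffs, vx, vy):
    """Evaluate the standard PTM biquadratic basis in the
    tangent-space view direction (vx, vy)."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return (a0 * vx * vx + a1 * vy * vy + a2 * vx * vy
            + a3 * vx + a4 * vy + a5)
```

Two such evaluations (one set of coefficients per offset axis) would replace the single scale-and-bias MAD, giving a per-texel, view-dependent offset rather than one global linear ramp.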

Using spherical harmonics might give an even better function, but it would probably be quite expensive to beat PTMs.
 