Relief mapping: beyond parallax mapping

Wow, awesome!

Too bad self-shadowing generation requires an extra ~400 instructions, and the technique can't run on a card that only supports the Pixel Shader 2.0 specification. Another thing that sucks is that it only works on planar surfaces.

But damn does it look better than regular bump mapping, normal mapping and parallax mapping.

~600 instructions, hehe. You'll want an NV3x, NV4x or R42x for something like that.

Relief mapping... one step ahead of parallax mapping, and one step closer to displacement-map-quality geometry detail, all stored in texture data instead of having to process all those vertices, which of course kills your frame rate and is very limited.
 
Ray tracing takes a lot more operations than adaptive view-dependent displacement mapping... if the hardware could do it, it would be better to just have the vertices.
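For intuition about where those hundreds of instructions go: the core of relief mapping is a per-pixel ray march against a height field, a coarse linear search followed by a binary-search refinement. Here is a rough CPU-side sketch in Python; the 1D height function, step counts and depth convention are all made up for illustration (a real shader samples a 2D relief texture in tangent space):

```python
# Sketch of the relief-mapping intersection search that runs per pixel
# on the GPU, simulated here over a tiny procedural 1D height field.
def height(u):
    # hypothetical height field in [0, 1]; a real shader samples a texture
    return 0.6 - 0.2 * u

def relief_march(entry_u, exit_u, linear_steps=8, binary_steps=6):
    """Find where the view ray first dips below the height-field surface.

    The ray starts at depth 0 at entry_u and reaches depth 1 at exit_u.
    """
    prev_u, prev_d = entry_u, 0.0
    u, d = exit_u, 1.0
    # linear search: march until the ray depth passes the stored surface depth
    # (too few steps can skip thin features, a classic relief-mapping artifact)
    for i in range(1, linear_steps + 1):
        t = i / linear_steps
        u = entry_u + (exit_u - entry_u) * t
        d = t  # ray depth grows linearly along the march
        if d >= 1.0 - height(u):  # gone below the surface?
            break
        prev_u, prev_d = u, d
    # binary search: refine between the last point above and first point below
    lo_u, lo_d, hi_u, hi_d = prev_u, prev_d, u, d
    for _ in range(binary_steps):
        mid_u = 0.5 * (lo_u + hi_u)
        mid_d = 0.5 * (lo_d + hi_d)
        if mid_d >= 1.0 - height(mid_u):
            hi_u, hi_d = mid_u, mid_d
        else:
            lo_u, lo_d = mid_u, mid_d
    return hi_u
```

With the linear height field above, the ray depth `u` meets the surface depth `0.4 + 0.2u` at exactly `u = 0.5`, which the search converges to.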
 
This (non-real-time) relief mapping was presented a few years ago, I believe (yep, just checked: it references the original paper).

It had some cool movies (not real-time) back then. IIRC the authors had a slightly dodgy real-time version working at the time.

Still not convinced that this kind of advanced bump mapping has a future when displacement maps give so much better results for little (if any) extra cost.

Still a very pretty ALU stress tester.
 
DeanoC said:
Still not convinced that this kind of advanced bump mapping has a future when displacement maps give so much better results for little (if any) extra cost.

I really fail to see how you get that displacement maps have little or no cost. Try 4x ATI TruForm on a card other than the 8500/9200 and see if there is a cost.
 
bloodbob said:
DeanoC said:
Still not convinced that this kind of advanced bump mapping has a future when displacement maps give so much better results for little (if any) extra cost.

I really fail to see how you get that displacement maps have little or no cost. Try 4x ATI TruForm on a card other than the 8500/9200 and see if there is a cost.

He is comparing it to a solution that requires literally hundreds of instructions per pixel.
 
ERP said:
bloodbob said:
I really fail to see how you get that displacement maps have little or no cost. Try 4x ATI TruForm on a card other than the 8500/9200 and see if there is a cost.

He is comparing it to a solution that requires literally hundreds of instructions per pixel.
And how many instructions do you think it would take for the CPU to subdivide every surface and normalise the new vertices so that there was at least one triangle per pixel (which is what is done in displacement mapping)? Don't forget the trilinear filtering that has to be done on the CPU either, and don't forget the CPU also has a lot less floating-point power.

If you want to argue this can be done on the GPU, then yes, you could use vertex texturing, but it's only point sampling, you would still have to subdivide every surface to one or more triangles per pixel, and you would probably end up vertex-shader limited.
 
I don't think he was thinking about the CPU doing it at all.

Neither solution is particularly practical right now; by the time GPUs are fast enough to do this sort of "advanced bump mapping" in any real scenario, they will most likely support programmable tessellation.
 
bloodbob said:
And how many instructions do you think it would take for the CPU to subdivide every surface and normalise the new vertices so that there was at least one triangle per pixel (which is what is done in displacement mapping)? Don't forget the trilinear filtering that has to be done on the CPU either, and don't forget the CPU also has a lot less floating-point power.

If you want to argue this can be done on the GPU, then yes, you could use vertex texturing, but it's only point sampling, you would still have to subdivide every surface to one or more triangles per pixel, and you would probably end up vertex-shader limited.
You don't have to subdivide a surface to at least one triangle per pixel in order to do displacement mapping. You don't have to subdivide at all, although displacement mapping without subdivision is pointless in many situations. Regardless, the subdivision doesn't need to be to pixel size.
 
3dcgi said:
You don't have to subdivide a surface to at least one triangle per pixel in order to do displacement mapping. You don't have to subdivide at all, although displacement mapping without subdivision is pointless in many situations. Regardless, the subdivision doesn't need to be to pixel size.

Well, you do if you want per-pixel displacement such as these demos can do. If we are going to compare apples to apples then we need to do this per pixel. After all, you could calculate the sample offsets at a 16x16 resolution, store them in a texture, and then render the scene applying offsets from the downsampled version; I'm sure it would run hellishly quick and would actually beat parallax mapping.
 
Will we be seeing things like this in next-gen consoles? Stuff like that seems just as, if not more, important than squeezing out a few thousand more polygons per second.



That is some CRAZY stuff!!!!
 
Does anyone know if they can tile these suckers? If the relief only works "within" the texture, it would not be very useful for tiled wall, floor and ceiling textures (which would be its main use, I'd think).
 
bloodbob said:
I really fail to see how you get that displacement maps have little or no cost. Try 4x ATI TruForm on a card other than the 8500/9200 and see if there is a cost.
You can do TruForm, except for the tessellation, completely in vertex shader 1.1; it's quite expensive (part of my ShaderX2 article did just this). Using render-to-vertex and VS 3.0 it could be made much faster: certainly an order of magnitude faster than 200+ instructions per pixel.

bloodbob said:
And how many instructions do you think it would take for the CPU to subdivide every surface and normalise the new vertices so that there was at least one triangle per pixel (which is what is done in displacement mapping)?
You don't have to tessellate to two triangles per screen pixel; that's one particular method of rendering a displacement map. It's a cool and very high quality technique, but it isn't required.
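To make that concrete: an adaptive tessellator usually picks a subdivision depth from the patch's screen-space size against some target triangle size, rather than going all the way to pixel size. A minimal sketch in Python, where the function name, target size and clamp are all made up:

```python
import math

# Pick a subdivision depth so subdivided edges land near a target screen
# size. Each subdivision level halves an edge, so the depth is a log2,
# clamped so a huge patch can't explode the triangle count.
def subdivision_depth(edge_len_px, target_px=8.0, max_depth=6):
    if edge_len_px <= target_px:
        return 0  # patch is already small enough on screen
    depth = math.ceil(math.log2(edge_len_px / target_px))
    return min(depth, max_depth)
```

A 64-pixel edge with an 8-pixel target needs 3 halvings; a tiny patch needs none; anything larger saturates at the clamp.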

bloodbob said:
If you want to argue this can be done on the GPU then yes you could use vertex texture but its only point sampling and you would still have to subdivide every surface to 1 or more per pixel and you would probably end up being vertex shader limited.

Even with 'just' SM3.0, for ShaderX3 I did a fully GPU-rendered, lit and modified displacement map. Considering it was even fairly fast on REFRAST, it screamed on NV40. Use render-to-vertex to do the filtering, calculate normals, etc.
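The point-sampling limitation mentioned above is usually worked around the same way: take four point samples around the coordinate and lerp manually. A small sketch in Python over a hypothetical 2D grid (a vertex shader would do the same arithmetic with four texture fetches):

```python
# Manual bilinear filter built from four point samples, the standard
# workaround when vertex texture fetch only supports point sampling.
def bilinear(tex, u, v):
    """tex: 2D grid of floats (rows of equal length); u, v: texel coords."""
    h, w = len(tex), len(tex[0])
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)  # clamp at the edge
    fx, fy = u - x0, v - y0  # fractional position inside the texel cell
    # lerp across x on both rows, then lerp the two rows across y
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Sampling dead-centre of a 2x2 grid returns the average of the four texels, as expected.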

Now move forward a year or two, which will be the time we can afford 200-600 instructions per pixel for this effect: you won't have separate vertex shaders (unification, with vertex texturing working the same as pixel texturing, i.e. filtering and virtually free bilinear texture sampling), you'll have some form of tessellation unit, and possibly (not sure about PC) arbitrary read/write memory access.

So you do something like this (off the top of my head, so it may have errors):
1st pass. Use a no-output shader to calculate the screen extents (and hence the amount of subdivision needed). Export this into the command buffer as the tessellation unit parameters.
2nd pass. Use a pixel-frequency shader to re-sample the displacement map into one sized to the screen extents.
3rd pass. Use a pixel-frequency shader to calculate vertex normals (if needed, i.e. for displacement maps that change in real time).
4th pass. Tessellate, and use a vertex-frequency shader to look up the resampled texture.
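Purely to illustrate the data flow of those passes (this is not GPU code, and every function name here is invented), the idea can be mocked up on the CPU roughly like this, with the normal-calculation pass omitted since it is only needed for dynamic displacement maps:

```python
# CPU mock-up of the multi-pass displacement idea: measure screen extents,
# resample the displacement map to match, then tessellate and displace.
def screen_extent(corners_px):
    # "1st pass": bounding size of the projected patch, which would drive
    # the tessellation unit's subdivision parameters
    xs = [c[0] for c in corners_px]
    ys = [c[1] for c in corners_px]
    return max(xs) - min(xs), max(ys) - min(ys)

def resample(disp, n):
    # "2nd pass": re-sample the square displacement map at the chosen
    # resolution (nearest-neighbour here; the real thing would filter)
    src = len(disp)
    return [[disp[min(int(j * src / n), src - 1)][min(int(i * src / n), src - 1)]
             for i in range(n)] for j in range(n)]

def tessellate_and_displace(disp_n):
    # "4th pass": emit an (n x n) vertex grid over a unit quad, displaced
    # along the quad's normal (+z here, so displacement becomes z); n >= 2
    n = len(disp_n)
    return [(i / (n - 1), j / (n - 1), disp_n[j][i])
            for j in range(n) for i in range(n)]
```

Driving the grid resolution from `screen_extent` instead of a fixed step is exactly what the command-buffer export in the 1st pass would buy you.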

Currently we can do most of the 2nd, 3rd and 4th passes (we have to use fixed subdivision steps).

Displacement mapping has got an aura of being massively expensive, so nobody even looks at it. I suspect displacement mapping can be achieved at roughly the same cost as a 400-instruction pixel shader (though of course it will stress other parts of the chip).
 