bump mapping, normal mapping, offset mapping, and....

PowerK

Hi, folks.

Is there any good article that explains the differences between bump mapping, normal mapping, offset mapping, and virtual displacement mapping?

It seems to me that they are all designed to give textures a more 3D look, but I'm not sure what the fundamental differences between them are.

I've recently found out the difference between bump mapping and normal mapping by watching this video clip, a normal mapping demonstration video taken from Riddick:
http://www.gametrailers.com/gt_vault/t_chroniclesofriddick_efbb_tech.wmv

Cheers.

EDIT: bad link corrected.
 
Bump mapping: Includes many techniques that are designed to fake a non-flat appearance on a flat surface.

Normal mapping: One specific implementation of bump mapping that calculates lighting per pixel to simulate bumps. Basically, a normal map stores the desired "texture" of a surface, which the hardware uses for proper lighting.
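
For the curious, here's roughly what that per-pixel calculation boils down to. A minimal C++ sketch (the Vec3 type and the function name are made up for illustration): sample a texel from the normal map, decode it from the 0..255 range back to a -1..1 vector, and dot it with the light direction.

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Decode a normal-map texel (channels in 0..255) into a vector in the
// -1..1 range, then compute the Lambertian diffuse term against the
// tangent-space light direction.
float normalMapDiffuse(unsigned char r, unsigned char g, unsigned char b,
                       const Vec3& lightDir /* normalized */) {
    Vec3 n = { r / 255.0f * 2.0f - 1.0f,
               g / 255.0f * 2.0f - 1.0f,
               b / 255.0f * 2.0f - 1.0f };
    return std::max(0.0f, dot(n, lightDir)); // clamp light from behind to 0
}
```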

Offset mapping and virtual displacement mapping: The same thing. This technique goes under a few other names as well. It is the same as normal mapping, but with the added consideration that since the surface isn't smooth, the positions of its pixels shouldn't be calculated as if it were. So the texture coordinates are offset by an amount dependent on the height of the surface at that point and on the viewing angle, in order to improve the illusion given by normal mapping.
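
And here is a minimal sketch of the offset step itself, assuming a height map and a tangent-space view vector (the names and the scale/bias constants are illustrative, not from any particular engine):

```cpp
struct Vec2 { float u, v; };
struct Vec3 { float x, y, z; };

// Classic parallax/offset mapping: shift the texture coordinate along
// the viewer's tangent-space direction, proportionally to the height
// sampled at the original coordinate. scale/bias tune the strength.
Vec2 parallaxOffset(Vec2 uv, float height /* 0..1 from the height map */,
                    const Vec3& viewDir /* normalized, tangent space */) {
    const float scale = 0.04f, bias = -0.02f; // typical small values
    float h = height * scale + bias;
    return { uv.u + h * viewDir.x, uv.v + h * viewDir.y };
}
```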
 
My grasp of it is:

Normal maps: Since the reflection of light depends on the angle at which the light hits the surface, normal maps provide this information by storing the surface normal at each point in the texture. So when lighting is done against the textured object, it is the varying angle of reflection of the light that gives the ordinary flat texture more apparent geometric detail.

Displacement mapping: The end result is the same as above, but instead of manipulating reflections, the vertices of the object are directly moved (displaced) by some value stored in the displacement map. Of course, since the object's shape is changed, the way it reflects light changes as well. This method should be more accurate than normal maps, since the lighting trick won't be effective at certain angles, e.g. at grazing angles or along silhouettes, where the lack of real geometry becomes obvious.
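
A minimal sketch of the displacement idea, with made-up names (in practice this would run in a vertex shader or at mesh-build time):

```cpp
struct Vec3 { float x, y, z; };

// Displacement mapping: actually move the vertex along its normal by
// the value sampled from the displacement map, instead of only faking
// the bump in the lighting.
Vec3 displaceVertex(Vec3 position, const Vec3& normal /* normalized */,
                    float displacement /* sampled from the map */,
                    float scale /* artist-chosen strength */) {
    position.x += normal.x * displacement * scale;
    position.y += normal.y * displacement * scale;
    position.z += normal.z * displacement * scale;
    return position;
}
```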

All of this would be much better understood with diagrams and I'm sure there is a good article out there somewhere.
 
Chalnoth said:
Bump mapping: Includes many techniques that are designed to fake a non-flat appearance on a flat surface......

Nice summary, Chalnoth.
 
Working clicky

That really is a pretty inaccurate explanation, though. For one, they claim that normal mapping stores both depth/displacement info and information about "complex angles and phases", whatever that means. They're right that it's suitable for reducing the geometric detail of objects while retaining lighting quality, though. You could do the same thing with a bump map, but you'd need more pixel shader math, and you'd have to use more bits for the bump channel than the traditional eight.
 
Chalnoth's point about bump mapping is that there isn't just one technique for it. Of course, the most common is dot3 bump mapping (per-pixel Phong-style lighting), which is used in normal mapping as well. There's also emboss bump mapping, but I don't think anyone uses that anymore for 3D applications.
 
I apologize for the bad link, guys.

OK, it's clearer now. :)

One thing, though. Am I correct in assuming that any VPU fully supporting DirectX 7 is capable of normal mapping? (If my memory serves, bump mapping was first introduced in DX7.) And does offset mapping (aka virtual displacement mapping) require a VPU with Shader Model 2.0 capability (DirectX 9)?
 
PowerK said:
I apologize for the bad link, guys.
OK, it's clearer now. :)
One thing, though. Am I correct in assuming that any VPU fully supporting DirectX 7 is capable of normal mapping? (If my memory serves, bump mapping was first introduced in DX7.)

Yes and no. You're also limited in the number of operations you can do in the pixel shader (well, its equivalent, which is the texture stage states), which limits the usefulness of normal mapping. Some DX7-class hardware can't do a dot3 operation between colors. There is no such thing as "fully DX7", just as there is no such thing as "fully DX9"; dot3, at least, is not a requirement.
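
To make "dot3 operation between colors" concrete: the fixed-function dot3 texture op treats both color arguments as biased-signed vectors and dots them. Roughly what the hardware computes, emulated in C++ from memory of the D3D DOTPRODUCT3 behaviour (so take the exact scale and clamping with a grain of salt):

```cpp
#include <algorithm>

struct Color { float r, g, b; }; // channels in 0..1

// Emulate the DOTPRODUCT3 texture-stage op: interpret both arguments
// as biased-signed vectors (0.5 == zero), dot them, scale the result
// back up, and replicate the scalar to all color channels.
Color dot3Op(const Color& a, const Color& b) {
    float d = (a.r - 0.5f) * (b.r - 0.5f)
            + (a.g - 0.5f) * (b.g - 0.5f)
            + (a.b - 0.5f) * (b.b - 0.5f);
    float s = std::clamp(4.0f * d, 0.0f, 1.0f);
    return { s, s, s };
}
```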

PowerK said:
And offset mapping (aka virtual displacement mapping) requires VPU with Shader Model 2.0 capability (DirectX 9) ?

Under very limited conditions, some people did something similar on the Xbox (GeForce 3/4 hardware).

Before we were able to do dependent texture reads per pixel, what you could also do was store a view-dependent texture that you select upstream in the pipeline.
 
The most important difference IMHO is that no artist could paint a normal map, so it has to be generated from either a grayscale height map (that could be hand-painted) or 3D geometry.
 
Laa-Yosh said:
The most important difference IMHO is that no artist could paint a normal map, so it has to be generated from either a grayscale height map (that could be hand-painted) or 3D geometry.

I agree. The most significant thing about normal mapping is that the information is (in most cases) from a massively higher polygon model.
 
JF_Aidan_Pryde said:
I agree. The most significant thing about normal mapping is that the information is (in most cases) from a massively higher polygon model.
Just to clarify.

Normal maps can come from anywhere (they can even be generated in real time); the technique of generating bump maps from high-poly models is "Appearance Preserving Simplification". It works just as well on Blinn bump maps as on normal maps.

Most normal maps so far aren't from APS but are hand-created heightfields. It's also worth noting that the Dreamcast supported normal maps in hardware, and Trespasser did them in software a few years back (though IIRC both did them in polar coordinates rather than Cartesian coordinates).
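
Just to illustrate what normals in polar coordinates means, here's a generic spherical-to-Cartesian conversion (a generic convention, not necessarily the exact encoding those platforms used):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Convert a polar (spherical-coordinate) normal to Cartesian form.
// theta: angle from the +Z axis; phi: angle around the Z axis.
// Two angles suffice because a normal is unit length.
Vec3 polarToCartesian(float theta, float phi) {
    float s = std::sin(theta);
    return { s * std::cos(phi), s * std::sin(phi), std::cos(theta) };
}
```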
 
Given that emboss bump mapping uses the 8-bit alpha channel for 256 levels of 'bumpiness', can someone clarify the format of the normal map?

Is it 24-bit RGB or 32-bit RGBA?

Which channel is used to store what information?

Where does this information come from?

And is this a standard format for storing normal maps, or does different software use different methods?
 
Normal maps can be stored in various formats. The traditional method is a 24-bit RGB texture representing an XYZ vector. Sometimes the alpha channel is used to store a specular or gloss map. Recently, ATI has been promoting the use of a 2-channel, 32-bit texture -- 16 bits per channel. These two channels represent the x and y components of a normalized vector; the remaining z component is derived from x and y.
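
In code, recovering the z component works out to something like this (a hypothetical helper, assuming the two channels have already been mapped to -1..1):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Reconstruct a unit normal from its stored x and y components.
// Since the normal is unit length and assumed to point outward
// (z >= 0), z = sqrt(1 - x^2 - y^2).
Vec3 decodeXYNormal(float x, float y) {
    float zz = std::max(0.0f, 1.0f - x * x - y * y); // guard against rounding
    return { x, y, std::sqrt(zz) };
}
```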
 
Virtual displacement mapping is a more elaborate form of offset mapping than what's in use at the moment: better results, but less practical.
 
Alstrong said:
has it been used in a game (on the xbox)?
I think The Chronicles of Riddick: Escape from Butcher Bay is one of the first games to use normal mapping (for APS, as described earlier) on the Xbox. Halo 2 will, as well as a few other upcoming games, I believe.

EDIT: Corrections in bold.
 
ninelven said:
Alstrong said:
has it been used in a game (on the xbox)?
I think the Pitch Black game is one of the first to use normal mapping on the Xbox. Halo2 will, as well as a few other upcoming games I believe.

Err, just about every Xbox title since release uses normal mapping (including at least Halo 1 and Shrek 1).

Now if you're talking 'Appearance Preserving Simplification', then maybe...

And yes, I am going to correct you every time until you get it right.
 
Ostsol said:
Normal maps can be stored in various formats. The traditional method is a 24-bit RGB texture representing an XYZ vector. Sometimes the alpha channel is used to store a specular or gloss map. Recently, ATI has been promoting the use of a 2-channel, 32-bit texture -- 16 bits per channel. These two channels represent the x and y components of a normalized vector; the remaining z component is derived from x and y.

While on the topic of normal storage spaces/formats: the XY storage space with posterior Z derivation has a nice little advantage over the rest of the normal storage formats when it comes to texture filtering: it flawlessly handles interpolation between normals in situations where other storage formats would have a hard time interpolating, i.e. between opposing normals (yes, that can actually happen).

So one day you've got two neighbouring normals n0 = (-1, 0) and n1 = (1, 0), and a misfortunate pixel right amidst those two, ni = 0.5 n0 + 0.5 n1 -- i.e. ni must get a bloody norm of zilch, right? Have no worry: the interpolation result (0, 0) in XY space automatically gets you a nice normal of (0, 0, 1) in 3D space. For comparison, with both XYZ-space and polar-space interpolation logic you'd be swinging a big spiked club against an opposing-normals case.
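
That worked example in runnable form (just illustrating the math above):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Interpolate two opposing XY-stored normals halfway, then derive z.
int main() {
    float x0 = -1.0f, y0 = 0.0f;       // n0
    float x1 =  1.0f, y1 = 0.0f;       // n1
    float xi = 0.5f * x0 + 0.5f * x1;  // filtered x = 0
    float yi = 0.5f * y0 + 0.5f * y1;  // filtered y = 0
    float zi = std::sqrt(std::max(0.0f, 1.0f - xi * xi - yi * yi));
    std::printf("interpolated normal: (%g, %g, %g)\n", xi, yi, zi);
    // Prints (0, 0, 1): a valid unit normal instead of the degenerate
    // zero vector that straight XYZ interpolation would produce.
    return 0;
}
```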
 