Difference between Bumpmapping and Normal mapping?

Wanderer

Newcomer
This came up in an irc session and quite frankly I don't know what the difference is. From what I can understand, both essentially create the illusion of more polygons by affecting the lighting and shadow on a surface. That and bumpmaps tend to be painted while normal maps are created in 3D apps and then turned into textures (not that there's anything stopping you from creating bumpmaps the same way...).
 
The simple answer is that it's just different nomenclature.

Bump maps usually refer to heights encoded into a texture; normal maps refer to directly encoding per-pixel normals.

Normal maps are really the differential of the bump map.

Since the shading calculations require the normal rather than the pixel height, it is more usual to use a normal map to do bump mapping on hardware.
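The "differential" relationship above can be sketched numerically. This is an illustrative Python sketch (not from the thread, and the function name is made up): it converts one texel of a height field into a normal by taking central differences, which is essentially what offline bump-to-normal-map converters do:

```python
import math

def height_to_normal(height, x, y, scale=1.0):
    """Approximate the surface normal at texel (x, y) of a 2D height
    field by central differences -- the 'differential' of the bump map.
    height is a list of rows, indexed as height[y][x]."""
    h, w = len(height), len(height[0])
    # Central differences in x and y, clamped at the borders.
    dhdx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * 0.5
    dhdy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * 0.5
    # The normal is perpendicular to the tangents (1, 0, s*dhdx) and (0, 1, s*dhdy).
    nx, ny, nz = -scale * dhdx, -scale * dhdy, 1.0
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

A flat height field yields the unperturbed normal (0, 0, 1); a slope in x tilts the normal away from the slope direction.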
 
Normal maps encode the X, Y, Z components of the normal vector for a pixel in the color channels (i.e. red = X, green = Y, blue = Z). Bump maps use the greyscale value of the color in a bitmap to determine the amount of relief a pixel has. You can use the base texture you draw on your geometry for the bump map, but a normal map has to be computed separately.
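The red = X, green = Y, blue = Z packing can be sketched like this (a hypothetical Python helper pair, not from the thread): each normal component in [-1, 1] is remapped to an 8-bit channel in [0, 255], and the lookup simply reverses it:

```python
def encode_normal(n):
    """Pack a unit normal (components in [-1, 1]) into 8-bit RGB:
    red = X, green = Y, blue = Z, with -1..1 remapped to 0..255."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in n)

def decode_normal(rgb):
    """Unpack 8-bit RGB back into an approximate unit normal."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)
```

This is why flat regions of a typical normal map look pale blue: the unperturbed normal (0, 0, 1) encodes to roughly (128, 128, 255).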
 
Crusher said:
Normal maps encode the X, Y, Z components of the normal vector for a pixel in the color channels (i.e. red = X, green = Y, blue = Z). Bump maps use the greyscale value of the color in a bitmap to determine the amount of relief a pixel has.
Is this emboss bumpmapping you are describing? "Real" DOT3 bumpmapping uses a normal map. DX8 and DX9 allow a bunch of different texture formats that can handle the vectors needed for DOT3 bumpmapping. Here are some examples: D3DFMT_V8U8, D3DFMT_L6V5U5, D3DFMT_X8L8V8U8, D3DFMT_Q8W8V8U8, D3DFMT_V16U16, D3DFMT_W11V11U10, D3DFMT_A2W10V10U10, and D3DFMT_Q16W16V16U16.

Normal mapping = DOT3 bumpmapping, as far as I know.
 
Well, ultimately, the end result is to compute the normal per pixel and use the normal for lighting calculations. Normal maps are direct: store the normal in a map.

But there are other ways to encode the surface: for example, a height map, which is a scalar map that just stores the height per pixel and is used to perturb the interpolated normal. The only advantages of this are that it is easy to "paint" as a greyscale map or sample from photography, and that you can pack it into an existing texture by using the alpha channel.

Another technique is to store the partial derivatives of the surface for each pixel.

But you are right, the end result is the same: N dot L. The only difference between the techniques is how to calculate N.
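The common end point described above, the clamped N dot L term, can be sketched in a few lines (an illustrative Python snippet, not from the thread; both vectors are assumed to be unit length):

```python
def diffuse(normal, light_dir):
    """Lambertian diffuse term: max(N . L, 0).
    Both arguments are assumed to be unit-length 3-vectors."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(n_dot_l, 0.0)
```

Whether N came from a normal map lookup, a perturbed height map, or stored partial derivatives, this last step is the same.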
 
I guess it depends on how you interpret his question. A normal map is a map of normals, usually a bitmap that encodes the normal data in the color. A traditional "bump map" is a map that contains pixel height information, typically a bitmap that encodes the height information in the greyscale component of the color. The actual effect that has been labeled as "bump mapping" can be done using either of these sources, in various ways as DemoCoder points out.

Since there is no effect called "normal mapping" that is analogous to "bump mapping", I assumed he was asking about the difference between a "bump map" bitmap and a "normal map" bitmap. I would classify a "bump map" as a bitmap similar to a height map, but used for per-pixel heights instead of per-vertex heights, and uni-directional; a normal map, by contrast, stores the normal for each pixel in three dimensions.
 
Wanderer said:
This came up in an irc session and quite frankly I don't know what the difference is. From what I can understand, both essentially create the illusion of more polygons by affecting the lighting and shadow on a surface. That and bumpmaps tend to be painted while normal maps are created in 3D apps and then turned into textures (not that there's anything stopping you from creating bumpmaps the same way...).
Wanderer,
The original bump map implementation was published by James Blinn in 1978: "Simulation of Wrinkled Surfaces". It used a texture of displacements to perturb the surface normal while rendering.

In 1997 Peercy et al. published "Efficient Bump Mapping Hardware". Instead of perturbing the surface normal on-the-fly, it stored a map of pre-perturbed normals, i.e. what is now being called the normal map.

They are both bump mapping.

(FWIW, at almost the same time, the bump mapping method in Dreamcast was independently invented. It also used 'Normal textures' but stored in a different way).
 
Bump mapping is just a generalised term. What is contained within a bump map is determined by the method by which it is applied to form an apparent bump. E.g. dot3-based "bump mapping" requires some form of normal map to dot light vectors with, EMBM requires a UV perturbation map, etc.

John
 
I think I have a pretty good idea what normal mapping is; you light each pixel as if the normal was the value in the normal map instead of the value for the polygon.

I'm less clear on how EMBM works. I imagine it's something simpler.
 
antlers4 said:
I'm less clear on how EMBM works. I imagine it's something simpler.

EMBM grabs a texture sample from the bump map, rotates it with a 2x2 matrix, adds the result to the texture coordinate for the base texture, and grabs the base texture sample using that new texture coordinate.
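Those steps can be sketched directly (an illustrative Python snippet, not from the thread; `base_texture` and `bump_map` are hypothetical samplers, callables mapping (u, v) to a value):

```python
def embm_sample(base_texture, bump_map, u, v, rot):
    """EMBM-style lookup: read a (du, dv) perturbation from the bump map,
    transform it with a 2x2 rotation/scale matrix, offset the base texture
    coordinate, then sample the base texture at the perturbed coordinate."""
    du, dv = bump_map(u, v)
    offset_u = rot[0][0] * du + rot[0][1] * dv
    offset_v = rot[1][0] * du + rot[1][1] * dv
    return base_texture(u + offset_u, v + offset_v)
```

Note that nothing here involves a normal or a dot product; the "bump" is faked purely by shifting where the base (often environment) texture is read.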
 
I've always been at a loss as to why they called it EMBM; it's just UV perturbation, of which environment-mapped bump mapping is one of many possible applications...
 