Per Pixel Lighting / Normal Mapping

gnuyen

This is a n00b question, but I've been scouring the internet and can't find a straight answer: why is per-pixel lighting (normal maps etc.) feasible? I don't understand why polys are so much more expensive than a dot3 on each pixel.

Also, is normal mapping in engines like Half-Life 2 and Doom 3 done per texel, or per pixel as the name suggests?

Thanks a bunch!!!
 
gnuyen said:
This is a n00b question, but I've been scouring the internet and can't find a straight answer: why is per-pixel lighting (normal maps etc.) feasible? I don't understand why polys are so much more expensive than a dot3 on each pixel.
They aren't; it's just that per-pixel lighting looks better. FWIW, you still have to do a fair amount of per-vertex setup to get per-pixel dot-product shading, e.g. transforming the light vector at each vertex into a local (tangent-space) coordinate system.
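A rough sketch of that setup, in Python rather than shader code just for illustration. The function names and the identity tangent basis are my own assumptions, not anything from a real engine; the point is only the split between per-vertex work (transforming the light into tangent space) and per-pixel work (the dot3 against the sampled normal):

```python
# Sketch: per-vertex setup for tangent-space (dot3) per-pixel lighting.
# Assumes each vertex carries a precomputed tangent basis (T, B, N).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def to_tangent_space(v, tangent, bitangent, normal):
    """Express a world-space vector v in the vertex's tangent space."""
    return (dot(v, tangent), dot(v, bitangent), dot(v, normal))

# Per vertex: transform the light direction into tangent space;
# the rasterizer then interpolates it across the triangle.
light_dir = normalize((0.0, 1.0, 1.0))
tangent, bitangent, normal = (1, 0, 0), (0, 1, 0), (0, 0, 1)  # identity basis for the demo
light_ts = normalize(to_tangent_space(light_dir, tangent, bitangent, normal))

# Per pixel: N dot L against the normal decoded from the normal map.
sampled_normal = normalize((0.1, 0.2, 1.0))
intensity = max(0.0, dot(sampled_normal, light_ts))
```

The expensive transform runs once per vertex; the per-pixel cost is just a dot product, which is why the hardware can afford it at every pixel.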
 
Size and bandwidth are another compelling reason. Sending XYZ + NORM + DIFFUSE down the pipe is 28 bytes per vertex, so if you wanted per-pixel resolution from a per-vertex model you'd effectively need the equivalent of 24 bytes/pixel, compared with around 8 bytes for a normal-mapping approach.
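For concreteness, here is how the 28-byte figure breaks down, assuming 32-bit floats for position and normal and a packed 32-bit colour (my guess at the layout, not stated in the post):

```python
# Back-of-envelope vertex size: position + normal + diffuse.
position_bytes = 3 * 4  # XYZ, three 32-bit floats
normal_bytes   = 3 * 4  # normal, three 32-bit floats
diffuse_bytes  = 4      # packed RGBA colour

vertex_bytes = position_bytes + normal_bytes + diffuse_bytes
print(vertex_bytes)  # 28
```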

There are loads and loads of other reasons - pick up a decent book on the fundamentals of how the graphics pipeline works (esp. rasterization and transforms) and you'll probably start to see why.

It's still a trade-off though. There isn't a "one size fits all" answer to this sort of thing. There are lots of optimizations to be had by moving work between the VS and PS based on the ratio of invocations.

hth
Jack
 
To echo the posters above and add to their points, three factors strike me as important:
1) Information density is much, much better. You can have a (full quality) normal in 2 to 3 bytes, while vertices may easily reach 32+ bytes.
2) New techniques, such as parallax occlusion mapping, remove most of traditional normal maps' limitations.
3a) Normalmaps and similar approaches have "perfect" LODing, and it's called mipmapping ;)
3b) The transitions are always smooth, while it is hard to create good-looking geometry LOD transitions. Rendering a 1M-polygon model at close distance is feasible, but rendering 100 of them 250m away is not, and continuous LOD at those polycounts is stupidly expensive for geometry.
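To put rough numbers on the mipmapping point in 3a (a sketch of my own, assuming a square power-of-two texture): each mip level halves the resolution, so a 2048x2048 normal map gets a whole LOD ladder for about a third more storage.

```python
# Mip chain for a 2048x2048 normal map: each level halves the resolution.
size = 2048
levels = []
while size >= 1:
    levels.append(size)
    size //= 2

print(len(levels))  # 12 levels: 2048, 1024, ..., 1

# Total texels across all mips, relative to the base level:
total = sum(s * s for s in levels)
print(total / (2048 * 2048))  # ~1.333, i.e. ~33% overhead for the full chain
```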

To illustrate these points and others, imagine a 2048x2048 16-bit parallax/displacement map with in-shader normal computation and parallax occlusion mapping. Aside from certain "external" edges, you'll get quality similar to having 4 *MILLION* polygons. The full texture only takes 8MB, and no real further LOD scheme is required beyond perhaps impostors. I'd love to see REYES beating that! *grins*


Uttar
 
I have two other reasons to post:
1. To achieve the quality of high-resolution normal mapping, you actually need to have sub-pixel triangles, so the information density can be even worse than some of the above posters have mentioned.
2. If you go the route of per-pixel lighting, it becomes relatively cheap to implement more complex lighting effects than just bump mapping.
 