Is per-pixel lighting just a technique or part of a technique in lighting?

I do know what per-pixel lighting does, but what I meant is: is it a part of something like, say, HDR, or is it its own form of lighting? I know there are tons of lighting techniques out there (I'm just using HDR as an example), but does each of them have two different ways (per-pixel or per-vertex) to do the lighting, or is per-pixel a technique all on its own?
 
Basically, the "old way" of doing lighting was to calculate the intensity of the lights at each vertex of the triangle and then interpolate that intensity across the whole triangle. It worked, but there was very little detail across a triangle.

The new way is to do the lighting calculation for every pixel individually. The result is that you can express far more detail within a triangle, since the lighting is recalculated for each pixel.

Fundamentally it's the same lighting equation being used; how often you run it determines how much detail you get.
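
Here's a minimal sketch of that difference in Python, assuming a plain Lambert (N · L) diffuse term and sampling along one edge of a triangle. The vertex normals and light direction are made up just for illustration; a real renderer would do this in a shader across the whole rasterized primitive.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def lambert(normal, light_dir):
    # Basic diffuse term: N . L, clamped to zero.
    return max(np.dot(normalize(normal), light_dir), 0.0)

# Hypothetical data: normals at two vertices of a triangle edge and a directional light.
light_dir = normalize(np.array([0.0, 0.0, 1.0]))
n0 = normalize(np.array([-0.8, 0.0, 0.6]))   # normal at vertex 0
n1 = normalize(np.array([ 0.8, 0.0, 0.6]))   # normal at vertex 1

for t in np.linspace(0.0, 1.0, 5):           # samples between the two vertices
    # Per-vertex ("old way"): run the equation at the vertices, interpolate the result.
    per_vertex = (1 - t) * lambert(n0, light_dir) + t * lambert(n1, light_dir)

    # Per-pixel ("new way"): interpolate the normal, run the equation at every sample.
    per_pixel = lambert((1 - t) * n0 + t * n1, light_dir)

    print(f"t={t:.2f}  per-vertex={per_vertex:.3f}  per-pixel={per_pixel:.3f}")
```

With these made-up normals the per-vertex result is a flat 0.6 everywhere, while the per-pixel version brightens toward 1.0 in the middle of the edge, where the interpolated surface actually faces the light. Same equation, just evaluated more often.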
 
The reason for interpolating normals across a primitive instead of colour is that colour is the output of the lighting equation, while the normal is the surface information the equation needs as input.
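
One practical consequence, continuing the made-up vertex normals from the sketch above: the interpolated normal generally isn't unit length anymore, so it has to be renormalized before it's fed back into the lighting equation at each pixel.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

n0 = normalize(np.array([-0.8, 0.0, 0.6]))
n1 = normalize(np.array([ 0.8, 0.0, 0.6]))

halfway = 0.5 * n0 + 0.5 * n1        # what interpolation hands the per-pixel stage
print(np.linalg.norm(halfway))       # ~0.6, shorter than unit length
print(normalize(halfway))            # renormalized before it's used for lighting
```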
 

It's mainly a speed/quality tradeoff. Probably the earliest example is Gouraud (per-vertex) vs. Phong (per-pixel) shading.
 