In the interview, Carmack talks about the Doom 3 rendering techniques:
It's a generalized unified lights/shadows rendering technique.
The first pass lays down the basic physical properties of the object. (Presumably color. And also dp3 bump mapping, I guess.)
The second pass lays down an invisible shadow volume in the stencil buffer.
The third pass paints the light onto the object. (Using projected textures.)
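As a rough sketch of what "projected textures" means here: a world-space point is transformed by the light's projection matrix, then the perspective divide and a bias map clip space into texture space. The matrix values below are illustrative, not Doom 3's:

```python
# Hypothetical sketch of projective texturing: transform a point by the
# light's projection matrix, then remap clip space [-1, 1] to texture
# space [0, 1]. The frustum values here are made up for illustration.

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix (nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A simple perspective projection looking down -z with a 90-degree FOV
# (near=1, far=100), standing in for the light's frustum.
near, far = 1.0, 100.0
proj = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
    [0, 0, -1, 0],
]

def light_texcoords(world_point):
    x, y, z, w = mat_vec(proj, world_point + [1.0])
    # Perspective divide, then bias from [-1, 1] into [0, 1].
    return (x / w * 0.5 + 0.5, y / w * 0.5 + 0.5)

# A point straight ahead of the light projects to the texture center.
s, t = light_texcoords([0.0, 0.0, -10.0])
```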
Passes two and three are repeated for each set of non-overlapping lights. (For example, you could design a scene with 1 main light and lots of little lights. If the little lights don't overlap each other, but do overlap the main light, then you could render the whole scene in five passes (1 + 2 * 2).)
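A minimal sketch of that pass-count arithmetic, assuming lights are greedily grouped into batches of mutually non-overlapping lights (the batching function and overlap sets here are hypothetical, not Doom 3's actual scheme):

```python
# Hypothetical greedy batching: group lights so that no two lights in a
# batch overlap, then count 1 base pass plus a stencil pass and a
# lighting pass per batch.

def batch_lights(num_lights, overlaps):
    """overlaps is a set of frozensets {i, j} of lights that overlap."""
    batches = []
    for light in range(num_lights):
        for batch in batches:
            if all(frozenset((light, other)) not in overlaps
                   for other in batch):
                batch.append(light)
                break
        else:
            batches.append([light])
    return batches

def total_passes(batches):
    return 1 + 2 * len(batches)

# One main light (0) overlapping two little lights (1, 2) that do not
# overlap each other: two batches, so 1 + 2 * 2 = 5 passes.
overlaps = {frozenset((0, 1)), frozenset((0, 2))}
batches = batch_lights(3, overlaps)
```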
The second pass requires generating a shadow volume, which can be done either on the CPU or the GPU. Using the CPU is slightly faster, but of course eats up the whole CPU.
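The usual CPU-side approach is to find the silhouette edges of each occluder with respect to the light (edges shared by one light-facing and one light-averted triangle) and extrude them away from the light. This is a generic sketch of that idea, not Doom 3's code; the mesh, light position, and extrusion distance are made-up illustrative values:

```python
# Hypothetical CPU sketch: find silhouette edges of a closed mesh with
# respect to a point light, then extrude them away from the light to
# form the side quads of a shadow volume.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])

def silhouette_edges(verts, faces, light):
    """Edges shared by one light-facing and one light-averted triangle."""
    facing = []
    for i0, i1, i2 in faces:
        n = cross(sub(verts[i1], verts[i0]), sub(verts[i2], verts[i0]))
        facing.append(dot(n, sub(light, verts[i0])) > 0)
    edge_faces = {}
    for f, (i0, i1, i2) in enumerate(faces):
        for a, b in ((i0, i1), (i1, i2), (i2, i0)):
            edge_faces.setdefault(frozenset((a, b)), []).append(f)
    return [tuple(sorted(e)) for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]

def extrude(verts, edge, light, dist):
    """Quad extruding one silhouette edge away from the light."""
    a, b = verts[edge[0]], verts[edge[1]]
    far_a = tuple(p + dist * d for p, d in zip(a, sub(a, light)))
    far_b = tuple(p + dist * d for p, d in zip(b, sub(b, light)))
    return (a, b, far_b, far_a)

# Tetrahedron with outward-wound faces; the light high above sees only
# one face, so the silhouette is that face's three edges.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (1, 2, 3), (2, 0, 3)]
light = (0.1, 0.1, 10.0)
edges = silhouette_edges(verts, faces, light)
```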
Much of the hard work in Doom 3's rendering engine involves:
+ Good algorithms for generating shadow volumes out of a dynamically changing world.
+ Good algorithms for re-optimizing triangle meshes that are generated during the lighting passes. He believes he has a general way of doing this.
+ How to build a BSP without regenerating vertices.
+ How to generate completely optimized shadow beam trees. (Which would help when rendering static lighting scenes.)
Off-the-cuff comments include:
+ Typical scene needs 5 rendering passes, 9 textures. That fits in with the earlier comments of 1 base pass + 2 * lights.
My question is, how do the 9 textures work? 1 base + 1 bump map + 2 lights * 1 projected light texture each is only four textures. What are the other five textures used for?
+ Curved surfaces are a waste of time, both for scenes and for characters.