GameSpy: Are you going to retire after DOOM 3?
John Carmack: No. I've got at least one more rendering engine to write. Engine development is driven largely by what's happening in hardware: when you finish a game, the question is whether it's time to write a new engine, and the answer depends on what the hardware space is doing.
Previously, it was just about what was happening on CPUs. Do we have 32-bit CPUs? Do they have floating point? Then we got graphics cards, and that stayed the same for a number of years. Then we got some important new features in the graphics hardware which basically engendered the DOOM engine: cube mapping, dot-product blending, and geometry acceleration. That was an important set of features, and it was enough to make it worth writing a new engine.
DOOM is going to be in use for a long time, but just this year, hardware has passed a really significant threshold with floating-point pixel formats and generalized dependent texture reads. These are features that demand a new engine be written.
It's particularly significant because, with intermediate buffers, those are the only features necessary to actually implement anything: you can decompose Pixar RenderMan shaders into multiple passes. That doesn't mean they can run in real time, but the fact that they can be calculated on a graphics card at all has a wide range of implications for the graphics pipeline. It's going to impact both real-time rendering and offline rendering. There is going to be an interesting convergence.
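The decomposition Carmack describes can be sketched in miniature. This is a hypothetical illustration (not id Software code, and not a real shader compiler): a long shader expression is split into passes, each writing its result to a full-precision floating-point intermediate buffer that the next pass reads back, so no precision is clamped away between passes.

```python
# Hypothetical sketch: emulating a long shader via multiple short passes
# with a floating-point intermediate buffer. Each pixel is a tuple of
# (diffuse, texture, specular) inputs.

def single_pass(pixels):
    # The "ideal" shader: diffuse * texture + specular, all in one pass,
    # clamped to displayable range at the end.
    return [min(1.0, d * t + s) for d, t, s in pixels]

def multi_pass(pixels):
    # Pass 1: modulate diffuse by the texture, storing the result in a
    # floating-point intermediate buffer (no 8-bit clamp, so nothing
    # is lost between passes).
    buf = [d * t for d, t, _ in pixels]
    # Pass 2: read the intermediate buffer back and add the specular
    # term, clamping only at the very end.
    return [min(1.0, b + s) for b, (_, _, s) in zip(buf, pixels)]

pixels = [(0.8, 0.5, 0.3), (1.0, 1.0, 0.2), (0.1, 0.9, 0.05)]
assert single_pass(pixels) == multi_pass(pixels)
```

The key point is the intermediate buffer: with 8-bit fixed-point buffers, values outside 0..1 (or fine gradations inside it) would be destroyed between passes, which is exactly why floating-point pixel formats matter for this decomposition.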
DOOM does a lot to use these features, but it still relies on the notchy functionality of previous-generation graphics cards, where you had a fixed set of features and could use combinations of them, but you could not do exactly what you wanted.
On the very latest cards, with the combination of those features -- floating point, dependent texture reads, and the ability to use intermediate results -- you can now write really generalized things, and that is appropriate. You might use 50 or 100 instructions in some really complex game shader; but if the engine is architected right, you would be able to use the exact same engine, media creation, and framework, and architect the whole thing to do TV-quality or eventually even movie-quality rendering that might use thousands of instructions and render at ridiculous resolutions. The ability to use the same tools across that entire spectrum is going to be a little different from what we have now.
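The "same engine, same tools" idea above can be sketched as follows. This is a hypothetical toy (the function names and the procedural shader are invented for illustration): one shader function and one render entry point, where the only things that change between the real-time tier and the offline tier are the output resolution and the shader's instruction budget.

```python
import math

def shade(u, v, terms):
    # A procedural shader: sum the first `terms` harmonics of a series.
    # A real-time path might afford a few terms; an offline path many more.
    return sum(math.sin((k + 1) * (u + v)) / (k + 1) for k in range(terms))

def render(width, height, terms):
    # The engine core is identical for both tiers; only the resolution
    # and the per-pixel instruction budget differ.
    return [[shade(x / width, y / height, terms)
             for x in range(width)] for y in range(height)]

realtime = render(4, 4, terms=2)      # low resolution, short shader
offline  = render(16, 16, terms=50)   # high resolution, long shader
```

The design point is that quality becomes a parameter of one pipeline rather than a property of two separate codebases, which is the convergence between game and film rendering being described.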