Socos said:
I propose that if you made a game that took every DX9 option and enabled it, it would kill a 9800XT.
Graphics APIs aren't a set of options that can simply be turned on and off. In fact, it's conceivable to write a program that tries to display an infinite number of pixels in a single frame (of course, it would never run to completion).
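To make that concrete, here's a minimal sketch, assuming era-appropriate GLUT and legacy OpenGL (my illustration, nobody's real code): the display callback never returns, so a single "frame" accumulates unbounded fill work and the buffer swap is never reached.

```cpp
// Sketch: a single "frame" with unbounded fill work (GLUT + legacy OpenGL).
// The display callback never returns, so glutSwapBuffers() is never reached;
// the GPU is handed an endless stream of fullscreen quads for one frame.
#include <GL/glut.h>

static void display(void) {
    for (;;) {                      // never terminates: infinite pixels, one frame
        glBegin(GL_QUADS);          // one fullscreen quad in clip space
        glVertex2f(-1.f, -1.f);
        glVertex2f( 1.f, -1.f);
        glVertex2f( 1.f,  1.f);
        glVertex2f(-1.f,  1.f);
        glEnd();
    }
    // glutSwapBuffers();           // unreachable: the frame never completes
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
    glutCreateWindow("one frame, unbounded pixels");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```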
So yes, with any graphics API, you can kill performance on any graphics card, if that is what you desire.
In this way, "making full use" of an API doesn't really mean all that much. The developers are still trying to display actual graphics, and there is a plethora of different ways to render any given scene.
The only comparison that makes any sense is this: when development began, what baseline hardware did the developers assume the game would run on? That determines the subset of rendering algorithms that are feasible, and the algorithm actually chosen determines how well the engine can scale to more powerful hardware.
DOOM3, for instance, was developed on the assumption that shadow volume generation would need to be done on the CPU. That precluded any sort of higher-order surfaces: if the GPU tessellated the geometry, the CPU would never see the final triangles it needs to extrude the volumes from. If the NV40 and R420 can efficiently calculate shadow volumes on the GPU, then JC's choice of rendering algorithm will have limited how well the DOOM3 engine translates to more powerful hardware.
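Here's a rough sketch of that CPU-side work, assuming a triangle mesh with precomputed edge adjacency; the types and helper names are illustrative, not id's actual code. The key point is that the CPU needs the final, post-animation triangle positions to find silhouette edges, which is exactly what it wouldn't have if the GPU were tessellating higher-order surfaces.

```cpp
// Sketch of CPU-side shadow-volume silhouette extraction for a point light.
// Types and names are illustrative; this is not id Software's actual code.
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Tri  { int v[3]; };
struct Edge { int v0, v1; int tri0, tri1; };   // precomputed adjacency

// Emit one extruded quad (four vertices) per silhouette edge: an edge
// where one adjacent triangle faces the light and the other faces away.
std::vector<Vec3> extrudeSilhouette(const std::vector<Vec3>& verts,
                                    const std::vector<Tri>&  tris,
                                    const std::vector<Edge>& edges,
                                    Vec3 lightPos, float extrude) {
    // Classify every triangle as light-facing or not.
    std::vector<bool> facing(tris.size());
    for (size_t i = 0; i < tris.size(); ++i) {
        Vec3 a = verts[tris[i].v[0]], b = verts[tris[i].v[1]], c = verts[tris[i].v[2]];
        Vec3 n = cross(sub(b, a), sub(c, a));
        facing[i] = dot(n, sub(lightPos, a)) > 0.0f;
    }

    std::vector<Vec3> quads;
    for (const Edge& e : edges) {
        if (facing[e.tri0] == facing[e.tri1]) continue;  // not a silhouette edge
        Vec3 p0 = verts[e.v0], p1 = verts[e.v1];
        // Push far copies of the edge away from the light. Finite extrusion
        // for simplicity; DOOM3 extrudes to infinity in homogeneous coords.
        Vec3 d0 = sub(p0, lightPos), d1 = sub(p1, lightPos);
        Vec3 q0 = {p0.x + d0.x * extrude, p0.y + d0.y * extrude, p0.z + d0.z * extrude};
        Vec3 q1 = {p1.x + d1.x * extrude, p1.y + d1.y * extrude, p1.z + d1.z * extrude};
        quads.push_back(p0); quads.push_back(p1);
        quads.push_back(q1); quads.push_back(q0);        // one quad per edge
    }
    return quads;  // rebuilt per frame, per light: this is the CPU cost
}
```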
Similarly, JC was forced to use stencil-buffer shadows, because that was the only shadowing technique available on his baseline hardware.
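For reference, the stencil technique amounts to state setup like the following sketch in raw OpenGL (the depth-fail variant often called "Carmack's reverse"); drawShadowVolumes() is a hypothetical helper standing in for the volume geometry submission:

```cpp
// Sketch: depth-fail stencil shadow pass in OpenGL. Assumes a GL context,
// a scene already rendered into the depth buffer, and a user-supplied
// drawShadowVolumes() that submits the extruded volume triangles.
#include <GL/gl.h>

void drawShadowVolumes();  // hypothetical: renders the volume geometry

void stencilShadowPass() {
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // stencil only
    glDepthMask(GL_FALSE);                                // keep scene depth

    glEnable(GL_CULL_FACE);

    // Pass 1: back faces increment stencil where the depth test fails.
    glCullFace(GL_FRONT);
    glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);
    drawShadowVolumes();

    // Pass 2: front faces decrement stencil where the depth test fails.
    glCullFace(GL_BACK);
    glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);
    drawShadowVolumes();

    // The lighting pass then draws only where stencil == 0 (unshadowed).
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_EQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
}
```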
Now, DOOM3 will be pretty slow on the original GeForce and Radeon not because it "fully uses DX7," but because the chosen rendering algorithm requires many passes on that hardware (not to mention DOOM3 doesn't use DirectX at all...). It has nothing to do with which options are turned on; the original GeForce and Radeon simply lack the flexibility to run JC's rendering algorithm efficiently.
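As a rough illustration of where those passes pile up (structure only, with hypothetical function names): the stencil and lighting work repeats once per light, and on fixed-function DX7-class hardware the lighting pass itself has to be split into several blending passes, because there are no fragment programs and only a couple of texture units.

```cpp
// Sketch: per-light multipass structure of a unified-lighting engine.
// Names are illustrative, declared but not defined. On programmable
// hardware, lightingPass() can be a single pass; on fixed-function
// DX7-class parts it must itself be split into several blending passes
// (bump, diffuse, specular), multiplying the total pass count.
void clearStencil();
void stencilShadowPass();      // two volume passes (see sketch above)
void lightingPass(int light);  // one pass with fragment programs,
                               // several without

void renderFrame(int numLights) {
    // depth/ambient pre-pass fills the depth buffer once, then:
    for (int light = 0; light < numLights; ++light) {
        clearStencil();        // reset shadow state for this light
        stencilShadowPass();   // mark shadowed pixels in the stencil buffer
        lightingPass(light);   // additively blend this light's contribution
    }
}
```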