I look at what each offers for analysis, and the information it gives the reader:
- Splinter Cell
Extensive shader utilization, and a known render-to-texture stress (as I understand it, that's how the shadow technique common to all cards works).
- Tomb Raider
Extensive PS 2.0 utilization, and an opportunity for contrast between Cg and HLSL on nVidia cards until Cg is no longer a factor. There is some concern about future patches, though, so whether we'll be able to evaluate HLSL improvements and have game issues addressed is an open question.
- UT2003
Extensive stress of DX 7 class performance, in terms of bandwidth, fillrate, and AF quality/multiple-texture-layer evaluation. It still seems quite informative to me, especially with new vendors offering cards with unknown characteristics. Also, it seems a waste that no further advantage was taken of the extensive benchmarking data output it offers (this should somewhat go for Splinter Cell as well, I think?).
- Wolfenstein: ET
OpenGL performance evaluation. (This is the one I think should go...I somewhat agree that balancing OpenGL and Direct3D just for the sake of maintaining a specific count is not important...some useful and informative representation is all that seems important to me).
- SS:SE
OpenGL performance evaluation, seemingly more finely controlled than Quake III engine games IMO. Also, DX/OGL contrast opportunities along with UT 2k3...this seems an interesting opportunity for AF quality analysis. What counts against it is that it might not be stressful enough, and that it has been a known "benchmark target" for a very long time.
I actually think Savage, as Doom brought up, might be worth looking at to replace it, since UT 2k3 would still offer the API contrast opportunity. Whether it will be a good benchmark or not remains to be seen, I think.
My question for Aquanox 3, as a graphics card benchmark (this "synthetics are worthless" mantra is disturbing to me, Ailuros...do you have another complaint against it?), is that I don't know of any unique PS 2.0 effects it offers. I know it has "2.0 version" shaders, but my impression of the usage is "implementing the same thing in differing shader versions".
This doesn't seem stressful to me, and (as far as I understand) it leaves the opportunity for misrepresentation of "DX version" wide open. OTOH, it also seems uniquely suited as a "same workload" benchmark, as long as it isn't presumed to represent "DX 9" (IOW, I don't think this would be a problem for an OpenGL equivalent...which makes my issue one of presentation, not usage). If my understanding is correct, I think this actually recommends it as a unique perspective as a graphics card benchmark, as long as whatever these shader characteristics might be are represented accurately (a rough sketch of what I mean by "same thing in differing shader versions" is at the end of this post).
Of course, this is just based on my informal understanding and what I've noticed (or not noticed) in screenshots so far, and may be incorrect...evaluate accordingly.
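To illustrate the "same thing in differing shader versions" point: here's a hypothetical minimal sketch (not taken from Aquanox, just my own example) of the kind of trivial pixel shader that compiles under both the ps_1_1 and ps_2_0 profiles. Nothing about the "2.0" label implies a heavier workload if the math is the same.

```hlsl
sampler DiffuseMap : register(s0);

// Vertex-computed lighting arrives in the COLOR0 interpolator; the pixel
// shader just modulates the base texture by it, which ps_1_1 can already do.
float4 main(float2 uv : TEXCOORD0, float4 vertexLight : COLOR0) : COLOR
{
    return tex2D(DiffuseMap, uv) * vertexLight;
}

// Compiled with "fxc /T ps_1_1" or "fxc /T ps_2_0", the result looks the
// same; calling the second one a "2.0 shader" says nothing about stress.
```

If Aquanox's 2.0 paths are along these lines (again, just my assumption from screenshots), then it's a fair "same workload across shader versions" comparison, but not a "DX 9" stress test.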