Ah, that explains your ignorance regarding game development!
Right, and you are?
Looks like you really have no clue what you are talking about.
Since I backed up every single thing I said about 3dmark03 with arguments, it would seem that I do. And why aren't you commenting on any of those points?
Most 3dmark03 scores are given at 1024x768 without AA/AF anyway, using relatively short rendered scenes with no interactivity, so where is the basis for comparison? Anyone who claims that Doom3 "doesn't stress the GPU" is probably not in their right mind anyway, because this is not a rational statement based on the data that we have seen so far.
I think you are somewhat confused here. The GPU is more than just a fillrate thingie. Doom3 doesn't use vertex shaders extensively, while, like pretty much any game, it will use all available fillrate once you turn the resolution up high enough.
3dmark03 uses the fillrate (more than Doom3, since it has no optimizations for its stencil shadows) as well as the vertex shader power, which relieves the CPU.
While both may be fillrate-limited at high resolutions, Doom3 isn't using the vertex shaders to their full potential. So Doom3 doesn't stress the GPU as much as 3dmark03 does.
And since I was discussing benchmark figures on my system at 640x480 without AA/AF, fillrate doesn't have much to do with it; it's mostly about whether the vertex shaders are up to the task of doing all the skinning and shadow volume generation faster than the CPU, which they are.
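To make it concrete: here is roughly what "CPU skinning and shadow volume generation" means per character, per light, per frame, sketched in C++ (the structures and names are mine for illustration, obviously not id's actual code):

```cpp
// Illustrative sketch only: what the CPU has to redo every frame for every
// animated character under a Doom3-style approach. None of this work depends
// on the resolution, which is why it dominates even at 640x480.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Bone   { float m[3][4]; };      // bone matrix (simplified)
struct Vertex { Vec3 pos; int bone; }; // one bone per vertex for brevity
struct Edge   { int tri0, tri1; };     // an edge and its two adjacent triangles

static Vec3 skin(const Bone& b, Vec3 v) {
    return { b.m[0][0]*v.x + b.m[0][1]*v.y + b.m[0][2]*v.z + b.m[0][3],
             b.m[1][0]*v.x + b.m[1][1]*v.y + b.m[1][2]*v.z + b.m[1][3],
             b.m[2][0]*v.x + b.m[2][1]*v.y + b.m[2][2]*v.z + b.m[2][3] };
}

// 1. CPU skinning: every vertex is transformed by its bone, every frame.
void SkinOnCPU(const std::vector<Vertex>& in, const std::vector<Bone>& bones,
               std::vector<Vec3>& out)
{
    out.resize(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = skin(bones[in[i].bone], in[i].pos);
}

// 2. Shadow volume generation: classify every triangle against the light,
//    then collect the silhouette edges (lit triangle on one side, unlit on
//    the other). Those edges get extruded away from the light to build the
//    stencil shadow volume. All of this is repeated on the CPU next frame.
void FindSilhouette(const std::vector<Vec3>& verts, const std::vector<int>& indices,
                    const std::vector<Edge>& edges, Vec3 lightPos,
                    std::vector<Edge>& silhouette)
{
    std::vector<bool> lit(indices.size() / 3);
    for (std::size_t t = 0; t < lit.size(); ++t) {
        Vec3 a = verts[indices[3*t]], b = verts[indices[3*t + 1]], c = verts[indices[3*t + 2]];
        lit[t] = dot(cross(sub(b, a), sub(c, a)), sub(lightPos, a)) > 0.0f;
    }
    silhouette.clear();
    for (const Edge& e : edges)
        if (lit[e.tri0] != lit[e.tri1])
            silhouette.push_back(e);
}
```

The point being: every extra character on screen repeats all of this on the CPU, regardless of resolution.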
Have you ever even implemented either method? Can you tell me how they work, and why a CPU-based method like Doom3's would stress the GPU at 640x480, then? If not, I will just assume that you are the clueless one here.
As for interactivity, it's a bit silly of you to mention it again, since we are talking about timedemos, which, as we know, aren't interactive either.
Again you are missing the point. Many, many reviewers have made it painfully obvious that Doom3 is quite "playable" using a variety of different cards and systems. That means playable in an interactive environment. Your basis for comparison is completely illogical.
I don't care about reviewers. YOU are missing the point. Doom3 is not playable on MY system.
Since you are so fond of criticizing other people's work, why don't you code up an interactive game yourself, and then we will see who is boss?
I already have a GPU-based shadowing system in my 3d engine, much like the one used in 3dmark03. It easily outperforms Doom3 on my system, like 3dmark03 does.
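In outline it's the standard vertex shader extrusion trick: the mesh is preprocessed with degenerate quads along every edge, each vertex carries the face normal of its triangle, and the shader (which also does the skinning) pushes vertices away from the light when that normal points away from it. Here is the per-vertex logic, modelled in C++ purely for illustration; in the engine it runs as a vertex shader, so the CPU does nothing per frame beyond setting the light constant:

```cpp
// Illustrative model of GPU-side shadow volume extrusion (the real thing is a
// vertex shader; this just shows the per-vertex test). Degenerate quads along
// every edge stretch into the volume sides wherever the test differs across an
// edge, i.e. exactly at the silhouette. No silhouette finding on the CPU.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Runs once per vertex, on the GPU, for every character and every light.
Vec3 ExtrudeVertex(Vec3 pos, Vec3 faceNormal, Vec3 lightPos, float extrudeDist)
{
    Vec3 toLight = sub(lightPos, pos);
    if (dot(faceNormal, toLight) >= 0.0f)
        return pos;                          // facing the light: stays put

    // facing away from the light: push outward along the away-from-light direction
    float len = std::sqrt(dot(toLight, toLight));
    Vec3 away = { -toLight.x / len, -toLight.y / len, -toLight.z / len };
    return { pos.x + away.x * extrudeDist,
             pos.y + away.y * extrudeDist,
             pos.z + away.z * extrudeDist };
}
```

It pushes a few times more vertices through the pipeline than the CPU method, hence "brute force", but on an R3x0-class card that is cheap, and the CPU is freed up completely.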
Do you want to compare it to your 3d engine?
It is very ignorant to believe that the major IHVs do not optimize for 3dmark03, even with the new guidelines. Ever wonder why NV and ATI scores sometimes jump up with new beta drivers, even though gaming performance is largely unchanged?
I don't care about beta drivers. They could have any number of experimental features, hacks, cheats, bugs, whatever. I am talking about WHQL drivers only. And in the case of ATi, those haven't changed performance in 3dmark03 very much over the years. No more than any other software anyway.
And since the FM guidelines pretty much rule out application-specific optimizations, and ATi still manages to get their drivers approved at every single release... what optimizations could they possibly have in the drivers that affect 3dmark03, but do not affect any other game?
Since you seem to imply that ATi has such optimizations, I'd like you to explain in detail what they are.
If you are consistently hitting into the 10fps range
Pay more attention: I get 10 fps or less with 3 or more characters on screen at the same time. When there are no enemies, I can easily get 60 fps.
it doesn't take a rocket scientist to figure out that you need to either: lower the in-game detail, lower the resolution settings
As I said, changing the in-game detail or resolution doesn't have any effect whatsoever.
upgrade your quantity of RAM
I have 768 MB of memory, which I believe is twice the recommended amount for Doom3, so I doubt that this is the problem.
upgrade your graphics card
Obviously, since my card easily gets 60 fps when there are no characters around and about 15-20 fps with 1-2 characters, and since changing the resolution or detail has no effect, I doubt that an even faster video card would improve performance at all.
Looks like you are not a very good problem solver if you can't figure this out. Instead of constantly complaining about your problem, how about working to solve your problem?
I have already solved the problem: CPU skinning and shadow volume extrusion are not a good idea on a CPU of less than 2.5 GHz. And on any R3x0-based card or better, GPU-based code IS a good idea, as 3dmark03's brute-force approach demonstrates.
So the solution would be to get Carmack to write proper code.
What part of the problem don't you understand?
Gee, do ya think that maybe an optimized compiler could do it? I don't recall anyone saying that the IQ is any worse than at inception, so there is really no downside for FX users using newer and better drivers.
So let me get this straight... It is naive to think that there are 3dmark03-specific optimizations in the drivers, but it is not naive to think that NVIDIA gets all its performance gains in popular games from an optimized compiler?
Well, I'll be darned... maybe this is just about Carmack
About Carmack's sub-standard code, to be exact.