The HardOCP guide shows a 1.5 GHz CPU with a GeForce4 MX averaging 30 fps at 640x480 with just specular maps turned off. In all the other games you mention, the minimum-spec system would be lucky to run at all, let alone be playable with some features turned on.
Let's not confuse average framerate in a simplistic timedemo with playability, shall we?
Also, as I said before, NVIDIA may perform better with regard to dynamic vertex buffers.
Yes, I would be surprised. Most people I know have good CPUs and crappy or outdated graphics cards.
Well I don't, and I know quite a few people who don't either.
So better get used to the idea that not everyone is the same.
I didn't say it doesn't exist. There's a market for Linux games, but you only see id Software releasing clients, while everyone else just releases dedicated-server binaries and milks the Linux community.
I think it is safe to assume that there are more people who upgrade their video card than there are people who play games on Linux only.
It is also safe to assume that adding GPU acceleration for skinning and shadow volumes is a lot less work than making a Linux port of a Windows game.
So this is not exactly a proper comparison.
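For reference, the per-vertex work that GPU skinning would move off the CPU is just a weighted blend of bone transforms. Here is a minimal CPU-side sketch in Python with NumPy (the vertex and bone data are made up for illustration) of what a vertex shader does per vertex:

```python
import numpy as np

def skin_vertices(positions, bone_indices, bone_weights, bone_matrices):
    """Matrix-palette skinning: blend each vertex by its weighted bone
    transforms. On the GPU this loop becomes a few vertex-shader
    instructions per vertex; this is a CPU sketch for illustration.

    positions:     (N, 3) rest-pose vertex positions
    bone_indices:  (N, K) indices into bone_matrices per vertex
    bone_weights:  (N, K) blend weights per vertex (rows sum to 1)
    bone_matrices: (B, 4, 4) current bone transforms
    """
    n = positions.shape[0]
    homogeneous = np.hstack([positions, np.ones((n, 1))])  # (N, 4)
    skinned = np.zeros((n, 3))
    for v in range(n):
        for i, w in zip(bone_indices[v], bone_weights[v]):
            skinned[v] += w * (bone_matrices[i] @ homogeneous[v])[:3]
    return skinned

# Example: bone 1 is translated +2 along x, bone 0 stays put.
identity = np.eye(4)
translate_x = np.eye(4)
translate_x[0, 3] = 2.0
bones = np.stack([identity, translate_x])

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
idx = np.array([[0, 1], [0, 1]])
wgt = np.array([[1.0, 0.0], [0.5, 0.5]])  # second vertex blends 50/50

print(skin_vertices(pos, idx, wgt, bones))
# first vertex stays put; the 50/50 vertex lands at [2, 0, 0]
```

Uploading only the bone matrices per frame instead of every skinned vertex is exactly the kind of AGP-traffic saving being argued about here.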
Even if you use a 6800U with a similarly priced CPU you're still limited by the GPU if you play at 1600x1200 with 4xAA and 16xAF. Moving stuff to the GPU wouldn't help.
You don't get it.
If you happen to have that fast CPU, perhaps it wouldn't be faster. But you no longer NEED to have that fast CPU.
Only the GPU will matter if you use it.
So instead of only the people with a 3800+ getting 100 fps, all people with 1500+ and up will get 100 fps, as long as they have the 6800U. And even the 6600 users will get very good framerates, and the people with R3x0 cards. So it's a win-win situation.
That's what hardware acceleration is all about. Doing things on the CPU is outdated and should only be done as a fallback.
If you can point me to other games that use stencil shadows extensively (there's Deus Ex and Thief 3, and I think there's that crappy Secret Services game) and that have done this...
We have 3DMark03, of course. And if we look at offloading processing to the GPU in general, not specifically shadow volumes, we will see that most games today, and most due in the near future, use the GPU for skinning.
Carmack basically hasn't moved on from the CPU-based stuff that he used in Quake.
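For the curious, the usual vertex-shader trick for shadow-volume extrusion (a generic sketch of the technique, not Doom 3's actual code, which does this work on the CPU) is to submit each silhouette vertex twice and push the flagged copy to infinity away from the light by emitting a homogeneous w of 0:

```python
def extrude_vertex(vertex, light_pos, extrude_flag):
    """GPU-style shadow-volume extrusion, sketched on the CPU.

    Flagged vertices are pushed to infinity along the direction from
    a point light through the vertex; emitting w = 0 makes the
    projected position a point at infinity in that direction.
    """
    x, y, z = vertex
    lx, ly, lz = light_pos
    if extrude_flag:
        return (x - lx, y - ly, z - lz, 0.0)  # point at infinity
    return (x, y, z, 1.0)                     # untouched cap vertex

# A vertex at (1, 0, 0) lit from the origin extrudes along +x to infinity.
print(extrude_vertex((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), True))
print(extrude_vertex((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), False))
```

Because the per-vertex math is this simple, the volume geometry can live in a static vertex buffer and never cross the bus again; only the light position changes per frame.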
In FarCry, on the minimum-spec system you also get a game that looks completely different from how it runs on the optimal system, while in D3, even on the minimum-spec system, you still get a very similar graphical experience.
Whether you see that as an advantage or a disadvantage is up to you I suppose.
I personally expect better hardware to provide better graphics.
You should try it with "r_shadows 0" and compare.
Why? I've already uninstalled it now... And what I do know is that the game runs faster on my brother's P4 2.4 GHz with an R8500 than on my system, and that on my system, 3DMark03 runs much better than Doom 3.
Turning shadows off would put us back in the Quake age. I don't care if it runs well then. I have the hardware to handle stencil shadowing as well, if people would bother to use it.
Because IHV A is worse at it than IHV B? Then JC would be accused of being ATi's puppet.
No, all IHVs are worse than IHV B.
Besides, just because it works acceptably on selected hardware doesn't mean it's the best solution. Obviously it's always better to have less AGP traffic. It will reduce waiting time for both the CPU and the GPU, at the least.
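To put rough numbers on the AGP-traffic point (these figures are illustrative assumptions, not measurements from any particular game):

```python
# Back-of-the-envelope AGP traffic saved by extruding on the GPU,
# under assumed, illustrative numbers.
AGP_8X_BYTES_PER_SEC = 2.1e9          # theoretical AGP 8x peak
verts_per_frame = 200_000             # assumed CPU-extruded shadow verts
bytes_per_vert = 16                   # four floats (xyzw) per vertex
fps = 60

traffic = verts_per_frame * bytes_per_vert * fps  # bytes/sec uploaded
print(f"{traffic / 1e6:.0f} MB/s over AGP "
      f"({100 * traffic / AGP_8X_BYTES_PER_SEC:.1f}% of AGP 8x peak)")
```

Even under these generous assumptions the upload eats a noticeable slice of the bus every second, on top of textures and everything else; keeping the volumes resident on the card makes that cost disappear.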
Because if the code were optimal for ATi, it would be suboptimal for nVidia, and then nVidia fanboys would cry foul. You can't please everyone every time with the same code.
Yes you can. NV4x can handle code that is 'optimal for ATi' just fine. Since the architecture is nearly identical, it would be nearly optimal for both vendors.
On top of that, I believe the later FX models also had extra vertex-shader performance, so those would most probably run it fine as well, with the exception of low-budget junk like the 5200 perhaps. But those could simply demand a high-end CPU and use the CPU path instead.
Just look at what Valve said: five times more time spent just optimising for the GeForce FX series.
That says more about the FX than about the amount of time required for optimizing for a certain videocard.
The FX just doesn't have any kind of floating-point performance. Since the Radeon can do everything it does at a decent speed, it is very simple to write fast code for it; there is no need to spend much time on optimizations. The same goes for NV4x. The FX was just a total miss.