Nvidia Against 3DMark 2003

So if it's not "correct" for 3DMark03 to use feature sets that aren't in games yet and likely won't be for a long time, doesn't that mean Nvidia is basically telling us not to buy their GeForce FX, since there are no games where we can fully take advantage of the GeForce FX feature set?

Isn't that hypocrisy?
 
Crusher said:
demalion said:
Relevant to what, Crusher?

To the actual capabilities of the video cards as it relates to games.

I'd say 3DMark 2001 wasn't any better at that. I'd say 3DMark03 is actually a better benchmark, unless you expect fps in games to scale between cards proportionally with 3DMark scores (which I don't think they ever did, though that is how it was used and how 3DMark03 will be used). I'd say this expectation is exactly what has always been very wrong with the use of 3DMark, and that it is just as wrong now, but that by stressing shader functionality it is actually more useful as a benchmark of that functionality specifically. In any case, I have no big issue with not using the final 3DMark score, and I actually applaud that overall (to avoid the mindless "big 3DMark" reflex many seem to have)...I just find the reasons given by nVidia, and apparently being propagated among various websites, rather hypocritical and self(nVidia)-serving.

Regarding vertex shading usage, I actually find that aspect of nvidia's comments compelling right now (as opposed to the rest of their comments), and I look forward to hopefully receiving a response from Futuremark.

I'm actually quite surprised all the pro-PVR people aren't slapping NVIDIA on the back; they've been saying the same thing about 3DMark all along.

PVR=PowerVR?

Well, if a DX 9 PVR card came out, it would likely score very well in 3DMark03, maybe even outperforming current high-end cards due to efficiency. Certainly seems likely it would perform very well indeed for its cost. I actually think PVR enthusiasts would like 3dmark03 a whole lot better than 3dmark 2001 for just the reasons I mention. The vertex processing issue still remains a concern, though.
 
Using advanced feature sets is fine. Using them in terribly inefficient ways that no actual game developer would ever implement isn't so good. I'm not saying things are being done the way NVIDIA claims they are, but if they are, then it loses most of its value as a benchmark that's relevant to games. That pretty much turns game tests 2 & 3 into vertex shading tests, and there are much more accurate and efficient ways of testing vertex shaders than simply creating redundant work when doing shadow volumes. And one would think that after being a beta tester for most of the development, NVIDIA would have a fairly good idea of what's going on behind the scenes.

But then again, as far as I'm concerned, we could use more games that looked like Game Test 4. If FutureMark can help push that kind of detail into games, regardless of how inefficient they've coded it, more power to them.
 
demalion said:
PVR=PowerVR?

Well, if a DX 9 PVR card came out, it would likely score very well in 3DMark03, maybe even outperforming current high-end cards due to efficiency. Certainly seems likely it would perform very well indeed for its cost. I actually think PVR enthusiasts would like 3dmark03 a whole lot better than 3dmark 2001 for just the reasons I mention. The vertex processing issue still remains a concern, though.

Eh, my initial feelings disagree with you here. Kyros didn't show as well compared to regular cards in 3DMark01 because of things like the fillrate tests not accurately depicting the true 'effective fillrate' of the cards, and because the cards lost points for lacking some key features that 3DMark01 tested for, even though no games used them. NVIDIA is now complaining about the same kind of situation, with things like Game Test 1 being single textured for the most part, and the extensive use of PS1.4 instead of PS1.1 and 2.0 (not going to get into that debate though). Personally I think it's good to test all the features of the cards, even if no one uses them in games, but they need to be tested the same way they would be used in a game if a developer were to take advantage of them.

The parallels are quite pronounced, I think, but the fact that NVIDIA was championing their 3DMark2001 scores, and suddenly dropped their support altogether now that they're the ones getting the short end of the stick, is an unappealing move regardless. Whether any of the other graphics companies share NVIDIA's thoughts on some or all parts of 3DMark03 I don't think we'll ever know. At this point, even if they did agree, keeping their mouths shut makes them look so much better than NVIDIA that no company would want to give that up just to be morally honest with consumers. Especially if some of the things NVIDIA complains about actually benefit their products.
 
I've said it before but I'll say it again: what value would 3DMark have as a benchmark of current games when you could, I don't know, benchmark the games? It is a benchmark of future performance. That isn't to say whether it's a good one or not; there are arguments on both sides here, and only time will tell.
 
Finally read NVIDIA's comments (at xbitlabs). It has some interesting information -- for example that tests 2 and 3 are very vertex shader bound. Although I'd say that their complaint about that isn't really justified if we look at 3DMark03 as a test of the graphics card. The number of times skinning is done does seem excessive if NVIDIA is right, although it's indeed the way developers *would like* to work. The alternative that NVIDIA suggests means doing skinning once -- but on the CPU!

I completely disagree about the 1.4 pixel shaders, though. The more I think of it, the more I'm convinced they're likely to survive better in the long run, with 1.1 used for lower quality rendering on low end cards.
 
I'd just like to personally thank Depth-Test for cramming Hellbinder's ravings back down his own craw. I think that Hellbinder should be awarded like Dave Barry was and have a sewage-pumping station named in honor of his efforts. Not because Hellbinder is "wrong", but because he attacks everyone like a vicious dog as soon as you point out flaws in his latest pet theory.

So, way to go Depth-Test!
 
This argument sounds so familiar. I dunno if I can place it though... oh wait, it's coming to me... it's coming to me... oh, 3DMark 2001. Heh.

Look, the way this program does certain things is going to affect all cards, not just Nvidia's, and will of course make a card score lower. The thing is, if card A is faster than card B when the code sucks, will it still be faster than card B when the code is optimized? On the subject of PS1.4: when 3DMark2001 came out, only one card supported pixel shaders at all, the GeForce 3. Now that 3DMark03 is out, there are R200s, R300s, and GeForce FXs that support it. Where is the problem? Because Nvidia's DX8.1 cards don't support a feature of DX8.1, it's wrong to use that feature? And why the outrage? Are you mad at 3DMark because you can't see part of the benchmark because you bought a card that doesn't live up to the full DX8.1 spec? You should be mad at yourself for buying the card.
 
ET said:
for example that tests 2 and 3 are very vertex shader bound. Although I'd say that their complaint about that isn't really justified if we look at 3DMark03 as a test of the graphics card.
It's not vertex bound on all platforms *wink wink nudge nudge*
 
ET said:
Finally read NVIDIA's comments (at xbitlabs). It has some interesting information -- for example that tests 2 and 3 are very vertex shader bound. Although I'd say that their complaint about that isn't really justified if we look at 3DMark03 as a test of the graphics card. The number of times skinning is done does seem excessive if NVIDIA is right, although it's indeed the way developers *would like* to work. The alternative that NVIDIA suggests means doing skinning once -- but on the CPU!

I checked them out as well, and then went back to the 3DMark03 white paper to see how they justified the decision to skin in the vertex shaders. Here are the relevant quotations:

[url=http://xbitlabs.com/news/story.html?id=1045073804]Nvidia[/url] said:
These two tests [games 2 and 3] attempt to duplicate the "Z-first" rendering style used in the upcoming first-person shooter game, "Doom 3". They have a "Doom-like" look, but use a bizarre rendering method that is far from Doom 3 or any other known game application. This method makes for an interesting demo, but is so inefficient that no game would ever employ it. This is best exemplified by the shadow calculation method used in these tests. These tests attempt to use shadow technique used in Doom 3 called stencil shadow volumes. This is a multiple pass algorithm that is done for all objects in the scene. The passes in 3DMark03 look like this:
Code:
For every object: 

   Pass 1 (Early Z) 
      Skin Object in Vertex Shader 
      Pixel Shader writes Z, RGB = ambient, and Alpha = perspective Z 

For every light: 
   For every object: 
      Pass 2 (Stencil Shadow Volume calculation) 
         Set stencil to increment/decrement 
         Skin Object in Vertex Shader 
         Stencil extrusion calculation 
         No Pixel Shader 

   Pass 3 (Lighting) 
      Skin Object in Vertex Shader 
      Pixel Shader (lighting) write RGB = color
The portion of this algorithm labeled "Skin Object in Vertex Shader" is doing the exact same skinning calculation over and over for each object. In a scene with five lights, for example, each object gets re-skinned 11 times. This inefficiency is further amplified by the bloated algorithm that is used for stencil extrusion calculation. Rather than using the Doom method, 3DMark03 uses an approach that adds six times the number of vertices required for the extrusion. In our five light example, this is the equivalent of skinning each object 36 times! No game would ever do this. This approach creates such a serious bottleneck in the vertex portion of the graphics pipeline that the remainder of the graphics engine (texturing, pixel programs, raster operations, etc.) never gets an opportunity to stretch its legs.

It's unfortunate that 3DMark03 does not truly emulate Doom or any other game by skinning each object only once per frame, caching the skinned result, and using that cached result in the multiple passes required for shadows. This would have been a balanced approach that allows both the vertex and pixel/raster portions of the graphics engine to run at full speed. Designing hardware around the approach used in 3DMark03 would be like designing a six lane on ramp to a freeway in the freak case that someone might drive an earthmover on to it. Wasteful, inefficient benchmark code like 3DMark03 force these kinds of designs that do nothing to benefit actual games.

3DMark03 White Paper said:
When using this kind of stencil shadowing, the developer is left with some options on the implementation. 3DMark03 does as much work as possible in the vertex shaders, since the goal of 3DMark03 is to measure vertex and pixel shader performance in 3D games. Also it is expected that many games with similar technology will have a heavy workload for the CPU doing physics (including collision detection), artificial intelligence and visibility optimizations for example. It is therefore desirable to perform as much as possible on the graphics card in order to offload the CPU.

An alternative implementation would be to give some of the graphics tasks to the CPU, and thereby offloading [sic] the graphics card. The skinning could be done on the CPU, which would reduce the amount of vertex shader tasks. Also, when pre-skinning on the CPU, the characters would not need to be re-skinned for each rendering pass. Then again, skinning is a fairly light vertex shader operation, and with as few characters as in this game test, there should not be much benefit. Also, if there are many characters on screen, more pre-skinned characters would need to be transferred over the AGP bus.
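(Before digging into those quotes, a quick illustration of the option Futuremark chose. This is a minimal, hypothetical Direct3D 9 sketch of my own -- not Futuremark's actual code, and all names here are made up -- showing why "skin in the vertex shader" means the skinning math re-runs once per pass: every pass re-submits the same unskinned mesh with the same bone palette.)
Code:
#include <d3d9.h>
#include <cstddef>
#include <vector>

struct PassState { IDirect3DVertexShader9* vs; IDirect3DPixelShader9* ps; };

// Draw one character for every pass of the frame. Because the skinning lives
// in the vertex shader, each pass below repeats the same skinning work.
void DrawCharacterAllPasses(IDirect3DDevice9* dev,
                            IDirect3DVertexBuffer9* unskinnedMesh,
                            IDirect3DIndexBuffer9* indices,
                            const std::vector<D3DMATRIX>& bonePalette,
                            const std::vector<PassState>& passes,
                            UINT stride, UINT numVerts, UINT numTris)
{
    dev->SetStreamSource(0, unskinnedMesh, 0, stride);
    dev->SetIndices(indices);

    // Bone matrices are uploaded once; they are identical for every pass.
    dev->SetVertexShaderConstantF(0,
        reinterpret_cast<const float*>(bonePalette.data()),
        static_cast<UINT>(bonePalette.size()) * 4);   // 4 vec4 registers per matrix

    // Early-Z pass, then stencil-volume and lighting passes per light.
    for (std::size_t i = 0; i < passes.size(); ++i)
    {
        dev->SetVertexShader(passes[i].vs);   // this shader does the matrix-palette skinning
        dev->SetPixelShader(passes[i].ps);    // may be NULL for the stencil-only pass
        dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, numVerts, 0, numTris);
    }
}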

The first thing to note is that Nvidia seems to be missing a PS1.1 pass, namely "light fall-off to alpha buffer," in their analysis; the White Paper says the PS1.1 path requires (1 + 3-per-light) passes while PS1.4 requires (1 + 1-per-light). I don't know enough to say whether the alpha buffer pass just doesn't require vertex skinning or whether Nvidia left it out (perhaps to understate how inefficient emulating PS1.4 with PS1.1 is?).

Next, it's worth noting that Futuremark identifies and discusses the exact issue Nvidia goes to such lengths to "expose" and ridicule, in their White Paper, which was of course released before Nvidia's complaint. (OTOH they could have expected such an argument from Nvidia and been preempting it.)

Next let's take note of Nvidia's interesting rhetorical trick: explicitly identifying the dynamic shadowing technique in games 2 and 3 with Doom3--as if Doom3 is the only game that will be using similar techniques!--and thus building the implication that any test using a different means to achieve the same result is invalid; after all, if the point of the test is to simulate Doom3, then you should use the same algorithm! Of course, while Doom3 will be the first major game to use this technique for the entire game world, it will obviously not be the last, and 3DMark03 is presumably targeted to simulate the performance tradeoffs that might be used in a game being released a bit later than Doom3.

But really it all comes down to two contradictory assertions:
Nvidia said:
This approach creates such a serious bottleneck in the vertex portion of the graphics pipeline that the remainder of the graphics engine (texturing, pixel programs, raster operations, etc.) never gets an opportunity to stretch its legs.
vs.
Futuremark said:
Then again, skinning is a fairly light vertex shader operation, and with as few characters as in this game test, there should not be much benefit.
So the question is: are games 2 and 3 mainly vertex-shader limited on current cards? And, if so, how much of this is due to the extra skinning (after all, PS1.1 will still require more passes and thus more geometry ops, even if the skinning is done by CPU and cached--I think)?

Well, we don't have the answers directly, but we do have this wonderful comparison of the 9700 Pro with PS1.4 turned on and off in the drivers. (EDIT: I should give credit to Ichneumon for running this very interesting comparison. :) )

Game2:
PS1.4 - 30.5
PS1.1 - 24.9
diff - 22.4%

Game3:
PS1.4 - 28.2
PS1.1 - 22.8
diff - 23.6%

Meanwhile, as we know, the number of vertex-skinning operations goes up by around 100% when moving from PS1.4 to PS1.1. And obviously the extra skinning is only a small part of the performance hit you get from running extra passes.

In other words: I don't buy it. If Futuremark says "there should not be much benefit," I'd tend to believe them. That said, the fact that Doom3 has the CPU do the skinning indicates to me that this is the higher performance method on current/near future hardware. 3DMark03 is targeted at hardware a bit farther out, and they seem to be saying that, in their opinion, vertex shader power at that point will be able to gobble up the extra skinning with no problem.

Especially as that hardware won't be stuck with PS1.1... :)
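To make the skin-once-and-cache alternative concrete, here's a minimal, hypothetical C++ sketch (my own illustration with made-up types, not Doom 3's or anyone's real code; a real engine would also skin normals, and the pre-skinned buffer still has to go over the AGP bus every frame, which is exactly the trade-off the White Paper points to):
Code:
#include <cstddef>
#include <vector>

struct Vec3   { float x, y, z; };
struct Mat3x4 { float m[3][4]; };          // one bone transform (rotation + translation)

struct SkinnedVertex {
    Vec3  position;                        // bind-pose position
    int   bone[4];                         // bone palette indices
    float weight[4];                       // blend weights (sum to 1)
};

static Vec3 Transform(const Mat3x4& b, const Vec3& p) {
    return { b.m[0][0]*p.x + b.m[0][1]*p.y + b.m[0][2]*p.z + b.m[0][3],
             b.m[1][0]*p.x + b.m[1][1]*p.y + b.m[1][2]*p.z + b.m[1][3],
             b.m[2][0]*p.x + b.m[2][1]*p.y + b.m[2][2]*p.z + b.m[2][3] };
}

// Run once per character per frame on the CPU; the cached output is then
// reused by the early-Z pass and by every shadow-volume and lighting pass.
void SkinOnCPU(const std::vector<SkinnedVertex>& in,
               const std::vector<Mat3x4>& bones,
               std::vector<Vec3>& out)
{
    out.resize(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) {
        Vec3 acc{0.0f, 0.0f, 0.0f};
        for (int j = 0; j < 4; ++j) {
            const Vec3 p = Transform(bones[in[i].bone[j]], in[i].position);
            acc.x += in[i].weight[j] * p.x;
            acc.y += in[i].weight[j] * p.y;
            acc.z += in[i].weight[j] * p.z;
        }
        out[i] = acc;   // cached result: no re-skinning in the later passes
    }
}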
 
Doomtrooper said:
Great Post Dave H....

Interesting to see 1.3 vs 1.1 on Nvidia hardware now.

Unfortunately that can't be done with 3DMark03. I haven't tested it yet, but theoretically you could force the 9700 to do PS 1.3, though I don't know how to check whether or not it actually works, since I don't know of any PS1.3-specific test app that would use it.

I've added the options to the latest Rage3D Tweak.
 
Yes, I liked that post a lot, Dave H!

A question...is it possible to change it to use CPU based vertex shading (i.e., turn off hardware support) on the 9700 and see how it scales with different CPU speeds? That might give us a decent idea of the significance of the workload (depending on how much of a bottleneck AGP transfer is).
 
A question...is it possible to change it to use CPU based vertex shading (i.e., turn off hardware support) on the 9700 and see how it scales with different CPU speeds? That might give us a decent idea of the significance of the workload (depending on how much of a bottleneck AGP transfer is).

There's a big ass option that says "Force Software Vertex Processing" - that might do the trick!! ;)
 
From Rage3D's 3DMark03 review:
Futuremark has included separate CPU Marks in the new program. The tests involve running Game Test 1 and 3 at a resolution of 640x480 using software vertex shaders. Pixel shader 1.1 is used on all video cards for the game 3 cpu test to limit graphics card effects on this cpu test.
Has anyone compared the actual results of 3DMark03's vpu Test 3 with the cpu Test 3? Doesn't the cpu obtain a lower framerate (low res and PS 1.1 ops eliminate the pixel bottleneck)? If this is the case, it would indicate that the cpu pays a greater penalty when it comes to skinning (along with everything else) than the actual vpu, whether by brute force or not. The benefits seem to be further amplified by a robust vertex hardware implementation, which may explain why Nvidia suffers the way it does. Wouldn't this indicate that the method of skinning employed by the vertex shader is quite reasonable, especially for offloading tasks from the cpu?
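For context on what "software vertex shaders" means here: in Direct3D 9 an application picks software or hardware vertex processing when it creates the device (or per batch on a mixed-mode device), so forcing it pushes all the vertex shader work, skinning included, onto the CPU. A minimal, hypothetical sketch of how that looks from the application side (not 3DMark03's or Rage3D Tweak's actual code):
Code:
#include <d3d9.h>

// Option 1: force all vertex processing onto the CPU for this device.
IDirect3DDevice9* CreateDeviceWithSoftwareVP(IDirect3D9* d3d, HWND hwnd,
                                             D3DPRESENT_PARAMETERS* pp)
{
    IDirect3DDevice9* dev = NULL;
    if (FAILED(d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                 D3DCREATE_SOFTWARE_VERTEXPROCESSING, pp, &dev)))
        return NULL;
    return dev;
}

// Option 2: on a device created with D3DCREATE_MIXED_VERTEXPROCESSING you can
// toggle per batch, e.g. to compare CPU vs. GPU vertex shading on one scene.
void ToggleSoftwareVP(IDirect3DDevice9* dev, BOOL useSoftware)
{
    dev->SetSoftwareVertexProcessing(useSoftware);
}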
 
I think NVIDIA just has a disagreement with Futuremark, things didn't go NVIDIA's way this time on a decision and they are not willing to let it go so they want to rehash it in public.

Yawn.
 
DaveBaumann said:
A question...is it possible to change it to use CPU based vertex shading (i.e., turn off hardware support) on the 9700 and see how it scales with different CPU speeds? That might give us a decent idea of the significance of the workload (depending on how much of a bottleneck AGP transfer is).

There's a big ass option that says "Force Software Vertex Processing" - that might do the trick!! ;)

Well, par-don me! ;) I don't have a registered 3dmark03 or the new Rage3D Tweak (yet), whichever way of doing it you mean. *sniffle* No need to pick on me!

So, Mr. Fancy Pants, why haven't you tested this yet? Maybe with a kicker of switching AGP mode down to see how much of the change is due to AGP transfer?

*poke* *prod* *whip crack*
 
I think the bottom line is Nvidia should have just kept their mouth shut; they've only managed to put their foot in it by complaining.
 