Futuremark's technical response

There will be nothing posted by [H], I can guarantee it. The problem with internet hardware websites is that 98% of them aren't technically inclined enough to form their own opinions on graphics cards; a .pdf shows up from an IHV and they post it as fact on their front page.

The power Nvidia has over websites was never more apparent than it was last week.
 
Plus, [H] won't give an 'opinion' until Nvidia tells him what to say. After all, his 'review' of 3DMark03 was just a regurgitation of the Nvidia PDF about 3DMark03.

P.S. Nvidia mentions a few games having PS 1.3 support. Funny thing is that UT2K3 has NO PS 1.3 effects, and I'm not sure about Tiger Woods 2003. But BOTH of these games use PS 1.4.
 
 
duncan36 said:
So is it feasible that Nvidia's FX based cards will have PS 2.0 and 1.3 but not PS 1.4?

To be DX9 certified it needs to be able to execute 1.4 shaders... Doesn't mean it has to do it well, though.
 
duncan36 said:
So is it feasible that Nvidia's FX based cards will have PS 2.0 and 1.3 but not PS 1.4?

As Ichneumon says, they would break the DX9 spec if they didn't support PS 1.4. Since PS 2.0 can do everything that PS 1.4 can do (even though it lacks PS 1.4's phase instruction), all should be just fine. Brent verified this:

http://216.180.225.194/~beyond3d/fo...amp;postdays=0&postorder=asc&start=80

BTW: This is one crucial reason why it made sense for Futuremark to include PS 1.4 in their benchmark. All DX9 hardware can take advantage of it (even though it's just a DX 8.1 level feature).

But please don't ask me why nVidia won't acknowledge this as an advantage.
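
For what it's worth, here is a minimal sketch (my own illustration, not Futuremark's or Brent's code) of how an application would check this through the Direct3D 9 caps; a card whose driver reports PS 2.0 is also expected to execute 1.x shaders, 1.4 included:

```cpp
// Hypothetical example: query the reported pixel shader version via the D3D9 caps.
#include <d3d9.h>
#include <cstdio>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // PixelShaderVersion encodes the highest shader model the driver exposes.
    if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        printf("PS 2.0 class hardware - 1.x shaders (incl. 1.4) should run too\n");
    else if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
        printf("PS 1.4 capable (DX 8.1 level)\n");
    else
        printf("PS 1.1-1.3 or lower\n");

    d3d->Release();
    return 0;
}
```

An app or benchmark picks its shader path off a check like this, which is why a PS 1.4 path is usable on every DX9 part as well as on Radeon 8500 class cards.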
 
Forget PS1.1 etc, it was really moot to begin with, IMO.

Look @ this tho':

"Our recommendations for correct benchmarking are the following:

Use game benchmarks when you want to find out how fast a certain game runs on your computer;
Use 3DMark2001 for a comparable overall performance measurement of DirectX 7 or first generation DirectX 8 compatible hardware;
Use 3DMark03 for a comparable overall performance measurement of DirectX 9 compatible hardware "


That says 3DM2K1 is for 1st generation DX8 cards (GF3 basically) & 3DM03 is for DX9 cards.

Where is the benchmark for 2nd Generation DX8 (GF4, Rad8500-9100's, Parhelia, Xabre) cards from FM & why have we been using 3DM2K1 for them if FM now "recommends" it only for those 1st Gen DX8 cards? :rolleyes:

Seems to me FM continues to shoot themselves in the foot & this has left the majority of 'gamers' & consumers out in the cold w/no "FM recommended" benchmark for their 2nd Gen DX8 cards.

Anyone else see the part in the "Vertex skinning" section where FM says a 3GHz CPU doing the skinning delivers 1/5th the frame rate vs hardware VS, but then says "The test results did not change much at all, the overall performance only dropped somewhat using CPU skinning"?

80% reduction in frame rate & very little change in results? :oops:

just me
 
I thought they said that the reduction in frame rate in their tests was 1/5, but that was purely doing the vertex processing. In the benchmark they are doing other things, so it isn't vertex limited. Nor is it meant to be CPU limited, so that would be why CPU skinning didn't have that drastic an effect.
 
Just in via mail, ATI's position:

ATI fully supports this new graphics benchmark, 3Dmark03. We feel it is a valuable diagnostic tool for estimating the performance of current and future graphics products when running future game titles (to be released over the next 1-2 years). We believe that using synthetic benchmarks such as 3DMark03 in combination with real applications and game benchmarks provides the most complete picture of the overall performance and value of graphics hardware.

It takes significant development time for developers to incorporate new DX features into their games and engines. So, it is virtually impossible to predict the performance of future games using only existing games. From a business perspective, game developers generally wait for a minimum install-base of a certain generation of graphics products before they incorporate features that rely on those products. Synthetic benchmarks aren't necessarily subject to these limitations, and therefore can be designed to stress the limits of even the most powerful and leading-edge graphics hardware available.

Every game uses different features and techniques, and places emphasis on different aspects of the PC. Some stress pixel fill rate, or vertex processing, or memory bandwidth, or CPU performance, or some combination. It is not often clear in such benchmarks what aspects of the hardware are being tested most strongly. Synthetic benchmarks, on the other hand, can run a diverse set of tests that are specifically designed to stress all aspects of the hardware, and pinpoint specific strengths or deficiencies much more conclusively.

We feel that benchmarks of all kinds are very important tools in our ongoing efforts to improve the performance and quality of our products. Every benchmark or diagnostic that we run provides us with feedback on what things we are doing well and what things need more work. It is often the case that optimizations we come up with for one application end up benefiting other applications as well.

Synthetic benchmarks provide us with valuable feedback because they often stress features and paths that are not yet used in shipping game titles. They may help us find bugs and inefficiencies before game developers encounter them. By evaluating our performance in these benchmarks, we can often improve the performance of future game titles long before they are released, and improve the overall robustness of our drivers.

Synthetic benchmarks also offer us a lot of help in designing future hardware. Relying solely on existing game benchmarks in this case would leave us in danger of producing new products that run old games well, but run new games poorly.
 
Let me just copy this bit:

'We feel that benchmarks of all kinds are very important tools in our ongoing efforts to improve the performance and quality of our products. Every benchmark or diagnostic that we run provides us with feedback on what things we are doing well and what things need more work. It is often the case that optimizations we come up with for one application end up benefiting other applications as well.

Synthetic benchmarks provide us with valuable feedback because they often stress features and paths that are not yet used in shipping game titles. They may help us find bugs and inefficiencies before game developers encounter them. By evaluating our performance in these benchmarks, we can often improve the performance of future game titles long before they are released, and improve the overall robustness of our drivers.'

I was going to suggest that as a very good reason to support synthetic benchmarks that come out well before games, especially if they have test scores that can be broken down to pinpoint specific performance problem areas.

Remember the 8500 'high poly bug' (actually a texture issue)? First people complained about poor OGL performance in certain game scenes (the forest level in RtCW, for example), then Sharkfood posted about the extremely low score in the high poly test in GLXS (hence the bug's name). Some driver revisions later, performance was up to par. That was a real-life example of a synthetic bench proving useful in real-life apps.
 
Anyone else see the part in the "Vertex skinning" section where FM says a 3GHz CPU doing the skinning delivers 1/5th the frame rate vs hardware VS, but then says "The test results did not change much at all, the overall performance only dropped somewhat using CPU skinning"?

80% reduction in frame rate & very little change in results?
The comment about the 1/5 frame rate stuff is referring to the Rag Troll test, which is very intensive with vertex shaders. The latter comment about "the test results did not change much" is referring to the game tests, where the workload is less. Apples and oranges, etc etc...
 
"The comments about the 1/5 frame rate stuff is referring to the Rag Troll test, which is very intensive with vertex shaders. The latter comment about "the test results did not change much" is referring to the game tests, where the workload is less."

Would you be so kind as to copy & paste from FM's "Response" exactly where THEY state that distinction: 'Rag Troll' for the 1st sentence & 'game tests' for your 2nd sentence? I can't find it. Otherwise you're just 'assuming' that's what they are 'referring to', IMO.

Thanks,

just me :arrow: again
 
Just me - I'm not assuming anything; I know what Tero is referring to in that document. I've just been covering this very point in the Futuremark forums, so I'll just copy and paste my reply into here:

"This level of hardware does skinning several times faster than the CPU. This can be confirmed using the vertex shader test of 3DMark03, which is designed to measure above all skinning speed. CPU vs. vertex shader skinning can easily be compared by running this test with and without software forced vertex shaders. For example, a DirectX 9 graphics card and high-end CPU (ATI Radeon 9700 Pro and Intel Pentium4 3 GHz) gets five times lower frame rates with CPU skinning than with hardware accelerated vertex skinning. An older CPU (Intel PentiumIII 800 MHz) skins more than 20 times slower on the CPU than with the hardware acceleration....

...Does this mean that 3DMark03 is now completely bottlenecked by the vertex shader performance? No it does not. Try running 3DMark03 in different resolutions. If the benchmark was vertex shader limited, you would get the same score on all runs, since the amount of vertex shader work remains the same despite the resolution change. Game tests 2 and 3 scale very well with the resolution, and are thereby mostly pixel shader limited. Changing the skinning to the CPU would reduce the vertex shader workload, making 3DMark even more bottlenecked by pixel shader performance. We did some simple experiments with this rendering technique and CPU vs. vertex shader skinning. The test results did not change much at all, the overall performance only dropped somewhat using CPU skinning. This was to be expected, looking at the difference in skinning speed between the CPU and the hardware vertex shader.
"
 
I'm still missing something here. Maybe it is in the use of the words in different societies or ???

Here's how I read it in America:

'CPU skinning is 20% as fast as GPU.'

Proof:

'The 'vertex shader test' confirms that.' Nothing more & nothing less.

Then>

'In test 2 & 3 FM tried both in those tests & saw little difference.'

Now >

That says to me that the results of the 'vst' in CPU vs GPU scaling are meaningless as pertains to the bench.

Are you saying the 'vst' test ONLY applies to the Rag Troll & nothing else at all? How does that test relate to me [Joe consumer] then?

I still don't see how you read the 3GHz info to pertain to 'Rag Troll' only > they say the 'vst' only confirms CPU VS is slower than GPU VS. Confirms it in what tho'? > 3DMark03 evidently, because that is the main topic. But tests 2 & 3 didn't react that way when both were applied.

I feel like a tennis ball reading it.

just me
 
just me said:
I'm still missing something here. Maybe it is in the use of the words in different societies or ???

Here's how I read it in America:

'CPU skinning is 20% as fast as GPU.'

Proof:

'The 'vertex shader test' confirms that.' Nothing more & nothing less.

Then>

'In test 2 & 3 FM tried both in those tests & saw little difference.'

Now >

That says to me that the results of the 'vst' in CPU vs GPU scaling are meaningless as pertains to the bench.

Are you saying the 'vst' test ONLY applies to the Rag Troll & nothing else at all? How does that test relate to me [Joe consumer] then?

I still don't see how you read the 3GHz info to pertain to 'Rag Troll' only > they say the 'vst' only confirms CPU VS is slower than GPU VS. Confirms it in what tho'? > 3DMark03 evidently, because that is the main topic. But tests 2 & 3 didn't react that way when both were applied.

I feel like a tennis ball reading it.

just me

Game Tests 2 and 3 don't change much between CPU and GPU skinning because they are primarily pixel shader limited. Moving the skinning to the CPU would only make them MORE pixel shader limited.
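
To put some purely invented numbers on that (a toy model of my own, not Futuremark's figures): when pixel shader work dominates the frame, even a 5x slower skinning path barely moves the result, whereas in a skinning-heavy test it cuts the frame rate to roughly a fifth:

```cpp
#include <algorithm>
#include <cstdio>

// Toy model with made-up workloads: treat frame time as roughly
// max(vertex work, pixel work), since the pipeline overlaps the stages.
static double frameMs(double vertexMs, double pixelMs)
{
    return std::max(vertexMs, pixelMs);
}

int main()
{
    // VS-test-like load: skinning dominates, pixel work is trivial.
    printf("VS test, HW skinning : %.0f ms\n", frameMs( 5.0,  1.0)); //  5 ms
    printf("VS test, CPU skinning: %.0f ms\n", frameMs(25.0,  1.0)); // 25 ms -> ~1/5th the fps

    // GT2/3-like load: pixel shading dominates, skinning is a small slice.
    printf("GT2/3, HW skinning   : %.0f ms\n", frameMs( 2.0, 30.0)); // 30 ms
    printf("GT2/3, CPU skinning  : %.0f ms\n", frameMs(10.0, 30.0)); // 30 ms -> score barely moves
    return 0;
}
```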
 
Using the Vertex Shader test (the troll-smashy-smashy one), where the skinning workload is much higher than in GT2/3, there is a noticeable difference in the final result when comparing hardware accelerated and software processed VS 1.1 skinning - as stated in the document.

Tero then says "Game tests 2 and 3 scale very well with the resolution, and are thereby mostly pixel shader limited. Changing the skinning to the CPU would reduce the vertex shader workload, making 3DMark even more bottlenecked by pixel shader performance. We did some simple experiments with this rendering technique and CPU vs. vertex shader skinning. The test results did not change much at all, the overall performance only dropped somewhat using CPU skinning." Agreed, it's not overly clear, but what he is trying to say is that GT2/3 only show a relatively small decrease in performance when using CPU skinning because the skinning workload is sufficiently light not to be a problem. The difference only becomes very apparent when running something like the VS Test (which was designed to be able to show such differences anyway).

What I don't understand is why you're making such a big deal out of this?
 
Neeyik,

I'm simply trying to understand & I appreciate your time & 'non-personal' reply.

"where the skinning workload is much higher than in GT2/3" That mere statement makes things much clearer.

I've pondered this for a while before replying. I'm too used to 3DM2K1 & its reliance on the 'system' vs 3DM03's GPU dependency; I was reading 'bottleneck' in 3DM2K1 terms > CPU/GPU/RAM instead of 3DM03 terms > GPU bottlenecking.

Lemme see if I have it now: FM :idea: WOW, I got it! Hit me like a bolt outta the blue. :oops:

FM did nVidia a favor by not going CPU VS in 2 & 3 > I see it now. :)

If FM had gone CPU VS in 2 & 3, the PS 1.4 of the 8500s would have made the GF4 Tis look REAL bad w/their PS 1.1, correct? Then the allegations of 'optimization' would fly & at face value would have credence.

If games use more CPU VS ...

Thanks Neeyik for helping me see the error of using 'bottleneck' out of context & I can also see some other things now too. 8)
 