Futuremark Announces Patch for 3DMark03

I've run some more tests on the 3DMark03 PS2.0 test.

FX5950 52.16 : 50,8
FX5950 45.23 : 27,1
9800XT 3.9 : 52,4

This PS2.0 test uses two big FP24/FP32 2.0 shaders. I've run them in another application of mine.

Shader 1 (MPix/s):
FX5950 52.16 FP32 : 71,6
FX5950 52.16 FP16 : 99,4
FX5950 45.23 FP32 : 74,1
FX5950 45.23 FP16 : 98,8
9800XT 3.9 FP24 : 112,1

Shader 2 (MPix/s):
FX5950 52.16 FP32 : 49,1
FX5950 52.16 FP16 : 61,7
FX5950 45.23 FP32 : 34,9
FX5950 45.23 FP16 : 61,3
9800XT 3.9 FP24 : 71,3

Unfortunately, I'm sick and I have a terrible headache, so I'll let you philosophize on these numbers :D
 
Interesting results, thanks Tridam.

EDIT: wait a minute...

FX5950 52.16 FP32 : 49,1
(2nd shader)

FX5950 52.16 : 50,8
(3dmark 03 ps2.0 test)

How exactly is the FX 5950 faster doing this shader plus another one than doing just this shader? ;)

EDIT: Ignore this, I got the MPix/s and fps mixed up :p
 
dan2097 said:
Interesting results, thanks Tridam.

EDIT: wait a minute...

FX5950 52.16 FP32 : 49,1
(2nd shader)

FX5950 52.16 : 50,8
(3dmark 03 ps2.0 test)

How exactly is the FX 5950 faster doing this shader plus another one than doing just this shader? ;)

I forgot to say that the results in my app are not FPS but MPix/s. Sorry for that.
 
Tridam said:
I forgot to say that the results in my app are not FPS but MPix/s. Sorry for that.

Oh OK, whoops, I didn't notice that they were in MPix/s. I've never really understood MPix/s, but they seem to bear a strong relationship to fps in Beyond3D's review.
 
dan2097 said:
I forgot to say that the results in my app are not FPS but MPix/s. Sorry for that.

Oh OK, whoops, I didn't notice that they were in MPix/s. I've never really understood MPix/s, but they seem to bear a strong relationship to fps in Beyond3D's review.
Think fillrate ;)
It pretty much eliminates screen resolution and object size from the performance equation. The only reliable metric for pixel shader performance, IMO 8)
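
If it helps, the relationship is just arithmetic: divide the shader fill rate by the number of pixels a frame actually shades and you get a frame rate. A back-of-the-envelope sketch (the resolution and overdraw figures below are my own made-up assumptions, not 3DMark03's real workload):

#include <cstdio>

// Rough relationship between shader fill rate and frame rate:
//   fps ~= (fill rate in pixels/s) / (shaded pixels per frame)
// The per-frame pixel count here is an assumption (1024x768, ~1.5x average
// overdraw), NOT 3DMark03's actual workload.
int main() {
    const double mpix_per_s = 71.6;      // e.g. FX5950, shader 1, FP32
    const double width = 1024.0;
    const double height = 768.0;
    const double overdraw = 1.5;         // assumed average shaded layers per pixel

    const double pixels_per_frame = width * height * overdraw;
    const double fps = (mpix_per_s * 1e6) / pixels_per_frame;

    std::printf("%.1f MPix/s at %.0fx%.0f with %.1fx overdraw ~= %.1f fps\n",
                mpix_per_s, width, height, overdraw, fps);
    return 0;
}

At a fixed resolution and scene the shaded pixel count per frame is constant, so MPix/s and fps scale together; change the resolution and the fps changes while the MPix/s figure stays put.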
 
How hard would it be for Nvidia to add a second set of shader fingerprints in order to identify the altered shaders in the 340 build?

If a large part of what Futuremark did was just shuffling which registers were used, couldn't someone at NVIDIA just add the new set to the current replacement code?
 
3dilettante said:
How hard would it be for Nvidia to add a second set of shader fingerprints in order to identify the altered shaders in the 340 build?

If a large part of what Futuremark did was just shuffling which registers were used, couldn't someone at NVIDIA just add the new set to the current replacement code?

It's a 3-minute job ;)
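
For anyone wondering why it's that quick: conceptually the detection is just a table lookup keyed on a hash of the incoming shader bytecode, so supporting the reshuffled 340 shaders means computing a handful of new fingerprints and adding entries. A toy sketch of the idea (my own illustration, obviously not NVIDIA's actual driver code; the hashes and tokens are made up):

#include <cstdint>
#include <unordered_map>
#include <vector>

// Toy illustration of app-specific shader replacement: hash the incoming
// shader bytecode and, if it matches a known fingerprint, hand back a
// hand-tuned substitute instead.
using Token = uint32_t;

static uint64_t Fingerprint(const std::vector<Token>& bytecode) {
    uint64_t h = 0xcbf29ce484222325ull;          // FNV-1a-style hash over the tokens
    for (Token t : bytecode) {
        h ^= t;
        h *= 0x100000001b3ull;
    }
    return h;
}

struct ReplacementTable {
    std::unordered_map<uint64_t, std::vector<Token>> entries;

    const std::vector<Token>* Lookup(const std::vector<Token>& original) const {
        auto it = entries.find(Fingerprint(original));
        return it != entries.end() ? &it->second : nullptr;
    }
};

int main() {
    ReplacementTable table;
    std::vector<Token> original = {0xFFFF0200u, 0x12345678u};   // fake PS 2.0 tokens
    table.entries[Fingerprint(original)] = {0xFFFF0200u, 0xDEADBEEFu};

    // Supporting a new 3DMark build is then just inserting the new fingerprints.
    const std::vector<Token>* swapped = table.Lookup(original);
    return swapped ? 0 : 1;
}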
 
StealthHawk said:
So scores from the PS 2.0 test are actually legit?
How do you figure? With the results Tridam has shown, the 5950 scores significantly lower than the 9800 XT with the two shaders shown here, yet the PS 2.0 test puts the 5950 very close to the 9800 XT.

-FUDie
 
OK, I've been getting confused since the MPix/s talk... could someone sum up the last few posts in little words that a thicky like me could understand? :|
 
Apparently, the latest FX drivers deliver performance about 3% below a Radeon 9800 XT when running the shaders in 3DMark, judging by relative framerates.

However, there is a bigger disparity, judging by fillrate, when those same shaders are run in another program, with the FX losing ground when the XT is used as a baseline. For the first shader the deficit against the XT is over 10% with FP16 and roughly 36% with FP32.

With the second shader there is a similar disparity. Interestingly, there is actually a noticeable improvement in FP32 with the 52.16 over the 45.23 for the second shader, while there is a slight gain in the first.

EDIT: Minor gain in FP16 for the first shader, a drop with FP32. XT performance may have stayed the same, I don't really know.
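
For anyone who wants to check those percentages, they fall straight out of Tridam's MPix/s table; here's a quick throwaway calculation (numbers copied from his post, nothing else assumed):

#include <cstdio>

// Percentage deficit of the FX5950 (52.16 drivers) versus the 9800XT,
// per shader and precision, using the MPix/s figures posted above.
static double DeficitPercent(double fx, double xt) {
    return (xt - fx) / xt * 100.0;
}

int main() {
    std::printf("Shader 1: FP32 %.0f%%, FP16 %.0f%% behind the XT\n",
                DeficitPercent(71.6, 112.1), DeficitPercent(99.4, 112.1));
    std::printf("Shader 2: FP32 %.0f%%, FP16 %.0f%% behind the XT\n",
                DeficitPercent(49.1, 71.3), DeficitPercent(61.7, 71.3));
    return 0;
}
// Prints roughly: Shader 1: FP32 36%, FP16 11% / Shader 2: FP32 31%, FP16 13%.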
 
So to sum up a little here:

NVIDIA are swapping in their own shaders when 3DMark is detected.
Image quality isn't dropping, or if it is, not by much.
Only the 'big' games will get NVIDIA hand-coded shaders.

The main thing here for me is the ethics, which show NVIDIA are full of shit and don't mind lying to sell cards.

The interesting thing is the increase in performance they can get out of swapping shaders. If this is only done via register usage and instruction order, it should be possible to accomplish it legitimately in the compiler (see the sketch after this post).

Is it possible that NVIDIA are replacing the DX9 shaders with some sort of DX9/OpenGL 1.5 shader which doesn't follow any API at all, but is mapped to the hardware to get the most out of it? This would explain the limited drop in IQ but the huge improvement in fps.

This would also mean that it would be impossible for NVIDIA to get the same speed even if their compiler was 100% efficient, as it would have to follow the DX9 API.


Thoughts?
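
To make the register-shuffling point concrete, here is a purely hypothetical sketch (nothing to do with NVIDIA's actual compiler): a pass that renames temporaries to a canonical, compact set. That is the kind of legitimate, app-agnostic transform a driver compiler could apply to any shader it is handed, and as a side effect it would also make the 340 patch's register shuffling invisible to any fingerprinting done after it:

#include <iostream>
#include <map>
#include <regex>
#include <string>
#include <vector>

// Toy "register canonicalisation" pass over pseudo shader assembly:
// temporaries are renamed to r0, r1, ... in order of first appearance.
// The input below is made-up pseudo-asm, not an actual 3DMark03 shader.
std::vector<std::string> CanonicaliseRegisters(const std::vector<std::string>& lines) {
    std::map<std::string, std::string> rename;   // old name -> new name
    std::regex tempReg("r[0-9]+");
    std::vector<std::string> out;

    for (const std::string& line : lines) {
        std::string rewritten;
        std::size_t last = 0;
        auto begin = std::sregex_iterator(line.begin(), line.end(), tempReg);
        auto end = std::sregex_iterator();
        for (auto it = begin; it != end; ++it) {
            const std::smatch& m = *it;
            if (!rename.count(m.str())) {
                std::string fresh = "r" + std::to_string(rename.size());
                rename[m.str()] = fresh;
            }
            rewritten += line.substr(last, m.position() - last) + rename[m.str()];
            last = m.position() + m.length();
        }
        rewritten += line.substr(last);
        out.push_back(rewritten);
    }
    return out;
}

int main() {
    // A shuffled shader using r5/r9 comes out as r0/r1 after canonicalisation,
    // so a fingerprint taken afterwards would still match the unshuffled original.
    std::vector<std::string> shuffled = {
        "mul r5, v0, c0",
        "add r9, r5, c1",
        "mov oC0, r9",
    };
    for (const std::string& line : CanonicaliseRegisters(shuffled))
        std::cout << line << "\n";
    return 0;
}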
 
Ali said:
This would also mean that it would be impossible for NVIDIA to get the same speed even if their compiler was 100% efficient, as it would have to follow the DX9 API.

Well, I don't even play DigW on forums, but I'll ask anyway: why would their compiler "have to follow the DX9 API"? To pass WHQL? If they can map directly to their hardware, why can't the compiler do it?

It seems to me this is the subtext to much of this discussion: if such gains are to be had without compromising IQ, then their compiler *should* be able to do it legitimately on its own, without "hand-tuned" shaders that rely on detecting the app. So either the compiler tech is a load of malarkey, or they are making significant compromises somewhere in their "hand-tuned shaders" even if we aren't "catching them at it".

At least that's what I've gotten out of this whole conversation. <keeping in mind that folks like Reverend and Dave have forgotten more about 3d internals than I ever knew on my best day>
 
Check Patric Ojala's reply to Tech Report: he said only 4 objects render differently between 340 and 330, so even though NVIDIA break the optimization rules, they don't decrease the IQ. :?

"NVIDIA's optimizations still violate the company's own internal optimization guidelines, and indeed Futuremark's optimization rules for 3DMark03. However, it looks like the optimizations are at least breaking the rules without compromising image quality."
 
Good work rookie, nice to see they are using mathematically correct replacement shaders at least.


BUT WHEN AF IS ENABLED THE 52.16 DRIVERS SHOULD FAIL TO BE APPROVED NO MATTER WHAT PATCH IS APPLIED.

The drivers should be banned because of that fact, end of story. Futuremark are not willing to do this, which is sad.


Edited to make the Caps stuff easier to understand
 
bloodbob said:
BUT WHEN AF IS ENABLED THE 52.16 DRIVERS FAIL NO MATTER WHAT PATCH IS APPLIED.
Uhm, what do you mean by "fail"? Does 3dm2k3 not run or is the scoring whacked? (Sorry, but I ain't got an FX so I really don't know what you're talking about and I want to.)
 
digitalwanderer said:
Uhm, what do you mean by "fail"? Does 3dm2k3 not run or is the scoring whacked? (Sorry, but I ain't got an FX so I really don't know what you're talking about and I want to.)

It will not do aniso as requested on texture stages 1-7
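
For reference, this is roughly how an application asks for anisotropic filtering per texture stage under D3D9 (a minimal sketch, assuming a valid IDirect3DDevice9 and an arbitrary max aniso of 8). The complaint is that the 52.16 drivers honour the request on stage 0 but quietly ignore it on stages 1-7:

#include <d3d9.h>

// Minimal D3D9 sketch: request anisotropic filtering on texture stages 0-7.
// 'device' is assumed to be a valid, already-created IDirect3DDevice9.
void RequestAnisotropy(IDirect3DDevice9* device, DWORD maxAniso = 8)
{
    for (DWORD stage = 0; stage < 8; ++stage)
    {
        device->SetSamplerState(stage, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
        device->SetSamplerState(stage, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(stage, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
        device->SetSamplerState(stage, D3DSAMP_MAXANISOTROPY, maxAniso);
    }
    // The reported behaviour: stage 0 gets aniso, while stages 1-7 fall back
    // to plain trilinear/bilinear regardless of what is requested here.
}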
 
vb said:
digitalwanderer said:
Uhm, what do you mean by "fail"? Does 3dm2k3 not run or is the scoring whacked? (Sorry, but I ain't got an FX so I really don't know what you're talking about and I want to.)

It will not do aniso as requested on texture stages 1-7
Ah, thanks. :)
 