3DMark - do NVIDIA shift more load onto the CPU?

Ehh, exactly how? It's a generic optimisation that is mathematically correct. If that isn't allowed, then software T&L and dual-chip solutions shouldn't be allowed either, because they don't do all the calculation on the single chip.
 
Well, the point of 3DMark03 is specifically to stress the graphics board, so if there are vertex shaders then they should be executed there in order for the benchmark to be a graphics board test. You can offload work onto the CPU, but when you are talking about these levels of geometry, then with the other work the CPU needs to carry out in a gaming environment it would probably make performance far worse. Look at HL2, for example: a title riddled with pixel shaders, and yet it is quite CPU limited on a 9800 PRO at many resolutions. The fact of the matter is that because of the low CPU utilisation of 3DMark, it's probably one of the few applications where you can gain performance this way.
 
IIRC, these were my results with my Athlon XP downclocked from 2.08GHz to 500MHz (I think that was the speed) (11.5x181 --> 5x100):

Might have been 800MHz, not sure; I remember it refused to accept whatever the lowest multiplier the BIOS had.

GT1 plunged, as did the CPU/sound tests. The only other test significantly affected was Ragtroll. The score dropped about 800 points (4700 --> 3900).

Basically, GT2/3/4 didn't change much at all.

EDIT: that's using an R9700 @ 303/303.
 
bloodbob said:
Ehh, exactly how? It's a generic optimisation that is mathematically correct. If that isn't allowed, then software T&L and dual-chip solutions shouldn't be allowed either, because they don't do all the calculation on the single chip.
It is only a generic optimisation if it is faster to do it this way in all cases. As I mentioned earlier in the thread, it is a complex problem to come up with load-balancing optimisations in the general case in such a way that you end up reliably faster and don't cause unwanted side effects such as uneven frame rates.
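To illustrate why this is hard in the general case, here is a toy sketch (all names and cost numbers are invented for the example, nothing here reflects any actual driver):

```python
# Toy illustration of why generic CPU/GPU load balancing is hard.
# All cost estimates below are made up for the example.

def schedule_vertex_work(batches, cpu_cost_ms, gpu_cost_ms, cpu_budget_ms):
    """Assign each batch of vertex work to 'cpu' or 'gpu'.

    Moving work to the CPU only helps while the CPU would otherwise sit
    idle; past that budget, every batch moved makes the frame slower,
    and if the budget varies frame to frame the result is exactly the
    uneven frame rate mentioned above.
    """
    plan = []
    used = 0.0
    for b in batches:
        if used + cpu_cost_ms[b] <= cpu_budget_ms and cpu_cost_ms[b] < gpu_cost_ms[b]:
            plan.append((b, "cpu"))
            used += cpu_cost_ms[b]
        else:
            plan.append((b, "gpu"))
    return plan

batches = ["terrain", "troll", "particles"]
cpu_ms = {"terrain": 4.0, "troll": 2.0, "particles": 1.0}
gpu_ms = {"terrain": 3.0, "troll": 5.0, "particles": 2.0}

# A benchmark leaving the CPU idle has a big budget...
print(schedule_vertex_work(batches, cpu_ms, gpu_ms, cpu_budget_ms=6.0))
# ...a real game doing AI/physics has almost none, so everything stays on the GPU.
print(schedule_vertex_work(batches, cpu_ms, gpu_ms, cpu_budget_ms=0.5))
```

The whole difficulty is that `cpu_budget_ms` is unknowable in advance for a general application, which is why a scheme like this is easy in a benchmark and risky everywhere else.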
 
Changing the FSB sort of invalidates the testing to some extent.

Ragtroll isn't meant to be GPU limited, is it? I'm not sure; I thought it would be primarily a CPU test because of the physics.

If NVIDIA truly developed code to generically offload work onto spare CPU capacity, I say let them. Not all applications are CPU limited, although many games are, so this would be a true example of where the benchmark might not be equal to benchmarking in games; if Futuremark truly believe that this is a problem, then they should revise how they calculate the final score and include some CPU-limited situations in it. This is all academic, since NVIDIA isn't going to do it generically, if they are doing it at all or even plan to do it in the future for 3DMark03; but if they truly made it generic, it wouldn't break the guidelines, and I personally wouldn't argue.

That is my opinion and I am standing by it. Not all of you agree with it, and that is fine with me, as thankfully many of you agree with my other opinion of how NVIDIA is treating 3DMark03 (which is downright disgusting).
 
bloodbob said:
If NVIDIA truly developed code to generically offload work onto spare CPU capacity, I say let them.

Given the drop in score between the 3.3.0 and 3.4.0 patch, why are we assuming that this is a generic optimisation?
 
I'm not assuming anything at this point in time. I'm not even assuming there is CPU offloading at all; nowhere have I said there is CPU offloading. What I did say earlier was:

Is offloading to the CPU wrong? Well, as long as NVIDIA are doing it at a low priority (which probably wouldn't work because of the high latency of thread switches), I don't think there is a problem; it's like saying all software T&L cards should be abolished. Of course, there is one more little condition: they would have to do it for all D3D apps.

All along I've said it had to be generic.
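Purely to illustrate that "offload at low priority" idea, here is a minimal sketch (hypothetical stand-in code; a real driver would do this in native code and set an actual OS thread priority, e.g. SetThreadPriority on Windows, and the cross-thread hand-off is exactly the latency concern above):

```python
import queue
import threading

# Sketch of low-priority offload: a background worker transforms
# vertices only when jobs are queued, so foreground work is untouched.
# (Python threads can't portably set OS priority; this only shows the
# queue/hand-off structure, whose switching cost is the worry.)

jobs = queue.Queue()
results = {}

def worker():
    while True:
        job = jobs.get()
        if job is None:          # shutdown sentinel
            break
        name, verts, (sx, sy, sz) = job
        # stand-in for a vertex-shader transform: a simple scale
        results[name] = [(x * sx, y * sy, z * sz) for (x, y, z) in verts]
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

jobs.put(("quad", [(1, 2, 3), (4, 5, 6)], (2, 2, 2)))
jobs.join()                      # wait for the offloaded work
jobs.put(None)
print(results["quad"])           # [(2, 4, 6), (8, 10, 12)]
```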

I don't have a profile of CPU usage as the tests run, so I'm going from the information that has been given in the thread. The application could be CPU limited, or the combination of application and drivers could be CPU limited. Once the application makes a call, it doesn't magically go straight to the video card; it has to go through the drivers. There are also some valid reasons to do some calculation on the CPU, such as a very early cull of triangles to reduce traffic across the AGP bus.
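That early-cull idea can be sketched like this (illustrative only; real drivers work on indexed vertex buffers in native code, not Python tuples):

```python
# Sketch of an early CPU-side cull: drop triangles whose face normal
# points away from the viewer before they ever cross the (AGP) bus.

def backfacing(tri, view=(0.0, 0.0, 1.0)):
    """True if the triangle's normal points away from the view direction."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    # two edge vectors
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    # cross product = face normal
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    # dot with the view direction decides visibility
    return nx * view[0] + ny * view[1] + nz * view[2] <= 0.0

tris = [
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),   # normal +z: faces the viewer
    ((0, 0, 0), (0, 1, 0), (1, 0, 0)),   # same triangle wound the other way
]
visible = [t for t in tris if not backfacing(t)]
print(len(visible))   # 1 -- only the front-facing copy gets sent to the card
```

Whether a cull like this pays off depends on the same trade-off as any offload: the CPU cycles it burns versus the bus traffic it saves.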

If NVIDIA are doing application-specific optimisations and these drivers go through the WHQL process, then they should not be approved by Futuremark.
 
Given the changes that have occurred between the 3.3.0 and 3.4.0 patches, there should be nothing that would disable optimisation by "dynamic" CPU offloading. It would disable it if the drivers have replacement shaders that say "execute this one on the CPU and this one on the graphics board".

Differences between 3.3.0 and 3.4.0 on the 52.16s would be useful.
 
I will try with 52.16 when I get home. However, I'm not convinced it's a specific optimization. If someone has another DX8 benchmark handy, post the link.

And can you have 3.3.0 and 3.4.0 installed simultaneously? (and crap, I'd better have the 3DMark installer handy... gargh.)
 
Would D3D RightMark work for additional theoretical tests? 'Cause it's 6 megs and therefore won't cause my terrible DSL line to explode :)
 
bloodbob said:
Changing the FSB sort of invalidates the testing to some extent.

I agree. It makes it hard to tell if performance differences come from increased/decreased FSB settings.

Ragtroll isn't meant to be GPU limited, is it? I'm not sure; I thought it would be primarily a CPU test because of the physics.

Probably not, but it is. NVIDIA has been detecting this test and optimizing for it. They're definitely doing something there through 51.75.
 
assuming the test settings that should be used are:

high geometry
diffuse + specular (2 point lights)
vertex shader 2.0 (and no pixel shader)
fullscreen, 640x480?
 
The Baron said:
assuming the test settings that should be used are:

high geometry
diffuse + specular (2 point lights)
vertex shader 2.0 (and no pixel shader)
fullscreen, 640x480?
Personally, I keep it windowed.
 
The Baron said:
assuming the test settings that should be used are:

high geometry
diffuse + specular (2 point lights)
vertex shader 2.0 (and no pixel shader)
fullscreen, 640x480?
1024x768, push that sucker! ;)
 
oh well.

more stuph:

52.16, 2.2GHz:

Code:
Main Test Results
3DMark Score	3347 3DMarks
CPU Score	712.0 CPUMarks

Detailed Test Results

Game Tests
GT1 - Wings of Fury	152.8 fps
GT2 - Battle of Proxycon	21.4 fps
GT3 - Troll's Lair	18.3 fps
GT4 - Mother Nature	15.0 fps

CPU Tests
CPU Test 1	79.1 fps
CPU Test 2	12.7 fps

Feature Tests
Fill Rate (Single-Texturing)	1152.4 MTexels/s
Fill Rate (Multi-Texturing)	1523.4 MTexels/s
Vertex Shader	13.1 fps
Pixel Shader 2.0	27.0 fps
Ragtroll	12.3 fps

Sound Tests
No sounds	44.9 fps
24 sounds	40.8 fps
60 sounds	36.8 fps

52.16, 1GHz:

Code:
Main Test Results
3DMark Score	2940 3DMarks
CPU Score	395.0 CPUMarks

Detailed Test Results

Game Tests
GT1 - Wings of Fury	115.2 fps
GT2 - Battle of Proxycon	20.4 fps
GT3 - Troll's Lair	16.5 fps
GT4 - Mother Nature	14.6 fps

CPU Tests
CPU Test 1	42.9 fps
CPU Test 2	7.2 fps

Feature Tests
Fill Rate (Single-Texturing)	1156.0 MTexels/s
Fill Rate (Multi-Texturing)	1524.1 MTexels/s
Vertex Shader	11.7 fps
Pixel Shader 2.0	27.0 fps
Ragtroll	11.9 fps

Sound Tests
No sounds	27.8 fps
24 sounds	24.4 fps
60 sounds	21.9 fps

And now, for RightMark (settings were high geometry, diffuse + specular (2 point lights), 640x480 fullscreen, VS 2.0 with no PS):

2.2GHz, 52.16:

Code:
FPS 	13.94
PPS 	17956596

1GHz, 52.16:

Code:
FPS 	13.92
PPS 	17933326

So, it's either a 3DMark-specific optimization, or it's simply a limitation of 3DMark.
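For what it's worth, the relative drops implied by the numbers above (quick back-of-envelope arithmetic, assuming the posted figures are accurate):

```python
# Percentage slowdown going from 2.2GHz to 1GHz, using the results posted above.
def drop(fast, slow):
    return round(100.0 * (fast - slow) / fast, 1)

print(drop(152.8, 115.2))   # GT1: 24.6% slower
print(drop(21.4, 20.4))     # GT2: 4.7%
print(drop(13.1, 11.7))     # 3DMark vertex shader test: 10.7%
print(drop(13.94, 13.92))   # RightMark VS test: 0.1%, essentially CPU-independent
```

That contrast (a 10.7% drop in 3DMark's vertex shader test versus a 0.1% drop in RightMark's) is the interesting bit.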
 