At least I was able to get the white paper...

AzBat

All,

Forget getting the demo, at least until things settle down a bit. I happened to get the white paper that explains the benchmark. It also has some scores for the ATI Radeon 9700 Pro and 9500 Pro. No mention of NVIDIA that I could see. If you want the white paper, send me an email or see if you can get it where I got it...

http://futuremark.allround-pc.com/

I'm going to try and digest the 23-page white paper. Be back in a bit. :)

Tommy McClain
 
Joe DeFuria said:
Thanks Tommy!

No problem! :)

Joe DeFuria said:
This'll give me something to do as I wait the hour or so "in line" at ShackFiles for the download. ;)

LOL, I'm going to wait till right as I leave for work, so it can be downloading all night.

First comments from the white paper...

3DMark03 White Paper said:
...3DMark03 results are very dependent on the graphics card and scale less based on the CPU speed. For this reason we have added a CPU test to allow the user to measure CPU performance for 3D graphics usage.

Sweet and about time! :)

Second, they now have 4 game tests, which are the only tests used to calculate the score. There are also feature, image quality and now sound tests.

Looking good so far...

Tommy McClain
 
3DMark03 White Paper said:
"This time we also provide a frame-based rendering mechanism that renders a fixed number of frames for each
second of the timeline. The number of frames is user configurable."

"3DMark03 scores are only generated with time-based rendering."

Very cool. Can't wait to try this out. I wonder what kind of score you get when you use this?
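
From what I can tell, the difference between the two modes boils down to how the timeline gets sampled. Here's a rough sketch of how I read it; all the names are my own invention, not Futuremark's actual code:

Code:
// Sketch of the two rendering modes as I read the white paper; every name
// here is hypothetical, not Futuremark's actual code.
#include <cstdio>
#include <ctime>

// Hypothetical stand-ins for the benchmark's internals.
void renderSceneAt(double t) { std::printf("frame at t=%.3f\n", t); }
double elapsedSeconds() { return std::clock() / (double)CLOCKS_PER_SEC; }

// Time-based: sample the timeline at whatever wall-clock moment each frame
// lands on. Faster cards render more frames; only this mode yields a score.
void runTimeBased(double timelineLength) {
    while (elapsedSeconds() < timelineLength)
        renderSceneAt(elapsedSeconds());
}

// Frame-based: step the timeline by a fixed amount per frame, so every card
// renders exactly the same frames (handy for image-quality comparisons).
void runFrameBased(double timelineLength, int framesPerSecond) {
    const double step = 1.0 / framesPerSecond;  // user-configurable
    for (double t = 0.0; t < timelineLength; t += step)
        renderSceneAt(t);
}

int main() { runFrameBased(2.0, 10); return 0; }

If I read it right, that would also explain why scores only come from time-based runs: in frame-based mode every card renders an identical, fixed set of frames, so the frame count itself tells you nothing.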

3DMark03 White Paper said:
"One of the game tests, the DirectX 9
showcase, uses 2.0 vertex shaders and 2.0 pixel shaders. All other games tests use 1.1 vertex shaders. The
DirectX 8 game tests use 1.4 pixel shaders if available; otherwise they default to 1.1 pixel shaders."

Hmm, I'm not sure if this is good or not. What do other people think of defaulting to 1.1 pixel shaders if 1.4 are not available?
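
For reference, the usual DX8 mechanism for that kind of choice is just a caps check at startup; something roughly like this (my guess at the standard idiom, not 3DMark03's actual code):

Code:
// Hypothetical sketch of shader version selection against the D3D8 caps;
// this is the common idiom, not Futuremark's actual code.
#include <d3d8.h>

DWORD ChoosePixelShaderVersion(IDirect3DDevice8* device) {
    D3DCAPS8 caps;
    device->GetDeviceCaps(&caps);
    // Prefer PS 1.4 (fewer passes); otherwise fall back to PS 1.1.
    if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 4))
        return D3DPS_VERSION(1, 4);
    return D3DPS_VERSION(1, 1);
}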

Tommy McClain
 
Luminescent said:
What is the default precision for the PS 2.0 synthetic test?

Didn't see any mention of that in the white paper, but it does say this...

3DMark03 White Paper said:
"Only the calculation precision of the
graphics hardware is a limit for the resulting texture resolution."

Oh well.

Tommy McClain
 
AzBat said:
Hmm, I'm not sure if this is good or not. What do other people think of defaulting to 1.1 pixel shaders if 1.4 are not available?

I think it's a good thing, as long as the resultant image quality is essentially identical.
 
For those that don't have the white paper, we've done up an overview article on 3DMark03 over here.

It's a nice improvement over 3DMark2K1.

The bench still scales with system performance (FSB/CPU, etc.) quite a bit, but it is definitely *more* graphics card limited than 2K1 was.

AzBat: if it used only 1.1 pixel shaders instead of preferring 1.4 and falling back to 1.1, it wouldn't be much of a DX9 benchmark, since a lot of the capabilities of DX8.1 and DX9 wouldn't be used. :) I think it's great that 1.4 is supported; since this is a DX9 benchmark, all DX9 cards should support 1.4 shaders as well, and the amount of rendering saved by using 1.4 over 1.1 is striking.
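
To put a toy number on that savings: more texture reads per pass means fewer passes for the same effect, and every extra pass re-submits the geometry and re-pays fill cost. A back-of-the-envelope sketch with illustrative figures (not numbers from the white paper):

Code:
// Toy pass-count arithmetic; the per-pass limits are the nominal texture
// counts (4 for PS 1.1, 6 for PS 1.4), and the effect itself is made up.
#include <cstdio>

int passesNeeded(int textureReads, int readsPerPass) {
    return (textureReads + readsPerPass - 1) / readsPerPass;  // ceiling
}

int main() {
    const int effectReads = 6;  // hypothetical per-light effect
    std::printf("PS 1.1 (4 reads/pass): %d passes\n", passesNeeded(effectReads, 4));
    std::printf("PS 1.4 (6 reads/pass): %d passes\n", passesNeeded(effectReads, 6));
    return 0;
}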
 
Hmm... too bad there's no mention of facilitating comparison of the Image Quality Test results through ORB, or of any aids to mathematically compare images (however roughly) against reference images from your card with, for example, different driver versions.

Well...the focus is there, perhaps extra features will be added to the Pro service in the future.
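
In the meantime, something as blunt as per-pixel RMS error over captured frames would already flag driver-to-driver differences; a quick sketch, assuming two same-size 8-bit RGB dumps:

Code:
// Rough image-difference metric: RMS error over raw 8-bit RGB buffers of
// equal size. 0 means identical; bigger means more visible difference.
#include <cmath>
#include <cstdio>
#include <vector>

double rmsError(const std::vector<unsigned char>& test,
                const std::vector<unsigned char>& reference) {
    double sum = 0.0;
    for (std::size_t i = 0; i < test.size(); ++i) {
        double d = double(test[i]) - double(reference[i]);
        sum += d * d;
    }
    return std::sqrt(sum / double(test.size()));
}

int main() {
    std::vector<unsigned char> a(4, 100), b(4, 104);  // tiny dummy frames
    std::printf("RMS error: %.2f\n", rmsError(a, b)); // prints 4.00
    return 0;
}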

Also, I've been wondering: should tests like these use HLSL? Do they? If the compiler improves over time, I'd think that would be a valid reason for benchmark results to change. Perhaps as an option, if not by default...
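
For what it's worth, if the tests did go through HLSL, the shader code the driver sees would be whatever the D3DX compiler emits at run time, so a newer compiler could legitimately move scores. A minimal sketch using the real D3DX entry point (the file and entry-point names are made up):

Code:
// Hypothetical HLSL compile path; D3DXCompileShaderFromFile is the real
// D3DX9 call, but "water.fx" and "PSMain" are invented for illustration.
#include <d3dx9.h>

HRESULT CompilePS(LPD3DXBUFFER* byteCode) {
    LPD3DXBUFFER errors = NULL;
    // The ps_2_0 bytecode produced here depends on the D3DX version in use.
    HRESULT hr = D3DXCompileShaderFromFile(
        "water.fx",   // hypothetical shader source
        NULL, NULL,   // no #defines, no custom #include handler
        "PSMain",     // hypothetical entry point
        "ps_2_0",     // target profile
        0,            // default flags (full precision)
        byteCode, &errors, NULL);
    if (errors) errors->Release();
    return hr;
}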

Oh, and about PS 1.4 defaulting to PS 1.1... what performance enhancements does PS 1.3 offer? Are they applicable to what is being tested, or are they just functionality enhancements that the tests would not benefit from?
 
Luminescent said:
What is the default precision for the PS 2.0 synthetic test?

I asked Futuremark about this and they said it was whatever DX9 defaults to. In the case of the 9700 that is 96-bit (four components at 24 bits each), and they said the NV30 would probably be 128-bit (four at 32 bits each).
 
I think it's great that 1.4 is supported; since this is a DX9 benchmark, all DX9 cards should support 1.4 shaders as well, and the amount of rendering saved by using 1.4 over 1.1 is striking.

Agreed. Though historically speaking, GeForce4 cards, for example, seem to handle PS 1.1 about as fast as a Radeon handles PS 1.4. It will be interesting to see if 3DMark03 results say the same thing.

You mentioned DX9 hardware running PS 1.4... and that brings up something we've been batting around here.

The R300, by all accounts, processes the same number of pixels per clock regardless of the DirectX path: DX9 floating point or PS 1.4 integer, it's 8 pixels per clock.
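
The arithmetic behind that is simple: peak fill is pipelines times core clock, so if the pipe count really is constant across paths, the theoretical ceiling is too. A trivial sketch (325 MHz is the 9700 Pro core clock; the 8-pipes-either-path part is the premise above, not something I'm adding):

Code:
// Back-of-the-envelope peak fill rate: pipelines x core clock. If the pipe
// count really is 8 on either path, the theoretical ceiling is identical.
#include <cstdio>

int main() {
    const double coreClockMHz = 325.0;  // Radeon 9700 Pro core clock
    const int pixelsPerClock = 8;       // the premise discussed above
    std::printf("peak fill: %.0f Mpixels/s\n", coreClockMHz * pixelsPerClock);
    return 0;
}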

On the other hand, the GeForce FX is a big mystery. We do know that it has separate units for PS 1.1-1.3 and PS 2.0. We have no idea how it will handle PS 1.4: with the floating-point pipeline, or with the extended integer pipeline of the GeForce4?

This is important, because based on preliminary tests we have come to expect the GeForce FX's floating-point performance (edit: particularly 128-bit) to possibly be very lacking. We might have a case where the GeForce FX would run the DX8 tests FASTER with PS 1.1 vs. PS 1.4, despite it being "more work."

So again, the question is: what happens when the FX encounters the DX8 tests in 3DMark? Is it forced to run in PS 1.4 (because it is available)? Can it run in PS 1.1 as an option? Are there any performance differences either way?

Things that make you go "Hmmmm..."
 
Joe DeFuria said:
I think it's great that 1.4 is supported; since this is a DX9 benchmark, all DX9 cards should support 1.4 shaders as well, and the amount of rendering saved by using 1.4 over 1.1 is striking.

This is important, because based on preliminary tests we have come to expect the GeForce FX's floating-point performance (edit: particularly 128-bit) to possibly be very lacking. We might have a case where the GeForce FX would run the DX8 tests FASTER with PS 1.1 vs. PS 1.4, despite it being "more work."

So again, the question is: what happens when the FX encounters the DX8 tests in 3DMark? Is it forced to run in PS 1.4 (because it is available)? Can it run in PS 1.1 as an option? Are there any performance differences either way?

Things that make you go "Hmmmm..."

So isn't that NVIDIA's problem? If they find their hardware doesn't do 1.4 better than 1.1 shaders, then they can just run all 1.4 shaders through 1.1 in their drivers, couldn't they? It seems silly to me that applications (benchmarks, games, whatever) should worry about whether one card does 1.4 better than 1.1 or the other way around. NVIDIA has to figure out how to make their cards run well in DX9. 1.1 can do everything 1.4 does, it just takes more work (at least generally speaking).
 
It looks great from a quick glance over the white paper.

Anyway, two upcoming points of heated discussion between ATI and nVidia fans could well be:

1) The nice advantage ATI cards get from being able to run PS 1.4 in game tests #2 and #3...

2) The use of the DX9 sincos instruction in game test #4, which should be a fair advantage for the GeForce FX over the R300.

;)
 
That is true, LeStoffer; the FX supports instructions like sincos natively (while the R300 handles them through DX9 macros), but we have yet to find the real problem with the FX.
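
For the curious, here's what the native form looks like at the vs_2_0 assembly level; an illustrative sketch assembled through D3DX, with arbitrary register choices (c10/c11 would need to be preloaded with the series constants the macro expansion expects, D3DSINCOSCONST1/2 in the D3DX headers):

Code:
// Illustrative only: sincos computes cos into .x and sin into .y in one
// macro-op. Hardware without a native path (the R300 case above) gets the
// expanded macro instead. Registers here are arbitrary.
#include <d3dx9.h>

const char kVS[] =
    "vs_2_0\n"
    "dcl_position v0\n"
    "mov r1.x, v0.x\n"
    "sincos r0.xy, r1.x, c10, c11\n"  // r0.x = cos(v0.x), r0.y = sin(v0.x)
    "mov oPos, v0\n"
    "mov oD0, r0.xyxy\n";

HRESULT AssembleExample(LPD3DXBUFFER* code) {
    return D3DXAssembleShader(kVS, sizeof(kVS) - 1, NULL, NULL, 0, code, NULL);
}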
 
Just heard that 3DMark03 is (at the moment at least) as much in demand as a CS or HL patch on release day! :oops:

I take that as an honour.. ;) If only we could top the internet slowdown caused by the UT2003 demo download.. :devilish:
 