Futuremark: 3DMark06

Nick[FM] said:
3DMark06 does support FETCH4, uses Dynamic Flow Control, and uses 24 bit "DST" (we prefer the wording "Hardware Shadow Mapping") for any hardware that has hardware "DST" support.

I don't see it as one sided.

Yes, but only if the card also supports DF24. So why are you forcing 24 bit DST when it's not part of the required DX9 specification?

Also, you didn't really explain why it's OK to use emulation on the X1800 and still output a score, but not on the 6200.
 
Joe DeFuria said:
AA is an "optional thing", but what you've chosen as "default" fetures are nothing more than your own subjective judgements anyway. When you turn AA on as an option, some hardware runs all tests, some don't.

Well, it is their application. Everything about its design is rather "subjective". I agree with Nick that AA and FP16 blending should be treated differently. AA is not a feature of the 3DMark06 engine.

Having said that, it looks like on the feature front it is quite balanced. But it looks like there'll be a bit of brouhaha over score reporting until next week when the R580 debuts. Once it's soundly beating the GTX512 all of the complaints will subside, at least until March :)
 
ANova said:
Also, you didn't really explain why it's OK to use emulation on the X1800 and still output a score, but not on the 6200.

Maybe because FP16 filtering emulation is needed for the entire X1800 series, whereas any effort for FP16 blending would be for the single slowest SM3.0 card on the planet, one that will hardly ever (if ever) be benched in 3DMark06?
 
trinibwoy said:
Having said that, it looks like on the feature front it is quite balanced. But it looks like there'll be a bit of brouhaha over score reporting until next week when the R580 debuts. Once it's soundly beating the GTX512 all of the complaints will subside, at least until March :)

Well, we know how the X1800 compares to the GTX in real world games, so it doesn't quite make sense to force features onto the X1800 that it does not support while leaving out performance enhancing features that it does support, which I think skews its capabilities.
 
ANova said:
Yes, but only if the card also supports DF24. So why are you forcing 24 bit DST when it's not part of the required DX9 specification?
Let me quote our whitepaper on this:
If the hardware supports depth textures, a D24X8 or DF24 depth map is used. If Depth Textures are not supported, an R32F single component 32 bit floating point texture will be used as a depth map.

It is not 24 bit for one IHV only, if that's what you are claiming. If the hardware doesn't have "DST" support, we use R32F. This is exactly what we did in 3DMark05; the only difference is that we now support DF24 and FETCH4.
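For readers curious what that selection amounts to at the API level, here is a minimal D3D9 sketch built around the usual CheckDeviceFormat probes; the helper name and the order in which the formats are tried are illustrative assumptions, not Futuremark's actual code:

[code]
#include <d3d9.h>

// Illustrative only: pick a shadow-map format the way the whitepaper
// describes -- hardware depth textures first, R32F as the fallback.
const D3DFORMAT DF24 = (D3DFORMAT)MAKEFOURCC('D', 'F', '2', '4');

D3DFORMAT ChooseShadowMapFormat(IDirect3D9* d3d, UINT adapter,
                                D3DFORMAT adapterFormat)
{
    // ATI's DF24 depth-stencil texture (the format FETCH4 rides on).
    if (SUCCEEDED(d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL, adapterFormat,
                                         D3DUSAGE_DEPTHSTENCIL,
                                         D3DRTYPE_TEXTURE, DF24)))
        return DF24;

    // "DST"-style hardware shadow mapping: a D24X8 depth surface that can
    // be bound as a texture, with PCF applied on lookup.
    if (SUCCEEDED(d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL, adapterFormat,
                                         D3DUSAGE_DEPTHSTENCIL,
                                         D3DRTYPE_TEXTURE, D3DFMT_D24X8)))
        return D3DFMT_D24X8;

    // No depth textures at all: render depth into a single-component
    // 32 bit float color target instead.
    return D3DFMT_R32F;
}
[/code]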

ANova said:
Also, you didn't really explain why it's OK to use emulation on the X1800 and still output a score, but not on the 6200.
Emulation on the 6200? :???: We require FP16 textures and FP16 blending, which are features the X1800 supports but the 6200 doesn't. I am not 100% sure I understand your question here...

Cheers,

Nick
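For context on the FP16 requirements Nick mentions, D3D9 exposes them as separate capability queries against the A16B16G16R16F format. A rough sketch of those checks, with the function names invented for illustration (not Futuremark's code):

[code]
#include <d3d9.h>

// Illustrative check for the two FP16 capabilities discussed above:
// rendering to FP16 textures, and alpha-blending into an FP16 target.
bool SupportsFP16Blending(IDirect3D9* d3d, UINT adapter, D3DFORMAT adapterFormat)
{
    // FP16 render target support.
    if (FAILED(d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL, adapterFormat,
                                      D3DUSAGE_RENDERTARGET,
                                      D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F)))
        return false;

    // Post-pixel-shader blending into that FP16 target.
    return SUCCEEDED(d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL, adapterFormat,
                                            D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING,
                                            D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F));
}

// FP16 *filtering* -- the feature the X1800 lacks and 3DMark06 emulates
// in the shader -- is a separate query again.
bool SupportsFP16Filtering(IDirect3D9* d3d, UINT adapter, D3DFORMAT adapterFormat)
{
    return SUCCEEDED(d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL, adapterFormat,
                                            D3DUSAGE_QUERY_FILTER,
                                            D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F));
}
[/code]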
 
ANova said:
Well, we know how the X1800 compares to the GTX in real world games, so it doesn't quite make sense to force features onto the X1800 that it does not support while leaving out performance enhancing features that it does support, which I think skews its capabilities.
The shader emulation we have for hardware with no support for FP16 filtering is highly efficient! There are benchmark numbers on the net where you can see how big the difference is when running with hardware FP16 filtering on and off.

Cheers,

Nick
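To make it concrete what "shader emulation" of FP16 filtering means: instead of letting the texture sampler do a bilinear fetch, the pixel shader fetches the four nearest texels with point sampling and blends them itself. A rough CPU-side illustration of that math follows; the real thing would be a few lines of HLSL, and the texel layout here is made up for the example:

[code]
#include <cmath>

struct Texel { float r, g, b, a; };   // an FP16 texel, promoted to float

// Point-sampled fetch with clamping -- stand-in for tex2D with point filtering.
Texel Fetch(const Texel* texels, int width, int height, int x, int y)
{
    x = x < 0 ? 0 : (x >= width  ? width  - 1 : x);
    y = y < 0 ? 0 : (y >= height ? height - 1 : y);
    return texels[y * width + x];
}

static Texel Lerp(const Texel& a, const Texel& b, float t)
{
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

// Emulated bilinear sample at texture coordinates (u, v) in [0,1].
Texel BilinearSample(const Texel* texels, int width, int height, float u, float v)
{
    // Map to texel space, offset by half a texel so the weights match
    // the hardware bilinear footprint.
    float tx = u * width  - 0.5f;
    float ty = v * height - 0.5f;
    int   x0 = (int)std::floor(tx), y0 = (int)std::floor(ty);
    float fx = tx - x0,             fy = ty - y0;

    // Four point-sampled fetches plus three lerps -- the work the pixel
    // shader does when the sampler can't filter FP16 itself.
    Texel t00 = Fetch(texels, width, height, x0,     y0);
    Texel t10 = Fetch(texels, width, height, x0 + 1, y0);
    Texel t01 = Fetch(texels, width, height, x0,     y0 + 1);
    Texel t11 = Fetch(texels, width, height, x0 + 1, y0 + 1);
    return Lerp(Lerp(t00, t10, fx), Lerp(t01, t11, fx), fy);
}
[/code]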
 
Holy crap. My P4C 2.4@2.8 with 2GB RAM and a Radeon 9800 gets totally raped by this new 3dmark, especially in the first CPU test. It was so slow (it reported 0 FPS, but it was more like 0.2-0.5) that I had to cancel the benchmark. In the game tests it went from 1-5 FPS with an emphasis on 1. :D
 
Nick[FM] said:
Let me quote our whitepaper on this:
It is not 24 bit for one IHV only, if that's what you are claiming. If the hardware doesn't have "DST" support, we use R32F.

Can you show an illustration, or give 3DMark users the option, to use 16 bit DST? This way, we can see:

1) what kind of performance impact using 16 bit DST has vs. the R32F work-around or 24 bit DST.
2) Quality differences.

Especially given that 16 bit DST is the DX9 standard.
 
Not that it matters too much, but I just re-ran with Cat 6.1, which improves things a bit for ATI on the X1800XT in SM2.0.

So far I've run:
--------------------------------
5.13 Drivers HQ 16AF forced resulting in:

1422 SM2.0 score
1728 HDR/SM3.0 score
1412 CPU Score

3870 total
--------------------------------
5.13 Drivers Optimal Default Quality:

1425 SM2.0 score
1799 HDR/SM3.0 score
1417 CPU Score

3948 total
--------------------------------
6.1 Drivers HQ16AF Forced:

1615 SM2.0 Score
1749 HDR/SM3.0 score
1412 CPU Score

4092 total
--------------------------------
6.1 Drivers Optimal Default Quality:

1747 SM2.0 Score
1791 SM3.0 Score
1404 CPU Score

4257 total

Card's an XT PE 700/800, CPU is a dual-core X2 2.0.
 
For some reason my SLI isn't working. I'm getting a 52xx score with 2x 7800 GTXs. The latest drivers have a profile already in them, but it doesn't appear to be working. Humph.
 
Where does D3D9 mandate a standard surface format for depth textures? It's up to the developer to choose; 16 bit is just one of many options.
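A small illustrative probe, assuming nothing beyond stock D3D9, makes the same point: several depth formats can be queried for texture use, and 16 bit is just one candidate among them (the helper name is invented for the example):

[code]
#include <d3d9.h>
#include <cstdio>

// Probe a few common depth formats as textures -- the developer picks
// whichever supported one fits their precision/bandwidth trade-off.
void ListUsableDepthTextureFormats(IDirect3D9* d3d, UINT adapter,
                                   D3DFORMAT adapterFormat)
{
    const D3DFORMAT candidates[] = { D3DFMT_D16, D3DFMT_D24X8,
                                     D3DFMT_D24S8, D3DFMT_D32 };
    const char* names[] = { "D16", "D24X8", "D24S8", "D32" };

    for (int i = 0; i < 4; ++i) {
        HRESULT hr = d3d->CheckDeviceFormat(adapter, D3DDEVTYPE_HAL, adapterFormat,
                                            D3DUSAGE_DEPTHSTENCIL,
                                            D3DRTYPE_TEXTURE, candidates[i]);
        std::printf("%s depth texture: %s\n", names[i],
                    SUCCEEDED(hr) ? "supported" : "not supported");
    }
}
[/code]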
 
SugarCoat said:
Not that it matters too much, but I just re-ran with Cat 6.1, which improves things a bit for ATI on the X1800XT in SM2.0.

So far I've run:
--------------------------------
5.13 Drivers HQ 16AF forced resulting in:

1422 SM2.0 score
1728 HDR/SM3.0 score
1412 CPU Score

3870 total
--------------------------------
5.13 Drivers Optimal Default Quality:


1425 SM2.0 score
1799 HDR/SM3.0 score
1417 CPU Score

3948 total
--------------------------------
6.1 Drivers Optimal Default Quality:

1747 SM2.0 Score
1791 SM3.0 Score
1404 CPU Score

4257 total

Card's an XT PE 700/800, CPU is a dual-core X2 2.0.
/me runs to install the 6.1 drivers!

Anyone else kind of feel like their penis shrunk a bit since running this latest 3dMark... :oops:
 
Nick[FM] said:
The shader emulation we have for hardware with no support for FP16 filtering is highly efficient! There are benchmark numbers on the net where you can see how big the difference is when running with hardware FP16 filtering on and off.

I did a cursory Google and didn't turn up anything. Can anyone provide a link? (I'm looking for numbers comparing no FP16 filtering vs. FP16 filtering in hardware vs. FP16 filtering in pixel shaders.)

And can any devs shed light on ATI's explanation that most developers don't want a box-filter for FP16, but prefer to implement their own:

http://www.behardware.com/articles/592-4/ati-radeon-x1800-xt-xl.html

ATI explains its position by the fact that in FP16, developers generally don’t want box filters (bilinear filtering, etc), but prefer better adapted ones (the Unreal 3 engine uses a specific filter with all cards, for example).


If that is true, it would seem to make more sense for FM to implement the same filter in the pixel shaders, which would run on all hardware.
 
Nick[FM] said:
It is not 24 bit for one IHV only, if that's what you are claiming. If the hardware doesn't have "DST" support, we use R32F. This is exactly what we did in 3DMark05; the only difference is that we now support DF24 and FETCH4.

Yes, but ATI claims a 16 bit DST would have been sufficient. Using D24X8 significantly reduces the bandwidth requirement for NVIDIA hardware, whereas ATI has to read and write to a 32 bit depth buffer and write to a 32 bit color buffer, for a total of 96 bits per pixel of bandwidth. Not only that, but using D24X8 results in the non-use of FETCH4 and dynamic flow control, all of which results in much lower than normal performance for the X1800.
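To spell out the 96 bits per pixel figure as the poster counts it: on the R32F fallback path the depth buffer is both read and written (32 + 32 bits) and depth is additionally written out to the 32 bit R32F color target (32 bits), giving 32 + 32 + 32 = 96 bits of traffic per shadow-map pixel. The hardware DST path only touches the single D24X8 depth/stencil surface, which appears to be where the claimed bandwidth saving comes from.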
 
ANova said:
Well, we know how the X1800 compares to the GTX in real world games, so it doesn't quite make sense to force features onto the X1800 that it does not support while leaving out performance enhancing features that it does support, which I think skews its capabilities.

Well I actually think it reflects real-world non-AA performance. The GTX looks pretty decent next to the X1800XT without AA.

Which features are being forced on the X1800 and which ones are being left out? From the conversation thus far, it doesn't seem that either IHV has an advantage, given the high efficiency claimed for the shader implementation of FP16 filtering.

NM: Saw your post above.
 
ANova said:
Yes, but ATI claims a 16 bit DST would have been sufficient. Using D24X8 significantly reduces the bandwidth requirement for NVIDIA hardware, whereas ATI has to read and write to a 32 bit depth buffer and write to a 32 bit color buffer, for a total of 96 bits per pixel of bandwidth. Not only that, but using D24X8 results in the non-use of FETCH4 and dynamic flow control, all of which results in much lower than normal performance for the X1800.
I think people are a bit confused about what we do and don't support. I don't know the source of the confusion, but it sure has left some users confused.

In 3DMark06 we do support:

- 24 bit depth stencil textures (DF24 and D24X8)
- PCF and FETCH4
- Dynamic Flow Control in the pixel shaders for SM3.0 compliant hardware

I would think that these things should be clear now for everyone.

Joe DeFuria said:
Can you show an illustration, or give 3DMark users the option, to use 16 bit DST? This way, we can see:

1) what kind of performance impact using 16 bit DST has vs. the R32F work-around or 24 bit DST.
2) Quality differences.

Especially given that 16 bit DST is the DX9 standard.
DST (if that refers to our support for hardware shadow mapping) is not a standard DX9 specification feature, be it 16 bit or 24 bit! The hardware shadow maps we use in 3DMark06 are not in the DX specifications. Currently we don't have any plans to add support (as a default or as an option) for 16 bit "DST".

I think we went through the "DST & PCF" discussions well enough with 3DMark05. The only difference in 3DMark06 is that we now also support DF24 and FETCH4 since they are available.

Cheers,

Nick
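To underline the "not in the DX9 spec" point: both the DST path and FETCH4 ride on vendor-defined FOURCC conventions layered on top of D3D9 rather than on anything in the core API. A hedged sketch of how FETCH4 is conventionally toggled on a DF24 shadow-map sampler, based on ATI's published guidance rather than Futuremark's code:

[code]
#include <d3d9.h>

// Vendor FOURCCs layered on top of D3D9 -- none of these appear in the
// core DX9 specification.
const D3DFORMAT DF24       = (D3DFORMAT)MAKEFOURCC('D', 'F', '2', '4');
const DWORD     FETCH4_ON  = MAKEFOURCC('G', 'E', 'T', '4');
const DWORD     FETCH4_OFF = MAKEFOURCC('G', 'E', 'T', '1');

// Toggle FETCH4 on the sampler holding the DF24 shadow map: the
// repurposed MIPMAPLODBIAS sampler state acts as the on/off switch.
void EnableFetch4(IDirect3DDevice9* dev, DWORD shadowSampler, bool enable)
{
    dev->SetSamplerState(shadowSampler, D3DSAMP_MIPMAPLODBIAS,
                         enable ? FETCH4_ON : FETCH4_OFF);
}
[/code]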
 