Futuremark: 3DMark06

Damn, what kind of monster CPU do you need to get the CPU test running at over 1fps?

I feel like the whole benchmark is really high end compared to previous 3DMarks. I thought my PC was still high end (X2 4200, 7800 GTX 256MB), but not a single test ran at watchable frame rates. The 3D tests are all below 15fps average and the CPU tests below 1fps.

It's a shame too, because the graphics are really pretty. It's just not fun to watch at 15fps.
 
ANova said:
The X1800 supports Fetch4; it does not support DF24, therefore it is not running Fetch4 due to your requirements.

The Radeon X1800 doesn't support Fetch4. The Radeon X1600 and X1300 (and indeed R580) do.

EDIT: Bah, Dave beat me to it. :p
 
Applying AA at all takes you off the standard run; AA is not applied in a standard run in 06, as they have gone for higher resolution instead. Once you start changing the standard settings, in my view it is better to run the tests individually and report frame rates rather than a score, which is artificial by its very nature anyway.

If the standard test had AA then this would be more of an issue.
 
Kombatant said:
So, all in all, the overall score is a proof of concept of what would happen if developers headed that way when developing their games? Or am I getting this all wrong?
The CPU tests are proof of what can be "milked out of" dual cores if it is done properly. That was my point. I thought we were discussing the CPU tests only?

The overall 3DMark score is calculated based on the CPU tests and the graphics tests. I think that everyone in this room also understands that what you see in 3DMark06 is something we don't really have in current games. 3DMark06 is a forward-looking benchmark, while still enabling today's hardware to run the benchmark and get comparable results. We are convinced that as soon as game developers have had enough time to optimize the CPU side of things in their games, we will see increased performance with dual cores (some released games already prove this), which will be reflected in the 3DMark06 score.

Cheers,

Nick
 
inefficient said:
Damn, what kind of monster CPU do you need to get the CPU test running at over 1fps?

I feel like the whole benchmark is really high end compared to previous 3DMarks. I thought my PC was still high end (X2 4200, 7800 GTX 256MB), but not a single test ran at watchable frame rates. The 3D tests are all below 15fps average and the CPU tests below 1fps.

It's a shame too, because the graphics are really pretty. It's just not fun to watch at 15fps.
The CPU tests won't ever run at high fps since they use fixed frame-based rendering. ;) I suggest that you take a peek at the whitepaper for more in-depth info about the CPU tests, and why they are the way they are: http://www.futuremark.com/companyinfo/3DMark06_Whitepaper_v1_0_2.pdf

Cheers,

Nick
 
Nick, about the dual-core issue and complaints raised here:

Don't you think it would have been better to include an advanced dual-core/CPU test but to do it *in* the benchmark, so to say? I.e. just as you have the SM3.0/HDR tests, don't you think it would have made more sense to add a "dual-core" graphics test with either SM3.0 or SM2.0?
With a test like that you could hopefully have sent out a clearer message as to why dual core is the future in gaming, and been the first to show it on a big scale, don't you agree?

I think that would have better justified the whole benchmark as "the gamers' benchmark".
 
Nick[FM] said:
The CPU tests are proof of what can be "milked out of" dual cores if it is done properly. That was my point. I thought we were discussing the CPU tests only?

The overall 3DMark score is calculated based on the CPU tests and the graphics tests. I think that everyone in this room also understands that what you see in 3DMark06 is something we don't really have in current games. 3DMark06 is a forward-looking benchmark, while still enabling today's hardware to run the benchmark and get comparable results. We are convinced that as soon as game developers have had enough time to optimize the CPU side of things in their games, we will see increased performance with dual cores (some released games already prove this), which will be reflected in the 3DMark06 score.

Cheers,

Nick
We are discussing CPU scores, but they influence the overall score a great deal, hence my question. In any case, I believe you answered what I wanted to know, thanks for taking the time to do so :)
 
inefficient said:
It's a shame too, because the graphics are really pretty. It's just not fun to watch at 15fps.
This is not meant to be a funny benchmark, but it ended up being one!

There's no way that dual core will impact game performance in the way you're trying to paint the picture! A system with a dual-core CPU and a slower GFX card will NOT play games faster than one with a single-core CPU and a faster GFX card!

Since Nick has stated that there is nothing wrong in doing a multi-vendor path (even if that means hurting one side by implementing something it doesn't support, but relying on it heavily in some crucial tests), 'cos hell, "game developers are doing it", does it mean that it is now OK for the vendors to do shader replacement through drivers, or to make their own patches for 3DMark06 to improve performance (we've all praised Humus for his work on DOOM3)?

You're so full of pride and joy, Nick, that you honestly cannot see what you have done to 3DMark? You say there was no artistic reason to put in parallax occlusion mapping? Well goddamn, you should have created different art! There are a bunch of games being designed right now in such a way that this type of mapping is crucial for immersion, and what you are giving us is some unusable fireflies (this time two), one big dragon fish, skinned silly, AGAIN, and one non-playable Antarctic scene (good for some in-game cinematic, and nothing more).

You were either too lazy to create something truly new and usable, or you just don't know how! Any way you turn it, you've failed with 06!

Does anyone find it ironic that the company that pushed SM2.0 for so long is now losing its dominance in that pair of tests, and that "the Power of Three" has comparably worse results in the SM3.0 part (the GTX 256 vs. R520 XT situation)?
 
Nick[FM] said:
3DMark06 is a forward-looking benchmark...
I'm not so sure how far ahead it's looking. It doesn't include parallax mapping (which has already appeared in games like FEAR) or a decent level of dynamic flow control (which is one of the more important SM3.0 features). If anything, 3DMark06 seems more like a modern benchmark, not a future one. In that case (and at the risk of sounding like I just graduated from the [H] school of thought), modern games would serve as a better guide to graphics performance.
 
ANova said:
The X1800 supports Fetch4; it does not support DF24, [...]

[...] and since the X1800 does not support D24X8, it has to fall back to R32F, which has an impact on bandwidth. So tell me, how is 24-bit to 32-bit a fair comparison? It's apples to oranges. So while the 7800s are running well on 24-bit with PCF, the X1800 is being compared to them on 32-bit without any Fetch4 or DFC support. Thus it is not a relevant test for comparing the two's capabilities imo.
Not only is dynamic branching a victim of R520's seriously tardy arrival, which prevented devs from getting any quality time with it, but R520 is also somewhat of a runt in terms of features; RV530, with its Fetch4 support, is reasonable evidence of that.

I strongly suspect ATI will support the "NVidia" technique in the not too distant future (but not with X1900). From the XB360 GPU Overview:

Additionally, the back buffer can be resolved to the DXT3A_AS_1_1_1_1 format and the depth-stencil buffer can be resolved to the 24:8 fixed-point or 24:8 floating-point formats
I'm no expert, but that appears to mean that XB360 already supports D24X8. Though whether it also supports Fetch4, I'm not clear.

It would be nice if someone who truly understands this stuff clears up my misapprehensions...

In the end, even with X1900, ATI has seemingly left out a feature that it apparently knows is due for the big time (D24X8). It's up to ATI to demonstrate that the suggested 16-bit shadowing would have enough quality.

Jawed
 
Ratchet said:
I'm not so sure how far ahead it's looking. It doesn't include parallax mapping (which has already appeared in games like FEAR) or a decent level of dynamic flow control (which is one of the more important SM3.0 features). If anything, 3DMark06 seems more like a modern benchmark, not a future one. In that case (and at the risk of sounding like I just graduated from the [H] school of thought), modern games would serve as a better guide to graphics performance.

Since parallax mapping doesn't stress the GPU that much more than plain old normal mapping, for purely benchmarking purposes I don't think it was necessary.

It would have been trivial for them to change their shader to support parallax mapping, but it would have just been a checkbox feature. There just aren't many opportunities to notice it in these tests.
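For illustration, this is roughly what that change amounts to on top of plain normal mapping: one height-map fetch and a texture-coordinate offset. It's only a sketch of the general technique; the sampler names and constants are placeholders, not Futuremark's shaders.

```hlsl
sampler2D heightMap;            // height stored in the red channel
sampler2D normalMap;
sampler2D diffuseMap;
float2    parallaxScaleBias;    // e.g. (0.03, 0.015)

float4 ps_main(float2 uv      : TEXCOORD0,
               float3 viewTS  : TEXCOORD1,   // view vector in tangent space
               float3 lightTS : TEXCOORD2) : COLOR
{
    float3 v = normalize(viewTS);

    // The only extra work over straight normal mapping:
    float  height = tex2D(heightMap, uv).r;
    float  offset = height * parallaxScaleBias.x - parallaxScaleBias.y;
    uv += offset * v.xy;

    // Everything below is unchanged from the normal-mapping version.
    float3 n      = normalize(tex2D(normalMap, uv).rgb * 2.0f - 1.0f);
    float3 l      = normalize(lightTS);
    float3 albedo = tex2D(diffuseMap, uv).rgb;
    return float4(albedo * saturate(dot(n, l)), 1.0f);
}
```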
 
dizietsma said:
Applying AA at all takes you off the standard run; AA is not applied in a standard run in 06, as they have gone for higher resolution instead. Once you start changing the standard settings, in my view it is better to run the tests individually and report frame rates rather than a score, which is artificial by its very nature anyway.

If the standard test had AA then this would be more of an issue.

Up to 06, no distinction was made between the standard run and special runs with AF/AA or a different resolution. You always got a score/measurement of gaming performance. Many people used it with higher resolutions and AA/AF to set it to "HD gaming" standards.

GF6/7 also deserves an overall score for the AA environment.

I did not test it, but I think the GF7 also receives a score if you force AF only!?

Klaus
 
It strikes me that Futuremark's design decisions, however honestly conceived, penalise ATI's current high-end chip for not supporting Fetch4. On the other hand, the R5X0 series of chips are able to support AA + HDR, something no NVidia chip is able to do, yet these NVidia chips are not penalised in the same way.

We know that chips which don't support the required depth textures for the PS2.0 shadowing are forced into a relatively expensive shader workaround, which is fine by me as Futuremark have decided 24-bit accuracy is required. On the other hand, if this is acceptable, why aren't chips which are unable to support AA + HDR also forced into a shader workaround?
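For anyone wondering what that workaround looks like in rough shader terms: the PCF path lets the hardware do the depth compare and filtering from a D24X8 depth texture, while the fallback fetches depths from an R32F map and does the compares and filtering itself. This is only a sketch of the two general techniques with illustrative names, not Futuremark's actual shaders.

```hlsl
// Path A: hardware PCF, on hardware that accepts a depth texture bound
// as a sampler.  tex2Dproj compares the projected reference depth
// (shadowCoord.z/w) against the texel and returns a filtered result.
sampler2D shadowMapPCF;     // D24X8 depth texture

float HardwarePCF(float4 shadowCoord)
{
    return tex2Dproj(shadowMapPCF, shadowCoord).r;
}

// Path B: the fallback.  Depth is stored in a plain R32F surface, so the
// shader fetches the neighbouring texels itself and does the compares
// and filtering (Fetch4 would at least deliver the four depths in a
// single texture instruction).
sampler2D shadowMapR32F;    // single-channel float depth
float2    texelSize;        // 1 / shadow map resolution

float ManualPCF(float4 shadowCoord)
{
    float2 uv = shadowCoord.xy / shadowCoord.w;
    float  z  = shadowCoord.z  / shadowCoord.w;

    float sum = 0.0f;
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
        {
            float d = tex2D(shadowMapR32F, uv + float2(i, j) * texelSize).r;
            sum += (z <= d) ? 1.0f : 0.0f;
        }
    return sum * 0.25f;
}
```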

I note that in this interview, David Kirk explains NVidia's decision to not support AA + HDR thus:

David Kirk said:
"It would be expensive for us to try and do it in hardware, and it wouldn't really make sense - it doesn't make sense, going into the future, for us to keep applying AA at the hardware level. What will happen is that as games are created for HDR, AA will be done in-engine according to the specification of the developer."

This being the case, surely it would have been logical for Futuremark to include a shader workaround for SM3.0 cards unable to support AA + HDR which generates the AA in-engine, the technique recommended by NVidia's Chief Scientist? Not being as technically minded as some others on this board, I'm assuming here that this is entirely feasible, although it would undoubtedly add a great deal of complexity to the shaders.
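One way this could plausibly be done, purely as an illustration of the idea (not something Futuremark or NVidia have specified): render the HDR scene to a render target larger than the back buffer and resolve it with a simple downsample shader, effectively supersampling instead of multisampling. The names below are placeholders.

```hlsl
// Resolve pass for an HDR scene rendered at, say, 2x the back-buffer
// resolution: a plain box-filter downsample.  A real engine would fold
// this into its tone-mapping pass.
sampler2D hdrScene;       // oversized FP16 render target
float2    sourceTexel;    // 1 / source render-target resolution

float4 ps_resolve(float2 uv : TEXCOORD0) : COLOR
{
    float4 c = tex2D(hdrScene, uv + float2(-0.5f, -0.5f) * sourceTexel)
             + tex2D(hdrScene, uv + float2( 0.5f, -0.5f) * sourceTexel)
             + tex2D(hdrScene, uv + float2(-0.5f,  0.5f) * sourceTexel)
             + tex2D(hdrScene, uv + float2( 0.5f,  0.5f) * sourceTexel);
    return c * 0.25f;
}
```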

This all seems a pretty reasonable idea to me. Any holes in my logic?
 
I suppose one could argue (in a very silly manner) that ATI and NVIDIA deserve what's happening to them in 06. For example, it doesn't look good stating that "It would be expensive for us to try and do it in hardware, and it wouldn't really make sense" when your nearest competitor has done it; it also doesn't look good offering Fetch4 on your mid- and low-end models but not your top end (although this might change with R580) when your nearest competitor offers PCF across the whole range to which it is appropriate.

But like I said, all very silly...
 
Jawed said:
Not only is dynamic branching a victim of R520's seriously tardy arrival, which prevented devs from getting any quality time with it, but R520 is also somewhat of a runt in terms of features; RV530, with its Fetch4 support, is reasonable evidence of that.

Hang on, wasn't dynamic branching demoed by Nvidia with their launch of NV40? Why is it that R520's late arrival stopped devs from using dynamic branching, when Nvidia have been offering it for two product generations?

Dynamic branching should be in any forward-looking benchmark, and would be a big advantage for ATI as they have spent a lot of transistors on it in R520/R580, but for some reason Futuremark decided not to test this important aspect of newer SM3.0 hardware.
 
Bouncing Zabaglione Bros. said:
Hang on, wasn't dynamic branching demoed by Nvidia with their launch of NV40? Why is it that R520's late arrival stopped devs from using dynamic branching, when Nvidia have been offering it for two product generations?

Dynamic branching should be in any forward-looking benchmark, and would be a big advantage for ATI as they have spent a lot of transistors on it in R520/R580, but for some reason Futuremark decided not to test this important aspect of newer SM3.0 hardware.


Well, for the most part, other than the terrain (about the only Futuremark art asset that could use dynamic branching to improve visuals via parallax occlusion mapping or similar bump mapping, and even that would be a minimal improvement in quality because, as explained earlier, the parallax amount would be too small to be appreciated), I don't see where else it would be very useful.

Nick, where is dynamic branching being used right now? From what I've seen in the benchmark, I'm guessing for the multiple light sources, and possibly the water in game 3.
 
Mariner said:
It strikes me that Futuremark's design decisions, however honestly conceived, penalise ATI's current high-end chip for not supporting Fetch4. On the other hand, the R5X0 series of chips are able to support AA + HDR, something no NVidia chip is able to do, yet these NVidia chips are not penalised in the same way.

What exactly is the penalty for not having Fetch4? Can someone with the full version settle this once and for all? Maybe we are making a big deal out of nothing?
 
Neeyik said:
it also doesn't look good offering Fetch4 on your mid- and low-end models but not your top end (although this might change with R580) when your nearest competitor offers PCF across the whole range to which it is appropriate.
AFAIK there are IP and usage issues related to PCF, though. IIRC the PCF operation is actually covered by an SGI patent which NVIDIA inherited back when SGI transferred a bunch of technology and engineers to NVIDIA, and NVIDIA implemented it after that. The actual PCF operation is atypical for any kind of previously documented texture operation in DX, but because it was included in the Xbox, and documented there, developers started using it, and then it also happened to "work" under DX. MS do openly talk about it now, but when the capability is undocumented and support kind of happens through osmosis, it's going to take time for a competitor to actually get it into hardware (and do it in such a fashion that doesn't step on existing IP).
 
Razor1 said:
Well, for the most part, other than the terrain (about the only Futuremark art asset that could use dynamic branching to improve visuals via parallax occlusion mapping or similar bump mapping, and even that would be a minimal improvement in quality because, as explained earlier, the parallax amount would be too small to be appreciated), I don't see where else it would be very useful.
With the jittered sampling from the shadow maps, I believe dynamic branching could be used to decide whether a pixel is in, out of, or on the edge of a shadow.
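Something along these lines, perhaps; a rough SM3.0 sketch of the idea only, with illustrative names, offsets and sample counts rather than whatever 3DMark06 actually does:

```hlsl
sampler2D shadowMap;        // single-channel float depth
float2    texelSize;        // 1 / shadow map resolution
float2    jitterTaps[12];   // precomputed disc offsets, set by the app

float ShadowTerm(float2 uv, float z)
{
    // Four cheap corner taps act as an edge detector.
    float4 d;
    d.x = tex2D(shadowMap, uv + float2(-1, -1) * texelSize).r;
    d.y = tex2D(shadowMap, uv + float2( 1, -1) * texelSize).r;
    d.z = tex2D(shadowMap, uv + float2(-1,  1) * texelSize).r;
    d.w = tex2D(shadowMap, uv + float2( 1,  1) * texelSize).r;
    float lit = dot(step(z, d), float4(0.25f, 0.25f, 0.25f, 0.25f));

    [branch]
    if (lit == 0.0f || lit == 1.0f)
        return lit;                    // fully lit or fully shadowed: done

    // Edge pixel: pay for the full jittered kernel.
    float sum = 0.0f;
    for (int i = 0; i < 12; i++)
    {
        float dj = tex2Dlod(shadowMap,
                            float4(uv + jitterTaps[i] * texelSize * 2.0f, 0, 0)).r;
        sum += step(z, dj);
    }
    return sum / 12.0f;
}
```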
 