Futuremark: 3DMark06

Mariner said:
It strikes me that Futuremark's design decisions, however honestly conceived, penalise ATI's current high-end chip for not supporting Fetch4. On the other hand, the R5X0 series of chips are able to support AA + HDR, something no NVidia chip is able to do, yet these NVidia chips are not penalised in the same way.

We know that chips which don't support the required depth textures for the PS2.0 shadowing are forced into a relatively expensive shader workaround, which is fine by me, as Futuremark have decided 24-bit accuracy is required. On the other hand, if this is acceptable, why aren't chips which are not able to support AA + HDR also forced into a shader workaround?

I note that in this interview, David Kirk explains NVidia's decision to not support AA + HDR thus:

I really liked Mariner's post, and I think it got overlooked.

ATI decided not to implement FP16 texture filtering, saying that in the future developers will want to use a different kernel from the default box filter. So what Futuremark is doing is using this "very efficient" shader filtering (we don't have any information on what "very efficient" actually means).
Which is good; that's what ATI was looking for: let the developer do it with pixel shaders.
(One could argue that FM could have used a custom filter, to raise quality, and implemented it with pixel shaders on all hardware, as that's probably what future developers will do, but that's still a personal point of view.)
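(To make the filtering point concrete, here is roughly what such a shader fallback computes; this is a toy Python stand-in for pixel-shader code, with the texture and all names invented for illustration. A custom kernel would simply change the weights.)

Code:
# Toy sketch of emulating bilinear filtering of an unfilterable
# (e.g. FP16) texture in the shader: four point samples, two lerps.
# Plain Python stands in for pixel-shader code; data is invented.

def texel(tex, x, y):
    """Point-sample with clamp-to-edge addressing."""
    h, w = len(tex), len(tex[0])
    return tex[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

def bilinear(tex, u, v):
    """Four point samples + weights = one bilinear fetch.
    A custom kernel would just use different weights here."""
    x, y = u - 0.5, v - 0.5          # align to texel centres
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = texel(tex, x0, y0) * (1 - fx) + texel(tex, x0 + 1, y0) * fx
    bot = texel(tex, x0, y0 + 1) * (1 - fx) + texel(tex, x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

hdr_tex = [[0.5, 4.0],               # tiny "HDR" texture with
           [2.0, 8.0]]               # values above 1.0
print(bilinear(hdr_tex, 1.0, 1.0))   # blend of all four texels: 3.625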

Now again, as all of you remember, Nvidia told everyone that multisampling with HDR was useless, because developers in the future will do the AA inside pixel shaders.
Then why didn't FM do that with a PS on hardware not supporting it natively?
Why not do software SSAA?
Wouldn't it be "very efficient"? Too bad for the hardware not supporting it!
Would it show that Nvidia cards (and any other cards that don't support it) are not future proof? Surely!
(But protecting some hardware is not what an "objective" benchmark should do anyway!)
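(And shader-side SSAA isn't magic; this toy Python sketch, with the shading function and sample grid invented, is all it amounts to: shade several sub-positions per pixel and box-filter them down. That per-pixel cost multiplier is exactly what would have hammered hardware without native HDR+AA.)

Code:
# Toy sketch of shader-side supersampling (software SSAA): shade a
# 2x2 grid of sub-samples per pixel and box-filter them down.
# shade() is a stand-in for whatever the pixel shader computes.

def shade(x, y):
    # Placeholder "scene": a hard diagonal edge between bright and dark.
    return 1.0 if x > y else 0.1

def pixel_ssaa(px, py, grid=2):
    """Average grid*grid shaded sub-samples -> grid^2 times the cost."""
    total = 0.0
    for j in range(grid):
        for i in range(grid):
            sx = px + (i + 0.5) / grid   # sub-sample position in pixel
            sy = py + (j + 0.5) / grid
            total += shade(sx, sy)
    return total / (grid * grid)

# A pixel sitting right on the edge gets a blended value
print(pixel_ssaa(10, 10))   # 0.325 instead of a hard 0.1 or 1.0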

It surely is just a coincidence that all the weird decisions made go against ATI... But still!
 
Neeyik said:
Surely a game's performance shouldn't hinge on the use of dynamic branching though? What dev house would use it in such a manner to totally bork a huge amount of the user hardware base?
Shader effects in games normally come with options to turn them on or off, perhaps with a default based on the shader model of the hardware.

So, it's easy to see that in future games sophisticated eye-candy will make use of optional shaders, some of which will be heavily dependent on dynamic branching - with some pixels running shaders for 150 or 200+ cycles while most other pixels stick to an upper limit of about 100, say. If your future SM4 card is at the low-end, then you prolly won't be turning on such eye-candy options :p

So, in the end, a future game's performance hinges on the max eye-candy options the user sets. If some of those options are predicated on having viable dynamic branching performance, then the game's performance hinges on the capability of the GPU - exactly as it does today. FEAR can be played on an FX5600 by turning every option off or to the minimum. Admittedly it looks shite, but I've seen it run.

In a so-called future-aware benchmark, you'd expect a key feature of SM3 and all succeeding versions to play a significant role. Well, I would expect so. FM's only viable excuse is the tardiness of R520. I call that unimaginative.
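Back-of-the-envelope, using the cycle counts above (all numbers invented; real GPUs also branch in pixel batches, so the win is smaller than this per-pixel model suggests):

Code:
# Made-up numbers echoing the post: most pixels run ~100 cycles, a few
# hot pixels run ~200. Compare real dynamic branching vs. no branching
# (where the expensive path runs for everyone and gets masked out).

PIXELS = 1_000_000
HOT_FRACTION = 0.05        # pixels that actually need the long shader
CHEAP, EXPENSIVE = 100, 200

hot = int(PIXELS * HOT_FRACTION)
cold = PIXELS - hot

with_db = cold * CHEAP + hot * EXPENSIVE
without_db = PIXELS * EXPENSIVE          # everyone pays the long path

print(f"with DB:    {with_db:>12,} cycles")
print(f"without DB: {without_db:>12,} cycles")
print(f"ratio:      {without_db / with_db:.2f}x")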

Jawed
 
Joe DeFuria said:
That is a separate question, actually. It seems to me that both SM3.0 tests are very similar in terms of the quality and features that they stress... why is this? Why, if you're going to have two tests, would you not make them very different in terms of what they are stressing?
If it had been your decision, what would the second SM3.0 test look like (features, quality, settings, ...)?
 
And on a lighter note, a post from Hanners over at EB about NGO HQ's latest article on 3dm2k6, entitled "Is 3DMark Really the 'Gamers Benchmark'?":

The program’s authors, Futuremark, refer to it as “The gamers’ benchmark”. However, there is a dark side to this innocuous little app. 3DMark uses various game tests which help to decide your score, and all of these game tests are based on the DirectX9 platform. The question remaining on the silent majority’s lips is “Where are the OpenGL game tests?” In response Futuremark have claimed that OpenGL is not deemed popular enough to warrant incorporation into the program. Is that really so?

Does the name “Counter-Strike” ring a bell? Most people have heard of this game (in fact there are pygmy tribes in Central Africa who have never seen a PC that are familiar with it!). And I’m not talking about Counter-Strike: Source. Even today the original Counter-Strike remains very popular among the gamers. What about Quake 3, Quake 4, Doom 3, Enemy Territory, The Chronicles of Riddick and Call of Duty? All of these big titles are OpenGL based.

I'm sorry, but this one is slaying me! :LOL:
 
N00b said:
If it had been your decision, what would the second SM3.0 test look like (features, quality, settings, ...)?

For example, one likely with less emphasis on shadowing and HDR (which the first test covered), and more emphasis on implementing other features that are at the emerging stage in games: some parallax mapping implementation, other skin shaders...
 
Sadly, the responses to the lack of dynamic branching and the HDR-with-AA scores on Nvidia cards leave much to be desired, and I have to give this a big thumbs down. I am not even mentioning the 0.2 fps or so that my 3200+ at 2503 MHz gets in the CPU test.

Illogical decisions always result in a very, very fishy smell... why can't FM seem to understand this?
 
Ratchet said:
It isn't about improving image quality; this isn't a game or an eye-candy demo, it's supposed to be a benchmark. If one of your "tests" doesn't have a suitable place for POM, then make a test that does. Even if it was a simple feature test, one that doesn't factor into the final score (like the Perlin noise or shader particles tests) and stresses nothing but the parallax mapping feature, that would have been fine. Futuremark chose instead to completely ignore it, even though it's probably going to be an often-used feature in many upcoming games.
I disagree that it's not about improving image quality. Including a feature in a (game) test only for the sake of it, if it does not add anything to the image quality or overall experience, does not seem right to me.
But I agree that Futuremark should have included one or more feature tests.
Yet I think that it is all about time and resources. DX10 is around the corner and they are probably already working on 3DMarkX. So pushing back the schedule of 3DM06 in order to include more tests most certainly was not an option for Futuremark. I agree with some posters here who stated that 3DM06 would have looked different if the R520 had been out in time.
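(For anyone who hasn't seen it, the heart of a POM effect is just a short ray march through a height field in the pixel shader; here is a toy Python version of that loop, with the height field, step count, and scale all invented for illustration, and certainly not 3DMark's absent implementation.)

Code:
# Toy sketch of the parallax occlusion mapping inner loop: step along
# the view ray in texture space until it dips below the height field,
# then use that offset to fetch the colour map.

def height(u, v):
    # Stand-in height field: a bump in the middle of the tile.
    du, dv = u - 0.5, v - 0.5
    return max(0.0, 1.0 - 8.0 * (du * du + dv * dv))

def pom_offset(u, v, view_dx, view_dy, steps=16, depth_scale=0.05):
    """March from the surface down; return the parallax-shifted UV."""
    ray_h = 1.0                       # start at the top of the relief
    step_h = 1.0 / steps
    du = view_dx * depth_scale / steps
    dv = view_dy * depth_scale / steps
    for _ in range(steps):
        if height(u, v) >= ray_h:     # ray has hit the surface
            break
        u += du
        v += dv
        ray_h -= step_h
    return u, v

print(pom_offset(0.40, 0.40, view_dx=1.0, view_dy=0.0))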
 
IgnorancePersonified said:
A64 X2 @ 2100, 9800 Pro @ 370 core/340 mem
3DMark Score = 606
SM2.0 Score = 281
CPU Score = 1589

Only watched a bit of it but it was a slideshow.

Poor old arthritic 9800 was getting flogged.
My result:
3DMark Score: 996
SM2.0 Score: 480
CPU Score: 1006

GFX: RADEON 9800 XT @ 434MHz/370MHz
CPU: Intel Pentium M 1.60GHz @ 2.4GHz

Looks like my little PM is still kicking (considering that the DDR400 memory is underclocked!)
 
OK, about the CPU score.
I understand CPU testing is essential.
But I don't understand why CPU scores can influence the final 3DMark score to such a degree.
Can a dual-core CPU give you better shader performance?
Can a dual-core CPU give you HDR+AA?
Can a dual-core CPU give you SM3.0?

While a dual-core CPU IS influential in real game situations, it won't be as influential as the GPU is.
So just imagine someone with a fast CPU + 6600GT getting a 3DM06 score higher than someone with a slower CPU + 6800GT; I really can't, and won't, find a game that is CPU-limited in this way, now or in the future.
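(To see mechanically how that can happen, here is a toy weighted-harmonic-mean combination in the spirit of what benchmark totals do; the weights and scores are invented, not Futuremark's actual formula, which is in their whitepaper.)

Code:
# Toy sketch of how a weighted harmonic-mean style total lets the CPU
# score move the overall number. Weights and scores are invented.

def overall(graphics, cpu, w_gfx=0.85, w_cpu=0.15):
    """Weighted harmonic mean: the total tracks the weaker component."""
    return 1.0 / (w_gfx / graphics + w_cpu / cpu)

print(f"fast CPU + weaker GPU:   {overall(graphics=2000, cpu=3000):5.0f}")  # ~2105
print(f"slow CPU + stronger GPU: {overall(graphics=2600, cpu=900):5.0f}")   # ~2026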
 
Joe DeFuria said:
For example, one likely with less emphasis on shadowing and HDR (which the first test covered), and more emphasis on implementing other features that are at the emerging stage in games: some parallax mapping implementation, other skin shaders...
That's actually a bit vague. Can you go into a bit more detail? Like describing a complete game scenario.
 
N00b said:
That's actually a bit vague. Can you go into a bit more detail? Like describing a complete game scenario.

Of course it's vague...I'm not a 3D programmer or artist. ;)

How about a hand-to-hand combat scene where the emphasis is more on the quality of the characters (skinning technology), as opposed to the environment?

All I'm saying is, if you're going to make two tests...they should not be as similar as they are in terms of what you are showcasing. HDR and shadowing are great, and an advanced and computationally heavy version absolutely SHOULD be in at least one of the tests...but not both when there are other things to explore.
 
Joe DeFuria said:
Of course it's vague...I'm not a 3D programmer or artist. ;)

How about a hand-to-hand combat scene where the emphasis is more on the quality of the characters (skinning technology), as opposed to the environment?

All I'm saying is, if you're going to make two tests...they should not be as similar as they are in terms of what you are showcasing. HDR and shadowing are great, and an advanced and computationally heavy version absolutely SHOULD be in at least one of the tests...but not both when there are other things to explore.

I agree with you. The graphics tests seem too tedious.
 
Joe DeFuria said:
Of course it's vague...I'm not a 3D programmer or artist. ;)

How about a hand-to-hand combat scene where the emphasis is more on the quality of the characters (skinning technology), as opposed to the environment?

All I'm saying is, if you're going to make two tests...they should not be as similar as they are in terms of what you are showcasing. HDR and shadowing are great, and an advanced and computationally heavy version absolutely SHOULD be in at least one of the tests...but not both when there are other things to explore.
I think that shadowing is here to stay and the significance it has in 3DM06 is absolutely justified (even if the implementation hurts one party more than the other). Not so sure about HDR, though. But I agree with you on the advanced and computationally heavy part.
So here is what I would have liked: A game scenario where a knight fights a dragon in a castle ruin.
Featuring: Subsurface scattering on the knight's and dragon's skin.
Displacement mapping (castle bricks are great for this).
Procedural textures for things like wooden furniture and dragon skin.
A test like this would probably run at 5 FPS max on the fastest boards out, but it would be a nice look ahead into the future.
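(As a taste of the procedural-texture part, something like this tiny wood-grain function, evaluated per pixel in a shader, is all it takes; this Python version is purely illustrative.)

Code:
# Toy sketch of a procedural wood-grain texture of the kind such a test
# could evaluate per pixel: concentric rings plus a little hash noise.

import math

def hash_noise(x, y):
    """Cheap deterministic pseudo-noise in [0, 1)."""
    return (math.sin(x * 12.9898 + y * 78.233) * 43758.5453) % 1.0

def wood(u, v, rings=12.0, wobble=0.15):
    """Distance from the log's centre -> ring pattern, perturbed by noise."""
    r = math.hypot(u - 0.5, v - 0.5)
    r += wobble * hash_noise(u * 8.0, v * 8.0)
    return math.sin(r * rings * 2.0 * math.pi) * 0.5 + 0.5  # 0=dark, 1=light

print(round(wood(0.3, 0.7), 3))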
 
Joe DeFuria said:
Of course it's vague...I'm not a 3D programmer or artist. ;)

How about a hand-to-hand combat scene where the emphasis is more on the quality of the characters (skinning technology), as opposed to the environment?

All I'm saying is, if you're going to make two tests...they should not be as similar as they are in terms of what you are showcasing. HDR and shadowing are great, and an advanced and computationally heavy version absolutely SHOULD be in at least one of the tests...but not both when there are other things to explore.

Nice scenario. It could include a mixed skeletal/polygon-based collision system and clothing physics, so it would not be just a graphics test. (I wonder if 3DMark 2003's Troll's Lair had physics in it.)
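(The clothing physics part is cheaper than it sounds; most cloth sims boil down to a Verlet step like this toy one-particle Python sketch, all values illustrative, plus distance constraints between neighbouring particles.)

Code:
# Toy Verlet particle step, the usual building block for clothing
# physics. A full cloth sim runs this per particle, then re-tightens
# the distance constraints between neighbouring particles every frame.

GRAVITY = -9.81          # m/s^2
DT = 1.0 / 60.0          # one 60 Hz frame

def verlet_step(pos, prev):
    """Position-based integration: velocity is implicit in (pos - prev)."""
    return 2.0 * pos - prev + GRAVITY * DT * DT, pos

pos = prev = 1.0                 # particle starts at rest, 1 m up
for _ in range(60):              # simulate one second
    pos, prev = verlet_step(pos, prev)
print(round(pos, 2))             # ~ -3.8: it has fallen ~4.8 m, close
                                 # to ideal free fall in one second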
 
Nick[FM] said:
I have said that doing multi-vendor paths (as in totally different paths) in games is certainly OK, but in 3DMark06 (and all previous versions) we use only one path, with certain fallbacks to enable more hardware to be able to run the tests. We will not allow shader replacements or such. The driver optimization guidelines we set a couple of years ago are still in full effect.


You think we should create our scenes just to promote one single effect that has been used in one game (are there more released games (not engines/tech demos!) which use POM?)? :???: Certainly not! Personal liking is one thing I won't go into any more than this. If you don't like the artwork, that is ok, but forcing some POM into it wouldn't have changed it visually much at all. It comes down to the fact that not all effects are feasible to use just because they exist.


If that's your personal view on how things work and are, then so be it. I can't convince you to like what we have created if you simply don't like it.

Anyhow, thanks for the feedback!

Cheers,

Nick

So why can't we run all cards using the fallback method, so we can compare apples to apples? If the results are the same, then there is no issue.
 
rwolf said:
So why can't we run all cards using the fallback method, so we can compare apples to apples? If the results are the same, then there is no issue.

In the case of both PCF/Fetch4 and floating-point filtering, you can.
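For anyone wondering what that common fallback actually computes, here's a toy Python version of a 4-tap percentage-closer filter done by hand: the same four depth compares and bilinear weighting that hardware PCF (or Fetch4 plus a few shader instructions) gives you more cheaply. The shadow map and values are invented.

Code:
# Toy sketch of the shader-side PCF fallback: four point samples from
# the shadow map, a depth compare on each, then a bilinear blend of
# the four 0/1 results. Hardware PCF does all of this in one fetch;
# Fetch4 grabs the four depths in one fetch and you do the compares.

def shadow_depth(smap, x, y):
    h, w = len(smap), len(smap[0])
    return smap[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

def pcf_4tap(smap, u, v, receiver_depth):
    """Returns light visibility in [0, 1] at shadow-map coords (u, v)."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    taps = []
    for dy in (0, 1):
        for dx in (0, 1):
            # 1.0 if the receiver is closer to the light than the blocker
            lit = 1.0 if receiver_depth <= shadow_depth(smap, x0 + dx, y0 + dy) else 0.0
            taps.append(lit)
    top = taps[0] * (1 - fx) + taps[1] * fx
    bot = taps[2] * (1 - fx) + taps[3] * fx
    return top * (1 - fy) + bot * fy

smap = [[0.40, 0.40],
        [0.90, 0.90]]           # stored blocker depths
print(pcf_4tap(smap, 0.5, 0.5, receiver_depth=0.6))  # 0.5: penumbra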
 
Hanners said:
The Radeon X1800 doesn't support Fetch4. The Radeon X1600 and X1300 (and indeed R580) do.

EDIT: Bah, Dave beat me to it. :p

So you are telling me that I just spent a ton of money on a card that is missing basic features of the new generation???

Can you explain why this is acceptable?

How big an impact will not having Fetch4 have on me in the future?
 
Hanners said:
In the case of both PCF/Fetch4 and floating-point filtering, you can.
It's not the fetching or filtering that's causing the significant performance difference, though (as far as I can tell).

The problem is that it's taking the ATI cards 3x the bandwidth to create the shadow maps compared with the NVidia cards.

Whereas the 16-bit fallback that ATI is supposedly suggesting would work equivalently for both IHVs.

Hope I've got that right.
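Rough numbers, treating the surface formats as guesses rather than confirmed 3DMark06 internals:

Code:
# Back-of-the-envelope shadow-map bandwidth sketch. The surface formats
# are assumptions for illustration, not confirmed 3DMark06 internals.

RES = 2048 * 2048                # texels in the shadow map

# Path A: render straight into a readable D24 depth-stencil texture
dst_bytes = RES * 4              # one 32-bit depth-stencil write/texel

# Path B: no readable depth textures -> write an R32F colour target
#         AND a depth buffer for the z-test during rendering
r32f_bytes = RES * (4 + 4)       # colour write + depth write

print(f"DST path:  {dst_bytes / 2**20:6.1f} MiB per map")
print(f"R32F path: {r32f_bytes / 2**20:6.1f} MiB per map")
print(f"ratio:     {r32f_bytes / dst_bytes:.1f}x")
# These assumed formats only give 2x; getting to the 3x above would
# need wider formats or extra passes that I can't verify.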

Jawed
 
boltneck said:
So you are telling me that I just spent a ton of money on a card that is missing basic features of the new generation???

Can you explain why this is acceptable?

How big an impact will not having Fetch4 have on me in the future?

It means that your 3DMark score will be teh suxx0rs.


Lol, I couldn't resist.


It is all up to the individual developer what techniques and features to use. Both sides leave out certain features or abilities in their current-gen products. Should a particular dev make heavy use of a feature that your card maker didn't implement, then look for a performance hit, or a missing quality checkbox in the game. Largely, though, developers will avoid this and try to find common ground, so from a less technical standpoint I doubt it would affect you much in real games... but I am certain that there are several members here who can give long dissertations about the usefulness or uselessness of Fetch4, etc.
 