Futuremark: 3DMark06

The thing I don't understand with the new 3DMark06 is the ATI camp complaining that current nVidia cards don't get a score with AA on.
If I were to tease some nVidia user I would certainly say something like "Dude, your card isn't even good enough to get a score with AA on. It's missing an important feature! nVidia suxxxx!" ;-)
 
Yes, but we are chivalrous; we don't like to win without a good fight. :)
Nvidia cards can't pick up the glove we throw, but what's the point? The fun is in fighting. Competing, to be politically correct. :)
 
Is there any site that has 3DMark06 feature test scores for the 1800XT and the 7800 GTX (512)? Or a comparison with hardware shadow mapping disabled?
 
dizietsma said:
And why is this? Because ATI is not favoured. Whenever ATI is not favoured, we get the most long-winded threads where a court is summarily set up and the "injustice" to ATI is gone over in such minute detail that only paranoia can be thought to be in the heads of the adjudicators. In swoop the cardinals in their red (how appropriate) gowns: "Everybody expects the B3D inquisition!" Fear and surprise is our .....
It's not that at all. There are just so many weird discrepancies and choices, and they seem to favour Nvidia. Come on, a "forward looking test" with no SM3.0 branching, no parallax mapping, no AA/AF? Even places where Nvidia cards get no score rather than a bad score, where the exact opposite happens for ATI cards? Nvidia cards get an advantage from their specific non-DX features, but ATI cards don't?

The cards from ATI/Nvidia are simply not being treated with the same level of objectivity, and that is what is being queried. It's not tribalism, it's frustration at what should be a level playing field being so far tilted that 3DMark06 is pretty useless for comparing performance between the two main chip suppliers, even though it claims to be an even and honest test of capabilities. 3DMark06 is now just a marketing tool, rather than an objective testbench of a card's capabilities and performance.
 
Joe DeFuria said:
Yes, they can. Do you think they will?

I think there's more chance of them not even RUNNING AA tests, since a score isn't generated for nVidia cards (thus, no interesting and pretty bar charts to show), than of them calculating what the score "would have been".

Which, if I were a cynical type, I would say is exactly what Nvidia would prefer.

I don't know, maybe if they were harassed enough one or two might.
 
Bouncing Zabaglione Bros. said:
It's not that at all. There are just so many weird discrepancies and choices, and they seem to favour Nvidia. Come on, a "forward looking test" with no SM3.0 branching, no parallax mapping, no AA/AF? Even places where Nvidia cards get no score rather than a bad score, where the exact opposite happens for ATI cards? Nvidia cards get an advantage from their specific non-DX features, but ATI cards don't?

The cards from ATI/Nvidia are simply not being treated with the same level of objectivity, and that is what is being queried. It's not tribalism, it's frustration at what should be a level playing field being so far tilted that 3DMark06 is pretty useless for comparing performance between the two main chip suppliers, even though it claims to be an even and honest test of capabilities. 3DMark06 is now just a marketing tool, rather than an objective testbench of a card's capabilities and performance.
But dynamic branching has been confirmed, in this very thread.

Also, when dealing with mandatory features, the benchmark treats ATI and Nvidia equally. The 6200 doesn't support floating point blending, so the SM3.0 tests aren't run and the card receives a lower score, as with the ATI X800 class of hardware.

Finally, 3D Mark does take advantage of ATI "specific non-DX features", like fetch4.
 
Subtlesnake said:
But dynamic branching has been confirmed, in this very thread.
Yes, but it has not been confirmed HOW it's used, or how much impact it has,
and judging from the benchmarks it has no impact whatsoever.
If it were used the way future games will use it (a year from now?), Nvidia cards would crawl!
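
(To make concrete the kind of dynamic branch being argued over here: a minimal scalar C++ sketch of a shader-style early-out, with invented light/material names. It is not Futuremark's shader, just an illustration of the feature under discussion.)

```cpp
// Hypothetical per-pixel early-out, written as plain C++ for illustration.
// On hardware with fast dynamic branching, pixels outside the light's radius
// can skip the expensive lighting path entirely; on hardware with slow
// branching, both paths effectively get paid for.
struct Vec3 { float x, y, z; };

static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Placeholder for the costly part (shadow taps, specular, etc.).
static Vec3 expensiveLighting(const Vec3& p) { return { p.x, p.y, p.z }; }

Vec3 shadePixel(const Vec3& worldPos, const Vec3& lightPos, float lightRadius) {
    // The dynamic branch: cheap test first, heavy work only when it matters.
    if (dist2(worldPos, lightPos) > lightRadius * lightRadius)
        return { 0.0f, 0.0f, 0.0f };       // early out: unlit pixel
    return expensiveLighting(worldPos);     // full lighting path
}
```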

Subtlesnake said:
Also, when dealing with mandatory features, the benchmark treats ATI and Nvidia equally. The 6200 doesn't support floating point blending, so the SM3.0 tests aren't run and the card receives a lower score, as with the ATI X800 class of hardware.
Not really! If that were the case, FM would have created a ping-pong shader to emulate the lack of floating point blending on the 6200, and it would have created a PS to emulate the lack of AA when FP blending is enabled for cards that don't support it.
That would have been treating hardware equally: if the hardware doesn't support a feature, implement it with PS, for all features on all hardware.
OR they could have made ATI not run the PS3.0 tests because of no FP texture filtering (OK, that could never have been a valid decision, but it still makes my point!)

Read my previous post. Why penalize ATI for its decision not to support HW filtering, while not penalizing Nvidia for not supporting hardware AA or FP blending?
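
(For anyone unfamiliar with the term, a rough C++ sketch of what a "ping-pong" fallback means: accumulate HDR contributions by alternating two float buffers and doing the add in the shader instead of relying on hardware FP blending. The buffer names and pass structure are invented for the illustration, not taken from FM.)

```cpp
// Rough sketch of "ping-pong" blending: accumulate HDR contributions without
// hardware FP blending by alternating two float buffers and doing the add in
// the "shader" (here a plain loop).
#include <vector>
#include <utility>

using FloatBuffer = std::vector<float>;   // one float per pixel, for brevity

void accumulatePass(const FloatBuffer& prev, const FloatBuffer& contribution,
                    FloatBuffer& next) {
    for (size_t i = 0; i < prev.size(); ++i)
        next[i] = prev[i] + contribution[i];   // the "blend", done manually
}

FloatBuffer accumulateHDR(const std::vector<FloatBuffer>& passes, size_t pixels) {
    FloatBuffer a(pixels, 0.0f), b(pixels, 0.0f);   // the two ping-pong targets
    for (const auto& pass : passes) {
        accumulatePass(a, pass, b);   // read from A, write to B
        std::swap(a, b);              // the next pass reads what we just wrote
    }
    return a;
}
```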

Subtlesnake said:
Finally, 3D Mark does take advantage of ATI "specific non-DX features", like fetch4.

Sure, after 05 only supported the Nvidia one!
Bear in mind that Fetch4 is available on every single-channel texture format, while in the test it is only used in the PS2.0 tests... why not in the other ones? (Maybe because Nvidia's PCF wouldn't work?)

And what about 3Dc? What about rendering to a buffer array?
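
(As a reference for the Fetch4 point above: a scalar C++ sketch of what a Fetch4-style gather buys you for shadow filtering. One fetch returns the 2x2 footprint of a single-channel map, and the depth compares plus weighting then happen in the shader. The texture layout and function names are invented for the example.)

```cpp
// Illustration only: Fetch4-style 2x2 gather from a single-channel depth map,
// followed by a manual percentage-closer filter (PCF) in the shader.
#include <array>
#include <vector>

struct DepthMap {
    int width = 0, height = 0;
    std::vector<float> texels;                      // single channel
    float at(int x, int y) const { return texels[y * width + x]; }
};

// What a Fetch4/gather instruction conceptually returns: the 2x2 footprint
// around a sample position, in one fetch instead of four.
// (Assumes the footprint is in range; no border handling, for brevity.)
std::array<float, 4> fetch4(const DepthMap& map, int x, int y) {
    return { map.at(x, y),     map.at(x + 1, y),
             map.at(x, y + 1), map.at(x + 1, y + 1) };
}

// Manual PCF: compare each gathered depth against the receiver depth and
// average the pass/fail results to get a soft shadow term in [0, 1].
float pcfShadow(const DepthMap& map, int x, int y, float receiverDepth) {
    auto taps = fetch4(map, x, y);
    float lit = 0.0f;
    for (float d : taps)
        lit += (receiverDepth <= d) ? 1.0f : 0.0f;
    return lit / 4.0f;
}
```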
 
I think a good question would be whether the dynamic branching is in there just for show. It's likely the R520 would be a good deal faster than the G70 if it were really doing something.

HDR should be an option (like AA/AF) and not on by default -- all cards could then be tested with AA. AA is pretty well a given for good IQ on high end cards. HDR can hardly be seen as forward looking when one has to give up AA to get it (on most cards out there that currently support it).

I'm wondering why there is no SM2.0b support -- this really dismisses the X800 series cards compared to the GeForce 6 series. I read on one site that shaders were to be limited to 512 instructions or less -- so why not have an equivalent SM2.0b profile? We might get some idea of whether SM3.0 and dynamic branching help performance or are just there for show. Supporting SM2.0b would also mean one doesn't have to pull some number out of a rabbit's hat for the SM2.0b X800 cards' overall score -- i.e. multiply the SM2.0 score by 0.75 as a pull-down to lower the overall score if SM3.0 isn't supported. One wonders how that pull-down number of 0.75 was arrived at.
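
(Taking the 0.75 pull-down described above at face value, a toy C++ calculation shows how sensitive the final number is to that single constant. The 50/50 combine and the sub-scores below are stand-ins for illustration, not Futuremark's published formula.)

```cpp
// Toy model of a "pull-down" overall score, based only on the description in
// the post above: SM3.0 cards contribute a measured HDR/SM3.0 sub-score, while
// SM2.0b/SM2.0 cards instead have their SM2.0 sub-score scaled by 0.75.
#include <cstdio>

double graphicsScore(double sm2, double sm3, bool supportsSM3) {
    if (supportsSM3)
        return 0.5 * (sm2 + sm3);     // both sub-scores measured
    return 0.75 * sm2;                // the fixed pull-down for SM2.0-only parts
}

int main() {
    // Hypothetical sub-scores: an SM2.0b card that is actually faster in the
    // SM2.0 tests still ends up below a slower SM3.0 card once the constant
    // 0.75 replaces a measured SM3.0 result.
    std::printf("SM2.0b card : %.0f\n", graphicsScore(2400.0, 0.0, false));   // 1800
    std::printf("SM3.0  card : %.0f\n", graphicsScore(2000.0, 1800.0, true)); // 1900
    return 0;
}
```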

Score-wise it looks like in 3DMark06 a X1600XT will beat a X800 XL. Yet in one of Xbit's latest roundups the XL beat the X1600XT in every game benchmark -- 18 games total, and by a quite large margin (typically 50% faster) in most of the benches to boot. SM3.0 isn't going to do much performance-wise for the X1600XT unless a lot of dynamic branching is used. Similar thing with the X1600XT vs a 6800GS. The XT looks like it matches the GS in 3DMark06 but is quite a bit slower in almost all the games out there.

At least in 3DMark05 there was some semblance of reality in the scores, and one could expect DX9 performance reasonably close to what the scores indicated on various cards -- but in 06 there is no semblance of reality in the respective performance of many cards. And since 3DMark06 is so far off the mark in so many cases -- it doesn't seem very useful as a benchmark.
 
NocturnDragon said:
Yes, but it has not been confirmed HOW it's used, or how much impact it has,
and judging from the benchmarks it has no impact whatsoever.
If it were used the way future games will use it (a year from now?), Nvidia cards would crawl!
Yeah. Right. :rolleyes:
I seriously doubt that a year from now there will be a single game where current nVidia cards will crawl and ATI ones will fly.
 
Blastman said:
Score-wise it looks like in 3DMark06 a X1600XT will beat a X800 XL. Yet in one of Xbit's latest roundups the XL beat the X1600XT in every game benchmark -- 18 games total, and by a quite large margin (typically 50% faster) in most of the benches to boot. SM3.0 isn't going to do much performance-wise for the X1600XT unless a lot of dynamic branching is used. Similar thing with the X1600XT vs a 6800GS. The XT looks like it matches the GS in 3DMark06 but is quite a bit slower in almost all the games out there.
That's probably due to the 3-to-1 pixel shader to texture ratio of the X1600, which is not really fully used yet in current games.
We might have more information on that when the X1900 benchmarks are shown.
 
Bouncing Zabaglione Bros. said:
It's not that at all. There are just so many weird discrepancies and choices, and they seem to favour Nvidia. Come on, a "forward looking test" with no SM3.0 branching, no parallax mapping, no AA/AF? Even places where Nvidia cards get no score rather than a bad score, where the exact opposite happens for ATI cards? Nvidia cards get an advantage from their specific non-DX features, but ATI cards don't?

The cards from ATI/Nvidia are simply not being treated with the same level of objectivity, and that is what is being queried. It's not tribalism, it's frustration at what should be a level playing field being so far tilted that 3DMark06 is pretty useless for comparing performance between the two main chip suppliers, even though it claims to be an even and honest test of capabilities. 3DMark06 is now just a marketing tool, rather than an objective testbench of a card's capabilities and performance.

You're getting too caught up in the fact that the GTX beats the X1800XT here. Just look at the frame rates of the tests! These tests ARE forward looking. They are NOT meant for this generation of video cards. The fact is that ALL current cards run these tests badly.

A GTX512 can't even get 20fps average on game 3 at the standard res with no AA and no AF. And on the few cards that currently do support HDR+FSAA, the frame rates are so low it makes little sense to run the HDR tests on these cards. The XT runs the test at under 15fps and the XL at under 10fps.

These tests were designed to target upcoming hardware, and both ATI and NVIDIA had input on what went into them.

And seriously - do you really think that Nvidia needed to pay off Futuremark to sell more GTX cards? Those cards have no problem selling themselves.
 
N00b said:
Yeah. Right. :rolleyes:
I seriously doubt that a year from now there will be a single game where current nVidia cards will crawl and ATI ones will fly.
I probably used the wrong word; I meant crawling compared to the ATI ones, not in an absolute sense.
Anyway, I bet that the various 6800s won't be that fast in high-end games sold a year or two from now!
But we are still talking about Futuremark.
And you have to agree with me that dynamic branching will be used a lot in the future. And I'm pretty sure the next Nvidia card (not the upcoming refresh) will have no problem with that!

If the test used DB in a heavy way (which I'm pretty sure will happen in the future), Nvidia cards would really be slower than the ATI ones.

Here is a link to remind you of the speed difference.
http://www.xbitlabs.com/images/video/radeon-x1000/x1800/Xbitmark_x18.gif
 
By the time branch-intensive shaders become a de facto standard, certainly one, but possibly two, other 3DMarks will be released. Even though I don't favour either side, I also don't encourage being selectively blind.

3DMark was never meant to be a showcase for far-future tech, more for things coming up rather soonish, within a year's timeframe. This "Dynamic Branching will rock da world" yadda is very reminiscent of the "DX9.1 for FX goodness" and "Screw SM3.0, 3Dc iz da shiznit". Not in the X1800 XT's lifetime, or the GTX's. It's a very important feature, true, but it's not something you'd want to rely heavily on if your game/engine was coming out in the following 18 months, IMO. And as for flying/crawling... meh, I doubt that will happen... the FX sucked ass badly, and it only crawled near the end of its lifecycle. Devs aren't IHV demo-makers; they target a large audience.
 
inefficient said:
You're getting too caught up in the fact that the GTX beats the X1800XT here. Just look at the frame rates of the tests! These tests ARE forward looking. They are NOT meant for this generation of video cards. The fact is that ALL current cards run these tests badly.

Nope, my points will still be true in a week when the X1900XT arrives, and in a few months when G71 arrives. Futuremark are not treating the companies equally, and this stands out because some of the decisions they made on what to support and what not to support look pretty bizarre in light of what we'll be seeing in the next year on our PCs.
 
Morgoth the Dark Enemy said:
By the time branch-intensive shaders become a de facto standard, certainly one, but possibly two, other 3DMarks will be released. Even though I don't favour either side, I also don't encourage being selectively blind.

3DMark was never meant to be a showcase for far-future tech, more for things coming up rather soonish, within a year's timeframe. This "Dynamic Branching will rock da world" yadda is very reminiscent of the "DX9.1 for FX goodness" and "Screw SM3.0, 3Dc iz da shiznit". Not in the X1800 XT's lifetime, or the GTX's. It's a very important feature, true, but it's not something you'd want to rely heavily on if your game/engine was coming out in the following 18 months, IMO. And as for flying/crawling... meh, I doubt that will happen... the FX sucked ass badly, and it only crawled near the end of its lifecycle. Devs aren't IHV demo-makers; they target a large audience.

Not meaning to go off track, but the FX sucked early on in many titles and used all manner of low-quality hacks to appear competitive. And then by the time of the next-gen cards (NV40), everyone happily abandoned the FX and relegated it to DX8 at best.
 
NocturnDragon said:
I probably used the wrong word; I meant crawling compared to the ATI ones, not in an absolute sense.
Anyway, I bet that the various 6800s won't be that fast in high-end games sold a year or two from now!
But we are still talking about Futuremark.
And you have to agree with me that dynamic branching will be used a lot in the future. And I'm pretty sure the next Nvidia card (not the upcoming refresh) will have no problem with that!

If the test used DB in a heavy way (which I'm pretty sure will happen in the future), Nvidia cards would really be slower than the ATI ones.

Here is a link to remind you of the speed difference.
http://www.xbitlabs.com/images/video/radeon-x1000/x1800/Xbitmark_x18.gif
I think I understood perfectly well what you meant, and I still disagree. Thanks for posting the link to the Xbitlabs charts; it will help me make my point clear.

If you look at the chart, you will notice that nVidia (7800 GTX) is ahead of ATI (X1800 XT) in 11 of the 17 shader tests. If you look at the branching tests, you will notice that these tests are not very realistic, meaning you will probably never ever see a shader like that in a game. The use of branching in these tests is artificially high, so branching will have a heavy impact on the score. In a shader used in a real game, even a year from now, you will not have as much branching. Even in two years not every shader in every game will use heavy dynamic branching, because some/most shaders will not require it. So while the use of dynamic branching in future games will give ATI a boost, it will be a modest one, and current ATI X1x00 cards will not suddenly be twice as fast as current nVidia cards.

And, last but not least, DX10 will be here soon, so there will surely be a 3DMark07, and it will probably arrive in early 2007. The scope of 3DMark06, then, is to give a forecast of games that will come out in the next year. I'm absolutely convinced that heavy branching will not be as widely used in upcoming games' shaders as you suggest.
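
(A back-of-the-envelope way to see why the boost shrinks outside synthetic tests: model a shader where only part of the per-pixel cost sits behind the branch and only some pixels can take the early-out. The C++ below uses illustrative numbers, not measurements.)

```cpp
// Illustrative only: estimated speedup from an early-out branch, given the
// share of shader cost behind the branch and the fraction of pixels that can
// actually skip it. Synthetic branching tests push both numbers much higher
// than a typical game shader would.
#include <cstdio>

double branchSpeedup(double costBehindBranch, double pixelsSkipping) {
    // Remaining work per pixel, relative to the no-branching case.
    double remaining = 1.0 - costBehindBranch * pixelsSkipping;
    return 1.0 / remaining;
}

int main() {
    // A synthetic branching test: almost all cost is skippable, most pixels skip.
    std::printf("synthetic-style: %.2fx\n", branchSpeedup(0.9, 0.7));  // ~2.70x
    // A more game-like mix: only part of the shader is skippable, fewer pixels skip.
    std::printf("game-style     : %.2fx\n", branchSpeedup(0.4, 0.3));  // ~1.14x
    return 0;
}
```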
 
N00b said:
Yeah. Right. :rolleyes:
I seriously doubt that a year from now there will be a single game where current nVidia cards will crawl and ATI ones will fly.

I agree here; you can't dismiss dynamic branching's share in 3DMark06 just because Nvidia cards don't suck at running 3DMark06. :)

But if there won't be games that use DB, that's not because developers don't want to use it for no reason. There must be a good reason not to use such a technique, with so much potential performance- and quality-wise, and that reason might be the difficulty of implementation or very small gains for the effort. And of course a good reason might be that a large part of the cards on the market can't use it well. So your statement that there won't be games which crawl on Nvidia products might be true not because they are so good, but simply because no one will develop games which would crawl on 60% (dunno, just a number) of the cards out there.
 
NocturnDragon said:
Yes, but it has not been confirmed HOW it's used, or how much impact it has
That's correct, but you can't use that logic to claim there's no impact. In synthetic pixel shader tests the X1800 is significantly slower than the 7800 GTX; so far that difference hasn't translated into real-world gaming performance, but that doesn't mean the same is true for 3DMark06. Maybe the X1800 is being significantly helped by the dynamic branching.

Not really! If that were the case, FM would have created a ping-pong shader to emulate the lack of floating point blending on the 6200
Well, floating point blending is a requirement. I can understand that this seems somewhat arbitrary, but if floating point blending is a real-world requirement too, then their decision is sensible.

and it would have created a PS to emulate the lack of AA when FP blending is enabled for cards that don't support it.
I wasn't aware you could fully simulate AA using shaders.

Read my previous post. Why penalize ATI for its decision not to support HW filtering, while not penalizing Nvidia for not supporting hardware AA or FP blending?
Nvidia is being penalised, because Futuremark is saying "your card isn't compatible with our SM3.0 tests". With the hardware FP filtering situation, on the other hand, they're giving ATI a very efficient fallback.

Now it's presumed that developers will use the fallback, so the test will be an accurate reflection of the performance difference between ATI and Nvidia hardware.
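
(Roughly what that fallback looks like in practice: instead of one hardware-filtered floating point fetch, the shader takes four point samples and blends them itself. The C++ sketch below uses invented names and a single-channel texture, just to show the idea.)

```cpp
// Rough sketch of shader-side bilinear filtering of a floating point texture,
// the kind of fallback used when hardware FP filtering is missing: four point
// samples plus manual weighting.
#include <cmath>
#include <vector>

struct FpTexture {
    int width = 0, height = 0;
    std::vector<float> texels;                       // one float per texel
    float fetch(int x, int y) const { return texels[y * width + x]; }
};

// Assumes the sample footprint stays in range; no border handling, for brevity.
float bilinearInShader(const FpTexture& tex, float u, float v) {
    // Convert normalized coordinates to texel space.
    float x = u * tex.width  - 0.5f;
    float y = v * tex.height - 0.5f;
    int x0 = static_cast<int>(std::floor(x)), y0 = static_cast<int>(std::floor(y));
    float fx = x - x0, fy = y - y0;

    // Four point samples (what the hardware filter would have read for us).
    float t00 = tex.fetch(x0, y0),     t10 = tex.fetch(x0 + 1, y0);
    float t01 = tex.fetch(x0, y0 + 1), t11 = tex.fetch(x0 + 1, y0 + 1);

    // Manual lerps replace the fixed-function filtering.
    float top    = t00 + fx * (t10 - t00);
    float bottom = t01 + fx * (t11 - t01);
    return top + fy * (bottom - top);
}
```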

Sure, after 05 only supported the Nvidia one!
Fetch4 is only present in the X1000 series.

Bear in mind that Fetch4 is available on every single-channel texture format, while in the test it is only used in the PS2.0 tests... why not in the other ones? (Maybe because Nvidia's PCF wouldn't work?)

And what about 3Dc? What about rendering to a buffer array?
According to Nick, neither would work:

"Due to the sampling method in the HDR/SM3.0 graphics tests, we weren't able to use neither FETCH4 or PCF in those tests. It simply wouldn't have worked due to the rotated grid we use."

on 3dc:

"3Dc would have increased the package by 2x"
 
Subtlesnake said:
"3Dc would have increased the package by 2x"

Not only do I doubt it would increase the package by 2x (sure, it would be bigger), but who cares how big the package is anyway? A test should be constrained by the features, not the size! This was a very, very weak excuse.
 