Futuremark: 3DMark06

Unknown Soldier said:
1) DST16 could've been used in the benchmark but wasn't (developer's (or IHV's) preference, I suppose)
We want to ensure that the shadows are rendered correctly, no matter how much we tune or tweak the cameras in the scenes. We chose 24 bit already back in 3DMark05, and we continued to do so in 3DMark06. We don't want to be in a situation where we would possibly need to limit the scene due to the shadows not being properly rendered.
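To put the 16- vs 24-bit choice in perspective, here is a minimal sketch (my own illustration, not anything from the benchmark) of the smallest depth step a normalized N-bit shadow map can resolve. The ~256x finer quantization of a 24-bit map is why a scene/camera setup tuned at 24 bits might break out in shadow acne at 16 bits:

```python
# Smallest depth increment a normalized N-bit depth buffer can represent.
def depth_step(bits):
    return 1.0 / (2**bits - 1)

step16 = depth_step(16)  # roughly 1.5e-5
step24 = depth_step(24)  # roughly 6.0e-8

# A 24-bit map resolves depth about 256x more finely, so a fixed depth
# bias that hides acne at 24 bits can fail at 16 bits once the camera
# or light placement changes.
print(step16 / step24)
```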

Unknown Soldier said:
2) DST24 is used, but cards that don't support the feature are forced to use F32 (which might cost more, but that's an unknown factor atm).
We have multivendor hardware shadow mapping support, and hardware that does not have it uses R32F (just as in 3DMark05).

Unknown Soldier said:
3) Fetch4 is used in SM2.0, which doesn't support Fetch4, but an algorithm of it is used.
FETCH4 is supported in the SM2.0 graphics tests for any hardware with DF24 and FETCH4 support.

Unknown Soldier said:
4) SM3.0 supports Fetch4, yet it isn't used in the SM3.0 benchmark.
We weren't able to use either FETCH4 or PCF in the HDR/SM3.0 graphics tests due to the way we smooth the shadows.

More? ;)
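To make the PCF/FETCH4 distinction in this exchange concrete, here is a CPU-side sketch of a 2x2 percentage-closer filter (the footprint and names are illustrative only, not 3DMark's actual kernel). Hardware PCF bakes exactly this compare-and-blend into the sampler; FETCH4 instead returns the four raw depths in one fetch and leaves the compare and filter to the shader, which is what makes a custom smoothing kernel possible on the R32F path:

```python
def pcf_2x2(shadow_map, u, v, receiver_depth):
    """Bilinearly weighted 2x2 percentage-closer filter.
    shadow_map: 2D list of stored depths; (u, v): continuous texel coords.
    """
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    # FETCH4 would deliver these four depths in a single fetch;
    # without it, each one costs a separate texture lookup.
    d00 = shadow_map[y0][x0]
    d10 = shadow_map[y0][x0 + 1]
    d01 = shadow_map[y0 + 1][x0]
    d11 = shadow_map[y0 + 1][x0 + 1]
    # Per-texel depth compare (1.0 = lit, 0.0 = shadowed) ...
    c00 = 1.0 if receiver_depth <= d00 else 0.0
    c10 = 1.0 if receiver_depth <= d10 else 0.0
    c01 = 1.0 if receiver_depth <= d01 else 0.0
    c11 = 1.0 if receiver_depth <= d11 else 0.0
    # ... then a bilinear blend of the compare results, not the depths.
    top = c00 * (1 - fx) + c10 * fx
    bot = c01 * (1 - fx) + c11 * fx
    return top * (1 - fy) + bot * fy
```

Blending the comparison results (rather than the depths) is what softens shadow edges; a shader that wants a different smoothing kernel has to do this arithmetic itself, which is presumably why neither fixed-function path fit the HDR/SM3.0 tests.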
 
Nick[FM] said:
Sounds strange. Which drivers do you have installed? 3DMark doesn't actually "enable" SLI or "disable" it either. It is up to the drivers to determine if SLI should be on or off.

Well, perhaps it is working. I disabled SLI, and the score dropped about 1500 points. After re-checking the scores I was comparing against, they were from X2 A64s, which I don't have. I guess that's where the extra 1000 points came from. I didn't know the CPU impacted the score so much. So I guess it's case closed. Thanks for replying though.
 
JasonCross said:
I still can't reconcile why a GeForce 7800 gets no score with AA enabled. Enabling AA puts it in the same boat as an X800 or GeForce 6200 - able to complete only the CPU and SM2.0 tests. I think it should use the formula for those cards in that situation.

Oh my. I just realized what you guys are pointing at here. Of course that means that non-AA scores will be used in comparative reviews while AA ones can't be. No wonder there are large green smiles today, given that the X1800 shines best when AA is used.
 
Kombatant said:
Hmm... if you think that
a) I get 4047 with my 1800xt, stock speeds
b) I get 1040 from my 3200+ o/c at 2.7GHz
c) see the FX60 score there,

that'd mean that, if I had an FX60, I'd get a score of around 4940 with my current card, right? So isn't 5348 kinda low for a 1900? Unless it's not a high-end model, of course.

Dual core seems to help substantially. Using the results I posted before, a dual core at 2.0GHz gets a CPU score of about 1400. Quite a bit more than yours at 2.7. It may not be a big deal, but when you overclock your processor by 35% and still score lower (with a small margin of error due to my card's settings), that's a rather large boost from dual core. There will be some real problems with these CPU scores and with trying to judge performance off your total score the way people are used to doing. One can dream that this will be a performance indication of future games, though, with the effect of dual core :)

I'm going to disable a core and see what happens to the total score as well as the CPU score. I would like to see if it's the driver SMP or the 3DMark program making such a substantial difference.


Personal request: can someone with a GTX force 16x HQ AF on and post scores in 2.0/3.0 and total?
 
geo said:
Oh my. I just realized what you guys are pointing at here. Of course that means that non-AA scores will be used in comparative reviews while AA ones can't be. No wonder there are large green smiles today, given that the X1800 shines best when AA is used.
Yeah, there are a few questions being raised that I'll be interested in watching the answers develop to, too....right now I'm still just trying to keep my head above water following along with the discussion. :oops:

Thanks for taking the time/abuse to answer Nick, it's most appreciated. I imagine you're having a rather busy day today... :LOL:
 
digitalwanderer said:
Thanks for taking the time/abuse to answer Nick, it's most appreciated. I imagine you're having a rather busy day today... :LOL:
Of course I will spend time answering important questions. It seems that there is too much false (or distorted) information going around regarding the hardware shadow mapping etc. Still, I'm prepared to follow up the discussions and make sure that everyone knows as much as possible about what is being supported and what is not. Busy day? Yep! :D
 
Ye .. a week's saving grace I suppose. ;)

Guess the flak cannons will be out in full force then. I'd hate to be you.

Thanks for the answers. I like what you've done with your product.

US <-- Off to bed
 
I haven't d/l'ed this thing yet, but from the comments:

3DM05 started on That Road I didn't like. This one appears to have gone further down That Road.
 
Reverend said:
I haven't d/l'ed this thing yet, but from the comments:

3DM05 started on That Road I didn't like. This one appears to have gone further down That Road.
If you're one thing, Rev.....it's predictable.

....and in your case, it's not a good thing.
 
Kombatant said:
Hmm... if you think that
a) I get 4047 with my 1800xt, stock speeds
b) I get 1040 from my 3200+ o/c at 2.7GHz
c) see the FX60 score there,

that'd mean that, if I had an FX60, I'd get a score of around 4940 with my current card, right? So isn't 5348 kinda low for a 1900? Unless it's not a high-end model, of course.


Dual core seems to help substantially. Using the results I posted before, a dual core at 2.0GHz gets a CPU score of about 1400. Quite a bit more than yours at 2.7. It may not be a big deal, but when you overclock your processor by 35% and still score lower (with a small margin of error due to my card's settings), that's a rather large boost from dual core. There will be some real problems with these CPU scores and with trying to judge performance off your total score the way people are used to doing. One can dream that this will be a performance indication of future games, though, with the effect of dual core :)

I'm going to disable a core and see what happens to the total score as well as the CPU score. I would like to see if it's the driver SMP or the 3DMark program making such a substantial difference.


Personal request: can someone with a GTX force 16x HQ AF on and post scores in 2.0/3.0 and total?

Follow-up: it seems the 3DMark06 CPU SMP tests are doing a whole lot. 3800+ with a disabled core:

1737 SM2.0 score
1797 HDR/SM3.0 score
743 CPU score

3660 total

vs

1747 SM2.0 score
1791 HDR/SM3.0 score
1404 CPU score

4257 total
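The two totals above line up with the weighting published in Futuremark's 3DMark06 whitepaper. A sketch of that calculation follows; I'm reconstructing the constants from memory, so treat them as an assumption, but they reproduce both posted totals to within a point:

```python
def total_score(sm2, sm3, cpu):
    # Graphics score is the plain average of the two graphics tests; the
    # total is a weighted harmonic-style mean that folds in the CPU score
    # with a 1.7 : 0.3 weighting (constants assumed from the whitepaper).
    gs = (sm2 + sm3) / 2
    return 2.5 * 1.0 / ((1.7 / gs + 0.3 / cpu) / 2)

print(round(total_score(1737, 1797, 743)))   # within a point of the posted 3660
print(round(total_score(1747, 1791, 1404)))  # within a point of the posted 4257
```

Note how the CPU term enters with weight 0.3 out of 2.0: roughly doubling the CPU score (743 to 1404) lifts the total by almost 600 points even though the graphics scores barely move, which is exactly the sensitivity being complained about here.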
 
I'm not sure why FM included the CPU mark in the final 3DMark score. A computer with a 6600GT coupled with a dual-core CPU can score higher than a 6800GS with a single-core CPU.
Does that correspond to actual gaming experience?
 
Nick[FM] said:
FETCH4 is supported in the SM2.0 graphics tests for any hardware with DF24 and FETCH4 support.

More? ;)

Maybe you can explain to me why DF24 is required in order to use fetch4 and DFC? Also, why did you decide against HDR+AA on the 7x00 series (since it is not supported) but for DST24 even though it is also not supported on the X1800? This is what I mean by a double standard.
 
Nick[FM] said:
We want to ensure that the shadows are rendered correctly, no matter how much we tune or tweak the cameras in the scenes. We chose 24 bit already back in 3DMark05, and we continued to do so in 3DMark06. We don't want to be in a situation where we would possibly need to limit the scene due to the shadows not being properly rendered.

If I compare a 3DMark05 ATI/NV image, I can see that the NV shadows (due to DST) are more aliased than ATI's shadows. I have never seen a reference image, so I don't know which card is "correct", but I assume ATI (its shadows have softer edges).
So your statement about 'ensuring correct shadow rendering' surprises me a bit, because it seems you didn't care about that in 3DM05.

How it is in the new 3DM06, I don't know.
 
Unknown Soldier said:
I think it's pretty simple.

1) DST16 could've been used in the benchmark but wasn't (developer's (or IHV's) preference, I suppose)
2) DST24 is used, but cards that don't support the feature are forced to use F32 (which might cost more, but that's an unknown factor atm).
3) Fetch4 is used in SM2.0, which doesn't support Fetch4, but an algorithm of it is used.
4) SM3.0 supports Fetch4, yet it isn't used in the SM3.0 benchmark.
5) AA doesn't get scored with a certain IHV's cards.

Anything else to add?

US

Thanks :) But I would like to know: if you were a game developer targeting today's hardware and trying to implement features similar to those in 3DMark06, what decisions would you have made differently concerning the above points? Is there anything that could have been done to improve ATI's performance and not just decrease Nvidia's performance?
 
ANova said:
Maybe you can explain to me why DF24 is required in order to use fetch4 and DFC? Also, why did you decide against HDR+AA on the 7x00 series (since it is not supported) but for DST24 even though it is also not supported on the X1800? This is what I mean by a double standard.

I've noticed he is answering just about every question except that one.
 
Richteralan said:
I'm not sure why FM included the CPU mark in the final 3DMark score. A computer with a 6600GT coupled with a dual-core CPU can score higher than a 6800GS with a single-core CPU.
Does that correspond to actual gaming experience?

I don't really understand it either, as your total score now takes your CPU into account in 3DMark06; you have to look at the graphics scores specifically if you want just your card's output. As you can see, keeping both cores enabled or disabled does basically nothing to the graphics bench scores, but the difference in total score is pretty drastic. For instance, look at the screenshots of the FX60 with the X1900 card, look at its results in SM2 and SM3, and compare them to the results of the OC'd GTX 512 with a 3200+: you'll notice the GTX 512 is outputting significantly higher scores in the graphics tests, but because of the CPU tests, the X1900 actually receives a very high total score relative to the difference in the graphics tests between the cards.

I'm not too sure I'm happy with the way the CPU tests were implemented either. I actually would have preferred something like modified 3DMark01 tests using the graphics card to show CPU power, with CPU benchmarks like the current ones kept as a separate score and benchmark. I like where they were going, but not the confusion this might cause. The total score is now basically worthless for showing a computer's graphics horsepower, in my opinion.
 