Futuremark: 3DMark06

radeonic2 said:
As for CPU scores counting... I think that is another bad decision.
If you want CPUs to count, why not have a game test that's more CPU-limited, say one doing lots of physics calculations, since they are partnered with Ageia.
And what would the difference be if we had physics & AI in the graphics tests, eliminating pure GPU benchmarking? :???: We now have 4 graphics tests and 2 CPU tests (which do use lots of physics, AI etc.), and we are able to output a 3DMark score based on your system's gaming performance, plus sub scores for pure graphics & CPU benchmarking. I'm sorry, but I don't see the logic in your post, since that's what we did. The only difference is that we separated those two aspects (CPU & GPU) in order for people to do more in-depth benchmarking.

Cheers,

Nick
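(For readers wondering how the CPU sub score can move the overall number at all: sub scores are typically folded into a single figure with a weighted mean, so a weak component drags the total down. The sketch below is purely illustrative; the function name, weights and example values are assumptions made for this post, not Futuremark's actual formula.)

[code]
#include <cstdio>

// Weighted harmonic mean of the three sub scores. Weights are placeholders
// for this example, NOT Futuremark's published coefficients.
double overallScore(double sm2, double hdrSm3, double cpu)
{
    const double wSm2 = 1.0, wHdr = 1.0, wCpu = 1.0;   // assumed weights
    return (wSm2 + wHdr + wCpu) /
           (wSm2 / sm2 + wHdr / hdrSm3 + wCpu / cpu);
}

int main()
{
    // A weak CPU score pulls the total down even with strong graphics scores,
    // which is exactly the behaviour being debated in this thread.
    std::printf("strong CPU: %.0f\n", overallScore(2500.0, 2400.0, 2600.0));
    std::printf("weak CPU:   %.0f\n", overallScore(2500.0, 2400.0, 800.0));
    return 0;
}
[/code]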
 
rwolf said:
I think the solution is for FM to create a patch with a setting that makes the NV card use the same pixel shader workaround for 24-bit depth stencil textures. This way we could see if 3DMark06 is a benchmark or an NV demo. In fact why not make a patch that makes all cards use one code path through a setting.
This confuses me a bit... A workaround for 24-bit depth stencil textures? Simply disable hardware shadow mapping in the benchmark settings, and no card uses hardware shadow mapping. It is already in there and works for all cards. We also have an option, "Force software FP filtering", if you'd like to disable the hardware FP16 filtering support. Everything is in there; it is just a matter of taking a look at the benchmark settings. :smile:

Cheers,

Nick
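(For context, this is roughly how a D3D9 application can check whether hardware FP16 texture filtering is available before deciding to filter in the shader instead, which is what an option like "Force software FP filtering" would toggle. The helper name is made up for this sketch.)

[code]
#include <d3d9.h>

// Returns true if the adapter can linearly filter FP16 (A16B16G16R16F)
// textures in hardware; if not, the application filters in the pixel shader.
// supportsFp16Filtering is an invented name for this example.
bool supportsFp16Filtering(IDirect3D9* d3d, UINT adapter, D3DFORMAT displayFormat)
{
    HRESULT hr = d3d->CheckDeviceFormat(adapter,
                                        D3DDEVTYPE_HAL,
                                        displayFormat,
                                        D3DUSAGE_QUERY_FILTER,
                                        D3DRTYPE_TEXTURE,
                                        D3DFMT_A16B16G16R16F);
    return SUCCEEDED(hr);
}
[/code]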
 
Ratchet said:
The inclusion of the CPU score into the final 3DMark score takes the "3D" out of 3DMark though. That was a terrible decision in my opinion. For that and other reasons, I don't think I'll be using it in my reviews. In fact, I think I've just decided to drop all versions of 3DMark from my benchmark suite.
Sad to hear that you will go that route. I don't really see what's bad about adding the CPU score to the final 3DMark score, since the 3DMark score should reflect future gaming performance. Why not use the sub scores for pure CPU and GPU reviews/benchmarking? Do you think they are useless, or is there some other reason they won't be useful to you?

Cheers,

Nick
 
mongoled said:
Yup, this is sticking out like a sore thumb; I'm still waiting to see an answer to this question. I am way out of my depth when reading a lot of the technical stuff discussed over here at B3D, but the point JasonCross makes is very clear to me. So could a 3DMark representative give an explanation to a non-tech-savvy person such as myself as to why this is the case (the 7800 with AA gives no score *confused*), as it obviously skews the results when sites use your benchmark to make comparisons?
It doesn't get a 3DMark score, but it does get all available sub scores which are as useful for comparison as the final 3DMark score.

Cheers,

Nick
 
trinibwoy said:
Thanks :) But I would like to know: if you were a game developer targeting today's hardware and trying to implement features similar to those in 3DMark06, what decisions would you have made differently concerning the above points? Is there anything that could have been done to improve ATi's performance and not just decrease Nvidia's performance?

It isn't about making Nvidia slower... it's about making ATI faster. It seems things could've been done to increase the performance of ATI hardware, i.e. the R520, but they were overlooked.

I'm still interested in the X1900XT benchmarks but I guess this will wait till next week.

US
 
Nick[FM] said:
I am still not sure how this can be seen as "double standard". We now have multivendor hardware shadow mapping support, which we didn't in our previous 3DMark. What's wrong with us supporting more vendors' hardware shadow mapping?

I believe the same reason was given by FM in your previous 3DMark. Supposedly there was another IHV (S3?) who used DST24/PCF, and that was the reason given by FM to justify its use in 3DM05. So here you are saying that there wasn't multivendor support during 3DM05, which basically means that all scores on GF6 cards should be revised, since it adds quite a significant number of points to the final score.
 
Nick, one of my main gripes is the amount of influence the CPU has on the final 3DMark score; dual-core processors in particular raise performance too much (in my opinion, of course). I gather that you folks believe that multi-core processors will result in such a performance increase in future games, and hence you took that decision - do you have a timeframe in mind for when you believe we'll see such performance increases from multi-core processors in real games, as portrayed in 3DMark06?
 
Nick[FM] said:
It doesn't get a 3DMark score, but it does get all available sub scores which are as useful for comparison as the final 3DMark score.

Cheers,

Nick
Thanks for taking the time to answer, but you still seem to have side-stepped my question.
mongoled said:
So could a 3DMark representative give an explanation to a non-tech-savvy person such as myself as to why this is the case (the 7800 with AA gives no score *confused*)?

-EDIT- The emphasis in this question is placed on WHY the 7800 with AA gets no score.

Surely it would make more 'sense' that a score is given when AA is enabled. I say 'sense' because its definition is meaningful in this context, and to my eyes the action your company has taken makes no sense. It seems illogical...
 
CJ said:
I believe the same reason was given by FM in your previous 3DMark. Supposedly there was another IHV (S3?) who used DST24/PCF, and that was the reason given by FM to justify its use in 3DM05. So here you are saying that there wasn't multivendor support during 3DM05, which basically means that all scores on GF6 cards should be revised, since it adds quite a significant number of points to the final score.
I see your point. It is true that in 3DMark05 we already had multivendor DST support, but we only supported D24X8 & PCF (the only hardware shadow mapping available at the time). Now we support both D24X8 & PCF and DF24 & FETCH4. Maybe I should call it "extended multivendor DST support"? :smile:

Cheers,

Nick
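(For context, a rough sketch of how a D3D9 renderer can probe for the two hardware shadow-mapping paths mentioned above: D24X8 depth textures with PCF, and the DF24 FourCC format paired with FETCH4. The helper name is invented; this is an illustration, not 3DMark06's actual code.)

[code]
#include <d3d9.h>

// DF24 is exposed as a FourCC depth format on ATI hardware that supports it.
const D3DFORMAT kFmtDF24 = (D3DFORMAT)MAKEFOURCC('D', 'F', '2', '4');

// Returns true if 'depthFormat' can be created as a depth-stencil *texture*,
// i.e. rendered into and then sampled for hardware shadow mapping.
// supportsDepthTexture is an invented name for this example.
bool supportsDepthTexture(IDirect3D9* d3d, UINT adapter,
                          D3DFORMAT displayFormat, D3DFORMAT depthFormat)
{
    HRESULT hr = d3d->CheckDeviceFormat(adapter,
                                        D3DDEVTYPE_HAL,
                                        displayFormat,
                                        D3DUSAGE_DEPTHSTENCIL,
                                        D3DRTYPE_TEXTURE,
                                        depthFormat);
    return SUCCEEDED(hr);
}

// Usage: probe D3DFMT_D24X8 (the PCF path) and kFmtDF24 (the FETCH4 path).
// If neither is available, fall back to a colour render target such as R32F
// for the shadow map and do the depth compare in the pixel shader.
[/code]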
 
Kombatant said:
Nick, one of my main gripes is the amount of influence the CPU has on the final 3DMark score; dual-core processors in particular raise performance too much (in my opinion, of course). I gather that you folks believe that multi-core processors will result in such a performance increase in future games, and hence you took that decision - do you have a timeframe in mind for when you believe we'll see such performance increases from multi-core processors in real games, as portrayed in 3DMark06?
We are already seeing games supporting dual cores, but when they will support them to the same extent as 3DMark06 does is a bit difficult to predict. I would presume during this year, since dual cores are becoming more "mainstream" already. It is more or less up to the developers to decide how many resources they want to put into supporting dual cores. We see it (DC) as a great thing, and 3DMark06's CPU tests are proof that if the support is done efficiently, the gains in CPU utilization/performance can be pretty big. Really, it is up to the game developers to decide if they really want to utilize the possibilities DCs have to offer.

Cheers,

Nick
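(To make the kind of work split Nick describes concrete, here is a minimal sketch of physics and AI being updated on separate threads each frame. All names are invented and modern C++ threads are used for brevity; a 2006 engine would have used native Win32 threads, and this is not 3DMark06's actual code.)

[code]
#include <functional>
#include <thread>

struct World { /* game state: rigid bodies, agents, ... */ };

// The two updates are assumed to touch disjoint parts of the world state,
// so they can safely run in parallel within one frame.
void updatePhysics(World& w, float dt) { /* integrate rigid bodies, etc. */ }
void updateAI(World& w, float dt)      { /* path-finding, decisions, etc. */ }

void updateFrame(World& world, float dt)
{
    // Run the two heaviest CPU jobs on separate threads and join before
    // rendering, so a second core is actually put to work every frame.
    std::thread physics(updatePhysics, std::ref(world), dt);
    std::thread ai(updateAI, std::ref(world), dt);
    physics.join();
    ai.join();
}
[/code]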
 
mongoled said:
Thanks for taking the time to answer, but you still seem to have side-stepped my question.


-EDIT- The emphasis in this question is placed on WHY the 7800 with AA gets no score.

Surely it would make more 'sense' that a score is given when AA is enabled. I say 'sense' because its definition is meaningful in this context, and to my eyes the action your company has taken makes no sense. It seems illogical...
I recall that I already answered this question a couple of pages back? :???:

Anyway, as said, the point is that we require the hardware to be able to run all tests that are available with the default settings, no matter what settings & options are then used. I also want to remind everyone that this is not an IHV-specific thing! Any hardware lacking support for some optional feature (from the benchmark options) will work the same way. Some of you seem to think this is an NVIDIA-only thing, but it isn't.

Cheers,

Nick
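(A sketch of the scoring rule as stated: the overall score exists only if every default test could run with the chosen settings, while sub scores are always reported for whatever did run. The types, names and placeholder combination below are invented; this illustrates the policy, not Futuremark's actual code.)

[code]
#include <optional>

struct Capabilities {
    bool canRunSM2Tests;      // SM2.0 graphics tests with the chosen settings
    bool canRunHDRSM3Tests;   // HDR/SM3.0 graphics tests with the chosen settings (e.g. AA on)
    bool canRunCPUTests;
};

// Placeholder combination of the sub scores (see the earlier sketch).
double combineSubScores(double sm2, double hdrSm3, double cpu)
{
    return 3.0 / (1.0 / sm2 + 1.0 / hdrSm3 + 1.0 / cpu);
}

// Sub scores are always reported for whatever ran; the overall score only
// exists when every default test could run with the current settings.
std::optional<double> overallScore(const Capabilities& caps,
                                   double sm2, double hdrSm3, double cpu)
{
    if (!caps.canRunSM2Tests || !caps.canRunHDRSM3Tests || !caps.canRunCPUTests)
        return std::nullopt;  // e.g. a 7800 with AA forced on: sub scores only, no total
    return combineSubScores(sm2, hdrSm3, cpu);
}
[/code]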
 
Nick[FM] said:
I recall that I already answered this question a couple of pages back? :???:

Anyway, as said, the point is that we require the hardware to be able to run all tests that are available with the default settings, no matter what settings & options are then used. I also want to remind everyone that this is not an IHV-specific thing! Any hardware lacking support for some optional feature (from the benchmark options) will work the same way. Some of you seem to think this is an NVIDIA-only thing, but it isn't.

Cheers,

Nick
I'm sorry if I missed it, thanks for answering :)
 
Nick[FM] said:
We are already seeing games supporting dual cores, but when they will support them to the same extent as 3DMark06 does is a bit difficult to predict. I would presume during this year, since dual cores are becoming more "mainstream" already. It is more or less up to the developers to decide how many resources they want to put into supporting dual cores. We see it (DC) as a great thing, and 3DMark06's CPU tests are proof that if the support is done efficiently, the gains in CPU utilization/performance can be pretty big. Really, it is up to the game developers to decide if they really want to utilize the possibilities DCs have to offer.

Cheers,

Nick
My point is that, the way the benchmark is constructed, CPU performance is abstracted from GPU performance. Allow me to explain what I mean. SM2 and SM3/HDR scores are more or less independent from the type of CPU you use - which is great as far as consistency is concerned. And yes, multi-core processors are seeing more support and will see more support as they become more mainstream. But (you knew there was a 'but' coming, didn't you) what I am saying is that:

a) right now, as I have said in one of my previous posts, we have results that have no relation to actual game performance. A PentiumD 2.8GHz processor will give you a 30% bigger CPU score than an AthlonFX-57 - which sounds a bit, you know, out of this world. As you said, you have no way of knowing the level of multi-core improvements developers will bring to the table, so I was interested to find out how you guys decided on the amount of influence dual-core processors (even if they are weaker in performance, like the Pentium Ds) have in your benchmark.

b) CPU scores and GPU scores are distinct, as it is now. Meaning that, if you pair a lesser gfx card with a top-of-the-line dual-core processor (call this Scenario A), you'll get a score which is bigger than someone with a top-of-the-line gfx card and a single-core processor (call this Scenario B). In games, on the other hand, both the CPU and the GPU influence the framerates you get; aka you'll never see a scenario like the one I described above. From what I read, the overall 3DMark score is there to give you an indication of your system performance (although I thought PCMark was doing that already - please correct me if I am wrong) and not just games/gfx card performance. So, what I am trying to ask is, what do you believe the overall mark is portraying exactly? How are the scenarios I mentioned justified? How is Scenario A better than Scenario B in real-life usage, since 3DMark's tests are focused mainly on the gfx card?
 
Kombatant said:
SM2 and SM3/HDR scores are more or less independent from the type of CPU you use - which is great as far as consistency is concerned.
We've got lots of results posted already in this thread that indicate 3 of the 4 graphics tests are CPU-limited.

Jawed
 
Jawed said:
We've got lots of results posted already in this thread that indicate 3 of the 4 graphics tests are CPU-limited.

Jawed

Thanks for pointing it out, man. I stand corrected on that, then.
 
Nick[FM] said:
I recall that I already answered this question a couple of pages back? :???:

Anyway, as said, the point is that we require the hardware to be able to run all tests that are available with the default settings, no matter what settings & options are then used. I also want to remind everyone that this is not an IHV-specific thing! Any hardware lacking support for some optional feature (from the benchmark options) will work the same way. Some of you seem to think this is an NVIDIA-only thing, but it isn't.

Cheers,

Nick

I have to agree with many people on this forum that giving NV no score is illogical. When it comes to AA and HDR, the 7800 behaves like an SM2.0 card, so it should get SM2.0 scoring. It does all the SM2.0 tests with AA but is unable to run the HDR/SM3.0 tests. It renders exactly the same in this scenario as an X850 card. The X850 gets a score but the 7800 does not... although the rendering work is absolutely comparable!

3DM06 AA Test:
X850 does run SM2.0 tests with AA -> SM2.0 score counted (because it cannot run the SM3.0/HDR tests at all)
7800 does run SM2.0 tests with AA -> no score counted (because it cannot run the SM3.0/HDR tests with AA)
X1800 XT does run SM2.0 tests with AA -> SM3.0 score counted (because it can run all tests with AA)

Where is the logic???

In real games, GF7 owners have to make the same decision: use HDR OR AA. It is a limitation of the hardware, and this should be reflected in a score... that is what 3DM was meant for: a gamers' benchmark that shows what the PC is capable of, reflected by a score.

The reason this is important is that 3DM is seen as a standard by many people. They just look at the overall score. But there is no score for NV, so nobody will pay attention to the important AA situation. NV can claim that it's not comparable and just use the SM2.0 scoring.

Even just counting the individual result scores does not help here, because ATI cannot benefit in terms of scoring from their AA+HDR work. They have a score, while NV is not "judged" for not supporting it. You leave it neutral in this case, although the situation isn't...

Klaus
 
Kombatant said:
a) right now, as I have said in one of my previous posts, we have results that have no relation to actual game performance. A PentiumD 2.8GHz processor will give you a 30% bigger CPU score than an AthlonFX-57 - which sounds a bit, you know, out of this world. As you said, you have no way of knowing the level of multi-core improvements developers will bring to the table, so I was interested to find out how you guys decided on the amount of influence dual-core processors (even if they are weaker in performance, like the Pentium Ds) have in your benchmark.
3DMark06 is a forward-looking benchmark (as all our new 3DMarks are at the time they are released). There are already games coming out which have some sort of dual-core support, but it will take a while before game developers tune their engines to fully support dual-core CPUs. I am sure that with time games will use DCs to the same extent as we do in 3DMark06. It is just a matter of time; I don't see any reason why not. I also have a hunch that when Xbox 360 games (which fully utilize the multi-core CPU) are ported to PCs, we will see the performance benefits of dual cores.

Kombatant said:
b) CPU scores and GPU scores are distinct, as it is now. Meaning that, if you pair a lesser gfx card with a top-of-the-line dual-core processor (call this Scenario A), you'll get a score which is bigger than someone with a top-of-the-line gfx card and a single-core processor (call this Scenario B). In games, on the other hand, both the CPU and the GPU influence the framerates you get; aka you'll never see a scenario like the one I described above. From what I read, the overall 3DMark score is there to give you an indication of your system performance (although I thought PCMark was doing that already - please correct me if I am wrong) and not just games/gfx card performance. So, what I am trying to ask is, what do you believe the overall mark is portraying exactly? How are the scenarios I mentioned justified? How is Scenario A better than Scenario B in real-life usage, since 3DMark's tests are focused mainly on the gfx card?
Look at it from this point of view. If you have a high-end DC CPU but a lower-end GPU, which one will be the more obvious limiting factor in upcoming titles? Then again, if you have a single-core CPU and a high-end GPU, which one will become the limiting factor as soon as games utilize dual cores and use more complex AI & physics? The point of the CPU tests is to support the latest technology and show the benefits of it. Dual cores really are efficient CPUs if the application supports them well. In 3DMark06 I think we have done an excellent job of showing the real benefits of dual-core technology. I can't see why games wouldn't do the same. If I were a game developer, I would certainly put resources into optimizing the CPU side of things in my game (to support dual cores efficiently). 3DMark06 is proof of the benefits.

Cheers,

Nick
 
Nick[FM] said:
DF24 works with FETCH4, but certainly has nothing to do with DFC (or do you mean PCF?). DF24 and FETCH4 go hand in hand, as D24X8 and PCF do. Dynamic Flow Control (DFC) has nothing to do with FETCH4, PCF, DST, etc.

The X1800 supports fetch4, but it does not support DF24; therefore it is not running fetch4, due to your requirements. The X1800 also supports rather good dynamic branching, yet it does not seem to use this either. Why? I understand that you believe 16-bit DST was not up to standards, but it would have been nice to have it as an option.

We didn't decide against HDR+AA (a DX feature) since we support it in 3DMark06. If any hardware doesn't support that feature, then it .. simply doesn't. :???: DF24 is supported since it helps in shadow rendering on hardware that supports the feature (just like D24X8 on other cards). I am still not sure how this can be seen as "double standard". We now have multivendor hardware shadow mapping support, which we didn't in our previous 3DMark. What's wrong with us supporting more vendors' hardware shadow mapping?

The difference is that you are forcing standards supported by NVIDIA onto ATI, but not the other way around. HDR+AA is not being forced onto the 7x00 series like D24X8 is on the X1800 (instead you just give the 7 series an N/A score), and since the X1800 does not support D24X8, it has to fall back to R32F, which has an impact on bandwidth. So tell me, how is 24-bit to 32-bit a fair comparison? It's apples to oranges. So while the 7800s are running well on 24-bit with PCF, the X1800 is being compared to it on 32-bit without any fetch4 or DFC support. Thus it is not a relevant test for comparing the two cards' capabilities, imo.
 
Nick[FM] said:
The point of the CPU tests is to support the latest technology and show the benefits of it. Dual cores really are efficient CPUs if the application supports them well. In 3DMark06 I think we have done an excellent job of showing the real benefits of dual-core technology. I can't see why games wouldn't do the same. If I were a game developer, I would certainly put resources into optimizing the CPU side of things in my game (to support dual cores efficiently). 3DMark06 is proof of the benefits.

Cheers,

Nick
So, all in all, the overall score is a proof of concept of what would happen if developers head that way when they develop their games? Or am I getting this all wrong?
 