Will 3DMark07 be a popular benchmark tool?

Nick[FM];906103 said:
For a great overall performance number, use the 3DMark Score. It's all in there.

Well, what he's getting at is that it's not a great overall performance number at all, in that it's not a useful indication of anything.

Oh, and just as a side note, 3DMark06 comes with graphics tests and CPU tests (plus the feature tests), not game tests.

Well, what exactly is the total score indicative of? What other application besides 3dmark remotely follows the performance metrics determined by FM's weighting of CPU/GPU scores?

For example, people claim that the G80 is "bottlenecked" by all but the fastest CPUs in 3dmark. That isn't exactly true - the overall 3dmark score is "bottlenecked", but the SM2.0 and SM3.0 tests (which are the true tests of the graphics hardware) can probably be maxed out with less than the QX6700s that everyone clamours for. Since 3dmark is mostly used to test graphics hardware, the total score is very misleading in this context.
 
Well, what he's getting at is that it's not a great overall performance number at all, in that it's not a useful indication of anything.
As you probably know, 3DMark06 maxes out the performance of each individual component in a gaming system. That includes multi-core CPUs as well. Games will follow, but when that will happen is very hard to predict. I am sure that the most popular and high-end game engines (and why not the non-high-end engines too) will be optimized for multi-core CPUs. If not, then it's a shame, a very big :oops: and I am out of words.

Well, what exactly is the total score indicative of? What other application besides 3dmark remotely follows the performance metrics determined by FM's weighting of CPU/GPU scores?
The final score is indicative of the whole system's gaming performance. We are confident that with increasing amounts (and quality) of physics, better AI and better gameplay, multi-core CPUs will start to shine. It is inevitable - or do you think that quad-core CPUs were developed simply for a better experience in Thunderbird? ;)

You need to keep in mind that game developers have to decide on their game engine architecture at a pretty early stage of development. Of course it is not a must, but at least based on our experience, it is very wise to do so. If you start to hack something into the engine at a very late stage, things start to break apart, or the new addition isn't really fully efficient. Developing a full-blown game takes years, and what you see being released now (or 6 months ago) has been in development for a long, long time. Back then, quad-core CPUs weren't available. Do you see my point? It takes time for game developers to catch up with the latest tech, but they will get there, and I have a strong feeling that it will happen very soon.

For example, people claim that the G80 is "bottlenecked" by all but the fastest CPUs in 3dmark. That isn't exactly true - the overall 3dmark score is "bottlenecked", but the SM2.0 and SM3.0 tests (which are the true tests of the graphics hardware) can probably be maxed out with less than the QX6700s that everyone clamours for. Since 3dmark is mostly used to test graphics hardware, the total score is very misleading in this context.
Whenever there is an insanely fast CPU, the GPU may become the bottleneck, and vice versa. But that's exactly how most games work. If you put in a 3-year-old CPU (not even with HT) and have an SLI/Crossfire system, you will most probably be heavily bottlenecked by the CPU. Both in games and in 3DMark06. Now try the same thing with a multi-core CPU and lowish-end SM3.0 hardware. What do you get? The bottleneck is the GPU. Both in games and in 3DMark06. As I see it, that's far from misleading.
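
(To make the bottleneck point concrete, here is a minimal C++ sketch of the toy model Nick is describing - frame rate limited by whichever of the CPU or GPU takes longer per frame. It is an illustration only, not anything from Futuremark, and all the per-frame millisecond numbers are hypothetical.)

```cpp
#include <algorithm>
#include <cstdio>

// Toy model, not Futuremark's: with the CPU and GPU working in parallel,
// frame rate is limited by whichever side takes longer per frame.
double estimateFps(double cpuMsPerFrame, double gpuMsPerFrame) {
    return 1000.0 / std::max(cpuMsPerFrame, gpuMsPerFrame);
}

int main() {
    // Old single-core CPU plus an SLI/Crossfire rig (hypothetical numbers):
    // the CPU is the bottleneck and the fast GPUs sit partly idle.
    std::printf("CPU-bound: %.0f fps\n", estimateFps(25.0, 8.0));  // 40 fps
    // Fast multi-core CPU plus a low-end SM3.0 card: now the GPU limits.
    std::printf("GPU-bound: %.0f fps\n", estimateFps(6.0, 30.0));  // ~33 fps
    return 0;
}
```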

At the end of the day, it is up to how people use the benchmark(s) and what they are looking for. If you need to know how well the latest GPUs perform in very taxing GPU-related operations, run 3DMark06 and check out the SM2.0 and HDR/SM3.0 scores. It's easy. If someone wants to know if the whole system is up for new and high-tech games, check out the 3DMark score. And if CPU performance is the question, use the CPU score. As I said, it's all in there. We didn't use to have "sub-scores", but we added them to 3DMark06 so that people can use them if they want to know the exact performance of one individual component (CPU or GPU).

The next 3DMark (which I think this topic is all about anyway) will be in some ways different again, but the same basic rules will apply. Can't spill too much about what we have been working on, but the shots some sites claim are from the next 3DMark... Well, they aren't. ;)

Cheers,

Nick
 
Nick[FM];906103 said:
If you simply want to benchmark the GPU's in 3DMark, use the SM2.0 and HDR/SM3.0 Scores. That's why we put them in there. For good CPU benchmarking, use the CPU Score. For a great overall performance number, use the 3DMark Score. It's all in there.


The problem I have is exactly how much it affects the total score, and the fact that it was auto-added to the total '3Dmark' score, which doesn't make any sense. If I wanted a CPUmark then that's what I would want, but with how people treat 3Dmark scores in general, adding in a greatly influential CPU score wasn't exactly something I was crazy about.

As I'm sure you're aware, people like to 'leak' 3Dmark scores on up-and-coming hardware, and they usually use, and continue to use, total scores. Some of the more shadowy sources don't list the specs of the rest of the PC, and when the difference between a dual- and quad-core processor can be something like 1500 points, it gets a tiny bit annoying, since that leak is in effect useless instead of mostly useless but possibly good news. Don't get me wrong, I don't lose any sleep over it, but to me it lessened the value of what the 3Dmark score actually meant in the past.

On a side note, I have yet to see any games scaling well with multi-core processors, or MHz for that matter. Oblivion and FEAR at higher quality settings and resolutions are good examples of games that don't react much at all to MHz or core scaling. Hopefully you aren't all getting way ahead of yourselves with exactly how much work you put into CPU impact for graphics-card-based demo runs. Ideally I want to see barely any impact from the CPU (be it 2GHz or 2.6GHz), because that's exactly what's going to happen when I play a more modern game at 1600x1200 with AA/AF.
 
Agreed... I've been a bit unimpressed at how reluctant developers are to really start to attack parallelism, but I suppose there are code bases and tools out there that are hard to work around. I guess we also have to wait until people are sufficiently convinced that there isn't going to be some "silver bullet" - you will have to redesign data structures and algorithms.
Yep. It is really not like setting "add support for multi-core CPU = true" to get efficient support for multi-core CPUs. It's not impossible or even too difficult, but it requires hard work and a lot of skill. :cool: I am more than confident that game developers have the skills, but as I mentioned in my other post, they have been working on newly released titles for years, and simply bolting full-blown, efficient multi-core support onto an engine isn't feasible at a late stage of development. Some may be able to pull it off, but hardly that many.
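
(To illustrate what that hard work amounts to, here is a minimal data-parallel update sketch - written in modern C++ for brevity, and purely hypothetical: the Entity struct and function names are invented, and the genuinely difficult part in a real engine is restructuring data so the slices are independent at all.)

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical entity; real engines carry far more (and more entangled) state.
struct Entity { float x = 0.0f, vx = 1.0f; };

// Update one contiguous slice of entities. The hard engineering work is
// restructuring data so that slices really are independent - no shared
// mutable state, no locks on the hot path.
void updateSlice(std::vector<Entity>& entities, std::size_t begin,
                 std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        entities[i].x += entities[i].vx * dt;
}

void parallelUpdate(std::vector<Entity>& entities, float dt) {
    const std::size_t n =
        std::max<std::size_t>(1, std::thread::hardware_concurrency());
    const std::size_t chunk = (entities.size() + n - 1) / n;
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < n; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(begin + chunk, entities.size());
        if (begin < end)
            workers.emplace_back(updateSlice, std::ref(entities), begin, end, dt);
    }
    for (auto& w : workers) w.join();  // synchronize before rendering
}
```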

In any case now that the consoles really necessitate multi-core usage to get reasonable performance, I suspect we'll see more on the PC side. From that point of view I guess it's good that - for example - the PPU in the Cell is slow as hell ;)
Yes, the consoles may accelerate multi-core CPU support in games, and I don't even question it. If you develop a game for multi-core consoles, why remove the multi-core support when you will still release the same game on the PC? I don't say that it is 1:1 between the consoles and a PC, but once you have experience with more cores than one, you already know the basics and the rest is simply hard work. ;) Of course, I am perhaps not the best person to comment on developing for any consoles, but still. Meh, you get the point. :smile:

Cheers,

Nick
 
Nick[FM];906234 said:
Can't spill too much about what we have been working on, but the shots some sites claim are from the next 3DMark... Well, they aren't. ;)

Thank god. No more nature walks. :LOL:

I hope at least one of the tests includes cars, or at least a fast-moving vehicle; the "benchers" seem to like that. :D

Can't wait to see what's up. ;)
 
The problem I have is exactly how much it affects the total score, and the fact that it was auto-added to the total '3Dmark' score, which doesn't make any sense. If I wanted a CPUmark then that's what I would want, but with how people treat 3Dmark scores in general, adding in a greatly influential CPU score wasn't exactly something I was crazy about.
Again, it is up to users/media to use the benchmark correctly. All the information about it (including how the score formula works, i.e. the weight of the CPU tests in the final 3DMark score) is publicly available. Just check out the 3DMark06 whitepaper. It has all the information you need to understand how the benchmark works and how the scores are calculated.
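
(For readers who won't open the whitepaper, the sketch below shows the general shape of the idea - a weighted combination of sub-scores - with made-up weights and sub-scores, not Futuremark's published constants. A weighted harmonic mean pulls the total toward the weaker component, which is also why a CPU swap alone can move the total by four figures, as complained about above.)

```cpp
#include <cstdio>

// Illustrative only - the real formula and constants live in the 3DMark06
// whitepaper. A weighted harmonic mean drags the overall score toward the
// weaker sub-score, so no single fast component can hide a slow one.
double overallScore(double graphicsScore, double cpuScore,
                    double wGfx = 0.75, double wCpu = 0.25) {
    return 1.0 / (wGfx / graphicsScore + wCpu / cpuScore);
}

int main() {
    // Same hypothetical GPU sub-score, dual-core vs. quad-core CPU sub-score:
    std::printf("dual-core CPU: %.0f\n", overallScore(6000.0, 2000.0));  // 4000
    std::printf("quad-core CPU: %.0f\n", overallScore(6000.0, 4000.0));  // ~5333
    return 0;
}
```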

As I'm sure you're aware, people like to 'leak' 3Dmark scores on up-and-coming hardware, and they usually use, and continue to use, total scores. Some of the more shadowy sources don't list the specs of the rest of the PC, and when the difference between a dual- and quad-core processor can be something like 1500 points, it gets a tiny bit annoying, since that leak is in effect useless instead of mostly useless but possibly good news. Don't get me wrong, I don't lose any sleep over it, but to me it lessened the value of what the 3Dmark score actually meant in the past.
Again, we are not able to control how people present the 3DMark scores, and as for leaked numbers... well... I really dislike any kind of leaked information, and any such information shouldn't be considered valid. I know that speculating about new hardware can be fun, but people should know that leaked information is still only that - leaked information with no real value/validity.

On a side note, I have yet to see any games scaling well with multi-core processors, or MHz for that matter. Oblivion and FEAR at higher quality settings and resolutions are good examples of games that don't react much at all to MHz or core scaling. Hopefully you aren't all getting way ahead of yourselves with exactly how much work you put into CPU impact for graphics-card-based demo runs. Ideally I want to see barely any impact from the CPU (be it 2GHz or 2.6GHz), because that's exactly what's going to happen when I play a more modern game at 1600x1200 with AA/AF.
Don't worry, we won't be sitting around doing nothing. ;) We are constantly following what other developers are creating and where the industry is going. It's our mission to provide benchmarks which are future-proof and stress all the components gamers spend good money on.

On an unrelated note, this reply-box doesn't really work that well in Safari. :???:

Cheers,

Nick
 
Thank god. No more nature walks. :LOL:

I hope at least one of the tests includes cars, or at least a fast-moving vehicle; the "benchers" seem to like that. :D

Can't wait to see what's up. ;)
Hehe, whether we will ever throw in a new nature scene is something I can't comment on, but the next 3DMark will be niccccccce. ;)

Cheers,

Nick
 
Nick[FM];906234 said:
The final score is indicative of the whole system's gaming performance. We are confident that with increasing amounts (and quality) of physics, better AI and better gameplay, multi-core CPUs will start to shine.


Whenever there is an insanely fast CPU, the GPU may become the bottleneck, and vice versa. But that's exactly how most games work. If you put in a 3-year-old CPU (not even with HT) and have an SLI/Crossfire system, you will most probably be heavily bottlenecked by the CPU. Both in games and in 3DMark06. Now try the same thing with a multi-core CPU and lowish-end SM3.0 hardware. What do you get? The bottleneck is the GPU. Both in games and in 3DMark06. As I see it, that's far from misleading.

Thanks for the thorough response, Nick. I don't think anyone doubts that CPUs will continue to be an important factor in gaming performance, but the point of contention is the extent of their impact. I'll still say that 3dmark06 does not provide a good indication of total system performance. The CPU contribution is way too out of whack with reality. Also, it's the job of 3dmark07 to emulate what we will be seeing in the next year or two... 3dmark06 should be reflecting what we see today in real games.

Looking forward to fully hardware-accelerated 3D tests in 3dmark07 that take advantage of multiple cores :) It's just hard to swallow a CPU contribution based on a contrived software render that is added back into the final score with an arbitrary weighting.

OK, that's enough bitching from me. You guys have done a fantastic job with all the versions of 3dmark, and since I contributed nothing to the process I'll shhh now :)
 
OK, so if you acknowledge that the GPU-specific scores and CPU-specific scores are loosely and/or ambiguously linked to each other, why insist on concocting a single overall score which is (by extension) loosely or ambiguously linked to average game performance (current or future)?


How about losing the single summary score from future 3DMark0x's and sticking with the (more informative) CPU- and GPU-specific scores as an ensemble?
 
Question for Nick[FM]

Word has leaked out that 3DMark07 will support multi-core CPUs in the game tests.
A. Will it be similar to the old Comanche 4 helicopter game, which was heavily CPU-dependent?
B. And will FM use a real game engine, as they have in the past since 3DMark2001?

1. How about including SM3.0/DX9, or is it going to be SM4.0/DX10 only?
 
Again, it is up to users/media to use the benchmark correctly.

But many of them are stupid; you need to make things as foolproof as possible!

Nick[FM];906276 said:
That's 3DMark06... Or did I miss your point? :???:

Cheers,

Nick

*cue Pink Panther music*


I don't know what you're talking about!


Okay, okay, I made a small mistake. I thought that was a preview shot of 07 for some reason.
 
Nick[FM];906275 said:
Hehe, whether we will ever throw in a new nature scene is something I can't comment on, but the next 3DMark will be niccccccce. ;)

Cheers,

Nick

I really liked the nature scenes in past 3DMarks, although I wasn't as impressed with the night scene from 05/06, purely because it was dark (still great though). So I would love to see a new, spectacular nature scene in 07, although with the likes of Crysis and Stalker on the horizon, I think you have your work cut out making it as mind-blowing in comparison to the games as previous releases were.

On the point of the total 3DMark score, I like the fact that CPU performance has a big impact. The way I see it, it's a measure of total system gaming potential. So maybe it doesn't represent gaming performance, but to me that's an indication that games aren't properly utilising system potential, i.e. properly maxing out the CPU and GPU equally. OK, that's difficult when you have multiple configs to deal with, but just as graphics scale, I would like to see more scaling of the CPU components in games, especially across multiple cores, so that I can sort out the balance myself and ensure MY system is maxed out as much as possible across all components without one acting as a bottleneck.

I guess my main point there is that PC CPUs currently seem heavily underutilised (i.e. most high-end games will run fine maxed out on a single-core P4 as long as you have a good enough GPU), and 3DMark takes the approach of "this is what would happen if they weren't".

BTW, that scene posted from 06 - is that only available in the pro version? I have never seen it with my SM3 GPU.
 
OK, so if you acknowledge that the GPU-specific scores and CPU-specific scores are loosely and/or ambiguously linked to each other, why insist on concocting a single overall score which is (by extension) loosely or ambiguously linked to average game performance (current or future)?

How about losing the single summary score from future 3DMark0x's and sticking with the (more informative) CPU- and GPU-specific scores as an ensemble?
I should imagine that the typical user of 3DMark actually prefers to see a single overall score rather than separate ones at a glance; giving a list of scores rather than a single one would probably confuse more people than the number who complain about it. If FM never offered a single score, what do you think most people would end up requesting? As an analogy, what do you want to see first after an exam - your total score or how you fared on each question? (Yes, I know the two systems don't work the same way; I'm just talking about displayed results and not how they're obtained!)

triniboy said:
Also, it's the job of 3dmark07 to emulate what we will be seeing in the next year or two... 3dmark06 should be reflecting what we see today in real games.
Why should 06 and 07 be treated differently from each other? 3DMark06 was launched in Jan 06, so by your statement for 07 it should be reflecting games during 07/08 - but that's also what you're expecting from the next 3DMark. It's a bit confusing as to what you're expecting from them.

What other application besides 3dmark remotely follows the performance metrics determined by FM's weighting of CPU/GPU scores?
A more interesting question would be what other application remotely does the same as 3DMark (i.e. measures performance)? You've got Aquamark and that's it, I think - it's a bit of a one-sided argument when there's nothing much to point at and say "this does such and such better than 3DMark". Besides, ever since 03 the typical user of 3DMark has complained about the lack of influence the CPU has on the score - so if it's a case of too much or too little, then what should it be? I'm not sure I'd like being in FM's shoes on these decisions... you never can please everyone! ;)

pjbliverpool said:
BTW, that scene posted from 06 - is that only available in the pro version? I have never seen it with my SM3 GPU.
You need the Pro or Advanced version (never can remember which) in order to run the full demo - it's at the end of the whole thing, before the credits, I think.
 
The GF256 is a 4-pipe chip with 1 trilinear TMU per pipe. The GF2MX is a 2-pipe chip with 2 bilinear TMUs per pipe. And what is the R7500 other than an overclocked R100? Try HL2 or Aquanox on a GF256/GF2MX and on an R100. You will be surprised :)

As a collector I must say that it's much easier to get a working Voodoo 5 than a Kyro II or Radeon 256 VIVO. The GeForce FX is rare too, but we discuss those in almost every thread :)

Back to the topic - according to my tests, the Radeon DDR (and even the 64MB SDR) is faster than the GeForce 256/2MX in every D3D game, the only exception being 3DM01. The Kyro II is faster in all pre-TnL games, in some TnL games (e.g. Dungeon Siege), and in many OpenGL engines (Q3, Serious Sam)... And 3DM01 says that the GF256 should be >50% faster... So I stand by what I have said. 3DM01 was popular, but unfair.

Kyro II faster? Where will it end? ;) As to the 7500, if it's only an OC'd R100, that is one hell of an OC, as the original Radeons had 166 or 183 MHz core clocks (depending on model) with same-speed memory, and the 7500 was 290/460. The GF256 was 125/166 or 333 (SDR or DDR). The Kyro II was 175/175. The GF256 SDR sucked, period; the DDR version wasn't so bad. In most tests, the K2 and Radeons were faster than the GF256 cards, including 3dm01 - minus the K2, and that was shown to be an 01 design issue, as it didn't do occlusion removal the way the K2 was designed to do it, so it suffered. It was something like the K2 wanted front-to-back and 01 went back-to-front, or something like that.
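
(A side note on that draw-order point: on a conventional immediate-mode renderer, submitting opaque geometry front-to-back lets the depth test reject hidden pixels early, while back-to-front maximizes overdraw. Below is a minimal, hedged C++ sketch of just the sorting step - the DrawCall struct is hypothetical.)

```cpp
#include <algorithm>
#include <vector>

// Hypothetical draw call; a real one carries mesh, material, state, etc.
struct DrawCall {
    float depth;  // distance from the camera
};

// Opaque geometry submitted front-to-back lets the depth test reject hidden
// pixels early; back-to-front submission maximizes overdraw instead.
void sortFrontToBack(std::vector<DrawCall>& calls) {
    std::sort(calls.begin(), calls.end(),
              [](const DrawCall& a, const DrawCall& b) { return a.depth < b.depth; });
}
```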

Start here for benches if you think I'm kidding:
http://www.tomshardware.com/2002/04/18/vga_charts_i/page2.html
The one exception being SW:JK2, where the K2 wasn't any better than a TNT2.
 
Nick[FM];906273 said:
On an unrelated note, this reply-box doesn't really work that well in Safari. :???:

Hmm? More detail please... maybe a thread in site feedback. (Edit: Rys checked it on his Mac laptop and it looked fine to him, so now we're really curious what you're seeing...)
 
Why should 06 and 07 be treated differently from each other? 3DMark06 was launched in Jan 06, so by your statement for 07 it should be reflecting games during 07/08 - but that's also what you're expecting from the next 3DMark. It's a bit confusing as to what you're expecting from them.

Actually, by my statement 3dmark06 would cover 06/07 and 3dmark07 would cover 07/08. If we see any game in 2007 that is as dependent on multiple cores as 3dmark06, I would be very surprised. Another big problem with 3dmark06's approach is that the CPU score is independent of the graphics hardware, so it's impossible for the GPU to bottleneck the CPU - which, of course, happens all the time in games.

A more interesting question would be what other application remotely does the same as 3DMark (i.e. measures performance)?

What performance is it trying to measure? Certainly not 3D gaming. We already have PCMark and Sandra and a host of other applications to test CPU performance. If Futuremark declares that 3DMark is a test of total system performance based on some arbitrary weighting, then that's fine - but with the inclusion of the CPU test it's definitely not an accurate test of 3D performance.
 
Valve Multi-core benchmark
http://www.firingsquad.com/hardware/intel_core_quad_q6600_preview/page5.asp

Core 2 Extreme X6800 @ 2.93GHz (dual): 54 FPS
Core 2 Extreme QX6700 @ 2.67GHz (quad): 85 FPS

I must say, quad CPUs rule!!!! :)

Still no answer from Mr. Nick[FM]....

If I knew what resolution that was rendered at and whether any AA/AF was enabled, I'd have a reason to get excited.

Given those results:

http://www.firingsquad.com/hardware/intel_core_quad_q6600_preview/page8.asp

...and the fact that I don't give a damn what performance looks like at 800x600 without AA/AF, I'd have to see something way more promising.
 
If I knew what resolution that was rendered at and whether any AA/AF was enabled, I'd have a reason to get excited.

Given those results:

http://www.firingsquad.com/hardware/intel_core_quad_q6600_preview/page8.asp

...and the fact that I don't give a damn what performance looks like at 800x600 without AA/AF, I'd have to see something way more promising.

What would you say about this situation? (I know it is not a quad-CPU game.)
http://www.firingsquad.com/hardware/intel_core_2_extreme_qx6700/page14.asp

---
I understand your point of view: at high AA and resolution the game relies mostly, or entirely, on the GPU itself - the CPU has no impact at all in those situations.
I wonder how Futuremark will make that happen in 2007 for the new 3DMark07.
I still look at the example of the old Comanche 4 helicopter game. If you have a high-end video card and crank up the resolution and AA, the FPS stays the same whether at 800x600 or 1600x1200. (I did a few tests myself in Comanche 4: 800x600 vs. 1280x1024 @ 4xAA.) But if you overclock the CPU, the FPS increases at all resolutions, until the GPU runs into a bottleneck situation at high resolution.
 