Futuremark: 3DMark06

radeonic2 said:
what will [H] be doing with regards to 3dmark06?
it favors nvidia so perhaps you should incorporate it into your test suite ;)
Even though there are some, shall we say... "curious"... design decisions in 3DMark06, they did manage to get HDR+AA in there for ATI X1x00 cards, and it looks really nice (particularly the Deep Freeze test).

The inclusion of the CPU score into the final 3DMark score takes the "3D" out of 3DMark though. That was a terrible decision in my opinion. For that, and other reasons, I don't think I'll be using it in my reviews. In fact, I think I've just decided to drop all versions of 3Dmark from my benchmark suite.
 
Ratchet said:
Even though there are some, shall we say... "curious"... design decisions in 3DMark06, they did manage to get HDR+AA in there for ATI X1x00 cards, and it looks really nice (particularly the Deep Freeze test).

The inclusion of the CPU score into the final 3DMark score takes the "3D" out of 3DMark though. That was a terrible decision in my opinion. For that, and other reasons, I don't think I'll be using it in my reviews. In fact, I think I've just decided to drop all versions of 3Dmark from my benchmark suite.
Ya, that's nice, but as mentioned in this thread, what's the deal with NVIDIA cards getting no score rather than a reduced score, as with SM2 cards?

As for CPU scores counting... I think that is another bad decision.
If they want CPUs to count, why not have a game test that's more CPU-limited... say, one doing lots of physics calculations, since they are partnered with Ageia.
 
How does 3DMark06 favor Nvidia?

You would think that by default it should favor ATI, with all the work they have done on optimizing SM3 shader execution.

Have they done something that's just a little too obvious here?
 
Brent said:
Or you could just use games to find out how games perform...
Brent, unless you're on a crusade, it is best we don't re-read your stance again and again over here. Feel free to state it at the site you write for, however.

The way I have responded in this thread isn't about what is best for measuring 3D hardware performance in games (which is, financially speaking, what the IHVs primarily create the hardware for). This thread, I think, is about FM and its 3DMark app. Unless you want to discuss that instead of posting "games are best" one-liners, I think you're wasting space.

IOW, we get your message(s). There's no need to make us view your participation here as utterly boring and one-topic.
 
Last edited:
boltneck said:
How does 3DMark06 favor Nvidia?

You would think that by default it should favor ATI, with all the work they have done on optimizing SM3 shader execution.

Have they done something that's just a little too obvious here?

Well, from what I've heard, support for dynamic branching is one of the major features of SM 3.0. ATI focused on that with R5xx, but it does not show in 3DMark 2006. Since we know from other tests that ATI indeed succeeded in creating a strong performer in this department, we can only conclude that either the number of dynamic-branching shaders is low compared to the other ones, or they don't use significant branching.
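
For anyone unfamiliar with the term, here's a rough CPU-side analogy (illustrative only; the function and names are made up, and a real SM 3.0 shader would be HLSL, not C++) of what dynamic branching buys: pixels that take the cheap branch skip the expensive math entirely. If a benchmark's shaders rarely branch like this, R5xx's strength never gets a chance to show.

Code:
#include <cmath>

float ShadePixel(float shadowFactor, float nDotL)
{
    // Cheap branch: fully shadowed pixels skip the lighting math entirely.
    if (shadowFactor <= 0.0f)
        return 0.0f;

    // Expensive branch: the full lighting math runs only where it matters.
    float diffuse  = nDotL > 0.0f ? nDotL : 0.0f;
    float specular = std::pow(diffuse, 32.0f);
    return shadowFactor * (diffuse + 0.5f * specular);
}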
 
Last edited by a moderator:
I think the solution is for FM to create a patch with a setting that makes the NV cards use the same pixel shader workaround for 24-bit depth-stencil textures. That way we could see whether 3DMark06 is a benchmark or an NV demo. In fact, why not make a patch with a setting that forces all cards through a single code path?
 
question:
Why did FM decide that no current-generation card should be able to make 25 fps?!
Why not make a bench that looks good and runs SMOOTH on current cards (6800/X800 and up), with the next gen running even better (like 50 vs 25 fps)?
yea, right now the user gets to "see" how much smoother 3DMark06 will run after he buys, 6 months later, AGAIN a new $500-700 video card...


personally I'd prefer NOT to be hit so hard in the face with "upgrade, upgrade, UPGRADE, UPGRADE"

and the CPU test is... a joke
3.0 GHz P4 - I get 1 frame every few seconds... WHAT FOR?! Unable to do something that looks so-so and runs at a reasonable (>10 fps) speed? What kind of system would run this test smoothly? A 4-socket, 8-core Opteron rig?
Nice visuals, but too little balance IMHO.
 
Richteralan said:
I'm not sure why FM included the CPU mark in the final 3DMark score. A computer with a 6600GT coupled with a dual-core CPU can score higher than a 6800GS with a single-core CPU.
Does that correspond to actual gaming experience?
That is my main concern at this point as well - the dual-core CPUs alter the score to such a degree that even a Pentium D 2.8 GHz scores 30% higher than an Athlon FX-57! If I use it, I don't think I'll use the total 3DMark score, just the SM2/SM3 scores.
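
For reference, as I read the white paper the overall number is built as a scaled, weighted harmonic mean of the three sub-scores. The constants below are my reading and may well be off, so treat them as illustrative. The harmonic mean is the key point: the weakest component drags the total down out of proportion, which is exactly how the CPU score swings the overall number so much.

Code:
double Overall3DMark(double sm2, double sm3, double cpu)
{
    // Weights and scale are my reading of the white paper -- treat
    // them as illustrative, not gospel.
    const double wSM2 = 1.7, wSM3 = 1.5, wCPU = 0.3;
    const double scale = 2.5;
    // Weighted harmonic mean: one weak component (e.g. a single-core
    // CPU) pulls the total down far more than a weighted average would.
    return scale * (wSM2 + wSM3 + wCPU)
                 / (wSM2 / sm2 + wSM3 / sm3 + wCPU / cpu);
}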
 
http://www.firingsquad.com/hardware/3dmark_06/page11.asp

We’ve only begun benchmarking with the latest build of 3DMark in the past 18 hours, so we haven’t had enough time to come to any firm conclusions on how the X1600 performs so strongly, while it can be said that the X1800 XT disappoints, often finishing behind the GeForce 7800 GTX 256MB in many tests, even though real-world testing with today’s latest games indicates the opposite. We’re sure conspiracy theories will begin popping up in various forums shortly.

This conclusion is bang on.
 
To me the "games" are more about league tables and competition than about determining how good a card is at a specific task; the individual tests should be used for that instead. The games are the bit of fun that has made 3DMark popular; the tests give you the raw data.

By giving the games different point weightings, Futuremark has already let subjective opinion creep in, never mind the question of what to include or leave out of the games themselves. A classic example was 3DMark03 GT1, which not only earned a mere 7 points per fps because it was DX7 and not very stressful, but was also mainly single-textured, which favoured the ATI R300 series cards (remember how nvidia got upset about that, and that HUGE 3 post thread on B3D because nvidia was not favoured? :p). There's a little sketch below of how this weighting skews things.

To me, 06 is what 05 should have been: it does a better job of estimating current trends in games, especially with the inclusion of the CPU test, since more and more work will surely be offloaded from the GPUs onto the CPUs. If you have quad cores in 2007/2008, you need something to justify them!
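
As promised, the sketch: the final game score is just a weighted sum of the test frame rates, so the choice of constants decides which architecture comes out ahead. Only the ~7 points per fps for GT1 comes from the discussion above; the other weights are made up.

Code:
double GameScore(const double fps[4])
{
    // Points-per-fps weights: GT1's ~7 is from the post above, the
    // rest are illustrative. Nudge the weights and a different card
    // "wins" the overall score.
    const double pointsPerFps[4] = { 7.0, 25.0, 25.0, 25.0 };
    double score = 0.0;
    for (int i = 0; i < 4; ++i)
        score += pointsPerFps[i] * fps[i];
    return score;
}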
 
Last edited by a moderator:
chavvdarrr said:
question:
3.0 GHz P4 - I get 1 frame every few seconds... WHAT FOR?! Unable to do something that looks so-so and runs at a reasonable (>10 fps) speed? What kind of system would run this test smoothly? A 4-socket, 8-core Opteron rig?
Nice visuals, but too little balance IMHO.
Read the manual - it's never going to run at 10+ fps because the tests are set to a fixed frame rate of 2.
 
chavvdarrr said:
question:
Why did FM decide that no current-generation card should be able to make 25 fps?!
Why not make a bench that looks good and runs SMOOTH on current cards (6800/X800 and up), with the next gen running even better (like 50 vs 25 fps)?
yea, right now the user gets to "see" how much smoother 3DMark06 will run after he buys, 6 months later, AGAIN a new $500-700 video card...

personally I'd prefer NOT to be hit so hard in the face with "upgrade, upgrade, UPGRADE, UPGRADE"
Nice point. Maybe it's because they are FutureMark, and testing future games' behaviour is the target. And, sadly, future games will run on today's systems just the way 3DMark 2006 does - I mean games beyond the immediate future. But FutureMark will never get it exactly right; it's hard to predict the future, IMO. :)

and the CPU test is... a joke
3.0 GHz P4 - I get 1 frame every few seconds... WHAT FOR?! Unable to do something that looks so-so and runs at a reasonable (>10 fps) speed? What kind of system would run this test smoothly? A 4-socket, 8-core Opteron rig?
Nice visuals, but too little balance IMHO.

As for the CPU test, I read that there is a limit of 2 fps in order to minimize the effect of the video card. You know, it's not the CPU churning the vertices as in 3DMark 2003; it only does physics and AI. Of course, the scene still needs to be rendered, and that's done by the GPU, but this very low limit is set to avoid its influence on the results. I hope I got it right...
 
Last edited by a moderator:
having a long, shit day but d/l'ed this at work and just ran it:
A64 X2 @ 2100, 9800 Pro 370 core/340 mem
3DMark Score = 606
SM2.0 Score = 281
CPU Score = 1589

Only watched a bit of it, but it was a slideshow.

Poor old arthritic 9800 was getting flogged.
 
Brent said:
Or you could just use games to find out how games perform...
And which games help show how well cards will keep up with the techniques that'll be used in games 1-2 years from now?

Part of the point of 3DMark is to help indicate the future performance of hardware. How well it actually manages to do this varies from version to version: '03 predicted the trends of the early SM 2.0 cards perfectly, I'm not so sure about '05, and as you can see from this thread there's good reason to doubt how useful '06 will be.

But in the hands of those who know what they're doing, 3DMark has often been a useful tool for discovering how strong hardware is and how well it'll keep up. For clueless people who don't understand what it's for or how to use it (i.e. someone who goes around continually complaining that it doesn't correlate exactly with current games), it's quite useless.
 
ANova said:
Maybe you can explain to me why DF24 is required in order to use fetch4 and DFC? Also, why did you decide against HDR+AA on the 7x00 series (since it is not supported) but for DST24 even though it is also not supported on the X1800? This is what I mean by a double standard.
Good morning! Sorry it took me a while to get back here, but I had to get some sleep.
:smile:

DF24 works with FETCH4, but certainly has nothing to do with DFC (or do you mean PCF?). DF24 and FETCH4 go hand in hand, just as D24X8 and PCF do. Dynamic Flow Control (DFC) has nothing to do with FETCH4, PCF, DST, etc. We didn't decide against HDR+AA (a DX feature), since we support it in 3DMark06. If any hardware doesn't support that feature, then it... simply doesn't. :???: DF24 is supported because it helps shadow rendering on hardware that supports the feature (just like D24X8 on other cards). I am still not sure how this can be seen as a "double standard". We now have multi-vendor hardware shadow mapping support, which we didn't have in our previous 3DMark. What's wrong with us supporting more vendors' hardware shadow mapping?
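
For the developers reading along: in D3D9 terms these are just depth formats that an application probes for. A rough sketch of the detection (illustrative only, not our actual code):

Code:
#include <d3d9.h>

// Vendor FOURCC depth formats are probed like any other format.
const D3DFORMAT FOURCC_DF24 = (D3DFORMAT)MAKEFOURCC('D', 'F', '2', '4');

bool SupportsDepthTexture(IDirect3D9* d3d, D3DFORMAT depthFmt)
{
    // Can this adapter create a texture with the given depth-stencil
    // format? True for DF24 on R5xx-class hardware, for D24X8 depth
    // textures on NVIDIA hardware, and so on.
    return SUCCEEDED(d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_TEXTURE, depthFmt));
}

If the DF24 probe passes, the DF24+FETCH4 path can be taken; otherwise the D24X8 (+PCF) path is used where that probe passes, or a pixel-shader depth-compare fallback otherwise.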

I think that all these abbreviations confuse a lot of people..

* edit: corrected some errors in the post.

Cheers,

Nick
 
Last edited by a moderator:
Cowboy X said:
I've noticed he is answering just about every question except that .
If "he" is me, I'm sorry but I need to sleep sometimes.. ;) Anyway, I do answer all questions the fastest I can.

Cheers,

Nick
 
Can someone clear up the CPU side of it for me? From the white paper, it seems that you have:

1 thread or CPU doing the main logic, which calls 1 thread or CPU doing physics and 1 or more threads or CPUs doing AI, depending on how many extra cores you have.

So ideally the new Intel 955, with 2 cores plus HyperThreading (4 logical processors), would benefit most, as you'd have 1 logic, 1 physics and 2 AI.

If the AGEIA chip is present, does it do all the work, and what "cpu" score does it get? Nick, can you clarify?
 
Neeyik said:
Read the manual - it's never going to run at 10+ fps because the tests are set to a fixed frame rate of 2.

It's not that the test runs at a fixed frame rate...it runs faster or slower depending on the speed of your CPU(s). It's worded poorly in the help file.

The CPU tests use fixed frame-based rendering, meaning that there are X frames per second of gameplay (in this case, 2). So when it renders 40 frames, that always equals the exact same 20 seconds of gameplay. Quake/Doom timedemos are like this, though I think it's usually 20 frames per second of gameplay in those. They made the CPU tests run with frame-based rendering, instead of time-based like the graphics tests, to ensure that all CPUs end up running the same number of pathfinding and physics intervals on the same number of units.

In theory, if your CPU was fast enough, you could run the tests at 3 or 4 frames per second: it would still render 40 and 60 frames for the two tests, equalling 20 and 30 seconds and performing calculations for the same number of physics and pathfinding intervals. Anyone got a seriously overclocked dual-CPU, dual-core Opteron system to test it out? :)
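
In rough code terms, the difference looks like this (a sketch with made-up function names; the real engine obviously isn't this simple):

Code:
#include <chrono>

static double SecondsSinceStart()
{
    using namespace std::chrono;
    static const auto start = steady_clock::now();
    return duration<double>(steady_clock::now() - start).count();
}

static void UpdateAndRender(double /*demoTime*/)
{
    // Stand-in for the real work: advance the demo to demoTime, draw.
}

// Time-based (graphics tests): a faster system renders MORE frames of
// the same 20 seconds of demo time.
void RunTimeBased()
{
    while (SecondsSinceStart() < 20.0)
        UpdateAndRender(SecondsSinceStart());
}

// Frame-based (CPU tests): every system renders exactly the same 40
// frames, each worth 0.5 s of gameplay (2 frames per gameplay second),
// so every CPU does identical physics/pathfinding work; a faster CPU
// just finishes the fixed workload sooner.
void RunFrameBased()
{
    const double kStep = 0.5;  // gameplay seconds per rendered frame
    for (int frame = 0; frame < 40; ++frame)
        UpdateAndRender(frame * kStep);
}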

People shouldn't be concerned that it runs at an extremely low frame rate. It's irrelevant, since you're just running those tests to get an overall score. But it *is* appropriate that the CPU is included in the overall score. 3DMark strives to be an overall gaming benchmark, not a pure graphics benchmark. It hasn't done a good job of this in the past, being fully graphics-bound nearly all the time. But this time, the CPU test stresses CPU functions from games almost exclusively: D* Lite pathfinding algorithm and Ageia physics, as well as game logic. It's multithreaded, so it's pretty forward-looking as only a couple of games so far take advantage of dual core CPUs (though that will become very common in the future).
 
JasonCross said:
I still can't reconcile why a GeForce 7800 gets no score with AA enabled. Enabling AA puts it in the same boat as an X800 or GeForce 6200 - able to complete only the CPU and SM2.0 tests. I think it should use the formula for those cards in that situation.
Yup, this is sticking out like a sore thumb, and I'm still waiting to see an answer to this question. I am way out of my depth when reading a lot of the technical stuff discussed over here at B3D, but the point JasonCross makes is very clear to me. So could a 3DMark representative explain to a non-tech-savvy person such as myself why the 7800 gets no score with AA enabled? It obviously skews the results when sites use your benchmark to make comparisons.
 
Last edited by a moderator:
dizietsma said:
So ideally the new Intel 955, with 2 cores plus HyperThreading (4 logical processors), would benefit most, as you'd have 1 logic, 1 physics and 2 AI.

If the AGEIA chip is present, does it do all the work, and what "cpu" score does it get? Nick, can you clarify?

It doesn't support the PhysX PPU, no.

I think in the case of 4 logical CPUs (P4 955), you'd have 1 main game-logic thread, 1 physics thread, and *4* pathfinding threads. At least, this is how FutureMark explained it to me.
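
If I understood them right, the layout would look something like this (a sketch using modern C++ threads; the function names and the one-pathfinding-thread-per-logical-CPU split are just my reading of FM's description, not their actual code):

Code:
#include <thread>
#include <vector>

static void GameLogic()   { /* main loop: dispatches work, renders */ }
static void Physics()     { /* Ageia physics stepping */ }
static void Pathfinding() { /* D* Lite pathfinding for a batch of units */ }

int main()
{
    // hardware_concurrency() reports 4 on a Pentium 955 EE (2 cores + HT).
    unsigned logical = std::thread::hardware_concurrency();
    if (logical == 0) logical = 1;  // the call may return 0 if unknown

    // Dedicated logic and physics threads, plus one pathfinding thread
    // per logical CPU, per the explanation relayed above.
    std::vector<std::thread> pool;
    pool.emplace_back(GameLogic);
    pool.emplace_back(Physics);
    for (unsigned i = 0; i < logical; ++i)
        pool.emplace_back(Pathfinding);

    for (std::thread& t : pool)
        t.join();
}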
 