Futuremark: 3DMark06

Richteralan said:
HT has nothing to do with memory bus performance in A64 architecture.

This is not correct. Your memory frequency scales with your HTT reference clock by default, or you can set a divider.
 
JasonCross said:
Everyone, stay tuned. I've been talking with Futuremark and ATI and have cleared up a couple of things on the whole 24-bit depth stencil texture issue. There should be an update to the article (second page) soon.

I'm still a little concerned that the benchmark doesn't use some form of parallax mapping technique - it's already in FEAR, it's a major feature of UE3, it's going to be in a lot of SM3.0 games, and it's not in 3DMark06. It seems like one of the defining shader effects of 2006-era 3D engines to me.

From my "better late than never" file, thanks for a nice read, Jason. :smile:

We have since learned from ATI. . . Radeon X1300 and X1600 cards, as well as upcoming ATI hardware, support the 24-bit depth stencil textures and Fetch4,

The above lines (hopefully I didn't do violence to the sense with that ellipsis), from your Update, really made me wonder why FM didn't wait another week to release this, as doing so would have removed at least one cause of grief for them the last two days.
 
mrcorbo said:
This is not correct. Your memory frequency scales with your HTT reference clock by default, or you can set a divider.

That's what I said: memory bus performance is governed by the memory divider. And the A64 memory controller adjusts the memory frequency automatically.
 
.Melchiah. said:
Weird. My score is slightly better, even though you have a better video card. Did you put "mipmap detail level" to "performance" instead of "quality" in the Catalyst Control Center? That could explain it.
It just might. I stick with Balanced, or whatever the default is, b/c I don't appreciate MIP-map bands.

I decided to try the older 3DMarks with Catalyst 6.1 drivers as well;
3DMark05 score: from 2274 to 2382
3DMark03 score: from 5062 to 5059
3DMark01 score: from 14492 to 13099 =O !
I haven't run 03 or 05 in a while, but I just ran 01 for kicks, and my CPU/RAM are holding me to 10900 (Cat 5.13 Balanced).

IgnorancePersonified said:
What's the vid card clocks Pete?
Stock speeds: 378/338, according to ATT. I'll try OCing too, eventually. I think I can go north of 420/380, from the half hour I spent using ATT's OCing tool, and then testing in CSS.
But the CPU doesn't seem to weigh in as much as people are saying. NocturnDragon's 3DMark score smashes mine, yet the CPU score is 1/3 of mine.
True, at least on our piddling 128MB cards. :)

Chalnoth said:
Er, it's very easy to implement FP filtering in the shader for simple situations (and since FP filtering is probably only used for tonemapping, this should be extremely easy). It's impossible to implement multisampling AA in the shader.
Thanks. I figured as much, but had to ask. So, SS is the only option, and that means a huge hit and possibly not-so-great IQ. (Still doesn't explain the NA score, though.)
One could obviously implement supersampling AA in software, but this would hardly be equivalent either in performance or in visual quality.
Well, does it have to be "SW-based?" NV does SSAA in "hardware," no? That won't work with 3DM06's HDR?
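
(For the curious, "implementing FP filtering in the shader" just means taking four point samples and blending them yourself. Here's a minimal CPU-side sketch of the math in Python; fetch_texel is a hypothetical point-sampled read of the FP16 texture, not anyone's real API.)

[code]
import math

def lerp(a, b, t):
    return a + (b - a) * t

def bilinear_fp(fetch_texel, u, v, width, height):
    """Manual bilinear filter: four point samples of an FP texture,
    blended in the 'shader' because the TMU can't filter FP16 itself."""
    # Map normalized UVs to texel space (half-texel offset for texel centres).
    x = u * width - 0.5
    y = v * height - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    # Four unfiltered (point) samples; a real shader would wrap/clamp coords.
    t00 = fetch_texel(x0,     y0)
    t10 = fetch_texel(x0 + 1, y0)
    t01 = fetch_texel(x0,     y0 + 1)
    t11 = fetch_texel(x0 + 1, y0 + 1)
    # Blend horizontally, then vertically -- exactly what fixed-function
    # bilinear hardware does for integer formats.
    return lerp(lerp(t00, t10, fx), lerp(t01, t11, fx), fy)
[/code]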

geo said:
really made me wonder why FM didn't wait another week to release this, as doing so would have removed at least one cause of grief for them the last two days.
Well, two reasons spring to mind: to avoid X1900's shadow (and associated 7900 "leaks" ;)), and to avoid being cast as ATI's plaything (a la Valve).

It's a tough, cynical, HDR world, this PC gaming.
 
Pete said:
OTOH, NV's huge hits w/o HSM (25% on a 6800GT, 17% on a 7800GT) beg the question why FM couldn't have implemented a SW-based HDR AA workaround and considered it an equivalent situation?

Ah, FM says, but AA isn't part of their standard suite, just an option. Well, isn't HDR part of SM3's standard suite? HSM? FP filtering? If the answer is that 3DM isn't a D3D test, but a gamer's test, then surely gamers use AA as much as they would fancy shadows, as an IQ enhancer?
A big "Ditto" from me. Haven't checked out all the comments here but I'd appreciate Nick addressing this (if he hasn't already).
 
Richteralan said:
That's what I said: memory bus performance is governed by the memory divider. And the A64 memory controller adjusts the memory frequency automatically.


What mrcorbo said originally was that raising the HT bus speed also increases the memory bus speed, which is true (this is easily shown by CPU-Z etc.). It also raises the PCIe speed if that is not locked. Both affect your performance, as does the increased speed of the CPU.
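
(For anyone who wants the arithmetic, a rough Python sketch of how the A64 derives its memory clock. The ceiling-divider rule is the commonly cited behaviour; exact BIOS behaviour varies, so treat the numbers as illustrative.)

[code]
import math

def a64_mem_clock(htt, multiplier, mem_setting):
    """Approximate A64 memory clock in MHz.
    htt         = reference ('HTT') clock, nominally 200 MHz
    mem_setting = nominal DDR base clock (200 = DDR400, 166 = DDR333)"""
    cpu_clock = htt * multiplier
    # The on-die controller runs memory at CPU clock / integer divider;
    # the divider comes from the multiplier and the BIOS memory setting.
    divider = math.ceil(multiplier * 200 / mem_setting)
    return cpu_clock / divider

# Raising HTT drags the memory clock up along with the CPU clock:
print(a64_mem_clock(200, 10, 200))  # 2000/10 -> 200.0 MHz (DDR400)
print(a64_mem_clock(220, 10, 200))  # 2200/10 -> 220.0 MHz
print(a64_mem_clock(200, 10, 166))  # divider 13 -> ~153.8 MHz ('DDR333')
[/code]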
 
Pete said:
Ah, FM says, but AA isn't part of their standard suite, just an option. Well, isn't HDR part of SM3's standard suite? HSM? FP filtering? If the answer is that 3DM isn't a D3D test, but a gamer's test, then surely gamers use AA as much as they would fancy shadows, as an IQ enhancer?

I'd agree with this, Pete, except that gamers do have the option of raising the screen resolution instead (I am assuming they have a good enough monitor). Indeed, Futuremark themselves have put the default screen resolution up and yet again left AA out of the standard test.

In the non-standard tests this would be a problem if you could not run tests individually, and so could not judge how one card's performance compares to another's because one card says N/A. But you can run the tests where they are supported; the issue here is that people think the scoring is not fair for non-standard tests.

To me this is a bit strange, because for the last few months/years this forum has tended to pour scorn on Futuremark's bench, its scoring, and the way the IHVs use the scores to sell cards, and to suggest that anybody who buys a card based on this is tending towards being a bit daft. But now it seems this is of the utmost importance.

| I put on my green tinged pro futuremark hat |

And why is this? Because ATI is not favoured. Whenever ATI is not favoured we get the most long-winded threads, where a court is summarily set up and the "injustice" to ATI is gone over in such minute detail that only paranoia can be thought to be in the heads of the adjudicators. In swoop the cardinals in their red (how appropriate) gowns: "Everybody expects the B3D inquisition!" Fear and surprise is our .....

Remember that poor bloke from Anandtech who came over here and went grey-haired before he had to leave, saying he had better things to do (as they all do, like Nick[FM])? Apart from Kyle[H], who was rude enough to refuse to leave, and eventually the thread had to be locked before everyone lost their marbles completely.

You wait, you lot just wait until Nvidia do not have the perfect rub of the green (pun intended). I am going to start my very own thread in like manner, even if I have to write all 400 posts myself.. :D

|rant over, green tinged pro futuremark hat back off|

Phew, I feel better now :) Just out of interest, and with no connection to the above, where is WaltC nowadays?
 
I'm getting a whole lot of pauses due to what I think is texture swapping. So maybe 256MB of video memory isn't enough anymore, or maybe it's just because AGP doesn't have enough bandwidth for this?
 
dizietsma: Do you admit that Nvidia is performing better? I guess you do.

What bothers most people is that Nvidia's slight lead without AA is well known, but things are just the opposite when AA is enabled, based on real-life tests (games).
But 3DMark2006 simply can't compare competing IHVs' cards with AA enabled. This totally nullifies ATI's effort put into optimising bandwidth usage. So, whatever the reasons are, this synthetic benchmark simply can't test competing products. Because few people will bother doing separate game tests (mostly reviewers), most people will just run it, get their score, and form an idea of their system's capabilities.

I think there are several questionable things about 3DMark 2006 as a benchmark, but the main problem is that the overall score can be misleading and unrepresentative. Some cards can't run the SM3.0 tests, some cards don't get scored under certain circumstances, and the CPU score maybe weighs too heavily in the final score.
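
(For illustration, here's a rough Python sketch of a 3DMark06-style weighted overall score. The 1.7/0.3 graphics/CPU weights and the 2.5 scale are as I recall them from FM's whitepaper, so treat them as assumptions and verify before relying on them.)

[code]
def overall_score(sm2, hdr_sm3, cpu):
    """3DMark06-style overall score: a weighted harmonic-mean blend of
    graphics and CPU subscores. Weights (1.7 graphics / 0.3 CPU) and the
    2.5 scale are as recalled from FM's whitepaper -- verify before use."""
    gs = 0.5 * (sm2 + hdr_sm3)                 # combined graphics score
    return 2.5 / ((1.7 / gs + 0.3 / cpu) / 2.0)

# Same GPU scores, CPU score doubled -- watch the overall number move:
print(round(overall_score(1500, 1400, 800)))   # ~3231
print(round(overall_score(1500, 1400, 1600)))  # ~3677
[/code]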

Also, the fixed frame rate of the CPU test is very disappointing for the uninitiated, who don't know why it is so slow. Anyway, why even send pictures to your display at that framerate? Usually the CPU's activity is hidden, so why make it a slideshow? Why not just show a blue bar with the percentage done, then the final score? If I got it right, the CPU tests won't ever reach the 20 fps range. Then what's the reason for showing anything?
 
Mendel said:
I'm getting a whole lot of pauses due to what I think is texture swapping. So maybe 256MB of video memory isn't enough anymore, or maybe it's just because AGP doesn't have enough bandwidth for this?
A quick check of the graphics tests suggests that peak memory usage ranges from 188MB to 224MB (across all four of them), so it's tight but not quite over 256MB - unlike in 05, which was frequently over its claimed 128MB requirement.
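
(Back-of-the-envelope on the swapping theory, assuming AGP 8x's theoretical ~2.1 GB/s peak; real-world transfer rates are a fair bit lower, so the hitch would only be worse:)

[code]
# Rough estimate of the stall from paging textures back in over AGP.
# 2.1 GB/s is AGP 8x's theoretical peak; the overflow size is made up.
overflow_mb = 24                 # say, working set minus what stays resident
agp8x_gb_per_s = 2.1
stall_ms = (overflow_mb / 1024) / agp8x_gb_per_s * 1000
print(f"~{stall_ms:.0f} ms to move {overflow_mb} MB")  # ~11 ms -> visible hitch mid-frame
[/code]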

Hubert said:
If I got it right, the CPU tests won't ever reach the 20 fps range.
Make that 2 fps and then you'll be right.
 
I know that; the point was, why display such a thing? It is unenjoyable, and it never will be. That's different from when I got my X800 XL card and was happy that 3DMark 2003 finally displayed fluid images. But that will never happen with 3DMark 2006's CPU test. Ever.
 
What about the CPU tests in 3DMark2001, 03 and 05? Did they run any better (for CPUs of their release time)?
 
Neeyik said:
What about the CPU tests in 3DMark2001, 03 and 05? Did they run any better (for CPUs of their release time)?

Actually, there it did not strike me the way it does here, because it wasn't 2 fps; it was more like 4-5. :)
But thinking about it, why have those, too?
 
Below are some old 3DMark runs (the old ones in my ORB account for each relevant 3DMark):

http://service.futuremark.com/compare?2k3=844616
CPU Test 1 = 35.7 fps
CPU Test 2 = 7.0 fps

http://service.futuremark.com/compare?3dm05=778247
CPU Test 1 = 1.8 fps
CPU Test 2 = 3.1 fps

I think the problem FM have faced is finding game-like CPU code, which they can run as a routine benchmark, that sufficiently loads up the processor. With the last two tests it was easy enough, given the use of software vertex processing. Now they've just got physics, AI, and the usual game code - by itself, it's just not enough, hence the ludicrously low fixed frame rate.
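
(A toy Python sketch of what a fixed-framerate CPU test amounts to: the per-frame workload and timestep are locked, so the score is really elapsed wall-clock time, not the fps shown on screen. The workload function is obviously just a stand-in.)

[code]
import time

FRAMES = 60
FIXED_DT = 0.5   # timestep locked at 0.5 s/frame -> the on-screen 2 fps

def game_step():
    # Stand-in workload for the real per-frame physics/AI/game code.
    return sum(i * i for i in range(200_000))

def run_cpu_test():
    start = time.perf_counter()
    for _ in range(FRAMES):
        game_step()                      # identical work every run...
    elapsed = time.perf_counter() - start
    # ...so the score reflects how fast the CPU chewed through it,
    # not the fixed rate at which frames are displayed.
    return FRAMES * FIXED_DT / elapsed   # >1.0 = faster than the locked rate

print(f"CPU test speed ratio: {run_cpu_test():.2f}")
[/code]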
 