Futuremark: 3DMark06

Ragemare said:
Can't reviewers manually work out what score the 7800 would get with AA on?

Yes, they can. Do you think they will?

I think there's more chance of them not even RUNNING the AA tests, since no score is generated for nVidia cards (and thus no interesting and pretty bar charts to show), than of them calculating what the score "would have been".

Which, if I were a cynical type, I would say is exactly what nvidia would prefer.
 
Joe DeFuria said:
Boy, do we ever need a B3D write-up on this...

Very true, Joe. If nothing else, the release of this 3DMark has made people think about things.

Myself, I think the CPU score is a godsend. I don't think it is overestimated, because Intel are promising quad cores, and what are you going to do with 4 cores? One for the operating system's background tasks and 3 for the game is my answer!

At present we have only 2 physical cores, so I do not mind 06 being weighted this way.
 
boltneck said:
So basically, I really did just spend several hundred dollars on a total dud that was marketed as a Ferrari. :cry:

This strikes me as bordering on unethical behavior by ATi.

I might as well have set my money on fire or purchased a GeForce FX or whatever that was.

This is about as bad as I have felt in a long time. Worst case of buyer's remorse ever.
Easy, killer, it's only a benchmark.
It isn't a game.
Even if your card supported everything the GF7 does, it would still run this benchmark like a turd.
 
dizietsma said:
Very true, Joe. If nothing else, the release of this 3DMark has made people think about things.

Myself, I think the CPU score is a godsend. I don't think it is overestimated, because Intel are promising quad cores, and what are you going to do with 4 cores? One for the operating system's background tasks and 3 for the game is my answer!

At present we have only 2 physical cores, so I do not mind 06 being weighted this way.


What would be even nicer is if Windows Vista would let us configure which programs use which CPUs :)
 
dizietsma said:
Myself, I think the CPU score is a godsend.

I have no problem with the test itself, nor do I have a problem with the CPU test figuring into the final score. The only real issue I have is that, IMO, they put far too much weight on the CPU test when calculating the overall score.

No matter how many cores the CPU manufacturers offer us, that doesn't mean games will be able to make near-linear use of them.
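
For concreteness, here is the weighting in question, as I read it from the 3DMark06 whitepaper (a sketch only, sanity-checked against scores posted later in this thread):

Code:
# Sketch of the 3DMark06 overall score, per my reading of the whitepaper.
# Graphics and CPU scores combine as a weighted harmonic mean, with the
# graphics side weighted 1.7 and the CPU side 0.3.
def overall_score(sm2, cpu, sm3=None):
    if sm3 is not None:
        gs = 0.5 * (sm2 + sm3)   # SM3.0 hardware: average of the two graphics scores
    else:
        gs = 0.75 * sm2          # SM2.0-only hardware: scaled-down SM2.0 score
    return 2.5 * 1.0 / ((1.7 / gs + 0.3 / cpu) / 2)

A harmonic mean punishes whichever term is weaker, which is why a low CPU score drags the total down more than a simple weighted average would.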
 
Jawed said:
Compared with 7800GTX SLI in 3DMk06, obviously, yes.

Jawed

I just tested 515/1100 on the card instead of 485/1100 and got


FX-55 at 2400MHz

4569 => 2358 / 2035 / 940 (SM2.0 / HDR-SM3.0 / CPU)

Which is higher than my score with the FX-55 at 2600MHz and the card at 485/1100. So on my system at least this bench is overwhelmingly GPU-limited, i.e. a 40MHz GPU gain is worth more than a 200MHz CPU gain, and that is not even counting memory bandwidth, which this test seems to like.


Indeed, the top score for a GS is

http://service.futuremark.com/compare?3dm06=8451

and even taking away 1000 points for his dual-core A64, he is still ahead of my card at lower clocks, showing there is still some headroom to go.
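
As a sanity check on the whitepaper formula sketched earlier in the thread, plugging my 4569 => 2358 / 2035 / 940 breakdown into it:

Code:
# dizietsma's sub-scores through the (assumed) whitepaper formula
sm2, sm3, cpu = 2358, 2035, 940
gs = 0.5 * (sm2 + sm3)                            # 2196.5
print(2.5 * 1.0 / ((1.7 / gs + 0.3 / cpu) / 2))   # ~4574, vs the reported 4569

That lands within rounding noise of the reported total, and the 1.7 vs 0.3 weighting also explains the GPU-limited behaviour: the graphics term simply counts for far more.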
 
Joe DeFuria said:
The only real issue I have is that, IMO, they put far too much weight on the CPU test when calculating the overall score.

No matter how many cores the CPU manufacturers offer us, that doesn't mean games will be able to make near-linear use of them.

That's where we'll have to wait and see. Already CPUs are just starting to make fairly big differences in games with multithreading tweaks, bigger gains than whether the IHV's AF algorithm is set to high quality or not, and we make a lot of noise about that, do we not?

Who's to know what gains we can have when both IHV driver teams and game developers start working on offloading to the multiple cores? Futuremark's "guess" might be a massive underestimate.

I can just see nVidia marketing SuperNitroTurbocache2 in 2007, where they are referring to the CPU and not the memory :)
 
Re: CPU scaling

Keep in mind that if you change your CPU speed by overclocking the HT bus, you are not only increasing your CPU performance, but also the memory and, of course, the HT performance as well. The proper way to test this is, as suggested, to underclock your CPU by lowering the multiplier.
 
With regard to the SM2.0 and HDR/SM3.0 results being affected by the CPU, here are my results on an Athlon64 3500+ versus an Athlon64 X2 4200+ (both running with an NVIDIA GeForce 7800GT):

Athlon64 3500+
SM2.0 Score 1518
HDR/SM3.0 Score 1525
CPU Score 866

Athlon64 X2 4200+
SM2.0 Score 1493
HDR/SM3.0 Score 1512
CPU Score 1655
 
mrcorbo said:
Re: CPU scaling

Keep in mind that if you change your CPU speed by overclocking the HT bus, you are not only increasing your CPU performance, but also the memory and, of course, the HT performance as well. The proper way to test this is, as suggested, to underclock your CPU by lowering the multiplier.

That's a good point. The other person Jawed quoted was running at 250MHz HTT, so the PCIe speed might have been off as well. PCIe speed is yet another kettle of fish; normally it does not make much difference, but who knows yet. I certainly have not tested it.

My figures with my unlocked FX-55 were done at 11x200, 12x200 and 13x200 to keep things uniform, so I do not have to worry about changes in HTT and PCIe speeds.
 
chavvdarrr said:
question:
Why did FM decide that no current-generation card should be able to manage 25 fps!? [...] Personally I'd prefer NOT to be hit so hard in the face with "upgrade, upgrade, UPGRADE, UPGRADE"
I think previous 3DMs debuted similarly, with high-end cards scoring around 5k, but I wonder if this "low" performance is a clue as to how long until we can expect a DX10 3DM?

I wish Nick had fully clarified FM's decision to exclude NV scores when AA is applied. GraphiX and Mariner made the point most succinctly, so I'll just (re-)repeat for emphasis:
GraphiX said:
3DM06 AA Test:
X850 does run the SM2.0 tests with AA -> SM2.0 score counts (because it cannot run the SM3.0/HDR tests at all)
7800 does run the SM2.0 tests with AA -> no score counts (because it cannot run the SM3.0/HDR tests with AA)
X1800 XT does run the SM2.0 tests with AA -> SM3.0 score counts (because it can run all tests)
Mariner said:
It strikes me that Futuremark's design decisions, however honestly conceived, penalise ATI's current high-end chip for not supporting Fetch4. On the other hand, the R5X0 series of chips are able to support AA + HDR, something no NVidia chip is able to do, yet these NVidia chips are not penalised in the same way.

We know that chips which don't support the required depth textures for the PS2.0 shadowing are forced into a relatively expensive shader workaround, which is fine by me, as Futuremark have decided 24-bit accuracy is required. On the other hand, if this is acceptable, why aren't chips which are unable to support AA + HDR also forced into a shader workaround?

[...]

This being the case, surely it would have been logical for Futuremark to include a shader workaround for SM3.0 cards unable to support AA + HDR, generating the AA in-engine (the technique recommended by NVidia's Chief Scientist)?
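To spell out GraphiX's table as a rule (a sketch of my reading, not Futuremark's actual logic):

Code:
# Which overall score, if any, is generated with AA enabled:
# my reading of GraphiX's summary above, not Futuremark's actual code.
def aa_overall_score(runs_sm3_tests, runs_hdr_with_aa):
    if not runs_sm3_tests:
        return "SM2.0 score"   # SM2.0-only hardware, e.g. X850
    if runs_hdr_with_aa:
        return "SM3.0 score"   # full-featured hardware, e.g. X1800 XT
    return None                # SM3.0 card without HDR+AA, e.g. 7800: no score at all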
Richteralan said:
OK, about the CPU score. I understand CPU testing is essential, but I don't understand why CPU scores can influence the final 3DMark score to such a degree.

Can a dual-core CPU give you better shader performance?
Can a dual-core CPU give you HDR+AA?
Can a dual-core CPU give you SM3.0?

While a dual-core CPU IS influential in real game situations, it won't be as influential as the GPU.
So just imagine someone with a fast CPU + 6600GT getting a higher 3DMark06 score than someone with a slower CPU + 6800GT. I really can't see a game being CPU-limited in this way, now or in the future.
I dunno, a slow CPU can probably hamper gameplay as much as a slow GPU. I'm of the opinion that framerate comes first, everything else second. One could argue over how much of an improvement DC will show over SC, of course, but I think the 360 and PS3 should make exploiting DC quite common, and possibly treated like fancy effects: if the majority of the PC market is SC, then the second core is just used for neat tricks like more boxes or boulders.

I'd like to see more testing done on Q4, considering how much of an improvement it sees with DCs. Specifically, we'd want to test without AA and preferably at 12x10, to more closely mimic 3DM's vision of future games.

As for the arguments over uniquely accelerated features, I think they should be held in check until we see more benches using 3DM06's thoughtfully-included switches to disable "hardware" shadow mapping and FP filtering.

Hanners' limited (NV-only) testing does seem to suggest that ATI wasn't too silly to skip "fixed" FP16 filtering (Nick wasn't kidding when he said their SW fallback was "highly efficient"), though I'd like to see screenshot comparisons to examine IQ differences, if any. OTOH, NV's huge hits without HSM (25% on a 6800GT, 17% on a 7800GT) raise the question of why FM couldn't have implemented a SW-based HDR AA workaround and considered it an equivalent situation.

Ah, FM says, but AA isn't part of their standard suite, just an option. Well, is HDR part of SM3's standard suite? Is HSM? FP filtering? If the answer is that 3DM isn't a D3D test but a gamer's test, then surely gamers use AA as much as they would fancy shadows, as an IQ enhancer?

I'm cool with most of the test. I think it's eminently fair to take advantage of HW features, as surely game devs would do the same. Only the lack of AA score reporting for the GF6 and GF7 puzzles me. Though we can calculate it by hand, we shouldn't have to, and forcing us to do so only diminishes the relevance of the holistic "3DMarks".
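
For anyone who does want to do it by hand: the SM2.0 tests still report fps with AA on, so you can convert those to an SM2.0 score and combine it with the CPU score the same way an SM2.0-only card like the X850 gets scored with AA. The 120x-average-fps conversion here is my reading of the whitepaper, so treat it as an assumption:

Code:
# Hand-calculating the AA score a GF6/GF7 never gets reported.
# Assumptions (from my reading of the whitepaper): SM2.0 score is 120x the
# average fps of the two SM2.0 graphics tests, and the card is then scored
# via the SM2.0-only path, as an X850 is with AA enabled.
def manual_aa_score(gt1_fps, gt2_fps, cpu_score):
    sm2 = 120 * (gt1_fps + gt2_fps) / 2
    gs = 0.75 * sm2
    return 2.5 * 1.0 / ((1.7 / gs + 0.3 / cpu_score) / 2)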
 
Hanners said:
With regard to the SM2.0 and HDR/SM3.0 results being affected by the CPU, here are my results on an Athlon64 3500+ versus an Athlon64 X2 4200+ (both running with an NVIDIA GeForce 7800GT):

Athlon64 3500+
SM2.0 Score 1518
HDR/SM3.0 Score 1525
CPU Score 866

Athlon64 X2 4200+
SM2.0 Score 1493
HDR/SM3.0 Score 1512
CPU Score 1655

I think that's pretty interesting: a single-core 3500+ that beats a dual-core X2 4200+.

US
 
IgnorancePersonified said:
A64 X2 @ 2100MHz, 9800 Pro @ 370 core / 340 mem
3DMark Score: 606
SM2.0 Score: 281
CPU Score: 1589
NocturnDragon said:
3DMark Score: 996
SM2.0 Score: 480
CPU Score: 1006

GFX: RADEON 9800 XT @434MHz/370MHz
CPU: Intel Pentium M 1.60GHz @ 2.4GHz
Thought I'd kick in my 9800P scores, just to see how much my AXP 2400+ (@2000MHz, 256kB, single-channel DDR266) is crippling me:

545 3DMarks
SM2.0: 246
CPU: 654

Honestly, not too bad compared to a dual 2.1GHz A64 with roughly three times the RAM bandwidth (64-bit DDR266 at ~2.1GB/s -> 128-bit DDR400 at ~6.4GB/s), and merely iffy compared to a 2.4GHz P-M. Too bad the difference isn't as small in games, where I'm sure faster RAM or more L2 would greatly benefit me.

Nocturn's XT's extra VRAM likely contributes to his much higher SM2 score.
 
Unknown Soldier said:
I think that's pretty interesting: a single-core 3500+ that beats a dual-core X2 4200+.
"Beats" is a pretty strong word. The difference in the 3D tests is a mere 1-2%, practically margin of error.

But I'd be interested to see 3DM06 performance with both pre- and DC-optimized drivers on DC systems, to see if there are any gains (or even losses, if the drivers and 3DM conflict, as with Q4).
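
"Beats" looks even weaker if you push Hanners' sub-scores through the whitepaper formula from earlier in the thread (same assumptions as before):

Code:
# Hanners' sub-scores through the (assumed) overall formula
def overall(sm2, sm3, cpu):
    gs = 0.5 * (sm2 + sm3)
    return 2.5 * 1.0 / ((1.7 / gs + 0.3 / cpu) / 2)

print(overall(1518, 1525, 866))    # 3500+   : ~3416
print(overall(1493, 1512, 1655))   # X2 4200+: ~3809

So despite graphics scores within 1-2%, the X2 would come out roughly 11% ahead overall, purely on the CPU score.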
 