Futuremark: 3DMark06

jb said:
Not only do I doubt it would increase the package by 2x (sure, it will be bigger), but who cares how big the package is anyway? A test should be constrained by its features, not its size! This was a very, very weak excuse.

Seriously? You really think proving that one card is faster than another at 3Dc justifies adding another 600MB? :rolleyes:

This is just one feature! One feature that nearly no one is even using.
 
Errr, this is just compression of normals - how many normal maps are there? It would add the size of the compressed normal map(s), or the normals could be compressed on install.
 
inefficient said:
Seriously? You really think proving that one card is faster than another at 3Dc justifies adding another 600MB? :rolleyes:

This is just one feature! One feature that nearly no one is even using.

It was more of a counterpoint. And I really doubt it's 600MB more in file size unless you have data to back that up :) Not including 3Dc because it's not used that much is a much better reason than that it makes the size too big....
 
Dave Baumann said:
Errr, this is just compression of normals - how many normal maps are there? It would add the size of the compressed normal map(s), or the normals could be compressed on install.
The latter would require uncompressed normal maps to be in the package though; either solution is going to result in a bigger download package. As to how many normal maps there are in 06, it's anyone's guess, but I do know that GT2 in 3DMark03 required 140MB of uncompressed normal maps. One could speculate from that as to how much more one of the tests in 06 requires.
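As a rough back-of-the-envelope (assuming 32-bit source maps, which both DXT5 and 3Dc bring down to a quarter of the size at 8 bits per texel): 140MB of uncompressed maps becomes ~35MB as DXT5. Shipping a second, 3Dc copy of every map would then add roughly another 35MB per affected test, while shipping the uncompressed sources for compress-on-install would add the full 140MB.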
 
Neeyik said:
The latter would require uncompressed normal maps to be in the package though; either solution is going to result in a bigger download package.
Eh? Uncompressed normals are presumably already there, so compressing on install would result in the same download size as now (save for a small conversion routine).
 
I was presuming that the normal maps in 06 are already compressed (DXT5) in the download package - or is this not the case?
 
Neeyik said:
I was presuming that the normal maps in 06 are already compressed (DXT5) in the download package - or is this not the case?
3DMark06 thus uses the green/alpha channels of DXT5 textures for normal maps? In that case 3Dc probably wouldn't improve performance, as it has the same bandwidth requirements / memory footprint (though IIRC such DXT5 normal maps need one more instruction in the pixel shader than 3Dc would, so there might be a small performance improvement). The quality of the normal maps would be better, though...
It still would not necessarily mean you'd need twice the space: in the download package the normal maps could be stored in 3Dc only, with the same size as the current DXT5 textures (*). The DXT5 version could then be created upon installation; half the values can be put in the DXT5 alpha channel without any recompression, while the other half needs to be decompressed/recompressed into the green channel. There should be no loss of quality there, or maybe a very, very slight loss compared to generating it from the uncompressed maps. Obviously, the other way around (3Dc created from DXT5) would be pointless as far as quality is concerned.
(*) Not quite true for zipped packages. Since the red/blue components are unused and thus presumably always 0, those DXT5 textures should compress somewhat better. Though this only affects the 20 red/blue endpoint bits out of 128 per block, so the potential gain isn't that big.
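To make the repacking concrete, here's a minimal sketch of the per-block conversion (my own illustrative code, not anything from Futuremark; all type and function names are mine), assuming the usual block layouts: a 3Dc block is two 8-byte single-channel blocks (X then Y), and a DXT5 block is an 8-byte alpha block followed by an 8-byte DXT1-style colour block. The green re-encoder is deliberately naive (plain min/max endpoints):

Code:
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative block layouts:
 * 3Dc/ATI2: two 8-byte single-channel blocks, X then Y.
 * DXT5: 8-byte alpha block + 8-byte DXT1-style colour block. */
typedef struct { uint8_t x[8]; uint8_t y[8]; } Block3Dc;
typedef struct { uint8_t alpha[8]; uint8_t color[8]; } BlockDXT5;

/* Decode one 8-byte single-channel block (3Dc channels and DXT5 alpha
 * share this format: two 8-bit endpoints + 16 x 3-bit indices). */
void decode_channel(const uint8_t b[8], uint8_t out[16])
{
    uint8_t p[8];
    p[0] = b[0];
    p[1] = b[1];
    if (b[0] > b[1]) {                       /* 8-level mode */
        for (int i = 1; i <= 6; i++)
            p[i + 1] = (uint8_t)(((7 - i) * b[0] + i * b[1]) / 7);
    } else {                                 /* 6 levels + 0 and 255 */
        for (int i = 1; i <= 4; i++)
            p[i + 1] = (uint8_t)(((5 - i) * b[0] + i * b[1]) / 5);
        p[6] = 0;
        p[7] = 255;
    }
    uint64_t bits = 0;                       /* 48 bits of indices */
    for (int i = 0; i < 6; i++)
        bits |= (uint64_t)b[2 + i] << (8 * i);
    for (int t = 0; t < 16; t++)
        out[t] = p[(bits >> (3 * t)) & 7];
}

/* Re-encode 16 values into the green channel of a DXT1-style colour
 * block: 6-bit endpoints, only 4 interpolated levels - hence the
 * "very very slight loss" above. (The colour block of a DXT5 texture
 * is always decoded in 4-colour mode, so endpoint order is moot.) */
void encode_green(const uint8_t in[16], uint8_t blk[8])
{
    uint8_t lo = 255, hi = 0;
    for (int t = 0; t < 16; t++) {
        if (in[t] < lo) lo = in[t];
        if (in[t] > hi) hi = in[t];
    }
    uint8_t g0 = hi >> 2, g1 = lo >> 2;      /* naive min/max, 6 bits */
    uint16_t c0 = (uint16_t)(g0 << 5);       /* RGB565, red/blue = 0 */
    uint16_t c1 = (uint16_t)(g1 << 5);
    blk[0] = (uint8_t)(c0 & 0xff); blk[1] = (uint8_t)(c0 >> 8);
    blk[2] = (uint8_t)(c1 & 0xff); blk[3] = (uint8_t)(c1 >> 8);

    uint8_t pal[4];
    pal[0] = (uint8_t)((g0 << 2) | (g0 >> 4));    /* expand 6 -> 8 bits */
    pal[1] = (uint8_t)((g1 << 2) | (g1 >> 4));
    pal[2] = (uint8_t)((2 * pal[0] + pal[1]) / 3);
    pal[3] = (uint8_t)((pal[0] + 2 * pal[1]) / 3);

    uint32_t idx = 0;                        /* 16 x 2-bit indices */
    for (int t = 0; t < 16; t++) {
        int best = 0, bestd = 256;
        for (int i = 0; i < 4; i++) {
            int d = abs(in[t] - pal[i]);
            if (d < bestd) { bestd = d; best = i; }
        }
        idx |= (uint32_t)best << (2 * t);
    }
    for (int i = 0; i < 4; i++)
        blk[4 + i] = (uint8_t)(idx >> (8 * i));
}

/* One 4x4 block: 3Dc -> "DXT5 normal map" (X in alpha, Y in green). */
void block_3dc_to_dxt5(const Block3Dc *src, BlockDXT5 *dst)
{
    /* X half: 3Dc channel blocks and DXT5 alpha blocks use the same
     * 8-byte format, so no recompression is needed at all. */
    memcpy(dst->alpha, src->x, 8);

    /* Y half: decompress, then requantise into the green channel. */
    uint8_t vals[16];
    decode_channel(src->y, vals);
    encode_green(vals, dst->color);
}

A real encoder would pick better endpoints, but even this shows the asymmetry: the X half is a straight memcpy, while the Y half pays a small quantisation cost going through the 6-bit, 4-level green channel.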
 
I find the griping over the lack of parallax mapping really pathetic. Hey, they aren't testing physics on the GPU either (where ATI will presumably win big, if you read the GPGPU papers); it's all a conspiracy! I don't see people crying that they should have spherical harmonics/PRT or other "future/uncommon today" effects, because those won't show an ATI advantage. Likewise, 3Dc isn't going to improve ATI's performance in 3DMark06, only its IQ, since it already uses DXT5.

This is the same situation as the vertex-heavy shadow volume arguments of past benchmarks. Those favored ATI, and Nvidia fans were griping that the tests were unrealistic. Now ATI f*nb*ys are upset over the results. If '06 showed Nvidia soundly losing, none of these red herrings would even be swimming around.
 
Yeah, I find it pretty funny how similar the tone here is now to that when NV30 was showing so poorly in '03. Right down to the unshakable belief that with future games their favored architecture is going to show its "real" performance.

I'm not comparing the technology of NV30 to R520, because R520 is a much stronger design than NV30 was. But it sure feels like I've gone into the way-back machine, back to the 3DMark 2003 days.

Now, WRT the complaints about Nvidia's parts getting no score when AA is enabled: what's wrong with just comparing the SM2.0 scores and including the SM3.0 results with either a 0 for the Nvidia cards, or not including them in the results at all because they don't support it? I mean, the overall 3DMark score in '06 is a pretty poor way to compare different cards anyway, IMO. With the CPU score included it has become more of a platform result, and any difference between two cards using the same CPU is actually going to be lessened in the overall 3DMark score, because you are getting the exact same score for the CPU.

Basically, you are complaining about the inability to do something that is inadvisable to do in the first place.
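To illustrate the dilution with made-up numbers (just a toy weighting, not Futuremark's actual formula): if the overall score were a 70/30 weighted sum of graphics and CPU components on the same scale, a 20% graphics gap between two cards on the same CPU shrinks to roughly 0.7 × 20% = 14% in the overall score.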
 
Hubert said:
What bothers most people is that Nvidia's slight lead without AA is well known, but things are just the opposite with AA enabled, based on real-life tests (games).
But 3DMark06 simply can't compare competing IHVs' cards with AA enabled. This totally nullifies ATI's effort put into optimising bandwidth usage. So, whatever the reasons are, this synthetic benchmark simply can't test competing products. Because few people will bother doing separate game tests (mostly reviewers), most people will just run it, get their score, and form an idea of their system's capabilities.


"This totally nullifies Ati's effort put in optimising bandwith usage"

and

"Because few people will bother doing separate game tests (mostly reviewers) most people will just run it, got their score and an idea about their systems capabilities."

For your second point

I can assure you that most people, presumably gamers, when assessing their system's capabilities, will run the standard test, see 25fps, and then decide not to apply AA/AF on top of that just to decrease their fps further. This is a theoretical test, not a practical one.

On your first point

No, because the SM3 tests, where AA cannot be applied on Nvidia, are heavily GPU biased and not bandwidth limited at all, I think. The SM2 tests might be, but then the capability of each card can be measured in turn.
 
Well, it makes no sense to report a complete score with AA on an NV4x, because part of it will be run with AA, and part either won't be run with AA or won't run at all.

It may make some sense, however, to compare the SM2 with AA scores between the NV4x and ATI hardware. I believe this score is reported when 3DMark06 is run with AA enabled.
 
Chalnoth said:
Well, it makes no sense to report a complete score with AA on an NV4x, because part of it will be run with AA, and part either won't be run with AA or won't run at all.

Using that logic, it makes no sense to report a complete score for certain SM3.0 NV parts that do not support floating-point blending, because part of the tests won't run.

And yet, a complete score is in fact given.

This is the problem that I have...it's not consistent.
 
Well, if they changed that (reported no score for SM3.0 parts without blending), would that satisfy you? If they made it consistent by still not reporting a complete score with AA on an NV4x, would you be happy?
 
Bouncing Zabaglione Bros. said:
It's not that at all. There are just so many weird discrepancies and choices, and they seem to favour Nvidia. Come on, a "forward-looking test" with no SM3.0 branching, no parallax mapping, no AA/AF? Even places where Nvidia cards get no score rather than a bad score, while the exact opposite happens for ATI cards? Nvidia cards get an advantage from their specific non-DX features, but ATI cards don't?

The cards from ATI/Nvidia are simply not being treated with the same level of objectivity, and that is what is being queried. It's not tribalism; it's frustration at what should be a level playing field being so far tilted that 3DMark06 is pretty useless for comparing performance between the two main chip suppliers, even though it claims to be an even and honest test of capabilities. 3DMark06 is now just a marketing tool rather than an objective testbench of a card's capabilities and performance.

Then why, when 3DMark03 used a mainly single-textured game for GT1 (when most current games did multitexturing and future games did shading), did nobody on this forum create a big stink and a 30-page thread to defend Nvidia when Nvidia got so upset they left the Futuremark program?

And when, also, has a writer from another web site ever had to defend a negative review of Nvidia? Never, is my answer! It constantly happens when it is ATI that is getting the negative review or losing in some benchmark.

Although Dave's reviews are so neutral it is a marvel (even though I think I can guess which way he leans), people on the forum just cannot do likewise.

Hence I wear my green hat, just to try and balance things up.

It's a lonely cause, but being lonely means I do not have to shower :D
 
Joe DeFuria said:
Using that logic, it makes no sense to report a complete score for certain SM 3.0 NV parts that do not support floating point blending because part of the tests won't run.
Well, there's a lot of cards that won't run the SM3 tests. I think that Futuremark's logic was simply that it made sense to sacrifice the comparability of the benchmark a bit in order to get it to run on more hardware.

There wasn't much of any reason to make such a compromise on the AA situation, as it's a non-default setting.
 
Chalnoth said:
I think that Futuremark's logic was simply that it made sense to sacrifice the comparability of the benchmark a bit in order to get it to run on more hardware.
Benchmarks should maximise comparability, don't you think? Running on more hardware - that's what games are about :)
 
Chalnoth said:
Well, there's a lot of cards that won't run the SM3 tests.

Right. So treat them all the same.

There wasn't much of any reason to make such a compromise on the AA situation, as it's a non-default setting.

I submit that one "small" reason to make the same compromise with AA is that practically everyone who cares about this benchmark runs these cards with AA enabled at some level...

Look at it this way: FM had a decision to handle the AA scores one way or the other... if it's kind of a toss-up as to which way they should do it, why choose the way that is NOT CONSISTENT with the non-AA approach?
 
Joe DeFuria said:
I submit that one "small" reason to make the same compromise with AA is that practically everyone who cares about this benchmark runs these cards with AA enabled at some level...
Right, so either the NV4x will produce a score that is artificially high or low, depending upon how the comparison is done.

Much better, if you ask me, to just compare the SM2 AA scores and be done with it. The only thing not reporting a full score does is prevent you from searching for projects on the ORB. It doesn't prevent benchmark sites from reporting scores.
 
Joe DeFuria said:
I submit that one "small" reason to make the same compromise with AA is that practically everyone who cares about this benchmark runs these cards with AA enabled at some level...

Whoa there, pardner - I think you're quite wrong. 3DMark comparisons in the vast, vast majority of cases are done at default settings.
 
Can you even enable AA without paying for 3DMark? I can't test it until this afternoon, but given that you can't change any other settings...
 