Reverend at The Pulpit #12

Status
Not open for further replies.
It seems like Dave and you are the ones who aren't willing to look at another interpretation of this particular statement.
That statement is absolute. You can see who the BDP members are on FM's page -- ask each and every one of them what their interpretation is of that particular statement of FM's.
 
Reverend said:
That statement is absolute. You can see who the BDP members are on FM's page -- ask each and every one of them what their interpretation is of that particular statement of FM's.

I asked you if my interpretation was complete nonsense. If it was not, then apparently the statement was not as absolute as you thought it was.
If you feel like it, get in contact with the BDP members and ask their interpretations, I think you're in a better position to do that than I am, but I am interested in what they say.
 
Dude, instead of asking the BDP members, just ask FM what they really meant by that statement of theirs.

I am sure you take a huge amount of time signing contracts, questioning every single sentence and word to see whether it is absolute or open to "interpretations".

Man, this is really far out stuff... any of you other guys agree?
 
Reverend said:
Man, this is really far out stuff... any of you other guys agree?
Yup, I'm putting Scali into the "fair game" category now, since he's just going out of his way to troll.....and I still like eating trolls. :devilish:
 
Scali said:
Yes, and as I said before, 3DMark's game tests are not about apples-to-apples testing, but about predicting in-game performance. The feature tests are apples-to-apples (why haven't you made any fuss about the fact that some cards use SM3.0 while others use SM2.0 in the same game tests?).
There's a notable difference between comparing fruit-to-fruit and fruit-to-a-different-plant-form-altogether. The use of different shader models within 3DMark05 doesn't affect how the final rendered frame looks, to any degree of normal scrutiny (and by that, I mean using one's eyes or even something like Photoshop); this is why _pp modifiers are used too. However, the use of DST produces a difference that is visible; therefore, how can one be expected to make judgements about performance if two separate products are not generating the same final output? After all, that is what the benchmark is about - take a fixed scenario, with a fixed final image, and see how long it takes to render it.
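The "same final output" criterion above can be made concrete with a pixel-diff check. This is a hypothetical sketch, not anything 3DMark actually does: images are flat lists of (r, g, b) tuples, and the tolerance value is invented to illustrate the difference between _pp-style numeric noise and a visibly different shadow term.

```python
# Hypothetical sketch: quantifying whether two renders produce the
# "same final image". The tolerance and the sample data are invented
# for illustration; real comparisons would run over full framebuffers.

def max_channel_diff(img_a, img_b):
    """Largest per-channel difference between two equal-sized images."""
    assert len(img_a) == len(img_b)
    return max(
        abs(ca - cb)
        for pa, pb in zip(img_a, img_b)
        for ca, cb in zip(pa, pb)
    )

def same_output(img_a, img_b, tolerance=2):
    """True if the renders differ by no more than `tolerance` per channel
    (roughly the sub-visible level a _pp precision hint might introduce)."""
    return max_channel_diff(img_a, img_b) <= tolerance

ref = [(200, 180, 160), (90, 90, 90)]
pp  = [(201, 180, 159), (90, 91, 90)]   # tiny precision noise
dst = [(200, 180, 160), (70, 70, 70)]   # visibly darker shadow term

print(same_output(ref, pp))   # True  - comparable output
print(same_output(ref, dst))  # False - visibly different output
```

Under a check like this, the shader-model and _pp differences pass while a visible DST shadow difference fails, which is the distinction the post is drawing.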
 
Hindsight is 20/20
Futuremark in June 2003 said:
However, recent developments in the graphics industry and game development suggest that a different approach for game performance benchmarking might be needed, where manufacturer-specific code path optimization is directly in the code source. Futuremark will consider whether this approach is needed in its future benchmarks.
 
It's just apparent to me that Futuremark isn't really thinking clearly about its decisions.

When you are trying to set up an apples-to-apples comparison... they got it right with 3DMark03...

Not to lower the importance of DST (I'm not sure if it will be widely implemented)... but if such a feature isn't written into the standards, why should it even be considered as the default option?

Futuremark needs to separate itself from the benchmark perspective, and the game perspective.

I would think the best suggestion is to have two separate modes of operation:

1) Benchmark mode - this mode is how benchmarks should be run (adhering to standards and whatnot); these scores would be representative of the hardware according to the DX specifications

2) "Game mode" - this mode reflects how games are usually implemented (multiple paths, vendor-specific features, stuff not covered by the DX specs)... I'm not sure whether scores produced by this mode should be posted, but it would be more of a reflection of how games will feel. "Optimizations" could come into play here (to some extent, but they should not be the SOLE focus of this mode)
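The two-mode split proposed above amounts to gating which rendering features each mode may use. A minimal sketch of that idea, with entirely invented feature names (nothing here comes from Futuremark):

```python
# Hypothetical sketch of the proposed benchmark/game mode split.
# Feature names are invented for illustration.

DX_STANDARD_FEATURES = {"sm2.0", "sm3.0", "r32f_shadow_maps"}
VENDOR_EXTENSIONS    = {"dst_shadow_maps", "hardware_pcf"}

def allowed_features(mode):
    """Benchmark mode admits only features in the DX spec; game mode
    also admits vendor-specific paths, as real engines tend to."""
    if mode == "benchmark":
        return set(DX_STANDARD_FEATURES)
    if mode == "game":
        return DX_STANDARD_FEATURES | VENDOR_EXTENSIONS
    raise ValueError(f"unknown mode: {mode}")

print("dst_shadow_maps" in allowed_features("benchmark"))  # False
print("dst_shadow_maps" in allowed_features("game"))       # True
```

The point of the design is that the two modes produce two separate scores, so a standards-only number never gets mixed with a vendor-path number.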

I feel that the inclusion of DST is something pushed by NVidia (3Dc being pushed away by NVidia, with ATI not arguing hard enough to get it in). However, I do think that Futuremark needs to make clear (again) what they are aiming for, as 3DMark05 is clearly not a good benchmark as it stands. If they are trying to convince people that it is "close enough to a game engine", then they MUST separate these modes, as you cannot combine the two. It is NOT A GAME and it is clearly NOT A BENCHMARK either.

By separating the modes you will have a benchmark, and have a benchmark simulating a game environment/engine.
 
Neeyik said:
After all, that is what the benchmark is about - take a fixed scenario, with a fixed final image and see how long it takes to render that.

According to Futuremark, http://www.futuremark.com/companyinfo/3DMark05_Whitepaper_v1_0.pdf, the benchmark is about:

One could argue that the DST and hardware accelerated PCF implementation vs. the non-DST and point sampling code paths do not produce comparable performance measurements, since the resulting rendering shows slight differences. 3DMark05 was designed with the firm belief that those two are indeed comparable, and in the fact that it is the right way to reflect future 3D game performance. Our study has proved that over a dozen of the biggest game developers are using DST and hardware PCF for dynamic shadow rendering in their latest or upcoming titles. So if DST and hardware PCF are supported, they should be used in depth shadow map implementations, because that is what is done also in the latest and future games.

However, if the benchmark user wishes to compare exactly identical rendering performance across different architectures, DST can be disabled in the benchmark settings, and the dynamic shadows are then always rendered using R32F depth maps and four point sample PCF.

So indeed their goal is to try and benchmark game performance, not apples-to-apples per se.
In which case this makes perfect sense. Do you consider FarCry, Doom3 or HL2 benchmarks on hardware from different vendors, or even on hardware from different families of the same vendor, apples-to-apples, for example?
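The fallback path the whitepaper quote describes (R32F depth maps with four point sample PCF) can be sketched in software. This is a minimal illustration of percentage-closer filtering, not 3DMark05's actual implementation: each of four taps does a binary depth comparison against the receiver's depth, and the results are averaged into a light factor.

```python
# Minimal sketch of four point sample PCF over an "R32F" depth map
# (a list of lists of floats). Real implementations run on the GPU;
# the bias value is an invented illustrative constant.

def pcf_4tap(depth_map, x, y, receiver_depth, bias=0.005):
    """Percentage-closer filtering over a 2x2 neighbourhood.
    Returns a light factor in [0, 1]: 1.0 fully lit, 0.0 fully shadowed."""
    taps = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
    lit = 0
    for tx, ty in taps:
        # each tap is a binary occlusion test; the average softens the edge
        if receiver_depth <= depth_map[ty][tx] + bias:
            lit += 1
    return lit / len(taps)

# tiny depth map: an occluder at depth 0.4 covers the right half
depth_map = [
    [1.0, 1.0, 0.4, 0.4],
    [1.0, 1.0, 0.4, 0.4],
]

print(pcf_4tap(depth_map, 0, 0, 0.9))  # fully lit -> 1.0
print(pcf_4tap(depth_map, 1, 0, 0.9))  # shadow edge -> 0.5
print(pcf_4tap(depth_map, 2, 0, 0.9))  # fully shadowed -> 0.0
```

DST hardware performs these comparisons and the filtering in the texture unit, which is where the performance (and the slight image) difference between the two code paths comes from.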
 
Scali, you are using what Futuremark said at the same time they officially released 3DMark05; you refuse to understand that it is not what was said to beta members before they changed the rules, without warning, at the last minute.
 
PatrickL said:
Scali, you are using what Futuremark said at the same time they officially released 3DMark05; you refuse to understand that it is not what was said to beta members before they changed the rules, without warning, at the last minute.

Neeyik spoke of what the benchmark is about, and it's not what he claims. Many people seem to misinterpret what 3DMark05 is supposed to be about, and then continue to use this misinterpretation to explain that it is a bad product. Of course, a hammer makes for a bad screwdriver as well.

As for refusing to understand, look at how stubbornly Reverend has reacted to a possibly different interpretation. Apparently Beyond3D refuses to accept that they could have misunderstood what the rules were in the first place. I'm sure they're asking Futuremark what they really meant now, so we'll hear shortly whether Beyond3D was in fact right.

I don't refuse to understand anything. I understand Beyond3D's issues; I'm just pointing out that they could be based on a misunderstanding. Apparently, Reverend has not even considered the possibility of being wrong at all.

To be honest, I find my explanation more sensible than his, because first of all, it is far more common to speak of hardware features than engine features in terms of requirements, and secondly, while there are multiple hardware featuresets in 3DMark05, there is only one engine featureset, so it wouldn't make sense to speak of 'required featureset' in relation to the engine.
 
Scali - stop interpreting things the way you please. We didn't interpret things incorrectly, because we were flat-out told what to expect. There are no arguments about this - we know more about this than you do, as beta members.
 