nVidia vs Futuremark Continued - Guess what nVidia's doing!

Uttar said:
My post above gives a paraphrase of a statement related to the 44.67. As I said, only the two "detectable only when in FM's Beta" so-called optimizations are gone. The shading ones, and any others which might exist, remain.

Well, I can confirm that one of them wasn't gone in the last set of drivers (44.65 IIRC). As for the shader optimisations, these are still illegal as far as Futuremark are concerned.
 
Uttar,
Saying the "issues are removed", or "corrected", "retrieved", or what have you, speaks to the visibility of issues to users, not (necessarily) to what is going on "underneath the surface", and what things there are "gone" or not.

Also, the notion that driver personnel aren't aware of such issues is rather... unbelievable. If it is to be believed, it paints a picture of PR/marketing having direct technical control of the drivers, and implies that the people who actually put the drivers together aren't... driver personnel. That is more than slightly contradictory unless the words are being used in some very unusual sense. It becomes a question of what the phrase "driver personnel" actually means, and how accurately your proposed information and interpretation reflect a complete picture of nVidia's workings.
 
DaveBaumann said:
Uttar said:
My post above gives a paraphrase of a statement related to the 44.67. As I said, only the two "detectable only when in FM's Beta" so-called optimizations are gone. The shading ones, and any others which might exist, remain.

Well, I can confirm that one of them wasn't gone in the last set of drivers (44.65 IIRC). As for the shader optimisations, these are still illegal as far as Futuremark are concerned.

Nope. They are definitely not all removed. There were more than the published screenshots showed, and at least one is still there!
 
Please Dave, be more specific. If there were more optimizations in 3DMark, did you just not show them to us?
 
There was another clip plane optimisation inserted into GT2, similar to those in GT4, but no actual pictures of it were shown. That one is back (it had been removed by the 330 patch with the 44.03 drivers). I doubt it gives too much of a performance improvement, though.
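For readers who haven't followed the earlier coverage, here is a rough sketch of what a static clip plane "optimisation" amounts to (purely illustrative Python, not anything from nVidia's drivers; the plane and geometry values are invented). Because the benchmark's camera path is fixed and known in advance, a driver can hard-code planes that throw away geometry the canned camera will never see - which is exactly why it falls apart the moment the camera is moved off the rails.

Code:
# Toy sketch of a static clip plane cheat (illustrative only).
# A plane ax + by + cz + d >= 0 defines the half-space that is kept;
# the values below are hard-coded to suit one known camera path.
STATIC_CLIP_PLANE = (0.0, 0.0, 1.0, -5.0)   # invented: keep only z >= 5

def passes_clip_plane(vertex, plane):
    a, b, c, d = plane
    x, y, z = vertex
    return a * x + b * y + c * z + d >= 0.0

def cull(triangles, plane):
    # Drop any triangle that lies entirely behind the hard-coded plane.
    return [tri for tri in triangles
            if any(passes_clip_plane(v, plane) for v in tri)]

scene = [
    [(0, 0, 10), (1, 0, 10), (0, 1, 10)],  # visible on the canned camera path
    [(0, 0, 1), (1, 0, 1), (0, 1, 1)],     # never seen on the canned path, silently discarded
]
print(len(cull(scene, STATIC_CLIP_PLANE)), "of", len(scene), "triangles actually rendered")

The saving is real on the fixed path, but as soon as the camera is free to move (as the developer version of 3DMark allows), the missing geometry shows up.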
 
I just want to add my opinion about Dave's reluctance to agree to vendor specific rendering paths in a DX benchmark.

Personally, I see no harm in allowing this in "game-like tests" as long as it's part of a number of options, i.e. the benchmark has a "default" non-IHV-specific path that must be adhered to first and foremost, but options are provided for testing (maybe several kinds of) IHV-specific rendering performance (perhaps made available as checkboxes after the benchmark detects the type of video card present in the user's system). It would, at the very least, provide for interesting studies/results, as long as the author/reviewer knows what's happening, what to look out for, and so on. It may be tricky to police, I'll admit to that.

For specific synthetic tests, I would not agree to vendor-specific paths nor any sort of tinkering -- there must be Only One Way To Do It. Kristof, for instance, said that the procedural texturing test (dubbed the PS 2.0 Feature Test by FM in 3DMark03) could be replaced by a simple texture map -- if we're testing procedural texturing features/performance, then it must test only procedural texturing and nothing else.
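To make the "default path plus optional IHV paths" idea above a little more concrete, here is a minimal sketch in Python pseudocode. Everything in it (VENDOR_PATHS, detect_vendor, run_test and the numbers) is invented for illustration and has nothing to do with the real 3DMark; the point is just that the default score is always produced, and a vendor-specific score only appears as a separately labelled extra when the user ticks the box.

Code:
# Hypothetical harness: the default path is mandatory, vendor paths are opt-in.
VENDOR_PATHS = {
    "NVIDIA": "nv3x_optimised",
    "ATI": "r3x0_optimised",
}

def detect_vendor():
    # Placeholder: a real benchmark would query the device/driver here.
    return "NVIDIA"

def run_test(scene, path):
    # Placeholder returning a made-up frame rate; a real test would render the scene.
    return 100.0 if path == "default" else 130.0

def run_benchmark(enable_vendor_path=False):
    results = {"default": run_test("GT2", "default")}   # always present
    if enable_vendor_path:
        vendor = detect_vendor()
        path = VENDOR_PATHS.get(vendor)
        if path:
            # Reported as a clearly separate, non-comparable figure.
            results["vendor (" + vendor + ")"] = run_test("GT2", path)
    return results

print(run_benchmark())                          # {'default': 100.0}
print(run_benchmark(enable_vendor_path=True))   # adds the labelled vendor score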
 
I dunno guys. My feeling is once you add tea to the water it will never just be water again.

Nvidia did very questionable things with 3DMark2003, and they are continuing to do them. Whatever respect I had for the benchmark is gone now, and it will take a lot for me to respect it again. They would need a few months of cheat-free benchmarking before I'd think of looking at the results. Just like what happened with ATI and Quake 3: until I stopped seeing the bug for a few driver releases, I was worried that they were cheating again, or wondering whether I was playing the game with the IQ I should be.


These are my feelings btw.
 
I agree with the comments here that benchmarks and games should be able to be reliably run with user-selected settings, either:

1) As the game / benchmark developer desired it or

2) As best as an IHV can optimise it - potentially trading quality for performance

To me they are black and white comparisons. With option 1) you can do an apples-versus-apples comparison. With option 2) it is all subjective, and at best you might get double the performance for just a different-looking game or scene - not necessarily a much worse-looking scene on some cards.

Any IHV sneaking option 2) into option 1) and pretending they aren't is shameful and disgusting to me.
 
I have to wonder what value 3DMark will now hold. Once FutureMark reversed its stance on nVidia, they proved they couldn't stand up to them. If that is the case, how can we truly feel it is an unbiased product now that nVidia are back in the picture?

The fact that FutureMark are producing a demo specially for nVidia is cause enough for concern. Why do they need to? As suggested earlier, there is only one reason: to learn the architecture. Why should they need to do any such thing if they are to be independent?

In my eyes, the demise of 3DMark is accelerating.
 
Awww, c'mon!!!... Futuremark is cool; we're going to see a lot of cool stuff come out of them soon. I'm still stoked that B3D is a partner with them, even after all the BS.
 
Reverend said:
I just want to add my opinion about Dave's reluctance to agree to vendor specific rendering paths in a DX benchmark.

Personally, I see no harm in allowing this in "game-like tests" as long as it's part of a number of options, i.e. the benchmark has a "default" non-IHV-specific path that must be adhered to first and foremost, but options are provided for testing (maybe several kinds of) IHV-specific rendering performance (perhaps made available as checkboxes after the benchmark detects the type of video card present in the user's system). It would, at the very least, provide for interesting studies/results, as long as the author/reviewer knows what's happening, what to look out for, and so on. It may be tricky to police, I'll admit to that.

For specific synthetic tests, I would not agree to vendor-specific paths nor any sort of tinkering -- there must be Only One Way To Do It. Kristof, for instance, said that the procedural texturing test (dubbed the PS 2.0 Feature Test by FM in 3DMark03) could be replaced by a simple texture map -- if we're testing procedural texturing features/performance, then it must test only procedural texturing and nothing else.

I actually like that idea a lot - it adds a lot more scope for using it as both a synthetic benchmark and a true 'gamers benchmark' (which, let's be honest, is what FutureMark are supposedly shooting for). I think that kind of approach would benefit everyone - the average user, the people interested in the deeper technology, developers and IHVs.

Of course, you could argue that it would make things unnecessarily complicated and harder to compare cards and scores, but then I suppose reviewers will just have to work a little bit harder and have a few more late nights. ;)
 
JohnH said:
And then the point of having a standard API is what?

I think as time goes on, different IHVs are going to find themselves going down increasingly different development paths from one another, and it's going to become more difficult to have a true 'standard' API - just look at R3x0 vs NV3x; it's impossible to make a true, 100% apples-to-apples comparison.

As I mentioned in my last post, 3DMark is trying to sell itself as a 'gamers benchmark', so we have to be honest and say that most developers are optimising for one of the major IHVs or another in some way. If you want to be representative of games in a benchmark without actually using real game engines, then you need to have some kind of codepaths to represent the way games are coded.

To have only IHV-specific codepaths in a benchmark like 3DMark would be a total disaster, but I think having it as an option (and preferably not the default option either) alongside a generic 'this is how it should be done' path would be a useful addition, and if it were used properly it could be a valuable tool.
 
Reverend said:
I just want to add my opinion about Dave's reluctance to agree to vendor specific rendering paths in a DX benchmark.

Personally, I see no harm in allowing this in "game-like tests" as long as it's part of a number of options...

I think you're missing the point here, Rev. A benchmark is meant to be a way of estimating what the general performance level is, not just performance in that one specific thing. Running 3DMark is not meant to tell you only how fast 3DMark runs, but also what the general performance level of the card is. It's highly debatable whether it does that, but that's the intention. That's why OEMs like Dell use the figures in their decisions on who to buy from. It's why Joe Consumer uses the figures. If X is better than Y in 3DMark, then X is likely to be better than Y at anything else.

As soon as you introduce vendor/application-specific optimisations (either on the driver side or the application side), the only thing running that test tells you is how well that combination of driver and application runs. The result cannot be extrapolated to anything else.

The idea of having a switch to allow vendor/application-specific optimisations doesn't gain anything. Any OEM would require that figures supplied to them be produced with the switch turned off. Therefore Vendor X, who's trying to play catch-up in the market, will just ignore the switch, and we're back where we started.

Let's not have any grey areas. No application specific optimisations. No vendor specific optimisations. Not for benchmarks. That way it's fair, especially to the consumer.

The only problem is policing it, which is why I think Unwinder has done something pretty special.
 
Everybody talks about IHV "neutral" paths.

I question that there truly is a IHV neutral path.

Is it the path that works best on the first piece of hardware you try?
Or the one that provides the average performance across all the combinations of paths and cards tried?

The point being, even if you stay within the bounds of DirectX, there are some things some cards do better than others, and some ways of structuring the rendering favor one design over another. How do you choose?

The mantra of "it's the spec" falls apart a bit in the real world.
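A toy illustration of that point, with a completely made-up cost model and numbers: the same in-spec batch of draw calls, submitted in two equally legal orders, changes how two hypothetical cards compare with each other, so even a well-intentioned author ends up picking a structure that suits somebody.

Code:
# Invented numbers only - "card A" is assumed to be very sensitive to
# state changes, "card B" much less so.
draws = [("wall", "stone"), ("floor", "stone"), ("crate", "wood"),
         ("barrel", "wood"), ("door", "stone")]

def cost(submission, state_change_cost, per_draw_cost):
    total, last_material = 0, None
    for mesh, material in submission:
        if material != last_material:
            total += state_change_cost   # hypothetical penalty for switching texture/shader state
            last_material = material
        total += per_draw_cost
    return total

sorted_by_material = sorted(draws, key=lambda d: d[1])

for name, order in [("as authored", draws), ("sorted by material", sorted_by_material)]:
    a = cost(order, state_change_cost=10, per_draw_cost=1)
    b = cost(order, state_change_cost=2, per_draw_cost=1)
    print(name, "- card A:", a, "card B:", b, "ratio:", round(a / b, 2))

The ratio between the two cards shifts depending purely on which submission order the benchmark author happened to choose, even though both orders are perfectly valid DirectX.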
 
Hanners said:
JohnH said:
And then the point of having a standard API is what?

I think as time goes on, different IHVs are going to find themselves going down increasingly different development paths from one another, and it's going to become more difficult to have a true 'standard' API - just look at R3x0 vs NV3x; it's impossible to make a true, 100% apples-to-apples comparison.

As I mentioned in my last post, 3DMark is trying to sell itself as a 'gamers benchmark', so we have to be honest and say that most developers are optimising for one of the major IHVs or another in some way. If you want to be representative of games in a benchmark without actually using real game engines, then you need to have some kind of codepaths to represent the way games are coded.

To have only IHV-specific codepaths in a benchmark like 3DMark would be a total disaster, but I think having it as an option (and preferably not the default option either) alongside a generic 'this is how it should be done' path would be a useful addition, and if it were used properly it could be a valuable tool.

The idea behind a standard API is that you code your HW to work well with it. The internal complexities of the HW should not be exposed to the application; that's what the driver is there to deal with. It should be 100% possible to do an apples-to-apples comparison between HW through a standard API. If the driver cannot translate what an application is requesting, using legitimate means, into what is required by the HW in a way that executes efficiently, then it's the HW/driver that is at fault, not the application. This is the case with much of the noise surrounding 3DMark2003: the fact is it has found an underlying inadequacy in a piece of HW - well, isn't that what it's meant to do?

Of course, things aren't generally this black and white; for example, an application should attempt to use any _standard_API_ features that are exposed by a piece of HW. But there may be cases where those features are detrimental on some architectures even though they are supported, so the application may choose not to use them for that HW, and so on.

Basically, standard APIs are created so that ISVs can spend less time messing around with the oddities of various HW and can deliver a better experience to the end user. It is the responsibility of the IHV to implement HW that executes what the API requests in an efficient manner; if they don't, they do badly in benchmarks and people don't buy their products. But now, for some reason, we're saying that a benchmark should be coded to work around the inadequacies of a specific piece of HW, irrespective of the performance of other HW through the "standard" path. This does not make much sense to me (an IHV).

Anyway, that's my two penn'orth.
John.
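A rough sketch of the layering John describes, with invented class and function names (this is not any real driver interface): the application only ever speaks the standard API, and it is each vendor's driver that turns the same request into whatever runs efficiently on its own hardware.

Code:
from abc import ABC, abstractmethod

class StandardDriver(ABC):
    """What the standard API expects every driver to honour."""
    @abstractmethod
    def compile_shader(self, ps20_source):
        ...

class VendorADriver(StandardDriver):
    def compile_shader(self, ps20_source):
        # Hypothetically re-schedules the shader to suit its own pipelines.
        return "[A-native] " + ps20_source

class VendorBDriver(StandardDriver):
    def compile_shader(self, ps20_source):
        # Hypothetically rewrites register usage for a very different design.
        return "[B-native] " + ps20_source

def application(driver):
    # The application writes one standard PS 2.0 shader and never sees the
    # hardware details; translating it efficiently is the driver's job.
    return driver.compile_shader("ps_2_0 ... texld r0, t0, s0 ...")

for drv in (VendorADriver(), VendorBDriver()):
    print(application(drv))

If the translation can't be done efficiently, that is (as John says) a statement about the hardware and driver, not about the application.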
 
JohnH said:
The idea behind a standard API is that you code your HW to work well with it.

It's a bit of a chicken-and-egg situation, though - what comes first, the hardware or the API that supports it?
 