Futuremark Announces Patch for 3DMark03

I would like to know how the 50.xx series drivers have improved (legitimately) from the 40.xx series. Is there anyone out there who would be willing to run the benchies and do a comparison using the newest patch?
 
Neeyik said:
Either that or the compiler is actually doing what it's supposed to do...


If it is, then that's very well done by nvidia.

However, they beat the 9800 Pro in that test, and even with the new compiler drivers I've not seen any other shader bench where the FX cards even draw level. It seems a little odd for just one single shader to suddenly do really well.
 
For people who visit this forum and have been here through all the custom benchmarks... it is not possible for an FX card with only 4 rendering pipelines to compete with an R3xx, even with their 'optimizations'. Yet 3DMark03 shows different.

How about ShaderMark 2.0 or the HDR demo? Both show exactly the opposite.
 
There are two areas in which the compiler can help the card: register usage and parallel instruction execution.

For register usage, it is entirely possible that a great many registers can be eliminated with a minimal increase in instruction counts.

For parallel instruction execution, you can see here that each of the NV35's pipes can execute 3 vec4 instructions in a single cycle. While the latter two are "simpler" instructions, they are certainly some of the more widely used ones. As such, it's possible that the compiler is able to rearrange the shader such that this capability is maximized.

EDIT: I suppose a good way to test this would be to bench an FX5800, which can execute only a single FP32 instruction or texture lookup per pipe. As such, the performance should certainly be different before and after the patch if there's any cheat.
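To make the parallel-issue point concrete, here's a toy list scheduler in Python that packs independent ops into up to three issue slots per cycle. It's purely illustrative: the instruction format, the register names, and the dependency model are all invented for this sketch, and the 3-slot width is just taken from the NV35 figure above; it's not a claim about how NVIDIA's compiler actually works.

```python
SLOTS_PER_CYCLE = 3  # assumed issue width, per the NV35 figure above

def schedule(instrs):
    """instrs: list of (dest, srcs) register-name tuples, in program order.
    Returns a list of cycles; each cycle is a list of instruction indices."""
    done = set()
    cycles = []
    while len(done) < len(instrs):
        issued, written = [], set()
        for i, (dest, srcs) in enumerate(instrs):
            if i in done or len(issued) == SLOTS_PER_CYCLE:
                continue
            # earlier instructions this one depends on (RAW or WAW)
            deps = [j for j in range(i)
                    if instrs[j][0] in srcs or instrs[j][0] == dest]
            # ready if all deps completed in an earlier cycle and there is
            # no conflict with anything already issued this cycle
            if all(j in done for j in deps) \
                    and dest not in written and not (set(srcs) & written):
                issued.append(i)
                written.add(dest)
        done.update(issued)
        cycles.append(issued)
    return cycles

# A serial-looking fragment: the first two ops form a dependency chain,
# the last two are independent of everything.
prog = [
    ("r1", ("r0",)),   # r1 = f(r0)
    ("r2", ("r1",)),   # r2 = f(r1)   <- must wait for r1
    ("r3", ("c0",)),   # independent
    ("r4", ("c1",)),   # independent
]
cycles = schedule(prog)
print(cycles)  # four serial-looking ops fit in two 3-wide cycles: [[0, 2, 3], [1]]
```

The point of the toy: a shader written as four sequential instructions needs only two cycles once the independent ops are hoisted alongside the first link of the dependency chain, which is exactly the kind of "free" speedup a legitimate recompile can deliver.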
 
Xmas said:
OpenGL guy said:
Xmas said:
I'm interested in what exactly FM changed in the new build. If they changed some of the rendering code, it's not certain that the performance drop can be wholly attributed to circumventing application-specific optimizations.
If the changes were significant, then the Radeons would show differences as well.
You mean, you only consider them significant if they affect the Radeons? ;)
Did you see any performance difference on Radeons? Did you see any image quality differences? No? Why do you suppose that is?
Even simple code reordering can make a driver choke on your application, but that doesn't mean all drivers will.
Who said anything about code reordering? Shouldn't the driver/hardware give the same correct result in either case?
 
I would think that most would be outraged at nvidia. They continue to cheat and get caught. Are you getting numb to this since it has happened so many times? Is this what nvidia hopes for or has planned for?

I for one will not buy a nvidia product until they stop treating me like an ignorant kid. I might be ignorant but I ain't a kid.

And then Futuremark. Why don't they have the balls to call a spade a spade? Do they think they are uberintelligent and will be able to catch all the cheats?

How would you react if one of your family members started to treat you like this? Are we getting so damn politically correct we can't call a lying/cheating sack of shit what it is?
 
Doomtrooper said:
Is the compiler maintaining minimum precision requirements??
If it isn't, then it'd have to be doing that either globally (since it was established that there would be almost unimaginable complexities in determining where lower precision could be safely used) or on an application-specific basis. For the latter, we're back to the issue of whether any cheat is getting through the patch. For the former, though, it would potentially show in other applications (unless an "optimization" for those other apps specified full precision).

EDIT: I'm not saying that it's impossible; I'm simply looking at the likelihood of such things.
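For what it's worth, the gap between partial and full precision is easy to demonstrate. Here's a quick Python sketch using the IEEE-754 half-float format as a stand-in for the FX cards' FP16 partial-precision mode (the mapping to the actual hardware format is an assumption for illustration):

```python
import struct

def to_fp16(x):
    # Round-trip a float through IEEE-754 half precision ('e' format),
    # i.e. 10 mantissa bits -- used here as a stand-in for FP16 mode.
    return struct.unpack('<e', struct.pack('<e', x))[0]

# Near 1.0, FP16 can only resolve steps of 2**-10; anything finer is lost.
fine = 1.0 + 2**-11          # representable at FP32, below FP16 resolution
print(to_fp16(fine) == 1.0)          # True: the difference silently vanishes
print(to_fp16(1.0 + 2**-10) == 1.0)  # False: a full FP16 step survives
```

This is why silently dropping to lower precision can pass a casual eyeball test and still be a real correctness change; whether it shows up as visible banding depends entirely on what the shader does with the lost bits.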
 
Som said:
Why buy a card now for DX9 games? If you're going to upgrade hardware for a specific game or games, you wait until the first is out to purchase.


I thought that was obvious.

Obvious to someone who has the budget to upgrade as whim strikes them, perhaps. I'll offer myself as an example. My financial circumstances have changed in the last few years due to family upheavals, such that I no longer have the budget I once did to do extensive yearly updates of my computer.

As a result I now try to purchase parts that have around a two year lifecycle to keep costs down to a manageable level. With graphics cards my rule is now that I won't upgrade the card till I see a doubling of rendering ability in benchmarks and games. I also look to the future to see how things look likely to shape up over that two year period.

My last card was a GeForce 3; the 4 series simply didn't offer much more than a 30% benefit over the 3, so I kept my eye on the upcoming FX series. Then came the 9700, a card that eclipsed anything else on the market at the time. I waited for the price to reach a reasonable level and to see how the FX series would shake down. Net result? Four months ago I bought a Radeon 9700 np and expect that this card will last for the next twenty months or so as a capable graphics card.

So far the indications are all positive.

There are lots of people like me who have limited budgets and long cycle times on upgrades. We care about performance with future titles because we want the card we buy to last the distance. Certainly, with synthetic benchmarks this is really just an educated guess, but an educated guess is better than a random wild stab in the dark.
 
7 pages discussing the same shitty thing all over again...

I was going to say "boring" (instead of "shitty") but I guess I'm wrong.

You guys should lay off criticizing FM over the "approved drivers" issue and focus on what NVIDIA had told the press :

NVIDIA said:
An optimization must produce the correct image

An optimization must accelerate more than just a benchmark

An optimization must not contain pre-computed state

I suppose there is a chance that NVIDIA may say that the 52.16 drivers contain no optimization for 3DMark03, so nobody can say they were wrong in what they told the press at Editors Day. Instead, NVIDIA may say "Well, we didn't optimize for 3DMark03 with the 52.16s but we actually ch**ted, so we weren't wrong in our words and although we are FM's beta partner, that doesn't mean we give FM's 3DMark series of benchmarks much importance... we only give FM money and our next-gen hardware to play with... we can still do what we like with 3DMarkXX, even with their, hehe, strict rules and guidelines because, well, we've shown these (the rules and the guidelines) really don't mean much to us. Hey, we all know FM needs NVIDIA 2x as much as NVIDIA needs FM, right?"

:rolleyes:
 
Reverend said:
I suppose there is a chance that NVIDIA may say that the 52.16 drivers contain no optimization for 3DMark03, so nobody can say they were wrong in what they told the press at Editors Day.

This is precisely why we can't comment on that, until Futuremark releases exactly what was changed and why it makes a difference, or someone manages to show us screenshot proof of dubious optimisations.

Hopefully Dave's upcoming report from his testing, given that he has indicated he has already seen visual differences, will put people in the position of having evidence to use when asking nVidia the hard questions.
 
For some reason the patch won't install on my system. The installer says it can't find any installation of 3DM03!
Does one have to have it installed to the default location (which I haven't)?
There was no such issue with the previous patch.

Anyone else having this problem?
 
Reverend said:
7 pages discussing the same shitty thing all over again...

I was going to say "boring" (instead of "shitty") but I guess I'm wrong.

You guys should lay off criticizing FM over the "approved drivers" issue and focus on what NVIDIA had told the press :

NVIDIA said:
An optimization must produce the correct image

An optimization must accelerate more than just a benchmark

An optimization must not contain pre-computed state

I suppose there is a chance that NVIDIA may say that the 52.16 drivers contain no optimization for 3DMark03, so nobody can say they were wrong in what they told the press at Editors Day. Instead, NVIDIA may say "Well, we didn't optimize for 3DMark03 with the 52.16s but we actually ch**ted, so we weren't wrong in our words and although we are FM's beta partner, that doesn't mean we give FM's 3DMark series of benchmarks much importance... we only give FM money and our next-gen hardware to play with... we can still do what we like with 3DMarkXX, even with their, hehe, strict rules and guidelines because, well, we've shown these (the rules and the guidelines) really don't mean much to us. Hey, we all know FM needs NVIDIA 2x as much as NVIDIA needs FM, right?"

:rolleyes:
:LOL:

Ok, I don't know if you were going for "laugh out loud sarcastic-funny" with that one or not but it sure got me! :LOL:

I agree with you. It isn't FM doing anything wrong or bad, I salute 'em for what they're doing and the stand they're taking as I think it's the right thing...I'm just hoping they KEEP doing it.

(And I'd LOVE to hear nVidia say that, I really would! :LOL: )
 
Reverend said:
NVIDIA said:
An optimization must produce the correct image

An optimization must accelerate more than just a benchmark

An optimization must not contain pre-computed state

I suppose there is a chance that NVIDIA may say that the 52.16 drivers contain no optimization for 3DMark03, so nobody can say they were wrong in what they told the press at Editors Day. Instead, NVIDIA may say "Well, we didn't optimize for 3DMark03 with the 52.16s but we actually ch**ted, so we weren't wrong in our words and although we are FM's beta partner, that doesn't mean we give FM's 3DMark series of benchmarks much importance...
Of course it's not important. NVIDIA wrote a white paper explaining how unimportant the benchmark was and also detailing how to cheat. You can tell how unimportant the benchmark is by how much time NVIDIA has spent "optimizing" for it. :rolleyes:
we only give FM money and our next-gen hardware to play with... we can still do what we like with 3DMarkXX, even with their, hehe, strict rules and guidelines because, well, we've shown these (the rules and the guidelines) really don't mean much to us. Hey, we all know FM needs NVIDIA 2x as much as NVIDIA needs FM, right?"
Of course NVIDIA is bigger than anyone else, at least in their own minds. Got hubris?

Since FutureMark has given the guidelines for the benchmark and since NVIDIA refuses to follow those guidelines, I'd love to see FutureMark sue NVIDIA for breach of contract or some such thing. Consider that NVIDIA is still using (unimportant) 3D Mark scores to show how fast their hardware is. FutureMark winning money from NVIDIA in a lawsuit would certainly keep them cleaner than taking money from NVIDIA as a "beta partner".

-FUDie
 
The point is that NVIDIA appears to have a total lack of respect for FM wrt their beta partner agreement. I think history has proven why this appears to be the case. Unless FM has the balls, I doubt this scenario will change.

The other point is that with every new driver release by NVIDIA, media outlets should refrain from using 3DMark03 in any of their content (driver-comparison articles, hardware reviews, whatever) until FM can verify (probably best via private correspondence to their beta press members, to be announced by those beta press members, so as not to make it appear like FM is specifically targeting NVIDIA with every new driver release... it's a tricky political thing) that the drivers are "valid". Media outlets will have to swallow any potential pride they may have and depend on FM's beta press members (like B3D, for instance) for such announcements. Just an opinion.
 
I think the 340 patch is a sign of FM growing a set of stones (so to speak ;) ) and I just hope they stick to that course, whatever nVidia's inevitable reaction to this is. (Which is my next area of speculation, fun & adventure: what nVidia's reaction shall be.)

If FM keeps countering nVidia's cheating they'll regain my respect.
 
Reverend said:
The point is that NVIDIA appears to have a total lack of respect for FM wrt their beta partner agreement. I think history has proven why this appears to be the case. Unless FM has the balls, I doubt this scenario will change.
NVIDIA's lack of respect goes far beyond FutureMark. I believe that NVIDIA doesn't respect their own customers. Look at the BS they are passing off on the average Joe!
The other point is that with every new driver release by NVIDIA, media outlets should refrain from using 3DMark03 in any of their content (driver-comparison articles, hardware reviews, whatever) until FM can verify (probably best via private correspondence to their beta press members, to be announced by those beta press members, so as not to make it appear like FM is specifically targeting NVIDIA with every new driver release... it's a tricky political thing) that the drivers are "valid". Media outlets will have to swallow any potential pride they may have and depend on FM's beta press members (like B3D, for instance) for such announcements. Just an opinion.
It'd be tough all around. First, FutureMark is receiving money from NVIDIA, a known cheater, so FutureMark's impartiality could come into question when/if they certify a driver. Second, OEMs often do their own thing anyway (do they even read or care about web reviews?) and likely wouldn't care about the whole certification process. OEMs seem to only see "bigger is better".

Until someone like Dell phones up NVIDIA and says, "We're not buying another product until you stop cheating," I don't think NVIDIA will give a rat's ass.

-FUDie
 
banksie said:
This is precisely why we can't comment on that, until Futuremark releases exactly what was changed and why it makes a difference, or someone manages to show us screenshot proof of dubious optimisations.

Hopefully Dave's upcoming report from his testing, given that he has indicated he has already seen visual differences, will put people in the position of having evidence to use when asking nVidia the hard questions.

We don't have to wait for EXACT specifics--they just help nail down the individual points of complaint. Regardless of that, however, there are INDEED optimizations in place that affect only a benchmark (violating nVidia's statement), and since they were excised by Futuremark's most recent build, those steps violate FM's main three conditions:

1. It is prohibited to change the rendering quality level that is requested by 3DMark.

2. It is prohibited to detect 3DMark directly or indirectly. In its sole discretion, Futuremark may approve detection in order to fix a specified hardware error.

3. Optimizations that utilize the empirical data of 3DMark are prohibited.


Certainly there ARE points to take umbrage at outside of specific details (which we are unlikely to get anyway), and it seems silly to wait ONLY on them since we're talking about extremely general rules set down by nVidia and FM in the first place.
 
It'd be tough all around. First, FutureMark is receiving money from NVIDIA, a known cheater so FutureMark's partiality could come into question when/if they certify a driver.
Yes, but like I said, it comes down to the size of FM's combined balls (or should that be "combined size of FM's balls"?)

Second, OEMs often do their own thing anyway (do they even read or care about web reviews?) and likely wouldn't care about the whole certification process. OEMs seem to only see "bigger is better".
Here's what a certain OEM wrote me and Dave in mid-October :

An OEM personnel said:
Anthony/Dave,

My name is [removed], and I’m an engineer for the Performance/Architecture team at [name of OEM removed], with one responsibility being to define benchmarks that we use internally at [name of OEM] for our internal qualification of product (graphics or platform). I (as well as many others, I realize) ran across your Tomb Raider benchmark last month when an IHV had problems with people using it as a benchmark.

We would like to implement it as a standard DX9 test that we use in our labs for our fall/spring graphics refreshes and wondered if you’d be willing to share the Prague3a demo & batch files that you had created for your own tests. I thought these were on your site at one time, but just found out yesterday that is not the case, assuming it was removed by request.

I’d appreciate the help and look forward to hearing back from you.

Thanks!

[name]
Performance & Architecture
[name of OEM]
So, yeah, I think they do read web sites or at least are informed about websites. Oh, and no, we didn't grant him his request, naturally.

Until someone like Dell phones up NVIDIA and says, "We're not buying another product until you stop cheating." I don't think NVIDIA will give a rat's ass.
Dell, you say?
 
cthellis42 said:
We don't have to wait for EXACT specifics--they just help nail down the individual points of complaint. Regardless of that, however, there are INDEED optimizations in place that affect only a benchmark (violating nVidia's statement), and since they were excised by Futuremark's most recent build, those steps violate FM's main three conditions:

Really? And we know that these are application specific optimisations - how?

All we know at the moment is that with the 340 patch nVidia's performance drops. As people have asked is this because some juggling of shader instruction order has created code that is now pathological with nVidia's 52.16 driver? This might not be directly related to 3Dmark but more that 3Dmark has hit a general bad case.

That wiggle room is more than enough for nVidia to claim plausibly that they are doing nothing wrong. Until it gets tied down to specifics, all we really have is a general suspicion.

Don't get me wrong, I think nVidia are lying through their little cotton sockies on this one and have broken their much vaunted new rules. But I don't know that right now - I merely strongly suspect it. Further discovery, be it by testing such as Dave is doing or disclosure by Futuremark, is required.
 