Futuremark Announces Patch for 3DMark03

Joe DeFuria said:
Wait for DX9 games, real DX9 performance tests using those games, and then make your decision.

Unfortunately, that doesn't help someone who's buying a new card NOW, does it? I could be buying a card to play my EXISTING games better... that doesn't mean I shouldn't consider the possibilities with future games.

In other words, if Card A and Card B both play "current games" very well... but "future benchmarks" (like 3DMark, AquaMark, ShaderMark, etc.) show that Card A's performance and quality are solid and predictable, while Card B is "flaky... depends wildly on drivers and acceptance of certain quality degradations"...

What card would you buy?

Why buy a card now for DX9 games? If you're going to upgrade hardware for a specific game or games, you wait until the first is out to purchase.


I thought that was obvious.
 
It's definitely the best approach to take, since simply labelling a current set of drivers as "cheating" and forcing results from a few driver revisions back can kill many valid optimizations along with the invalid ones. If Futuremark is confident they have excised everything that violates their new rules, then they are free to approve the current drivers under the new patched version. It removes the offending paths while keeping the valid ones, and it is the "smoothest" adoption method. The critical steps are the ones from here on out. They've been VERY careful in working up their guidelines and parameters so as to leave no wiggle-room, and they've "reset the counter", as it were. How will they respond if nVidia were to patch 52.16 to defeat their current 3DMark03 build? How will they respond to later infractions from nVidia, ATi, or any other source?

They've taken a solid stand based on THEIR desires alone, and made a long, careful progression to this point, but the maintenance will likely be even harder and more critical to ensure they don't have to do this again.
 
digitalwanderer said:
"So, if Nvidia is running "non-standard" code (like that exists or something) in the background to achieve their results, WHO CARES? We know what we're getting when we buy, regardless of whether or not we truly know how we get it."

This is the second time in 10 minutes I've seen this same assignine argument used by a one-post-wonder. I was gonna call you 'paranoid', but...

(The Dig pulls out his trusty tinfoil hat and carefully unfolds it and places it firmly on his head making sure it completely encases his brain.)


Nice use of out-of-context quotation. The point is, we have real-world performance tests. If you're getting good performance in actual gameplay, why bother looking at synthetic benchmarks?


And it's "assinine" I believe. Not sure m'self though, to be honest.
 
Som said:
Why buy a card now for DX9 games? If you're going to upgrade hardware for a specific game or games, you wait until the first is out to purchase.


I thought that was obvious.

Did you even read what I wrote? :rolleyes: I'll keep it short this time.

What if I'm buying a card NOW for DX8 games? Does that mean I shouldn't consider potential DX9 performance?

I want to know which card has better potential at running DX9 games. Because that could mean I don't have to upgrade AGAIN when DX9 games start arriving.

Get it?
 
AzBat said:
Ok, you got a point there. But I still think that if they release newer patches it gives a bad appearance to end-users such that they believe the results can't be trusted.

It's a good thing we're not keeping score. ;)

OK...we'll just have to agree to disagree then....and I think we're tied. :)
 
Of course, the thing I'm still unclear on is whether this is code-baking, or whether the patch just disturbs something in the instruction stream that NV3x architecture performance critically depends on.

I mean, we all know that the FX line is ridiculously fickle to code for (small changes in code can result in HUGE changes in performance), so is it just maybe possible that the patch moves some code around in a way that causes NV3x to choke? In other words, it kind of comes off like NV3x needs app-specific code for EVERYTHING. Sounds like they're trying to pull a Microsoft without the proper monopoly.

I'd like to see a reversal from that in NV4x, as I think many others would as well.

Of course, you Beyond3d folks, don't break any NDAs just to answer questions here. If it is some kind of questionable tactic Nvidia is using, I'm sure we can wait til the sites around the web do their own investigations.
 
Som said:
If the performance is there, who cares why?

Don't mistake me for some Nvidia <bleep> - I wouldn't be buying an NV card right now if I were upgrading, but I don't see the problem in different manufacturers using different driver code to achieve similar or identical results.

But the performance isn't there while *playing the game*; it's only there in the *benchmarking modes*. Nvidia has been caught not clearing buffers, replacing shaders with lower-quality versions, inserting static clip planes, reducing filtering to bilinear, and so on. All of these things either lower the IQ or cannot be used while actually playing games; they are just techniques to create misleading benchmark scores and make the Nvidia hardware look better than it is when you play a game. Nvidia is actively trying to cheat the results because their hardware is not up to scratch.

All this is before we even get onto the question of quality settings like AA/AF (why you buy a high end card in the first place), and the extremely poor DX9 shader performance that will hurt the Nvidia cards in the newer games.

You need to search back through the forum, because I'm sure most of the regulars simply can't face rehashing the whole "cheating in benchmarks" issue again.
 
Joe DeFuria said:
Som said:
Why buy a card now for DX9 games? If you're going to upgrade hardware for a specific game or games, you wait until the first is out to purchase.


I thought that was obvious.

Did you even read what I wrote? :rolleyes: I'll keep it short this time.

What if I'm buying a card NOW for DX8 games? Does that mean I shouldn't consider potential DX9 performance?

I want to know which card has better potential at running DX9 games. Because that could mean I don't have to upgrade AGAIN when DX9 games start arriving.

Get it?

Oh, I got it the first time. Considering DX9 performance is certainly a factor, but just check out how the hardware performs in current DX9 games (Gun Metal, TRAOD, Halo, etc.).

What about before they were available? Well, in that case you're just guessing really. 3DMark came out before the first DX9 games, fair enough, but since it is synthetic, there's no telling how it would tally with real-world performance, so all you're really doing is making a slightly more educated guess than without consulting 3DM03.
 
Som said:
If Nvidia is using "cheat code" to render the same things ATi renders, and performance and IQ are the same, then what does it matter?

I already explained this to you.

If nVidia has to resort to doing things that go against FM's guidelines, this means that either

1) They are detecting the app
and/or
2) They are NOT rendering what FM is asking them to.

This matters because there is no guarantee that nVidia can do this with every game out there. It's absurd to think that nVidia can. Can nVidia detect every single app and have custom code for it in their drivers? Can nVidia render something OTHER than what the app calls for when shading, and have the image quality look similar without breaking the app?

I want to know if the card I'm considering buying has the REAL DISADVANTAGE of requiring "special support" for individual apps in order to get performance up to snuff. I want to know if the card I'm considering buying needs to replace what the developer ASKS FOR with what the IHV "decides" is acceptable.
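To be clear about what "detecting the app" means in practice: it can be as simple as the driver checking the name of the executable that loaded it and switching to a hand-tuned path on a match. A rough hypothetical sketch (Win32 C++, not anyone's actual driver code, exe name purely illustrative):

```cpp
#include <windows.h>
#include <cstdio>
#include <cstring>

// Hypothetical illustration only: does the current process's executable
// path contain the given name? This is the kind of check an app-detect
// path could key off.
static bool host_exe_contains(const char* needle)
{
    char path[MAX_PATH] = {0};
    GetModuleFileNameA(NULL, path, MAX_PATH); // full path of the host .exe
    return strstr(path, needle) != NULL;      // crude, case-sensitive match
}

int main()
{
    // "3DMark03" is just an example name for the sketch.
    if (host_exe_contains("3DMark03"))
        printf("app-specific code path would be selected here\n");
    else
        printf("generic code path\n");
    return 0;
}
```

The catch, as above, is that every title needs its own entry (and its own hand-tuned replacement work), which is exactly why this can't scale to "every game out there".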
 
Bouncing Zabaglione Bros. said:
But the performance isn't there while *playing the game*; it's only there in the *benchmarking modes*. Nvidia has been caught not clearing buffers, replacing shaders with lower-quality versions, inserting static clip planes, reducing filtering to bilinear, and so on. All of these things either lower the IQ or cannot be used while actually playing games; they are just techniques to create misleading benchmark scores and make the Nvidia hardware look better than it is when you play a game. Nvidia is actively trying to cheat the results because their hardware is not up to scratch.

Oh, I'm well aware of all the previous questionable optimizations, but the reputable hardware analysis sites (Anandtech, HardOCP, etc.) have had their own game-based benchmarks scripted to undercut shenanigans like that, or they'll simply run through a scripted gameplay event (an approach that is gaining more acceptance) with all the cards being compared, to get a performance tally.

As for Nvidia's hardware, I don't know that it's not up to scratch so much as they are placing far too much work on the developer. If coded for properly, the NV3x architecture appears very capable; the problem is that it takes SO much extra coding that developers simply aren't willing to do it, and I don't blame them.

And of course, their PS2.0 performance does suck. That's undeniable.

Bouncing Zabaglione Bros. said:
All this is before we even get onto the question of quality settings like AA/AF (why you buy a high end card in the first place), and the extremely poor DX9 shader performance that will hurt the Nvidia cards in the newer games.

You need to search back through the forum, because I'm sure most of the regulars simply can't face rehashing the whole "cheating in benchmarks" issue again.

Like I said, I'm familiar with all that previous stuff, so please, DON'T rehash.
 
Som said:
Oh, I got it the first time. Considering DX9 performance is certainly a factor, but just check out how the hardware performs in current DX9 games (Gun Metal, TRAOD, Halo, etc.).

And as soon as a game becomes a "benchmark", the IHV can "optimize" for it.

So what about the next game that comes out and isn't benchmarked?

I want a benchmark that does NOT ALLOW "app specific" optimizations, because whether or not my next game gets the "treatment" is just a crapshoot.

What about before they were available? Well, in that case you're just guessing really.

And you're just guessing with any benchmark. Halo doesn't tell you anything about Half-Life 2.

...so all you're really doing is making a slightly more educated guess than without consulting 3DM03.

Again, no different than "real games." Unless there is a benchmark for the specific game that you are after, all the other "real world" tests are just guesswork as well.

Regardless...I'll take all the additional "education" I can get.
 
OpenGL guy said:
Xmas said:
I'm interested in what exactly FM changed in the new build. If they changed some of the rendering code, it's not certain that the performance drop can be wholly attributed to circumventing application-specific optimizations.
If the changes were significant, then the Radeons would show differences as well.

True. And if we go back to the press release, we see that they specifically say...

Futuremark press release said:
* Maximum point sprite size is now limited to 256x256 instead of the maximum supported by the hardware

Does anybody know what the max is for ATI and NVIDIA? Is it possible that NVIDIA was using point sprites larger than 256x256 while ATI wasn't? Doesn't the MaxPointSize cap give us the maximum supported by the hardware?
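If anyone wants to see what their own card reports, a quick sketch along these lines should print it (untested, plain D3D9, default adapter, needs the DirectX SDK headers and d3d9.lib):

```cpp
#include <windows.h>
#include <d3d9.h>
#include <cstdio>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d)
        return 1;

    D3DCAPS9 caps;
    if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
    {
        // MaxPointSize is the largest point sprite dimension the driver exposes.
        printf("MaxPointSize reported by the driver: %.1f\n", caps.MaxPointSize);
    }

    d3d->Release();
    return 0;
}
```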

Tommy McClain
 
Som said:
And of course, their PS2.0 performance does suck. That's undeniable.

Don't you get it? With some of these "real-world" benchmarks, and the "non-patched" 3DMark, nVidia's PS 2.0 performance sucking is deniable.

It's exactly synthetic tests (like the PATCHED 3DMark, ShaderMark, etc.) that TELL US that nVidia's PS 2.0 performance generally sucks.
 
Joe DeFuria said:
AzBat said:
Ok, you got a point there. But I still think that if they release newer patches it gives a bad appearance to end-users such that they believe the results can't be trusted.

It's a good thing we're not keeping score. ;)

OK...we'll just have to agree to disagree then....and I think we're tied. :)

Wussy. :p LOL

Tommy McClain
 
Well, firstly, if someone buys an OEM PC, they deserve what they get. 'Nuff said.

So 99% of consumers deserve to be ripped off simply because they are not among the B3D readership? I have to disagree.

If the performance is there, who cares why?

The point is the performance is not there and they are attempting to make it appear that it is there.
 
Joe DeFuria said:
Som said:
If Nvidia is using "cheat code" to render the same things ATi renders, and performance and IQ are the same, then what does it matter?

I already explained this to you.

If nVidia has to resort to doing things that go against FM's guidelines, this means that either

1) They are detecting the app
and/or
2) They are NOT rendering what FM is asking them to.

This matters because there is no guarantee that nVidia can do this with every game out there. It's absurd to think that nVidia can. Can nVidia detect every single app and have custom code for it in their drivers? Can nVidia render something OTHER than what the app calls for when shading, and have the image quality look similar without breaking the app?

I want to know if the card I'm considering buying has the REAL DISADVANTAGE of requiring "special support" for individual apps in order to get performance up to snuff. I want to know if the card I'm considering buying needs to replace what the developer ASKS FOR with what the IHV "decides" is acceptable.

That's not the situation I'm discussing, though; you're shifting the argument. But we'll talk about your hypothetical for a bit.

And as for special support, you're right, it does seem that that's what NV3x needs. I think Nvidia was trying to bully market share with NV3x: they thought developers would follow their way of thinking because of Nvidia's superior position in the market, but they moved too soon. Devs simply aren't willing to optimize for hours on end for NV3x hardware (Cg) instead of using the industry-standard compiler (HLSL) provided by the REAL monopoly.

So in short, I think you're right that Nvidia made the wrong move with their architecture, and is paying the price for it. But I'm not sure that their trying to create as much performance as possible for their customers by putting in app-specific code for a lot of programs is cheating so much as support. Now, doing that for a benchmark is definitely marketing, and not meant to help the user, so it's a bit more questionable.

So then, is it only "cheating" when it's done for a benchmark, or is it just good optimization when it's done for something such as BF1942 or UT2003?
 
What I would like to know is: were certain cheats found based on suspicion, or were they discovered through random instruction re-ordering?
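On the re-ordering question: the idea is that if a driver keys a hand-tuned shader replacement off an exact fingerprint of the shader it was handed, then shuffling two independent instructions yields a mathematically identical shader that no longer matches, so the replacement silently stops firing and the "real" score shows up. A little illustrative sketch (hypothetical C++, made-up shader text, not anyone's actual driver code):

```cpp
#include <cstdio>
#include <functional>
#include <string>

// Two versions of the "same" shader: the second just swaps two independent
// instructions, so it computes identical results.
const std::string original  = "mul r0, t0, c0\nmul r1, t1, c1\nadd r2, r0, r1";
const std::string reordered = "mul r1, t1, c1\nmul r0, t0, c0\nadd r2, r0, r1";

// A detection scheme keyed off an exact hash of the shader text.
size_t fingerprint(const std::string& shader)
{
    return std::hash<std::string>{}(shader);
}

int main()
{
    const size_t known_target = fingerprint(original);

    // The replacement only kicks in on an exact fingerprint match, so the
    // trivially reordered (but equivalent) shader slips past the detection.
    printf("original matches detection:  %s\n",
           fingerprint(original) == known_target ? "yes" : "no");
    printf("reordered matches detection: %s\n",
           fingerprint(reordered) == known_target ? "yes" : "no");
    return 0;
}
```

Which is why a shuffled-but-equivalent build is such a clean way to test whether a score was coming from detection rather than from the hardware.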
 