Reply from nVidia about the 3DMark anomaly

Chalnoth said:
No, because Morrowind is currently not a widely-used benchmark. But, I am hopeful that there are no game-specific optimizations needed by 3D video card companies for this game, or most any other.

Ok, so let's say ATI can make game-specific optimizations for both Quake3 and Morrowind that boost performance 50% without having any negative effects whatsoever.

ATI should then only apply the optimizations to Morrowind because applying them to Q3 would be cheating?
 
Remember, also, that ATi (and presumably everyone else) has made the Q3 optimisations generic for Q3-engined games, which benefits many games that aren't used for benchmarks.
 
Chalnoth said:
No, because Morrowind is currently not a widely-used benchmark.

So what happens if it becomes one?
Is ATI/nVidia expected to remove their Morrowind optimizations and suffer FPS losses? Are you aware of how absolutely ridiculous that sounds?
The problem YOU have is that you'd like to be able to take a single game's performance on two cards and use that to extrapolate performance for those cards elsewhere.
This is flawed because games, as you mentioned, use different code paths anyway. Therefore, the game-specific driver optimizations are irrelevant, because you cannot extrapolate performance from a single game to another anyway.
 
DaveBaumann said:
Remember, also, that ATi (and presumably everyone else) has made the Q3 optimisations generic for Q3-engined games, which benefits many games that aren't used for benchmarks.

Dave,

I would find that to be the case. If you look at all of the other games running the Q3 engine that we have benchmark scores for, they show the 8500 doing very well in those games. In some games it meets or beats a GF4 Ti4200 (Jedi Outcast, for example). What's interesting is the fact that the 8500 usually loses out in Q3 by a large margin, but in these other Q3-powered games the margin is very small.
 
I just had to comment here; this was driving me insane.

Chalnoth said:
It's a cheat because Quake3 is a benchmark.

Besides, good drivers shouldn't need any game detection, except for, perhaps, those situations where games were developed on poor drivers from a different manufacturer, and the game developer refused to patch to fix the problem.

This REALLY bugged me. Have we gotten to the point where Quake3 is only considered a benchmark??? Sad :(

Quake3 is a GAME; it was NEVER made to be a benchmark. Hell, the demo that was released first didn't even have benchmarking code in it! It was meant as a GAME! The fact that it CAN benchmark is cool and all, but that is not the main point of the game. It's a nice feature, but that is all. I know some companies maybe see it as only a benchmark, and it's sad indeed. But if we give in to those notions and support them on that fact, then we aren't helping the situation! We must stay true to the fact that it is indeed just a game, like many others.

Doomtrooper said:
Detecting Quake3 is not a CHEAT :rolleyes: ... like I said before, if a video card company saw a way of improving the game's performance on their hardware without lowering IQ, how the hell do you assume that is cheating?
Nobody really knows what the application detection of Quake 3 was doing in the Radeon drivers, but it was in there almost from the beginning of the Radeon cards; this was not something ATI put in specifically for the 8500 debut, like some website tried to say.
I still own a Radeon card, and there are countless reviews on the net; if you can find any of them complaining about the image quality... then you can make your claim.

I totally agree with you; I am all for companies figuring out ways to make the game run faster with no IQ loss on their hardware. It only benefits us, the gamers!

What I think they maybe should do, though, is explain in their readme or some kind of documentation that they ARE using some kind of optimization in that game. Maybe something that says: yes, we are doing an optimization; it doesn't have any loss in IQ but has proven to increase performance on our hardware. And if they could, maybe even give details about what they did. THAT would eliminate people calling it 'cheating'.
 
jb said:
What's interesting is the fact that the 8500 usually loses out in Q3 by a large margin, but in these other Q3-powered games the margin is very small.

Q3 (and hence Q3-engined games) use hardware transformation minimally, if at all. Later games that you see, such as RtCW and JKII, are much more geometrically complex than Q3, and so the onus is on the CPU/system rather than the fill/bandwidth performance of the 3D card. This is most likely why the differences are minimised.
 
Brutal Deluxe said:
Ok, so let's say ATI can make game-specific optimizations for both Quake3 and Morrowind that boost performance 50% without having any negative effects whatsoever.

ATI should then only apply the optimizations to Morrowind because applying them to Q3 would be cheating?

Very hypothetical, but let's explore this.

The game-specific optimization for Morrowind would not be cheating, but it would not be good, either. In fact, any game-specific coding in drivers is not good, as that means that the drivers are being designed for certain games at the expense of others. For optimal performance and compatibility in the widest variety of games possible, it should just make sense that there should be no game-specific optimizations.

After all, if ATI designs optimizations for high-profile game A, then they are most certainly neglecting game B.

By contrast, if ATI manages to optimize a certain algorithm that is used often in Quake3, they may improve performance on a variety of other games that use a similar rendering algorithm.
 
Brent said:
This REALLY bugged me. Have we gotten to the point where Quake3 is only considered a benchmark??? Sad :(

Quake3 is a GAME; it was NEVER made to be a benchmark. Hell, the demo that was released first didn't even have benchmarking code in it! It was meant as a GAME! The fact that it CAN benchmark is cool and all, but that is not the main point of the game.

But it doesn't matter what Quake3 was meant to be! Regardless of what else Quake3 is, it is the most widely-used game benchmark (3DMark2k1 is not a game benchmark...) in existence today.

You are deluding yourself if you think ATI designed specific optimizations for that game for any other reason than the fact that Q3 is the most popular game benchmark.
 
You are deluding yourself if you think ATI designed specific optimizations for that game for any other reason than the fact that Q3 is the most popular game benchmark.

I think you'd be deluding yourself if you thought nobody else did the same.
 
Chalnoth said:
You are deluding yourself if you think ATI designed specific optimizations for that game for any other reason than the fact that Q3 is the most popular game benchmark.

Whatever optimizations were put into my original Radeon drivers, I certainly have no complaints about them; the IQ was unmatched at the time and it was as fast as a GTS. So as a Radeon owner during that time I certainly had no reason to complain; I got great frame rates with superior IQ... even with texture compression enabled ;)
 
jb said:
What's interesting is the fact that the 8500 usually loses out in Q3 by a large margin, but in these other Q3-powered games the margin is very small.

A few recent reviews have shown the 8500 ahead of the Ti4200 in RTCW, and in JK2 level with the Ti4600. This may not mean nVidia has Q3-specific optimizations, though. It may just be because of the different demands of JK2 (high-res textures?).
 
Chalnoth,

In matters that are not "ATI vs. nVidia" you seem to be fairly level-headed. I just don't get your attitude here.

In fact, any game-specific coding in drivers is not good, as that means that the drivers are being designed for certain games at the expense of others.

First of all, that's wrong.

Every game will use the hardware in different ways. The means by which the drivers handle graphics calls are not going to be "optimal" for all possible ways that the hardware can be accessed and/or stressed.

I don't see you throwing up a red flag because nVidia (or ATI) has different GL driver optimizations for "Quadro" vs. GeForce cards. They're all "OpenGL Apps", aren't they? Why not use one single driver? Is there cheating going on?

The point is, optimizations for "one game/engine" are NOT necessarily "at the expense of others." They can simply be "for the benefit of the particular app".

Driver writers (and hardware designers, for that matter) cannot possibly have one, "generic" driver that is OPTIMAL for all cases. At best, they will have a base "widest compatibility" path, and then have separate OPTIMIZED paths for specific apps (or types of apps, such as fill-rate-limited vs. polygon-limited apps).


Regardless of what else Quake3 is, it is the most widely-used game benchmark

Regardless of how Quake3 is used, it is STILL A GAME. So at the very least, any "Quake3 specific" optimizations will benefit A GAME. The same cannot be said of 3D Mark. And specific optimizations for 3D Mark apply only to a synthetic test.

You are deluding yourself if you think ATI designed specific optimizations for that game for any other reason than the fact that Q3 is the most popular game benchmark.

I don't believe anyone said or implied anything different. What's your point? Does this not make Quake3 a game?

You are deluding yourself if you think that the nVidia 3D Mark "anomaly" exists for any other reason than that it is the most popular synthetic gaming benchmark in existence.
 
Joe DeFuria said:
First of all, that's wrong.

Every game will use the hardware in different ways. The means by which the drivers handle graphics calls are not going to be "optimal" for all possible ways that the hardware can be accessed and/or stressed.

Which is why situation-specific optimizations aren't a bad thing in the least. It's the game-specific optimizations that are bad, because they don't help other games that have similar rendering situations.

Some examples of situations that might trigger certain optimizations:

1. Frequent render-state changes.
2. Rendering a certain amount of textures that aren't stored in video memory, but change from frame to frame.
3. Texture memory usage statistics (i.e. different codepath for few textures than many, different for large textures than small).

I'm sure there are more, but I'm not particularly sure of what those situations might be.

I don't see you throwing up a red flag because nVidia (or ATI) has different GL driver optimizations for "Quadro" vs. GeForce cards. They're all "OpenGL Apps", aren't they? Why not use one single driver? Is there cheating going on?

Of course not! Games and professional OpenGL apps are fundamentally different in the way they render. In games, there's absolutely no reason to optimize, for example, for wireframe or solid-fill (no textures) situations. But with a Quadro, there would certainly be.

The point is, optimizations for "one game/engine" are NOT necessarily "at the expense of others." They can simply be "for the benefit of the particular app".

That's assuming that ATI or nVidia has an infinite amount of manpower and time to develop these drivers. Given the fact that neither company has yet put out perfect drivers, this is obviously not the case. Since both companies should definitely work on optimizing the rendering, it just makes sense to do it for many games at once, instead of for one or two games at a time.
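As a purely hypothetical aside, the kind of situation-triggered (rather than game-triggered) codepath selection described in the list above might look something like the sketch below. The thresholds, structure names, and paths are all invented for illustration and do not describe any actual driver.

```cpp
// Hypothetical sketch only: a driver choosing an internal codepath from
// observed per-frame behaviour instead of from which game is running.
// All names and thresholds here are invented for illustration.
#include <cstdio>

enum class CodePath { Default, BatchedStateChanges, StreamingTextures };

struct FrameStats {
    int renderStateChanges;    // render-state changes seen this frame
    int dynamicTextureBytes;   // texture data re-uploaded this frame
};

CodePath choosePath(const FrameStats& s) {
    // Heuristics corresponding to the "situations" listed above:
    if (s.dynamicTextureBytes > 8 * 1024 * 1024)
        return CodePath::StreamingTextures;     // heavy frame-to-frame texture churn
    if (s.renderStateChanges > 5000)
        return CodePath::BatchedStateChanges;   // very frequent state changes
    return CodePath::Default;                   // "widest compatibility" path
}

int main() {
    FrameStats frame{6200, 2 * 1024 * 1024};
    std::printf("selected path: %d\n", static_cast<int>(choosePath(frame)));
    return 0;
}
```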
 
There is nothing wrong with per-situation or per-game optimizations as long as the optimization DOES NOT MAKE IQ WORSE.

If the driver, say, drops to 16-bit textures or 16-bit color, disables AA, lowers anisotropic filtering, or lowers texture LOD WITHOUT YOUR KNOWLEDGE to try to boost framerates at the expense of IQ, it is a CHEAT. The reason it is a cheat is that the benchmark/game reports that feature X is enabled and reports framerates based on that, but the driver actually disabled feature X!

As a game player, I can make these choices manually. I don't need the driver DISOBEYING MY GAME SETTINGS and trying to boost performance by hiding drops in IQ.

Consider a driver which randomly turned filtering down to bilinear on certain surfaces. During normal play, you might not notice during the heated action, but in the screenshots, you will notice. This, IMHO, is a pure cheat and nothing but a cheat.


If, on the other hand, the driver realizes that GAME X is, say, a heavy user of the stencil buffer, and switches to a different rendering code path that is heavily optimized for maximizing stencil fill (Doom3, anyone?), there is no problem at all. The rendered image will be IDENTICAL to that of a non-optimized driver; the only difference is that the driver detected the game and optimized the driver's rendering path.


In other words: drivers which detect games and use game-specific code transformations of the execution path (optimization) must be IMAGE QUALITY PRESERVING to not be considered a CHEAT.

Game-specific driver optimizations should boost *performance only*, not at the expense of IQ. IQ settings should be strictly controllable by the user in the game settings or the driver settings tab.
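To make that distinction concrete, here is a rough, hypothetical sketch of the IQ-preserving kind of game detection being described: the driver swaps in a faster but functionally identical path for a detected title and never touches user-controlled quality settings. The executable name and path names are invented for illustration.

```cpp
// Hypothetical sketch only: IQ-preserving game detection. The driver swaps in
// a faster but functionally identical rendering path for a detected title;
// it never silently changes user-controlled quality settings.
#include <cstdio>
#include <string>

enum class RenderPath { Generic, FastStencilFill };

// exeName is whatever identifier the driver sees when the application starts.
RenderPath selectRenderPath(const std::string& exeName) {
    if (exeName == "doom3.exe")              // invented example of a stencil-heavy title
        return RenderPath::FastStencilFill;  // same output image, faster stencil fill
    return RenderPath::Generic;
}

int main() {
    std::printf("doom3.exe -> path %d\n",
                static_cast<int>(selectRenderPath("doom3.exe")));
    std::printf("other.exe -> path %d\n",
                static_cast<int>(selectRenderPath("other.exe")));
    return 0;
}
```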
 
Such optimizations will result in little-known games or modifications that use somewhat non-standard rendering paths getting terrible performance by comparison, which is why I dislike any game-specific optimization.
 
It's the game-specific optimizations that are bad, because they don't help other games that have similar rendering situations.

Some examples of situations that might trigger certain optimizations..

Chalnoth, you talk as if the driver is supposed to "optimize" for these situations "on the fly", so to speak. This is completely contradictory to your later comment:

Of course not! Games and professional OpenGL apps are fundamentally different in the way they render. In games, there's absolutely no reason to optimize, for example, for wireframe or solid-fill (no textures) situations. But with a Quadro, there would certainly be.

So why don't the drivers "KNOW" when they are doing "wire-frame" or non-texture fill? You are claiming that the drivers should realize when they are "Rendering a certain amount of textures that aren't stored in video memory, but change from frame to frame" and be optimized accordingly, but then if there are NO textures, this requires a separate optimized driver be written?

You simply make no sense.

That's assuming that ATI or nVidia has an infinite amount of manpower and time to develop these drivers.

Which is why it's reasonable to assume that the FIRST applications they "optimize" for are the very ones that people use to "test" their cards.

In a nutshell... I agree with DemoCoder. I don't care WHAT "application specific" optimizations are used, as long as it doesn't negatively affect image quality to get the performance boost.
 
Joe DeFuria said:
So why don't the drivers "KNOW" when they are doing "wire-frame" or non-texture fill? You are claiming that the drivers should realize when they are "Rendering a certain amount of textures that aren't stored in video memory, but change from frame to frame" and be optimized accordingly, but then if there are NO textures, this requires a separate optimized driver be written?

You simply make no sense.

Why bother to optimize the wireframe and solid-fill codepaths in GeForce drivers when there is always the potential that those optimizations will affect GeForce drivers adversely? It's far better to have different driver teams work on the different situations.

Additionally, since there are far, far fewer 3D modelling programs, it is not unfeasible to specifically optimize for each one (it is unfeasible to optimize for each game individually). Obviously, such optimization is better than "on the fly" optimizations.

Which is why it's reasonable to assume that the FIRST applications they "optimize" for are the very ones that people use to "test" their cards.

In a nutshell... I agree with DemoCoder. I don't care WHAT "application specific" optimizations are used, as long as it doesn't negatively affect image quality to get the performance boost.

Of course, by that definition, nVidia isn't cheating with 3DMark2001, given that it is an optimization and not a bug, since image quality is not decreased.
 
Bambers,


Hmmm, if it were just a case of the higher-res textures, you might think that the ATI cards should then fall behind, as the GF4 has better memory management.


Chalnoth,

I have always held to the belief that both issues with nVidia's drivers and 3DMark were bugs. I just found it highly ironic that the SAME thing happened with the 8500 in Q3, and people like you assumed they were cheating without looking into all the facts.

Game optimizations are done all the time even by the developers, and yet it's OK for them to do it. Why not let the hardware vendors do the same thing? Take the SS games: highly different optimizations were done for different cards (for example, taking advantage of the K2's ability to use more than 2 textures per pass). Yet no one is complaining about why other games don't use the same set of optimizations. After all, they should be able to help "other games that have similar rendering situations", to use your own words...
 
I think everyone here agrees that it would be best without game-specific optimizations. Not because they are cheats, and not because they might neglect other games, but simply because it would be best if they weren't needed at all.

Sadly, this is not the 3D vendor's responsibility alone. The game developer is responsible as well. And he too is in a dilemma: one optimization might work great for an ATI card but suck for an nVIDIA card, and so forth. Besides, he doesn't know either card half as well as the respective 3D vendor does.


Let's try another (very) hypothetical situation.

Some new game comes out; this game takes full advantage of the R8500 and not of the GF4 Ti4600. The result is that the Radeon is twice as fast as the GeForce at every quality level. The game developer doesn't see it as a problem and changes nothing. Now this game is a good one and it gets REALLY popular. With that, it becomes more and more used as a benchmark, and eventually everybody uses it.

Wouldn't you, as nVIDIA, want to get some drivers out there that put your card in a fairer light? I sure as hell would. And wouldn't that be perfectly OK? Not only does it make the benchmarks fairer, but the thousands of gamers who love the game and own an nVIDIA card get a much better experience, since they can now run max detail @ 60-70 FPS.
 
I think everyone here agrees that it would be best without game-specific optimizations.

Since when? Game-specific optimizations are absolutely fantastic and should be implemented wherever possible and likely to provide benefit.

A very, very, very elementary principle in software is this: the more you shield a code path, the less the implications for QA. By using game-specific optimizations and/or bug fixes and ensuring a new code path does not execute in anything other than the destined target, you've reduced the complexity and the chance of ill effects in other software almost to nothing.

I just can't believe the amount of NV-zeal it takes to express such bold, ignorant and comedic views on *software design*. It comes down to forcing a silly and ludicrous standpoint, all in the good name of trying to bend it into some sort of evidence against ATI, when the very theory itself is best practice and one that even the most boneheaded developers will implement daily, with good reason.

Gee, by all means, if there is a specific issue or codepath fix that has an immediate effect in Game X, be sure to throw it into the drivers unshielded, so Games A through Z,ZZZ,ZZZ (which won't all get full QA coverage unless we target a driver revision date of Feb 2011) may crash, break, or otherwise suffer ill effects from the changes implemented for Game X. Yeah, that standpoint makes sense... not.
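For what it's worth, the "shielded code path" principle described above can be illustrated with a small hypothetical sketch: the new path is gated so it can only ever execute for its one intended target, which is what keeps the QA impact on every other title at essentially zero. All names here are invented for illustration.

```cpp
// Hypothetical sketch only: shielding a game-specific change so it cannot
// execute for any other application. Names are invented for illustration.
#include <cstdio>
#include <string>

bool isDestinedTarget(const std::string& exeName) {
    return exeName == "gamex.exe";   // the one title the change was written for
}

void submitFrame(const std::string& exeName) {
    if (isDestinedTarget(exeName)) {
        std::puts("using the game-specific workaround / tuned path");
    } else {
        std::puts("using the untouched default path");  // every other game is unaffected
    }
}

int main() {
    submitFrame("gamex.exe");
    submitFrame("quake3.exe");
    return 0;
}
```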
 