Egg on Nvidia's face: 6800U caught cheating?

jolle said:
But are you sacrificing IQ when the end user can't see the difference, or is it enough that you can find a difference with these mipmap tools and blowups of screenshots?

Well, you couldn't see 3DMark's clip-planes while running the benchmark normally, only by deep digging with the special "BETA members" version.
 
jimmyjames123 said:
To be quite honest, this seems more like FUD, because at least at the moment, there is no good reason for NVDA to cheat with the NV40. The NV40's performance lead over the current generation of high end cards has already been shown through numerous tests and benchmarks.

So it's ok to cheat even if they don't need to????? :oops:


heh, that's even better than "TWIMTBP"

"Cheating even when we don't need to..."

or maybe...

"If you ain't cheating... you ain't trying"

I was hoping we were past all this stuff. Apparently not. :devilish:
 
SsP45 said:
jolle said:
But are you sacrificing IQ when the end user can't see the difference, or is it enough that you can find a difference with these mipmap tools and blowups of screenshots?

Well, you couldn't see 3DMark's clip-planes while running the benchmark normally, only by deep digging with the special "BETA members" version.

I guess it might take some rethinking on synthetic benchmarks on my part..

But isn't the clipping plane what defines what will be rendered and what won't?
In that case geometry would be disappearing from the scene if you reduce them, thus having a significant impact on the image? hehe, what with the missing stuff and all...
I didn't follow the latest driver cheating very closely since I'm not using NV hardware atm..
 
Nick Spolec said:
Geee, the Inq picked this up already...... http://www.theinquirer.net/?article=15488 They must lurk here all day long.

The Inq staff are like cockroaches.. You know they are there, you just can't see 'em..
Why DOESN'T Fudo post here? I still haven't been able to yell at him for the "6800 Ultra? Must be NV40GL!" stupidity. Oh, and "32 color-value-only pixels per clock." And lots more.
 
lol - the standard for what is trolling and what isn't is very ambiguous here. My (short-lived) leaf blower thread was at least amusing and had more substance than either of the two aforementioned threads.

Nevertheless, the green spray under the wing in the nV image looks more like a driver error than any evidence of cheating. The nose difference was very small IMO. My two cents is that what we are seeing is a case of a minor driver bug, not any systematic cheating.

I can't imagine nV taking the risk of cheating at such an early stage with NV40. Surely among the things nV didn't relish over the last couple of years is their earned reputation for being cheats. If anything, at this point I would expect them to become holier than thou.
 
jolle said:
With all this witch-hunting in that department these days, you could stop and ask yourself: does it matter?

If any chipmaker decides to sacrifice IQ that CAN'T be seen for performance, does it matter to the end user?
The line for IQ, is it drawn at what you can FIND by deep digging, or at what you SEE onscreen when you play?

Not calling any shots on this case, but just in general..
Optimizing is a good thing; when you sacrifice IQ while doing so, it's instead a cheat..
But are you sacrificing IQ when the end user can't see the difference, or is it enough that you can find a difference with these mipmap tools and blowups of screenshots?
Of course it matters!
The ostrich argument (my head is in the sand - if I can't see it, it doesn't matter!) is ridiculous.


(note - not passing judgement on nVidia here, but on the general premise given by jolle)
 
Althornin said:
jolle said:
With all this witch-hunting in that department these days, you could stop and ask yourself: does it matter?

If any chipmaker decides to sacrifice IQ that CAN'T be seen for performance, does it matter to the end user?
The line for IQ, is it drawn at what you can FIND by deep digging, or at what you SEE onscreen when you play?

Not calling any shots on this case, but just in general..
Optimizing is a good thing; when you sacrifice IQ while doing so, it's instead a cheat..
But are you sacrificing IQ when the end user can't see the difference, or is it enough that you can find a difference with these mipmap tools and blowups of screenshots?
Of course it matters!
The ostrich argument (my head is in the sand - if I can't see it, it doesn't matter!) is ridiculous.


(note - not passing judgement on nVidia here, but on the general premise given by jolle)

Then occlusion culling is cheating?
'Cause it removes things that are hidden behind other things to save performance?
You don't see the difference, but an alteration is made, and thus you have been "screwed" on those polys behind that wall somewhere...

I just mean that there should be a line somewhere; if the difference, as he stated in his investigation, isn't visible to the eye, then is the end user really getting the short end of the stick?

It's prolly different with 3DMark and the synthetics, they are MEANT to benchmark hardware under the SAME conditions..
Games are to be enjoyed at the highest IQ you can muster at a playable rate; if you get a few more FPS and can't see the IQ difference, that isn't wrong to me tho..
 
Occlusion culling is a general-case optimization that can be used in any application. Adding clip planes to a synthetic benchmark because you know it runs through the same path every time is cheating, because that type of optimization only works in synthetic benchmarks. Allowing that type of thing makes the benchmark much easier to optimize for than a real game, so it's no longer useful as a benchmark.
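
For what it's worth, here's a rough sketch of the kind of general-case occlusion culling GraphixViolence is describing, using standard OpenGL 1.5 occlusion queries (the function name and the proxy-geometry callback are made up for illustration). It works for any scene and any camera path, which is exactly what a baked-in benchmark clip plane does not:

#define GL_GLEXT_PROTOTYPES   // expose the OpenGL 1.5 query prototypes on most platforms
#include <GL/gl.h>
#include <GL/glext.h>

// General-case occlusion culling: ask the hardware whether a cheap
// bounding-box proxy produced any visible samples; if not, skip the
// expensive object it stands in for. The proxy writes nothing, so the
// final image is unchanged.
bool boundingBoxVisible(GLuint query, void (*drawBoundingBox)())
{
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // proxy must not touch the image
    glDepthMask(GL_FALSE);

    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawBoundingBox();                                     // cheap proxy geometry only
    glEndQuery(GL_SAMPLES_PASSED);

    glDepthMask(GL_TRUE);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    GLuint samples = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples); // blocks until the result is ready
    return samples > 0;
}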
 
GraphixViolence said:
Occlusion culling is a general-case optimization that can be used in any application. Adding clip planes to a synthetic benchmark because you know it runs through the same path every time is cheating, because that type of optimization only works in synthetic benchmarks. Allowing that type of thing makes the benchmark much easier to optimize for than a real game, so it's no longer useful as a benchmark.

Yeah, that is prolly what I'm trying to get across...
There is a fine line between optimizing and cheating..

I guess all synthetic benchmarks should be left totally untampered with, since they are about testing under the same conditions..
Even if you've got something that wouldn't affect the way a game looks on the end user's screen..

But again, I haven't tried to defend clipping planes or whatever there is to cheat with in benchmarks..

I just think that if you get the same visuals in a game due to optimizations, it's just that, and not cheating..
Far Cry on NV cards is a good example of how you "cheat", since it seems to drop to lower quality without consent from the user..
If it was optional, it would allow the user to choose to play at a more playable framerate with lower gfx, but it isn't asking..
 
jolle said:
Althornin said:
Of course it matters!
The ostrich argument (my head is in the sand - if I can't see it, it doesn't matter!) is ridiculous.


(note - not passing judgement on nVidia here, but on the general premise given by jolle)

Then occlusion culling is cheating?
'Cause it removes things that are hidden behind other things to save performance?
You don't see the difference, but an alteration is made, and thus you have been "screwed" on those polys behind that wall somewhere...

I just mean that there should be a line somewhere; if the difference, as he stated in his investigation, isn't visible to the eye, then is the end user really getting the short end of the stick?

It's prolly different with 3DMark and the synthetics, they are MEANT to benchmark hardware under the SAME conditions..
Games are to be enjoyed at the highest IQ you can muster at a playable rate; if you get a few more FPS and can't see the IQ difference, that isn't wrong to me tho..
No, occlusion culling is not cheating.
Why do I feel like I am talking to a brick wall here?
With occlusion culling it is IMPOSSIBLE to see a difference in the rendered image, because THERE IS NO DIFFERENCE IN FINAL OUTPUT. Ergo, it is 100% IRRELEVANT to the discussion at hand.

There is a line - does it affect the rendered image?
He didn't say "it isn't visible to the naked eye"; he said it's "not noticeable" and that "you'd be hard pushed" to notice it.

And it has nothing to do with 3DMark or synthetic benchmarks - it might have something to do with benchmarks in general, but not specifically synthetic ones.

I feel that what is "noticeable" differs from person to person, and I want to be able to make the call on noticing it or not - not have it made for me.
I also feel that giving carte blanche to "unseeable optimizations" is a quick step down a veeeery slippery slope, and I don't want to go down it.
 
jolle said:
It's prolly different with 3DMark and the synthetics, they are MEANT to benchmark hardware under the SAME conditions..
Games are to be enjoyed at the highest IQ you can muster at a playable rate; if you get a few more FPS and can't see the IQ difference, that isn't wrong to me tho..

Oh good grief Jolle - those arguments have found no fertile ground with the exception of ardent supporters of a certain company. Why do you repeat them unless you want to be known as an nVidiot?

By your own comment about synthetics, nVidia cheated. Period. And games are meant to be played the way the developer developed them, not the way an IHV decides to play them for you. Anything else is cheating the consumer, all rationalizations about "you can't see the differences" aside.
 
I thought game developers were supposed to dictate how a game runs and looks.

Not an IHV who decides that lowering the quality will boost their scores. I'm increasingly getting suspicious of all the other 6800U scores. Maybe this 2X-3X advantage over the 9800XT is more like a 30% increase when rendering the same picture using the same textures and mipmaps.

Shame on people who will accept LOWER quality on a $500 card that is supposed to push the gfx industry forward.
 
Hmm.. considering Futuremark approved the drivers, and given the image quality differences between the reference image and nVidia's, I must say something fishy is going on. A "bug" in nVidia's drivers, perhaps? :p
 
Scarlet said:
jolle said:
It's prolly different with 3DMark and the synthetics, they are MEANT to benchmark hardware under the SAME conditions..
Games are to be enjoyed at the highest IQ you can muster at a playable rate; if you get a few more FPS and can't see the IQ difference, that isn't wrong to me tho..

Oh good grief Jolle - those arguments have found no fertile ground with the exception of ardent supporters of a certain company. Why do you repeat them unless you want to be known as an nVidiot?

By your own comment about synthetics, nVidia cheated. Period. And games are meant to be played the way the developer developed them, not the way an IHV decides to play them for you. Anything else is cheating the consumer, all rationalizations about "you can't see the differences" aside.

Yes, I said synthetics should be left untampered with, and I stick by that..
It didn't strike me at first, but then it dawned on me hehe, it's 6 in the morning here and the brain is slowing down a bit..
As I said, I'm not defending anyone; I'm on a R9700 Pro and damn happy about it, and I've seen the FX doing its "own thing" in games it isn't well suited for.. like Far Cry...

All I wanted to say is that if you can make a game faster without selling out IQ, that's optimizing, which should be a good term..
But maybe it should all be in the hands of the game devs..

I see what you mean by "IHV forcing it" on people, and if they do, reviewers will give the "IQ crown" to the other guys, 'cause you've got a card that is dealing out lesser IQ than the competition..

I actually said before, on the subject of optimizations, that what draws the line is the user's ability to choose, eh, well, unless it's something that doesn't have any impact on IQ, only speed.. I guess..
So yeah, in that aspect it is a cheat I guess, even if it's affecting IQ in a "not noticeable" kind of way..

Btw, you mentioned the way the devs made it look, which is one of the important aspects I guess.
When it comes to mipmapping, I always assumed that wasn't specified in the 3D engine, but that the hardware or driver decided where and whether to use what level of mipmapping. I guess that is wrong, or does this mean IHVs override the settings specified by the 3D engine?
 
jolle said:
SsP45 said:
jolle said:
But are you sacrificing IQ when the end user can't see the difference, or is it enough that you can find a difference with these mipmap tools and blowups of screenshots?

Well, you couldn't see 3DMark's clip-planes while running the benchmark normally, only by deep digging with the special "BETA members" version.

I guess it might take some rethinking on synthetic benchmarks on my part..

But isn't the clipping plane what defines what will be rendered and what won't?
In that case geometry would be disappearing from the scene if you reduce them, thus having a significant impact on the image? hehe, what with the missing stuff and all...
I didn't follow the latest driver cheating very closely since I'm not using NV hardware atm..
A clipping plane is a more general thing. It cuts away (parts of) geometry, and is usually something a developer requests, because he/she/it wants it to be there for various reasons. Clip planes are not interchangeable with occlusion culling techniques, such as old-fashioned z-buffering and the advancements made beyond it (TBDR). The z buffer will always work; it's transparent, with no need for dynamic adjustments.
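
(Just to illustrate that last point with a trivial sketch, not anything from an actual driver: enabling the depth test is all the "dynamic adjustment" the z buffer ever needs, and the final image stays the same whatever order things are drawn in.)

#include <GL/gl.h>

// Per-pixel occlusion via the z buffer: no scene knowledge, no camera-path
// assumptions, and no change to the final output.
void setupDepthTest()
{
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);          // keep the nearest fragment at each pixel
    glClear(GL_DEPTH_BUFFER_BIT);    // once per frame
    // Draw the scene in any order after this; hidden fragments are rejected automatically.
}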

Now. Clip planes can help you save fillrate on objects that you know will get overwritten by other objects later in the frame. But they are not a universal technique, because rendering APIs are not scene graphs, so they don't know what they would need to know about the scene to construct a clip plane that would always work right. OTOH a developer might know more about expected scene composition, so it can be done in the rendering engine, if wanted. El cheapo clip functionality such as the scissor test (x/y bounds) and "UltraShadow" (depth bounds and depth clamp) is meant for exactly this purpose, in addition to the more free-form (but also more expensive) clip planes.
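
To make that concrete, a small developer-side sketch with standard OpenGL calls and made-up numbers; the depth-bounds part of "UltraShadow" lives behind an extension (EXT_depth_bounds_test) and is left out to keep the snippet self-contained:

#include <GL/gl.h>

// Developer-requested clipping, as a rendering engine would set it up.
void developerRequestedClipping()
{
    // Scissor test: reject everything outside an x/y window rectangle.
    glEnable(GL_SCISSOR_TEST);
    glScissor(0, 0, 640, 360);                           // example rectangle

    // Free-form user clip plane: keep only geometry with a*x + b*y + c*z + d >= 0.
    // More flexible than the scissor, but also more expensive.
    const GLdouble planeEq[4] = { 0.0, 1.0, 0.0, 0.0 };  // example plane, e.g. for water/mirror rendering
    glClipPlane(GL_CLIP_PLANE0, planeEq);
    glEnable(GL_CLIP_PLANE0);
}
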
It isn't feasible for IMRs' drivers to extract final visibility information, because you need the transformed, post-vertex-shader geometry for most of the frame to even start the analysis. That needs lots of storage space and will most probably cause pipeline stalls if you want to run the vertex shader in hardware and the analysis on the host CPU. It's computationally cheaper to just fire it all off regardless.

The "clip plane drivers" didn't analyse the geometry on a per frame basis. If they did, it would have been a valid optimization, because it wouldn't cause rendering glitches (if done correctly), regardless of scene composition and camera position. It would have been very expensive, which is the obvious reason why it hasn't, ever, been done by any IMR driver. Exception: 3dfx geometry assist. Didn't work. 'nuff said.

The clip planes were static, which is computationally free (you just store the parameters in the driver). This is the reason why, with this "optimization", the thing was faster than without it, but it's also the reason why there's no way in hell you could use it for any application with an unknown scene composition or camera path (read: everything except 3DMark or publicly available timedemos).
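
Purely as a hypothetical illustration of what "static" means here (none of this is real driver code; the application name and the plane constants are invented), the whole "optimization" boils down to detecting one known application and installing plane parameters that were chosen offline against its fixed camera path, which is also why it falls apart the moment the camera leaves that path:

#include <cstring>
#include <GL/gl.h>

// Hypothetical driver-side hook, for illustration only.
void onApplicationDetected(const char* exeName)
{
    if (std::strcmp(exeName, "3DMark03.exe") == 0)   // invented detection
    {
        // Plane parameters chosen offline against the benchmark's fixed
        // camera path; storing four constants costs nothing at run time.
        static const GLdouble bakedPlane[4] = { 0.1, 0.0, -1.0, 350.0 };  // made-up numbers
        glClipPlane(GL_CLIP_PLANE5, bakedPlane);
        glEnable(GL_CLIP_PLANE5);
        // Off the expected path (e.g. with a free-look camera), geometry on
        // the wrong side of this plane simply vanishes.
    }
}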
 
Now how could it be a bug? I thought Nvidia had the golden drivers... :LOL: :rolleyes:
 