ET article confirms ATi/Nvidia cheats

The Baron said:
Noooooooo! Not more Flash graphs! DHTML, layers, and CSS, fair enough, but Flash? Argh. I miss so many ads by not running Flash...

In all honesty, though, I like the customizable graph idea. Somebody needs to develop (a non-Flash version of) that further.

I use a program called FlashSwitch (http://www.flashswitch.com/). It's a little icon that sits in your systray; you just click it to turn Flash on and off and reload the page (assuming you use IE, of course).
 
Ok - thanks Dave.

Now - is it just me or does anyone else think the results are whacked? I mean, the scores are not even close. Hell - at a quick glance we are looking at a 49% difference between the 1280x1024 scores and a 52% difference at 1600x1200...

This has probably already been asked - but exactly how different are the settings between the two manufacturers? In regards to card settings (AA/FSAA, etc.)...

BTW Dave, I am not questioning your tests or the results - or whoever provided the numbers. But with a speed difference that large across the board, one would expect to see similar results in other benchmarks.

Maybe I have missed something - but I do not recall any result showing 40% or greater...

Something does not add up here....
 
Saem said:
Could always do it with java script. Lots of math folk seem to be all over that sort of thing.

Err... JavaScript is client-side, and it doesn't have any graphics capability anyway.

My suggestion? Well, of course you keep all the results in a simple database; MySQL would be an obvious choice. You have a CGI script in Perl (or something) that handles the user's request, selects the proper data from the database, and formats it all for a pass off to...

...jgraph, a utility which outputs a beautiful graph in...

...PostScript. Um.

So then, as per the handy suggestion on the jgraph site, we pipe the result to a command-line invocation of ghostscript, and pipe that result to pnmtopng (or ppmtogif), tie it all up with a ball of twine, and voila! A beautiful image file to serve up to our discerning viewer, assuming he has not gotten bored waiting for all of this to occur and gone to the kitchen to make himself a sandwich.
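
Off the top of my head, the CGI glue might look something like the sketch below. Completely untested, mind you, and the database/table/column names and the jgraph incantation are all just made up for illustration, so check the man pages before blaming me:

#!/usr/bin/perl -w
use strict;
use CGI;
use DBI;

$| = 1;   # flush the HTTP header before the pipeline writes to stdout
my $q   = CGI->new;
my $dbh = DBI->connect('dbi:mysql:benchdb', 'user', 'pass')
    or die $DBI::errstr;

# Hypothetical schema: scores(card, resolution, aa, af, fps)
my $sth = $dbh->prepare(
    'SELECT card, fps FROM scores WHERE resolution = ? AND aa = ? AND af = ?');
$sth->execute(scalar $q->param('res'),
              scalar $q->param('aa'),
              scalar $q->param('af'));

# Write a jgraph description of a simple bar chart, one bar per card
# (a real version would use File::Temp so concurrent requests don't clash)
open my $jgr, '>', '/tmp/graph.jgr' or die "can't write graph file: $!";
print $jgr "newgraph\nyaxis label : fps\nnewcurve marktype ybar pts\n";
my $x = 0;
while (my ($card, $fps) = $sth->fetchrow_array) {
    print $jgr ++$x, " $fps\n";
}
close $jgr;

# jgraph -> PostScript, ghostscript -> PPM, pnmtopng -> PNG
print $q->header('image/png');
system('jgraph /tmp/graph.jgr'
     . ' | gs -q -dBATCH -dNOPAUSE -sDEVICE=ppmraw -sOutputFile=- -'
     . ' | pnmtopng');

And if you cache the finished PNGs in a directory keyed on the query parameters, the server only has to run that whole circus once per unique graph.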

Alternatively, Perl has some native image producing support via Perl/Tk, although I wouldn't want to code up the means to produce graphs that way.

Alternaternatively, someone has probably already done it and posted it to CPAN.

Or, seeing as how the total amount of data surely won't be very large: a dozen or so video cards * a few PC configurations * a dozen or two tests * five output resolutions * four or five AA settings * four or five AF settings * two or three AF algorithm choices * four or five bytes to encode the score (ooo!--less if you use packed decimal!) = maybe a MB or so of data. Ok, that's actually larger than I thought. But anyways, the idea was to just download all the data, and have a client-side java applet generate the graphs or something.
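
Quick sanity check, taking the upper end of each of those guesses:

perl -le 'print 12 * 3 * 24 * 5 * 5 * 5 * 3 * 4'    # 1296000, so ~1.3 MB

(That's 12 cards, 3 configs, 24 tests, 5 resolutions, 5 AA, 5 AF, 3 AF algorithms, 4 bytes per score.)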

Well...?

What are you waiting for, Wavey? Go to it! And make some custom timedemos for every review while you're at it! And speaking of, where is that NV35 review anyways?! :p

;)
 
The Baron said:
Noooooooo! Not more Flash graphs! DHTML, layers, and CSS, fair enough, but Flash? Argh. I miss so many ads by not running Flash...

In all honesty, though, I like the customizable graph idea. Somebody needs to develop (a non-Flash version of) that further.
I certainly didn't mean Flash (I've complained already on AT's forums about that). I meant some script that would allow users to download only graphs for the settings they consider comparable. Generating them on the fly on the user's PC is very intriguing, tho, and would probably save both bandwidth and CPU time on the server--very interesting idea, especially given how ridiculously fast even the slowest new PCs are nowadays.

I remember Marco saying he was working on something like that, but that was a long time ago. I'm not sure if it's still on the way, or if it proved to be more trouble than it's worth.

BTW, I just noticed CNet's reviews only use 3DM03 and UT2K3 as benchmarks. I'm guessing it's a fairly popular site, so we can see that 3DM03 cheating can be a big issue. Nice to see CNet has taken down their overall 5900U score to reevaluate after they retest 3DM03. Funny how a seemingly more technical and reader-oriented site like AT can't even do that: post a blurb at the relevant points throughout the article notifying readers that they are investigating suspect numbers.
 
Pete said:
BTW, I just noticed CNet's reviews only use 3DM03 and UT2K3 as benchmarks.
On a side note, do you think a member of Futuremark's beta program wouldn't use 3DMark? It would be a great moment if CNET wasn't one. :LOL: :devilish:

It's quite a popular site.
 
Doomtrooper said:
A much needed update..well done.

rofl

I'm sorry, but this update seems pointless...

ET said:
So what happened? We suspect that in preparing our version 3.30 numbers, we went into ATI's driver control panel, and set the correct AF and AA parameters, 8X and 4X, respectively. But upon launching the application, we proceeded to also set those same parameters at an application level, which over-rode the driver-level settings, and had ATI doing the work specified by the app, rather than letting the driver handle AF filtering decisions.

Why are these different? And for a .5% gain?

An update where the difference is within the margin of error, and one that brings up more questions than it answers.
[edit- to explain my post a little better]
 
Deflection said:
ET said:
So what happened? We suspect that in preparing our version 3.30 numbers, we went into ATI's driver control panel, and set the correct AF and AA parameters, 8X and 4X, respectively. But upon launching the application, we proceeded to also set those same parameters at an application level, which over-rode the driver-level settings, and had ATI doing the work specified by the app, rather than letting the driver handle AF filtering decisions.

Why are these different? And for a .5% gain?
The difference between app and driver aniso is around 10%? I'd really like to know what the difference is. Does 3DMark03 use different texture filtering settings (bilinear vs. trilinear) when aniso is used?
 
OK, I'll throw my results into the mix:

System:
Intel P4 1.8A GHz @ 2.25 GHz
Radeon 9700 with Catalyst 3.4 drivers

Control panel Direct3D changed settings:
- forced performance 4x anisotropic filtering (I play my games with that ;) )
- forced wait for vsync to off

3DMark settings were left at default, only resolution was changed to 1600x1200.

The results for game test 4:
3DMark 2003 build 313 = 16.6 fps
3DMark 2003 build 320 = 16.6 fps
3DMark 2003 build 330 = 14.9 fps

So:
- Build 330 is (100 - ((14.9/16.6) * 100)) = 10.24% slower than build 320.
- Build 320 is (((16.6/14.9) * 100) - 100) = 11.41% faster than build 330.

(Both points under 'so' are valid. You can just pick the one that suits you best. Rule number one of making a benchmark difference look more or less impressive. ;) )
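
For anyone who wants to double-check the arithmetic:

perl -e 'printf "%.2f\n", 100 - 14.9/16.6*100'    # 10.24 (how much slower 330 is)
perl -e 'printf "%.2f\n", 16.6/14.9*100 - 100'    # 11.41 (how much faster 320 is)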

As far as I am concerned, the ExtremeTech figures for gaming test 4 are completely valid.
 
OT: Question to John Reynolds

Hi John,

out of curiosity, are you the same John Reynolds who was quite against 32-bit color in the early TNT2 and G400 years? ;)
 
Re: OT: Question to John Reynolds

sonix666 said:
Hi John,

out of curiosity, are you the same John Reynolds who was quite against 32-bit color in the early TNT2 and G400 years? ;)

I don't remember him, or anyone else, being against 32-bit color. I do recall that I, for example, was for 22-bit color: near-32-bit quality at near-16-bit performance. John was probably similar in his views, but he'll have to speak for himself. ;)
 
Yeah yeah, the '22-bit' of the Voodoo3. That post filter still makes me laugh. I was convinced 32-bit was the way to go, so I got myself a G400 back then. And yeah, it was very fast in 32-bit color. Some people who went for the Voodoo3 ended up quite jealous of the image quality of my G400. ;)

Now, the big question, what do you think of floating point? ;)
 
Re: OT: Question to John Reynolds

sonix666 said:
Hi John,

out of curiosity, are you the same John Reynolds who was quite against 32-bit color in the early TNT2 and G400 years? ;)

I was opposed to 32-bit being discussed outside of performance, just as I argue against people who constantly bring up ATi's AF algorithm without mentioning how fast it is. Maybe it's just me, but I've always liked trade-offs that give both good IQ and good performance (22-bit post filter, FP24, etc.).

But as for 32-bit output, I didn't start using it until November of '01, when I bought a GF3 Ti.
 
sonix666 said:
Yeah yeah, the '22-bit' of the Voodoo3. That post filter still makes me laugh.

Um, why? It was a very good trade-off. 22-bit isn't as good (quality-wise) as 32-bit, but it was significantly better than 16-bit, and the performance was better than the comparably specced/priced competition running at 32-bit.

I was convinced 32-bit was the way to go, so I got myself a G400 back then.

Glad you were happy!

And yeah, it was very fast in 32-bit color. Some people who went for the Voodoo3 ended up quite jealous of the image quality of my G400. ;)

Of course, the G400 wasn't nearly as "compatible" as the Voodoo3 back in the day, nor was it as cheap. It's all about the trade-offs.

Now, the big question, what do you think of floating point? ;)

Um, what about it? :?:
 
Um, why? It was a very good trade-off. 22-bit isn't as good (quality-wise) as 32-bit, but it was significantly better than 16-bit, and the performance was better than the comparably specced/priced competition running at 32-bit.
Only the post filter made it look somewhat better; internally it was still just as 16-bit as ever, resulting in banding and ugly colors as soon as transparency was used.

Of course, the G400 wasn't nearly as "compatible" as the Voodoo3 back in the day, nor was it as cheap. It's all about the trade-offs.
I never noticed that it was less "compatible". 3dfx and Glide were already well into their final days by then, bigtime.
 
sonix666 said:
Only the post filter made it look somewhat better, however, internally it was still just as 16 bit as ever before, resulting in banding and ugly colors as soon as transparency was used.

Actually, once you tinkered a bit with the right registry settings and set the alpha dither mode to the correct value (which depended on the app), you could get pretty good results even with transparent objects.
 