ATI Benchmarking Whitepaper

Getting back to the original topic at hand, I only have one minor bone to pick with the whitepaper. It's the next-to-last paragraph before the conclusion:

Our second recommendation is to avoid benchmarking games and settings that produce excessively high frame rates. The difference between 100 fps and 300 fps might seem large, but since a display with a 75 Hz refresh rate cannot display more than 75 frames per second, the difference will be impossible to detect in practice.

Usually, FPS is given as an average. In this example, with an average of 300 FPS you are much less likely to have frames dip below the monitor refresh rate of 75 Hz than when your average FPS is 100.

In other words, I agree that settings that give excessively high frame rates are not particularly important for gameplay impact...but I don't consider the 100 FPS (average) in this example excessively high. 250 vs. 300 FPS...yes, why bother.
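To make that concrete, here's a tiny sketch with made-up frame-time numbers (nothing measured, and nothing from the whitepaper) showing how two runs with very different averages relate to a 75 Hz refresh:

```python
# Hypothetical frame times in milliseconds -- invented purely for illustration.
run_fast = [3.0] * 95 + [13.0] * 5   # averages ~286 fps, slowest frame ~77 fps
run_mid  = [9.0] * 95 + [22.0] * 5   # averages ~104 fps, slowest frame ~45 fps

def fps_stats(frame_times_ms):
    """Return (average fps, fps of the single slowest frame)."""
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    worst_fps = 1000.0 / max(frame_times_ms)
    return avg_fps, worst_fps

for name, run in (("fast run", run_fast), ("medium run", run_mid)):
    avg, worst = fps_stats(run)
    verdict = "stays above" if worst >= 75 else "dips below"
    print(f"{name}: average {avg:.0f} fps, slowest frame {worst:.0f} fps ({verdict} a 75 Hz refresh)")
```

The averages alone don't reveal that difference in headroom, which is exactly why a 100 FPS average isn't "excessively high" in the way a 300 FPS average is.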

In addition, even if the frame rates are excessively high (250 vs. 300 FPS), that doesn't make the test completely useless. It does become more or less irrelevant from an "impact on gameplay" perspective, but it can still be useful as a synthetic measurement. Quake3, for example, can be used as something like a "synthetic test of multitextured fill-rate in a game situation."

The same can be said for Unreal Tournament 2003 fly-by tests.

The problem with using Quake3 today isn't that the test itself is useless...it's that review sites don't use the scores properly when evaluating the results.
 
I can agree with that, Joe. Tests--no matter what they are--just need the appropriate context included, and need to be called out and weighted appropriately in the commentary. (Perhaps even given a sub-cue in the chart for those people who mainly pull graphs and numbers and run with them.) The only things that become "useless" are tests where you can't pull a single facet of relevance to modern gaming--and those I imagine would be few and far between. (At least among the things people benchmark.) The important thing is to recognize what tests actually represent, and make sure you TELL the readership that.

The only thing that strikes me as really "marketing" is the broad 5600U/9600XT comparison chart (and from all reviews I've seen they seem to have overstated their numbers in some cases and understated them in others, so it's obviously not anything people will use as "rote"--especially since it doesn't supply real numbers anyway), and I suppose it COULD let reviewers pick and choose tests to favor one IHV over the other--but in this climate those kinds of people stand out as crappy reviewers anyway.

For reference, Lars, what about the paper do you really disagree with? Ignore the source, or change it even. What if, say, DaveBaumann said something like this? (And on the whole, if one reads most of his articles and commentary of late, he probably HAS said all of this.) We actually get some useful information from it we usually wouldn't get (test usage percentages, common testing games/benches, release years to keep in mind...), but of the general commentary, what would you actually DISAGREE with?

The only time I remember much ado about IQ was during the quake/quack debacle, and even that largely disappeared from mention and from reviews shortly after being resolved. It didn't resurface again until the 3DMark03 maneuverings began. But why DID it, and SHOULD it have? Even before the ET/FM revelations came to light, I would occasionally wander across a review dealing with IQ--or bundled with a performance review--and it only made me realize how LACKING the other ones were that perpetually glossed over the matter. What does a straight performance comparison matter when the end result isn't the same? Even prior to specific "optimizations" it's an important factor to keep in mind, and one often ignored. (Or done poorly/improperly--just as performance reviews can be.)

Is ATi saying things that would seem to favor them in this climate? Er... I believe the answer to that is "duh." IHV's tend to be rather tight-lipped about where they're lacking. But are they LYING? If they're glossing over huge areas of note, feel free to mention them. If they're being shady about something or another, point out what you feel. But that they package something with a card release that talks about factors we've all been talking about for months? Heaven forfend! It dictates nothing, while presenting a large informational palette (not informational enough, IMHO), as well as bringing up concerns we pretty much all have, agree with, and should apply to all IHV's. Considering I STILL run across sites that review cards based on one resolution, one or MAYBE two quality settings, and use three Quake 3-based engines and UT2003, I'm rather hoping people WOULD start to rethink matters seriously--reviewer and public alike.
 
I didn't care for the 75Hz = 75 fps comments, either, as this is easily removed as a cap by using the Cpanel to disable vsync--which is, I presume, why the option is there. So it really doesn't follow that 75Hz refresh caps you at 75 fps. Rather, I think the problem in this respect is one of minimum frame rates and in discovering what they might be.

I also don't care for the "frame-rate target" idea where you pick a number that suits you and then try and load up the IQ settings to get as close to the "target" as you can, because you are still primarily focusing on a frame-rate target. But the comments do at least indicate an awareness of the fact that as all IQ settings between competing products are not the same, you still must do an IQ comparison to have tests like this mean anything in a comparative setting.

So, the bottom line is that what is missing from most reviews are credible IQ examinations (which is what everyone has known for years.) If the IQ comparison is done first in a review, to establish some kind of baseline for the comparison, then no matter how you do the frame-rate comparisons after that everything will fall into line provided the frame-rate tests are based on the data collected and conclusions reached in the IQ examination portion of the review. That's the only way to make a credible product comparison, seems to me.

What would be of much more interest to me is in having ATi lay out what it thinks a credible IQ examination should consist of--detailing what specific steps a reviewer should take in doing that and suggesting a methodology for it. It would be interesting to see them make specific suggestions just to see what their opinion of a credible method might be.

I thought Anand's nV30 review, in which he first contrasted the IQ differences he found between the nV30 and R300 products he compared and then followed up with frame-rate tests based on his IQ-difference conclusions, might be a good general model to follow (although it's a pity that in his subsequent product reviews/comparisons Anand has abandoned that format in favor of ones which are far less informative, IMO).
 
It's a decent start. Now they need to add info about bilinear, trilinear, adaptive aniso, ordered and rotated grid AA, gamma correction, vsync, whether to include audio as part of their "target" fps, etc.

The benches were too obvious a PR move, and distasteful at that, as they just don't jibe with some of the benches I've seen. I normally see nVidia whipping ATi in Wolf:ET and SS2 (see TR's 9600XT review) and JKII (see TR's 5600U review), ATI whipping nV in Sim City 4 (see Anand's 9600XT review), and I don't understand how a 5600U can lose across the board to the bandwidth-challenged 9600XT in a bandwidth-intensive setting of 12x10 4xAA 8xAF.

But I really like the benchmark format. Clear, concise, and almost unambiguous. They just need to add framerates at the end of the graphs (as I couldn't care less if card A is 50% faster than card B if card A is only chugging along at 20fps) and make the game names hyperlinks to a page containing descriptions of the benchmark test, comparative screenshots, and the reviewer's subjective appraisal of each card's IQ. I'd love to see this sort of summary page at the end of every review.

Lars, I really don't agree that this is some form of insidious propaganda. If a reviewer is so stupid and/or lazy as to somehow lift this example review wholesale (note that there aren't any numbers and it's at only one resolution and setting!), then they don't deserve to be getting review samples from ATi.
 
The only ones who should feel that this whitepaper is threatening are the fansites promoting FX hardware and the sites who were/are still using Quake 3 and old benchmarks in 2003.

Try a game, any game that is actually shipping, and bench it. That was the main message. The next message is: be careful about using the same game as a benchmark for a long time. Keep switching so a certain company doesn't ruin the experience with crappy visuals. Hey, I can turn down the settings myself, thank you.

-Larry
 
"Using latest WHQL-certified release drivers from ATI (Catalyst 3.7) and Nvidia (Detonator 43.25)"

And which version of the FX5600 Ultra was used in its test? Wire-bond or FCBGA?

in its "Unreal Tournament 2003 Antalus Flyby (2002)" test , the 9600xt get almost 200% framerate compare with "fx5600 ultra" . if the fx5600 is fcbga , that maybe too slow for the fx5600 ultra .

In my test (44.67 WHQL and 7.91 WHQL):
Snap1.png
 
WaltC said:
I didn't care for the 75Hz = 75 fps comments, either, as this is easily removed as a cap by using the Cpanel to disable vsync--which is, I presume, why the option is there. So it really doesn't follow that 75Hz refresh caps you at 75 fps.
The key question is this - and there is NO answer to it:

If you are running at 300fps, and your monitor refresh is 75Hz, how do you know all four of those frames that compose one 'monitor frame' were fully rendered and displayed correctly?

I personally do not believe it is wise to benchmark such that the average framerate is above monitor refresh. It opens up far too many avenues for downright cheating. There is even an argument that says you should avoid benchmarking where max framerate is above refresh.
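Just to put back-of-the-envelope numbers on that question (nothing here is measured; it's only the arithmetic of the example):

```python
# Back-of-the-envelope arithmetic for the 300 fps / 75 Hz case above.
refresh_hz = 75
fps = 300

frames_per_refresh = fps / refresh_hz      # ~4 rendered frames per screen refresh
refresh_interval_ms = 1000.0 / refresh_hz  # ~13.3 ms of scanout per refresh
frame_time_ms = 1000.0 / fps               # ~3.3 ms spent rendering each frame

print(f"{frames_per_refresh:.0f} frames rendered per refresh "
      f"({frame_time_ms:.1f} ms each within a {refresh_interval_ms:.1f} ms scanout); "
      f"with vsync off the display shows horizontal slices of several of them, "
      f"so none of the four is guaranteed to reach the screen intact.")
```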
 
WaltC said:
So, the bottom line is that what is missing from most reviews are credible IQ examinations (which is what everyone has known for years.) If the IQ comparison is done first in a review, to establish some kind of baseline for the comparison, then no matter how you do the frame-rate comparisons after that everything will fall into line provided the frame-rate tests are based on the data collected and conclusions reached in the IQ examination portion of the review. That's the only way to make a credible product comparison, seems to me.

What would be of much more interest to me is in having ATi lay out what it thinks a credible IQ examination should consist of--detailing what specific steps a reviewer should take in doing that and suggesting a methodology for it. It would be interesting to see them make specific suggestions just to see what their opinion of a credible method might be.
Long post warning...

Not speaking for ATI but just for myself, my opinion is that part of the problem is that generally reviewers don't have enough experience with 3D hardware to actually correctly interpret the screenshots that they take - I think I've touched on this in posts before.

Note that I'm not talking about knowledge of 3D hardware in the general sense, but about the specific case of actually understanding what the hardware does to produce the final image.

When I look at hardware reviews or image quality analyses I frequently find myself spotting particular differences in the images that I can usually identify as being caused by some specific behaviour of the hardware, and I can then have a good expectation of what will happen when the image is in motion, whether some artifact will be more visible when you get close to it etc. All too frequently the very same images result in the reviewer stating things like 'The images are identical' or 'There are some differences, but you can't say which is better', and I just find myself pounding my head against a wall. Often I can see what (to me) are glaring differences in rendering - ones that I expect (or know from experience) to be visible in actual gameplay, which are passed without comment.

Of course even if the reviewer has a highly technical understanding, are most people really interested in exactly why the images differ, and why one is 'better'? How does Joe Public with relatively little knowledge of the subject correctly differentiate between one reviewer who really knows what they are talking about, and another spouting cheap, third-hand Star Trek technobabble in order to try to appear knowledgable (or worse yet, actually believing that repeating third-hand technobabble really makes them knowledgable)?

Dodgy reviewer said:
The image on the left is better because the IHV in question has reversed the polarity of the neutron flow in the primary regulators.
Anyway - reviewers have a tough job - they have to make judgment calls as to whether image quality is good enough, and naturally this is based on their perception of a 'good' image. Whether this agrees with what others regard as good is, of course, the root of the problem, and frequently I might not agree with their perceptions. As I spend a lot of time examining image quality as part of my work my expectations may be skewed somewhat from what is typical, so for many other readers perhaps they are right.

Moving on, the natural progression is that we end up in a gradual downward spiral in levels of image quality -

1. Some image quality compromise becomes accepted as 'ok' in reviews. 'I can't really see the difference, or don't feel qualified to pass judgement, so I think it's good.'
2. Everyone then does it, so the initial competitive advantage is lost. After all, what's the point in trying to hold the image quality high-ground if you just get trounced in benchmarks and no-one cares.
3. Race to find the next 'acceptable' image quality compromise for performance.
4. Lather, rinse, repeat.

The losers in this scenario are people who feel that they can already see things that reviewers generally can't or don't, as they end up seeing more and more of them. If something like this does happen, who should we choose to blame for the situation - the IHVs (for needing to be competitive) or the reviewers (for encouraging it)?

When does image quality become the yardstick in reviews, and not FPS graphs? What minimum frame-rate do we need to achieve before we give up on speed as the differentiator and really look at what's being rendered? When does providing better image quality become a large enough factor that it 'wins' comparative reviews, even if the frame rates are a bit slower?

Judging on past performance - never. People will always be too excited by graphs showing 500fps in Quake3 as that's something tangible, differences between images remain intangible - we simply can't provide an easy number that says which is better. Reviewers get accused sufficiently frequently of bias (by one side or the other) when simply trying to present verifiable benchmark numbers - imagine what would happen to them if they were to give victory in reviews based on something more subjective...

Just IMNSHO. ;)
 
andypski said:
Of course even if the reviewer has a highly technical understanding, are most people really interested in exactly why the images differ, and why one is 'better'? How does Joe Public with relatively little knowledge of the subject correctly differentiate between one reviewer who really knows what they are talking about, and another spouting cheap, third-hand Star Trek technobabble in order to try to appear knowledgable (or worse yet, actually believing that repeating third-hand technobabble really makes them knowledgable)?

Dodgy reviewer said:
The image on the left is better because the IHV in question has reversed the polarity of the neutron flow in the primary regulators.
Beam me up Scotty, there's no intelligent life down there... :)

Judging on past performance - never. People will always be too excited by graphs showing 500fps in Quake3 as that's something tangible, differences between images remain intangible - we simply can't provide an easy number that says which is better. Reviewers get accused sufficiently frequently of bias (by one side or the other) when simply trying to present verifiable benchmark numbers - imagine what would happen to them if they were to give victory in reviews based on something more subjective...

Just IMNSHO. ;)

Excellent post, and an interesting perspective (that depending on one's personal level of expertise, visual artifacts may become more and more apparent and easy to spot). For example, everyone (well, except a few PS1 aficionados) was wowed by the Voodoo1 and its incredible filtering when it was introduced. But nowadays, for most serious people, the mipmap banding of simple bilinear filtering is seen as a serious artifact. Same goes for AA/AF: before being introduced to proper AA (first RGSS AA, then good AF + RGMS AA), most people (as in the general public, not talking about professionals) took jaggies, texture crawl and the like for granted. Now, even a small amount of aliasing is easily noticed. I suppose that in a matter of years, more people will be able to easily spot the artifacts introduced by current AF methods.

To go back to the matter of reviewers, the "repeat techno-babble" syndrome is only bound to get worse: I remember the very same people (the "guys with webpages") we are talking about being unable to grasp how AA worked, or what mipmap dithering was. I remember them hyping AGP texturing like it was the best thing since bilinear-filtered sliced bread. And now, readers are expecting them to be experts on pixel shaders, floating-point pipelines, and instruction branching in the vertex shader. So they can either admit their ignorance (hardly a good way to get page hits and advertising revenues), or pretend they know WTH they are talking about. Since the public is generally even less informed, it can go on for a while.

Framerate-based comparisons are a very easy "tool" for those guys with webpages (except for those who still have to learn how to position origins and scale on an Excel graph, anyway): framerate charts are relatively easy to do, don't require any technical knowledge, and you just run the timedemo, get a coffee, and come back later for the results. Slap on a couple of screenshots, add some techno-babble straight from the Press Release, copy-paste the specs, and presto, a 15-page review just in time for the official lifting of the NDA, and some fat advertising revenue. Hence the hyping of really stupidly high framerates, which, IIRC, started around the time Q3 appeared.
 
Borsti said:
The guide is like a pre-generated review including benchmarks, IQ results, conclusions etc.

You really think anyone could get away with that? Yes, it has some benchmarks in there (which to be honest were probably unnecessary, and very much a marketing thing), but I don't really see any conclusions being made at all in the document, particularly when it comes to image quality.

As far as I can see these are just common-sense guidelines with a marketing sheen over the top.
 
You really think anyone could get away with that?

Well, considering what some "guys with webpages" got away with, it's entirely possible. For example, imagine something along the li(n)es of posting performance numbers from a "reliable" source without even mentioning IQ. Of course, since you don't want to be held accountable, you could do that on your weblog and not on your main page, and include a "grain of salt" disclaimer. And people would actually praise you for the good news and the nice display of journalism, and rejoice at numbers coming out of thin air. Not that I would dare to draw examples from recent weeks' events, of course...
 
Hanners said:
Borsti said:
The guide is like a pre-generated review including benchmarks, IQ results, conclusions etc.

You really think anyone could get away with that? Yes, it has some benchmarks in there (which to be honest were probably unnecessary, and very much a marketing thing), but I don't really see any conclusions being made at all in the document, particularly when it comes to image quality.
Heh, just wait until you see my AIW 9600 Pro review.... ;)
 
andypski said:
When does image quality become the yardstick in reviews, and not FPS graphs? What minimum frame-rate do we need to achieve before we give up on speed as the differentiator and really look at what's being rendered? When does providing better image quality become a large enough factor that it 'wins' comparative reviews, even if the frame rates are a bit slower?

Well, as you say, probably never.
Quantitative data is given a higher significance than qualitative in most areas, not only in consumer 3D-graphics. And there are some perfectly good reasons for this, as well as some not so good.
Furthermore, framerates and image quality are intimately linked. Looking at one without evaluating the other isn't good practice, but arguably framerate is the primary value, as it determines fundamental playability as well as appearance, and not merely appearance. In the present cinematic rendering rut, it seems all too easy to forget that games are not movies, and that control and interactivity, while nonexistent parameters for cinematic rendering, are absolutely fundamental to gameplay.
Furthermore, it is often implied that framerate is a solved problem, and image quality is "where it is at". Yet, even at only 1024x768, without AA or AF, Beyond3D reports the following average fps scores with an extremely fast host system and the fastest midrange card money can't buy just yet:
UT2003 79 fps
Splinter Cell 29 fps
Tombraider 42 fps
RTCW 67 fps
SeriousS 112 fps

Minimum framerates typically dipping to one half or a third of the average.

I have to say that these framerates don't look to me like framerate is anywhere near being a "solved problem", given the very modest gfx settings. It is highly doubtful whether a player would choose to improve the graphics quality at the cost of even lower framerates. Would the improved prettiness of the scenery be worth missing jumps, missing shots, dodging into instead of away from damage, misjudging enemy movement, et cetera? For many (most?) the added frustration of bad control would determine their choice.

What we are seeing is that game designers at any given point in time try to balance responsiveness and visual quality to provide as good an experience as possible. This is reasonable and it means that there will always be a tradeoff between the two. But taking responsiveness out of the equation when evaluating game performance is absurd. Though control and responsiveness obviously matter little to benchmarkers, for those who actually play the game these are vital parameters.
 
Entropy said:
Furthermore, it is often implied that framerate is a solved problem, and image quality is "where it is at". Yet, even at only 1024x768, without AA or AF, Beyond3D reports the following average fps scores with an extremely fast host system and the fastest midrange card money can't buy just yet:
UT2003 79 fps
Splinter Cell 29 fps
Tombraider 42 fps
RTCW 67 fps
SeriousS 112 fps

Minimum framerates typically dipping to one half or a third of the average. I have to say that these framerates don't look to me like framerate is anywhere near being a "solved problem", given the very modest gfx settings.
Framerate will never be a solved problem as long as the goalposts keep moving. That's fundamental.

We regard the settings you describe as 'modest' these days. Getting into the wayback machine, we can see that when the first 3D gfx cards appeared, you might be running the newest games at between 30 and 60 fps in 640x480 on the fastest cards around, with bilinear texture filtering, and it seemed like the best thing that had ever happened to PC graphics. Now even for mid-range cards we expect to run at 1024x768, possibly with AA or AF as well - an increase in basic pixel resolution of 2.56 times, and if you look at the complexity of each pixel rendered it has increased many more times than that. This is because the IHVs have, to some extent, managed to outstrip the requirements of software development. As a result our expectations have increased, and rightly so - this is progress.

So now it comes down to this: how many of those hard-earned gains are we willing to sacrifice on the altar of performance?

If the graphics card was totally fill-limited in the cases you gave, and ignoring other potential bottlenecks, then dropping back to 640x480 (without any other rendering quality decreases) would give the following frame rates -

UT2003 202 fps
Splinter Cell 72 fps
Tombraider 107 fps
RTCW 171 fps
SeriousS 286 fps

So maybe the frame rate issue can be solved, at the cost of some quality... (Higher res->more quality)
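For what it's worth, those scaled figures are just the pixel-count ratio applied to the measured results above; a quick sketch of that arithmetic, under the same purely-fill-limited assumption (rounding accounts for the one or two fps of difference):

```python
# Assumes a purely fill-limited card, as stated above -- a deliberate simplification.
scale = (1024 * 768) / (640 * 480)   # 2.56x fewer pixels at 640x480

measured_at_1024x768 = {
    "UT2003": 79, "Splinter Cell": 29, "Tombraider": 42, "RTCW": 67, "SeriousS": 112,
}

for game, fps in measured_at_1024x768.items():
    print(f"{game}: {fps} fps at 1024x768 -> ~{fps * scale:.0f} fps at 640x480")
```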

Your increase in expected resolutions is an image quality issue, and one that you currently have full control of as you get to choose the resolution. However, what if in the driver I actually rendered at 640x480 while pretending that I was rendering at 1024x768? My apparent frame rates would leap up by sacrificing image quality. If a driver was discovered that did this then I'm sure there would be some kind of outcry, particularly if it only did it on benchmarks.

However if you only looked at the frame-rate graphs, and didn't examine the images produced you might think that this card was much faster than another that was behaving correctly. Fortunately the image quality differences introduced by such an 'optimisation' would be large enough that any reviewer with half a brain should catch them.

However, maybe I don't need quite such a big gain, so maybe I can be a bit less obvious and just render at some slightly lower resolution, say 1000x730, and upscale at the back end. Now perhaps I'm close enough in quality that the same reviewer wouldn't spot what I'm doing, or might regard my IQ as close enough?
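And a rough sketch of why the subtler case is so tempting (the resolutions are the hypothetical ones from the scenario above, and the 'gain' again assumes a fill-limited situation):

```python
# Hypothetical internal resolutions from the scenario above.
claimed_pixels = 1024 * 768   # what the driver claims to render
actual_pixels = 1000 * 730    # what it might quietly render, then upscale

ratio = actual_pixels / claimed_pixels
print(f"Rendering {ratio:.1%} of the claimed pixels gives roughly a "
      f"{1/ratio - 1:.0%} frame-rate gain if fill-limited -- "
      f"for an image difference a reviewer may well never notice.")
```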

So, is this fundamentally different to, for example, a driver that chooses to render with bilinear filtering instead of trilinear? Once again it is ignoring the image quality options in order to give a higher frame rate. Maybe the image quality differences are still too high to pass a reviewer and be acceptable?

Now how about one that renders with 'partial' trilinear - somewhere in between?

The difference in overall image quality between the cases I've described above is just one of degree. At some point along the line from bilinear->trilinear the image quality differences might become small enough that they don't bother you, but maybe they would still be large enough to annoy someone else. The problem is - who decides where to place the boundaries and when such manipulation is acceptable? The IHV's? The reviewers?

As you say, quantitative data is given high significance, but the problem is that this data (frame rate) is totally meaningless without a thorough understanding of the qualitative data (IQ) that lies behind it, and yet many people still place great store in it, regarding it as in some way definitive and using it to declare winners in reviews...
 
Andy, why don’t you talk to your marketing department about putting together a video demonstrating the issues that you raised? I would assume that a large number of people would be interested in having as much information as possible about it. Maybe a whole presentation on image quality. Show examples of different filtering, precision, AA, etc. Include it on the driver CD that comes with ATI’s cards and host it on the ATI web site. Arm people with enough information about image quality issues and then maybe the slippery slope that you alluded to could be avoided.
 
Dio said:
WaltC said:
I didn't care for the 75Hz = 75 fps comments, either, as this is easily removed as a cap by using the Cpanel to disable vsync--which is, I presume, why the option is there. So it really doesn't follow that 75Hz refresh caps you at 75 fps.
The key question is this - and there is NO answer to it:

If you are running at 300fps, and your monitor refresh is 75Hz, how do you know all four of those frames that compose one 'monitor frame' were fully rendered and displayed correctly?

I personally do not believe it is wise to benchmark such that the average framerate is above monitor refresh. It opens up far too many avenues for downright cheating. There is even an argument that says you should avoid benchmarking where max framerate is above refresh.

You're right--there's no answer, because the proposition that they are all being rendered is just as valid as the proposition that they aren't, unless you can see them being skipped...:) Nobody knows whether they are being rendered or not if no skipping is apparent, thus the question has no answer. Therefore, it's no more wise to assume they are not being rendered than it is to assume they are, unless you can see that frames are being skipped.

Generally when enough frames are skipped the display gets choppy and chugs and stutters. So that's one way you can actually tell whether you are skipping frames.

There've been more than a few times where I have actually seen vsync act as a governor on the performance of my 3d card when playing a 3d game. A couple of years ago playing UT with my GF3 at the time I hit a certain area of a map where the frame rate began to chug, with noticeable stutter. Happened every time. Turned off vsync and the map section smoothed right out--smooth as butter. (The only time you can see frames "skip" is when the card stops rendering them all--that's when you get "chugging" or "stuttering" in a game--as frames are being skipped.) Had something similar occur in Wizardry 8 that was also completely cured by turning off vsync.

Normally these days I run with vsync on, but at 160Hz @ 640x480-800x600, 120Hz @ 1024x768, 100Hz @ 1152x864/1280x1024, and 85Hz @ 1600x1200, so it hasn't been that much of a problem for me, as my refresh rate is high enough not to throttle the processing power of the vpu--most of the time.
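As an aside, one plausible mechanism for the 'governor' effect described above (this assumes plain double-buffered vsync, which is a guess about what those games and drivers were actually doing) is that a frame which misses one refresh has to wait for the next, so the delivered rate snaps down to refresh/2, refresh/3 and so on. A minimal sketch:

```python
def vsync_fps(frame_time_ms, refresh_hz=75):
    """Effective fps with vsync and double buffering: a frame that misses a refresh
    waits for the next one, so rates quantise to refresh/1, refresh/2, refresh/3..."""
    refresh_ms = 1000.0 / refresh_hz
    refreshes_waited = -(-frame_time_ms // refresh_ms)   # ceiling division
    return refresh_hz / refreshes_waited

# A frame that takes just slightly longer than one 75 Hz interval (13.3 ms)
# drops from 75 fps straight to 37.5 fps -- which feels like sudden chugging.
for ft in (12.0, 13.0, 14.0, 20.0):
    print(f"{ft:.0f} ms/frame: ~{1000/ft:.0f} fps with vsync off, "
          f"{vsync_fps(ft):.1f} fps with vsync on")
```

Triple buffering, where available, avoids most of that quantisation, which may be part of why the problem shows up in some games and not others.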
 
WaltC said:
Generally when enough frames are skipped the display gets choppy and chugs and stutters. So that's one way you can actually tell whether you are skipping frames.
Generally it's not the video card skipping frames in these cases. If the framerate drops too low, whether because the video card is heavily loaded or because of CPU limitations, it's not the video card skipping frames that causes the chugginess; it's the fact that so few frames are rendered at all. The application compensates for the low framerate by generating fewer frames per second and advancing each one further in simulation time. Of course, this just means that animations and such proceed at their normal real-time speed, but it doesn't do anything to alleviate the low framerate.
There've been more than a few times where I have actually seen vsync act as a governor on the performance of my 3d card when playing a 3d game. A couple of years ago playing UT with my GF3 at the time I hit a certain area of a map where the frame rate began to chug, with noticeable stutter. Happened every time. Turned off vsync and the map section smoothed right out--smooth as butter. (The only time you can see frames "skip" is when the card stops rendering them all--that's when you get "chugging" or "stuttering" in a game--as frames are being skipped.) Had something similar occur in Wizardry 8 that was also completely cured by turning off vsync.
All of this falls under what I mentioned above.
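A generic sketch of the kind of time-based updating being described (the Spinner class and its numbers are invented for illustration, not taken from any of the games mentioned):

```python
import time

class Spinner:
    """Toy stand-in for an in-game animation: spins at a fixed real-time rate."""
    def __init__(self, degrees_per_second=90.0):
        self.angle = 0.0
        self.speed = degrees_per_second

    def update(self, dt):
        # Advance by elapsed wall-clock time, so a long frame produces a bigger jump
        # rather than slowing the animation down: fewer frames, same animation speed.
        self.angle = (self.angle + self.speed * dt) % 360.0

spinner = Spinner()
last = time.perf_counter()
for _ in range(5):
    time.sleep(0.05)               # pretend this is a frame's worth of rendering work
    now = time.perf_counter()
    spinner.update(now - last)     # low framerate -> bigger dt per update
    last = now
    print(f"angle = {spinner.angle:.1f} degrees")
```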
 
However, what if in the driver I actually rendered at 640x480 while pretending that I was rendering at 1024x768? My apparent frame rates would leap up by sacrificing image quality.

Now, that's an idea for a future set of drivers... Which I heard were WHQL candidates and will be available two days ago. Seriously, it would never work. Not without some clever marketing and buzzwords, such as "dynamic load-based intelligent downscaling-based runtime optimizer". After all, when you are fragging, you really don't care if the game is being rendered at 640x480 or 1024x768.


If a driver was discovered that did this then I'm sure there would be some kind of outcry, particularly if it only did it on benchmarks.

Well, you could always say that the people crying foul are fanboys of your competitor, and you could promise that you won't do it for benchmarks alone, and later keep your word by applying the "optimization" to all games.

However if you only looked at the frame-rate graphs, and didn't examine the images produced you might think that this card was much faster than another that was behaving correctly.

Well, it all depends on what you mean by "correctly". After all, this whole resolution thing is obviously a plot by M$ and various companies, and people designing games where upgrading the resolution brings some meaningful IQ increase are obviously doing something to make your product look bad.

Fortunately the image quality differences introduced by such an 'optimisation' would be large enough that any reviewer with half a brain should catch them.

Yes, but since brains are not equally distributed among "reviewers", you could get away with it by sending those "special purpose" drivers to a few selected guys with webpages, together with unannounced hardware, so they feel they've got a big hit on their hands and that they must be very special (agreed here) and important to you.


However, maybe I don't need quite such a big gain, so maybe I can be a bit less obvious and just render at some slightly lower resolution, say 1000x730, and upscale at the back end. Now perhaps I'm close enough in quality that the same reviewer wouldn't spot what I'm doing, or might regard my IQ as close enough?

Well, if your PR guy does his job well, the guy with a webpage will only release 400x300 screenshots to the public anyway. So I guess you are pretty well covered here.
 