Image Quality Comparisons: Some Thoughts

ChrisRay

Obviously, over the last year image quality has become a major concern for end users. I have been watching, participating in, and learning from threads across various hardware sites (as I am sure we all have).

One thing I have noticed about image quality comparisons: I would say 95% of the time they highlight worst-case scenarios and unrealistic environments. Is this perhaps the best way to compare image quality? I realise that if something is better for a given IHV, that point should be clarified and marked. Now, before you flame me: I'm definitely in favour of deciding which IHV is offering the best image quality.

But we have come to the point where we will blow screenshots up looking for the most jaggy-free edge, or stare at the scenes presented in Dave Baumann's "Can you see the difference" threads.

Now, coming to my main point, which kind of illustrates the above: we've taken "Can you see the difference?" pretty far beyond relevant real-world scenarios in image quality. Shouldn't we be focusing on the overall scene and how the whole image is rendered? Are worst-case scenarios really the best way to determine which IHV is better?

Your Thoughts Please.

Chris
 
Slippery slope argument: lots of little optimizations add up to noticeable decreases in IQ, and so on. It's the same point people have been making since brilinear came about.
 
The Baron said:
Slippery slope argument: lots of little optimizations add up to noticeable decreases in IQ, and so on. It's the same point people have been making since brilinear came about.


I'm not really asking that. I'm asking: "Are worst-case scenarios the best way to compare image quality?"
 
digitalwanderer said:
Rather ask yourself whether best-case scenarios would be a better way, and I think you'll find your answer. 8)

Well, I realise there is no easy answer to the question. I tend to believe the worst-case scenarios "exaggerate" the problem. There's got to be some kind of happy medium.

If you looked only at worst-case scenarios, some would almost come to the conclusion that the difference is between software rendering and hardware acceleration. ;)

Anyway, don't misunderstand me. I'm currently looking for possible alternative ways to compare image quality.
 
I see your point, and I agree to some extent. If you have to blow up a set of screenshots (or use a binary comparison) to tell any difference at all, then for all intents and purposes you should call them equal.

That said, using a specialized tool to examine the worst-case scenario does have its place. Knowing that card A has noticeably better worst-case IQ than card B is a good thing, even if the worst case is a rare or simulated situation (and the situation should be described in the review as such).

Maybe a good compromise would be to treat IQ more like framerates. In other words, try to find the worst-case IQ first, then move on to a few different games and try to differentiate the "average" case IQ.
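
Along those lines, here's roughly what I mean by a binary/pixel comparison. This is just a minimal sketch in Python, assuming two same-resolution PNG screenshots captured at the identical camera position, and using Pillow and NumPy; the file names and the visibility threshold are made-up examples, not from any real review tool:

```python
# Minimal screenshot-diff sketch. Assumes Pillow and NumPy are installed and
# that both PNGs were captured at the same resolution and camera position.
import numpy as np
from PIL import Image

def compare_screenshots(path_a: str, path_b: str, threshold: int = 8):
    # int16 avoids uint8 wraparound when subtracting pixel values.
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    if a.shape != b.shape:
        raise ValueError("screenshots must have identical dimensions")
    diff = np.abs(a - b)             # per-channel absolute difference
    per_pixel = diff.max(axis=2)     # worst channel for each pixel
    return {
        "max_difference": int(per_pixel.max()),
        "mean_difference": float(diff.mean()),
        "pct_pixels_over_threshold": 100.0 * float((per_pixel > threshold).mean()),
    }

# Hypothetical captures from two cards at the same spot:
print(compare_screenshots("card_a.png", "card_b.png"))
```

If max_difference comes back at, say, 2 or 3 out of 255, blowing the shots up to 500% is arguably telling you nothing a player would ever see, which is ChrisRay's point; if whole regions exceed a sensible threshold, that's the worst case worth writing up.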
 
Use the worst-case scenarios to highlight the problems, and then just explain whether it is noticeable, whether you really have to look for it to see problems, or however else you can best describe your subjective opinion on it. :)
 
I dunno... I don't really see it as a 'worst-case scenario' when it comes to taking screenshots during gameplay and dissecting them. I do understand what you're saying, though. I think it's pretty important that we do it.
 
ChrisRay said:
Now, coming to my main point, which kind of illustrates the above: we've taken "Can you see the difference?" pretty far beyond relevant real-world scenarios in image quality. Shouldn't we be focusing on the overall scene and how the whole image is rendered? Are worst-case scenarios really the best way to determine which IHV is better?


Chris

I am glad someone brought this up. I have seen numerous threads regarding filtering this and jaggies that.

For people to waste their time taking screenshots of a DYNAMIC MOVING ENVIRONMENT and then blowing up sections of said images 100-500% to see a transition between 2 textures that 99.99% of the gaming population will NEVER see in motion is nothing more than an exercise in stupidity, IMHO.

The ONLY time something of this nature is even viable as an argument is if the scene being rendered has NOTICEABLE rendering issues to the naked eye. I have seen FEW if ANY situations where this was happening. When it does, then I am all for understanding the issues that cause such a thing. When this does not exist (the vast majority of time at 60+ fps), then it is an exercise in futility because there is nothing to be gained by cussing and discussing such issues.

IMHO, the 3D graphics industry media has gotten SO focused on looking for cheats or driver anomalies that many of them (B3D NOT included) have lost sight of what we purchase these expensive devices for... to enjoy the improvement in graphics acuity and speed. Personally, I have had it with all of the investigative tactics that look for any and every little thing that does NOT relate to a solid gaming experience. The latest commotion about ATI's trilinear algorithm is a perfect case in point. All ATI was attempting to do was give users more bang for the buck. However, from reading some of the articles and forum responses, you would think they had committed the crime of the century. I think the 3D industry needs to clean up its act, attitude, and outlook for the future. It will not continue to grow if the participants keep looking at the hole and not the donut...
 
The other side of that coin, BB, is that a lot of us use the information gained/gleaned from those benchmarks/blown-up images to better our gaming experience... at least that's what all this stuff is about for me.

EDITED BITS: For some strange reason I thought BB was Chalnoth. It's late, I'm tired; good-night!

BTW-Hey BB! How ya doing?
 
What disturbs me is not the "worst case scenario" nature of comparisons; it is the practice of substituting numerical ideology or convenience as a wholesale and final replacement for evaluating image quality.

To me, this is consistently displayed by filtering discussions all over the net ever since the initial exploratory article.

One aspect is the lack of testing of typical texture examples, the treating of "indicating" textures and observations as "proof" (I'm still waiting for some clear progress beyond this point), and the proposal to establish numerical deviation as the universal and final criterion of quality.

Another is that evaluation in motion is not, apparently, a priority at all. In the recent Russian article, for example, an otherwise promising-looking analysis (depending on what the translation clarifies) is accompanied by a program that extensively tests and establishes... nothing more than numerical difference, and with a set of textures that seems remarkably useless for even marginal image quality evaluation (bordered, non-repeating textures, with no apparent consideration even given to using horizontal/vertical lines for illustration).
The really significant problem the program represents, to me, is that aside from possibly serving as a test of one single aspect (the adaptability of the trilinear/optimization detection, given the right set of textures), someone went to the trouble of building all the functions it has as a replacement for any apparent attempt at "in motion" evaluation.

Which brings something to mind: has anyone examined the impact of the difference in the 3DMark03 filtering test, which at least does have a specific "in motion" evaluation feature? The patterns in it should be a "good" worst-case example. The articles and tools I'm seeing don't seem to even attempt to reach the point where this test has been for over a year, and I find that disappointing.
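
For what it's worth, here is one crude way "in motion" evaluation could be made numerical rather than left to static screenshots. This is purely my own sketch, not the Russian tool or the 3DMark03 test: given frame dumps from two cards replaying the identical camera path, per-pixel variance over time will differ between the cards mainly where one of them shimmers or pops mip transitions, since legitimate scene motion contributes equally to both.

```python
# Crude "in motion" comparison sketch (my own construction, not a tool from
# this thread). Assumes each card dumped same-size frames along the identical
# camera path, e.g. card_a_frame_000.png, card_a_frame_001.png, ...
import glob
import numpy as np
from PIL import Image

def mean_temporal_variance(frame_glob: str) -> float:
    frames = [np.asarray(Image.open(p).convert("L"), dtype=np.float32)
              for p in sorted(glob.glob(frame_glob))]
    stack = np.stack(frames)          # shape: (num_frames, height, width)
    # Per-pixel variance across frames, averaged over the screen. Scene
    # motion inflates this equally for both cards; the *difference* between
    # cards is what hints at shimmer or mip-transition popping.
    return float(stack.var(axis=0).mean())

print("card A:", mean_temporal_variance("card_a_frame_*.png"))
print("card B:", mean_temporal_variance("card_b_frame_*.png"))
```

A single number like this obviously can't replace eyeballing the footage, which is rather the point above; at best it flags where to look.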
 
I think a big problem is the whole subjective nature of image quality; there isn't really any one good way to quantify it.

I think we're going to have to just figure out how to better subjectively rate them somehow.
 
What do you mean by worst case scenario? Things that are only seen on screenshots?

I think partial precision and filtering optimizations are probably the most noticeable factors that produce degraded image quality -- and that doesn't take much time to discern.
 
volt said:
What do you mean by worst case scenario? Things that are only seen on screenshots?

I think partial precision and filtering optimizations are probably the most noticeable factors that produce degraded image quality -- and that doesn't take much time to discern.


Worst-case scenario: looking hard for an IQ issue that wouldn't usually be present.

Be it a blown-up aliased edge or a banded mipmap. This is a very general conclusion based on multiple image quality comparisons.

You just listed a few examples. Worst-case scenarios are exactly that: a scenario that would most likely not be noticeable.

Not everything is a worst-case scenario, mind you. Take Far Cry's lighting when comparing the FX and 9800 series; that's what I would call an obvious difference.
 
Big Bertha EA said:
For people to waste their time taking screenshots of a DYNAMIC MOVING ENVIRONMENT and then blowing up sections of said images 100-500% to see a transition between 2 textures that 99.99% of the gaming population will NEVER see in motion is nothing more than an exercise in stupidity, IMHO.

Yay, what BB said. Static screenshots from games should be banned. That would force sites (B3D included) to address the real issue at hand, which is what you can see in-game with a moving image. This is where most IQ issues are most obvious anyway.

(Yes, I know it's hard; if it were easy, everybody would be doing it. But how long do you think it will be before IHVs put optimisa-cheats in their drivers that are frame-rate sensitive, or that detect when a screenshot is being taken and turn the IQ-reducing optimisations off? If it doesn't happen already, that is...)
 
The only problem is: A) you can't use uncompressed movies, they are just too frigging massive, so with compression you already get images different from what the cards actually output; and B) even compressed movies are large, there are hosting costs, and in many places around the world there are telco monopolies which hold back broadband.
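
To put rough numbers on "too frigging massive" (a back-of-the-envelope calculation of my own; the thread doesn't specify a capture format, so 640x480, 24-bit colour at 30 fps is just an assumption):

```python
# Back-of-the-envelope size of uncompressed game footage.
# Assumed capture settings (not from the thread): 640x480, 24-bit RGB, 30 fps.
width, height = 640, 480
bytes_per_pixel = 3      # 24-bit RGB
fps = 30
clip_seconds = 15

frame_bytes = width * height * bytes_per_pixel   # 921,600 bytes, ~0.88 MB
per_second = frame_bytes * fps                   # ~26 MB every second
clip_bytes = per_second * clip_seconds           # ~395 MB for 15 seconds

print(f"one frame : {frame_bytes / 2**20:.2f} MB")
print(f"per second: {per_second / 2**20:.1f} MB/s")
print(f"{clip_seconds}s clip  : {clip_bytes / 2**20:.0f} MB")
```

Fifteen seconds of raw footage already lands near the 400MB figure mentioned a few posts down, so compression, with its own artifacts, is hard to avoid for anything longer.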
 
Chris,

I doubt there's a short answer to your question, and even then I doubt it would cover all possible circumstances. IMO it depends on how much each individual's eye is "trained" to detect specific things, and then on what annoys each and every one of us most.

My point is rather that what might be a significant issue to me might be a worst-case scenario to you, or vice versa.

I'll put it this way: any kind of legitimate optimisations are fine with me as long as there isn't any degradation in overall image quality. Ideally the driver control panel or the application would allow the user to switch said optimisations on and off and judge for himself whether the result suits his eyes or not.

That said, the real purpose of in-depth image quality analysis is to define possible differences and, where necessary, render criticism in that direction. There will always be objections to any analysis methods, to the criteria used, and of course to the possible conclusions reached.

One can always overdo it, no doubt, or one can just write an analysis based purely on equally senseless performance ratings without a single IQ evaluation and/or comparison at all. A reviewer/analyst will have to find a happy medium, or a perfect balance if you prefer.

Finally, I personally wouldn't want any IHV and their products to escape criticism; the more information available, the better for me. From there I can, as a consumer, add up the positives and negatives and see what covers my needs best. It would be a shame if I didn't know that an accelerator has some nasty side-effect and found out only after the purchase.

PS: This is an interesting topic; yet instead of criticism for criticism's sake, I'd also like to read proposals for how a better balance can be found in IQ analysis, if you really think current measures are blown out of proportion. I'm personally glad that IQ analysis is increasing, and I wouldn't want to see the opposite.
 
I pretty much agree with Ailuros :!:

Moreover, as time passes, people grow accustomed to certain IQ standards, and most of the time don't like to see those standards degraded. The issue is: what are the standards?
 
Why couldn't you use uncompressed movies?

They needn't be very long or at very high resolution. You might make a movie of just part of the screen to show differences (yes, the worst-case scenario again).

In any case, there are loads of people who download countless movies of several GB every day. So a 400MB movie to show trilinear filtering issues wouldn't be so bad every 9 months, would it?

If your servers can't handle the load, just put the movie on edonkey/emule or something like that?

It might not be the best solution for everybody, but it would work for most people here, wouldn't it?
 