AMD FSR antialiasing discussion

Surely you see the possibility of this changing however?

The raw resolution was only used as a metric for optical clarity. More pixels = better, at the most basic of measurements. This is understandable to most people. But eventually, you're going to have to move into the territory of actual image clarity, which is entirely subjective: side-by-side image comparisons, grading, and so on. If A can output a better image than B, it shouldn't matter whether it's reconstructed or not. That's the direction we're headed. We need to move on from just making GPUs do computational work.
You can't be serious. You're literally suggesting destroying any point of having reviews in the first place.
They're supposed to be objective, not subjective. Subjective issues are something people need to sort themselves.
I would add something here but it would probably push it to RPSC territory
 
You can't be serious. You're literally suggesting destroying any point of having reviews in the first place.
They're supposed to be objective, not subjective. Subjective issues are something people need to sort themselves.
I would add something here but it would probably push it to RPSC territory
I’m suggesting that the future can’t still be about just measuring performance at a specific resolution and graphics quality.
 
I’m suggesting that the future can’t still be about just measuring performance at a specific resolution and graphics quality.
So long as it's the only relevant objective metric, yes it can, and more importantly it has to be, or there's no point in having any reviews at all.
Including scaler numbers and providing lossless material so you can compare them yourself can be a nice bonus, but that's it.

edit: not sure what the English equivalent for "väännetään ratakiskosta" (roughly, "let's spell this out as bluntly as possible") is, but anyway
You can have as many different reviews from the exact same data as there are reviewers. How is that supposed to help consumers make an informed choice, when one reviewer prefers one method, another some other method, and so on?
That's why reviews need to stick to objective measurements, where the playing field is as level as it can be. You can add toppings on that cake, like comparing scalers and so on, but those are just optional toppings. The cake is necessary.
 
So long as it's the only relevant objective metric, yes it can, and more importantly it has to be, or there's no point in having any reviews at all.
Including scaler numbers and providing lossless material so you can compare them yourself can be a nice bonus, but that's it.
We already have games where you can't pixel count.
Soon that will be most games.
So what's the point of a metric that can't be used?
And even when you can pixel count, it doesn't actually inform you as much as it used to.
In the past it was a half-decent measure.
 
That's why reviews need to stick to objective measurements, where the playing field is as level as it can be.
Agreed. However, the objective metrics must be an adequate proxy that proportionately represents a subjective experience. We experience aliasing, temporal stability and smoothness. It just so happens that the measurable quantities of resolution and fps approximately (although not quite linearly) serve as decent predictors of those experiences.

With scaling — and especially due to the spatial-temporal antialiasing they provide — raw resolution is no longer an adequate measurable proxy for image quality. We need a better metric.

I understand that *you* do not like scalers, and because of that resolution continues to be an adequate metric for you, but that’s not true of others. Good scalers (incl. game specific ones) have been widely accepted by the industry because they allow GPU horsepower to be redirected towards other more impactful purposes than raw sampling rate.
 
You are getting jitter and judder free results thanks to VRR regardless.
Pay attention to the discussion if you're willing to participate in it.

May I suggest some introspection.
You were suggesting downclocking to a person who isn't willing to enable VRR and prefers a frame rate lock...

FWIW, I prefer to both activate VRR and set a frame rate limit to a value I find satisfactory - having your system run at 100% load is just silly if you stop noticing the visual difference after 50%. You don't get input lag variation, frame rate is as constant as it gets, no excess fan noise or high temperatures, and so on.
 
With scaling — and especially due to the spatial-temporal antialiasing they provide — raw resolution is no longer an adequate measurable proxy for image quality. We need a better metric.

This is much harder though. Image quality can't easily be measured quantitatively. It's a known difficult problem in video encoding. Further, you'll need to have "reference" images in order to do objective comparisons, and that's also a problem.
 
We already have games where you can't pixel count.
Soon that will be most games.
So what's the point of a metric that can't be used?
And even when you can pixel count, it doesn't actually inform you as much as it used to.
In the past it was a half-decent measure.
Pixel counting is irrelevant; a level playing field is relevant. Apparently I should have thrown a huge asterisk in there stating that obviously this is regarding cases where there's more than one scaling method. It's just as level if everyone uses the same scaler.

Agreed. However, the objective metrics must be an adequate proxy that proportionately represents a subjective experience. We experience aliasing, temporal stability and smoothness. It just so happens that the measurable quantities of resolution and fps approximately (although not quite linearly) serve as decent predictors of those experiences.

With scaling — and especially due to the spatial-temporal antialiasing they provide — raw resolution is no longer an adequate measurable proxy for image quality. We need a better metric.

I understand that *you* do not like scalers, and because of that resolution continues to be an adequate metric for you, but that’s not true of others. Good scalers (incl. game specific ones) have been widely accepted by the industry because they allow GPU horsepower to be redirected towards other more impactful purposes than raw sampling rate.
It's not about whether I like them or not; the fact is that if everyone isn't using the same scaler, there's no point in doing any benchmarks or reviews, because the comparison instantly turns subjective instead of objective. That would hold just as true even if I loved every damn scaler out there.
Reviews are supposed to help customers make informed choices, not to help the reviewer push their personal subjective views down customers' throats as some sort of gospel.
 
You were suggesting downclocking to a person who isn't willing to enable VRR and prefers a frame rate lock...
And the reason given was to lower power consumption and heat. In which case you'd be better off with undervolting/downclocking than with fps limiting.
Also I struggle to think of a game where limiting fps would solve issues with jitter and judder on a VRR display.

FWIW, I prefer to both activate VRR and set a frame rate limit to a value I find satisfactory - having your system run at 100% load is just silly if you stop noticing the visual difference after 50%.
As I've said, that's your choice. I prefer to have a game running at the maximum my h/w allows.
I've given the example of CP2077, where having it run at a "satisfactory limit" would mean that the lock would need to be at 40 - while the game is fully capable of hitting 80 at various points. The difference between 40 and 80 is pretty stark.
Another example would be WDL, which tends to drop to the 40s in the open world while hitting 90 inside buildings.
To me this is much more preferable than locking said games to 40-45 for their entirety.

You don't get input lag variation, frame rate is as constant as it gets, no excess fan noise or high temperatures, and so on.
Input lag isn't much of an issue outside of competitive shooters and fighting games, especially on a VRR display where it's low regardless of fps.
Consistent framerate isn't important on a VRR display - consistent frame-to-frame render times are, and you won't fix that with an fps lock of any kind (see shader compilation woes).
And my system has no fan noise or issues with temperatures running fully unlocked.
 
So long as it's the only relevant objective metric, yes it can, and more importantly it has to be, or there's no point in having any reviews at all.
Including scaler numbers and providing lossless material so you can compare them yourself can be a nice bonus, but that's it.

edit: not sure what the English equivalent for "väännetään ratakiskosta" (roughly, "let's spell this out as bluntly as possible") is, but anyway
You can have as many different reviews from the exact same data as there are reviewers. How is that supposed to help consumers make an informed choice, when one reviewer prefers one method, another some other method, and so on?
That's why reviews need to stick to objective measurements, where the playing field is as level as it can be. You can add toppings on that cake, like comparing scalers and so on, but those are just optional toppings. The cake is necessary.
I would disagree that reviews would be pointless if we took away resolution as a metric, though that is a separate topic.
Ultimately there are two main factors that correlate strongly with performance and resolution: transistor count and power.
As noted here: https://forum.beyond3d.com/posts/2252880/
Upscaling techniques use less power to generate results equivalent to native; in the post above we see the difference between FSR and DLSS. There we see a 20% power saving, but what about the saving versus native? It would likely be even larger. When we look at power-limited scenarios, and we should, these upscaling techniques, whether ML-based or not, suddenly matter more than ever. There is an obvious spectrum of power/performance from a mobile GPU up to cloud computing, and somewhere in between those two endpoints is the limit on how much power and size a single desktop GPU can have. Today that is the 6900XT and the 3090. But where are we really going from this point forward? If upscaling can halve our power requirements while delivering performance equivalent to native, that matters for progression in graphics.

FSR, being a non-ML technique, saves on transistor space at the cost of slightly higher power consumption. DLSS saves on power consumption at the cost of a higher transistor count. Two techniques, but meant to solve two particular problems: the need to increase die size for higher resolutions, and the need to reduce power consumption to fit into the form factors we need them to.

IHVs don't care about reviewers' ability to benchmark; they are looking to design cards at a specific performance level within a transistor and power budget. When FSR/DLSS/XeSS become standard technologies, you may be looking at cards designed for 1440p with upscaling, or 4K with upscaling, in mind - as in, it's always turned on by default. The age of native is going to start going away because we're at the limits of what can fit into a desktop PC. A GPU that requires a 1000W power supply or more, over 1000 hours of play, consumes a megawatt-hour of energy; that's significant, and it's not reasonable to keep heading in that direction. Firstly, increasing transistor budgets coupled with increased clock frequencies would drive silicon costs way up, and we already feel that today. Secondly, cooling becomes increasingly difficult, and there's not enough of a market at extreme price points to run the industry off.
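As a rough sanity check on that figure, assuming a constant 1000 W draw over the full 1000 hours (an upper bound):

$$1000\ \mathrm{W} \times 1000\ \mathrm{h} = 1{,}000{,}000\ \mathrm{Wh} = 1\ \mathrm{MWh}$$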

The market expects performance to continue to evolve, but the price must come down, and that means die sizes and power consumption must come down for the market to continue to see relevant progression. Upscaling techniques must be brought into review procedures; reviewers will figure out a way to do this. It is not as difficult as IHVs going up against the hard wall of physics.
 
And the same is true for console-land, of course. Chasing higher native resolutions (be it 4K or 8K) is hopefully not going to happen next generation either.
 
Reviews are supposed to help customers make informed choices, not to help the reviewer push their personal subjective
That is no longer true. When you have dumb reviewers waving away ray tracing and DX12U based on their personal whims, then it's time to really not care about those reviewers and their agendas; their reviews are no longer representative of what users experience, and they are no longer even informative.

Users used to know the full capabilities of their GPUs through informed reviews that covered everything, especially new APIs and technologies. But that doesn't happen now.

Now we have a bunch of YouTube morons who think they understand better than everybody else, who think that DXR tests should be tucked into a small corner under a separate section, and who test DXR games without any DXR effects whatsoever instead of properly testing with DXR and integrating those tests into their usual suite of benchmarks. This happens despite the number of DXR games approaching 70, despite the abundance of synthetic DXR tests and DXR engines/consoles, and despite being 4 years into the life of DXR.

So for the past 4 to 5 years, those reviewers have no longer been relevant; they are way behind the curve. Either they will have to cope and keep up, or they will sink to the bottom of the hole they have already dug themselves into.
 
And the reason given was to lower power consumption and heat. In which case you'd be better off with undervolting/downclocking than with fps limiting.

And that is precisely what does not make sense. An FPS limit is a much easier way to drop power consumption effectively without causing the minimum frame rate to potentially sink to undesirable levels. Although it does seem like Nvidia hardware is at a disadvantage in this, now that I've googled a bit:
[attached image]
 
the fact is that if everyone isn't using the same scaler, there's no point in doing any benchmarks or reviews, because the comparison instantly turns subjective instead of objective.
Ideally we would have an objective metric that proportionately reflects the subjective perception of a majority of non-experts. Such a metric is not trivial to develop, as @pcchen correctly points out. But we do know the telltale problems with scalers (e.g., ghosting) and with native resolution (aliasing and temporal instability) and I think there's hope that we will be able to come up with a metric that quantitatively measures these phenomena. Until we have such a metric, we *have* to rely on subjective deep-dives (thanks @Dictator).

To me, a performance benchmark using scalers along with a subjective deep-dive on image quality is far more useful than a native-resolution performance benchmark. The latter is borderline useless to *me* on a game that supports DLSS (for example) because I don't see myself running native resolution in such a game.
 
I've given the example of CP2077, where having it run at a "satisfactory limit" would mean that the lock would need to be at 40 - while the game is fully capable of hitting 80 at various points. The difference between 40 and 80 is pretty stark.
Another example would be WDL, which tends to drop to the 40s in the open world while hitting 90 inside buildings.
To me this is much more preferable than locking said games to 40-45 for their entirety.
...and circling back to my original post which touched this off, this is why dynamic res is also a desired option.
 
Would it be possible to write an IQ comparator which would take the exact same frame (just upscaled etc.) and compare it pixel by pixel to the most supersampled, top-notch ideal? The output could then be a single number representing the distance from the ideal. It would not solve the subjective-preferences problem, unless there are scenes focusing on particular issues, like ghosting, over-sharpening, texture detail and so on.
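A minimal sketch of such a comparator, assuming you can capture both the scaler's output and a supersampled "ideal" rendering of the exact same frame as image files (the file names below are placeholders):

```python
# Minimal per-frame comparator sketch: mean squared error and PSNR of an
# upscaled frame versus a supersampled reference of the exact same frame.
# Assumes both captures have the same resolution; file names are placeholders.
import numpy as np
from PIL import Image

def load_rgb(path):
    # Load as float RGB in [0, 1] so the numbers are independent of bit depth.
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=1.0):
    m = mse(a, b)
    return float("inf") if m == 0.0 else 10.0 * np.log10(peak * peak / m)

reference = load_rgb("frame_supersampled.png")  # the "ideal" render
candidate = load_rgb("frame_upscaled.png")      # the scaler's output

print(f"MSE:  {mse(candidate, reference):.6f}")
print(f"PSNR: {psnr(candidate, reference):.2f} dB")  # higher = closer to the ideal
```

A single per-frame number like this won't capture temporal issues such as ghosting, so in practice you'd run it over a sequence of frames and look at the distribution rather than one value.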
 
The objective mathematical error metric has been known and used forever: MSE.
Human perception isn't objective though, so people have been looking for an "objective subjective" metric for quite some time. That this is not possible should be evident. If the game's pre-processing (tonemapping), the environment's mid-processing (viewing conditions) and the brain's post-processing of an image all happened to be "standardized", you could move MSE into post-processing space (basically brain-space, before meaning is attributed). Unfortunately that's a folly.
If you can accept an objective perceptor (e.g. a computer), and would accept that it is a valid representation of the averaged subjective perceptions (e.g. it's at the center of the normal distribution of divergent perceptions), MSE is as good as it gets.
 
The objective mathematical error metric has been known and used forever: MSE.
Human perception isn't objective though, so people have been looking for an "objective subjective" metric for quite some time. That this is not possible should be evident. If the game's pre-processing (tonemapping), the environment's mid-processing (viewing conditions) and the brain's post-processing of an image all happened to be "standardized", you could move MSE into post-processing space (basically brain-space, before meaning is attributed). Unfortunately that's a folly.
If you can accept an objective perceptor (e.g. a computer), and would accept that it is a valid representation of the averaged subjective perceptions (e.g. it's at the center of the normal distribution of divergent perceptions), MSE is as good as it gets.
Yeah, MSE/PSNR doesn't work very well because, as you pointed out, the error often doesn't correlate with subjective perception.

I was thinking more along the lines of recent IQA work on perceptual metrics. Here’s an example: https://arxiv.org/pdf/2104.14730.pdf
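As a concrete illustration of the learned-perceptual-metric idea (not the method from the linked paper), something like LPIPS can be dropped in where plain MSE/PSNR would otherwise go. A sketch, assuming the `lpips` and `torch` packages are installed and reusing placeholder file names:

```python
# Sketch: score an upscaled frame against a supersampled reference with a
# learned perceptual metric (LPIPS) instead of plain MSE/PSNR.
# Assumes the `lpips` and `torch` packages are installed; file names are placeholders.
import lpips
import numpy as np
import torch
from PIL import Image

def to_lpips_tensor(path):
    # LPIPS expects NCHW float tensors scaled to [-1, 1].
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0) * 2.0 - 1.0

metric = lpips.LPIPS(net="alex")  # AlexNet-backed variant of the metric
with torch.no_grad():
    dist = metric(to_lpips_tensor("frame_upscaled.png"),
                  to_lpips_tensor("frame_supersampled.png"))
print(f"LPIPS distance: {dist.item():.4f}")  # lower = perceptually closer
```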
 
Yeah, MSE/PSNR doesn't work very well because, as you pointed out, the error often doesn't correlate with subjective perception.

I was thinking more along the lines of recent IQA work on perceptual metrics. Here’s an example: https://arxiv.org/pdf/2104.14730.pdf

I did not point that out. I pointed out that conditions are so diverse that the search for that "objective subjective" metric won't yield anything useful. You can move to different spaces (e.g. contrast space), which is fine, but that's not part of the metric. The metric you'll use in any derived space is going to be MSE, or sometimes, in algorithms, MAE.
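To make the "derived space" point concrete, a small sketch: transform both images into some other space (log luminance here, an arbitrary stand-in for a contrast space) and only then apply MSE or MAE:

```python
# Sketch of the "derived space" idea: transform both images into another
# space (log luminance here, a crude stand-in for a contrast space) and
# only then compute MSE or MAE. The transform is purely illustrative.
import numpy as np

def log_luminance(rgb, eps=1e-4):
    # Rec. 709 luma weights; the log roughly mimics the eye's compressive response.
    y = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    return np.log(y + eps)

def derived_space_errors(rgb_a, rgb_b):
    # rgb_a, rgb_b: float arrays in [0, 1], same shape (H, W, 3).
    la, lb = log_luminance(rgb_a), log_luminance(rgb_b)
    mse = float(np.mean((la - lb) ** 2))
    mae = float(np.mean(np.abs(la - lb)))
    return mse, mae
```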
 
I did not point that out. I pointed out that conditions are so diverse that the search for that "objective subjective" metric won't yield anything useful. You can move to different spaces (e.g. contrast space), which is fine, but that's not part of the metric. The metric you'll use in any derived space is going to be MSE, or sometimes, in algorithms, MAE.
Okay I now understand what you were saying. I disagree though. If you look at the neural IQA work it's not as simple as (transform-space -> apply MSE). It's possible that the network is implicitly doing that, but if so it seems to have found a more effective latent space that leads to the MSE yielding better correlation with subjective perception. Or it's found a different metric altogether. In any case I'm a little surprised by the force of your absolutist claims that this is an unsolvable problem. There seems to be active ongoing research on the topic.
 