AMD FSR antialiasing discussion

You've made up your mind about HUB before even reading their piece.
There's a world of difference between saying that you won't notice the differences immediately or during gameplay and saying that you don't see them at all. HUB is saying the latter, all the rest are saying the former. LTT I don't view at all, their reviews are awful.
 
You're looking at it backwards, I think. FSR takes an image at a base resolution and scales it up to a target resolution. It does not do this in reverse. So there is no performance gain at the cost of quality. The algorithms eat performance to increase quality; in this case, by sampling the lower resolution and extracting detail for a higher resolution.

The base resolution for 4K Ultra is 1660p. If you want to look at performance numbers, you compare 1660p vs 4K FSR Ultra in terms of its image quality.
If you compare 4K Native vs 4K FSR and ignore image quality, then you are missing the point of what FSR is trying to achieve.
This one is definitely a matter of perspective. You're both right in your own way.
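To put a number on the base resolutions mentioned above, here's a rough sketch that derives FSR 1.0's internal render resolutions from the per-mode scale factors AMD has published; the factors are quoted from memory, so treat the exact figures as approximate.

```python
# Rough sketch (not AMD's code): derive FSR 1.0 internal render
# resolutions from the per-mode scale factors. Factors quoted from
# memory -- treat the exact values as approximate.
SCALE_FACTORS = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

def internal_resolution(out_w, out_h, mode):
    """Resolution the game actually renders at before the FSR upscale."""
    factor = SCALE_FACTORS[mode]
    return round(out_w / factor), round(out_h / factor)

for mode in SCALE_FACTORS:
    w, h = internal_resolution(3840, 2160, mode)
    print(f"4K {mode:>13}: renders {w}x{h}, upscales to 3840x2160")

# 4K Ultra Quality works out to roughly 2954x1662 -- the "~1660p"
# base resolution being discussed in this thread.
```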
 
There's a world of difference between saying that you won't notice the differences immediately or during gameplay and saying that you don't see them at all. HUB is saying the latter, all the rest are saying the former. LTT I don't view at all, their reviews are awful.
They never said they don't see them at all. Where do they say that?
 
There's a world of difference between saying that you won't notice the differences immediately or during gameplay and saying that you don't see them at all. HUB is saying the latter, all the rest are saying the former. LTT I don't view at all, their reviews are awful.

Handpicking can be done on either side. Usually DF is the most reliable source.
 
This one is definitely a matter of perspective. You're both right in your own way.
I don't see how there is a matter of perspective on this. The diagram shows that the algorithm only works in one direction. It goes from a lower resolution buffer to a higher resolution buffer.
We do not create a 4K native image and go in reverse.

If you're doing any sort of analysis without bias and you want to measure the effectiveness of FSR, you will measure the before and after. The before is 1660p. The after is 4K. You will measure 1660p bilinear vs 4K FSR.

You won't be measuring frame time, because clearly FSR will eat more performance than the native resolution it upscaled from.
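For what it's worth, here's a hypothetical version of that before/after measurement, just to make it concrete; the file names and the choice of SSIM as the quality metric are assumptions for illustration, not anything DF or HUB actually ran.

```python
# Hypothetical before/after measurement of the kind described above.
# File names, resolutions, and the choice of SSIM as the metric are
# all assumptions for illustration.
import cv2
from skimage.metrics import structural_similarity as ssim

native_4k = cv2.imread("native_3840x2160.png")    # reference frame
low_res   = cv2.imread("internal_2954x1662.png")  # the "before" frame
fsr_4k    = cv2.imread("fsr_uq_3840x2160.png")    # the "after" frame

# "Before": the ~1660p frame simply stretched to 4K with bilinear filtering.
bilinear_4k = cv2.resize(low_res, (3840, 2160), interpolation=cv2.INTER_LINEAR)

# Score both against native 4K. SSIM is only a rough proxy for perceived
# quality, but it turns "before vs after" into a number.
print("1662p bilinear vs native 4K:", ssim(bilinear_4k, native_4k, channel_axis=-1))
print("4K FSR UQ vs native 4K:     ", ssim(fsr_4k, native_4k, channel_axis=-1))
```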
 
I don't see how there is a matter of perspective on this. The diagram shows that the algorithm only works in one direction. It goes from a lower resolution buffer to a higher resolution buffer.
We do not create a 4K native image and go in reverse.

If you're doing any sort of analysis without bias and you want to measure the effectiveness of FSR, you will measure the before and after. The before is 1660p. The after is 4K. You will measure 1660p bilinear vs 4K FSR.

You won't be measuring frame time, because clearly FSR will eat more performance than the native resolution it upscaled from.
With its intent being to cost less than native rendering while achieving acceptably close results, you can quite easily say it's for performance.

It is a matter of perspective vOv
 
With its intent being to cost less than native rendering while achieving acceptably close results, you can quite easily say it's for performance.

It is a matter of perspective vOv
Yes, but your entire argument hinges on "while achieving acceptably close results".
So resolution in itself is not important; what's important is whether it achieves acceptably close results. In this case we are referring to image quality. So if we are comparing image quality, then DF has not done it wrong by bringing in other upsampling techniques to compare image quality against.
 
Yes, but your entire argument hinges on "while achieving acceptably close results".
So resolution in itself is not important; what's important is whether it achieves acceptably close results. In this case we are referring to image quality. So if we are comparing image quality, then DF has not done it wrong by bringing in other upsampling techniques to compare image quality against.
I'm not arguing about DF bringing other upsampling techniques. In that regard I fully agree with DF, if they didn't compare it against anything else already available then what's the point?

I'm just saying all of these technologies could be called IQ algorithms or performance improvements, depending on how you look at them.

It's the purest form of semantics here lol
 
You're looking at it backwards, I think. FSR takes an image at a base resolution and scales it up to a target resolution. It does not do this in reverse. So there is no performance gain at the cost of quality. The algorithms eat performance to increase quality; in this case, by sampling the lower resolution and extracting detail for a higher resolution.

The base resolution for 4K Ultra is 1660p. If you want to look at performance numbers, you compare 1660p vs 4K FSR Ultra in terms of its image quality, i.e. what image quality is gained by spending some performance to use FSR to bring the image to 4K.
If you compare 4K Native vs 4K FSR and ignore image quality, then you are missing the point of what FSR is trying to achieve.

These techniques are introduced for the sole purpose of increasing framerates at an output resolution at the cost of internal resolution with the least quality impact possible. You didn't learn the performance impact of FSR on your gaming experience from watching the DF video. Hence, my question.

Even if you personally can infer what the performance should be, everyone is not you.
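As a toy illustration of why both framings come up in this thread, here's some made-up frame-time arithmetic; every number is invented purely to show the trade-off, not a measurement of any game or GPU.

```python
# Toy frame-time arithmetic -- every number below is invented purely to
# illustrate the trade-off, not a measurement of any game or GPU.
native_4k_ms = 25.0   # hypothetical cost of rendering natively at 3840x2160
internal_ms  = 15.0   # hypothetical cost of rendering at ~2954x1662
fsr_pass_ms  = 0.7    # hypothetical cost of the FSR upscale pass itself

fsr_total_ms = internal_ms + fsr_pass_ms

print(f"native 4K      : {1000 / native_4k_ms:5.1f} fps")
print(f"~1662p, no FSR : {1000 / internal_ms:5.1f} fps")
print(f"4K FSR UQ      : {1000 / fsr_total_ms:5.1f} fps")

# Relative to the 1662p frame it starts from, FSR costs a little
# performance to add quality; relative to native 4K it trades a little
# quality for a lot of performance. Same pipeline, two framings.
```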
 
GamersNexus: "Ultra quality looks close enough to native 4K that the difference might not be immediately obvious"
LTT: "Compared to native, Ultra looks nearly indistinguishable"
TPU: "I'd say FSR Ultra Quality is "almost native", even FSR Quality is good enough not to notice much of a difference during actual gameplay."
Hothardware: "Without blowing the image up to 2x to find all the hard edges, or swapping back and forth between images, we're actually quite hard-pressed to find glaring differences. The stairs are a little more blurry, and perhaps there's not quite as much detail in the character's hair, but in motion it's basically indistinguishable."

You've made up your mind about HUB before even reading their piece.

Don't know, but if you can't see the difference between 4K and FSR UQ in Godfall, I would suspect that image quality doesn't matter to them:
And this was DLSS 1.0 in FF15:
 
It's a substitute for TAAU where that tech isn't implemented in engines.

If AMD wanted to make that their message, they would have done so explicitly. TAAU has a more significant performance impact than FSR, which AMD clearly isn't interested in promoting. They are working against TAAU, right along with NVIDIA.

If it succeeds on PC despite both NVIDIA's and AMD's efforts, it's a fucking miracle.
 
If AMD wanted to make that their message, they would have done so explicitly. TAAU has a more significant performance impact than FSR, which AMD clearly isn't interested in promoting. They are working against TAAU, right along with NVIDIA.

If it succeeds on PC despite both NVIDIA's and AMD's efforts, it's a fucking miracle.

How are they working against it? If the devs decide to use it, it's working. They're not blacklisting the tech or anything.
 
Everything takes development time ... devs are getting money and free marketing to support FSR and DLSS, and AMD/NVIDIA are presenting them as complete solutions while pretending TAAU simply doesn't exist. What's the point in implementing TAAU on PC when it just embarrasses your sponsors?

If AMD believes TAAU is important for their ability to compete, they need to speak up ... but apparently they think that with enough PR they can pretend the emperor has clothes instead. Maybe they are right, dunno, I don't like it regardless.
 
Everything takes development time ... devs are getting money and free marketing to support FSR and DLSS, and AMD/NVIDIA are presenting them as complete solutions while pretending TAAU simply doesn't exist. What's the point in implementing TAAU on PC when it just embarrasses your sponsors?

If AMD believes TAAU is important for their ability to compete, they need to speak up ... but apparently they think that with enough PR they can pretend the emperor has clothes instead. Maybe they are right, dunno, I don't like it regardless.


But TAAU was there before FSR and DLSS 2, right? And still it was not widely deployed. I don't know, maybe you're right.

Having some feedback from a medium-size studio dev would be nice, to figure out how much the marketing is really influencing this...
 