AMD FSR antialiasing discussion

  • Thread starter Deleted member 90741
Watched for about 5 mins and dropped out.
Showing the screenshot that is confirmed as not even using FSR while saying how great it looks, and showing DLSS1 from BFV again because, you know, that's still relevant in 2021 for some reason. Zero new info on FSR otherwise.
 
Watched for about 5 mins and dropped out.
Showing the screenshot that is confirmed as not even using FSR while saying how great it looks, and showing DLSS1 from BFV again because, you know, that's still relevant in 2021 for some reason. Zero new info on FSR otherwise.

I couldn't finish it either. Terrible analysis as usual. They should stick to data-driven reviews.
 
Zero new info on FSR otherwise.

Nothing new on FSR but that’s not their fault. They did telegraph what their coverage is going to be like though.

Firstly the bar for success is FSR IQ halfway between DLSS 1 and 2. Will be fun seeing how that’s measured. Also, now that FSR exists there’s even less reason to talk about DLSS in their reviews because apparently FSR and DLSS cancel each other out. They also said that it’s too much work to benchmark at native and with upscaling features enabled in every review.

The comparisons to DLSS 1.0 are somewhat relevant as it was also a spatial upscaler. Unfortunately HWUB seem more interested in burying their heads even deeper in the sand instead of really exploring the nuances of each technology.
 
Firstly the bar for success is FSR IQ halfway between DLSS 1 and 2. Will be fun seeing how that’s measured
Yeah, I wonder what the comparison strategy will be here as well. Equalize performance and check quality? But shouldn't this also account for native performance to establish the baseline? Equalize quality and check performance? What do you do when one of the solutions isn't capable of similar quality? Will DLSS UP mode suddenly become relevant to them in such comparisons? Etc.
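One concrete way to run the "equalize performance, check quality" comparison would be to score each upscaler's output against a native-resolution capture with a full-reference metric like PSNR. A minimal pure-Python sketch, using synthetic pixel data as stand-ins for real captures (the "upscaler" outputs here are just the native image plus noise, purely for illustration):

```python
import math
import random

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    # Mean squared error over flattened pixel values.
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)

random.seed(0)
# Stand-in for a native 4K capture, and two hypothetical upscaler outputs.
native = [random.randrange(256) for _ in range(4096)]
mild = [min(255.0, max(0.0, p + random.gauss(0, 2))) for p in native]   # "good" upscaler
harsh = [min(255.0, max(0.0, p + random.gauss(0, 8))) for p in native]  # "worse" upscaler

print(psnr(native, mild) > psnr(native, harsh))  # True: less deviation from native
```

Of course a real comparison would also need matched frame times, which is exactly the methodological knot described above: PSNR only answers the quality half of the question.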

The comparisons to DLSS 1.0 are somewhat relevant as it was also a spatial upscaler.
Relevant for an initial FSR assessment, not so much for figuring out how it fits into the present-day status quo. I also wonder what their take on FSR will be, given their take on DLSS1 was "it's worse than just running the game at a lower resolution". Say FSR lands "between DLSS1 and 2": wouldn't that mean it's the same as running the game at a lower resolution, by their logic? And if so, shouldn't that make FSR completely useless in their view of the matter?

Anyway it will be somewhat fun to see them try and explore all that while staying true to what they've said about DLSS previously.
 
The HWUB guys are being pretty level headed with what to expect from FSR, especially considering they were recently bullied by nvidia for not promoting DLSS and RTX as much as some other supposedly independent outlets.

It's also good that they're pointing out past overhypes from RTG as a way to say "we're not giving you a free pass on this".

I'm not saying that FSR will offer higher IQ than DLSS 2.0, but this logic is faulty. The fact that DLSS 2.0 uses AI doesn't mean anything by itself. DLSS 1.0 also used AI, and it was still worse than a simple spatial upscaler.

Nvidia's marketing team has been pretty successful at convincing some that "it's only good if it's using deep learning". Which is pretty much their job since they're the only ones putting lots of dedicated tensor cores in consumer GPUs.
Oddly, I'm not seeing anyone blasting UE5's temporal super resolution, which isn't using deep learning AFAIK.
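For anyone unclear on the distinction: a spatial upscaler (DLSS 1.0, FSR) works from the current frame only, with no history buffer or motion vectors. The simplest member of that family is bilinear filtering; a minimal pure-Python sketch, purely illustrative and not how any shipping upscaler is actually implemented:

```python
def bilinear_upscale(image, out_w, out_h):
    """Upscale a 2D list of pixel values using only the current frame (spatial)."""
    in_h, in_w = len(image), len(image[0])
    out = []
    for y in range(out_h):
        # Map the output coordinate back into the source image.
        sy = y * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0, fy = int(sy), sy - int(sy)
        y1 = min(y0 + 1, in_h - 1)
        row = []
        for x in range(out_w):
            sx = x * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0, fx = int(sx), sx - int(sx)
            x1 = min(x0 + 1, in_w - 1)
            # Blend the four neighbouring source pixels.
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

small = [[0, 100], [100, 200]]
big = bilinear_upscale(small, 3, 3)
print(big[1][1])  # center pixel: average of the four corners = 100.0
```

The point is that no amount of single-frame cleverness, learned or hand-tuned, can recover detail that was never sampled; that's the structural gap between this family and temporal solutions.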
 
Nvidia's marketing team has been pretty successful at convincing some that "it's only good if it's using deep learning". Which is pretty much their job since they're the only ones putting lots of dedicated tensor cores in consumer GPUs.
Oddly, I'm not seeing anyone blasting UE5's temporal super resolution, which isn't using deep learning AFAIK.

Nvidia's marketing didn't have to work very hard to sell that story. After all we've had GPUs that could do spatial post-processing for decades now. So it's reasonable to expect that some special sauce has been missing.

UE temporal upscaling isn't using deep learning. It seems to be hand-tuned shaders, similar to DLSS 1.9 in Control.

It has been optimised on PS5 and XSX, and we should soon be able to have these optimizations enabled on D3D11, D3D12 and Vulkan. One is already available on PC with r.TemporalAA.R11G11B10History=1.

It is more expensive because, at its core, it is designed to be a temporal upscaler (TAAU with r.TemporalAA.Upsampling=1) more than a TAA pass, in order to maintain quality when upscaling.

Maintaining quality at a lower input resolution is what truly differentiates Gen5 TAA from Gen4's, and we find that the savings from lowering the input resolution quickly outweigh its performance overhead, while maintaining better output quality with the console optimization. We already have cases where Gen5 TAAU, despite being configured with a lower screen percentage, looks better than Gen4's. But the real performance and quality benefit comes when you lower the screen percentage even further: not only to compensate for the higher upscaling cost, but also to free up some milliseconds in the rest of the frame that you can reinvest to increase quality, or even to turn on other rendering features that ultimately also contribute to higher output pixel quality.
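For reference, the knobs the Epic post mentions are UE console variables; a sketch of how one might enable the Gen5 TAAU path in a project's DefaultEngine.ini. The screen-percentage value is just an illustrative choice, and cvar availability depends on the engine version:

```ini
[SystemSettings]
; Select the Gen5 temporal AA algorithm in UE5 (0 = Gen4 path).
r.TemporalAA.Algorithm=1
; Run TAA as a temporal upscaler (TAAU) rather than a plain AA pass.
r.TemporalAA.Upsampling=1
; Console-optimized R11G11B10 history buffer, mentioned as available on PC.
r.TemporalAA.R11G11B10History=1
; Illustrative input resolution: render at ~66.7% and upscale to output.
r.ScreenPercentage=66.7
```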
 
Watched for about 5 mins and dropped out.
Showing the screenshot that is confirmed as not even using FSR while saying how great it looks, and showing DLSS1 from BFV again because, you know, that's still relevant in 2021 for some reason. Zero new info on FSR otherwise.

Everyone should watch any material in its entirety before making comments that will mislead other people. It's the least we can do before deciding to talk.

With that being said:

1) The references to DLSS1 are all explained in the video, and the context in which it was used was fully explained.
2) I watched the video again, and they are using B-roll while they speak. Their image-quality comments were made on the footage that was confirmed to be using FSR.

That is all.
 
2) I saw the video again and they are using B-roll while they speak. Their image quality comments were made on the footage that was confirmed to be using FSR.
You should certainly watch the video again, because they talk about how good FSR looks precisely while showing the ill-fated 4K native image. And stop telling others what they should talk about.
 
You should certainly watch the video again, because they talk about how good FSR looks precisely while showing the ill-fated 4K native image. And stop telling others what they should talk about.

Please source the image in question.

People were asked to talk about FSR, not to denigrate a channel's work that you don't agree with or haven't watched fully.
 
The HWUB guys are being pretty level headed with what to expect from FSR, especially considering they were recently bullied by nvidia for not promoting DLSS and RTX as much as some other supposedly independent outlets.

It's also good that they're pointing out past overhypes from RTG as a way to say "we're not giving you a free pass on this".

Nvidia's marketing team has been pretty successful at convincing some that "it's only good if it's using deep learning". Which is pretty much their job since they're the only ones putting lots of dedicated tensor cores in consumer GPUs.
Oddly, I'm not seeing anyone blasting UE5's temporal super resolution, which isn't using deep learning AFAIK.
It has a temporal component, so it should be able to pull out details a single-frame solution cannot.

It will be interesting to see what AMD has cooked up.
If it's effective within its limitations, there should be use cases for it.
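The temporal-component point is easy to demonstrate: by blending each new noisy frame into a history buffer, detail emerges that no single frame shows cleanly. A toy pure-Python sketch of an exponential history blend, the basic mechanism TAA-style upscalers build on (synthetic 1-D "frames", purely illustrative):

```python
import random

def temporal_accumulate(frames, alpha=0.1):
    """Exponential history blend: each new frame contributes a little,
    so per-frame noise averages out over time."""
    history = frames[0]
    for frame in frames[1:]:
        history = [(1 - alpha) * h + alpha * f for h, f in zip(history, frame)]
    return history

random.seed(1)
# Fine alternating detail that heavy per-frame noise obscures.
truth = [float(i % 2) for i in range(16)]
frames = [[p + random.gauss(0, 0.5) for p in truth] for _ in range(200)]

single = frames[0]
accum = temporal_accumulate(frames)
err = lambda img: sum((a - b) ** 2 for a, b in zip(img, truth))
print(err(accum) < err(single))  # accumulated result is far closer to the truth
```

Real TAA/TAAU adds jittered sample offsets, motion-vector reprojection and history rejection on top of this, which is where the hard engineering lives; the sketch only shows why history helps at all.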
 
Is that because it looks pretty decent?
Doesn't that actually prove that people aren't saying it's only good if you use ML?
Since it uses temporal accumulation I can totally believe it's pretty decent (or even excellent). It'll be really interesting to see how it A/Bs vs. DLSS2.
 