AMD FSR antialiasing discussion

Well, yes, since the faster GPU will render faster. Also, faster GPUs will be able to do higher-resolution upscaling.
[Attached images: FSR Performance mode frame timing, FSR hardware requirements, FSR Quality mode comparison]
Also worth noting that this isn't limited to just FSR.

DLSS 2.0 is the same in this metric. In terms of render costs there doesn't appear to be a significant difference between the two either.

https://media.discordapp.net/attachments/697380355937927239/956649443388686396/20220324_202136.jpg
 
OK, thanks, that's interesting. The XSX GPU has somewhat more to play with here, then.
Series X and PS5 would most likely fall in line with the 6600, which they have at 1440p. Maybe between a 6600 and a 6700 XT?
The Series S would most likely be around 6500 performance, so it should be good at 1080p.

The thing with the consoles is that the developer can go in and fine-tune everything for FSR 2.0, instead of just having us pick what we want in the end.

I do wonder how well this works with dynamic resolution, however.
 
Series X and PS5 would most likely fall in line with the 6600, which they have at 1440p. Maybe between a 6600 and a 6700 XT?
The Series S would most likely be around 6500 performance, so it should be good at 1080p.

The thing with the consoles is that the developer can go in and fine-tune everything for FSR 2.0, instead of just having us pick what we want in the end.

I do wonder how well this works with dynamic resolution, however.

Well we should find out soon enough, as Deathloop already has the pieces in place for dynamic resolution support, and I'm pretty sure I read somewhere this'll extend to FSR 2.0 as well.
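For reference, the public FSR 2.0 API already takes the render resolution per dispatch, so dynamic resolution support largely amounts to feeding it whatever size the DRS heuristic picked that frame. A rough C++ sketch against the FidelityFX FSR 2 headers (field names recalled from the SDK, so treat them as approximate):

    // Sketch only: assumes the FidelityFX FSR 2 API (ffx_fsr2.h); exact field
    // names may differ slightly from the shipping headers.
    #include <ffx_fsr2.h>

    void upscaleWithDynamicRes(FfxFsr2Context* context,
                               FfxFsr2DispatchDescription desc,
                               uint32_t drsWidth, uint32_t drsHeight,
                               float frameTimeMs)
    {
        // The context is created once for the maximum render size and the fixed
        // display size; only the per-frame render size changes under DRS.
        desc.renderSize.width  = drsWidth;    // whatever the DRS system chose this frame
        desc.renderSize.height = drsHeight;
        desc.frameTimeDelta    = frameTimeMs; // feeds the temporal accumulation
        desc.reset             = false;       // set true on camera cuts to clear history

        ffxFsr2ContextDispatch(context, &desc);
    }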
 
I am interested to see what this technology brings to consoles like the PS5 and Xbox Series. Console devs use their own methods for upscaling and such, so it will be nice to compare and contrast to see what works best in which scenario. Maybe a Returnal situation can be something different, where they can still render at native 1080p but have it look a lot less like 1080p.

If FSR 2.0 works particularly well in the best-case scenario, I'm also interested in seeing what the performance savings for the consoles' GPUs get funneled into. Either you render at the same resolution you were already going to, just with a higher-quality upscale (1440p to 4K, let's say), or you decide to lower the resolution down to 1080p, for example, to get better fps or graphics with a good 1440p-tier upscale.

The consoles' RT hardware is good, but it does take quite a toll on GPU resources. Would this ease the load while potentially providing a higher-quality image?
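To put rough numbers on that trade-off: FSR 2.0's presets scale each axis by a fixed factor (Quality 1.5x, Balanced 1.7x, Performance 2.0x, Ultra Performance 3.0x), so the internal render resolution for a given output is easy to work out. A small C++ sketch of that arithmetic:

    #include <cstdint>
    #include <cstdio>

    // Published FSR 2.0 per-axis scale factors for each quality preset.
    constexpr float kQuality          = 1.5f;
    constexpr float kBalanced         = 1.7f;
    constexpr float kPerformance      = 2.0f;
    constexpr float kUltraPerformance = 3.0f;

    struct Resolution { uint32_t w, h; };

    // Internal render resolution needed for a given display resolution and preset.
    Resolution renderResolution(Resolution display, float scale)
    {
        return { static_cast<uint32_t>(display.w / scale),
                 static_cast<uint32_t>(display.h / scale) };
    }

    int main()
    {
        Resolution uhd{3840, 2160};
        Resolution q = renderResolution(uhd, kQuality);     // 2560x1440 internal
        Resolution p = renderResolution(uhd, kPerformance); // 1920x1080 internal
        std::printf("Quality: %ux%u, Performance: %ux%u\n", q.w, q.h, p.w, p.h);
        return 0;
    }

So the choice described above, keeping 1440p internal for a nicer 4K image versus dropping to 1080p internal for more headroom, maps directly onto the Quality and Performance presets.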
 
Likely nothing. As you've said consoles already use various forms of resolution reconstruction, including TAAU.

That was DF's take as well. Though the Xbox side seems invested in FSR 2.0, as is evident from the presentations, which is interesting. It could be useful for multiplatform games and ports.
 
Likely nothing. As you've said consoles already use various forms of resolution reconstruction, including TAAU.

Really? Then, barring marketing deals, why would MS be so proud to put it in their Xbox SDK as something to use? I don't really understand. Is it really just as simple as "hey, this feature exists and we have it!"?

From what I understand, devs usually use their own types of methods, and I guess for those devs who don't use something of their own design it would be easier to implement this, maybe.
 
Really? Then, barring marketing deals, why would MS be so proud to put it in their Xbox SDK as something to use? I don't really understand. Is it really just as simple as "hey, this feature exists and we have it!"?

From what I understand, devs usually use their own types of methods, and I guess for those devs who don't use something of their own design it would be easier to implement this, maybe.
Why not put it into the SDK? It's free and it's an option. The Xbox API is very close to PC DX, so it's easy as well. But I wouldn't expect much impact from it, just like there wasn't much from FSR 1.0.
 
Why not put it into the SDK? It's free and it's an option. The Xbox API is very close to PC DX, so it's easy as well. But I wouldn't expect much impact from it, just like there wasn't much from FSR 1.0.

I also see FSR 2.0 as something for the PC GPU space, and, as you say, the Xbox platform/API could make it easier for studios to implement it. It may or may not be better than what console devs already have, but it's a nice option for sure, and it could see some usefulness where other existing solutions aren't up to snuff for performance.

It's an interesting development for sure. With a 147% improvement in performance in Deathloop, as the presentations would have us believe, I wouldn't know why console devs wouldn't want to explore this.
 
I also see FSR 2.0 as something for the PC GPU space, and, as you say, the Xbox platform/API could make it easier for studios to implement it. It may or may not be better than what console devs already have, but it's a nice option for sure, and it could see some usefulness where other existing solutions aren't up to snuff for performance.

It's an interesting development for sure. With a 147% improvement in performance in Deathloop, as the presentations would have us believe, I wouldn't know why console devs wouldn't want to explore this.

That is what I'm saying. It's kinda weird to say there is zero impact at all in any sense.
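For what it's worth, a "147% improvement" means the frame rate goes up by a factor of about 2.47x, which is plausible given Performance mode renders roughly a quarter of the native pixel count. A trivial sanity check in C++ (the baseline fps here is an illustrative assumption, not a measured Deathloop number):

    #include <cstdio>

    int main()
    {
        // A "147% improvement" means new_fps = old_fps * (1 + 1.47) = old_fps * 2.47.
        const double improvement = 1.47;
        const double baselineFps = 30.0; // illustrative assumption, not a measured figure
        const double boostedFps  = baselineFps * (1.0 + improvement);
        std::printf("%.0f fps -> %.1f fps\n", baselineFps, boostedFps); // 30 -> 74.1
        return 0;
    }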
 
Also worth noting that this isn't limited to just FSR.

DLSS 2.0 is the same in this metric. In terms of render costs there doesn't appear to be a significant difference between the two either.

https://media.discordapp.net/attachments/697380355937927239/956649443388686396/20220324_202136.jpg
This leaves Nvidia in a bad place. Are AI and Tensor cores really necessary, as they pointed out a thousand times, for DLSS to actually work? I don't think so, but it's a way to sell their tech.

On a different note... glad I fixed my GTX 1080; even with NIS on, my GTX 1060 3GB sometimes struggled. Still, FSR 2.0 will work well with it, if need be.

AMD doesn't recommend using a GTX 1060 with frame-rate boosting FSR 2.0 | PC Gamer
 
This leaves Nvidia in a bad place. Are AI and Tensor cores really necessary, as they pointed out a thousand times, for DLSS to actually work? I don't think so, but it's a way to sell their tech.
Unreal's TSR and FSR 2.0 in one camp, DLSS and XeSS in the other camp. Can't wait for the reviewers to perform their magic!
 
Unreal's TSR and FSR 2.0 in one camp, DLSS and XeSS in the other camp. Can't wait for the reviewers to perform their magic!
XeSS doesn't require matrix acceleration; it has a version optimized to run on it, but that doesn't mean there would be any other differences between the two.
Haven't gone through their GDC stuff yet, but I can't remember them even mentioning neural nets for XeSS.

Edit: just opened their GDC site; it says AI is front and center for XeSS, but there's still no need for matrix acceleration.
 
This leaves Nvidia in a bad place. Are AI and Tensor cores really necessary, as they pointed out a thousand times, for DLSS to actually work? I don't think so, but it's a way to sell their tech.
The premise wasn't that it was necessary for it to work, but rather that it was necessary for performance reasons.
 
The premise wasn't that it was necessary for it to work, but rather that it was necessary for performance reasons.
According to Intel, on their solution matrix acceleration cuts the time roughly in half compared to going without it, but even without it you don't have to worry about the performance cost; it's still just a fraction of native-res rendering.
 
XeSS doesn't require matrix acceleration; it has a version optimized to run on it, but that doesn't mean there would be any other differences between the two.
Haven't gone through their GDC stuff yet, but I can't remember them even mentioning neural nets for XeSS.

Edit: just opened their GDC site; it says AI is front and center for XeSS, but there's still no need for matrix acceleration.
It's about machine learning cores, which XeSS and DLSS have in common. The difference is that Intel's can also run on cards supporting DP4a, though with less quality and performance.
What is Intel XeSS, and how does it compare to Nvidia DLSS? | Digital Trends
The secret sauce is machine learning. Intel Arc Alchemist graphics cards include Xe Matrix Extension (XMX) cores, which run the A.I. model to perform the upscaling. They’re similar to Nvidia’s Tensor cores on RTX 30-series graphics cards, and Intel says they provide the best quality with XeSS.

However, XeSS can also work without the XMX cores. Graphics cards that support DP4a instructions — useful for A.I. calculations — also work. We have a full list of supported GPUs below, but the latest RX 6000 graphics cards from AMD support these instructions, as do the last several generations from Nvidia.
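For context on what "DP4a instructions" means there: DP4a is a single instruction that takes two 32-bit words, treats each as four packed 8-bit integers, multiplies them lane-wise, and adds the results to a 32-bit accumulator, which is why it's handy for int8 inference when dedicated matrix units (XMX/Tensor cores) aren't available. A reference C++ sketch of the semantics (roughly what CUDA's __dp4a or HLSL's dot4add_i8packed do in one instruction):

    #include <cstdint>

    // Reference semantics of a signed DP4a: four int8 lane products summed into
    // a 32-bit accumulator. Hardware does this in a single instruction.
    int32_t dp4a(uint32_t a, uint32_t b, int32_t acc)
    {
        for (int lane = 0; lane < 4; ++lane)
        {
            const int8_t ai = static_cast<int8_t>((a >> (8 * lane)) & 0xFF);
            const int8_t bi = static_cast<int8_t>((b >> (8 * lane)) & 0xFF);
            acc += static_cast<int32_t>(ai) * static_cast<int32_t>(bi);
        }
        return acc;
    }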
 
It's about machine learning cores, which XeSS and DLSS have in common. The difference is that Intel's can also run on cards supporting DP4a, though with less quality and performance.
What is Intel XeSS, and how does it compare to Nvidia DLSS? | Digital Trends
I'm in the middle of a night shift at a psychosis ward, so excuse my possible laziness, but where exactly did you pick up a quality difference between the two modes?
I have seen the slide where the per-frame cost of XeSS is about double for DP4a compared to XMX, but no mention of a quality difference in that slide.
 
I'm in the middle of a night shift at a psychosis ward, so excuse my possible laziness, but where exactly did you pick up a quality difference between the two modes?
I have seen the slide where the per-frame cost of XeSS is about double for DP4a compared to XMX, but no mention of a quality difference in that slide.
It's in the quote I gave ... he quoted Intel from some document. I remember reading this as well in one of Intel's articles.
 