AMD FSR antialiasing discussion

Yes, but how do you know the trained ML model in each DLL wasn't optimized for a particular game?
Because several games ship with the same version, it can be improved by swapping in newer versions, and NVIDIA actually offers experimental builds with new things they're toying around with.
And NVIDIA themselves said ages ago that they switched to a generic training model. Early DLSS versions (pre-2.x? Can't remember for sure) were game specific, but that's history.
 

We still see different behavior from different 2.x DLLs, so something is changing. If later versions were universally "better", there would be no need for people to try out multiple DLLs; they would just use the latest.
 
Yes, some of the things they tweak suit some games better and others suit other games; it's always a compromise one way or another. Just like using a single generic model is.
 
PC users try different DLSS DLL versions for the same reason they try out different driver versions.
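For context, swapping the DLL is literally a file copy: DLSS titles ship nvngx_dlss.dll next to the game executable, and people back it up and drop a different build in its place. A minimal sketch of that workflow (the paths here are made-up examples, not from any particular game):

```python
import shutil
from pathlib import Path

# Hypothetical example paths -- point these at a real install and a
# downloaded DLL build before running.
GAME_DIR = Path(r"C:\Games\SomeGame")                  # folder holding the game's nvngx_dlss.dll
NEW_DLL = Path(r"C:\Downloads\nvngx_dlss_2.4.0.dll")   # the version to try

target = GAME_DIR / "nvngx_dlss.dll"
backup = GAME_DIR / "nvngx_dlss.dll.bak"

# Keep the original so the shipped version can be restored later.
if not backup.exists():
    shutil.copy2(target, backup)

# Drop the replacement in under the name the game loads.
shutil.copy2(NEW_DLL, target)
print(f"Replaced {target} (backup at {backup})")
```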
 

Interesting that FSR 2 and DLSS performance are very similar, with DLSS a hair more performant.

Makes me wonder whether DLSS is really using tensor cores... or is it using both the main GPU cores + tensor?
Because if DLSS were just using tensor cores, it should be way more performant than FSR 2, right?

Or is FSR 2 simply that good? It's super lightweight.
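One back-of-the-envelope way to see why the two can land so close even if the upscaling passes themselves differ in speed: the upscale is only a small slice of the frame, so the rest of the frame dominates the total. The numbers below are illustrative assumptions, not measurements of either technique.

```python
# Illustrative frame-time budget in milliseconds -- assumed numbers, not benchmarks.
render_ms = 14.0        # cost of rendering the frame at the lower internal resolution
upscale_a_ms = 1.0      # hypothetical cost of upscaler A's pass
upscale_b_ms = 0.7      # hypothetical cost of upscaler B's pass

for name, upscale_ms in [("A", upscale_a_ms), ("B", upscale_b_ms)]:
    frame_ms = render_ms + upscale_ms
    print(f"Upscaler {name}: {frame_ms:.1f} ms/frame -> {1000.0 / frame_ms:.1f} fps")

# Even a ~30% faster upscale pass moves the total frame time by only ~2%,
# which is why end-to-end FPS for the two techniques can look nearly identical.
```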
 

They are different technologies. FSR is hand-tuned, like UE's TAAU and whatever consoles are using, now on PC; DLSS is using machine learning/AI tech.
 
Interesting that FSR 2 and DLSS performance are very similar, with DLSS a hair more performant.

Makes me wonder whether DLSS is really using tensor cores... or is it using both the main GPU cores + tensor?
Because if DLSS were just using tensor cores, it should be way more performant than FSR 2, right?

Or is FSR 2 simply that good? It's super lightweight.
It's most likely just a case of "we're using tensor cores to justify their existence better" rather than actually needing them or even getting notable performance from them.
Tensor cores are fast at matrix crunching, but not every workload is that well suited to them.
They are different technologies. FSR is hand-tuned, like UE's TAAU and whatever consoles are using, now on PC; DLSS is using machine learning/AI tech.
You're throwing that "ML/AI" around like it's some magic bullet or fundamentally different from other temporal AA/scalers - it's not. It just uses a pretrained model to pick/ignore samples, while other TAAU methods use predefined algorithm(s) to do the same.
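To make the "pick/weight your samples" point concrete, here is a toy per-pixel sketch of the two styles of temporal accumulation: a hand-tuned heuristic (neighbourhood clamp plus a fixed blend factor, roughly the TAAU-style approach) versus the same blend where the weight comes from some pretrained model. This is a simplified illustration, not either vendor's actual algorithm.

```python
import numpy as np

def heuristic_resolve(history, current, neighborhood, blend=0.1):
    """Hand-tuned style: clamp the history sample to the local neighborhood
    of the current frame, then blend with a fixed factor."""
    lo, hi = neighborhood.min(axis=0), neighborhood.max(axis=0)
    clamped_history = np.clip(history, lo, hi)   # rejects stale samples that cause ghosting
    return (1.0 - blend) * clamped_history + blend * current

def learned_resolve(history, current, features, model):
    """ML style: the same accumulation, but a pretrained model predicts the
    blend weight (and implicitly which samples to trust)."""
    w = model(features)                          # weight in [0, 1] from the network
    return (1.0 - w) * history + w * current

# Tiny usage example with a stand-in "model".
history = np.array([0.8, 0.2, 0.1])              # accumulated colour (RGB)
current = np.array([0.5, 0.3, 0.1])              # this frame's jittered sample
neighborhood = np.array([[0.4, 0.2, 0.1], [0.6, 0.4, 0.2]])
fake_model = lambda f: 0.25                      # placeholder for a real network
print(heuristic_resolve(history, current, neighborhood))
print(learned_resolve(history, current, features=None, model=fake_model))
```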
 
"we're using tensor cores to justify their existence better"

Like the Tempest core on PS5? Or the PS4 Pro's special ID buffer? We can scream this about other platforms and vendors too.

You're throwing that "ML/AI" around like it's some magic bullet or fundamentally different from other temporal AA/scalers - it's not. It just uses a pretrained model to pick/ignore samples, while other TAAU methods use predefined algorithm(s) to do the same.

TAAU/checkerboarding and custom temporal solutions (all akin to FSR2) use hand-tuned solutions, whereas DLSS is taking the machine learning route. They try to achieve the same goals, though. It remains to be seen which is 'better', although 'better' doesn't really exist, I think. I feel FSR 2.0 is a complementary feature for the PC GPU space, which is great to see, as not everyone can or wants to use DLSS.

Also, it's probably a good idea to see further testing across different games, benchmarks and settings, as well as more GPU-to-GPU comparisons.

A bit more on the different approaches: AMD's FSR 2.0 is John Henry versus Nvidia DLSS's machine learning | VentureBeat

I do not think we can rule out ML/AI technologies just yet; they can probably save time in the future and most likely improve on what humans can do manually. With both Intel and NV going the same route (AI acceleration), I assume that AMD will eventually do so too, maybe with FSR 3.0+.
 
Really good initial look at a single-title comparison, with 3 resolutions, multiple quality levels and motion tests.

One thing they missed is the FSR2 shimmering around the gun at the bottom left of the screen at the 11:00 mark. That ghosting on the DLSS motion at 13:20, though, oof.

Overall takeaway from the video is that they're very close, with DLSS having a slight edge on fine-detail reconstruction, and performance practically the same.
 
TAAU/checkerboarding and custom temporal solutions (all akin to FSR2) use hand-tuned solutions, whereas DLSS is taking the machine learning route.
Literally what I just said. The only difference is how you pick/weight your samples: predefined algorithm or pretrained neural network.
 
Really good initial look at a single-title comparison, with 3 resolutions, multiple quality levels and motion tests.

One thing they missed is the FSR2 shimmering around the gun at the bottom left of the screen at the 11:00 mark. That ghosting on the DLSS motion at 13:20, though, oof.

Overall takeaway from the video is that they're very close, with DLSS having a slight edge on fine-detail reconstruction, and performance practically the same.

The annoying thing with DLSS is that... it has ridiculous ghosting variance between versions. For example, even within the DLSS 2.3.x family, the amount of ghosting varies a lot.
 
Really good initial look at a single-title comparison, with 3 resolutions, multiple quality levels and motion tests.

One thing they missed is the FSR2 shimmering around the gun at the bottom left of the screen at the 11:00 mark. That ghosting on the DLSS motion at 13:20, though, oof.

Overall takeaway from the video is that they're very close, with DLSS having a slight edge on fine-detail reconstruction, and performance practically the same.
They also missed that FSR2 was straight up doing a better job at antialiasing. ~6:38 shows this well, as it persists even with default higher sharpening and not just with zero sharpening.
 
So, take a hypothetical situation where FSR2 performs similarly to DLSS across many titles, with comparable image quality, motion stability and performance. Why would a developer decide to implement both DLSS and FSR2 when the latter is an open solution that supports all GPUs across several generations?

My initial thought is that both could likely be implemented in parallel at relatively small dev cost, to provide choice to the user.
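A sketch of why wiring up both can be a fairly small incremental cost: the two upscalers consume essentially the same per-frame inputs (low-resolution colour, depth, motion vectors, sub-pixel jitter), so once those are exposed, each backend is a thin wrapper behind one interface and most of the per-title work (correct motion vectors, jitter) is shared. The interface and class names below are hypothetical; the real integrations go through the FidelityFX FSR2 and NVIDIA NGX/Streamline SDKs.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class UpscaleInputs:
    """The per-frame resources both techniques need (simplified)."""
    color: object           # low-resolution colour target
    depth: object           # depth buffer
    motion_vectors: object  # per-pixel motion vectors
    jitter: tuple           # sub-pixel jitter offset used this frame
    output: object          # full-resolution destination

class Upscaler(ABC):
    """Hypothetical engine-side interface; each backend wraps a vendor SDK."""
    @abstractmethod
    def dispatch(self, inputs: UpscaleInputs) -> None: ...

class FSR2Backend(Upscaler):
    def dispatch(self, inputs: UpscaleInputs) -> None:
        # In a real engine this would call into the FidelityFX FSR2 library.
        print("FSR2 dispatch with jitter", inputs.jitter)

class DLSSBackend(Upscaler):
    def dispatch(self, inputs: UpscaleInputs) -> None:
        # In a real engine this would call into NVIDIA's NGX/Streamline SDK.
        print("DLSS dispatch with jitter", inputs.jitter)

def render_frame(upscaler: Upscaler, inputs: UpscaleInputs) -> None:
    # ...render at the lower internal resolution, then hand off to whichever
    # upscaler the user picked in the settings menu.
    upscaler.dispatch(inputs)

render_frame(FSR2Backend(), UpscaleInputs(None, None, None, (0.25, -0.25), None))
```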
 
Regarding ML, the learning part of it is what's more taxing. Inference from the model is probably way, way less intensive, and arguably may not require tensor cores at all, as performance from "normal" cores increases with time, as long as we keep aiming for 4K max. So the question is not whether it really needs tensor cores, but for how long, if the number of pixels to work with is kept constant. In the end, yes, tensor cores on consumer products will be a waste unless we find ways to offload other tasks to them.
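A rough back-of-the-envelope for the "pixel count stays constant" argument: if the per-pixel inference cost is fixed, the per-frame cost is just (cost per output pixel) x (output pixels), so targeting 4K pins it in place while general shader throughput keeps growing. The network cost below is invented purely for illustration, not the size of any real upscaling model.

```python
# Invented, illustrative numbers -- not measurements of any real upscaler.
flops_per_output_pixel = 4_000   # assumed fused ops per upscaled pixel
fps_target = 120

for name, (w, h) in {"1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    flops_per_frame = flops_per_output_pixel * w * h
    tflops_needed = flops_per_frame * fps_target / 1e12
    print(f"{name}: ~{flops_per_frame / 1e9:.1f} GFLOPs/frame, "
          f"~{tflops_needed:.1f} TFLOPs sustained at {fps_target} fps")

# The output pixel count (and hence the inference cost) stays fixed as long as
# the target stays 4K, which is the argument that dedicated matrix units may
# matter less for this workload as general-purpose throughput grows.
```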
 
It would be something if NV's and Intel's solutions were open source/available to all GPUs, instead of only AMD going that route. I think the automatic/ML route is less time-consuming and probably better for the future. Hand-coded/TAAU solutions have been around for a while now.

My question is: why didn't AMD go the AI/ML route for FSR 2.0? If the AI route is just as good as hand-tuned, or close, why not go that route as well (but in a way that works across all GPUs)?
 