Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

On paper you're right, but that's a lot that needs to happen. If Nvidia isn't capable of it yet, despite working hard on this, I don't see AMD getting there soon.
Nvidia's solution is universal; they don't need to retrain per title. Developers just need to make adjustments to their rendering path to support DLSS. AMD can get there, if MS isn't already the one working on it.
 
And who's going to do all the training for each game or come up with something similar to DLSS 2.0 that integrates and works for potentially all games? AMD? They don't exactly have the same resources as Nvidia though they're usually far more open with their technologies.
Microsoft since they're pushing it on Xbox anyway?
 
There is more than sufficient compute to run the models very well.

You're not going to get to breakneck speeds, but there is plenty of compute available to support 30 fps and 60 fps.
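Back-of-envelope, the budget looks something like this (a sketch in Python; the 1.5 ms inference cost is my own assumption, in line with the roughly 1-2 ms figures reported for DLSS 2.0 on RTX cards, not a measured number):

```python
# Rough frame-budget arithmetic for an NN upscaling pass at console frame rates.

def frame_budget_ms(fps: int) -> float:
    """Total time available per frame, in milliseconds."""
    return 1000.0 / fps

# Assumed cost of the reconstruction pass; ~1.5 ms is a guess in line with
# the 1-2 ms figures quoted for DLSS 2.0 on tensor-core hardware.
upscale_cost_ms = 1.5

for fps in (30, 60):
    budget = frame_budget_ms(fps)
    share = upscale_cost_ms / budget * 100
    print(f"{fps} fps: {budget:.1f} ms/frame, upscaling eats ~{share:.0f}% of it")
```

Even if a shader-based version took 2-3x longer, it would still fit comfortably inside a 33 ms (30 fps) frame, and plausibly inside a 16.7 ms (60 fps) one.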

One of Digital Foundry's recent DLSS articles quoted Nvidia stating that there were feature limitations of DLSS 1.9 (shader based) vs DLSS 2.0 (tensor based), and that the quality on shaders basically couldn't scale upwards beyond 1.9's limits. May have just been marketing speak, of course.

I'm on a phone now so I can't search for the quote, but it may have been in the Control article.
 
Nvidia's solution is universal; they don't need to retrain per title. Developers just need to make adjustments to their rendering path to support DLSS. AMD can get there, if MS isn't already the one working on it.
Yeah, AMD can leverage MS and the Xbox / PS5 to get their solution into as many titles as possible. That's if they have a solution. Hopefully FidelityFX as it stands isn't the best they have in RDNA2.
 
Yeah, AMD can leverage MS and the Xbox / PS5 to get their solution into as many titles as possible. That's if they have a solution. Hopefully FidelityFX as it stands isn't the best they have in RDNA2.
Well, if this solution uses an NN for reconstruction, it will likely run faster on GPUs with TCs - which AMD lacks at the moment. So why should they push this into as many titles as possible?
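To put some made-up-but-plausible numbers on the TC point (both throughput figures below are assumptions for illustration, not specs of any real card):

```python
# Same hypothetical network, costed at shader FP16 rate vs tensor-core FP16 rate.

network_gflop = 100.0       # assumed FLOPs per frame for a reconstruction network

rates_tflops = {
    "shader FP16": 25.0,    # ballpark rate for a big GPU without tensor cores
    "tensor FP16": 100.0,   # ballpark tensor-core rate on comparable silicon
}

for name, tflops in rates_tflops.items():
    # 1 TFLOP/s = 1 GFLOP/ms, so GFLOP / TFLOPs gives milliseconds directly.
    ms = network_gflop / tflops
    print(f"{name}: ~{ms:.1f} ms per frame")
```

A 3-4x gap per frame is the kind of thing that decides whether the pass costs 1 ms or 4 ms - both usable, but one clearly favours the hardware with TCs.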
 
One of Digital Foundry's recent DLSS articles quoted Nvidia stating that there were feature limitations of DLSS 1.9 (shader based) vs DLSS 2.0 (tensor based), and that the quality on shaders basically couldn't scale upwards beyond 1.9's limits. May have just been marketing speak, of course.

I'm on a phone now so I can't search for the quote, but it may have been in the Control article.
That's entirely possible.
 
Yeah, AMD can leverage MS and the Xbox / PS5 to get their solution into as many titles as possible. That's if they have a solution. Hopefully FidelityFX as it stands isn't the best they have in RDNA2.

Presumably, if MS comes up with their own models, they'll be restricted to the Xbox and, if AMD are lucky, to DX on the PC. Sony will have to sort themselves out and may be at a disadvantage in that respect.
 
I could see MS using Azure to train models. Anything where they can say they leveraged the power of Azure, they see benefit in, especially if they can tag the words AI or Machine Learning onto it.
The models could possibly be useful for non-game applications too.

I could also see them using a trained upscaler in the cloud to deliver games. If you can get better performance from the hardware in the cloud through upscaling, you might be able to save on energy and heat, stretch the hardware further, etc. Video compression may hide some of the artifacts produced by upscaling.
 
Well, if this solution uses an NN for reconstruction, it will likely run faster on GPUs with TCs - which AMD lacks at the moment. So why should they push this into as many titles as possible?

Because not having it vs having a slower implementation of it are different ball games.

We don't know what AMD has in RDNA 2 for ML resolution scaling. It could be 20% as good as Nvidia's, or 80%, or maybe even 120%. But if I'm AMD I would take any of them over 0%, because at the end of the day Nvidia is going to push for more support from more devs anyway.
 
Presumably, if MS comes up with their own models, they'll be restricted to the Xbox and, if AMD are lucky, to DX on the PC. Sony will have to sort themselves out and may be at a disadvantage in that respect.
For starters, they wouldn't restrict it to Xbox; the DirectML implementation would work on all DX12U cards.

It would be more performant on tensor cores, but that doesn't negate the benefits on AMD or Xbox.
Could even be part of PlayFab (not sure if that's the name).
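Something in the spirit of ONNX Runtime's DirectML backend already works that way today - a minimal sketch, assuming a hypothetical upscaler.onnx model (the file name, shapes, and 2x factor are placeholders; needs the onnxruntime-directml package):

```python
import numpy as np
import onnxruntime as ort

# The DirectML execution provider targets any DX12-capable GPU, vendor-agnostic;
# it will be faster on tensor-core hardware but still runs elsewhere.
session = ort.InferenceSession(
    "upscaler.onnx",                    # hypothetical reconstruction model
    providers=["DmlExecutionProvider",  # DirectML on any DX12U card
               "CPUExecutionProvider"], # fallback if no suitable GPU
)

# Stand-in for a rendered 1080p frame in NCHW float32 layout.
frame = np.random.rand(1, 3, 1080, 1920).astype(np.float32)

input_name = session.get_inputs()[0].name
output = session.run(None, {input_name: frame})[0]
print(output.shape)  # e.g. (1, 3, 2160, 3840) if the model does 2x upscaling
```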
 
Because not having it vs having a slower implementation of it are different ball games.

We don't know what AMD has in RDNA 2 for ML resolution scaling. It could be 20% as good as Nvidia's, or 80%, or maybe even 120%. But if I'm AMD I would take any of them over 0%, because at the end of the day Nvidia is going to push for more support from more devs anyway.
Not supporting something means that there can be no direct comparison. You benchmark in the common environment. If one card supports AI upscaling and the other doesn't, then there can be no direct comparison between them in AI upscaling - only upscaling vs native, where the IQ is different and the results are mostly down to personal taste.

When you support something which your competition also supports, you get into direct comparison territory. And still being slower there won't do you much good, even if your GPUs provide some performance benefit in the same mode - just a smaller one than your competition's, for example.

I dunno, I don't see much reason for AMD to push something which will mostly benefit NV h/w into all console games.
 
Yeah, TAA looks sharper here, but I'd gladly trade it for the performance improvement.

Probably, but I'd be interested to see how it looks compared to the rendered resolution needed to get that performance improvement. This isn't doubling performance by any means; if you can get similar quality/performance by, say, dropping 4K down to 1800p, then DLSS doesn't serve much of a purpose in this title.

I mean, the main reason DLSS 2.0 is so lauded is that while it can produce some noticeable artifacts, on the whole it looks better than native res with TAA (at worst you could say it's roughly equal), while also delivering a big performance boost. Tests like the ones Hardware Unboxed undertook with DLSS 1.0/1.9, comparing it to just a lower res with standard scaling plus a sharpening filter, were relevant at the time: early DLSS versions were obviously inferior to native res, so you would want to see how much you were actually losing/gaining relative to the scaling that any game can do. That type of comparison hasn't been necessary with DLSS 2.0, as you can clearly see in all titles so far that, for the most part, it's superior to TAA - sometimes significantly.
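For reference, the lower-res-plus-sharpening baseline in those tests is essentially this (a sketch with Pillow; the file names and the 75% scale are placeholders, and the exact scaler/sharpener Hardware Unboxed used may differ):

```python
from PIL import Image, ImageFilter

native = Image.open("frame_4k.png")  # hypothetical native 2160p capture

# Simulate rendering at ~75% resolution (1620p from 2160p)...
low = native.resize((native.width * 3 // 4, native.height * 3 // 4),
                    Image.BILINEAR)

# ...then apply a standard upscale back to native plus a sharpening filter,
# the kind of pipeline any GPU/driver can do without ML.
upscaled = low.resize(native.size, Image.BICUBIC)
sharpened = upscaled.filter(ImageFilter.UnsharpMask(radius=2, percent=80))
sharpened.save("baseline_1620p_sharpened.png")  # compare against the DLSS shot
```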

This implementation delivers neither a huge performance boost nor any quality improvement over TAA, at least judging from those shots. Maybe other shots will show a difference.
 
Not supporting something means that there can be no direct comparison. You benchmark in the common environment. If one card supports AI upscaling and the other doesn't, then there can be no direct comparison between them in AI upscaling - only upscaling vs native, where the IQ is different and the results are mostly down to personal taste.

When you support something which your competition also supports, you get into direct comparison territory. And still being slower there won't do you much good, even if your GPUs provide some performance benefit in the same mode - just a smaller one than your competition's, for example.

I dunno, I don't see much reason for AMD to push something which will mostly benefit NV h/w into all console games.

Except that's not what happens. DF will do a segment with DLSS, show the differences in performance, and claim IQ is almost as good or in some instances better than native res. Other sites will have benchmarks showing the differences and say: hey, maybe this game isn't playable on either card at 4K 60 fps native, but on Nvidia you can turn on DLSS, make it look just as good, and now it's playable!

So the comparison won't be equal at any point for AMD.
 
Yeah, TAA looks sharper here, but I'd gladly trade it for the performance improvement.
One ComputerBase forum user stated that DLSS looked slightly "muddier" when not using the latest driver. Once they installed the latest driver, 452.06, there was a definite improvement.
They still noticed slight artifacts at times (haloing), though more detailed information on this should be forthcoming from review sites' analysis.
I had not installed the latest driver (I was still on 451.67). I hadn't swapped the screenshots.
The screens above show the difference between the in-game settings "DLSS" vs "TAA and FidelityFX Sharpening".
After installing 452.06 I notice a clear improvement with DLSS.
 