Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

I think that what Nv should do instead is just port DLSS to DirectML. This should make it compatible with any DML-compatible GPU out there while keeping the performance advantage on those with dedicated ML h/w.
Not sure that current DML is able to handle such a port, though.

Nvidia will only do this if competition forces their hand. Or some independent 3rd party will train their own network and make it broadly available via DirectML, at which point DLSS will actually be dead. Lots of ifs though.
 
Any doubts that one or the other will eventually happen? Why wait if it's basically a given? The only reason I can think of for them not to do such a port is if they don't really want to support DLSS in the long run - a similar 3rd party solution would still use their ML h/w, which is likely all they want.
 

I think that depends on how hard it is to train for DLSS, or, more accurately, how quickly the training converges on something that works across different games.
If it's hard, then it's probably unlikely to happen, as training is expensive. On the other hand, if it converges rather quickly, then we should already be seeing some free AI model trained by someone else.
Note that there are many super-resolution training models out there, but most are based on traditional video (e.g. trying to upscale 480p to 1080p, or interpolating between two frames to make motion smoother). One of the advantages of 3D rendering is that there is more data available for use. For example, in 3D rendering we generally have the Z value, object ID, true motion vectors, etc. These are rarely available in a traditional movie. When your model takes advantage of more inputs, it tends to be more difficult to train, but it's also more likely to give you better results.
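
To make that concrete, here's a minimal sketch (hypothetical resolutions and channel layout, not any actual DLSS input format) of the difference in what the two kinds of models get to see:

```python
import numpy as np

# Hypothetical low-resolution input, e.g. 1080p being reconstructed to 4K.
H, W = 1080, 1920

# A video-only super-resolution model sees just the RGB frame:
rgb = np.zeros((H, W, 3), dtype=np.float32)              # color
video_input = rgb                                         # 3 channels total

# A model trained on rendered frames can also be fed per-pixel data
# the engine already has "for free":
depth          = np.zeros((H, W, 1), dtype=np.float32)    # Z value
motion_vectors = np.zeros((H, W, 2), dtype=np.float32)    # true per-pixel motion (x, y)
object_id      = np.zeros((H, W, 1), dtype=np.float32)    # object/material ID
prev_output    = np.zeros((H, W, 3), dtype=np.float32)    # reprojected history frame

render_input = np.concatenate(
    [rgb, depth, motion_vectors, object_id, prev_output], axis=-1
)                                                         # 10 channels total

print(video_input.shape, render_input.shape)  # (1080, 1920, 3) (1080, 1920, 10)
```

The extra channels give the network more to disambiguate edges and motion with, but they also enlarge the input space it has to learn, which is exactly the training-cost trade-off above.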
 
If it's hard, then it's probably unlikely to happen, as training is expensive.
I'd imagine that this cost will go down over time too, at least if we're aiming at a similar result.
It's possible of course that over the same time a more advanced NN will become feasible which will give even better results but require more expensive training (aka DLSS 3+), but this may already be in the zone of diminishing returns visually. And even in this case, making the old version compatible with an industry-standard API seems like a good idea if they want better adoption.
 
I can't see any third party doing this on their own. Let's not forget algorithms can have anywhere from a couple of thousand to millions of optimization tuning variables. At this point no one really knows what is involved in Nvidia's "all for one" algorithms, but the speed at which indie developers apply DLSS to their games seems to indicate there's minimal delay in using it. I recall one recent developer stating they implemented DLSS in their game over one weekend.
 

There is general academic interest in the field of ML and image processing. Nvidia has a head start but one day some comp sci grad student will match DLSS. It’s only a matter of time as training resources become more widely and more cheaply available.
 


But maybe Nvidia isn't sitting around doing nothing and is still working on improving DLSS too?
 
Just because you started ahead doesn't mean you will stay ahead, even if you keep running.
As soon as the open technology matches or even gets close enough to the proprietary one, it's only a matter of time before the proprietary one loses the fight.
 
Yeah, but as I've said, would it even matter to them if the "open" technology still requires, or runs better on, their h/w? I can imagine that it wouldn't.
 
Did anyone say it would matter? Of course the "open technology" in this context wouldn't require one brand's hardware; if it did, it would be just another proprietary tech (again, in this context), regardless of who made it.
Before we have any competitors out there and know exactly how they work, it's too early to even make semi-educated guesses at how fast any given hardware would run them, and until we do, there's nothing in NV or AMD hardware that should make one think it would run better on X.
 
Sure there is, and you know exactly what.
 
Just because you started ahead doesn't mean you will stay ahead, even if you keep running.
As soon as the open technology matches or even gets close enough to the proprietary one, it's only a matter of time before the proprietary one loses the fight.

That's a biiig condition. In a lot of spaces proprietary is still winning because of this; competitors can't come close with open tech...
 
Seems DLSS can essentially be dropped in as a replacement for UE's built-in temporal upscaler. Generic APIs for the win. This should hopefully enable developers to easily offer both DLSS and TAAU as upscaling options and enable real head-to-head comparisons.

DLSS is a temporal upscaler that takes advantage of Nvidia hardware, so it would be replacing TAA. The way we ship it in Fortnite is thanks to 4.26's ITemporalUpscaler interface, which can be overridden.
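
That override pattern is easy to picture even without UE's actual headers; below is a generic stand-in (names, signatures and the nearest-neighbour placeholder are made up for illustration, not Epic's ITemporalUpscaler API):

```python
import numpy as np
from abc import ABC, abstractmethod

class TemporalUpscaler(ABC):
    """Engine-side hook: anything that turns a low-res frame (plus aux data)
    into a high-res frame can be plugged in here."""

    @abstractmethod
    def upscale(self, color, depth, motion_vectors, output_size):
        ...

class BuiltInTAAU(TemporalUpscaler):
    """Stand-in for the engine's own shader-based temporal upscaler."""
    def upscale(self, color, depth, motion_vectors, output_size):
        # Placeholder: nearest-neighbour resize instead of a real TAAU pass.
        out_h, out_w = output_size
        ys = np.arange(out_h) * color.shape[0] // out_h
        xs = np.arange(out_w) * color.shape[1] // out_w
        return color[ys][:, xs]

class VendorMLUpscaler(TemporalUpscaler):
    """Stand-in for a vendor plugin (e.g. a DLSS wrapper) overriding the hook."""
    def upscale(self, color, depth, motion_vectors, output_size):
        # A real plugin would hand these inputs to the vendor runtime instead.
        return BuiltInTAAU().upscale(color, depth, motion_vectors, output_size)

def render_frame(upscaler: TemporalUpscaler):
    # 50% screen percentage inputs for a 1080p output target.
    color = np.random.rand(540, 960, 3).astype(np.float32)
    depth = np.random.rand(540, 960, 1).astype(np.float32)
    mv    = np.zeros((540, 960, 2), dtype=np.float32)
    return upscaler.upscale(color, depth, mv, output_size=(1080, 1920))

# Swapping the registered implementation is all it takes to A/B the two:
print(render_frame(BuiltInTAAU()).shape)       # (1080, 1920, 3)
print(render_frame(VendorMLUpscaler()).shape)  # (1080, 1920, 3)
```

Whichever implementation is registered gets the same inputs, which is what makes the apples-to-apples comparisons mentioned above possible.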

I thought this was interesting too. There's quite a bit of hand-tweaking required for Unreal's shader-based approach. Presumably the goal is to arrive at an algo that works across all games and doesn't have to be customized for each title.

Well I'm glad it works on your content, because I've not been particularly happy with ScreenPercentage<50 on the content I've been testing with. There are some known shader permutations I need to implement when ScreenPercentage<50, but that hasn't been much of a focus lately, mostly because on shading changes like a blinking light the fundamentally low input res can get really rough. So my focus has been more on making it work on next-gen consoles, and on improving the known quality problems and stability in the ScreenPercentage [50;100] range.
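
As a reminder of what those ScreenPercentage numbers mean for the upscaler's input: the percentage scales each axis, so the pixel count falls with its square. A quick back-of-the-envelope check, assuming a 3840x2160 output target (my example, not something stated above):

```python
def input_resolution(output_w, output_h, screen_percentage):
    """Internal render resolution for a given per-axis screen percentage."""
    scale = screen_percentage / 100.0
    return round(output_w * scale), round(output_h * scale)

for sp in (100, 75, 50, 33):
    w, h = input_resolution(3840, 2160, sp)
    print(f"{sp:>3}% -> {w}x{h} ({w * h / (3840 * 2160):.0%} of the output pixels)")
# 100% -> 3840x2160 (100% of the output pixels)
#  75% -> 2880x1620 (56% of the output pixels)
#  50% -> 1920x1080 (25% of the output pixels)
#  33% -> 1267x713 (11% of the output pixels)
```

Below 50% the reconstruction is working from barely a tenth to a quarter of the output pixel count each frame, which lines up with the comment about the low input res getting rough on sudden shading changes.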
 