Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Then Nvidia should be prepared to take on the long-term burden of sustaining DLSS forever without developers ever returning the favour in the future ...

The best AMD is going to do is release a demo, open source the code, and what happens afterwards is the developers' problem, since AMD practically never updates their GitHub samples. Do people actually think AMD is interested in endlessly chasing down as many developers as possible and offering free support indefinitely on a regular basis? I'm pretty sure that description doesn't match their profile at all, since they don't want to take on long-term commitments outside of their hardware projects or drivers ...
AMD is in both consoles, the developers will always do the work for them as a result. I think anyone expecting some AMD maintained DLSS competitor is going to be disappointed.
 
The long-term problem with DLSS or other similar technology isn't going to be the technical knowledge, it's going to be maintenance. Can DLSS realistically succeed where other experiments with a lone maintainer, like PhysX (deprecated in UE4 and Unity DOTS) or G-Sync, failed?

It doesn’t need to last forever. It just needs to last long enough until the next hotness comes around to maintain their competitive advantage.

Btw, how many maintainers does the Chaos physics engine have?
 
Then Nvidia should be prepared to take on the long-term burden of sustaining DLSS forever without developers ever returning the favour in the future ...
don't both amd and nvidia do this with 'optimized drivers'? I recall reading about how much cost and work it is to constantly revise a new set of drivers to support the latest titles.
 
AMD is in both consoles, the developers will always do the work for them as a result. I think anyone expecting some AMD maintained DLSS competitor is going to be disappointed.

The only thing AMD 'maintains' is their own drivers so they obviously don't plan on giving indefinite support to a hypothetical DLSS alternative ...

It's a hopeless endeavor to bring up a DLSS competitor if no developers have any intention of eventually supporting it themselves with their own hard work. DLSS is hardly going to be sustainable in the long run if we have to keep relying on a single party to keep it up to date ...

It doesn’t need to last forever. It just needs to last long enough until the next hotness comes around to maintain their competitive advantage.

Btw, how many maintainers does the Chaos physics engine have?

What do you think the odds are of a definitively superior solution showing up on the horizon? I think people need to start being more mindful about the maintenance cost of all solutions. It's always been about who is going to continue the work ...

As for the specific number of maintainers of the Chaos physics engine, I wouldn't particularly know, but I imagine Epic Games' engine physics specialists take full responsibility for their new system. Meanwhile, for comparison, PhysX hasn't had a new public release in almost 2 years and contributions are starting to dry up, because Nvidia doesn't want to maintain it forever, so they open sourced it to see if other parties are interested ...

don't both amd and nvidia do this with 'optimized drivers'? I recall reading about how much cost and work it is to constantly revise a new set of drivers to support the latest titles.

At least with drivers, the hardware vendors know it's their absolute responsibility ...

With graphics technology integrations such as DLSS, the lines are blurrier. Nvidia thinks it's their responsibility to provide free support whenever possible. AMD has a different opinion: they give out free code with no support thereafter, so offering DLSS the way Nvidia provides it is out of the question, and in the end we're left with the fact that neither AMD nor the developers want to do the work ...

Maybe Microsoft could pursue an alternative, but I'd imagine it would suck having to do other developers' jobs for them, and they'd vastly prefer it if the developers did their own work ...
 
The only thing AMD 'maintains' is their own drivers so they obviously don't plan on giving indefinite support to a hypothetical DLSS alternative ...

It's a hopeless endeavor to bring up a DLSS competitor if no developers have any intention of eventually supporting it themselves with their own hard work. DLSS is hardly going to be sustainable in the long run if we have to keep relying on a single party to keep it up to date ...
I think it's almost a certainty that AMD just releases open source code for whatever compute based upscale they come up with just like they did with most of FidelityFX. It will be up to devs to implement it with any additional work as their engine requires. I don't expect this to see any major uptake as the majority of devs will just continue to use their own custom methods on consoles and just leave standard monitor/GPU upscaling for the PC version.
 
I think it's almost a certainty that AMD just releases open source code for whatever compute based upscale they come up with just like they did with most of FidelityFX. It will be up to devs to implement it with any additional work as their engine requires. I don't expect this to see any major uptake as the majority of devs will just continue to use their own custom methods on consoles and just leave standard monitor/GPU upscaling for the PC version.

This is literally what it takes to get DLSS into a game. They needed an entire team of machine learning specialists and a supercomputer to train the DLSS model after a game is released. That's tons of resources going down the drain just to get a better hack for image upscaling. Two very high barriers to entry: hiring a dedicated machine learning team and getting access to a supercomputer. Even I don't think the big game publishers are willing to go to those great lengths ... (people frequently criticize the carbon footprint behind crypto mining, but machine learning is arguably getting to be just as bad)

DLSS without any assistance is a nightmare for developers to try and implement/maintain by themselves while TAA is a minor annoyance that can be handled by a graphics programmer along with maybe a technical artist at hand. These conditions make it virtually impossible for DLSS to ever be the "natural outcome", so the only way to distort this outcome is for Nvidia to keep pouring in their own resources, and that will only last as long as they're willing to keep it up. Naturally, DLSS will meet its own demise as a high-maintenance solution and will end up in a dead end eventually ...
 
DLSS without any assistance is a nightmare for developers to try and implement/maintain by themselves while TAA is a minor annoyance that can be handled by a graphics programmer along with maybe a technical artist at hand.
This I agree with but you're overestimating the resources needed to support DLSS going forward. Nv will invest into DLSS because it brings them insane performance advantages. AMD will also invest into something similar. I fully expect Epic and Unity to make their own reconstruction tech down the line too.
 
This was about DLSS1, the whole approach changed with DLSS2.

This I agree with but you're overestimating the resources needed to support DLSS going forward. Nv will invest into DLSS because it brings them insane performance advantages. AMD will also invest into something similar. I fully expect Epic and Unity to make their own reconstruction tech down the line too.

I don't think DLSS 2.0 really changes anything with regard to the barrier of entry. Nvidia still needed a bunch of machine learning experts and they used their own DGX-powered supercomputer to train the DLSS 2.0 model ...

How are big publishers even supposed to come up with something remotely close ?
 
I don't think DLSS 2.0 really changes anything with regard to the barrier of entry. Nvidia still needed a bunch of machine learning experts and they used their own DGX-powered supercomputer to train the DLSS 2.0 model ...

How are big publishers even supposed to come up with something remotely close ?

One AI model for all games is a definite game-changer...
 
Nvidia still needed a bunch of machine learning experts and they used their own DGX-powered supercomputer to train the DLSS 2.0 model ...
Not really, since it is trained once for most cases. Improvements may be needed, but it's less of an issue when there's no pressing need to make the model work at a game's launch.

How are big publishers even supposed to come up with something remotely close ?
Well, I don't see why Microsoft can't come up with something close. Or Epic which I've already mentioned.
Also worth noting that ML for gaming as an active field of research will inevitably lead to ML being used in games anyway so why not use it for resolution reconstruction as well?
 
I don't think DLSS 2.0 really changes anything with regard to the barrier of entry. Nvidia still needed a bunch of machine learning experts and they used their own DGX-powered supercomputer to train the DLSS 2.0 model ...

How are big publishers even supposed to come up with something remotely close ?

Nvidia’s overall business strategy already calls for lots of ML experts and supercomputer hardware. DLSS likely consumes negligible resources versus all the other AI and image reconstruction stuff they’re doing anyway. I think you’re exaggerating the incremental cost.

For the foreseeable future it will be infinitely cheaper to upscale than to produce hardware that can render at full resolution. DLSS will die when better upscaling tech comes around or when displays stagnate and upscaling is no longer required.
 
Nvidia’s overall business strategy already calls for lots of ML experts and supercomputer hardware. DLSS likely consumes negligible resources versus all the other AI and image reconstruction stuff they’re doing anyway. I think you’re exaggerating the incremental cost.

Don't take my word for it. This is literally a first-hand account from one of Nvidia's own employees ...

They needed a team of ML professionals and a supercomputer just to implement DLSS for a couple dozen games over the past few months. Can you imagine how many more resources would be required to do the same for nearly all games from a big publisher like Activision, EA, Take-Two, Ubisoft, etc? (and they aren't going to just share their models with each other willingly either)

For the foreseeable future it will be infinitely cheaper to upscale than to produce hardware that can render at full resolution. DLSS will die when better upscaling tech comes around or when displays stagnate and upscaling is no longer required.

It'll be cheaper to do naive upscaling or other methods of upscaling, but this is not necessarily true for ML upscaling, where no current developer or publisher independently has the same resources as Nvidia does to make it happen ...
 
Don't take my word for it. This is literally a first-hand account from one of Nvidia's own employees ...

They needed a team of ML professionals and a supercomputer just to implement DLSS for a couple dozen games over the past few months. Can you imagine how many more resources would be required to do the same for nearly all games from a big publisher like Activision, EA, Take-Two, Ubisoft, etc? (and they aren't going to just share their models with each other willingly either)

Your argumentation is based on DLSS 1.0 (individual training per game)
Repeat after me:
"One AI model for all games" aka DLSS 2.0

Simple as that.
 
DLSS without any assistance is a nightmare for developers to try and implement/maintain by themselves while TAA is a minor annoyance that can be handled by a graphics programmer along with maybe a technical artist at hand.

DLSS 2.0 and TAA have literally the same requirements (annoyances, as you call them) to be implemented in a game: a bunch of previous frames + motion vectors.
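To make that shared engine-side contract concrete, here's a toy pure-Python sketch of the history reprojection + blend step both techniques depend on. Everything here is hypothetical and simplified to 1D "frames" for illustration; real TAA and DLSS run per-pixel on the GPU, and DLSS 2.0 replaces the fixed blend with a trained network consuming the same inputs:

```python
# Toy sketch of temporal accumulation: the engine supplies the current
# frame, the previous (accumulated) frame, and per-pixel motion vectors.

def reproject(prev_frame, motion):
    """Fetch each pixel's history from where it was last frame."""
    n = len(prev_frame)
    out = []
    for x in range(n):
        src = x - motion[x]              # where this pixel came from
        src = max(0, min(n - 1, src))    # clamp at frame edges
        out.append(prev_frame[src])
    return out

def temporal_accumulate(current, prev_frame, motion, alpha=0.1):
    """Blend new samples into reprojected history.

    TAA stops at a fixed (or heuristic) blend like this; DLSS 2.0
    instead feeds the same inputs to a trained network that decides
    how to combine them.
    """
    history = reproject(prev_frame, motion)
    return [alpha * c + (1 - alpha) * h for c, h in zip(current, history)]

# A static scene (zero motion) gradually converges toward the new samples:
frame = temporal_accumulate([1.0, 1.0], [0.0, 0.0], [0, 0], alpha=0.5)
print(frame)  # [0.5, 0.5]
```

The point of the sketch is that the integration cost for a game is identical in both cases: produce motion vectors and keep history buffers around.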
 
Maybe I'm misreading things, but I thought what Lurkmass was getting at is that we likely won't be getting a DLSS equivalent from individual game developers, and there is a high cost of entry should anyone else try. So it may be some time until AMD/Microsoft/Sony can deliver on their version of it. It's not about what it costs for Nvidia to do refinements or for game developers to use Nvidia's DLSS.
 
Yeah, NVIDIA has a unique position in hardware/software, giving them a head start on ML upscaling.
How big a head start is yet to be seen, but so far it amounts to two generations.
 
DLSS 2.0 and TAA have literally the same requirements (annoyances, as you call them) to be implemented in a game: a bunch of previous frames + motion vectors.

Minus the fact that you likely need an entire team dedicated to training the model, and a supercomputer to go with it (DLSS 2.0 doesn't change this requirement at all) ...

The only common property TAA and DLSS share is that they both rely on temporal stability and motion vectors ...

Maybe I'm misreading things, but I thought what Lurkmass was getting at is that we likely won't be getting a DLSS equivalent from individual game developers, and there is a high cost of entry should anyone else try. So it may be some time until AMD/Microsoft/Sony can deliver on their version of it. It's not about what it costs for Nvidia to do refinements or for game developers to use Nvidia's DLSS.

It's both, and just because you only have to train one model doesn't necessarily mean that you won't have to do a similar amount of training, depending on the situation ...

If a model was only ever trained on a specific set of data, then it can only realistically be relied on to infer under similar conditions. Models are more fragile than we imagine them to be, since they can break under different art designs or rendering technology. If we're faced with two graphically dissimilar games, then training one unified model doesn't provide much advantage over training separate models, because you need roughly the same amount of data and training time in both cases. Is there supposed to be a significant difference between duplicating effort and making twice the amount of edits?
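The distribution-shift worry above can be illustrated with a deliberately tiny example (pure Python, nothing to do with an actual upscaler): a model fit only on one "style" of data can be badly wrong on inputs outside that distribution, even while looking fine on the data it was trained on.

```python
# Fit a linear model on samples of y = x*x drawn only from [0, 1],
# where the curve happens to look nearly linear, then query it far
# outside that range. The in-distribution error is small; the
# out-of-distribution error is large.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# "Training distribution": x in [0, 1]
xs = [i / 10 for i in range(11)]
ys = [x * x for x in xs]
a, b = fit_line(xs, ys)

in_dist_err = abs((a * 0.5 + b) - 0.25)   # inside the training range
shift_err = abs((a * 3.0 + b) - 9.0)      # far outside it
print(in_dist_err < 0.15, shift_err > 5.0)  # True True
```

The analogy is loose, of course; a big network trained on diverse games generalizes far better than a toy line fit, which is exactly the open question with a "one model for all games" approach.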
 