Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

And who's going to do all the training for each game, or come up with something similar to DLSS 2.0 that integrates and works for potentially all games? AMD? They don't exactly have the same resources as Nvidia, though they're usually far more open with their technologies.
 
Sort of a heads up: in the DirectML demos where they upscale Forza Horizon 3 from 1080p to 4K, Nvidia provided those models for the sake of the demo.

I very much doubt Nvidia is going to start handing out their DLSS models, at least not freely.
I highly doubt Nvidia will have an AI model for every new game. They haven't in the past 20 months. Are they going to come up with more money to help game developers with these models?

Or will game devs just use something like DirectML and let Nvidia and AMD figure DX12 out themselves?


ed: I meant to quote Malo.
 
F1 2020 gained DLSS support via a patch.


https://wccftech.com/f1-2020-nvidia-dlss-support/
 
I highly doubt Nvidia will have an AI model for every new game. They haven't in the past 20 months. Are they going to come up with more money to help game developers with these models?

Or will game devs just use something like DirectML and let Nvidia and AMD figure DX12 out themselves?
I'm not sure you understand how it works. Nvidia no longer train per-game models, as their motion-based image reconstruction model is now seemingly advanced enough to be game-agnostic. That tech is proprietary and Nvidia use it as a selling point for their RTX GPUs, taking advantage of the Tensor cores to accelerate it.

Game devs can't simply "use" DirectML to achieve the same thing. There are no models available for them to utilize; Nvidia aren't about to release their DLSS model for everyone to use. Could each game dev hire an AI engineer to train a model for their game and run it through DirectML on any GPU drivers that support it? Sure. I can't see that happening much though, outside of some tech experiments by large publishers.
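To give a rough idea of what that would look like in practice: something like the sketch below, where the studio supplies its own trained model and runs it through ONNX Runtime's DirectML execution provider. This is only an illustration; the model file and shapes are made up, and a real game integration would sit on the native D3D12/DirectML path rather than Python.

Code:
# Hypothetical sketch: a developer-supplied upscaling model run through
# ONNX Runtime's DirectML execution provider (works on any D3D12-capable GPU).
# "upscaler.onnx" is a placeholder; the studio would have to train it themselves.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "upscaler.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # DML first, CPU fallback
)

# A dummy 1080p frame (NCHW, float32) standing in for the game's render target.
low_res = np.random.rand(1, 3, 1080, 1920).astype(np.float32)

input_name = session.get_inputs()[0].name
(high_res,) = session.run(None, {input_name: low_res})
print(high_res.shape)  # e.g. (1, 3, 2160, 3840) if the model does a 2x upscale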
 
There are no models available for them to utilize; Nvidia aren't about to release their DLSS model for everyone to use.
Even if we entertain the idea that they will release the model for everyone to use - or better yet just port DLSS from NGX to DML - you would still very much need a GPU with tensor cores to run DLSS.
 
Even if we entertain the idea that they will release the model for everyone to use - or better yet just port DLSS from NGX to DML - you would still very much need a GPU with tensor cores to run DLSS.
Don't the Tensor cores simply accelerate the matrix math for the model? It could still be done on compute cores, albeit slower, possibly completely negating the performance benefit of DLSS in the first place.
 
I thought you only need hardware with fast enough throughput for the relevant instructions (whatever they may be, such as INT8, FP16 or FP32).
 
Don't the Tensor cores simply accelerate the matrix math for the model? It could still be done on compute cores, albeit slower, possibly completely negating the performance benefit of DLSS in the first place.
Sure, but what would be the point of that? I mean, it could be kinda useful in games where DLSS looks better than native TAA, but otherwise it seems pretty useless without the performance benefit.
 
Sure, but what would be the point of that? I mean, it could be kinda useful in games where DLSS looks better than native TAA, but otherwise it seems pretty useless without the performance benefit.
Almost zero point; I was just clarifying the requirements. We also don't have any way to compare in a real-world situation with games. It could still net some benefit depending on the architecture design.
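For what it's worth, the underlying work is just convolutions and matrix multiplies, so you can see the precision point with a toy sketch like the one below (assumptions: PyTorch, a made-up two-layer net that has nothing to do with the actual DLSS model). On hardware with fast FP16 paths the half-precision pass speeds up; without them you're stuck near FP32 speed, which is the whole performance question in a nutshell.

Code:
# Toy illustration only: the model's work is plain conv / matrix math, so it can
# run on general compute units; Tensor cores (or fast FP16/INT8 paths) just speed it up.
import time
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
).eval()

frame = torch.rand(1, 3, 1080, 1920)
device = "cuda" if torch.cuda.is_available() else "cpu"
net, frame = net.to(device), frame.to(device)

dtypes = [torch.float32] + ([torch.float16] if device == "cuda" else [])
with torch.no_grad():
    for dtype in dtypes:
        n, f = net.to(dtype), frame.to(dtype)
        _ = n(f)                                # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        _ = n(f)
        if device == "cuda":
            torch.cuda.synchronize()
        print(dtype, f"{(time.time() - start) * 1000:.1f} ms")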
 
I could see MS using Azure to train models.
They see benefit in anything where they can say they leveraged the power of Azure, especially if you can tag the words AI or machine learning onto it.
The models could possibly be useful for non-game applications also.
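As a rough idea of what "training models" actually means here: pairs of low-res and high-res frames pushed through a network for a very long time, which is exactly the kind of bulk GPU work a cloud like Azure is built for. The toy network and random "frames" below are stand-ins of my own, nothing like a production upscaler.

Code:
# Loose sketch of an upscaler training loop (toy network, fake data).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3 * 4, 3, padding=1),     # 2x upscale via pixel shuffle
    nn.PixelShuffle(2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):                      # a real run would take days on a GPU farm
    high_res = torch.rand(8, 3, 128, 128)    # would be patches from native-res renders
    low_res = F.interpolate(high_res, scale_factor=0.5,
                            mode="bilinear", align_corners=False)
    loss = F.l1_loss(model(low_res), high_res)
    opt.zero_grad()
    loss.backward()
    opt.step()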
 
Even if we entertain the idea that they will release the model for everyone to use - or better yet just port DLSS from NGX to DML - you would still very much need a GPU with tensor cores to run DLSS.
There is more than sufficient compute to run the models very well.

You're not going to get breakneck speeds, but there is plenty of compute available to support 30 fps and 60 fps.
 
There is more than sufficient compute to run the models very well.

You're not going to get breakneck speeds, but there is plenty of compute available to support 30 fps and 60 fps.


Even if that's true, I don't think AMD would want a solution where they're a lot slower than Nvidia because they lack "dedicated" units.

In my (not educated) opinion, the "solution" would be something that doesn't need AI models, because I don't see AMD having the resources to maintain that... Maybe an improved checkerboard rendering solution (with dedicated hardware like on the PS4 Pro) could help a lot?
 
Even if that's true, I don't think AMD would want a solution where they're a lot slower than Nvidia because they lack "dedicated" units.

In my (not educated) opinion, the "solution" would be something that doesn't need AI models, because I don't see AMD having the resources to maintain that... Maybe an improved checkerboard rendering solution (with dedicated hardware like on the PS4 Pro) could help a lot?
Well, technically the advantage of building a (good) AI solution is that you can apply it everywhere. And as you get better at building AI models, the quality and performance will continue to improve. All AMD would have to do, once the models are working well and in place, is provide the hardware to run them even faster.

The other advantage of AI models is that we're quickly approaching compute and bandwidth limits, so they're going to become an effective way to get solid results as visuals keep progressing.
 
Well, technically the advantage of building a (good) AI solution is that you can apply it everywhere. And as you get better at building AI models, the quality and performance will continue to improve. All AMD would have to do, once the models are working well and in place, is provide the hardware to run them even faster.

On paper you're right, but that's a lot that needs to happen. If Nvidia isn't fully there yet, even though they work a lot on this, I don't see AMD getting there soon.
 