How about platform-agnostic DirectML...?
https://lordsofgaming.net/2020/06/xbox-series-x-directml-a-next-generation-game-changer/
Sort of a heads up: in the DirectML demo where they upscale Forza Horizon 3 from 1080p to 4K, Nvidia provided those models for the sake of the demo.
How about learning to understand the difference between an API and an application that uses the API?
DirectML is also hardly "platform agnostic", being Microsoft proprietary technology.
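To make the API-versus-application distinction concrete, here is a minimal, Windows-only C++ sketch (untested here, error handling trimmed): DirectML hands an application a device and low-level operators such as convolutions and GEMM, but the model that runs on top of them is something the application has to bring itself.

```cpp
// Minimal DirectML bring-up sketch. Link against d3d12.lib and directml.lib.
#include <d3d12.h>
#include <directml.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a D3D12 device on the default adapter.
    ComPtr<ID3D12Device> d3d12Device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&d3d12Device)))) {
        std::puts("no D3D12 device available");
        return 1;
    }

    // Create a DirectML device on top of it.
    ComPtr<IDMLDevice> dmlDevice;
    if (FAILED(DMLCreateDevice(d3d12Device.Get(), DML_CREATE_DEVICE_FLAG_NONE,
                               IID_PPV_ARGS(&dmlDevice)))) {
        std::puts("DirectML not available");
        return 1;
    }

    // From here the API offers building blocks (convolution, GEMM,
    // element-wise ops, ...). Which network you assemble from them --
    // e.g. a super-resolution model -- is entirely the application's job.
    std::puts("DirectML device created");
    return 0;
}
```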
I highly doubt Nvidia will have an AI model for every new game. They haven't in the past 20 months. Are they going to come up with more money to help game developers on these models? Or will game devs just use something like DirectML and let Nvidia and AMD figure DX12 out themselves?
I very much doubt Nvidia is going to start handing out their DLSS ones, at least not freely.
I'm not sure you understand how it works. Nvidia no longer train a model per game; their AI model for motion-based image reconstruction is now seemingly advanced enough to be game agnostic. That tech is proprietary, and Nvidia use it as a selling point for their RTX GPUs, taking advantage of the Tensor cores to accelerate it.
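For context, here's a hedged sketch of what "motion-based image reconstruction" means in general. This is the family of techniques DLSS 2.0 belongs to, not Nvidia's actual algorithm; the names and the fixed blend factor are purely illustrative. The learned part in DLSS effectively replaces the fixed blend with per-pixel decisions about how much history to trust.

```cpp
#include <algorithm>  // std::clamp (C++17)

struct Vec2 { float x, y; };

// Reconstruct one output pixel from the current jittered low-res sample and
// the previous frame's accumulated full-res output ("history").
float reconstruct_pixel(float current_sample,   // this frame's sample
                        const float* history,   // last frame's output, width*height
                        int width, int height,
                        int px, int py,         // output pixel coordinates
                        Vec2 motion) {          // per-pixel motion vector, in pixels
    const float blend = 0.1f;  // illustrative fixed blend factor
    // Reproject: where was this surface point in the previous frame?
    int hx = std::clamp(px - static_cast<int>(motion.x), 0, width - 1);
    int hy = std::clamp(py - static_cast<int>(motion.y), 0, height - 1);
    float history_sample = history[hy * width + hx];
    // Accumulate: mostly history, a little new sample. The inputs are just
    // color and motion vectors, which is why nothing here is per-game.
    return history_sample * (1.0f - blend) + current_sample * blend;
}
```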
There's no models available for them to utilize; Nvidia aren't about to release their DLSS model for everyone to use.
Even if we entertain the idea that they will release the model for everyone to use - or better yet, just port DLSS from NGX to DML - you would still very much need a GPU with tensor cores to run DLSS.
Don't the Tensor cores simply accelerate the matrix lookups for the model? It could still be done on compute cores, albeit slower, possibly completely negating the performance benefit of DLSS in the first place.
Sure. But what would be the point of that? I mean, it could be kinda useful in games where DLSS looks better than TAA at native resolution, but otherwise it seems pretty useless without the performance benefit.
Almost zero point, just clarifying requirements. We also don't have any way to compare in a real-world situation with games. It could still net some benefit depending on the architecture design.
Some independent benches here btw: https://www.guru3d.com/articles_pages/f1_2020_pc_graphics_performance_benchmark_review,8.html
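On what the Tensor cores actually accelerate: the bulk of a network's inference cost is matrix multiplication (convolutions lower to it too), and the generic-compute fallback is essentially the loop below, one multiply-add at a time. A tensor core instead consumes a small FP16 matrix tile per instruction, which is where the throughput gap comes from. Illustrative C++, not any vendor's implementation:

```cpp
#include <vector>
#include <cstddef>

// Naive GEMM: C = A * B, with A (m x k), B (k x n), C (m x n).
// Generic compute units grind through this one fused multiply-add at a
// time; a tensor core retires a whole small tile (e.g. 4x4 FP16) per
// instruction, hence the large speedup for the same math.
void gemm(const std::vector<float>& A, const std::vector<float>& B,
          std::vector<float>& C,
          std::size_t m, std::size_t k, std::size_t n) {
    for (std::size_t i = 0; i < m; ++i)
        for (std::size_t j = 0; j < n; ++j) {
            float acc = 0.0f;
            for (std::size_t p = 0; p < k; ++p)
                acc += A[i * k + p] * B[p * n + j];  // one FMA per step
            C[i * n + j] = acc;
        }
}
```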
There is more than sufficient compute to run the models very well.
You're not going to get to breakneck speeds, but there is plenty of compute available to support 30fps and 60fps.
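Rough budget math behind that claim, with every number hypothetical and chosen only to show the arithmetic:

```cpp
#include <cstdio>

int main() {
    const double tflops = 10.0;           // hypothetical sustained FP32 rate
    const double gflops_per_frame = 50.0; // hypothetical network cost per frame

    // 1 TFLOPS = 1000 GFLOP/s = 1 GFLOP per millisecond, so the TFLOPS
    // figure doubles as GFLOP-per-ms.
    const double cost_ms = gflops_per_frame / tflops;  // = 5.0 ms ideal case

    std::printf("ideal-case inference cost: %.1f ms\n", cost_ms);
    std::printf("frame budget: 16.7 ms at 60fps, 33.3 ms at 30fps\n");
    // 5 ms fits either budget on paper. Real utilization sits well below
    // peak, so this is optimistic -- but it shows why "enough for 30/60fps,
    // not breakneck speeds" is plausible on generic compute alone.
    return 0;
}
```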
Even if it's true, I don't think AMD would want a solution where they're a lot slower than Nvidia because they lack "dedicated" units.
IM (not educated) O, the "solution" would be something which doesn't need AI models, because I don't see AMD having the resources to maintain that... Maybe an improved checkerboard rendering solution (with dedicated hardware like on the PS4 Pro) could help a lot?
Well, technically the advantage of building an AI solution (a good one) is that you can apply it everywhere. And as you get better at building AI models, the quality and performance will continue to improve. All AMD would have to do, once the models are working well and in place, is simply provide the hardware to run it even faster.
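Since checkerboard rendering came up, here's a hedged sketch of the core idea; the PS4 Pro's hardware-assisted version (ID buffers, dedicated fill logic, temporal reprojection) is considerably more sophisticated. Each frame shades only half the pixels in a checker pattern, alternating which half, and the gaps are filled from neighbors:

```cpp
#include <vector>

// Illustrative checkerboard reconstruction: pixels where (x + y + frame)
// is even were shaded this frame; the rest are filled by averaging the
// shaded left/right neighbors. Real implementations also reproject the
// previous frame and use ID buffers to avoid ghosting at object edges.
void fill_checkerboard(std::vector<float>& img, int w, int h, int frame) {
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if ((x + y + frame) % 2 == 0) continue;  // shaded this frame
            float sum = 0.0f;
            int n = 0;
            if (x > 0)     { sum += img[y * w + x - 1]; ++n; }  // left neighbor
            if (x + 1 < w) { sum += img[y * w + x + 1]; ++n; }  // right neighbor
            img[y * w + x] = n ? sum / n : 0.0f;
        }
    }
}
```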