Which is a prevailing theory among lots of us: that these cores were thrown in for non-gaming reasons and nVidia are looking for reasons to use them. The latest, greatest upscaling could have run on RTX cards that don't have Tensor cores, with that silicon spent instead on more compute, which would be better for upscaling as it's fast enough. They also end the article by stating that Turing's Tensor cores are ready and waiting to be used.
Tensor cores got a bit of a slap-back in justification from this Control algorithm. Of course, if their AI Research Model can be run efficiently on Tensor cores, they might still prove themselves. Although going by the comparison video, one feels that just rendering the particles in a separate pass composited on top would be the best of all worlds and the most efficient use of silicon.
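Nobody outside nVidia knows how Control's pipeline is actually wired up, but the "separate particle pass" idea is just ordinary compositing: render the particles at native resolution into their own layer and alpha-blend them over the already-upscaled base frame. A minimal numpy sketch, with all names and resolutions made up for illustration:

```python
import numpy as np

def composite_particles(upscaled_rgb, particle_rgba):
    """Alpha-over composite: a premultiplied-alpha particle layer on top
    of the already-upscaled base frame. Both are HxWx{3,4} float arrays."""
    alpha = particle_rgba[..., 3:4]  # per-pixel particle coverage
    return particle_rgba[..., :3] + (1.0 - alpha) * upscaled_rgb

# Toy usage: a base frame upscaled to 4K, particles rendered natively at 4K.
base_4k = np.random.rand(2160, 3840, 3).astype(np.float32)  # stand-in for the upscaler output
particles_4k = np.zeros((2160, 3840, 4), dtype=np.float32)  # mostly empty particle layer
final = composite_particles(base_4k, particles_4k)
```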
That's not what's described. The DLSS process runs on the Tensor cores and is not the 'algorithm' being talked of here. DLSS as an ML technique is slow. nVidia found the ML training threw up a new way to reconstruct, but it's too slow to run in realtime as an ML solution. However, the engineers managed to take that new-found knowledge and create a new reconstruction algorithm running on compute*. They're saying those cores are there and capable of the next round of improvements coming to DLSS, which is a more optimized version of their AI Research Model. It's also a way of reassuring people that they won't need a next-gen GPU to handle these improvements when they come. Their AI model uses deep learning to train their image-processing algorithm; the goal is to get the AI model's quality performant enough that it can run on the Tensor cores.
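For what "a reconstruction algorithm running on compute" means in principle: nVidia haven't published the Control algorithm, but the general shape of such techniques is to fold each new low-resolution frame into a full-resolution history buffer (a real implementation would be a compute shader with sample jitter, motion-vector reprojection and history clamping). A deliberately bare-bones numpy sketch of just the accumulation step, nothing more:

```python
import numpy as np

def reconstruct(history, lowres_frame, scale, blend=0.1):
    """One step of a generic temporal reconstruction: upsample the new
    low-res frame and fold it into the full-res history buffer."""
    # Nearest-neighbour upsample of the new frame to output resolution.
    upsampled = lowres_frame.repeat(scale, axis=0).repeat(scale, axis=1)
    # Exponential moving average: old history dominates, new samples
    # gradually refine the image over successive frames.
    return (1.0 - blend) * history + blend * upsampled

# Toy usage: 540p input reconstructed toward 1080p over repeated frames.
history = np.zeros((1080, 1920, 3), dtype=np.float32)
for _ in range(16):
    frame = np.random.rand(540, 960, 3).astype(np.float32)
    history = reconstruct(history, frame, scale=2)
```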
The hope is to improve the NN technique so it can be run directly in game; that's what they term the AI Research Model. One of the reasons it's confusing to follow what's going on is that nVidia are calling the image-processing algorithm 'DLSS' alongside the NN-based DLSS. They showcase DLSS videos of Control that are running an image-processing algorithm rather than a NN, as an example of what their NN-based DLSS will, they hope, be doing in the future.
* Perhaps it's possible to run image processing on Tensor cores, but I've never heard of them used like that.
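For what it's worth, image processing can in principle be expressed as matrix multiplies, which is the one operation Tensor cores accelerate: the standard trick is im2col, where every filter footprint becomes a row of a matrix and the whole convolution collapses into a single GEMM. A toy numpy sketch of the idea (no claim that nVidia does this):

```python
import numpy as np

def conv2d_as_gemm(image, kernel):
    """KxK filter expressed as a single matrix multiply (im2col), the
    standard way image filters get mapped onto GEMM hardware."""
    h, w = image.shape
    k = kernel.shape[0]                 # assume a square kernel
    oh, ow = h - k + 1, w - k + 1       # 'valid' output size
    # Gather every KxK patch into one row: shape (oh*ow, k*k).
    patches = np.stack([
        image[i:i + oh, j:j + ow].ravel()
        for i in range(k) for j in range(k)
    ], axis=1)
    # One GEMM against the flattened kernel applies the whole filter.
    return (patches @ kernel.ravel()).reshape(oh, ow)

img = np.random.rand(8, 8).astype(np.float32)
blur = np.full((3, 3), 1.0 / 9.0, dtype=np.float32)  # box blur
out = conv2d_as_gemm(img, blur)
# Check against a direct sliding-window evaluation.
ref = np.array([[(img[i:i+3, j:j+3] * blur).sum() for j in range(6)]
                for i in range(6)])
assert np.allclose(out, ref)
```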