If we were to approach 'worse' empirically, we'd measure the percentage deviation from ground truth. A quick eyeballing suggests DLSS is faring far worse there: it's losing all the high-frequency detail. It would appear the AI can make sense of what information is needed, but can't restore fine detail. Metro shows notable blurring; Battlefield V shows corrupted detailing and softness. It's interesting that the two games respond differently, though, and that shows the AI approach requires per-game training. Maybe some day it'll get good?
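To make that concrete, here's a minimal sketch of one way such a measurement could look; the function name and the choice of mean absolute error over the full [0, 1] range are my own assumptions for illustration, not anything from NVIDIA's materials:

import numpy as np

def percent_deviation(image: np.ndarray, reference: np.ndarray) -> float:
    # Mean absolute per-pixel error against a ground-truth render,
    # expressed as a percentage of the full [0, 1] dynamic range.
    # Both inputs are float images of identical shape, e.g. H x W x 3
    # (say, a DLSS frame vs. a native-resolution capture).
    return float(np.mean(np.abs(image - reference)) * 100.0)

On a metric like this, lost high-frequency detail (blurring) registers directly as deviation from the native render, which is exactly what the eyeball test above suggests.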
The better it gets, the more training it requires; and the larger the neural net grows, the more hardware time it costs at runtime.
Neural net image reconstruction for games has interesting applications. The idea of reducing motion blur artifacts is neat, if perhaps not the most noticeable thing in the world.
Denoising raytracing is also interesting, but there temporal reconstruction might win as well. More interesting still is temporal reconstruction plus deep learning, for both the primary image and raytracing. But the error in the image might compound too much and create weird artifacts, just like variable rate shading does (see the Wolfenstein examples for weird crawling textures).
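For context, the simplest form of temporal reconstruction is just an exponential blend of the current frame into a reprojected history buffer. This sketch (numpy for clarity, with alpha as an assumed blend factor) also shows why errors can compound:

import numpy as np

def temporal_accumulate(history: np.ndarray, current: np.ndarray,
                        alpha: float = 0.1) -> np.ndarray:
    # One accumulation step: keep most of the (already reprojected)
    # history and blend in a little of the new frame. A small alpha
    # smooths strongly, but any reprojection error baked into
    # `history` only decays as (1 - alpha)**n, so it can linger and
    # smear for many frames -- the compounding-artifact risk above.
    return (1.0 - alpha) * history + alpha * current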
Ultimately deep learning could be very neat for video games, but perhaps in areas other than image reconstruction like DLSS. Controlling and generating animation in realtime is a great application: building animation systems today is an absolute pain, especially getting them to behave correctly across multiple scenarios. That's one of the biggest runtime uses devs are actually interested in for deep learning today. There's also, of course, using deep learning for, you know, game AI. But that can be hard to get to "do the thing you want" today; maybe some day, though.