So basically Nvidia are getting all the plaudits and slaps on the back for all these great new things that have been in real use on consoles/PC for a while then?
Not exactly. This is the first iteration using inferencing for reconstruction in a consumer product at least. It's new.
I think you can view rendering itself as a reconstruction technique. Fundamentally, you're just recreating an artificial image through algorithms. I'm a big fan of reconstruction/checkerboarding as done on consoles. I think the image quality is excellent, and it's not worth spending the resources on "native" resolution. DLSS, I think, can further improve both quality and performance.
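For anyone unfamiliar with how checkerboard rendering works, here's a toy sketch of the core idea: only half the pixels are rendered each frame in a checkerboard pattern, and the missing ones are filled in from their rendered neighbours. This is a deliberately minimal illustration; real console implementations also reuse the previous frame via motion vectors, which this leaves out.

```python
# Toy checkerboard reconstruction: fill unrendered sites (None) by
# averaging their rendered 4-neighbours. Illustrative only -- real
# implementations also use temporal data and motion vectors.

def checkerboard_fill(img):
    """img: 2D list of floats with None at unrendered checkerboard sites."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] is None:
                neigh = [img[y + dy][x + dx]
                         for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= y + dy < h and 0 <= x + dx < w
                         and img[y + dy][x + dx] is not None]
                out[y][x] = sum(neigh) / len(neigh)
    return out

# A 4x4 gradient "rendered" only at sites where (x + y) is even:
full = [[float(x + y) for x in range(4)] for y in range(4)]
sparse = [[full[y][x] if (x + y) % 2 == 0 else None for x in range(4)]
          for y in range(4)]
recon = checkerboard_fill(sparse)
```

On smooth gradients like this the interior pixels reconstruct exactly; edges and high-frequency detail are where the simple average falls down and smarter reconstruction (or a neural network) earns its keep.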
For me, neural network approaches have an extremely high ceiling for image quality. Think about it this way: suppose you were given an image rendered at 720p and told to upscale it in Photoshop, doing the anti-aliasing by hand with a paint brush. You could make that image look better than a native render would (and I do mean better looking, not merely equal to native). Why? Because you know where the jaggies are, what ideal edges are, what textures should look like, etc. It's all based on experience. That's what a neural network can do as well, provided it's trained well enough and is big enough to handle all the varieties of images thrown at it. This might not happen in the first go with DLSS, but I think it will eventually.
The great thing about neural networks for inferencing is that they don't need much precision, so you can run your ops at byte, nibble, or even lower precision, which the AI cores support. You save a lot of compute power and can divert it elsewhere.
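To make the low-precision point concrete, here's a hypothetical sketch of symmetric int8 quantization: store weights and activations as 8-bit integers with one scale factor each, accumulate the dot product in integers, and rescale once at the end. The function names are my own illustration, not any real DLSS or tensor-core API.

```python
# Hypothetical sketch of int8 (byte-precision) inference math.
# Names are illustrative, not a real API.

def quantize_int8(values):
    """Map floats to int8 range [-127, 127] with a single scale factor."""
    scale = max(abs(v) for v in values) / 127.0
    return [round(v / scale) for v in values], scale

def dot_int8(q_a, q_b, scale_a, scale_b):
    """Integer dot product; apply the scales once at the end."""
    acc = sum(a * b for a, b in zip(q_a, q_b))  # integer accumulate
    return acc * scale_a * scale_b

weights = [0.5, -1.27, 0.02, 0.9]
acts = [1.0, 0.25, -0.5, 0.1]

qw, sw = quantize_int8(weights)
qa, sa = quantize_int8(acts)

exact = sum(w * a for w, a in zip(weights, acts))
approx = dot_int8(qw, qa, sw, sa)
```

Despite each value fitting in a single byte, `approx` lands within a fraction of a percent of the full-precision result here, which is the kind of error a well-trained network tolerates easily.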