That is not to say TAA or CR are free of issues. Because real-time rendered scenes are continuously dynamic, naively assuming that previous frames remain valid easily leads to artifacts such as ghosting and lagging. These problems are usually handled with heuristic-based history rectification, which detects and corrects invalid samples from previous frames. That, in turn, comes with issues of its own, such as the reintroduction of temporal instability, blurriness, and Moiré patterns.
One of the most commonly used heuristics is neighborhood clamping, which clamps each reprojected history sample to the minimum and maximum of the neighboring samples in the current frame. This strikes a decent balance against the shortcomings mentioned above, but it cannot prevent a rather significant loss of detail, as you can see below.
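To make the idea concrete, here is a minimal sketch of neighborhood clamping for a single pixel. The names (`Color`, `clampHistory`) and the 3x3 neighborhood are illustrative assumptions, not taken from any particular engine:

```cpp
#include <algorithm>
#include <array>

struct Color { float r, g, b; };

// Clamp the reprojected history sample to the min/max bounding box of the
// current frame's 3x3 neighborhood around the pixel being resolved.
Color clampHistory(const Color& history, const std::array<Color, 9>& neighborhood) {
    Color lo{1e9f, 1e9f, 1e9f};
    Color hi{-1e9f, -1e9f, -1e9f};
    for (const Color& c : neighborhood) {
        lo = {std::min(lo.r, c.r), std::min(lo.g, c.g), std::min(lo.b, c.b)};
        hi = {std::max(hi.r, c.r), std::max(hi.g, c.g), std::max(hi.b, c.b)};
    }
    // Any history color outside this box is treated as invalid and pulled
    // back to the nearest value inside it. This suppresses ghosting, but it
    // also discards legitimate detail that the current frame's sparse
    // samples happen to miss -- the detail loss described above.
    return {std::clamp(history.r, lo.r, hi.r),
            std::clamp(history.g, lo.g, hi.g),
            std::clamp(history.b, lo.b, hi.b)};
}
```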
NVIDIA is solving this issue with a neural network, trained offline on its supercomputer using tens of thousands of extremely high-quality reference images.
Neural networks are simply better suited to a task like this than handcrafted heuristics, as they can learn the optimal strategy for combining samples collected over multiple frames, delivering much higher-quality reconstructions in the end result.
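As a toy, self-contained illustration of that idea (emphatically not NVIDIA's actual architecture), imagine a tiny learned function that maps per-pixel features, such as the history-versus-current color difference and the motion magnitude, to a blend weight, instead of hard-clamping the history. In DLSS the mapping is a deep network whose parameters come from offline training; the parameters below are made-up placeholders:

```cpp
#include <cmath>

// Placeholder "learned" mapping from per-pixel features to a blend weight.
// A real model has millions of trained parameters; these are invented.
float learnedBlendWeight(float colorDiff, float motionMag) {
    const float w0 = -4.0f, w1 = -2.5f, b = 3.0f;
    float z = w0 * colorDiff + w1 * motionMag + b;
    return 1.0f / (1.0f + std::exp(-z));  // sigmoid -> weight in (0, 1)
}

// Resolve by softly weighting history against the current sample, rather
// than rejecting history outright as neighborhood clamping does.
float resolve(float history, float current, float weight) {
    return weight * history + (1.0f - weight) * current;
}
```

The point of the sketch is the design shift: rather than a fixed rule deciding which history samples are invalid, a trained model decides how much to trust each sample, which is what lets it preserve detail a clamp would erase.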
This data-driven approach allows DLSS 2.0 to successfully reconstruct even complex cases like Moiré patterns. The image comparisons below are thoroughly impressive, more often than not surpassing even native-resolution images while performing a 4x upscale from 540p to 1080p. None of this was possible with the previous DLSS model.