but that doesn't mean you're not introducing aliasing from oversharpening and losing sample reprojection
Never said that.
Aside from that, aliasing from oversharpening and losing samples due to incorrect or missing reprojection are two different and unrelated beasts.
Oversharpening comes from the sharpen filter's settings, which produce ringing at extreme values. The network has nothing to do with sharpening, unless proven otherwise with some real evidence.
In fact, you can take a look at the screens I posted here yesterday - https://imgsli.com/NjE3MDQ/1/2 . As you can hopefully see, there are no signs of sharpening in the screenshots with Native + TAA and DLSS Quality; the level of sharpness is exactly the same for both Unreal's TAA and DLSS.
And that's what I see in most games with DLSS, so, again, the neural network is not responsible for the oversharpening you mentioned; developers are.
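For illustration, here's where the ringing comes from: a plain unsharp mask on a step edge, in a few lines of numpy (the filter and the strength values are illustrative, not any game's actual sharpen pass):

```python
import numpy as np

def unsharp_mask_1d(signal, strength):
    """Classic unsharp mask: output = input + strength * (input - blurred)."""
    blurred = np.convolve(signal, np.ones(3) / 3.0, mode="same")  # cheap low-pass
    return signal + strength * (signal - blurred)

edge = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])  # a clean step edge

print(unsharp_mask_1d(edge, 0.3))  # mild setting: edge contrast boosted slightly
print(unsharp_mask_1d(edge, 3.0))  # extreme setting: values overshoot past 0 and 1
```

The overshoot at the extreme setting is exactly the ringing that reads as halos around edges; it is a function of the filter's strength setting, which developers control.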
Losing samples in reprojection comes from bad motion vectors, missing motion vectors, or missing information in history (occluded regions), and that again has nothing to do with the neural network in DLSS.
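To be concrete about what a lost sample means in accumulation terms, here's a toy sketch (integer motion vectors for simplicity; the function name and layout are mine, not any engine's API):

```python
import numpy as np

def temporal_accumulate(current, history, motion, alpha=0.1):
    """Blend the current frame with history fetched along per-pixel motion vectors.

    `motion` holds integer (dx, dy) offsets per pixel. Where the fetch lands
    off-screen, or the region was occluded last frame so no valid history
    exists, the pixel falls back to the raw current sample; that is the
    "lost sample", and it happens the same way whether the accumulator is a
    hand-written TAA or a neural one.
    """
    h, w = current.shape
    ys, xs = np.indices((h, w))
    prev_x = xs - motion[..., 0]  # where was this pixel last frame?
    prev_y = ys - motion[..., 1]
    valid = (prev_x >= 0) & (prev_x < w) & (prev_y >= 0) & (prev_y < h)
    fetched = history[np.clip(prev_y, 0, h - 1), np.clip(prev_x, 0, w - 1)]
    reprojected = np.where(valid, fetched, current)
    return alpha * current + (1.0 - alpha) * reprojected
```

If `motion` is wrong or absent, the fetch pulls unrelated history, which is exactly the ghosting or sample loss being discussed; the network never enters into it.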
So what are we even arguing about here?
in native Doom Eternal's TAA does a wonderful job of reprojection with incredibly smooth edges
So does DLSS in this game.
So the second option in practice doesn't make much sense, unless you care more about looking better in freeze-frame comparisons than in motion.
It's not as simple as you're saying: the camera in games is never static, and neither are the characters, guns and hands in first-person games, trees, traffic, etc. So all comparisons are in motion.
When you look at something, you don't move your head: for microsaccades to work, you have to fixate on a point, which improves perceived resolution at the focus. Detail in the absence of motion is therefore a priority, because that's how the eyes can appreciate all the details in focus and in view.
There are other elements in the retina that are sensitive to edge resolution (they spot flickering and aliasing), which is why a clever post-processing pipeline should also have prefiltering before TAA (be it morphological anti-aliasing of the input frame, motion blur, or something else) for the cases where temporal accumulation fails. This won't fix flickering on the portion of pixels where accumulation fails, but it will fix aliasing, and motion blur will fix the remaining flickering, if there is any.
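In pipeline terms, the ordering I mean looks roughly like this (a sketch with the passes stubbed out; the function names are placeholders, not any particular engine's):

```python
def morphological_aa(frame):
    return frame  # stands in for MLAA/SMAA-style prefiltering of the raw input

def motion_blur(frame, motion):
    return frame  # stands in for velocity-based blur

def temporal_accumulate(frame, history, alpha=0.1):
    return alpha * frame + (1.0 - alpha) * history  # simplified, no reprojection

def post_process(frame, history, motion):
    """Ordering matters: prefilter BEFORE accumulation, motion blur AFTER.

    A pixel whose history gets rejected (disocclusion, broken vectors) has
    still received morphological AA, so it isn't aliased, and motion blur
    afterwards hides whatever flicker remains on such pixels.
    """
    frame = morphological_aa(frame)
    frame = temporal_accumulate(frame, history)
    return motion_blur(frame, motion)
```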
The pixel-sharp reflections in CP are interesting; however, you're looking at a much narrower roughness cutoff range there, and it's much easier to reconstruct reflections on windows like CP has than in DE
First, CP has RT reflections across the whole roughness range from 0 to 1. Second, it's not roughness that prevents accumulation, but rather something that breaks camera jittering, and denoising is the most likely culprit here.
Third, from what I can tell after many hours of playing DE with RT reflections, reflections in DE are not physically correct: they don't take roughness into account when tracing rays, all reflections appear to be perfect mirror ones (roughness only controls the blending strength of those reflections), and because of that there are no signs of denoising.
Also, after switching RT reflections to full resolution, they work perfectly fine with DLSS; the default half resolution breaks jittering and accumulation in DLSS.
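To sketch the distinction between what DE appears to do and physically based rough reflections (the Gaussian lobe below is a crude stand-in for proper GGX importance sampling):

```python
import numpy as np

def reflect(d, n):
    """Perfect mirror reflection of unit direction d about unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

def rough_reflect(d, n, roughness, rng):
    """Physically motivated: the traced ray itself is perturbed by roughness,
    which produces noisy results that then need denoising."""
    out = reflect(d, n) + rng.normal(scale=roughness, size=3)
    return out / np.linalg.norm(out)

# What DE appears to do instead: always trace the perfect mirror ray and use
# roughness only as a compositing weight afterwards, e.g.
#   color = lerp(base_color, mirror_reflection_color, blend(roughness))
# hence mirror-sharp reflections everywhere and no need for a denoiser.
```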
Reprojection from reflections isn't handled by DLSS, as you're looking at second-order shading; the movement of the reflection in screen space is only partially dependent on the initial surface's motion vectors.
Some games track the movement of reflections for denoising; this can differ from the typical motion vectors, of course.
You also have to consider full 6-axis motion, including the rotation of all objects, as well as any motion of the reflected object. DLSS isn't built for that at all; it doesn't handle it.
No temporal accumulation is built for that, because those are the same corner cases as occluded regions (though occluded regions can be overcome to a point by tracking a few history buffers, as sketched below). You can only remedy it with higher spatial resolution on the rotated edges; otherwise you simply blur the edges of the rotated object with motion blur, morphological AA, TAA, DLSS, etc., which looks quite good and acceptable in practice.
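A hedged sketch of that few-history-buffers idea (the validation test here is a naive color tolerance; real TAA uses neighborhood clamping plus depth and velocity checks):

```python
import numpy as np
from collections import deque

class MultiHistoryAccumulator:
    """Keep several past frames so a region that was occluded in the most
    recent frame can still pull a valid sample from an older one."""

    def __init__(self, depth=3, alpha=0.1, tol=0.1):
        self.frames = deque(maxlen=depth)  # newest first
        self.alpha, self.tol = alpha, tol

    def accumulate(self, current):
        blended = current.copy()
        pending = np.ones(current.shape, dtype=bool)  # pixels still lacking history
        for past in self.frames:  # try the newest history first, then older ones
            ok = pending & (np.abs(past - current) < self.tol)
            blended[ok] = self.alpha * current[ok] + (1.0 - self.alpha) * past[ok]
            pending &= ~ok
        self.frames.appendleft(current)
        return blended  # pixels that never matched stay as the raw current sample
```

Rotation is the harder case: the samples you'd want simply don't exist in any history buffer, which is why blurring those edges is the practical fallback.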
neither works for including multiple samples, as the information is lost behind applying the BRDF.
The BRDF has nothing to do with that; you can integrate as many samples as you'd like, but post-processing of the samples, such as denoising, can easily break camera jittering, and once it's broken, there is nothing TAA or DLSS can do to accumulate further samples over time. As simple as that.
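And to close the loop on jittering, a 1D sketch of why that is. Jittered point samples accumulated over frames act as supersampling; if a pass before accumulation throws away the sub-pixel offsets (the snapping below is a crude stand-in for a jitter-unaware denoiser), accumulation gains nothing. The Halton sequence is the standard jitter source; the rest is illustrative:

```python
import numpy as np

def halton(index, base=2):
    """Halton low-discrepancy sequence, the usual source of per-frame camera jitter."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

scene = lambda x: np.sin(40.0 * x)  # sub-pixel detail inside one pixel's footprint [0, 1)

# 16 frames of jittered point samples: accumulating them over time is
# supersampling, converging to the integral of the detail over the pixel.
jittered = [scene(halton(i + 1)) for i in range(16)]
print(np.mean(jittered))

# If a pass before accumulation discards the sub-pixel offsets, every frame
# lands on the same couple of positions and no detail can ever be recovered.
snapped = [scene(float(round(halton(i + 1)))) for i in range(16)]
print(np.mean(snapped))
```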