B3D IQ Spectrum Analysis Thread [2021-07]

Discussion in 'Console Technology' started by iroboto, Jul 13, 2021.

  1. cwjs

    cwjs Regular

    You can use something like z distance -- you can use whatever you want. Like everything in graphics, getting the perfect result is a ton of work.
     
  2. see colon

    see colon All Ham & No Potatos Veteran

    I mean, if that's how you want to live your life...
     
    PSman1700 likes this.
  3. Frenetic Pony

    Frenetic Pony Regular

    Giving that control to devs makes sense, but that doesn't mean you're not introducing aliasing from oversharpening and losing sample reprojection. If samples go missing that otherwise wouldn't, that's on DLSS; at native resolution, Doom Eternal's TAA does a wonderful job of reprojection with incredibly smooth edges. And as the game's art directors themselves state, to paraphrase, "if you're going a million miles an hour you don't notice crispness like that" (in reference to RT shadows, but the same applies here). Optimizing for a still comparison screenshot is silly: if you're moving a lot, your eyes lose resolution anyway, since humans essentially do their own TAA upscaling (our eyes even vibrate to jitter sample space), while sharp pops from lower accumulation will increase contrast and be more visible; and if you're standing still, you can just accumulate a lot of frames. So the second option doesn't make much sense in practice, unless you care more about looking better in freeze-frame comparisons than in motion.

    The pixel-sharp reflections in CP are interesting, but you're looking at a much narrower roughness cutoff there; it's much easier to reconstruct reflections on windows like CP has than in DE, which goes all the way down to what, 0.6 or something? Anyway, I wonder if there are also RT reconstruction differences between the two titles, which is a dev-controlled thing and not to do with DLSS. Reprojection of reflections isn't handled by DLSS: you're looking at second-order shading, where the movement of the reflection in screen space is only partially dependent on the initial surface's motion vectors. You also have to consider full six-axis motion, including the rotation of all objects, as well as any motion of the reflected object. DLSS isn't built for that at all; it doesn't handle it. While you can do an interesting reprojection for nigh-perfectly smooth surfaces like CP's cutoff allows, by searching screen space in the next frame or by tracking the motion vector of the reflected subject, neither works for combining multiple samples because the information is lost once the BRDF is applied.
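    The "second-order shading" point can be sketched numerically. This is a hedged illustration with invented function names, not DLSS's or any engine's actual code: a mirror reflection on a planar surface behaves like geometry at a virtual depth of (surface depth + ray hit distance), so its screen-space motion under camera translation differs from the surface's own motion vectors.

```python
# Illustrative sketch (hypothetical names): why reflections need their
# own motion vectors. A planar mirror reflection acts like geometry at
# a "virtual" depth behind the mirror.

def virtual_reflection_depth(surface_depth, hit_distance):
    # The virtual image of the reflected object sits at the surface
    # depth plus the reflected ray's hit distance.
    return surface_depth + hit_distance

def screen_x_after_move(x_view, depth, cam_dx):
    # 1-D pinhole projection (focal length 1): screen position of a
    # view-space point after the camera translates sideways by cam_dx.
    return (x_view - cam_dx) / depth

surface_shift = screen_x_after_move(0.0, 2.0, 1.0)      # surface at depth 2
reflection_shift = screen_x_after_move(0.0, 10.0, 1.0)  # virtual depth 2 + 8
```

    With the surface at depth 2 and a hit distance of 8, the same camera move shifts the surface five times further in screen space than the reflection; reprojecting the reflection with the surface's motion vectors therefore lands in the wrong place.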
     
  4. OlegSH

    OlegSH Regular

    Never said that.
    Aside from that, aliasing from oversharpening and losing samples due to wrong or lack of reprojection are two different and unrelated beasts.

    Oversharpening comes from the settings of the sharpening filter, which creates ringing at extreme values. The network has nothing to do with sharpening unless it has been proven otherwise with some real evidence.
    In fact, you can take a look at the screens I posted here yesterday - https://imgsli.com/NjE3MDQ/1/2 . As you can hopefully see, there are no signs of sharpening in the screenshots with Native + TAA and DLSS Quality; the level of sharpness is exactly the same for both Unreal's TAA and DLSS.
    And that's what I see in most games with DLSS, so, again, the neural network is not responsible for the oversharpening you mentioned, developers are.
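    The ringing at extreme sharpening settings is easy to demonstrate in isolation. A minimal 1-D sketch in plain Python (names invented for the example), assuming a simple unsharp-mask filter rather than whatever filter any particular game actually ships:

```python
# Illustrative sketch (hypothetical names): unsharp-mask sharpening on
# a 1-D step edge. At high strength the output overshoots the 0..1
# range of the input, i.e. ringing.

def box_blur(signal, radius=1):
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, strength):
    # sharpened = original + strength * (original - blurred)
    blurred = box_blur(signal)
    return [s + strength * (s - b) for s, b in zip(signal, blurred)]

edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]  # hard step edge, values in 0..1
mild = unsharp_mask(edge, 0.5)
harsh = unsharp_mask(edge, 3.0)  # overshoots well past 0..1: ringing
```

    The overshoot on either side of the edge is exactly the halo/ringing that reads as "oversharpening" on screen, and it scales with the filter strength, which is a developer-exposed setting.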

    Losing sample reprojection comes from either bad motion vectors, lack of motion vectors or lack of information in history (occluded regions) and that again has nothing to do with the neural network in DLSS.
    So what are we even arguing about here?

    So does DLSS in this game.

    It's not as simple as you're saying: the camera in games is never static, and neither are the characters, guns and hands in first-person games, trees, traffic, etc. So all comparisons are in motion.
    When you look at something, you don't move your head, because for the microsaccades to work you have to fixate on a point, which improves focal resolution. Detail without forward motion is a priority, because that's how the eyes can appreciate all the details in focus and in sight.
    There are other elements in the retina that are sensitive to edge resolution (they spot flickering and aliasing), and that's why a clever post-processing pipeline should also have prefiltering before TAA (be it morphological anti-aliasing of the input frame, motion blur or something else) for the cases where temporal accumulation fails. This won't fix flickering in the portion of pixels where accumulation fails, but it will fix aliasing, and motion blur will fix the remaining flickering if there is any.
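    For reference, a minimal 1-D sketch of what "accumulation fails" means here, assuming a common neighborhood-clamp style of history rejection (illustrative only, with invented names; real TAA/DLSS pipelines are far more involved):

```python
# Illustrative sketch (hypothetical names): temporal accumulation with
# neighborhood clamping. Reprojected history that falls outside the
# current frame's local min/max is clamped (rejected); those are the
# pixels where accumulation fails and aliasing/flicker can survive.

def taa_resolve(current, history, alpha=0.1):
    out = []
    for i, (cur, hist) in enumerate(zip(current, history)):
        neighborhood = current[max(0, i - 1):i + 2]
        lo, hi = min(neighborhood), max(neighborhood)
        hist = min(max(hist, lo), hi)  # clamp stale/mis-reprojected history
        out.append(alpha * cur + (1.0 - alpha) * hist)  # exponential blend
    return out

# Pixel 3's history (5.0) is stale, e.g. from bad motion vectors; the
# clamp discards it and the pixel falls back to the current sample.
resolved = taa_resolve([0.0, 0.0, 1.0, 1.0], [0.0, 0.0, 0.0, 5.0])
```

    Pixels whose history survives the clamp converge over many frames; pixels whose history is rejected effectively revert to a single noisy sample, which is where a prefilter before accumulation earns its keep.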

    First, CP has RT reflections across the full range of roughness from 0 to 1. Second, it's not the roughness that prevents accumulation, but rather something that breaks camera jittering, and denoising is most likely to blame here.
    Third, from what I can tell after playing many hours of DE with RT reflections, the reflections in DE are not physically correct: they don't take roughness into account when tracing rays, all reflections seem to be perfect mirror ones (roughness is responsible just for the strength of blending of these reflections), and there are no signs of denoising because of that.
    Also, after forcing RT reflections to full resolution, they work perfectly fine with DLSS; the default half resolution breaks jittering and accumulation in DLSS.
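    "Camera jittering" here means the sub-pixel offset applied to the projection each frame so accumulation sees new sample positions inside every pixel. A hedged sketch of the commonly used Halton-sequence approach (invented names, not any particular game's code); a pass that resolves at a different resolution without accounting for this offset misaligns the samples TAA/DLSS tries to accumulate:

```python
# Illustrative sketch (hypothetical names): per-frame sub-pixel camera
# jitter from a Halton sequence, expressed as a clip-space offset that
# would be added to the projection matrix.

def halton(index, base):
    # Low-discrepancy sequence; index starts at 1.
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offset(frame, width, height):
    # +/- half a pixel in x and y, converted to clip space (one pixel
    # is 2/width wide in normalized device coordinates).
    i = frame % 8 + 1
    jx = halton(i, 2) - 0.5
    jy = halton(i, 3) - 0.5
    return 2.0 * jx / width, 2.0 * jy / height
```

    The offsets stay within half a pixel of center, so a pass rendered at half resolution sees them as quarter-pixel offsets of its own grid, which is one plausible way jittering "breaks" downstream, as described above.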

    Some games track movement of reflections for denoising, this can be different from typical motion vector of course.

    No temporal accumulation is built for that, because those are the same corner cases as occluded regions (though occluded regions can be overcome to a point by tracking a few history buffers). You can only remedy it with higher spatial resolution on the rotated edges; otherwise you simply blur the edges of the rotated object with motion blur, morphological AA, TAA, DLSS, etc., which looks quite good and acceptable in practice.

    BRDF has nothing to do with that. You can integrate as many samples as you'd like, but post-processing of the samples, such as denoising, can easily break camera jittering, and if it's broken, there is nothing TAA or DLSS can do to accumulate further samples over time, as simple as that.
     
    Last edited: Jul 19, 2021
    PSman1700 and pharma like this.