B3D IQ Spectrum Analysis Thread [2021-07]

Discussion in 'Console Technology' started by iroboto, Jul 13, 2021.

  1. iroboto

    iroboto Daft Funk Legend Subscriber

    =P
    Yeah, now you're getting why this analysis is really more important than pixel counting. In some scenarios, like tree branches and power lines from afar, this is an incredible thing to have. But we also need to look at the other things DLSS needs to improve on (in particular the red areas)
     
    Deleted member 86764, cwjs and BRiT like this.
  2. cwjs

    cwjs Regular

    I suspect it's intentional insofar as edges are both very important and comparatively easy to detect in an image, but not intentional as far as fooling the press.

    However, I think the fact that reconstruction algorithms so heavily over-perform in stills vs. motion is about fooling the press -- screenshots still sell games and are heavily used for comparisons.

    I wonder if there are any good sample games without a lot of random effects where you could perfectly sync up a camera motion and capture screenshots that show ghosting and other motion artifacts? I was recently playing the great-looking PS5 port of FF7 and being constantly reminded how bad UE's lauded TAA reconstruction is at preventing ghosting.
     
  3. Clukos

    Clukos Bloodborne 2 when? Veteran

    This is very cool, thank you! :yes:
     
  4. This analysis has made me think about how VRS technology could be utilised for a kind of reverse situation. Rather than looking for dark, low-detail areas, couldn't it also look for angles and choose increased resolution for those?

    A combination of a baseline resolution (1440p), with dark areas at lower res and high-detail lines at higher resolution.
     
  5. Globalisateur

    Globalisateur Globby Veteran Subscriber

    This is similar to Sony's solution for VR: increase the resolution of effects/textures on the most visible parts of the image. Which is the right way to use this tech.
     
  6. cwjs

    cwjs Regular

    VRS (tier 2) looks for whatever the devs tell it to -- they pass in a screen-space texture, and that texture is used to drive the shading rate. I imagine the hard part is developing a heuristic to generate that screen-space texture which catches all the fine-grained things you want, doesn't accidentally catch parts that are very expensive to up-res / very artifacty to down-res, and doesn't take so much frame time that it undoes the benefit of VRS.

    Gears, for example, uses Sobel edge detection (and, I assume, keeps the edges and surrounding areas at higher res and down-reses the rest?), which is a robust and fast way to find things you absolutely need to be sharp, but probably does a bad job at detecting which non-edges benefit from higher res.
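    To make the rate-image idea concrete, here's a minimal Python/NumPy sketch of a Sobel-driven heuristic (my own toy illustration, not Gears' actual code -- the tile size and threshold are invented, and a real implementation would run as a compute shader on the GPU):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (grayscale image in [0, 1])."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            window = pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            gx += kx[dy, dx] * window
            gy += ky[dy, dx] * window
    return np.hypot(gx, gy)

def vrs_rate_mask(img, tile=8, threshold=0.5):
    """Per-tile shading rate: 1 = full rate (tile contains an edge),
       2 = 2x2 coarse shading everywhere else."""
    edges = sobel_magnitude(img) > threshold
    h, w = img.shape
    mask = np.full((h // tile, w // tile), 2, dtype=int)
    for ty in range(h // tile):
        for tx in range(w // tile):
            if edges[ty * tile:(ty + 1) * tile, tx * tile:(tx + 1) * tile].any():
                mask[ty, tx] = 1  # keep tiles containing edges at full rate
    return mask
```

    The interesting knobs are exactly the ones discussed above: the threshold (what counts as an edge) and how far the full-rate region extends around edges.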

    OTOH, I'm sure devs are measuring this stuff empirically, and sometimes the results of their profiling will foil our intuition (and our 400x zoom analysis). Really looking forward to the next wave of GDC/etc. talks about VRS implementations.
     
    mr magoo, PSman1700, turkey and 2 others like this.
  7. pjbliverpool

    pjbliverpool B3D Scallywag Legend

    Why wouldn't you use both methods at the same time? Reduce res on the least visible parts of the scene and increase res on the most visible parts with roughly no net performance change. This is an unambiguous win.
     
    PSman1700, pharma, BRiT and 2 others like this.
  8. Silent_Buddha

    Silent_Buddha Legend

    This is exactly one of the use cases that was put forward for VRS tier 2 -- maybe regular VRS as well, I can't quite remember the talks on the first implementation of VRS.

    Nothing precludes combining both use cases simultaneously: reduce quality in low-detail/low-visibility/low-contrast areas while increasing it in high-detail/high-visibility/high-contrast areas. So instead of just increasing the resolution or performance of the scene as a whole, you can increase detail in select areas of the screen while lowering quality in others.
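    As a toy sketch of that combined use (my own Python/NumPy illustration; the tile size, thresholds, and rate codes are invented, and a real tier 2 rate image would be generated on the GPU), a single rate mask can move in both directions from a baseline:

```python
import numpy as np

def combined_rate_mask(luma, edge_mag, tile=8, dark=0.1, edge=0.5):
    """Per-tile shading rate going both ways from a baseline:
       1 = full rate (fine detail present), 2 = 2x2 baseline,
       4 = 4x4 coarse (uniformly dark, low-detail tile)."""
    h, w = luma.shape
    mask = np.full((h // tile, w // tile), 2, dtype=int)
    for ty in range(h // tile):
        for tx in range(w // tile):
            ys = slice(ty * tile, (ty + 1) * tile)
            xs = slice(tx * tile, (tx + 1) * tile)
            if (edge_mag[ys, xs] > edge).any():
                mask[ty, tx] = 1   # up-rate: tile contains detail worth keeping sharp
            elif (luma[ys, xs] < dark).all():
                mask[ty, tx] = 4   # down-rate: tile is uniformly dark
    return mask
```

    The up-rate check wins over the down-rate check, so a dark tile that still contains an edge keeps full rate.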

    Regards,
    SB
     
  9. Frenetic Pony

    Frenetic Pony Regular

    This does seem to be the general idea behind DLSS. We can see it loves oversharpening edges, which causes aliasing and crawling but gets people going with "look how sharp it is", just like cranking up a sharpening filter on pictures does. At the same time, DLSS does rather poorly in interiors. The problem with triple-A games is that they're getting increasingly complex shading, and Doom Eternal is a great example: the quality drop in reflections is quite glaring, and the same will happen with VRS.

    Really, it's just hard to predict how evident missing shading samples will be, because you're not choosing them based on your primary view; you're choosing them based on what your primary view is reflecting. Whether what you're seeing is reflecting something important or not is really hard to know until you send out samples -- the expensive part -- to find out. There are entire sections of the ray tracing literature on ray guiding etc. that try to deal with this, but it's just not something VRS or DLSS is built to handle well. Speaking of which, I rather fear DLSS will look quite bad with UE5. The TAA reconstruction there is already built to deal with how Lumen works; without that you could easily see fairly badly undersampled GI, which is already a problem at times.

    Oh, and the ironic thing about VRS choosing "dark" spots is that humans are more sensitive to contrast and detail in darker areas than in lighter ones. Film grain is a fun example: it's often applied over the whole frame, but you notice it a lot more in dark areas than in light areas for exactly that reason. Combined with more and more triple-A studios going all in on TAA upsampling, I wonder how useful it even is; as we can see from other analyses, the lower the resolution, the less VRS saves. And if you're upsampling again anyway, that'll just make the further undersampling even worse. Which is all to say, maybe we won't be seeing a ton of highly aggressive use of it.
     
  10. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■) Moderator Legend Alpha

    VRS doesn't choose anything, it does exactly what the Developers tell it to do. It's on them to work out how to apply it.
     
    AntShaw, pjbliverpool and PSman1700 like this.
  11. Globalisateur

    Globalisateur Globby Veteran Subscriber

    Only if we ignore the I/O part of the equation. As we can see here, VRS is apparently often a fair trade-off: what you gain in fps you mostly lose in perceptible resolution. But that ignores the fact that you still have to load the high-resolution textures that will be downgraded later. And unfortunately, I/O is becoming one of the main bottlenecks this gen: scenes are more and more bottlenecked by I/O, so some I/O resources will be wasted by VRS.

    It will become more important later, when Xbox games stream assets during gameplay more often than they do now (something already heavily done in some exclusive PS5 games like Demon's Souls or the Insomniac titles). But even now some open-world games are heavily reliant on I/O streaming during gameplay (like Cyberpunk) and have big framerate drops during it. So you can't ignore I/O. Reducing pressure on I/O in those games could actually... improve the framerate during the most delicate moments of a scene (where it usually drops the lowest).
     
  12. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■) Moderator Legend Alpha

    That's why they should use SFS too. No need to stream the entire texture when not all of it may be required.
     
    PSman1700 likes this.
  13. Insight

    Insight Newcomer

    I don't think they need SFS; UE5 uses an algorithm to determine what to load for any given view, according to the interview Epic gave to Edge Magazine:
    https://www.gamesradar.com/amp/were...ssible-tomorrow-inside-epics-unreal-engine-5/
     
    Deleted member 13524 likes this.
  14. Jay

    Jay Veteran

    mr magoo likes this.
  15. OlegSH

    OlegSH Regular

    That's not true. DLSS doesn't oversharpen the image. Developers are fully responsible for sharpness levels, since they control the sharpening filter in DLSS. They are free to select any sharpening filter, by the way; in DOOM, for example, id uses their own contrast adaptive sharpening for DLSS instead of DLSS's default non-CAS filter.
    It would be insane to use two sharpening filters in a row, so the sharper results you see with DLSS in DOOM are not caused by some additional sharpening in DLSS (there is only one sharpening filter applied, which you can control in the game settings), but rather because DLSS itself loses less detail while resampling and rectifying samples.
    Neural nets are trained with loss functions, and this training should be done with L1/L2/PSNR/SSIM or other distance metrics between images, and of course it should also take temporal metrics into account (temporal loss).
    With these metrics, you can flexibly trade off between a more stable but blurrier image with more accumulation, and a less blurry but less stable image with less accumulation. Most people prefer the second option.
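    That trade-off can be sketched as a weighted training loss (a toy Python/NumPy illustration of the general idea, not NVIDIA's actual objective; the function names and the single-weight scheme are my own):

```python
import numpy as np

def spatial_l1(pred, target):
    """Per-frame reconstruction error: how close the output is to ground truth."""
    return np.abs(pred - target).mean()

def temporal_l1(pred, prev_pred_warped):
    """Frame-to-frame stability: penalizes flicker between the current output
       and the previous output reprojected with motion vectors."""
    return np.abs(pred - prev_pred_warped).mean()

def training_loss(pred, target, prev_pred_warped, alpha=0.7):
    """alpha near 1 favors per-frame sharpness; alpha near 0 favors stability."""
    return (alpha * spatial_l1(pred, target)
            + (1 - alpha) * temporal_l1(pred, prev_pred_warped))
```

    Sliding alpha toward 1 gives the sharper-but-less-stable behavior most people prefer; toward 0 gives the blurrier, more heavily accumulated image.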

    Again, not true.
    How well it does depends mostly on the game's motion vectors, on how different shaders factor in the camera jittering (they all have to be jitter aware), on texture mip bias adjustments in general and in shaders, and on how clever the devs are with the post-processing pipeline.
    For example, despite having mirror reflections everywhere in DOOM, DLSS cannot reconstruct them because they are apparently not jitter aware. By contrast, DLSS reconstructs pixel-perfect sharp mirror RT reflections in CP2077.
    In other words, you can't feed garbage to an algorithm and expect good results; there are expected input parameters that have to be taken into account for a good result.
     
  16. Is it possible to bind the look functions to the keyboard in Doom Eternal? I.e., like we did with the original Doom back in the early 90s.

    I'm thinking the look right/left would have a consistent rotation speed when bound to a key. You'd need a video capture device presumably, or some relevant software. Start a stage in native 4K, rotate using the bound key, then repeat the process for DLSS Performance and DLSS Quality. Find a matched screenshot in the capture, then send it to @iroboto to run through his tool.

    It'd be interesting to measure the accuracy between static and dynamic screenshots.
     
  17. see colon

    see colon All Ham & No Potatos Veteran

    It would be trivial for anyone with soldering skills to create a device from any supported gamepad that would give you a consistent turn speed. Modern analog sticks are just two potentiometers and a spring for self-centering. All you have to do is remove the stick (or just the pots) and wire up a compatible pot that either has defined steps, or mark the steps so you get consistent input.

    Actually, now that I've said all that, instead of ruining a controller, it would probably be easier to just 3D print (or build out of cardboard) something that holds the right stick left at a fixed angle.
     
  18. Surely key binding is easier?
     
  19. I thought it could be somewhat automated using stuff like Z distance and a DoF check.

    If it's not, isn't its implementation a monumental task for the developers?
     
  20. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■) Moderator Legend Alpha

    I think they may have mentioned time estimates in the Gears/Gears Tactics/VRS blogs, but I can't recall specifics or if it's false memory.

    Here's one part where they say a few days of dev work:

    https://devblogs.microsoft.com/directx/variable-rate-shading-a-scalpel-in-a-world-of-sledgehammers/
     
    Deleted member 86764 likes this.