There we have it. DLSS Explained (super dumbed down explanation with a pinch of PR Boogaloo):
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/
That's a bit overboard. Options are always good, and players have the option to turn it off. If you don't want gamers being 'duped', disable it as the default option and require gamers to enable it in settings.
- Goes as far as recommending it be removed from BFV entirely, since gamers will enable it based on Nvidia's marketing and end up with a terrible experience compared to normal TAA
Been thinking about the negative mip bias as well; part of the blurrier result may indeed come from training DLSS with the target resolution's mipmap range instead of what would be appropriate for 64xSSAA (and thus losing what would be the proper look for 64xSSAA for each pixel).
Seeing the softer textures in the DLSS analysis cited in the post above made me wonder about the methodology of DLSS.
It looks like they are running the full scene at the lower resolution with the corresponding low res mipmaps. A neat thing that Epic wrote about with TAA is that the TAA can actually run the mipmaps at the post-upscale target resolution (e.g. 4K mipmaps instead of 1885p), which is often why the TAA scenes look more crisp. Normally, running the scene with this negative LOD bias would result in texture shimmer because the lower initial render resolution would be undersampling the mipmaps creating aliasing artifacts. However, the pseudo-samples from the previous frame and the noise-reduction from the TAA filter let you 'get away with' using the higher detail mipmaps without shimmer.
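For illustration, here's roughly how that negative LOD bias is usually applied at the sampler level (a minimal D3D11-flavoured sketch of the general technique Epic describes, not anything from BFV or the DLSS integration; the helper names are made up):

```cpp
#include <cmath>
#include <d3d11.h>

// Sketch: bias texture mip selection so that a frame rendered at a lower
// resolution samples the mip levels appropriate for the final upscaled
// output. log2(render / output) is negative when rendering below the
// output resolution, e.g. log2(1920 / 3840) = -1.0, i.e. one mip sharper.
float ComputeMipLodBias(float renderWidth, float outputWidth)
{
    return std::log2(renderWidth / outputWidth);
}

D3D11_SAMPLER_DESC MakeBiasedSamplerDesc(float renderWidth, float outputWidth)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.Filter        = D3D11_FILTER_ANISOTROPIC;
    desc.AddressU      = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressV      = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressW      = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.MaxAnisotropy = 16;
    desc.MipLODBias    = ComputeMipLodBias(renderWidth, outputWidth);
    desc.MinLOD        = 0.0f;
    desc.MaxLOD        = D3D11_FLOAT32_MAX;
    return desc;
}
```

Without TAA's temporal accumulation to hide the undersampling, that sharper mip selection is exactly what would shimmer, which ties into the next point.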
A recent statement from nVidia seems to indicate that they are not using any temporal information to compute DLSS, so they can't gain extra information via sample reuse and don't benefit from TAA's jitter/shimmer reduction, which should limit how much sharpening they would want to apply.
I suspect that under the hood DLSS works a lot like MLAA/FXAA, except instead of just looking for pre-determined contrast edge shapes, the machine learning is used to find the characteristics of image areas that contain aliasing artifacts, then find the blend instructions that best reduce the error in those areas. I wonder how large a search area they are using around each pixel. The main issue with this method is that it is impossible to construct more information than exists in the original image (which is rendered at the lower resolution). This works OK at 4K, but I assume one of the reasons low-resolution 1080p DLSS doesn't seem to be widely available is that it would become readily apparent the image contains too little information when it is constructed from an upsampled 720p-900p image.
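To make the MLAA/FXAA comparison concrete, here's a crude sketch of that "find a contrast edge, then blend neighbours across it" idea (a toy CPU-side filter with arbitrary thresholds; real FXAA additionally searches along the edge for its endpoints, and this says nothing about what DLSS's network actually learns):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Pixel { float r, g, b; };

static float Luma(const Pixel& p) { return 0.299f * p.r + 0.587f * p.g + 0.114f * p.b; }

// Very rough MLAA/FXAA-style pass: find pixels sitting on a strong luminance
// edge and blend them with their neighbours across that edge.
std::vector<Pixel> EdgeBlend(const std::vector<Pixel>& img, int w, int h, float threshold = 0.1f)
{
    std::vector<Pixel> out = img;
    auto at = [&](int x, int y) -> const Pixel& {
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h - 1);
        return img[static_cast<size_t>(y) * w + x];
    };
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float c = Luma(at(x, y));
            float n = Luma(at(x, y - 1)), s = Luma(at(x, y + 1));
            float e = Luma(at(x + 1, y)), wl = Luma(at(x - 1, y));
            float contrast = std::max({n, s, e, wl, c}) - std::min({n, s, e, wl, c});
            if (contrast < threshold)
                continue;  // flat area: leave untouched
            // Blend across the axis with the larger gradient (perpendicular to the edge).
            bool horizontalEdge = std::fabs(n - s) >= std::fabs(e - wl);
            const Pixel& a = horizontalEdge ? at(x, y - 1) : at(x - 1, y);
            const Pixel& b = horizontalEdge ? at(x, y + 1) : at(x + 1, y);
            Pixel& o = out[static_cast<size_t>(y) * w + x];
            o.r = 0.5f * o.r + 0.25f * (a.r + b.r);
            o.g = 0.5f * o.g + 0.25f * (a.g + b.g);
            o.b = 0.5f * o.b + 0.25f * (a.b + b.b);
        }
    }
    return out;
}
```

The point being: a filter like this can only redistribute the information already in the low-resolution frame, which is the same limitation described above.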
The video itself addresses points raised about other RTX features. They didn't want to compare against FFXV's TAA solution because they felt it was poor compared to other games, similar to Metro's non-RT lighting being poor compared to other games. There is always going to be a conflict of interest when an IHV that wants to sell new proprietary tech works with devs on an engine to implement that tech. We need data from independent parties whom nVidia isn't supporting directly.
One can't determine that until it's proven itself.
That's why I thought DLSS, or some form of it, could be a thing for consoles.
Reconstruction on a 1.8 TF PS4 to Spider-Man/HZD quality takes a few ms. DLSS, to an inferior quality standard, on a 2080 takes longer than rendering a 1080p frame, which is something like 140 fps with DXR off, so ~7 ms, and much longer with RTX enabled. There's no reason (evidence) to think DLSS is better than reconstruction; you should wait before jumping to conclusions about what are good ideas for consoles.
Doing reconstruction with compute...
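Just to spell out the frame-budget arithmetic behind those figures (the numbers are the ones quoted in the post above, not measurements; the "few ms" reconstruction cost is an assumption):

```cpp
#include <cstdio>

int main()
{
    // Figures quoted in the discussion above, not measured here.
    const double fps_1080p_dxr_off   = 140.0;                     // claimed 2080 framerate at 1080p, DXR off
    const double frame_ms            = 1000.0 / fps_1080p_dxr_off; // ~7.1 ms per 1080p frame
    const double ps4_reconstruction_ms = 3.0;                      // "a few ms" assumed for PS4-class reconstruction

    std::printf("1080p frame on a 2080: ~%.1f ms; PS4-class reconstruction: ~%.1f ms\n",
                frame_ms, ps4_reconstruction_ms);
    return 0;
}
```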
Well, they won't be current gen graphics if they have raytraced lighting. However, this thread is about DLSS and determining how well it performs both in time and quality, and seeing how that changes over time if the algorithm/training advances. DLSS is a very interesting tech and a whole new paradigm being applied to realtime graphics. As I mentioned before, perhaps ML assisted reconstruction is the ideal? It'd also be nice to hear whether PS4's ID buffer helps at all. The issue at the moment seems to be the ML not creating any surface detail, but the source material isn't that blurry. Worst case, DLSS should look like 1080p upscaled; that with crisp edges (being recovered okay) would be a notable improvement.
Everything compute and we have current gen gfx with new features at at least 30fps.
As I mentioned before, perhaps ML assisted reconstruction is the ideal?
I think he's referring to how it's being reconstructed. With DirectML, developers have full control over what the ML does in the pipeline and when. With Nvidia DLSS, the solution could be entirely black boxed from developers, and we're seeing a post-processing reconstruction using ML.
DLSS is ML assisted reconstruction. That's exactly what it does under the hood. That's also why the Tensor Cores are so important to it. Evaluating neural networks in vanilla compute is too slow to hit the performance target they need for it to actually be useful.
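As a concrete (and entirely hypothetical) picture of where such a post-process reconstruction pass would sit in a frame, here's a sketch; none of these types or functions are real engine, DirectML, or driver APIs. With a driver-level solution like DLSS the equivalent of the upscale call is hidden from the developer, whereas a DirectML-style integration would let the engine own that step and feed it whatever inputs it wants:

```cpp
// Hypothetical frame loop; all of these types and functions are stand-ins,
// not real APIs.
struct Texture { int width = 0, height = 0; };

Texture RenderScene(int w, int h)                  { return {w, h}; }  // hypothetical
Texture RunMLUpscale(const Texture&, int w, int h) { return {w, h}; }  // hypothetical
void    DrawUI(Texture&)                           {}                  // hypothetical
void    Present(const Texture&)                    {}                  // hypothetical

void Frame()
{
    // 1. Render the scene at a reduced resolution (e.g. 1440p for a 4K output).
    Texture lowRes = RenderScene(2560, 1440);

    // 2. ML reconstruction as a post-process: takes the low-res colour buffer
    //    (and, in an engine-integrated scheme, possibly motion vectors or IDs)
    //    and produces the output-resolution image.
    Texture fullRes = RunMLUpscale(lowRes, 3840, 2160);

    // 3. UI/HUD are composited at native resolution after reconstruction.
    DrawUI(fullRes);
    Present(fullRes);
}
```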
That's assuming DICE/4A have provided all the source screens already. Keep in mind that ultrawide resolutions aren't even an option yet, since DLSS needs to be trained on every single resolution separately and ultrawide hasn't been done. Whether it's sitting in a queue at Nvidia or DICE hasn't provided non-standard resolutions yet, who knows.
Perhaps. The interesting part is whether DICE and 4A Games are hands off at this point. I think that's likely, given that this is a driver-based solution. It will be on nvidia to find a way to train it so the results are better than TAA.