Nvidia DLSS 3 antialiasing discussion

Is frame generation now considered acceptable (even desirable)? I haven't seen many complaints recently from average gamers about latency or artifacting in "fake frames". Not a lot of media coverage either.
In the cases where I've tried DLSS 3 frame generation, I was shocked by how unaffected I was by the added lag, to the point that I kept checking to make sure FG was actually on. I expected to at least be able to notice the extra lag; I'm pretty damn sensitive to that.
 
In my experience with both DLSS and FSR, latency is heavily implementation dependent. Some games are usable with FG even at 30-40 pre-FG fps; some require 60+, and even then you can feel the added latency.

Some of the better implementations of DLSS FG are in path-traced games like CP2077 and Portal. One of the best FSR FG implementations to date is in Starfield (on par with, if not better than, DLSS there IMO).

The Talos Principle is an interesting case: DLSS FG there hit a driver bug at launch, which the developers worked around at some point, and then Nvidia fixed the bug. The workaround implementation was very laggy, and FSR was less laggy in comparison, but with the driver fix DLSS became a lot less laggy and is now beating FSR there, I'd say.

Another interesting difference: with DLSS you can pretty much force vsync in the driver and forget about further tweaking, as "it just works". There are some minor frame pacing issues on fast changes between "light" and "heavy" frames (like looking at the sky and back down with the mouse), but they are tolerable, and you generally get a fine experience above 90 visible fps with G-Sync (it can be lower in some games).

With FSR you really need to lock the frame rate to a number you will be hitting all the time, as leaving it unlocked leads to heavy frame pacing issues. This makes it harder to tweak into an optimal state than DLSS, and you're losing some performance in areas that could go above the limit, but otherwise it does work surprisingly well when integrated properly.

On that note, both DLSS and now FSR have had botched integrations which simply don't work. For FSR, these are Avatar (as of the last couple of patches) and TLOU Part I: enabling FSR FG in these doesn't improve the fps or do anything, really, at the moment. DLSS has had its share of such borked integrations too; all of them were fixed eventually, IIRC.

Perhaps it's different now, but I played through Portal RTX when it launched and FG was simply too 'floaty' for me. Thankfully, ultra-performance mode looks very decent in this game, so I used that.

Cyberpunk, OTOH, has been way better, and though I can still feel the added latency since my base frame rate is under 60, it's still quite playable. Maybe the decoupling and addition of FSR3 frame generation will improve this even further, since DLSS FG seems to be more computationally heavy.

Not sure what the frame-pacing results should be; the Avatar review by ComputerBase had the 7900 XTX running merrily while the 4080 was hitching all over the place. I haven't kept up with this game, so maybe it was a bug, since they say DLSS FG also has issues on the Nvidia card.


Looking towards the future: when people tested out 8K FG, they found huge amounts of VRAM usage (5-6 GB) and no performance improvement. Maybe the OFA is limited to 4K and would need improvement in the next generation.

I also had high hopes for FG to become a driver toggle, like AMD has done, back when Jensen made his 40xx series announcement.
 
March 26, 2024
Horizon Forbidden West: DLSS vs. FSR vs. XeSS Comparison Review | TechPowerUp
The PC release also has support for NVIDIA's DLSS Super Resolution, Frame Generation (also known as DLSS 3), NVIDIA's Deep Learning Anti-Aliasing (DLAA), Intel's Xe Super Sampling 1.2 (XeSS 1.2) and AMD's FidelityFX Super Resolution 2.2 (FSR 2.2) from day one. AMD's FSR 3 Frame Generation isn't supported on launch day, but the developers will add it in a future update.

All implemented upscaling solutions are able to use Dynamic Resolution Scaling (DRS) at 30, 45 and 60 FPS, a very welcome feature. When DRS is active, the internal resolution will scale from 100% to a maximum of 50% in more demanding scenes.

In order to run Horizon Forbidden West at maximum graphics settings and reasonable framerates at native resolution, quite a powerful GPU is required, which is why upscaling solutions are so important. But depending on the game, there can be differences in the implementations of NVIDIA's DLSS, Intel's XeSS and AMD's FSR, so we are keen to take a look at how temporal upscalers perform in Horizon Forbidden West.
 
Streamline 2.4.0 + DLSS 3.7 released

Some interesting things here. DLSS now has Preset E, which is the new default.

This in particular though caught my eye:

Added support for upscaling Alpha in sl.dlss.

Comments from the author of DLSS Tweaks:

emoose said:
Preset E is interesting, but the alpha upscaling is even more so. It seems this has to be enabled by the game itself by setting the alphaUpscalingEnabled SL flag, which then sets a DLSS creation flag:

https://github.com/NVIDIAGameWorks/...b35/source/plugins/sl.dlss/dlssEntry.cpp#L391

The SL docs mention the following about it:

experimental alpha upscaling, enable to upscale alpha channel of color texture
NOTE: Alpha upscaling (DLSSOptions::alphaUpscalingEnabled) is experimental, and will impact performance. This feature should be used only if the alpha channel of the color texture needs to be upscaled (if eFalse, only RGB channels will be upscaled).
NVSDK_NGX_DLSS_Feature_Flags_AlphaUpscaling is the new DLSS creation flag added, should be pretty easy to add to DLSSTweaks since we already have stuff to force AutoExposure flag, probably just need to update the DLSS headers with the new ones from SL & add code to read/set it from INI.

Very curious to learn more about what "Alpha upscaling" is.
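
Mechanically, at least, turning it on looks simple from the game side. Going by the SL source emoose linked, it would presumably be something like the sketch below; only the alphaUpscalingEnabled field (and eTrue/eFalse) is confirmed by the quoted docs, the rest is the usual Streamline DLSS option boilerplate from memory, so treat it as a sketch rather than gospel.

#include <cstdint>
#include "sl.h"
#include "sl_dlss.h"

// Hypothetical game-side setup; only alphaUpscalingEnabled comes from the
// quoted docs, everything else is standard DLSS option plumbing.
void EnableDlssWithAlphaUpscaling(const sl::ViewportHandle& viewport,
                                  uint32_t outputWidth, uint32_t outputHeight)
{
    sl::DLSSOptions options{};
    options.mode = sl::DLSSMode::eMaxQuality;            // any upscaling mode works here
    options.outputWidth = outputWidth;
    options.outputHeight = outputHeight;
    options.alphaUpscalingEnabled = sl::Boolean::eTrue;  // new in Streamline 2.4.0 / DLSS 3.7

    // The sl.dlss plugin then ORs NVSDK_NGX_DLSS_Feature_Flags_AlphaUpscaling
    // into the DLSS creation flags (the dlssEntry.cpp line linked above).
    slDLSSSetOptions(viewport, options);
}

Given the performance warning, a game would presumably only set this when its color target actually carries a meaningful alpha channel.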
 
@Flappy Pannus

From DLSS tweaks

experimental alpha upscaling, enable to upscale alpha channel of color texture

NOTE: Alpha upscaling (DLSSOptions::alphaUpscalingEnabled) is experimental, and will impact performance. This feature should be used only if the alpha channel of the color texture needs to be upscaled (if eFalse, only RGB channels will be upscaled).

 
Very curious to learn more about what "Alpha upscaling" is.
It says that it can upscale the alpha channel if your frame buffer has one. This is often not used in games, but it can be used for things like transparent windows. So the use of this is very niche, but it's still cool that they added support for it.
 
In games, destination alpha was used in the PlayStation 2 era for some multitexture effects.

Basically, you would write a mask into it wherever you wanted a future pass to be visible.
Funnily enough, one of the uses was reflective windows on buildings.

Things would have been so much better with modern texture sampling.

I really wonder where modern games have used destination alpha.
 
The final render target that DLSS upscales would never use an alpha channel because the scene has already been composited, no? I'm trying to think of a case where you'd need to upscale a texture with an alpha channel, unless there was an additional compositing step afterwards. Maybe UI? Is there some reason you'd want to upscale a UI that has some level of transparency? Usually UI is not expensive (I don't think), so I'm not sure why you'd want to upscale it instead of just rendering it natively and compositing.

Is there a reason why you might want to use the alpha channel from the previous frame? I wonder if DLSS ever interpolates samples between two frames, or if it's just selecting the best sample from history.

From the new DLSS programming guide:

3.16 Alpha Upscaling Support
By default, DLSS is intended for 3-channel RGB images, only. Experimental support for upscaling 4-channel RGBA images can be enabled by setting the NVSDK_NGX_DLSS_Feature_Flags_AlphaUpscaling flag at creation time. For best results, the RGB color should be premultiplied by alpha in the color input.
Note: performance will be impacted by enabling this feature. Expect the overall execution time of DLSS to increase by 15-25% when alpha blending is enabled.


The biggest thing I can think of is not gaming. Maybe you're rendering something in real time that's going to be composited on top of video, like on an augmented reality device. You could render a 3D interface or character, upscale it (to save power/performance), and then composite it on top of a video feed. You could have 3D UI or objects with transparency or translucency.
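
For a game (or any app) integrating NGX directly rather than through Streamline, it would presumably just be one more creation flag. A rough sketch: only the AlphaUpscaling flag and the premultiplied-alpha note come from section 3.16 above, the rest is the usual DLSS feature-creation boilerplate from memory.

#include <d3d12.h>
#include "nvsdk_ngx.h"
#include "nvsdk_ngx_helpers.h"

// Hypothetical creation-time setup. The color input fed to DLSS each frame
// should have its RGB premultiplied by alpha, per the guide.
NVSDK_NGX_Result CreateDlssFeatureWithAlpha(ID3D12GraphicsCommandList* cmdList,
                                            NVSDK_NGX_Parameter* ngxParams,
                                            NVSDK_NGX_Handle** outDlssHandle,
                                            unsigned renderW, unsigned renderH,
                                            unsigned outputW, unsigned outputH)
{
    NVSDK_NGX_DLSS_Create_Params dlssParams = {};
    dlssParams.Feature.InWidth            = renderW;   // internal render resolution
    dlssParams.Feature.InHeight           = renderH;
    dlssParams.Feature.InTargetWidth      = outputW;   // output/display resolution
    dlssParams.Feature.InTargetHeight     = outputH;
    dlssParams.Feature.InPerfQualityValue = NVSDK_NGX_PerfQuality_Value_MaxQuality;
    dlssParams.InFeatureCreateFlags =
          NVSDK_NGX_DLSS_Feature_Flags_MVLowRes           // low-res motion vectors, as usual
        | NVSDK_NGX_DLSS_Feature_Flags_AlphaUpscaling;    // upscale RGBA instead of RGB only

    // Node masks of 1 for a single-GPU setup.
    return NGX_D3D12_CREATE_DLSS_EXT(cmdList, 1, 1, outDlssHandle, ngxParams, &dlssParams);
}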
 
April 7, 2024
# This allows setting a single global DLSS DLL for games to use
# If the global version is newer than the one included with game, it should get loaded automatically

If you save that script as a PowerShell module, then you can just run "UpdateDLSS" inside PowerShell whenever new DLSS versions show up, point it to the downloaded nvngx_dlss.dll, and it should handle copying the files/updating the INI for you.
 
I was looking at a TV, an LG OLED, and it has an α9 AI Processor 4K Gen6 which, among other things, takes care of upscaling. Anyone know how this compares to DLSS?
And if chips like this start getting included in other monitors, will we still need DLSS? Would it free up the tensor cores to do other stuff?
This also applies to AMD/Intel upscaling.
Some info about the chip here:

DLSS uses more data than just images (motion vectors, depth, and so on), so in theory it may perform better than pure image upscalers.
 
Most of the image processing on TVs (almost all of it) isn't designed with interactive content in mind. Non-interactive content has a considerably higher latency window to work with.

Which is why game mode on TVs typically disables the majority of image processing features to get down to lower latency.
 
Conversely, maybe Nvidia thinks rival TV manufacturers to LG are going to want an AI upscaling chip, so there's a market there?
 
Conversely, maybe Nvidia thinks rival TV manufacturers to LG are going to want an AI upscaling chip, so there's a market there?

It's possible, but IMHO upscaling for video is a bit different from upscaling for games, though how different is hard to say.
In a sense it's easier to train for games because you can generate "true" high-resolution images from a game, but you generally don't have that for video or movies.
Training with downscaled video and movies does help, but only to an extent.
Denoising can also be quite different between video and games.
 
The Nvidia Maxine SDK is still in beta, but real-time interaction with TV AI processors might not be out of the question, assuming hardware requirements are met or change in the future.
Two of the five AI features are specifically related to Super Resolution and Upscaling.

Maxine Windows Video Effects SDK

NVIDIA Maxine Windows Video Effects SDK enables AI-based visual effects that run with standard webcam input and can easily be integrated into video conference and content creation pipelines. The underlying deep learning models are optimized with NVIDIA AI using NVIDIA® TensorRT™ for high-performance inference, making it possible for developers to apply multiple effects in real-time applications.

The SDK has the following AI features:
  1. Virtual Background, which segments and masks the background areas in a video or image to enable AI-powered background removal, replacement, or blur.
  2. Artifact Reduction, which reduces compression artifacts from an encoded video while preserving the details of the original video.
  3. Super Resolution, which generates a detail-enhanced video with up to 4X high-quality scaling, while also reducing blocky/noisy artifacts and preserving textures and content. It is suitable for upscaling lossy content.
  4. Upscaler, which is a very fast and light-weight method to deliver up to 4X high-quality scaled video with an adjustable sharpening parameter. This feature can be optionally pipelined with the Artifact Reduction feature to enhance the scale while reducing the video artifacts.
  5. Video Noise Removal, which removes low-light camera noise from a webcam video while preserving the texture details.

The following table illustrates the scale and resolution support for input videos to be used with the ArtifactReduction and SuperRes effects.

Table 1. Scale and Resolution Support for Input Videos

Effect | Scale Factor | Input Resolution Range | Output Resolution Range
Artifact reduction | n/a (this effect does not change the resolution, so the input and output ranges are the same) | [90p, 1080p] | [90p, 1080p]
Super resolution | 4/3x | [90p, 2160p] | [120p, 2880p]
Super resolution | 1.5x | [90p, 2160p] | [135p, 3240p]
Super resolution | 2x | [90p, 2160p] | [180p, 4320p]
Super resolution | 3x | [90p, 720p] | [270p, 2160p]
Super resolution | 4x | [90p, 540p] | [360p, 2160p]
 
I was looking at a TV, an LG OLED, and it has an α9 AI Processor 4K Gen6 which, among other things, takes care of upscaling. Anyone know how this compares to DLSS?
And if chips like this start getting included in other monitors, will we still need DLSS? Would it free up the tensor cores to do other stuff?
To use that effectively, wouldn't you have to output a lower-resolution signal to the TV? If you disable DLSS and turn down the internal res, wouldn't the game use 'traditional' upscaling, so the TV just sees a 4K image and wouldn't upscale any further? Unless these TVs are smart enough to determine that the 4K signal they're receiving is actually a 1080p signal upscaled bilinearly.
 
April 7, 2024
# This allows setting a single global DLSS DLL for games to use
# If the global version is newer than the one included with game, it should get loaded automatically
This is very cool; my only issue is that older versions assumed different presets, so I'm curious if this tool handles that. For example, the common guidance was, for anything pre-DLSS 2.5.1, to just use 2.5.1 instead of 3.5 and above, since 2.5.1 was the last version before a big preset change.

Also, I’ve always wondered how this works with multiplayer games with anticheat. Can I upgrade the DLSS version in Warzone and not get banned?
 