Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

IMO if you do the comparison to TAAU - DLSS 2.2 is quite... different. It definitely does not have "all of its shortcomings".
I take your word because I can't compare myself.
But...
In my UE5 scene TAAU looks pretty much perfect. But it's too static; I have no moving characters in it.
In the UE5 screenshots posted here I saw characters with jaggy edges / stairstepping, which is bad. This can be fixed, and I would know how. So maybe it's still too early, because NV spends much more effort on this than anyone else did.
I'm convinced TAAU can give practical quality at equal performance in the end.
Even if not, I think we would see more overall benefit if the 'less than 10%' of die area spent on tensors had been spent on more SMs instead.

By shortcomings I mean the requirement of motion vectors, smearing, and other TA issues. DLSS has them too, even if in different situations or with different failure cases.
Some people think TA approaches will become generally unacceptable as display nits increase, though I hope they're wrong about that.
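To make the motion-vector requirement concrete, here's a minimal single-channel sketch of the temporal accumulation loop that TAAU-style techniques build on. All names and the 3x3 clamp are my own illustrative assumptions, not any engine's actual code; smearing is what you get when the reprojected history is stale (disocclusions, missing vectors for transparencies) and the neighborhood clamp can't fully reject it.

```python
# Toy sketch of one TAA/TAAU accumulation step (single channel).
# Assumed helper names; not UE or DLSS source.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def taa_resolve(history, current, motion, alpha=0.1):
    """Blend the reprojected history with the current jittered frame.

    history: accumulated result from last frame, shape (H, W)
    current: this frame's jittered render, shape (H, W)
    motion:  per-pixel motion vectors in pixels, shape (H, W, 2)
    alpha:   weight of the current frame (small = more accumulation)
    """
    h, w = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: fetch history from where each pixel was last frame.
    # Wrong or missing vectors fetch the wrong history -> ghosting.
    py = np.clip(np.rint(ys - motion[..., 1]).astype(int), 0, h - 1)
    px = np.clip(np.rint(xs - motion[..., 0]).astype(int), 0, w - 1)
    reprojected = history[py, px]
    # Neighborhood clamp: reject history that no longer resembles the
    # current 3x3 neighborhood. Too loose -> smearing; too tight ->
    # the flicker / jagged edges seen in the screenshots.
    lo = minimum_filter(current, size=3)
    hi = maximum_filter(current, size=3)
    clamped = np.clip(reprojected, lo, hi)
    return (1.0 - alpha) * clamped + alpha * current
```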
 
In my UE5 scene TAAU looks pretty much perfect. [...]
I'm convinced TAAU can give practical quality at equal performance in the end.

UE5 is not TAAU. It's a brand new scaling approach. They haven't revealed what it is.
 
In short: NV pushes a datacenter feature onto the gaming market, offloading the related costs (including chip development) to gamers. And all they get is a form of TAAU with all its shortcomings - whether they realize it or not.
In a few years, game developers will eventually have found ML applications, and then tensor cores may be justified.

Again, the tensor cores do get used; you need them to run the current DLSS implementations. Whether that's due to NV lying about it or not is not up for debate in a topic like this.
 
AFAIU TSR is an improved version of TAAU and not a completely new approach.

Brand new approaches are often overrated and it rarely makes sense to throw away something that already works quite well.

I suppose it's a question of semantics. Maybe it's still TAAU but rebuilt from the ground up.

Temporal Super Resolution
Nanite micropolygon geometry and the fidelity demands of the next generation of games have increased the amount of detail displayed on screen like never before. To meet these demands, we've written a Temporal Super Resolution algorithm from scratch that replaces UE4's TemporalAA for higher-end platforms.

Temporal Super Resolution has the following properties:

  • Output approaching the quality of native 4k renders at input resolutions as low as 1080p, allowing for both higher framerates and better rendering fidelity.

  • Less ghosting against high-frequency backgrounds.

  • Reduced flickering on geometry with high complexity.

  • Runs on any Shader Model 5 capable hardware: D3D11, D3D12, Vulkan, PS5, XSX. Metal coming soon.

  • Shaders specifically optimized for PS5's and XSX's GPU architecture.

In Unreal Engine 5 Early Access, Temporal Super Resolution is enabled by default in your project settings.

By default, the rendered geometric detail will adapt to the rendering resolution, leading to the difference seen in the comparison above. However, the geometric detail can optionally be tweaked to use the same geometry as native 4K rendering to reach an output a lot closer to native 4K.

https://docs.unrealengine.com/5.0/en-US/ReleaseNotes/
 
I suppose it's a question of semantics. Maybe it's still TAAU but rebuilt from the ground up.
It is the same thing; even if they made a new one instead of improving the UE4 version, it's still TAAU. It remains to be seen how performant it is against DLSS at the same IQ before claiming that tensor cores are useless, but the problem is that it's hard to establish a standard for the same IQ.
 
It is the same thing; even if they made a new one instead of improving the UE4 version, it's still TAAU. It remains to be seen how performant it is against DLSS at the same IQ before claiming that tensor cores are useless, but the problem is that it's hard to establish a standard for the same IQ.
Oh, if only anyone had compared the two:

DLSS 2.2 vs TSR on Youtube and Imgur Image gallery

In stills DLSS seems better at resolving some finer details than TSR (just look at the character's scarf, hair and fire in the 50% TSR vs DLSS Performance images).

In the video it seems that TSR has some artifacts with fire and hair that can hopefully be mitigated in later revisions.

Overall though, to me TSR is damn impressive and it would be very hard to actually notice the differences in motion while gaming, unless that's all you're really doing.

It's definitely a gigantic step up from UE4 TAAU, it's not even a contest:

Side-by-side 720p -> 1440p upscale TAAU vs TSR

It's definitely a much bigger jump from TAAU to TSR than it is from TSR to DLSS, particularly with loads of high-geometry detail in the background.
 
Sigh, it looks like I'm unable to post direct links unless I have > 10 posts, due to spam prevention.

While my post (with links) is pending moderator approval, I'll just answer without direct links; you can assemble the URLs from the parts below manually.

It is the same thing; even if they made a new one instead of improving the UE4 version, it's still TAAU. It remains to be seen how performant it is against DLSS at the same IQ before claiming that tensor cores are useless, but the problem is that it's hard to establish a standard for the same IQ.

Someone actually compared DLSS 2.2 to TSR on reddit:
reddit: /r/unrealengine/comments/o2ttio/dlss_22_and_tsr_comparing_unreal_engine_5/
youtube: /watch?v=z2v8e_J650I
imgur: /gallery/RPO0IJi

In stills DLSS seems better at resolving some finer details than TSR (just look at the character's scarf, hair and fire in the 50% TSR vs DLSS Performance images).

In the video it seems that TSR has some artifacts with fire and hair in general (that can hopefully be mitigated as UE5 gets out of early access) but it's very hard to see a difference otherwise.

Overall IMO TSR is damn impressive and definitely good enough for actual gameplay, not staring at comparisons frame by frame.

If I had the time to test and validate one upscaling method for all platforms I'd definitely do TSR only.
 
This debate about who wins (or will win) between FSR and DLSS is a hot one. Technically, there are a lot of valid points, and in the end it's difficult to argue against DLSS's superiority. But what boggles me is seeing some believe FSR will "kill" DLSS, even in the YouTube stratosphere. Let's summarize with some hard facts:
1- AMD has owned the console world for generations, but still, that has never really impacted Nvidia's software dominance in the PC market. Nvidia is an innovator, never standing still; they keep pushing new tech every generation.
2- Because of a more than 2-year lead, DLSS is now implemented by the vast majority of studios producing triple-A games and in the 2 engines used by smaller studios (Unreal and Unity). It's just an on/off toggle that brings nearly no additional effort. The time when DLSS 1.0 was a titanic task to activate (with huge work from Nvidia and their thousands of ML servers) is over. And we see it very clearly today with the accelerated adoption rate of DLSS 2.x. It's useless to talk about difficulty of implementation; the background work is done. We are in 2021, not 2019 anymore...
3- Because Nvidia has 80+% of the PC gaming market share and the RTX range is getting pushed more than ever (see the latest news saying that fewer GTX cards are being produced in favor of the RTX 3060), it's obvious that studios can't ignore the market leader and the customers with money to buy new games to see what their new toys can do.
4- Finally, Nvidia brings a lot of incentives to developers when they integrate some green tech (free PR exposure, free marketing, and additional sales through the possibility of bundling their title). For some small studios, it's sometimes the difference between losing money and making a profit!

In the end, there is no winner or loser. But one thing is sure: DLSS won't disappear. In fact, it has never been so strong, and we are only now starting to see the fruits of Nvidia's more than 2 years of investment in the tech.
Regarding FSR, well, it has its purpose and it may be popular, but it won't kill DLSS. The latter has too much going for it without even talking about any technical merit.
 
Regarding FSR, well, it has its purpose and it may be popular, but it won't kill DLSS. The latter has too much going for it without even talking about any technical merit.

Yes, and that's where the problem lies for many: they are thinking or assuming FSR is AMD's answer to NV's DLSS method, which it clearly isn't. The disparity between the two on a technical level is just too large; as Alex explained a while ago, DLSS is doing something entirely different to obtain its superior results.

That said, FSR is a needed addition to the PC gaming space where DLSS or TAAU can't be applied. It's not here to overtake either of them or to compete with them.

Obviously, AMD's answer to DLSS (as in, ML/AI and hardware acceleration) will most likely make its way into RDNA3+ architectures.
 
What we really need to see is a variety of inferencing benchmarks on Turing and Ampere GPUs, probably related to image processing, that compare performance running inference on CUDA cores vs tensor cores. I don't know if those benchmarks exist.
The problem here is that nobody does ML on shader cores these days. It's long been a foregone conclusion that you get 10x or higher performance by doing it via MM units (tensor cores or whatever). So this isn't even something anyone is interested in benchmarking, as the results are obvious.
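For the benchmark question above, here's a rough sketch of the kind of test that would show the gap on one GPU. It uses PyTorch; the sizes and iteration counts are arbitrary choices of mine, and FP32 with TF32 disabled is only a proxy for "shader ALU" throughput versus FP16 on the tensor cores - an illustration, not a rigorous benchmark.

```python
import torch

def time_matmul(dtype, n=4096, iters=50):
    # Keep FP32 matmuls off the tensor cores on Ampere (TF32 path).
    torch.backends.cuda.matmul.allow_tf32 = False
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    for _ in range(5):  # warm-up
        a @ b
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        a @ b
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # ms per matmul

print(f"FP32 (CUDA cores):   {time_matmul(torch.float32):.2f} ms")
print(f"FP16 (tensor cores): {time_matmul(torch.float16):.2f} ms")
```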
Whether this applies to things like real-time image reconstruction is an interesting question. The idea is obviously that if you get a lower frametime from rendering + reconstruction than from native rendering at the reconstructed resolution, then it may be worth implementing. The delta is an interesting question too: how much faster than native rendering does a reconstruction approach need to be to be worth the IQ loss? Can an NN running on shading ALUs provide good enough resolution reconstruction fast enough to still give a solid performance gain? My take would be: unlikely, if other ML workloads are anything to go by. The 1-2ms of DLSS2 runtime would balloon to at least 10-20ms on shading ALUs, which is obviously too high to be usable. And doing less ML would lower the quality to the point where you'd be getting results like FSR vs DLSS1, probably making the whole approach basically useless in the absence of ML h/w.
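Plugging the post's own numbers into a frame-time budget makes the argument concrete. All figures here are illustrative assumptions, not measurements:

```python
# Toy frame-time budget for render-low + reconstruct vs native rendering.
native_4k_ms       = 16.0   # hypothetical cost of rendering native 4K
internal_1080p_ms  = 6.0    # hypothetical cost of the lower internal res
reconstruct_ms     = 1.5    # ~1-2 ms DLSS2-style pass on tensor cores
reconstruct_alu_ms = 15.0   # ~10x slower if the same NN ran on shader ALUs

# Reconstruction wins only if internal render + reconstruction < native.
print("tensor cores:", internal_1080p_ms + reconstruct_ms, "ms vs",
      native_4k_ms, "ms native -> wins")
print("shader ALUs: ", internal_1080p_ms + reconstruct_alu_ms, "ms vs",
      native_4k_ms, "ms native -> loses")
```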

UE5 is not TAAU. It's a brand new scaling approach. They haven't revealed what it is.
It's a new TAAU with a new name. TAAU is a pretty wide term for any temporal-accumulation-based reconstruction approach. DLSS2 is TAAU too, btw, but done and enhanced with a number of NNs instead of shaders.
 
In the video it seems that TSR has some artifacts with fire and hair in general (that can hopefully be mitigated as UE5 gets out of early access) but it's very hard to see a difference otherwise.

Overall IMO TSR is damn impressive and definitely good enough for actual gameplay, not staring at comparisons frame by frame.
Not sure if you noticed but we have an ongoing UE5 thread where TSR has come up often in the discussion. It's definitely worth a read as we are fortunate to have Epic developers comment and post in the thread.
 
Oh, if only anyone had compared the two:

DLSS 2.2 vs TSR on Youtube and Imgur Image gallery [...]
They look close, but something is wrong with that video. We see no perf difference between DLSS Quality and DLSS Performance, and sometimes the FPS is higher in DLSS Quality mode in the same scene as DLSS Performance. Probably the same happens with TSR; it's pretty hard to see the FPS counter all the time, but when it's visible you can see there is almost no increase in FPS from going to a lower upscaling mode.
 
They look close, but something is wrong with that video. We see no perf difference between DLSS Quality and DLSS Performance, and sometimes the FPS is higher in DLSS Quality mode in the same scene as DLSS Performance. Probably the same happens with TSR; it's pretty hard to see the FPS counter all the time, but when it's visible you can see there is almost no increase in FPS from going to a lower upscaling mode.

The video has a disclaimer to ignore the performance because it's bugged in the editor.
 
 