Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

They also end the article by stating that Turing's tensor cores are ready and waiting to be used.
Which is a prevailing theory among lots of us: that these cores were thrown in for non-gaming reasons and nVidia are looking for reasons to use them. The latest, greatest upscaling could have run on RTX cards that don't have Tensor cores, spending that silicon on more compute instead, which would be better for upscaling as compute is fast enough. ;)

Tensor cores got a bit of a slap-back in justification from this Control algorithm. Of course, if their AI Research Model can be run efficiently on tensor cores, it might still prove itself. Although in the comparison video, one feels just rendering particles in a separate pass on top would be the best of all worlds and the most efficient use of silicon.

They're saying those cores are there and are capable of handling the next round of improvements coming to DLSS, which is a more optimized version of their AI research model. It's also a way of reassuring people that they won't need a next-gen GPU to handle these improvements when they come. Their AI model utilizes deep learning to train their image processing algorithm. The goal is to get that high quality of the AI model performant enough that it can run on the tensor cores.
That's not what's described. The DLSS process runs on the Tensor cores and is not the 'algorithm' being talked of. DLSS as an ML technique is slow. nVidia found the ML training threw up a new way to reconstruct, but it's too slow to run in realtime as an ML solution. However, the engineers managed to take that new-found knowledge and create a new reconstruction algorithm running on compute*.

The hope is to improve the NN technique so it can be run directly in-game; that's what they term the AI Research Model. One of the reasons it's confusing to follow what's going on is that nVidia are calling the image-processing algorithm 'DLSS' alongside the NN-based DLSS. They showcase DLSS videos of Control that are running an image-processing algorithm rather than an NN, as an example of what they hope their NN-based DLSS will be doing in the future.

* Perhaps, maybe, it's possible to run image processing on Tensor but I've never heard of it used like that.
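
To make the compute vs. NN distinction concrete, here's a minimal sketch of what "image processing on compute" can look like. To be clear, this is just a plain bilinear upscale written as a CUDA kernel, not whatever Nvidia actually shipped in Control; the kernel and its name are purely illustrative.

```cuda
// Purely illustrative: a hand-written upscale pass running as ordinary compute.
// NOT Nvidia's Control algorithm - just the simplest possible example of
// "image processing" reconstruction, in contrast to evaluating a neural network.
__global__ void upscale_bilinear(const float* src, int srcW, int srcH,
                                 float* dst, int dstW, int dstH)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dstW || y >= dstH) return;

    // Map the output pixel back into the low-resolution source image.
    float sx = fmaxf((x + 0.5f) * srcW / dstW - 0.5f, 0.0f);
    float sy = fmaxf((y + 0.5f) * srcH / dstH - 0.5f, 0.0f);
    int x0 = (int)sx, y0 = (int)sy;                  // floor for non-negative values
    int x1 = min(srcW - 1, x0 + 1), y1 = min(srcH - 1, y0 + 1);
    float fx = sx - x0, fy = sy - y0;

    // Blend the four nearest source pixels - a fixed filter, no learned weights.
    float top = src[y0 * srcW + x0] * (1 - fx) + src[y0 * srcW + x1] * fx;
    float bot = src[y1 * srcW + x0] * (1 - fx) + src[y1 * srcW + x1] * fx;
    dst[y * dstW + x] = top * (1 - fy) + bot * fy;
}
```

An NN-based DLSS pass would instead evaluate a trained network over the inputs, which is where the Tensor cores come in.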
 
Ahh ok. I understand what you're saying now. Your last paragraph suddenly clicked for me. My bad. I see that I was wrong now.

So, given that they hope to get their NN-based DLSS implementation close to the quality of their AI Research Model results... how much performance could that realistically free up over, say, the "image processing algorithm DLSS" currently in Control? Would it be enough to even be worth the effort?
 
It probably won't free up any meaningful performance, but it'll improve the quality of the results over the image processing method. The question is more over whether the inclusion of Tensor cores is better than using compute in their place, or whether an all-out compute solution would yield better overall performance (at somewhat reduced quality)?

I've just Googled this though, which suggests RTX RT and ML add very little overhead (10%).
 
It's hard to discuss because it's hard to differentiate compute / tensor. But what changed my view on this is the GTX 1660, which added a cut-down version of the tensor cores so they can at least do native fp16.
Does this mean there is no need to differentiate at all?
Because fp16 is useful in games, the question is not 'are tensor cores worth it?', but 'what features of tensor cores do we really want, and how can we access them?'.

Usually people build an algorithm first, and if it proves useful, hardware acceleration may follow. Seen from the perspective of gamedev, the opposite happened here. GPUs are mainly sold to gamers, and as of yet, games do not use machine learning.

So my opinion: drop int4 + int8, maybe drop the matrix multiplies, keep fp16 + int16, expose it to the gfx APIs, still call it AI / Tensor, and everybody is happy.
I'd like to hear opinions from people who know about AI: do they see upcoming applications that justify more than that?
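
For reference, "native fp16" in the above sense is just packed half-precision math, already reachable from ordinary compute. A minimal sketch, assuming plain CUDA rather than a gfx API (the kernel itself is hypothetical):

```cuda
#include <cuda_fp16.h>

// Hypothetical example of the "keep fp16" case: packed half2 math issues two
// half-precision FMAs per instruction on hardware with native fp16 throughput.
// No matrix units involved - this is the subset of Tensor functionality that
// plain shading/compute work can readily use.
__global__ void scale_bias_fp16(const __half2* in, __half2* out, int n,
                                __half2 scale, __half2 bias)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __hfma2(in[i], scale, bias);   // (in * scale + bias) on two halves at once
}
```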
 
For the moment Tensor operations seem really limited; everything which isn't matrix multiply-and-accumulate seems to be done with generic compute (even the sigmoid function). There seems to be no way to get from tensor to compute except through shared memory, at least for non-NVIDIA plebs.

It's going to make image processing with tensor cores a bit of a pain, a lot of extra pipelining and buffering.
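
For anyone curious what that looks like in practice, here's a rough sketch (assuming the standard CUDA WMMA intrinsics, one 32-thread warp and a single 16x16x16 tile). The matrix multiply-accumulate is the only part that touches the Tensor cores; the sigmoid runs on the ordinary ALUs, and the data has to round-trip through shared memory to get there, which is exactly the pipelining/buffering pain mentioned above.

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes C = sigmoid(A * B) for a single 16x16x16 fp16 tile.
// Assumes the block is launched with exactly 32 threads (one warp).
__global__ void mma_then_sigmoid(const half* A, const half* B, float* C)
{
    __shared__ float tile[16 * 16];   // staging buffer between Tensor and compute

    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);
    wmma::load_matrix_sync(a, A, 16);          // leading dimension = 16
    wmma::load_matrix_sync(b, B, 16);
    wmma::mma_sync(acc, a, b, acc);            // the only Tensor-core instruction here

    // Spill to shared memory so regular threads can address individual elements.
    wmma::store_matrix_sync(tile, acc, 16, wmma::mem_row_major);
    __syncwarp();

    // The activation has no Tensor-core path: plain compute, element by element.
    for (int i = threadIdx.x; i < 16 * 16; i += 32)
        C[i] = 1.0f / (1.0f + expf(-tile[i]));
}
```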
 
Given the small footprint of Tensor, I imagine they're very simple and will be designed for their purpose of ML as efficiently as possible. Like fixed-function units versus pixel shaders versus unified compute shaders - Tensor is starting at level 1, very low programmability.
 
I wonder how much they really gain by reusing parts of the SM other than shared memory. If they made the tensor cores completely separate apart from that, even their old DLSS should have only a small impact.

PS. ignoring power consumption of course.
 
Between Control and FreeStyle Sharpening it seems Nvidia is trying hard to convince us that DLSS is a waste of time :)

https://www.techspot.com/review/1903-dlss-vs-freestyle-vs-ris/

Overall we think this situation is really interesting. AMD introducing RIS may have forced Nvidia to act in updating their sharpening filter available through Freestyle. In the process, they have created a better solution than DLSS which was advertised as a key selling point for RTX graphics cards. Big win for gamers.

Digging deeper into image sharpening, we think Nvidia has the better solution overall when compared to RIS. Freestyle can achieve equivalent image quality, but it also offers an adjustable strength slider which is great for games like The Division 2 that are a bit overprocessed with default settings. You can also configure it on a game-by-game basis. Nvidia’s solution is also much more compatible. It works with all Nvidia GPUs, and supports all modern APIs including DX11.
 
I'm curious. When they're testing these sharpening filters in games like Division 2 and Battlefield V, are they making sure the in-game sharpening is completely disabled first? Both of those games tend to over-sharpen by default already.
 
I have just tried RIS in Control, upscaled from 1080 to 1440. And I don't like it. I had to quit the game after 10 seconds to disable it.
Good: Yep, it's sharp. I can see detailed wood texture and bumps on concrete walls really very sharply - hard to believe the game is upscaled.
Bad: TAA flickering is exaggerated. It's acceptable without RIS, but now it's too distracting - I have to turn it off.
(Also the simple aliased debug visuals I use for programming become worse - staircasing is exaggerated. And menu fonts / logos in games lose their nice smooth appearance.)

I'm one of those who hate the artificial sharpness of realtime CG. I think making stuff even sharper is totally wrong, so even without temporal artifacts I would not want this. But it surely has potential for those who think differently.
I have worked with 4K footage from very expensive cameras at work - at best they capture about the high-frequency detail games show at 1080p. Sharpening is widely used there too, but it results in temporal artifacts that are unpleasant to watch in motion (crawling pixels).
If we want smooth motion, details have to swim between pixels softly, so contrast between individual pixels has to be low to hide them. In other words: blurry, and you can only show frequencies at half resolution smoothly. Frequencies matching full resolution crawl.

That said, I see a big opportunity for upscaling to get even better images than native resolution, if we can improve TAA tech further and get rid of the damn flickering.
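
As a concrete illustration of why sharpening and temporal stability pull against each other: a generic unsharp-mask pass (not RIS or Freestyle, just the textbook idea) amplifies exactly the pixel-level contrast that TAA is trying to keep stable from frame to frame, so any residual shimmer gets boosted along with the texture detail.

```cuda
// Generic unsharp mask: out = in + strength * (in - blur(in)).
// Not AMD RIS or Nvidia Freestyle - just the textbook filter. Raising 'strength'
// raises per-pixel contrast, which is what makes TAA shimmer and stair-stepping
// more visible, as described above.
__global__ void sharpen(const float* in, float* out, int w, int h, float strength)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1) return;   // skip the border

    // 3x3 box blur as the low-pass reference
    float blur = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
            blur += in[(y + dy) * w + (x + dx)];
    blur /= 9.0f;

    float centre = in[y * w + x];
    out[y * w + x] = centre + strength * (centre - blur);     // boost high frequencies
}
```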
 
Which perhaps ties in with why gamers often say a game with a vaseline lens 'looks better': softness is closer to real captured footage. It seems there are two mindsets when dealing with graphics quality, one preferring clinical sharpness and detail resolve, which is somewhat more quantifiable through metrics, and the other just preferring what looks good to their eyes, which often favours deliberately downgrading visual clarity through motion blur, chromatic aberration, DOF and even intended softness.
 
Might well vary by platform. My feeling is sharpness is more important to PC gamers, and movie-like is preferred by console gamers, because the two have grown up playing different games with different requirements. On PC, in twitch shooters and fast RTS/MOBA played up close on a monitor, clarity is essential for fast accuracy, whereas on console, with single-player adventures played on the living-room TV, emulating what's seen on TV is more fitting. I think. ;)
 
I think it comes down to how each person's visual system works. People like me who have heightened peripheral vision, and thus tend to look around constantly, have a difficult time getting over the visual discontinuity that something like DoF and Motion Blur represent in games or media, as it doesn't reflect how we view the world.

People with less sensitivity to peripheral motion are more likely to be comfortable focusing on single things at a time, versus looking at multiple things each second. I'd imagine for people like this motion blur and DoF likely aren't as distracting as they aren't actively trying to resolve the things that are blurred multiple times a second.

I think that's why people in the first camp prefer PC games: you can disable those things that don't accurately reflect how your visual system works in real life. Until there is fast and responsive eye tracking that can reflect how people like me look at the world, DoF and Motion Blur range from heavily distracting to eyestrain-inducing to potentially even headache-inducing, depending on how prevalent they are.

That's a big difference from the focused blurring that you get from good AA solutions, however. That's where smart sharpening needs to happen. Things that need some blurring (like high contrast edges) should get some blurring to avoid stair stepping while textures should get some sharpening to increase detail and reduce obvious blurring, IMO.

Too much "dumb" sharpening is just as bad as too much blurring.

Regards,
SB
 
How do you cope with TV and movies then? These are blurred all over the shop.

It's bothersome but far less than games. The big difference being that I'm in control of the game, which means I need to constantly be making decisions on where to go, what to do, what to respond to (enemies, threats, etc.) and all that kind of stuff. Just like in the real world I need to be aware of everything as almost anything could be a potential hazard.

In a movie I'm just an observer and nothing else. I don't need to look around to decide where to go. I don't need to wonder where enemies are because the movie is going to make it obvious where they are. I don't need to decide what to do based on the circumstances.

Basically, when watching a movie or show almost everything just turns off, especially the parts of my brain dealing with control and observation.

That said, there are times when it's still bad for me in movies. Especially the ones that try to do a POV style, like Hardcore Henry, for example. Basically any time it's overdone, or if there's too much DOF in a slow scenic pan of a landscape. Slow scenic pans are especially bad if there's much blurring, as that's the director telling me they want me to look around the scene.

[edit] Also, just thought about it, but I was probably also conditioned at a young age that movies and TV are just naturally EXTREMELY blurred compared to reality. When I watch shows from the 70's or 80's on VHS, for example...whooo boy is that some blurry stuff.

Regards,
SB
 