Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

Might well vary by platform. My feeling is that sharpness matters more to PC gamers, while a movie-like image is preferred by console gamers, because the two have grown up playing different games with different requirements. On PC, in twitch shooters and fast RTS/MOBAs played up close on a monitor, clarity is essential for fast, accurate play, whereas on console, with single-player adventures played on the living-room TV, emulating what's seen on TV is more fitting. I think.

This is me at least :p When I'm playing online FPS games I want the image as sharp as possible. But for single-player games on the TV (both PS4 and PC) I don't mind the 'cinematic experience' so much; in HZD I think it does the game good.
Oh, and Vice City is a great example: the PS2 and Xbox versions have that blur/trails effect, while the PC version originally doesn't, and it totally lacks the 80s feel the console versions offer. It's not even the same game until you start modding it, at which point it becomes the ultimate version.
 
Things that need some blurring (like high-contrast edges) should get blurred to avoid stair-stepping, while textures should get some sharpening to increase detail and reduce obvious blurring, IMO.
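
If I had to sketch what I mean (purely my own illustration, not any particular game's post chain): let the AA resolve soften the geometric edges first, then run a mild unsharp mask over the result to pull texture detail back, roughly like this:

#include <algorithm>
#include <vector>

// Hypothetical sketch: a 3x3 unsharp mask run after the AA resolve.
// 'image' is a row-major grayscale buffer in [0,1]; 'amount' controls how
// much of the removed high-frequency detail is added back (0 = no change).
std::vector<float> sharpenAfterAA(const std::vector<float>& image,
                                  int width, int height, float amount = 0.5f)
{
    std::vector<float> out(image);
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            // Box blur of the 3x3 neighbourhood = the low-frequency part.
            float blur = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    blur += image[(y + dy) * width + (x + dx)];
            blur /= 9.0f;

            // Add back a fraction of the detail the blur removed.
            float centre = image[y * width + x];
            out[y * width + x] = std::clamp(centre + amount * (centre - blur),
                                            0.0f, 1.0f);
        }
    }
    return out;
}

The 'amount' parameter is essentially the kind of sharpening slider many games now expose.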
Thanks for your point on peripheral attention varying between people. That's something I did not consider. It's a fruitful discussion for defining a goal both kinds of player could agree on...
With this in mind I would ignore DOF and motion blur, because they are more an artistic choice than a necessary technical compromise; at least they're not directly related to upscaling or image sharpness. We can also ignore edge AA, because that's a goal we all have in common.

But I'd like to discuss the bold argument a bit more, considering these counterarguments:

1. Detailed textures do not help you detect enemies. It's the opposite: distant or smaller enemies can easily get lost in the noise caused by high-frequency detail. (When playing Quake Arena, I made the textures so low-res they practically disappeared and only the GI remained visible. Many people do this and it helps.)
(Could it be that the real reason you want more detail in textures is just... a matter of taste, or something else? Or do you think those details really do help you build a mental image quickly, while for somebody like me they're just 'noise'?)

2. What really helps with that mental image is the low-frequency content: distinctive shapes and color design. The same is true for aesthetics.
(If true, and with polygon limits not much of an issue today, wouldn't this mean texture detail is the least important? For example: how does it help if a rock shows high-variance texture detail when its edges turn into unnaturally straight lines as you move close?)

The reason I ask is texture-space shading, which is something I'm increasingly likely to do. Texture detail would probably drop to half resolution or worse.
 
Detail is obviously also an artistic choice. Cel-shaded graphics going for a cartoony look don't need a lot of detail in their textures; however, a game going for a more realistic look wouldn't want blurry textures.

When a game goes for a realistic look rather than an artistic one (say, a watercolor look, for example), I prefer sharp, detailed textures, as that's what I expect from something trying to mimic reality to whatever degree the hardware is capable of.

So, while a tree at 100 meters is going to look the same regardless of how detailed the texture is (hence texture LOD systems), when I'm standing next to it, if it devolves into a blurred smudge of a texture then the illusion breaks down.

This becomes especially true if there is dense foliage. A delicate balance has to be struck: enough blurring that the edges don't develop stair-stepping, but sharp enough that it still looks like what one would expect to see in the real world. Again, it depends on the artistic look a developer is going for.

Regards,
SB
 
Which perhaps ties in with why gamers often say a game with a vaseline lens 'looks better': softness is closer to real captured footage. It seems there are two mindsets about graphics quality: one prefers clinical sharpness and detail resolve, which is somewhat more quantifiable through metrics, and the other just prefers what looks good to their eyes, which often favours deliberately downgrading visual clarity through motion blur, chromatic aberration, DOF and even intentional softness.
I recall there was debate years back with PC graphics cards over whether various vendors had bugged anisotropic filtering. While early on there were clearer cases of compromised quality, like the very strong angle dependence in older ATI products and varying levels of brilinear filtering, the later debate came down to whether ATI (or AMD by that point?) was skimping on AF samples relative to Nvidia.
However, at that late stage it seemed to be more a question of whether the observer liked the impact of Nvidia's negative LOD bias, which gave a sharper result, versus the competing solution's default LOD selection.

Similarly, there was a split between those who could tolerate the blur introduced by some of the tent filters or quincunx because they personally disliked jaggies, and those who preferred sharpness to the point of accepting more aliasing and shimmer. I'm not sure whether there has been any study of which aspects of visual quality or image stability carry higher weight for different subsets of people, or whether there are theories on the source of the individual variability.
 
Definitely agreed on detail versus easy spotting. I played through the Halo remake campaign, easily the single-player "campaign" most familiar to me, and immediately noticed how much harder the detailed Grunts were to see in the newly added long grass compared with the old, original graphics.

But that's more of an art-style consideration than one directly tied to programming. On the programming side, the idea that "edges should be blurred" is kind of a coincidence, or rather there are deeper reasons why blurring edges can look better in some cases. All rendering is aliased by quantization, but with mip maps many "inner" surfaces get their own AA, while edges have no precalculated way to do that. Blurring, though, is just a very, very crude approximation. The truth is that temporal AA is closer to the "correct" idea, as it gets you supersampled results, the ones you actually "want", the kind a camera and the eye see. The trouble is that the reprojection is always incorrect to some degree unless you have a completely static scene and camera, and it already results in unwanted blur.
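
To make the reprojection issue concrete, here's a bare-bones sketch of the per-pixel history blend most TAA variants boil down to; the parameter names and values are illustrative, not from any particular engine, and the neighbourhood clamp is exactly where the ghosting versus blur trade-off lives:

#include <algorithm>

struct Color { float r, g, b; };

// Illustrative sketch of a TAA resolve for one pixel.
// 'current'       : this frame's (jittered) shading result
// 'history'       : last frame's output, fetched at (uv - motionVector)
// 'nbMin'/'nbMax' : min/max of the current frame's 3x3 neighbourhood
// 'blend'         : history weight, typically around 0.9
Color taaResolve(Color current, Color history, Color nbMin, Color nbMax,
                 float blend = 0.9f)
{
    // Clamp the reprojected history to the current neighbourhood so that
    // disoccluded or badly reprojected samples cannot ghost for long.
    history.r = std::clamp(history.r, nbMin.r, nbMax.r);
    history.g = std::clamp(history.g, nbMin.g, nbMax.g);
    history.b = std::clamp(history.b, nbMin.b, nbMax.b);

    // Exponential moving average: accumulates jittered samples over time,
    // which is where the "supersampled" look comes from.
    return { current.r + (history.r - current.r) * blend,
             current.g + (history.g - current.g) * blend,
             current.b + (history.b - current.b) * blend };
}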

As for texture space shading, this would be more correct by necessity. You need to oversample to get a "correct" camera like image, even MSAA is incorrect, as it will not correctly account for shader aliasing. That being said I still feel like there's more artists, and players, might want to do other than just having texture shading supersampling even if it is relatively "cheap" and straightforward in some pipelines. 60fps gets you a smoother look and better temporal aa, higher resolutions as it is get you cleaner images all around including on bothersome edges, etc. etc. Personally "shader based mip supersampling" is a very useful blog post showing a straightforward, cheap, correction for mipmapping that gets you sharper texture results closer to that "camera" target as is.
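
I don't have the link handy, so this isn't that post's exact method, just the general idea as I understand it: rather than one trilinear tap at the hardware-selected mip, take a few taps at a sharper mip spread across the pixel's footprint and average them, which lands closer to what supersampling just the texture fetch would give you. sampleTexture below is a stand-in for whatever fetch the shader actually does:

#include <functional>

struct Float2 { float x, y; };
struct Float3 { float r, g, b; };

// Hedged sketch: approximate a supersampled texture lookup by averaging
// four taps taken one mip level sharper than the hardware would pick,
// offset to the quadrants of the pixel's UV footprint.
// 'sampleTexture(uv, lodBias)' is a placeholder for the real fetch.
Float3 supersampledFetch(Float2 uv, Float2 uvPixelSize,
                         const std::function<Float3(Float2, float)>& sampleTexture)
{
    // Four sub-pixel offsets covering the pixel footprint.
    const Float2 offsets[4] = { {-0.25f, -0.25f}, { 0.25f, -0.25f},
                                {-0.25f,  0.25f}, { 0.25f,  0.25f} };

    // A bias of -1 selects the next sharper mip; averaging the four taps
    // filters the extra detail back down to the pixel's footprint.
    const float lodBias = -1.0f;

    Float3 sum = {0.0f, 0.0f, 0.0f};
    for (const Float2& o : offsets) {
        Float2 tapUV = { uv.x + o.x * uvPixelSize.x,
                         uv.y + o.y * uvPixelSize.y };
        Float3 c = sampleTexture(tapUV, lodBias);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return { sum.r * 0.25f, sum.g * 0.25f, sum.b * 0.25f };
}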
 
I'm not sure if there's been study over what aspects of visual quality or image stability have higher weights for different subsets of people, or if there are theories on the source of individual variability.

I haven't seen anything specifically dealing with image quality and/or sharpness, but I've read some stuff on standard vs. high frame rate in cinema, and I think it might be a similar kinda thing. Some people love HFR, most people hate it. Nothing really conclusive, but it's definitely linked to neuro-cognition: our visual system is basically a bunch of lies. We receive way less data from our eyeballs than we think we do (in terms of temporal resolution, spatial resolution and color resolution) and our brains do an awful lot of heavy lifting to fill in all the missing stuff.

The sparser data set (standard frame rate and, I imagine, a softer image) just makes it much easier for the majority of people's brains to fill it in with data that meshes more easily with what is understood and accepted as real. The denser the data set, the easier it is to perceive any discrepancy, because the brain is filling in fewer of the gaps.

The hypothesis, then, is that some people are just less sensitive (or less reactive) to the uncanny-valley-ish effect this has.
 
Another controversy I've seen recently is how some people are fine with the level of chromatic aberration in The Outer Worlds. Some are physically thrown off by it in terms of eye strain or headaches, others have no comfort issue, and some have difficulty noticing it. That goes beyond those arguing about whether they think it is aesthetically pleasing, although those who develop eye strain would seem to tend towards the negative on it.
Perhaps it is related to which cues their visual systems lean on most heavily for things like object differentiation and focus, and the offset color positions keep prompting some individuals' brains to try to focus an image that will never become cleaner.
 
I'd like to add a point: the edge of a moving object is different from that of a stationary one, yet games have failed to make this distinction in rendering.

The periphery is able to spot such things; then you turn and focus...
 
Our eyes have a significant amount of chromatic aberration (both longitudinal and lateral); they're too simple a set of lenses to focus the whole spectrum of incoming light properly (let alone fore and aft of the focal plane). Our brains automagically correct for it in "post" (again, our visual system is a bunch of lies :D).

It's thought that we use our green cones to focus when there's enough light (when there isn't, our vision is mostly rod-based and we have a really hard time focusing), so our internal chromatic aberration model has a very specific look to it, one that is likely very different from the green/purple fringing we're used to seeing in photography. That's because the PD/CD autofocus systems used in cameras work mostly with light treated as monochromatic (taking in the whole spectrum at once), so they bias focus differently.

The chromatic aberration used in most games follows a model that looks nothing like... well, anything that might exist in the real world, really. So yeah, probably our brains try to get the image focused, especially the green content, can't, the eyes keep hunting for focus, and you get eye strain and headaches.
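
For reference, this is roughly what the in-game model usually amounts to, as a sketch rather than any specific engine's code: the red and blue channels are simply fetched at UVs pushed apart radially from the screen centre while green is left alone, which is also why the effect gets stronger towards the edges of the frame:

#include <functional>

struct Float2 { float x, y; };
struct Float3 { float r, g, b; };

// Sketch of the radial chromatic-aberration post effect most games use.
// 'sceneColor(uv)' stands in for a fetch from the rendered frame; 'uv' is
// in [0,1] with (0.5, 0.5) at the screen centre.
Float3 applyChromaticAberration(Float2 uv,
                                const std::function<Float3(Float2)>& sceneColor,
                                float strength = 0.01f)
{
    // The offset direction points away from the centre, so the effect is
    // zero in the middle of the screen and grows towards the edges.
    Float2 fromCentre = { uv.x - 0.5f, uv.y - 0.5f };
    Float2 offset = { fromCentre.x * strength, fromCentre.y * strength };

    // Red and blue are pulled apart; green stays put, which is nothing
    // like the way a real lens (or the eye) actually misfocuses colours.
    float r = sceneColor({ uv.x + offset.x, uv.y + offset.y }).r;
    float g = sceneColor(uv).g;
    float b = sceneColor({ uv.x - offset.x, uv.y - offset.y }).b;
    return { r, g, b };
}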
 
I find chromatic aberration in games absolutely horrible to deal with and I usually have to stop playing if I can't turn it off. I've always found it baffling that game developers want that in the game at all.
 
So we naturally generate chromatic aberration in VR? I found VR to be pretty good (the Star Wars one at The Void); it didn't feel like anything was off.
 
Our eyes generate CA all the time. Our brains just learned how to "fix it" while they were developing during our first few months/years of life - like, seriously, early childhood is pretty much a self-test/bug-fix/optimization beta kinda thing. Big thing seems to be when we add even more CA to stuff and our brains go "uh, thought I had fixed that". VR games without CA should be fine.

VR has a different problem: even though it can create virtual depth (i.e. objects sit at different convergence levels), the eyes always have to focus on a single physical plane, and that plays havoc with accommodation, the physical change in shape of the eyes' lenses. We instinctively do both simultaneously (the vergence-accommodation conflict), and it's something that not even eye tracking can deal with.
 
Our eyes generate CA all the time. Our brains just learned how to "fix it" while they were developing during our first few months/years of life - like, seriously, early childhood is pretty much a self-test/bug-fix/optimization beta kinda thing. Big thing seems to be when we add even more CA to stuff and our brains go "uh, thought I had fixed that".
If it were consistent, it'd be filtered out. If you wear spectacles, you have probably experienced CA with a new prescription that eventually gets adjusted out. Experiments have put optics on people that turned the world upside down, and after a while their brains auto-corrected the world the right way up; then, on removing the optics, the world was upside down with their normal vision until the brain recalibrated. The problem with CA in games is 1) it's only visible when playing, and only on the screen. 2) Devs dial it up to 11 for some ridiculous reason as if trying to recreate a red/green 3D image from the 80s. A very slight amount can make visuals more realistic, but devs are really poor at dialling in new visual features, like bloom when it first appeared.
 
The problem with CA in games is 1) it's only visible when playing, and only on the screen. 2) Devs dial it up to 11 for some ridiculous reason as if trying to recreate a red/green 3D image from the 80s.

To add to that, it probably doesn't help that it's basically superimposed on the surrounding real world, which is lit and colored as your eyes and brain expect it to be. That contrast is likely helping to drive some people's brains to distraction. Similar to, but different from, how blurring (DOF, motion blur) in games is highly distracting and unpleasant for people like me.

I wonder if it's the fact that I'm partially color blind that CA in games doesn't affect me in the same way as it does for many on this forum.

Regards,
SB
 
I wonder if it's the fact that I'm partially color blind that CA in games doesn't affect me in the same way as it does for many on this forum.
Guess not. I'm one of those who never have a problem with CA, but I think it's the most hated effect. I do not even notice it, aside from, say, a horror game that exaggerates it in moments of mental stress. I assume most people who do notice it dislike it. I'm also pretty sure nobody actually likes it.
Also, I really enjoy motion blur in general, and DOF is fine. But I hated early DOF implementations that ignored depth, so that distant blur also blurred the edges of closer objects; the same applies to bloom.

Am I the only one who hates SSAO? Seems so, but I often turn it off if I can, which is rarely possible. I would love a slider so I could turn it down from total black to something subtle. Look at The Outer Worlds to see how some games still exaggerate AO like crazy. So ugly :(
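
What I have in mind is trivial to expose; as a sketch (aoStrength being the hypothetical user setting), just remap the raw occlusion term before it multiplies the ambient lighting, so the effect fades towards off instead of crushing to black:

#include <cmath>

// Sketch of a user-facing SSAO intensity control. 'rawAO' is the occlusion
// term from the SSAO pass (0 = fully occluded, 1 = unoccluded) and
// 'aoStrength' is the hypothetical slider in [0, 1].
float applyAOStrength(float rawAO, float aoStrength)
{
    // Raising AO to a power below 1 pulls the darkest values up, so the
    // effect fades towards "off" as the slider approaches zero instead of
    // just scaling linearly.
    return std::pow(rawAO, aoStrength);
}

// The ambient term would then be multiplied by the remapped value, e.g.
//   ambient *= applyAOStrength(rawAO, aoStrength);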

Oh, and I hate rim lighting. Some games have it always on to highlight characters; I hate it even if the art style is non-photorealistic.
And I hate object highlighting in general, whether with a shimmer or some colored contour. It's so ugly I would rather check manually whether I can interact with each object. :)
 
Hmmm... thinking about it, the reason I do not notice CA is likely that it's usually applied less at the center and more at the edges of the screen.
But I do not look at the edges; I move the mouse instead. Maybe because I'm not used to focusing at peripheral angles?
Then maybe this is also the reason some people, like me, feel extremely uncomfortable with gamepads. I cannot play on consoles because it feels so unnatural to look around with a gamepad.

Edit: Maybe this is also the reason I cannot catch a ball, haha :D
 
The lack of lighting depth without AO bothers me much more; it looks far flatter.
 
NVIDIA added DLSS alongside RTX in the indie game Deliver Us the Moon. DLSS has now been upgraded to a new version that includes three quality tiers, Performance, Balanced and Quality, which control the rendering resolution DLSS upscales from.

The new version runs on the tensor cores. At 1080p and 1440p, NVIDIA recommends the Quality mode; at 4K they recommend Performance mode. The performance uplift using Quality mode is around the 45% to 65% mark, depending on the GPU.

[Chart: Deliver Us the Moon — NVIDIA GeForce RTX ray tracing + DLSS performance at 2560x1440]


Performance mode extends that uplift to around 200%, though.

[Chart: Deliver Us the Moon — NVIDIA GeForce RTX ray tracing + DLSS performance at 3840x2160]


https://www.nvidia.com/en-us/geforce/news/deliver-us-the-moon-nvidia-dlss/
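
To put rough numbers on those tiers, here's a quick back-of-the-envelope using the commonly cited DLSS 2.0 per-axis scale factors (approximations on my part, not figures from the article):

#include <cstdio>

// Back-of-the-envelope internal resolutions for the DLSS quality tiers,
// using commonly cited per-axis scale factors (approximate values, not
// taken from the linked article).
int main()
{
    struct Tier { const char* name; double scale; };
    const Tier tiers[] = { {"Quality",     2.0 / 3.0},   // ~67% per axis
                           {"Balanced",    0.58},
                           {"Performance", 0.50} };

    const int outW = 3840, outH = 2160;   // 4K output
    for (const Tier& t : tiers) {
        int w = static_cast<int>(outW * t.scale);
        int h = static_cast<int>(outH * t.scale);
        double pixelRatio = static_cast<double>(w * h)
                            / (static_cast<double>(outW) * outH);
        std::printf("%-12s ~%dx%d (%.0f%% of the output pixels shaded)\n",
                    t.name, w, h, pixelRatio * 100.0);
    }
    return 0;
}

At 4K, Performance mode shades roughly a quarter of the output pixels, which lines up loosely with the ~200% uplift in the second chart.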
 