Nvidia DLSS 1 and 2 antialiasing discussion *spawn*

I live in no bubble...
These are your words regarding frames per second: "e-sports and streaming, as if people who are into such things are the only or even the singularly most important target market for high end GPUs..".

I was just illustrating how obtuse, silly, and laughable your statement is. I only mentioned a well-known streamer to illustrate and drive my previous point home: game performance is the driving force behind faster/better dGPUs, not in-game features like RT. Not sure why you are unable to understand this. Otherwise, these content creators (with millions of fans) would turn RT on while streaming, since they have the very best of the best hardware, but none of them turn RT on.

You're tripling down on an inane argument. But ok, let's unpack your claims (factual or not):
- Streamers/Pro Gamers have high end GPUs (because they're enthusiasts, possibly sponsored, or this may be their actual livelihood).
- Streamers only care about frame rates and don't use RT.
- Streamers can have millions of followers.

So, what is this even supposed to prove?
The high end GPU market is mostly streamers?
The high end GPU market is mostly millions of streamer followers?

Clearly, neither is true. The argument does not appear very relevant to the reasons people buy high-end GPUs.
In fact, I reckon twitch shooters and other e-sports titles generally run very well on low-end hardware, which partly explains why they are so popular.

I maintain that image quality is something that matters to a lot more people than you give them credit for.
Why would we bother with ever higher resolutions, perspective-correct textures, anisotropic filtering, anti-aliasing and such if it were not?
And heck, why do some of the top-selling games not even have multiplayer to begin with? And why do others have multiplayer that isn't sensitive to frame rates?

So yeah, to put it bluntly, this is the forum where we drool over stuff like DLSS, FSR, and Digital Foundry raytracing videos.
Twitch chat is that way ---->
 
Maybe he means most PC gamers who live in his house. Definitely feeling some elitist 1% vibes. By definition most PC gamers aren’t hardcore fps nuts. Because if everyone is hardcore nobody is hardcore.
*whispers in your ear*
-Raspberry Pi... is a PC with games... you might not be intimidated by...


Honestly, stop playing the victim here.
We know that most "PC players" are not hardcore, but we do know that most PC players who buy an $800~$1,499 dGPU are hardcore. See..?

Also, to keep things in perspective, we are already having an elitist discussion given the actual cost of modern RTX cards. Even a $300 dGPU is elitist, because that is nearly the cost of a new console. Whereas you already stated that an iGPU/APU is good enough for 99% of the non-elitists (see Pi).
 
We know that most "PC players" are not hardcore, but we do know that most PC players who buy an $800~$1,499 dGPU are hardcore. See..?
No, we don't know that at all. There are a lot of "enthusiasts" who spend big bucks on PC hardware that just collects dust while they browse YouTube. Because buying stuff is fun.
 
You're tripling down on an inane argument. But ok, let's unpack your claims (factual or not):
- Streamers/Pro Gamers have high end GPUs (because they're enthusiasts, possibly sponsored, or this may be their actual livelihood).
- Streamers only care about frame rates and don't use RT.
- Streamers can have millions of followers.

So, what is this even supposed to prove?
The high end GPU market is mostly streamers?
The high end GPU market is mostly millions of streamer followers?

Clearly, neither is true. The argument does not appear very relevant to the reasons people buy high-end GPUs.
In fact, I reckon twitch shooters and other e-sports titles generally run very well on low-end hardware, which partly explains why they are so popular.

I maintain that image quality is something that matters to a lot more people than you give them credit for.
Why would we bother with ever higher resolutions, perspective-correct textures, anisotropic filtering, anti-aliasing and such if it were not?
And heck, why do some of the top-selling games not even have multiplayer to begin with? And why do others have multiplayer that isn't sensitive to frame rates?

So yeah, to put it bluntly, this is the forum where we drool over stuff like DLSS, FSR, and Digital Foundry raytracing videos.
Twitch chat is that way ---->


It doesn't prove anything, it illustrates.... that your previous statement about e-sports was laughable.

A good portion of the "little kids" who watch these Streamers do so because they cannot afford the hardware to game at those levels. So they tune into Streamers who can TURN IT UP TO ELEVEN and play the game. Similarly, a good portion of the "adults" who watch those competitive streams can afford hardware with such levels of performance and do COMPETE at that level (120Hz+ monitors, anyone?), and watch the Pros play, just like you would watch Tom Brady. Little kids do not fuel the RTX market.

I find it very odd that I have to explain what streamers are and do. The streamer mention* was only to hyper-illustrate how lost you were about RAY TRACING and your assumption that those with RTX cards widely use ray tracing in games. That is why I pointed out that even the well-known streamers who are known for TURNING IT UP TO ELEVEN don't use RT... BECAUSE IT SAPS THEIR PERFORMANCE.

I have never mentioned IQ. Pushing straw in people's faces isn't friendly. (BTW, I do game with DLSS on in Warzone; the IQ needs work.)



*Many big-name streamers typically have the best hardware around; some streamers have $20~40K+ worth of equipment, maintained by professionals.
 
Marvel's Avengers allows you to disable TAA. DLSS comes very close to the native 4K SMAA picture. TAA dims the picture and reduces in-picture contrast. Quick comparison between 4K SMAA, DLSS-Q and FSR UQ: Imgsli

Having alternative ways to reconstruct pictures makes more sense than just starting from a TAA-modified image...
 
Marvel's Avengers allows you to disable TAA. DLSS comes very close to the native 4K SMAA picture. TAA dims the picture and reduces in-picture contrast. Quick comparison between 4K SMAA, DLSS-Q and FSR UQ: Imgsli

Having alternative ways to reconstruct pictures makes more sense than just starting from a TAA-modified image...
TAA clamping will definitely take out the highlights and over-average the frame, dimming it a bit for sure and making it soft.
Problem is that SMAA 1x will have absolutely no temporal stability, so it would look super flickery in motion in a modern PBR game.
Unless that is T2x?
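
For what it's worth, here is a minimal single-pixel sketch (NumPy, made-up luminance values, not from any real engine) of why that clamping-plus-accumulation dims sub-pixel highlights:

```python
import numpy as np

def taa_resolve(history, current, neighborhood, alpha=0.1):
    # Clamp the history to the current frame's 3x3 neighborhood range,
    # then blend a small fraction of the current frame into it.
    lo, hi = neighborhood.min(), neighborhood.max()
    return (1.0 - alpha) * np.clip(history, lo, hi) + alpha * current

# A bright specular highlight that only lands on this pixel on some frames:
neighborhood = np.array([0.2, 0.25, 0.3, 0.2, 4.0, 0.3, 0.2, 0.25, 0.3])
resolved = 0.2
for frame in range(8):
    current = 4.0 if frame % 2 == 0 else 0.25   # highlight flickers under jitter
    resolved = taa_resolve(resolved, current, neighborhood)
print(resolved)  # settles well below 4.0: the highlight is averaged down
```

Because the history is an exponential average of frames where the highlight is only sometimes present, the resolved value settles far below the peak, which is the softening/dimming being described.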
 
DLSS is anti-aliasing and image reconstruction. It replaces whatever AA solution the game has (unless the game is prefiltering textures, in which case it keeps that, of course).

So you're telling me it's a sort of new anti-aliasing? Then why do some people mention SSAA? What does "works along with SSAA" mean? I assume people are giving wrong info.
 
So you're telling me it's a sort of new anti-aliasing? Then why do some people mention SSAA? What does "works along with SSAA" mean? I assume people are giving wrong info.

Aliasing in a rasterized image is caused by undersampling. "Supersampling" tends to refer to accumulating many samples per pixel to reduce or eliminate aliasing. SSAA refers to a spatial technique where you render at a higher resolution and then downsample to a lower resolution, allowing you to combine many pixels into one averaged pixel (4 -> 1 in the case of 4K -> 1080p). So you downsample 4K to 1080p and you get 4 samples per pixel instead of the 1 sample per pixel of native 1080p. This is oversimplified, because different render passes can sample each pixel differently, so there's rarely exactly 1 sample per pixel. TAA is also supersampling, but it's temporal: you accumulate samples over time, which you can use to improve sample counts per pixel.
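
As a toy illustration of that 4 -> 1 averaging (just NumPy on random data standing in for a rendered frame, nothing engine-specific):

```python
import numpy as np

def downsample_2x2(img):
    # Average non-overlapping 2x2 blocks: 4 rendered samples -> 1 output pixel.
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

hi_res = np.random.rand(2160, 3840, 3)   # stand-in for a 4K render
lo_res = downsample_2x2(hi_res)          # 1080p output, 4 samples per pixel
print(lo_res.shape)                      # (1080, 1920, 3)
```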

DLSS 2.0 is essentially like TAA, but with heuristics trained against a super-sampled reference image. Essentially it learns how best to use temporal samples to reconstruct a super-sampled image from a lower-resolution, under-sampled image. Basically it can achieve results greater than TAA because it can use samples in novel ways that would be complex for a human to hand-code and optimize for good performance.
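
To make the "temporal samples" part concrete, here's a rough 1D sketch of the underlying accumulation idea: jittered low-resolution samples gathered over several frames fill in a higher-resolution grid. DLSS itself uses a trained network (plus motion vectors) rather than this hand-written scatter, so treat this only as an illustration of the concept:

```python
import numpy as np

signal = lambda x: np.sin(8 * np.pi * x)        # the "scene" being sampled
out_res, render_res, frames = 64, 16, 8

accum = np.zeros(out_res)
weight = np.zeros(out_res)
for f in range(frames):
    jitter = f / (frames * render_res)           # sub-pixel camera jitter per frame
    xs = np.linspace(0, 1, render_res, endpoint=False) + jitter
    samples = signal(xs)                         # low-res, under-sampled "render"
    idx = np.clip((xs * out_res).astype(int), 0, out_res - 1)
    accum[idx] += samples                        # reuse past frames' samples at high res
    weight[idx] += 1.0

reconstructed = accum / np.maximum(weight, 1.0)
reference = signal(np.linspace(0, 1, out_res, endpoint=False))
print(np.abs(reconstructed - reference).mean())  # small fraction of the signal's [-1, 1] range
```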

When people talk about using DLSS with SSAA, they're talking about rendering a game with DLSS to, say, a 4K output and then downsampling to their native display resolution. So maybe you play at 4K DLSS Performance mode and then downsample to 1080p. You'll get visual quality probably not quite as good as traditional SSAA, but with better performance. You can do this by combining Nvidia DSR with DLSS.
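
As a back-of-the-envelope on the resolutions involved (assuming the usual 50%-per-axis scale for DLSS Performance and a 4x DSR factor; exact factors vary by game and driver settings):

```python
display = (1920, 1080)
dsr_factor = 2                                   # 4x DSR = 2x per axis (assumed)
dlss_scale = 2                                   # Performance mode = 50% per axis (assumed)

dsr_output = (display[0] * dsr_factor, display[1] * dsr_factor)             # game outputs 3840x2160
dlss_internal = (dsr_output[0] // dlss_scale, dsr_output[1] // dlss_scale)  # renders 1920x1080
print(dsr_output, dlss_internal)
# DLSS reconstructs 1920x1080 -> 3840x2160, then DSR downsamples 3840x2160 back
# to the 1920x1080 display: a 4:1 downsample at roughly native-resolution render cost.
```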
 
No, we don't know that at all. There are a lot of "enthusiasts" who spend big bucks on PC hardware that just collects dust while they browse YouTube. Because buying stuff is fun.
 