Upscaling Technology Has Become A Crutch

What game?
I would like to compare as you make claim after claim, but never document your claims.
And "native" doesn't mean what you think it does.
A lot happening on screen is not "native".
Whoa bud, tone down the aggressiveness. We're talking about upscaling here, no one is insulting your family...wow.

Now what do you mean by what game? You asked me earlier for games that have both TAA and no TAA options. I provided some for you.

Yes, I'm aware that native doesn't mean all effects are native. However, what you fail to recognize is that certain image quality concessions might be more bothersome to others than they are to you. As a result, what you might deem an acceptable compromise might be unacceptable to others. The whole reason the subreddits I referenced exist is that there is a group of people who find those concessions unacceptable. It would be nice for other options to be provided to users.
 
What game?
I would like to compare as you make claim after claim, but never document your claims.
Boss has only made one claim. He's referred to examples by name.
And "native" doesn't mean what you think it does.
Maybe you're making assumptions and Boss knows exactly what 'native' means but is using it in the same way people like DF do?

Either way, you could definitely tone it down and aim to make the exchange more conversational.
 
Ratchet & Clank PC ports, Spider-Man PC ports I believe. You can turn off TAA in Lies of P and Doom/Doom Eternal.

This video shows comparisons between no AA and TAA. I'm trying to find the video that also includes DLSS and when I find it, I will post it.

And comparisons of no AA vs DLAA (and vs TAA)?
 
And comparisons of no AA vs DLAA (and vs TAA)?
Waiting for the comparison shots, but DLAA and TAA both cause blurriness; that's why games add sharpening filters alongside these AA methods. DLAA looks nicer and has better motion handling IMO, but it's all very blurry.
 
Waiting for the comparison shots, but DLAA and TAA both cause blurriness; that's why games add sharpening filters alongside these AA methods. DLAA looks nicer and has better motion handling IMO, but it's all very blurry.
That is the opposite experience of mine.

DLAA > DLSS > TAA and no AA simply has horribad jaggies even at 4K.

Will not be home before Jan (with the family for Xmas), but I do look forward to seeing Boss back up his claims with more than "Go to Reddit and read"...
 
Temporal accumulation and stochastic rendering are real things. Going forward, finding ways to disable TAA is going to mean wrecking image quality, because rendering techniques will rely on temporal information. That should already be true for quite a few games. As people have already pointed out, many parts of rendering have run at 1/2 or 1/4 resolution for a long time. The entire image is under-sampled when "rasterizing," which is why it needs anti-aliasing in the first place. You could say that "rasterizing" is a "crutch" for good performance the same way you can say TAA is a "crutch" for good performance.

Gamers have a preference for "sharpness." Personally, I'd rather see a soft image with more stability than a sharp image with less. For games chasing realism, 1080p Blu-ray should be the target. Games don't even come close to that yet.
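To make the "re-use samples over time" idea concrete, here is a minimal sketch of temporal accumulation as an exponential moving average of jittered per-frame samples. This is only an illustration of the principle, not any engine's actual TAA (real implementations also reproject the history with motion vectors and clamp or reject stale samples):

```cpp
#include <cstddef>
#include <vector>

// Sketch of temporal accumulation: each frame contributes one jittered sample
// per pixel, and an exponential moving average blends it into a history
// buffer, so the effective sample count per pixel grows over time. Real TAA
// additionally reprojects the history with motion vectors and clamps or
// rejects stale samples; all of that is omitted here.
struct TemporalAccumulator {
    std::vector<float> history; // accumulated value per pixel (one channel)
    float alpha = 0.1f;         // blend weight given to the newest sample

    explicit TemporalAccumulator(std::size_t pixelCount)
        : history(pixelCount, 0.0f) {}

    // currentFrame holds this frame's undersampled (jittered) shading result.
    void accumulate(const std::vector<float>& currentFrame) {
        for (std::size_t i = 0; i < history.size(); ++i)
            history[i] = history[i] * (1.0f - alpha) + currentFrame[i] * alpha;
    }
};
```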
 
The need for complex rasterized effects and the modest console capabilities necessitated the presence of TAA.

1- Modern materials and shaders had severe aliasing with MSAA, as MSAA can't deal with shader aliasing at all.

2- The need for complex lighting meant going with Deferred Shading, which broke MSAA and made it almost ineffective in combating aliasing for most parts of the image; the cost of MSAA also essentially tripled with Deferred Shading.

3- For games made primarily for consoles, TAA now helps with the rendering of hair, vegetation, screen space reflections, screen space ambient occlusion, and even screen space shadows and global illumination. In most games you can't turn TAA off, because it would break the rendering of the game.

It's not about a crutch, it's about rasterization reaching its limits and needing temporal accumulation and layers of screen space post-process lighting ("deferred shading") to do its thing.
 
Why does MSAA fail so badly for deferred rendering? Is there any way to ameliorate that, or is it just not possible due to the way MSAA works? I'm honestly looking to be educated on this so I have a better understanding.
 
Ratchet & Clank PC ports, Spider-Man PC ports I believe. You can turn off TAA in Lies of P and Doom/Doom Eternal.

This video shows comparisons between no AA and TAA. I'm trying to find the video that also includes DLSS and when I find it, I will post it.


A video consisting of still images is a little silly to use as supporting evidence for your position, when one of the reasons TAA has gained such prominence is that it can actually deal with subpixel/shader aliasing. MSAA/FXAA/SMAA simply don't address this at all, which you can easily see when you switch from TAA/DLSS to SMAA/FXAA and watch as you move through a world and it's just a sea of shimmering pixels, blinking constantly in and out of existence. If games looked the same in stills as they do in motion, then yeah - I would hate TAA too.

On small (22-27") displays (and especially with older games), perhaps I can understand some preferring the look of SMAA/MSAA. But with more modern games stuffed to the brim with fine detail on large TV's, I'd venture to guess those that would prefer those methods vs at least a decent TAA to be in a decidedly small minority. SMAA/FXAA/TAA are very similar in performance cost, TAA just didn't become the de-facto method because developers formed a TAA cabal, it gained prominence because SMAA just looks like shit in motion with the assets of modern games. Just switch to SMAA in Spiderman, the Horizon games etc and move around the world - it's a mess.

You. Need. Temporal. Data.

Modern releases look incredibly blurry to me, and it seems developers/IHVs agree because now every game has a built in sharpening filter slider lol. Games didn't used to need sharpening filters at native resolution.

TAA fundamentally makes games blurrier. I think this makes games look worse. Some prefer the look. I also don't love how TAA's poor outcomes have led to a whole new set of vendor-specific, mostly proprietary replacements for it (i.e. DLAA, DLSS, FSR at native res or upscaling, XeSS).

You don't have to use vendor-specific technologies, you can simply use downscaling. The reason you don't, of course, is that it's ridiculously expensive. TAA has downsides, yes, but everything in consumer graphics is a compromise. TAA was the best compromise at the time to deal with advanced materials rendering in a limited performance budget.

Edit: Should mention I'm all for gamers having choice, and even though I don't like the look of it most times, I'm glad devs like Nixxes give you the option for at least something like SMAA. However, as @Scott_Arm and @DavidGraham have detailed, so many modern rendering techniques require that temporal data to function properly. An example of this was the Resident Evil 3 DLSS mod. When I was getting these artifacts, my presumption (and that of the modder responsible for creating it) was that DLSS was interfering with a particular post-process pipeline that hopefully they could bypass. They never could, because the problem was not DLSS - the problem was that it just wasn't the game's own TAA. Turning off DLSS, but also turning off TAA, produces the same flashbulb artifacts. One of those games that basically just shits the bed without TAA (and a pretty old one at that!).
 
Why does MSAA fail so badly for deferred rendering? Is there any way to ameliorate that, or is it just not possible due to the way MSAA works? I'm honestly looking to be educated on this so I have a better understanding.
MSAA works only on geometry edges. Traditionally, you would shade the polygons, then apply MSAA on the edge polygons. It worked and all was fine.

Deferred Shading changed that: it decoupled shading and moved it into a later "deferred" stage. So you render the image with basic shading, then process the image, analytically and algorithmically applying the lighting to it, shading it in screen space. You do that in multiple "passes"; it's like a Photoshop-style program smartly shading the image in 2D space using analytical 3D information.

Now, because you've done your shading in multiple 2D passes, MSAA can't work: it has lost the geometrical 3D information of the scene, so it doesn't know where the edges are. If you try to add that 3D information back, your memory footprint increases significantly, destroying your performance. You would need more VRAM and more memory bandwidth for no image quality benefit, and MSAA would still not handle the rapid changes in lighting and the high-frequency details in the scene. Imagine needing 1GB to render a scene with Deferred Shading: 4xMSAA essentially quadruples that to 4GB with no image quality improvement.

Also, if you force MSAA on, multiple samples' worth of 3D information has to be processed in the passes, which means doing more shading operations, which means the cost is bigger.

Forward Rendering is the solution to get MSAA working, but Forward shades pixels needlessly, because it shades them in 3D space even if the pixels are not visible in the final image. With each light added to the scene, you need to shade each pixel again and again and again, making the cost of multiple lights prohibitively expensive. If you know your game has a limited number of lights, go with Forward Shading; if not, you go Deferred, as it shades only the visible pixels in 2D space, which makes adding multiple lights manageable.
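To make that cost difference concrete, here is a toy CPU-side sketch of the two loop structures. The types and the shade() stand-in are invented for illustration; real renderers do this on the GPU:

```cpp
#include <vector>

struct Fragment { float nx, ny, nz, r, g, b; bool visible; };
struct Light    { float x, y, z, intensity; };

float shade(const Fragment&, const Light&) { return 0.0f; } // stand-in BRDF

// Forward: every rasterized fragment is shaded against every light, even
// fragments that are later overdrawn. Cost ~ O(fragments * lights).
float forwardPass(const std::vector<Fragment>& frags,
                  const std::vector<Light>& lights) {
    float total = 0.0f;
    for (const Fragment& f : frags)
        for (const Light& l : lights)
            total += shade(f, l);
    return total;
}

// Deferred: rasterization first writes surface attributes to a G-buffer;
// lighting then runs only on the surviving visible pixel per screen
// position. Cost ~ O(visible_pixels * lights).
float deferredPass(const std::vector<Fragment>& gbuffer,
                   const std::vector<Light>& lights) {
    float total = 0.0f;
    for (const Fragment& f : gbuffer)
        if (f.visible)
            for (const Light& l : lights)
                total += shade(f, l);
    return total;
}
```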

Note that multiple shadow-casting lights also break Deferred Shading, as shadows work in a "forward" manner: you render the entire scene from the perspective of the light source to determine which areas are in shadow. Adding multiple lights this way means rendering the scene multiple times, so we are back to the forward rendering problem, which is why modern games don't have many shadow-casting lights.
 
Why does MSAA fail so badly for deferred rendering? Is there any way to ameliorate that, or is it just not possible due to the way MSAA works? I'm honestly looking to be educated on this so I have a better understanding.
MSAA doesn't play all that well with a deferred rendering pipeline which is composited with many separate rendering passes (SSR/AO/lighting/decals/other screen space filters/passes/etc.) that all have to repeatedly access the G-buffer during each of their own phases ...

In a deferred renderer designed for mobile devices where memory bandwidth is extremely limited we would often want to 'collapse' all these different rendering passes into larger more unified rendering passes to lower the amount of memory traffic to the G-buffer ...

On more powerful platforms such as PC or consoles, developers prefer to trade off memory bandwidth and divide the rendering pipeline into its many separate rendering passes for higher GPU occupancy. Introducing MSAA in this case potentially breaks this fragile balance, since a naive implementation will disproportionately impact memory bandwidth consumption in comparison to the shading cost. The amount of geometry information that is encoded for an MSAA G-buffer up front scales linearly with respect to the sample count ...
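As a rough back-of-the-envelope illustration of that linear scaling (the G-buffer layout and resolution below are invented for the example, not any particular engine's):

```cpp
#include <cstdio>

// Back-of-the-envelope G-buffer footprint: storage scales linearly with the
// MSAA sample count, because every sample needs its own set of surface
// attributes. The layout (four 4-byte render targets) is illustrative only.
int main() {
    const long long width = 3840, height = 2160;
    const long long bytesPerSample = 4 /*albedo*/ + 4 /*normals*/ +
                                     4 /*material*/ + 4 /*depth*/;
    const int sampleCounts[] = {1, 2, 4, 8};
    for (int samples : sampleCounts) {
        long long bytes = width * height * bytesPerSample * samples;
        std::printf("%dx: %.0f MB\n", samples, bytes / (1024.0 * 1024.0));
    }
    return 0;
}
```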
 
Note that multiple shadow-casting lights also break Deferred Shading, as shadows work in a "forward" manner: you render the entire scene from the perspective of the light source to determine which areas are in shadow. Adding multiple lights this way means rendering the scene multiple times, so we are back to the forward rendering problem, which is why modern games don't have many shadow-casting lights.
I just wanted to add that with RT, you don't use shadow passes. You can just iterate through all the lights in the scene in world space and test for light vectors that are blocked between the shaded point and the light source, and shade the shadows that way. Not only is this the best solution because it eliminates a separate pass, it is also accurate enough to capture shadows of objects that are very small in the world (e.g. forks, spoons, books) that shadow maps can't capture.
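A minimal sketch of that idea, with a placeholder standing in for the actual ray cast against the scene's acceleration structure (the types and names here are invented for illustration):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Light { Vec3 position; float intensity; };

// Placeholder for the real ray cast against a BVH/acceleration structure:
// returns true if any geometry lies between the two points. Always "clear"
// in this stub so the sketch is self-contained.
bool occluded(const Vec3& /*from*/, const Vec3& /*to*/) { return false; }

// Per shaded point: iterate the lights in world space and cast one shadow ray
// toward each. No shadow-map passes and no per-light scene re-render; even
// tiny casters (forks, spoons, books) block the ray correctly because the
// test is against actual world-space geometry.
float directLighting(const Vec3& p, const std::vector<Light>& lights) {
    float sum = 0.0f;
    for (const Light& l : lights) {
        if (occluded(p, l.position))
            continue;                         // blocked -> point is in shadow
        float dx = l.position.x - p.x;
        float dy = l.position.y - p.y;
        float dz = l.position.z - p.z;
        float d2 = dx * dx + dy * dy + dz * dz;
        sum += l.intensity / (d2 + 1e-6f);    // inverse-square falloff, no BRDF
    }
    return sum;
}
```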
 
Why does MSAA fail so badly for deferred rendering?
For 4x MSAA, you need to keep the 4x resolution render targets for the deferred shading. This quadruples the memory footprint and the time spent on G-buffer rasterization. And you need to shade the extra pixels on edges, adding even more time to the 4x G-buffer rasterization workload. The complexity and associated costs of MSAA are unnecessary in modern renderers (plus the tone mapping would eat your extra MSAA shades for lunch anyway if you do the MSAA resolve in a physically plausible way before it). An output-resolution coverage mask with a few bits per pixel, combined with temporal accumulation, is sufficient for achieving great edge AA and reconstruction, which, at quarter shading resolution, would be able to surpass the quality of MSAA at quadruple resolution. Too bad HW rasterizers can't export such a mask, but it should be perfectly doable with something like RT for primary visibility.
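A small numeric illustration of the tone-mapping point, using Reinhard as a stand-in for any nonlinear tone map and invented sample values. Averaging HDR samples in linear light before the tone map collapses much of the edge gradient that the extra MSAA shades were supposed to buy:

```cpp
#include <cstdio>

// Reinhard operator as a stand-in for any nonlinear tone map.
float tonemap(float hdr) { return hdr / (1.0f + hdr); }

// An edge pixel whose two MSAA samples straddle a dark surface (0.0) and a
// bright one (16.0 in linear HDR). Resolving (averaging) in linear light
// before the nonlinear tone map, which is the physically plausible order,
// lands the pixel near the bright side in display space; the smooth edge
// gradient the extra shades paid for is largely gone.
int main() {
    float a = 0.0f, b = 16.0f;
    float resolveThenTonemap = tonemap((a + b) * 0.5f);          // ~0.889
    float tonemapThenResolve = (tonemap(a) + tonemap(b)) * 0.5f; // ~0.471
    std::printf("resolve then tonemap: %.3f\n", resolveThenTonemap);
    std::printf("tonemap then resolve: %.3f\n", tonemapThenResolve);
    return 0;
}
```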
 
Temporal accumulation and stochastic rendering are real things. Going forward, finding ways to disable TAA is going to mean wrecking image quality, because rendering techniques will rely on temporal information. That should already be true for quite a few games. As people have already pointed out, many parts of rendering have run at 1/2 or 1/4 resolution for a long time. The entire image is under-sampled when "rasterizing," which is why it needs anti-aliasing in the first place. You could say that "rasterizing" is a "crutch" for good performance the same way you can say TAA is a "crutch" for good performance.

Gamers have a preference for "sharpness." Personally, I'd rather see a soft image with more stability than a sharp image with less. For games chasing realism, 1080p Blu-ray should be the target. Games don't even come close to that yet.
You're entitled to your preferences for sure, but others prefer different things. While many effects are at 1/2 or even 1/4 resolution, I'd rather have the option to increase the resolution of said effects in exchange for disabling TAA. If the performance cost ramps up, so be it, but let it be a decision that I make for myself. Let it not be one that's forced upon me. For me personally, the TAA blur, ghosting, and artifacts are far more troublesome/bothersome than subpixel shimmering. There are many others who share this view as well. The issues with TAA have very little to do with sharpness and more to do with clarity.
A video consisting of still images is a little silly to use as supporting evidence for your position, when one of the reasons TAA has gained such prominence is that it can actually deal with subpixel/shader aliasing. MSAA/FXAA/SMAA simply don't address this at all, which you can easily see when you switch from TAA/DLSS to SMAA/FXAA and watch as you move through a world and it's just a sea of shimmering pixels, blinking constantly in and out of existence. If games looked the same in stills as they do in motion, then yeah - I would hate TAA too.

On small (22-27") displays (and especially with older games), perhaps I can understand some preferring the look of SMAA/MSAA. But with more modern games stuffed to the brim with fine detail on large TV's, I'd venture to guess those that would prefer those methods vs at least a decent TAA to be in a decidedly small minority. SMAA/FXAA/TAA are very similar in performance cost, TAA just didn't become the de-facto method because developers formed a TAA cabal, it gained prominence because SMAA just looks like shit in motion with the assets of modern games. Just switch to SMAA in Spiderman, the Horizon games etc and move around the world - it's a mess.
If the tradeoff is subpixel shimmering/aliasing versus a blurry image and ghosting, then the wrong tradeoff has been made. That is indeed the root of the disagreement. The question is: what tradeoffs are suitable and acceptable to the user? There's a subsection of users who strongly disagree with the current tradeoffs being made. Maybe this discussion would be best served in the PC forum, as on PC users have more options to customize the image to their needs?
You. Need. Temporal. Data.
I respectfully disagree. Finally, what I think you're failing to understand (and IQandHDR struggled with this as well) is that there will be people who don't like or necessarily agree with what you like. You may be fine with TAA and the positives and negatives that come along with it. Others may not be fine with it at all. It would be nice if more options were presented to users.
 
@Boss The problem is you are essentially ruling out temporal data and stochastic rendering as valid ways to come up with new techniques for rasterization, which essentially leaves that entire field of rendering dead. The last drops of blood are already being squeezed out of rasterized performance. The big GPU upgrades you'd need to continue getting gains in rasterized performance are not really coming. You'll be playing old-looking games at slightly higher resolutions for a long time if you don't accept the reality of where the research is going. If you want to, you can just super-sample your games to increase the number of samples per pixel. Get an Nvidia card and use DSR. The option for what you want already exists. You won't be excited about the performance.
 
@Boss The problem is you are essentially ruling out temporal data and stochastic rendering as valid ways to come up with new techniques for rasterization, which essentially leaves that entire field of rendering dead. The last drops of blood are already being squeezed out of rasterized performance. The big GPU upgrades you'd need to continue getting gains in rasterized performance are not really coming. You'll be playing old-looking games at slightly higher resolutions for a long time if you don't accept the reality of where the research is going. If you want to, you can just super-sample your games to increase the number of samples per pixel. Get an Nvidia card and use DSR. The option for what you want already exists. You won't be excited about the performance.
I do already have an Nvidia card (RTX 4080) and supersample when possible. In fact, I was planning on purchasing a 5090, but after hearing the leaked prices from Vex's YouTube video, it may be a long, long time before I invest any additional money into PC gaming hardware. Also, while I am grateful that I was able to purchase a 4080, I also recognize that it's out of the reach of many. That cannot be the de facto solution we look towards. Now, if hardware prices were more reasonable, then brute-forcing our way to our desired results would not be a problem for me. It is nice to dream, though.
 
I do already have an Nvidia card (RTX 4080) and supersample when possible. In fact, I was planning on purchasing a 5090, but after hearing the leaked prices from Vex's YouTube video, it may be a long, long time before I invest any additional money into PC gaming hardware. Also, while I am grateful that I was able to purchase a 4080, I also recognize that it's out of the reach of many. That cannot be the de facto solution we look towards. Now, if hardware prices were more reasonable, then brute-forcing our way to our desired results would not be a problem for me. It is nice to dream, though.

Then you do understand the contradiction here? Brute-forcing is a dream. GPUs are too expensive. It cannot be the de facto solution. So what's left to do? You re-use good data instead of throwing it away every frame.
 
DLAA > DLSS > TAA and no AA simply has horribad jaggies even at 4K.
Comparing this to older forward-rendered games with MSAA, the older stuff is way sharper to me. I'm not sure how people can say stuff released today with modern rendering pipelines doesn't look blurrier; otherwise, why would we suddenly need sharpness sliders in every game?
You don't have to use vendor-specific technologies, you can simply use downscaling. The reason you don't, of course, is that it's ridiculously expensive. TAA has downsides, yes, but everything in consumer graphics is a compromise. TAA was the best compromise at the time to deal with advanced materials rendering in a limited performance budget.
I mean this is just a technicality lol. Yeah we could all use SSAA but like you said it's not practical and nobody is ever going to be able to play the latest titles with significant SSAA simply due to how games will scale to the hardware that's available on release. TAA might have been the best compromise but I have to say it's almost as if these games are regressing graphically.

For example, I find Halo 5 looks way better than Halo Infinite. Infinite looks like a blurry mess unless you crank up the sharpening, and then it looks like an oversharpened blob.
 
native resolution itself is a crutch
Why do you think native resolution is a crutch? Just curious... I ask 'cos some of my colleagues who are AMD fans usually say they don't like DLSS or XeSS at all (I told them about DLAA, but they weren't impressed either). They mention that there's nothing like native resolution and that they just prefer hardware that runs games at native resolution, and that's it. Native is, like, more pure to them.
 
Comparing this to older forward-rendered games with MSAA, the older stuff is way sharper to me. I'm not sure how people can say stuff released today with modern rendering pipelines doesn't look blurrier; otherwise, why would we suddenly need sharpness sliders in every game?

I mean this is just a technicality lol. Yeah we could all use SSAA but like you said it's not practical and nobody is ever going to be able to play the latest titles with significant SSAA simply due to how games will scale to the hardware that's available on release. TAA might have been the best compromise but I have to say it's almost as if these games are regressing graphically.

For example, I find Halo 5 looks way better than Halo Infinite. Infinite looks like a blurry mess unless you crank up the sharpening, and then it looks like an oversharpened blob.
Still waiting for you to document your claims.
 