Upscaling Technology Has Become A Crutch

What game?
I would like to compare as you make claim after claim, but never document your claims.
And "native" doesn't mean what you think it does.
A lot happening on screen is not "native".
Whoa bud, tone down the aggressiveness. We're talking about upscaling here; no one is insulting your family... wow.

Now what do you mean by what game? You asked me earlier for games that have both TAA and no TAA options. I provided some for you.

Yes, I'm aware that native doesn't mean all effects are native. However, what you fail to recognize is that certain image quality concessions might be more bothersome to others than they are to you. As a result, what you might deem an acceptable compromise might be unacceptable to others. The whole reason the subreddits I referenced exist is that there is a group of people who find those concessions unacceptable. It would be nice for other options to be provided to users.
 
What game?
I would like to compare as you make claim after claim, but never document your claims.
Boss has only made one claim. He's referred to examples by name.
And "native" doesn't mean what you think it does.
Maybe you're making assumptions, and Boss knows exactly what 'native' means but is using it in the same way people like DF do?

Either way, you could definitely tone down your replies and aim to make them more conversational.
 
Ratchet & Clank PC ports, Spider-Man PC ports I believe. You can turn off TAA in Lies of P and Doom/Doom Eternal.

This video shows comparisons between no AA and TAA. I'm trying to find the video that also includes DLSS, and when I find it, I will post it.

And comparisons of no AA vs DLAA (and vs TAA)?
 
Waiting for the comparison shots, but DLAA and TAA both cause blurriness; that's why games add sharpening filters alongside these AA methods. DLAA looks nicer and has better motion handling IMO, but it's all very blurry.
That is the opposite experience of mine.

DLAA > DLSS > TAA and no AA simply has horribad jaggies even at 4K.

Will not be home before Jan (at the family's for Xmas), but I do look forward to seeing Boss back up his claims with more than "Go to reddit and read"...
 
Temporal accumulation and stochastic rendering are real things. Going forward, finding ways to disable TAA is going to mean wrecking image quality, because rendering techniques will rely on temporal information. That should already be true for quite a few games. As people have already pointed out, many parts of rendering have run at 1/2 or 1/4 resolution for a long time. The entire image is under-sampled when "rasterizing", which is why it needs anti-aliasing in the first place. You could say that "rasterizing" is a "crutch" for good performance the same way you can say TAA is a "crutch" for good performance.
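If it helps, here's a minimal sketch of what temporal accumulation boils down to (assuming a plain exponential history blend; real TAA also reprojects the history with motion vectors and rejects stale samples, and the 0.1 blend factor is just a common ballpark, not a standard):

```cpp
struct Color { float r, g, b; };

// Each frame the camera is jittered sub-pixel, so 'current' carries a
// slightly different sample position; blending over many frames converges
// toward a supersampled result.
Color accumulate(Color history, Color current, float alpha = 0.1f) {
    // Exponential moving average: keep 90% of the history, take 10% new.
    return { history.r + (current.r - history.r) * alpha,
             history.g + (current.g - history.g) * alpha,
             history.b + (current.b - history.b) * alpha };
}
```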

Gamers have a preference for "sharpness." Personally, I'd rather see a soft image with more stability than a sharp image with less. For games chasing realism, 1080p Blu-ray should be the target. Games don't even come close to that yet.
 
The need for complex rasterized effects and the modest console capabilities necessitated the presence of TAA.

1-Modern materials and shaders had severe aliasing with MSAA as MSAA can't deal with shader aliasing at all.

2-The need for complex lighting meant going with Deferred Shading, which broke MSAA and made it almost ineffective in combating aliasing for most parts of the image; the cost of MSAA also essentially tripled with Deferred Shading.

3-For games made primarily on consoles, TAA now helps with rendering of hair, vegetation, screen space reflections, screen space ambient occlusion, and even screen space shadows and global illumination. In most games you can't turn TAA off because it will break the rendering of the game.

It's not about a crutch, it's about rasterization reaching its limits and needing temporal accumulation and layers of screen-space post-process lighting ("deferred shading") to do its thing.
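To make the "stochastic rendering" part concrete, here's a rough sketch of dithered transparency, one of the techniques that leans on TAA (the hash pattern is illustrative, not from any particular engine). A 50%-transparent surface is drawn fully opaque in roughly half the frames, and temporal accumulation averages it back to 50%; turn TAA off and you see raw noise:

```cpp
#include <cstdint>

// Illustrative per-pixel, per-frame noise (not a production pattern).
float ditherNoise(uint32_t x, uint32_t y, uint32_t frame) {
    uint32_t h = (x * 73856093u) ^ (y * 19349663u) ^ (frame * 83492791u);
    return (h & 0xFFFFu) / 65535.0f;
}

// Shade the pixel opaquely only when the noise falls under the alpha value;
// without temporal accumulation this reads as static, with it the dither
// averages out to the intended transparency.
bool passesDitheredAlphaTest(float alpha, uint32_t x, uint32_t y, uint32_t frame) {
    return ditherNoise(x, y, frame) < alpha;
}
```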
 
Why does MSAA fail so badly for deferred rendering? Is there any way to ameliorate that, or is it just not possible due to the way MSAA works? I'm honestly looking to be educated in this situation so I have a better understanding.
 
Ratchet & Clank PC ports, Spider-Man PC ports I believe. You can turn off TAA in Lies of P and Doom/Doom Eternal.

This video shows comparisons between no AA and TAA. I'm trying to find the video that also includes DLSS, and when I find it, I will post it.


A video consisting of still images, which is a little silly to use as supportive evidence for your position when one of the reasons TAA has gained such prominence is that it can actually deal with subpixel/shader aliasing. MSAA/FXAA/SMAA simply don't address this at all, which you can easily see when you switch from TAA/DLSS to SMAA/FXAA and watch as you move through a world and it's just a sea of shimmering pixels, blinking constantly in and out of existence. If games looked the same in stills as they did in motion, then yeah - I would hate TAA too.

On small (22-27") displays (and especially with older games), perhaps I can understand some preferring the look of SMAA/MSAA. But with more modern games stuffed to the brim with fine detail on large TVs, I'd venture to guess those that would prefer those methods vs at least a decent TAA to be in a decidedly small minority. SMAA/FXAA/TAA are very similar in performance cost; TAA just didn't become the de facto method because developers formed a TAA cabal, it gained prominence because SMAA just looks like shit in motion with the assets of modern games. Just switch to SMAA in Spiderman, the Horizon games etc. and move around the world - it's a mess.

You. Need. Temporal. Data.

Modern releases look incredibly blurry to me, and it seems developers/IHVs agree, because now every game has a built-in sharpening filter slider lol. Games didn't use to need sharpening filters at native resolution.

TAA fundamentally makes games blurrier. I think this makes games look worse. Some prefer the look. I also don't love how TAA's poor outcomes have led to a whole new set of vendor-specific, mostly proprietary replacements for it (i.e. DLAA, DLSS, FSR at native res or upscaling, XeSS).

You don't have to use vendor-specific technologies, you can simply use downscaling. The reason you don't, of course, is that it's ridiculously expensive. TAA has downsides, yes, but everything in consumer graphics is a compromise. TAA was the best compromise at the time to deal with advanced materials rendering in a limited performance budget.
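For reference, "downscaling" here is plain supersampling: render above target resolution and filter down. A minimal sketch with a 2x2 box filter (so 4x the pixels to shade for a 2x downscale, which is exactly the ridiculous expense):

```cpp
#include <vector>

struct Color { float r, g, b; };

// 'hi' is a (2w x 2h) image; average each 2x2 block into one output pixel.
std::vector<Color> downsample2x(const std::vector<Color>& hi, int w, int h) {
    std::vector<Color> lo(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            Color acc{0, 0, 0};
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    const Color& c = hi[(2 * y + dy) * (2 * w) + (2 * x + dx)];
                    acc.r += c.r; acc.g += c.g; acc.b += c.b;
                }
            lo[y * w + x] = { acc.r / 4, acc.g / 4, acc.b / 4 };
        }
    return lo;
}
```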

Edit: Should mention I'm all for gamers having choice, and even though I don't like the look of it most times, I'm glad devs like Nixxes give you the option for at least something like SMAA. However, as @Scott_Arm and @DavidGraham have detailed, so many modern rendering techniques require that temporal data to function properly. An example of this was the Resident Evil 3 DLSS mod. When I was getting these artifacts, my presumption (and that of the modder responsible for creating it) was that DLSS was interfering with a particular post-process pipeline that hopefully they could bypass. They never could, because the problem was not DLSS - the problem was that it just wasn't the game's own TAA. Turning off DLSS, but also turning off TAA, produces the same flashbulb artifacts. One of those games that basically just shits the bed without TAA (and a pretty old one at that!).
 
Why does MSAA fail so badly for deferred rendering? Is there any way to ameliorate that, or is it just not possible due to the way MSAA works? I'm honestly looking to be educated in this situation so I have a better understanding.
MSAA works only on geometry edges: traditionally, you would shade the polygons, then apply MSAA on polygon edges. It worked and all was fine.

Deferred Shading changed that: it decoupled shading and moved it into a later "deferred" stage. So you render the image with basic shading, then process the image, analytically and algorithmically applying the lighting on it, shading it in screen space. You do that in multiple "passes"; it's like a Photoshop program smartly shading the image in 2D space using analytical 3D information.

Now, because you've done your shading in multiple 2D passes, MSAA can't work, because it has lost the geometrical 3D information of the scene: it doesn't know where the edges are. If you try to add that 3D information back, your memory footprint will significantly increase, destroying your performance; you would need more VRAM and more memory bandwidth for no image quality benefit, as MSAA will still not handle the rapid changes in lighting and high-frequency detail in the scene. Imagine needing 1GB for rendering a scene with Deferred Shading: 4xMSAA will essentially quadruple that to 4GB with no image quality improvement.
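The scaling is easy to see with back-of-the-envelope numbers (the per-pixel layer sizes below are assumptions for illustration, not from any specific engine):

```cpp
#include <cstdio>

int main() {
    const int width = 1920, height = 1080;
    // Hypothetical "fat" G-buffer: albedo + normals + material params + depth.
    const int bytesPerPixel = 4 + 8 + 4 + 4;  // 20 bytes per pixel
    // G-buffer size grows linearly with the MSAA sample count.
    for (int samples : {1, 2, 4, 8}) {
        double mb = 1.0 * width * height * bytesPerPixel * samples / (1024 * 1024);
        printf("%dx samples -> %.0f MB G-buffer\n", samples, mb);
    }
    return 0;
}
```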

Also, if you force MSAA on, multiple samples' worth of information will need to be processed in the passes, which means doing more shading operations, which means the cost is bigger.

Forward Rendering is the solution to get MSAA to work, but Forward needlessly shades pixels, because it shades them in 3D space even if the pixels are not visible in the final image, and with each light added to the scene you need to shade each pixel again and again and again, making the cost of adding multiple lights prohibitively expensive. If you know your game has a limited number of lights, go with Forward Shading; if not, go Deferred, as it will shade only the visible pixels in 2D space, which means adding multiple lights is now manageable.
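Roughly, the shape of that cost difference in code (a pseudostructure sketch, not a real renderer; the types and loop bodies are stand-ins):

```cpp
#include <vector>

struct Fragment { /* position, normal, material ... */ };
struct Light    { /* position, color ... */ };

// Forward: cost ~ rasterized fragments (including hidden/overdrawn ones) x lights.
void forwardShade(const std::vector<Fragment>& rasterizedFragments,
                  const std::vector<Light>& lights) {
    for (const Fragment& f : rasterizedFragments)   // includes occluded surfaces
        for (const Light& l : lights) { (void)f; (void)l; /* shade(f, l) */ }
}

// Deferred: a geometry pass fills the G-buffer with exactly one visible
// fragment per pixel, then cost ~ visible pixels x lights, in 2D passes.
void deferredShade(const std::vector<Fragment>& gBuffer,  // one entry per pixel
                   const std::vector<Light>& lights) {
    for (const Light& l : lights)
        for (const Fragment& p : gBuffer) { (void)p; (void)l; /* shade(p, l) */ }
}
```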

Note that multiple shadow-casting lights also break Deferred Shading, as shadows work in a "forward" manner, meaning rendering the entire scene from the perspective of the light source to determine which areas are in shadow. Adding multiple lights this way means rendering the scene multiple times, so we are back to the forward rendering problem, which is why modern games don't have many shadow-casting lights.
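In the same sketch style, the shadow cost looks like this (all types and the depth pass are stand-ins, not a real API):

```cpp
#include <vector>

struct Mesh { /* geometry */ };
struct Light { /* position, direction ... */ };
struct DepthMap { /* shadow map texels */ };

// Stand-in: a real renderer would rasterize the whole scene from the
// light's point of view into a depth buffer here.
DepthMap renderDepthFromLight(const std::vector<Mesh>& scene, const Light& l) {
    (void)scene; (void)l;
    return {};
}

// N shadow-casting lights -> N extra full-scene renders per frame.
std::vector<DepthMap> renderShadowMaps(const std::vector<Mesh>& scene,
                                       const std::vector<Light>& shadowCasters) {
    std::vector<DepthMap> maps;
    for (const Light& l : shadowCasters)
        maps.push_back(renderDepthFromLight(scene, l));
    return maps;
}
```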
 
Why does MSAA fail so badly for deferred rendering? Is there any way to ameliorate that, or is it just not possible due to the way MSAA works? I'm honestly looking to be educated in this situation so I have a better understanding.
MSAA doesn't play all that well with a deferred rendering pipeline that is composited from many separate rendering passes (SSR/AO/lighting/decals/other screen-space filters/passes/etc.) which all have to repeatedly access the G-buffer during each of their own phases ...

In a deferred renderer designed for mobile devices where memory bandwidth is extremely limited we would often want to 'collapse' all these different rendering passes into larger more unified rendering passes to lower the amount of memory traffic to the G-buffer ...

On more powerful platforms such as PC or consoles, developers prefer to trade off memory bandwidth and divide the rendering pipeline into its many separate rendering passes for higher GPU occupancy. Introducing MSAA in this case potentially breaks this fragile balance, since a naive implementation will disproportionately impact memory bandwidth consumption in comparison to the shading cost. The amount of geometry information that is encoded for an MSAA G-buffer up front scales linearly with the sample count ...
 