PS5 Pro *spawn

When it comes to TAA, there are five areas for me:

  1. Coverage - How well does it handle jaggies/shimmering
  2. In-surface detail - How much the TAA has trashed the surface detail in pursuit of good coverage
  3. Sharpness - How sharp does the game look and is it over-sharpened
  4. Clarity while stationary - How blurry is the game when the camera isn't moving (This is the best case for TAA and is when it looks the best)
  5. Clarity while in motion - This is where most TAA implementations fall apart and why I don't like comparisons with TAA that only use a still camera

Every TAA implementation varies across all five areas; some will offer great clarity while in motion, but have poor coverage.

Others (like ND's) will offer good coverage and decent clarity, but destroy surface detail.
Honestly, if I had to take one of those negatives, I would take reduced surface detail every time. Anything that's not AI-assisted will fail at least one of those anyway.

So I'd say that the Naughty Dog TAA is one of the best. DLSS and PSSR will of course offer much better results, as long as they aren't oversharpened.
 
I would take reduced coverage and slightly more jaggies in exchange for a sharper image, good in-surface detail, and better motion clarity.
 
If someone didn't get your point, you didn't get it right.
Natural language is often ambiguous, particularly in the vernacular. Why be so judgy over a mix-up with a few words that's easily solved by asking for a clearer explanation? I'd rather people just debate civilly towards a common understanding - isn't that really what B3D is about?
But it's curious that the Pro is using lower settings than the quality mode on the base PS5.

However minor, it's as if developers have to choose between PSSR and better settings.
Not seeing the connection. How would higher settings conflict with PSSR's ability to operate?
 
I think he means that, in order to get that PSSR mode performing as it does on the Pro, they had to retain the lower settings of the current performance mode for performance reasons; otherwise they would have used the settings from the quality mode.
 
One of the nicest things on PC is being able to replace TAA with DLAA in most modern games. I can't tell you how nice of an upgrade that little toggle is for image quality.
 
Wondering which one I should buy for the PS5 Pro as expandable storage: the 980 Pro or the SN850X? Don't think the 990 Pro is worth it.
 
(T)AA doesn't improve vegetation detail. Nor can sharpening (which DLSS also uses) improve texture detail at this point without obvious and ugly sharpening artefacts.
No, you have it wrong. Upscaling cannot increase things that don't exist. DLSS performs DLAA prior to upscaling. Sharpening filters are done in post.

IIRC, in all upscaling algorithms to date, anti-aliasing is applied before the upscaling happens. That means TAA is destroying detail prior to being upscaled.

Case in point: the video below, which is worth fast-forwarding through to see the various areas where TAA applied a harsh Vaseline filter over parts of Halo Infinite.

TAA really killed all the detail the engine was putting out, honestly making it a waste of time to even bother rendering.

What you're seeing from PSSR is an anti-aliasing pass, significantly better than TAA, applied before the upscale occurs.

Detail preservation is on the AA side of the algorithm. Upscaling cannot create detail where none exists.
 
It's crazy to think that modern pixels get blurred by TAA and motion blur, and then go to a sample-and-hold display that blurs them even more.

3 levels of blur before they get to your eyes.
 
Yup, it’s also unfortunate that many console games ported to PC don’t have DLSS support.

Well, the real travesty is all this work put into adding more detail and shader complexity, all this talk about higher-resolution textures and SSDs. And then we smudge it the eff out once it's all said and done.
 
IIRC, in all upscaling algorithms to date, anti-aliasing is applied before the upscaling happens. That means TAA is destroying detail prior to being upscaled.
That's the opposite of what is happening. None of the temporal upscaling algorithms apply any kind of AA (except for some very basic morphological AA in the case of TSR) before upscaling. The idea behind all modern temporal upscalers is to first upsample the low res input image to a higher resolution, then accumulate details frame by frame in the higher output resolution by warping the previous frame (essentially resampling it via MVecs) using low res motion vectors for the moving objects (and typically calculating them on the fly for the camera projection to save bandwidth). Every resampling stage adds blur, as resampling is a weighted averaging of several pixels, and any averaging of neighboring pixels introduces some degree of blurring. However, there are techniques to minimize that.
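
To make the resampling point concrete, here is a minimal single-channel sketch of that warp-and-accumulate step in Python. The function names, the plain bilinear filter, and the exponential blend factor are illustrative choices, not any shipping upscaler's kernel:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a single-channel image at fractional coordinates.
    Bilinear weighting averages four neighbors, which is exactly
    the resampling blur described above."""
    h, w = img.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    x0, x1 = np.clip([x0, x0 + 1], 0, w - 1)
    y0, y1 = np.clip([y0, y0 + 1], 0, h - 1)
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def accumulate(history, current, mvec_x, mvec_y, alpha=0.1):
    """One temporal step: warp the high-res history with per-pixel
    motion vectors, then exponentially blend in the current frame."""
    h, w = current.shape
    out = np.empty_like(current)
    for y in range(h):
        for x in range(w):
            # Reproject: where was this pixel in the previous frame?
            warped = bilinear_sample(history,
                                     x - mvec_x[y, x], y - mvec_y[y, x])
            out[y, x] = (1 - alpha) * warped + alpha * current[y, x]
    return out
```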

The issue with TAA lies not just in resampling but also in the pixel rectification algorithm, which typically uses neighborhood color clamping. In this algorithm, for every pixel, a 3x3 neighborhood is checked in the current frame and in the previous warped frame (the history buffer). If the color in the 3x3 neighborhood of the previous frame differs significantly from the same neighborhood in the current frame, the history pixels that fall outside the current frame's color bounding box are either discarded or clamped to the current frame's color.
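
For concreteness, a sketch of the clamp variant of that rectification step, assuming RGB float arrays; the names and edge handling here are illustrative:

```python
import numpy as np

def rectify_history(current, warped_history, y, x):
    """Clamp one warped-history pixel into the color bounding box
    of the current frame's 3x3 neighborhood, as described above."""
    h, w = current.shape[:2]
    y0, y1 = max(y - 1, 0), min(y + 2, h)
    x0, x1 = max(x - 1, 0), min(x + 2, w)
    box = current[y0:y1, x0:x1]        # 3x3 window, clipped at edges
    lo = box.min(axis=(0, 1))          # per-channel AABB min
    hi = box.max(axis=(0, 1))          # per-channel AABB max
    # History outside the box is pulled onto it; subpixel features
    # absent from the current frame's 3x3 box are lost right here,
    # which is why TAA never converges for them.
    return np.clip(warped_history[y, x], lo, hi)
```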

The problem with this approach is that it not only discards disoccluded areas with neighborhood color clamping, but it also drops all small subpixel geometry or pixel details. This is why TAA never converges for subpixel details and introduces a significant amount of blur. Therefore, TAA is inherently lossy.

Modern DL-based temporal upscalers ditch color clamping altogether and instead use an autoencoder CNN network to determine where to blend objects and where not to (that's why they can get away with 8 bit precision in many cases). These networks are obviously much smarter than a simple heuristic like color clamping, and they can be trained on various cases. As a result, they can analyze what is happening in the frame, blending where necessary and avoiding blending where it is not.
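
A toy contrast with the clamp above; `blend_net` and `toy_net` are made-up stand-ins for the trained network, which in reality consumes far richer inputs:

```python
import numpy as np

def dl_rectify(current, warped_history, blend_net):
    """Instead of a hard AABB clamp, a trained network predicts a
    per-pixel blend factor from both frames, so valid subpixel
    history can survive. Any callable mapping the stacked frames
    to weights in [0, 1] works as a stand-in here."""
    features = np.concatenate([current, warped_history], axis=-1)
    alpha = blend_net(features)        # (H, W, 1); 0 keeps history
    return (1 - alpha) * warped_history + alpha * current

# Toy stand-in: blend toward the current frame where the frames
# disagree. A real network learns far richer rules, e.g. telling
# ghosting apart from legitimate subpixel detail.
toy_net = lambda f: np.clip(
    np.abs(f[..., :3] - f[..., 3:]).mean(axis=-1, keepdims=True) * 4.0,
    0.05, 1.0)
```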

FSR 2.0 does not eliminate color clamping, but it attempts to restore subpixel details by applying the pixel locking heuristic on top (with varying levels of success). As the name suggests, the pixel locking heuristic locks thin features and removes the locks using several other heuristics (by calculating depth discontinuities between frames, using the reactive mask, etc.). Unfortunately for FSR 2.0, these lock removing heuristics are quite fragile, so you often see high frequency ghosting on internal surfaces of objects with high reflectance, such as water, especially if these objects lack motion vectors. This causes water to often look messy with FSR 2.0, and the same issues arise with fire and other particle effects. As a result, FSR 2.0 typically faces more problems compared to DL methods.
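
A heavily simplified sketch of the locking idea with made-up thresholds; the real FSR 2.0 heuristics in AMD's FidelityFX source are considerably more involved:

```python
import numpy as np

def update_pixel_locks(luma, locks, depth, prev_depth, reactive,
                       ridge_thresh=0.2, depth_eps=0.02):
    """Lock pixels that look like thin (one-pixel-wide) features so
    rectification won't erase them; release locks on depth
    discontinuities between frames or where the reactive mask is
    hot (water, fire, particles). All thresholds are illustrative."""
    pad = np.pad(luma, 1, mode='edge')
    c = pad[1:-1, 1:-1]
    # A ridge stands out from both horizontal or both vertical
    # neighbors at once.
    horiz = np.minimum(c - pad[1:-1, :-2], c - pad[1:-1, 2:])
    vert = np.minimum(c - pad[:-2, 1:-1], c - pad[2:, 1:-1])
    new_locks = (np.abs(horiz) > ridge_thresh) | (np.abs(vert) > ridge_thresh)
    # The fragile part discussed above: deciding when to drop a lock.
    keep = (np.abs(depth - prev_depth) < depth_eps) & (reactive < 0.5)
    return (locks & keep) | new_locks
```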
 
Thanks mate. Great post for a quick learn for me. Makes a lot more sense the way you’ve written it out.

So it seems to make sense why there was a drive towards native 4K with TAA? The 3x3 colour clamping would have less of an effect because overall more pixels would be preserved?
 
Which is why I really, really don't understand people who still claim native + TAA is better; it clearly is not. I still play Gears 5 to this day, and the amount of ghosting I get with its TAA is unbearable; I really wish that game had DLSS.

Native + TAA had some upsides vs DLSS, mostly related to improper implementation by the developer (not setting the texture sampling bias correctly, not rendering post-processing after DLSS rather than before), but otherwise the ghosting with TAA is much higher, there are way more jaggies, the blur is much higher, and the whole image is less stable with TAA compared to DLSS.
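
For reference, the commonly cited starting point for that texture sampling bias is the log2 of the render-to-output resolution ratio; a hedged sketch, since exact offsets vary by integration guide:

```python
import math

def texture_mip_bias(render_width: int, output_width: int) -> float:
    """When rendering below output resolution for an upscaler,
    a negative mip bias keeps texture detail for the upscaler
    to resolve. Check the upscaler's integration guide for the
    exact recommended offset on top of this."""
    return math.log2(render_width / output_width)

# e.g. 1080p render upscaled to 4K output:
# texture_mip_bias(1920, 3840) == -1.0, i.e. sample one mip level
# sharper than the render resolution alone would select.
```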


it’s also unfortunate that many console games ported to PC don’t have DLSS support
I think almost all PlayStation ports have DLSS. They started their PC strategy around 2020, well after DLSS became an established force, which is why almost all of them have it (well, except Days Gone and Detroit).

For Xbox it has been hit and miss, but I feel most titles since 2020 have it too (except Halo Infinite); titles not released with it at launch got it later via patches (Forza Horizon 5, Flight Simulator 2020, etc.).
 
So it seems to make sense why there was a drive towards native 4K with TAA? The 3x3 colour clamping would have less of an effect because overall more pixels would be preserved?
Yes, it does work better with higher resolutions, especially when we talk about TAAU. There have been attempts at other solutions as well - the ID buffer on the PS4 Pro and stencil masks for characters in Uncharted. These can be both more expensive (perf wise) and harder to implement compared to color clamping, but they can produce better results.
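
As an illustration of the concept only (the actual PS4 Pro / Uncharted implementations aren't public in this detail), an ID buffer could gate history acceptance instead of a color clamp:

```python
import numpy as np

def id_rectify(current, warped_history, ids, warped_ids, alpha=0.1):
    """Accept reprojected history only where the pixel still lands
    on the same object/primitive ID, falling back to the current
    frame elsewhere; no color clamp is needed for disocclusions."""
    same_surface = (ids == warped_ids)[..., None]
    blended = (1 - alpha) * warped_history + alpha * current
    return np.where(same_surface, blended, current)
```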
 
Which is why I really, really don't understand people who still claim native + TAA is better; it clearly is not.

On paper it might not be, but in reality it can be.

Hardware Unboxed have some videos comparing native to DLSS to FSR 2.

And I distinctly remember the road texture in CP2077 looking much better on native than with DLSS or FSR.
 
Yes, it does work better with higher resolutions, especially when we talk about TAAU. There have been attempts at other solutions as well - the ID buffer on the PS4 Pro and stencil masks for characters in Uncharted. These can be both more expensive (perf wise) and harder to implement compared to color clamping, but they can produce better results.
Could an in-engine upscaler/AA solution like TSR take advantage of Nanite's triangle IDs? Does it do so already?
 
Better to ask @Andrew Lauritzen about this. From what I have seen, they don't currently use it. There are many tricks they can employ using the visibility buffer, though. But the TSR already looks good to me. If only they had implemented a higher resolution coverage mask for native resolution edges in motion, it would have been fantastic. Although there is apparently some kind of morphological AA used for prefiltering the low res input (at least in the sources), the edges still break down to a low resolution upscaled look during motion in many cases (not even sure if the prefiltering is being employed at all).
 