Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

Status
Not open for further replies.

Been playing through this on the PS5 with this patch - it's a gorgeous game in many spots, but I have one nitpick about otherwise great performance, and that's the DoF effect brought on when you engage perception vision. If you use it often to highlight objects/objectives in a scene, it can routinely bring your framerate below 60 - not huge drops, mind you, but enough to induce frame skipping pretty frequently as you get further in, even outside towns/villages where there are drops without it (those are pretty huge, though). It's a little frustrating if you use it often; if they had kept the checkerboarded res at ~1800p like it was on the Pro in resolution mode instead of bumping it up to 2160p, it would probably have the headroom to weather this effect without these constant hiccups.

(Or just call it yet another 'please give us VRR support' case.)
 
If CB provides such good picture quality, why do people bother with other upscaling techniques? Is it somehow limited to only some engines?

And do I understand this correctly?
https://en.wikipedia.org/wiki/Checkerboard_rendering

"While exact implementations tend to vary between developers, the technique generally involves rendering only 50% of any given scene, often rendering it in a grid of 2×2 pixels. A reconstruction filter is then employed to extrapolate the appearance of the unrendered parts of the scene, with the final image then being presented to the viewer as (theoretically) the same as if it had been rendered natively at the target resolution."

So if PS5 is 2160p CB, is that approx 1080p? (50%)
 
If CB provides such good picture quality, why do people bother with other upscaling techniques? Is it somehow limited to only some engines?

And do I understand this correctly?
https://en.wikipedia.org/wiki/Checkerboard_rendering

"While exact implementations tend to vary between developers, the technique generally involves rendering only 50% of any given scene, often rendering it in a grid of 2×2 pixels. A reconstruction filter is then employed to extrapolate the appearance of the unrendered parts of the scene, with the final image then being presented to the viewer as (theoretically) the same as if it had been rendered natively at the target resolution."

So if PS5 is 2160p CB, is that approx 1080p? (50%)
It is 50% of the pixels of 2160p, but that is not 1080p, just like interlacing ;). It is more or less interlacing (only one axis is cut in half) with a temporal component and alternating sample coordinates.
1080p is 25% of the resolution of 2160p, as both axes are halved.
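A quick sanity check on those fractions, as plain arithmetic (numbers taken from the posts above):

```python
# Pixel budgets: checkerboarded 2160p vs true 1080p
full_4k = 3840 * 2160        # 8,294,400 pixels at native 2160p
cb_4k = full_4k // 2         # checkerboarding renders half of them
p1080 = 1920 * 1080          # 2,073,600 pixels at 1080p

print(cb_4k / full_4k)       # 0.5  -> CB renders 50% of 4K
print(p1080 / full_4k)       # 0.25 -> 1080p is only 25% of 4K
```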
 
Checkerboard is often quarter-sized, actually: see here https://software.intel.com/content/...e-upscaling-on-intel-integrated-graphics.html

If you render like this, it would be 1080p.

[Image: motion vectors in reconstruction (Intel article, fig. 15)]


You can also render at half res in only one axis, like allandor said. I'm not sure which games use which approach or how popular each one is.
 
According to NXGamer, the 2160p CBR on PS5 is so good that it looks as sharp as native ~2000p on XSX (starting at 5:15). This is what I had previously found by comparing images.
Side by side on the same screen at 4K, these two consoles look identical.

I think the "native" resolution of PS5 avoiding a final bilinear upscale makes it look good compared to XSX. If both had the same resolution gap but both were upscaled, I think the XSX image could look a bit sharper.
Basically: 4K + good CBR = 2000p native + nasty bilinear upscaling to 4K.

Interestingly, NXGamer found shadows flickering in some scenes on XSX. From memory, this is the third game with that problem on XSX (4:56).

If CB provides such good picture quality, why do people bother with other upscaling techniques? Is it somehow limited to only some engines?

And do I understand this correctly?
https://en.wikipedia.org/wiki/Checkerboard_rendering

So if PS5 is 2160p CB, is that approx 1080p? (50%)
It's actually around 1530p. But you need to take into account that CBR has a cost, so it will cost more than rendering natively at 1530p: usually the final buffer is native 4K, and geometry can be rendered internally at native 4K, which costs bandwidth and resources; there are plenty of possible implementations and variations. But it can improve the final image significantly (all depending on the implementation, of course).

It's a complex reconstruction tech and you need to integrate the solution in your engine from the start. It's far from simple upscaling.
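The ~1530p figure falls out of simple arithmetic: find the 16:9 resolution whose pixel count equals half of native 4K. A minimal check:

```python
import math

pixels = 3840 * 2160 / 2             # half of 4K: 4,147,200 rendered pixels
# For a 16:9 frame, w = h * 16/9, so w * h = pixels gives h = sqrt(pixels * 9/16)
height = math.sqrt(pixels * 9 / 16)
print(round(height))                 # 1527 -> "around 1530p"
```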
 
If CB provides such good picture quality, why do people bother with other upscaling techniques? Is it somehow limited to only some engines?

And do I understand this correctly?
https://en.wikipedia.org/wiki/Checkerboard_rendering

"While exact implementations tend to vary between developers, the technique generally involves rendering only 50% of any given scene, often rendering it in a grid of 2×2 pixels. A reconstruction filter is then employed to extrapolate the appearance of the unrendered parts of the scene, with the final image then being presented to the viewer as (theoretically) the same as if it had been rendered natively at the target resolution."

So if PS5 is 2160p CB, is that approx 1080p? (50%)
If you are going by the number of pixels rendered, it's more like 2160i. But instead of skipping every other line vertically, it skips every other pixel in a checkerboard pattern, so that when you reconstruct the image you have neighboring pixels on every side.

Lots of resolution scaling options in games give you a percentage value, and 50% in those options is often 1080p internal at 4K - but that's 50% per axis, so 25% as many pixels. Checkerboarding is 50% fewer pixels.
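The "every other pixel, neighbors on every side" pattern is easy to picture as a parity mask; a small NumPy sketch, purely for illustration:

```python
import numpy as np

h, w = 4, 8
y, x = np.mgrid[0:h, 0:w]
mask = (x + y) % 2 == 0          # pixels rendered this frame
print(mask.astype(int))          # checkerboard of 1s and 0s

# Exactly half the pixels are rendered, and every skipped pixel
# has a rendered neighbor directly above, below, left and right.
print(mask.sum(), "of", h * w)   # 16 of 32
```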
 
The version tested was 1.18.12.0. On a Variable Refresh Rate display, both consoles can run at a frame rate higher than 30fps. This is difficult to demonstrate, though, due to capture card limitations and how low framerate compensation works with refresh rate counters on VRR displays. Xbox Series X renders at a native resolution of 2560x1440 and uses a form of temporal upsampling to reconstruct a 3840x2160 output. Xbox Series S renders at a native resolution of 1920x1080. Texture quality and level of detail are noticeably improved on Xbox Series X compared to Xbox Series S.

Stats for the 30 FPS mode
 
Did some comparisons between the PC at native 4K, 1440p + CAS, and the PS5 in this thread over at ResetEra, if you want to see what the same scene looks like across all three:

https://www.resetera.com/threads/sh...zed-for-series-x-s-badge.461277/post-70113012

Earlier in the thread I did a post with 4K + CAS enabled as well.

Thank you, it looks very good indeed. However, there are some rendering artefacts, and this makes me think about VRS. I don't have to zoom 4x to see them, and it makes me wonder how people who are very vocal about VRS feel about CBR? Is it fine, or would you rather take native resolution? I've never worked with anything graphics-related in my life, so to my untrained eyes everything looks good.
 
Thank you, it looks very good indeed. However, there are some rendering artefacts, and this makes me think about VRS. I don't have to zoom 4x to see them, and it makes me wonder how people who are very vocal about VRS feel about CBR? Is it fine, or would you rather take native resolution? I've never worked with anything graphics-related in my life, so to my untrained eyes everything looks good.
Well, with higher framerate (60 instead of 30), CBR (like any other TAA solution) automatically gets better. CBR + 30fps is too low in my opinion, because you often see artifacts like ghosting in those cases.

The problem with many PS4 Pro patches so far is that they have a target framerate of 30, and therefore the temporal component works with frames showing larger changes, or the base resolution is just too low and you also get pixelated/sawtooth artifacts (e.g. FF7 Remake has a very bad ghosting issue on my PS4 Pro, especially noticeable in the arena when the camera swings over the white logo in the middle of the arena, but I currently don't know if that is CBR or just another TAA solution). This can be compensated with a much higher base resolution or at least a higher framerate on PS5. So CBR is still a thing.

It is like DLSS so far: DLSS used to increase the res to 1440p (base resolution is then <1080p) is awfully blurry vs native 1440p. But 1440p as base resolution to get 4K with DLSS is a much better solution, because the algorithm has more data to work with.
 
The problem with many PS4 Pro patches so far is that they have a target framerate of 30, and therefore the temporal component works with frames showing larger changes, or the base resolution is just too low and you also get pixelated/sawtooth artifacts (e.g. FF7 Remake has a very bad ghosting issue on my PS4 Pro, especially noticeable in the arena when the camera swings over the white logo in the middle of the arena, but I currently don't know if that is CBR or just another TAA solution). This can be compensated with a much higher base resolution or at least a higher framerate on PS5. So CBR is still a thing.

It is like DLSS so far: DLSS used to increase the res to 1440p (base resolution is then <1080p) is awfully blurry vs native 1440p. But 1440p as base resolution to get 4K with DLSS is a much better solution, because the algorithm has more data to work with.

FF7 has bad ghosting at 60 on PS5 too - I think ghosting is noticeable in every temporal solution, but it's bad there; I think something else might be wrong (some surfaces not having motion vectors, maybe).

Ultimately, though, the ghosting is always going to be there for temporal solutions, and it's going to be worse the more your scene changes each frame (framerate and camera speed, but also how often objects cross in front of the camera, etc.). That's the main tradeoff against something like VRS, or even just running at a lower native res and accepting it.

(And, as you probably know, it's like DLSS because DLSS is a very similar technique - they're both temporal upscalers that use motion vectors to predict which pixel(s) to use; DLSS is just a more complicated black-box process that makes better guesses.)
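For intuition, here's a toy checkerboard reconstruction: alternate which half of the pixels are rendered each frame and fill the holes from the previous result. This assumes a static scene with zero motion vectors; real implementations reproject along motion vectors, which is exactly where ghosting comes from when those vectors are missing or wrong. All names here are made up for illustration:

```python
import numpy as np

def cb_reconstruct(rendered_frame, prev_recon, parity):
    """Keep this frame's checkerboard half of the pixels, and fill
    the holes from the previous reconstruction (zero motion assumed)."""
    h, w = rendered_frame.shape
    y, x = np.mgrid[0:h, 0:w]
    rendered = (x + y) % 2 == parity        # which pixels exist this frame
    return np.where(rendered, rendered_frame, prev_recon)

# Static scene: after two alternating frames, the full image is recovered.
scene = np.arange(16.0).reshape(4, 4)
recon = np.zeros_like(scene)
for parity in (0, 1):
    recon = cb_reconstruct(scene, recon, parity)
print(np.array_equal(recon, scene))  # True
```

As soon as the scene moves between frames, the holes are filled with stale data unless it is reprojected correctly, which is the source of the artifacts discussed above.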
 
Well, with higher framerate (60 instead of 30), CBR (like any other TAA solution) automatically gets better. CBR + 30fps is too low in my opinion, because you often see artifacts like ghosting in those cases.
I keep seeing this message like some kind of mantra on Beyond3D, but nobody shows any proof. From my experience, 30fps checkerboarded Death Stranding on PS4 Pro looks miles better than 60fps dynamic-CB Avengers on PS5.
 
FF7 has bad ghosting at 60 on PS5 too - I think ghosting is noticeable in every temporal solution, but it's bad there; I think something else might be wrong (some surfaces not having motion vectors, maybe).

Ultimately, though, the ghosting is always going to be there for temporal solutions, and it's going to be worse the more your scene changes each frame (framerate and camera speed, but also how often objects cross in front of the camera, etc.). That's the main tradeoff against something like VRS, or even just running at a lower native res and accepting it.

(And, as you probably know, it's like DLSS because DLSS is a very similar technique - they're both temporal upscalers that use motion vectors to predict which pixel(s) to use; DLSS is just a more complicated black-box process that makes better guesses.)

Yes, temporal artifacts don't go away; higher framerate just helps make them less noticeable - or, if you dislike reconstruction, less obnoxious.

However, the aggressiveness of your temporal solution will also play a part in how noticeable it is, as will whether a scene has a lot of high-contrast changes (like a dark metal grate with light shining through it). The latter can be made significantly worse if you are also using reflections (or any other effect) that are reconstructed at lower resolution or, even worse, at lower "frame rates" than the main scene. Hello, really noticeable ghosting then.

But since screenshots sell, whether a scene breaks down in motion is far less important to some developers (and, looking at forums, some gamers) than whether or not a static "no-motion" shot looks really good.

Regards,
SB
 
Yes, temporal artifacts don't go away; higher framerate just helps make them less noticeable - or, if you dislike reconstruction, less obnoxious.

Agree with your post overall - this isn't exactly true, though. All else being equal, higher framerate = shorter average motion vectors = more accurate reconstruction. At a sufficiently high (hypothetical) framerate (and resolution), all artifacts would go away except right at the edges of moving objects.
 
Agree with your post overall - this isn't exactly true, though. All else being equal, higher framerate = shorter average motion vectors = more accurate reconstruction. At a sufficiently high (hypothetical) framerate (and resolution), all artifacts would go away except right at the edges of moving objects.

I agree with that in general; however, the artifacts won't go away, they'll just become so small as to be almost impossible to notice, provided the developer does a good job with their reconstruction technique and doesn't get too aggressive with it. Currently in games I can still notice them even at 240 Hz rendering speed, but at that point they are much less noticeable.

Basically even at 1000 Hz, they'd still be there, but good luck seeing them. :)

Regards,
SB
 
Well, with higher framerate (60 instead of 30), CBR (like any other TAA solution) automatically gets better. CBR + 30fps is too low in my opinion, because you often see artifacts like ghosting in those cases.

The problem with many PS4 Pro patches so far is that they have a target framerate of 30, and therefore the temporal component works with frames showing larger changes, or the base resolution is just too low and you also get pixelated/sawtooth artifacts (e.g. FF7 Remake has a very bad ghosting issue on my PS4 Pro, especially noticeable in the arena when the camera swings over the white logo in the middle of the arena, but I currently don't know if that is CBR or just another TAA solution). This can be compensated with a much higher base resolution or at least a higher framerate on PS5. So CBR is still a thing.

It is like DLSS so far: DLSS used to increase the res to 1440p (base resolution is then <1080p) is awfully blurry vs native 1440p. But 1440p as base resolution to get 4K with DLSS is a much better solution, because the algorithm has more data to work with.
About CBR, one must not forget that there are plenty of ways of doing it (even in Sony's SDK). Usually the more resources you allocate, the better it looks, notably in motion. This is where the ID buffer can bring big benefits, by reducing artefacts and making CBR work well even in motion. This is also why the ID buffer can be used to improve TAA.
 
I keep seeing this message like some kind of mantra on Beyond3D, but nobody shows any proof. From my experience, 30fps checkerboarded Death Stranding on PS4 Pro looks miles better than 60fps dynamic-CB Avengers on PS5.
Avengers and Red Dead Redemption 2 really had bad, bad CBR implementations.

Generally, 60 fps reduces the pixel movement from frame to frame. E.g. if you use the previous 5 frames, you have 6x 33ms of motion in your calculation at 30 fps, but with 60 fps you just have 6x 16ms of motion. So you have more matching pixels - well, in theory. You can still have a bad implementation.
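The frame-to-frame motion argument as plain arithmetic, with an assumed on-screen camera speed (the 600 px/s figure is made up purely for illustration):

```python
camera_speed = 600                   # assumed on-screen motion, pixels/second

for fps in (30, 60):
    window_ms = 6 * 1000 / fps       # ~6 frame times of history, as above
    travel_px = camera_speed * window_ms / 1000
    print(f"{fps} fps: {window_ms:.0f} ms window, {travel_px:.0f} px of motion")
# At 30 fps the 6-frame window spans 200 ms -> 120 px of motion to match across.
# At 60 fps it spans 100 ms -> 60 px: shorter vectors, more matching pixels.
```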
 
Avengers and Red Dead Redemption 2 really had bad, bad CBR implementations.

Generally, 60 fps reduces the pixel movement from frame to frame. E.g. if you use the previous 5 frames, you have 6x 33ms of motion in your calculation at 30 fps, but with 60 fps you just have 6x 16ms of motion. So you have more matching pixels - well, in theory. You can still have a bad implementation.
You also have less time to notice artifacts, as each frame is displayed for half as much time. Anyone who's messed with interlaced video at higher framerates can tell you that combing artifacts are worse at 30fps than at 60fps, and even less noticeable at 120fps. Not gone, just lessened.
 