Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

- The average resolution has increased, but remains within the previously established dynamic resolution window.
- Pop-in has been reduced on One X and Series X|S. On Xbox One it has increased.
- Because of the above, performance is somewhat more unstable on all platforms.
- I have noticed some improvement in gunplay. Headshots now seem easier to land.
- The density of pedestrians has not changed.

- The shadows maintain their quality.
- Xbox One is affected by a lower LOD on NPCs.
- Vehicle collisions have been revised, although they still have room for improvement.
- I have not suffered any crashes during the recording.

- The average resolution has increased, but remains within the previously established dynamic resolution window.
- The subtitles now have a bug that was not there before. They can stay on the screen during gameplay.
- The pop-in has improved substantially. As a result, the framerate is somewhat more unstable now.
- The density of pedestrians has not changed. It's still a noticeably lower number than on Xbox.
- Vehicle collisions have been revised, although they still have room for improvement.
- The bug that left NPCs with a low LOD has been fixed. I have also not seen floating NPCs or extremely weird AI behavior while recording.
- Shadows on PS5 still have not received the improvement present in the Xbox Series and One X versions.
- I have not suffered any crashes during the recording.
- Some options and settings had to be restored because the file had been corrupted. This did not affect the save data.
 
I hope this thread is a good fit for my question; if not, could it please be moved to the correct place?

I was wondering about the quality of upscaling on the PS5 and XBSX. As these rarely run games at native 4K, and generally seem to average about 1440p using dynamic res, how do games achieve the jump to the 4K output resolution? Watching PS5/XBSX game reviews and other videos, I don't see a lot of complaints about the upscaling quality; in fact, some reviews have mentioned how hard it can be to determine the actual render resolution in some instances. Is this different to the upscaling currently provided by AMD's drivers for Radeon cards on PC?

Do any multiplatform games that support PS5/XBSX and PC have the same dynamic res/upscaling options and implementations across all 3 platforms? If so, how do the consoles compare to the PC, especially Radeon cards?

I'm just curious because current comparisons of DLSS vs Radeon's scaling tech put DLSS 2.0 as vastly superior, often slating the IQ of the Radeon approach in comparison. But are the consoles putting up with the same quality of upscaling as Radeon cards, or do they do things differently?

Sorry for the long-winded question.

Cheers
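
On the dynamic res part of the question above: a common pattern, sketched below with illustrative numbers that are not taken from any particular engine, is to adjust the render resolution each frame based on how close the GPU is to its frame-time budget, and then upscale whatever was rendered to the fixed 4K output.

```python
# Minimal sketch of a frame-time-driven dynamic resolution controller.
# The 16.6 ms budget, scale bounds and step size are illustrative, not
# taken from any real engine.

OUTPUT_W, OUTPUT_H = 3840, 2160   # fixed 4K output
MIN_SCALE, MAX_SCALE = 0.6, 1.0   # e.g. 2304x1296 up to native 4K
BUDGET_MS = 16.6                  # 60 fps target

def next_render_resolution(scale, last_gpu_ms, step=0.05):
    """Raise or lower the render scale depending on the last frame's GPU time."""
    if last_gpu_ms > BUDGET_MS:          # over budget: drop resolution
        scale = max(MIN_SCALE, scale - step)
    elif last_gpu_ms < BUDGET_MS * 0.9:  # comfortably under budget: raise it
        scale = min(MAX_SCALE, scale + step)
    return scale, (int(OUTPUT_W * scale), int(OUTPUT_H * scale))

# Example: a frame that took 18 ms pushes the next frame down from 1.0 scale.
scale = 1.0
scale, (w, h) = next_render_resolution(scale, last_gpu_ms=18.0)
print(scale, w, h)   # 0.95 -> 3648x2052, which is then upscaled to 3840x2160
```

The interesting part, and what the rest of this thread is about, is how good that final upscale step can be made to look.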
 

Not an expert, but when they mention good upscaling for consoles I think they mean the TAA-based upscaling more devs are using nowadays. AFAIK, the good TAA solutions used today don't need specific hardware like tensor cores or whatnot.

No idea about multiplat standouts when using TAA, since all I remember DF gushing over was the TAA in Spiderman and Last of Us 2.

AFAIK this is still different from something like DLSS 2.0.
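
A minimal sketch of the core of such TAA-based accumulation, limited to the exponential history blend and leaving out the reprojection and history clamping that real implementations need; the 0.1 blend weight and array sizes are made up for illustration.

```python
import numpy as np

# Minimal sketch of the exponential history blend behind most TAA-style
# accumulation. Real implementations also reproject the history with motion
# vectors and clamp it against the current frame to avoid ghosting; that is
# omitted here. The 0.1 blend weight is illustrative.

def taa_accumulate(history, current, alpha=0.1):
    """Blend a small amount of the current (jittered) frame into the history."""
    return (1.0 - alpha) * history + alpha * current

# Example with tiny random "frames": over many frames the accumulated image
# settles near the average of the recent jittered samples, i.e. detail is
# gathered over time rather than in a single frame.
rng = np.random.default_rng(0)
history = rng.random((4, 4))
for _ in range(32):
    history = taa_accumulate(history, rng.random((4, 4)))
print(history.round(2))
```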
 
My understanding is that DLSS is superior to all other implementations. @Dictator created a really nice video on it here:


As far as I understand, there are individual upscaling techniques that have been created by developers (like Insomniac and Remedy) that are good, but again, inferior to DLSS.

Other upscaling techniques are fairly rudimentary, to my understanding.

DLSS is an Nvidia solution and requires Nvidia GPUs.

That's my noobish impression of the upscaling techs.

Edit: and Sony's checkerboard rendering is better than some techniques, but still inferior to DLSS.
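
For reference, a toy illustration of what checkerboard rendering does, not Sony's actual implementation: shade only half the pixels each frame in an alternating checkerboard pattern and carry the other half over from the previous frame. Real versions also use motion vectors and object IDs to decide when that carry-over is safe, which is omitted here.

```python
import numpy as np

# Toy illustration of checkerboard rendering: each frame shades only half the
# pixels (alternating checkerboard), and the missing half is taken from the
# previous frame. Motion-vector and object-ID based validation is omitted.

def checkerboard_mask(h, w, frame_index):
    yy, xx = np.mgrid[0:h, 0:w]
    return (xx + yy + frame_index) % 2 == 0   # which pixels get shaded this frame

def checkerboard_resolve(prev_full, shaded_half, mask):
    out = prev_full.copy()
    out[mask] = shaded_half[mask]              # new half from this frame
    return out                                 # old half carried over

h, w = 4, 8
prev = np.zeros((h, w))
frame1 = np.ones((h, w))                       # pretend this frame renders all 1s
mask = checkerboard_mask(h, w, frame_index=1)
print(checkerboard_resolve(prev, frame1, mask))
```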
 
I think a relevant question is whether, if we decide that upscaling is really useful, DLSS is the optimal approach. Nvidia stuck tensor units on their GPUs because they wanted to attack the ML market, and then came up with DLSS so there was some use for them in consumer graphics as well. But is it really the most efficient use of gates if we only want to address upscaling?
Simply using the existing resources is attractive, because a) they're there anyway so no extra hardware is required at all, and b) they're nicely programmable for different approaches.
But I can't help wondering if we couldn't do better AND cheaper if we dedicated resources to solving upscaling, and upscaling alone, intelligently.
Achieve a better optimum of quality vs. silicon vs. power vs. simplicity of implementation.
 
The implications of tensor cores are much broader than upscaling, however. DLSS is one single method by which they can be leveraged, but there are many other image-enhancement methodologies available to be explored.

That being said, the end output is ultimately what matters in my opinion. Whether you put it into tensor cores or some other dedicated form of hardware acceleration, you're ultimately trading generic silicon for a dramatic speed increase in one area. As of this moment, very little is likely to outperform a DLSS title in terms of image quality/performance for the same amount of silicon.
 
I'm really curious what effect 60fps can have on checkerboard rendering. Obviously, when it comes to zooming in on still frames, the same issues will be present, but I've yet to see a really satisfactory demonstration of the perceptual difference caused by doubling the framerate.

I'm playing Death Stranding at the moment, and there is definitely visible aliasing. I was really hoping for a simple 60fps patch, maybe with improved AF too, but if rumours are to be believed, the closest we're going to get to that is a "remaster."

"We've included higher resolution assets (which were already authored during content creation) and tweaked a couple of lines of code. £50 please!"

*sigh*
 
My understanding of DLSS vs all other upscaling methods is that DLSS is actually able to add detail to the image that wasn't there in the rendered pixels, while all other upscaling methods simply interpolate the detail that's already there. I may, however, simply be talking BS.
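
To illustrate the "simply interpolate" half of that comparison, here is a toy 1D linear upscale; every output value is just a weighted average of existing samples, so nothing sharper than the input can appear.

```python
# Minimal sketch of plain spatial interpolation (1D linear case): the output
# values are only weighted averages of existing samples, so no new detail can
# appear. Temporal methods like DLSS 2.0 instead pull in extra samples from
# previous frames.

def linear_upscale(samples, factor):
    out = []
    n = len(samples)
    for i in range((n - 1) * factor + 1):
        pos = i / factor
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        t = pos - lo
        out.append((1 - t) * samples[lo] + t * samples[hi])
    return out

print(linear_upscale([0.0, 1.0, 0.0], factor=2))  # [0.0, 0.5, 1.0, 0.5, 0.0]
```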
 

I think DLSS 2.0 uses information from previous frames (e.g. motion vectors) when rendering the current frame.
I also read something about texture resolution/quality, but I cannot remember where atm.
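
A minimal sketch of that "information from the last frame" idea, limited to the motion-vector reprojection step that temporal methods share; the part that makes DLSS special, how the fetched history is weighted and combined by the network, is not modelled here.

```python
import numpy as np

# Minimal sketch of motion-vector reprojection, the ingredient shared by
# DLSS 2.0 and other temporal methods: each pixel looks up where it was in
# the previous frame and fetches the accumulated history from there.

def reproject(history, motion):
    """motion[y, x] = (dy, dx) offset from current pixel back to its previous position."""
    h, w = history.shape
    out = np.zeros_like(history)
    for y in range(h):
        for x in range(w):
            dy, dx = motion[y, x]
            py = min(max(y + dy, 0), h - 1)
            px = min(max(x + dx, 0), w - 1)
            out[y, x] = history[py, px]
    return out

history = np.arange(16).reshape(4, 4).astype(float)
motion = np.zeros((4, 4, 2), dtype=int)
motion[..., 1] = 1          # the whole image moved one pixel left since last frame
print(reproject(history, motion))
```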
 

This one is straight from the source. Probably already linked dozens of times here. I guess it's good to bring it up periodically for those who might not have watched it yet.

 

Texture quality/resolution should be based on the final output resolution and not the lower actual rendering resolution. This may be where some were posting about using a LOD bias for mip levels.
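
The usual rule of thumb for that bias, sketched below; the exact offset is engine-specific, so treat the formula as an illustration rather than a requirement of any particular upscaler.

```python
import math

# Common rule of thumb for temporal upscalers: bias texture mip selection by
# log2(render_width / output_width), which is negative when rendering below
# the output resolution and therefore selects sharper mip levels sized for
# the final image. Some engines add a small extra offset on top of this.

def mip_lod_bias(render_width, output_width):
    return math.log2(render_width / output_width)

print(mip_lod_bias(2560, 3840))  # about -0.58: sample roughly half a mip sharper
print(mip_lod_bias(1920, 3840))  # -1.0: sample one full mip level sharper
```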
 

Can AMD throw some tensor cores into a hardware scaler for next gen consoles and call it SDNA? LOL. But honestly is that feasible?
 
The version tested was 1.000.002.

PS5 uses a dynamic resolution with the highest native resolution found being approximately 3264x1836 and the lowest native resolution found being approximately 2304x1296. Native resolution pixel counts at 3264x1836 seem to be rare. A form of temporal reconstruction is used to increase the resolution up to approximately 3264x1836 when rendering natively below this resolution.

The UI is rendered at a resolution of 3840x2160.
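
As a quick sanity check on those numbers, the quoted dynamic resolution window works out to roughly 36-72% of a full 4K pixel count.

```python
# Worked numbers for the dynamic resolution window quoted above.
output = 3840 * 2160
for w, h in [(3264, 1836), (2304, 1296)]:
    per_axis = w / 3840
    print(f"{w}x{h}: {per_axis:.0%} per axis, {w * h / output:.0%} of 4K pixels")
# 3264x1836: 85% per axis, 72% of 4K pixels
# 2304x1296: 60% per axis, 36% of 4K pixels
```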

 
Can AMD throw some tensor cores into a hardware scaler for next gen consoles and call it SDNA? LOL. But honestly is that feasible?
They could; I suspect they would have if they wanted to, but they have continued to go the generic compute route instead. Tensor processing units are very good at deep-learning-style networks, but outside of that domain they are less useful, and that is typically where generic compute is better.

For a great many of the problems we solve, deep learning isn't required (you can get away with running a simpler, faster algorithm) unless you're doing something like NLP or computer vision, things that computers have tremendous difficulty doing.
 
Texture quality/resolution should be based on the final output resolution and not the lower actual rendering resolution. This may be where some were posting about using a LOD bias for mip levels.

That is the thing I read, giving DLSS much better texture quality than e.g. FidelityFX CAS, which uses textures at the actual render resolution.
 
Yup, all temporal upsampling methods get their improved texture quality mostly from a combination of mipmap bias and information from previous frames (DLSS 2, temporal injection, TAAU in UE4, etc.).
 
- PS5 and Series X run at 2560x1440.
- The framerate is mostly stable on both platforms, although there are some more drops on Series X.
- Ray-tracing is more limited on consoles compared to PC. In some highlights there are anecdotally fewer elements on PS5. Despite this, it is the best RT we currently have on consoles.
- The input lag in graphics mode is higher than in the old-gen versions. Quite annoying.

- Reflections without ray-tracing suffer from issues with the character model. This did not happen in the old-gen versions.
- There is a bug that causes a desynchronization in the protagonist's running animation.
- The Xbox version still has some freezing problems when pausing the game or accessing the map.
- The PS5 version has too dark an image, which forces you to increase the brightness to obtain a similar result to the other versions.
 