Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

How does the obvious fact that we haven’t hit a resolution peak yet equate to “infinite” and “ever increasing” resolution? I’m sure you can explain it.
Man, your ability to wilfully misinterpret the statements of others is seriously impressive. From the get-go, I’ve argued that consumer spending habits suggest that resolution is no longer the primary factor in the adoption of display devices. I’ve always maintained that it will peak. You’ve consistently argued against the idea of resolution peaking. First you created a straw man, saying that we haven’t hit the limits of display technology, a point I never made nor argued. Then you maintained that changing display form factors would correspond to an ever-increasing resolution target. A target that keeps rising without bounds is infinite. At no point did you ever acknowledge that resolution would peak; otherwise, this conversation would have ended a long time ago.

Now you’re trying to weasel your way out of your previous statements. I look forward to the next straw man you’ll create in your upcoming posts.
 
Nobody in this thread claimed resolution will never peak. You’re tilting at windmills.
 
That depends on the rendering resolution. You keep looking at DLSS as a performance-saving solution where you render an image at a subnative resolution (native being your output resolution), and that's certainly a use case. But you can also use DSR to set a higher-than-native resolution, adjust the DLSS settings so that the internal rendering resolution matches native, have DLSS reconstruct the image to a supernative resolution, and then supersample it back down to native for glorious results. There is a cost to this, obviously, and it depends on the DLSS performance of your GPU. But it's a hell of a lot less than rendering at the higher resolution DLSS resolves to.

To be clear, the scenario I presented earlier and the one I'm presenting now isn't one where you are rendering fewer pixels than you would have with DLSS disabled. If you start at the same image quality, for a minor performance hit, DLSS can produce many more samples per pixel, and it looks great.
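As a rough sketch of that chain, here are some back-of-the-envelope numbers, assuming a 4K display, a 2.25x DSR factor and a DLSS quality preset that renders at roughly two thirds of the output resolution per axis (the scale factors are illustrative assumptions, not official values):

```python
# Hypothetical figures for illustration only.
NATIVE = (3840, 2160)      # physical display resolution (4K)
DSR_AXIS_SCALE = 1.5       # 2.25x DSR total -> 1.5x on each axis
DLSS_AXIS_SCALE = 2 / 3    # assumed internal render scale per axis ("Quality")

# 1. DSR presents a higher-than-native output target to the game.
dsr_target = tuple(round(d * DSR_AXIS_SCALE) for d in NATIVE)      # (5760, 3240)

# 2. DLSS renders internally below that target, which lands back at native...
internal = tuple(round(d * DLSS_AXIS_SCALE) for d in dsr_target)   # (3840, 2160)

# 3. ...reconstructs up to the DSR target, and DSR downsamples to the display.
print("internal render:", internal)    # roughly native-resolution shading cost
print("DLSS output:    ", dsr_target)  # supersampled reconstruction target
print("displayed:      ", NATIVE)      # downsampled back to native
```

The point being that you pay roughly native shading cost plus the DLSS pass, while the displayed image is effectively supersampled.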
You’re right, DLSS can be used in that manner, but that use case was not the main design goal of DLSS. Now, I have attempted to use it in the way you described in games like Red Dead Redemption 2, Call of Duty: Modern Warfare, Death Stranding and a few others. Honestly, in some cases, the difference is appreciable. However, when I compare an image rendered natively at, say, 5k to the DLSS image at 5k, they don’t always look the same. DLSS always seems to introduce artifacts not present in the original image, and it bothers me because I know its approximation is not quite correct. For most people, it’s good enough, I guess. Once I begin to notice DLSS artifacts in any game, that kills its use for me. The only time I can tolerate the artifacts is when ray tracing is used heavily. In that case, I don’t really have a viable alternative, as the 3080’s ray tracing performance without DLSS is quite poor.

Anything over 4x AF was "enough" at 1024*768. 16x might be enough for you now at the resolutions you like, but in your hypothetical situation where we have infinite power per pixel, why wouldn't we have infinite pixel density as well?
We will never have infinite pixel density because consumers will not pay for a resolution increase that doesn’t provide any perceptible improvement. We are already beginning to see that trend now, and it’ll only get worse. I also never argued that we’d have infinite power per pixel; I argued that it would become trivial to path trace in real time. Those are not the same thing. I believe my hypothetical scenario will occur primarily because display resolution will peak. Our need for computational power in general won’t peak, because GPUs don’t exist solely to play games; they’re used to tackle other real-world problems.

You can already start to see the limitation of 16x AF at 4k. Hell, maybe even sometimes at 1440p if you don't disable the optimizations in the control panel. And those limitations are only going to get worse as resolution increases.
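To put a number on that, here's a minimal sketch under a common simplification: for a flat surface viewed at angle theta from its normal, the pixel footprint in texture space is stretched by roughly 1/cos(theta), so once that ratio exceeds the 16x cap the sampler has to under-filter and the texture blurs along that axis; higher output resolutions just make the residual blur easier to see.

```python
import math

# Simplified model: required anisotropy ratio ~ 1/cos(theta) for a flat surface
# viewed at angle theta from its normal. Real hardware footprints are more
# involved; this only shows roughly where a fixed 16x cap runs out.
AF_CAP = 16

for theta_deg in (60, 80, 85, 87, 89):
    required = 1.0 / math.cos(math.radians(theta_deg))
    status = "clamped -> under-filtered" if required > AF_CAP else "within 16x"
    print(f"theta = {theta_deg:2d} deg, required ratio ~ {required:5.1f} ({status})")
```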


Do you not use TAA? That ghosts as well.
Earlier you said anything over 4x was good enough at 1024*768. I don’t agree with that at all. AF was clearly limited at that resolution and it could be seen. With regards to seeing the limits of 16x AF at 4k, you’ll have to highlight those limits to me. I don’t see the limits you’re referring to, but I don’t know everything.

TAA is one of the most common AA methods used in games today. Often, the alternative provided by devs is FXAA or some other terrible equivalent. I avoid TAA when I can because it also ghosts and, as you noticed, I referred to it as awful in one of my previous posts.
 
All the next-gen machines have 4MB of L3 cache per 4-core cluster, making for a total of 8MB of L3 cache.

Ok, I suspected that from a simple Google search, but there's an awful lot of threads and the figures vary wildly. I think laptop Zen 2/3 CPUs come with 16MB of L3, desktop double that.
 
DLSS always seems to introduce artifacts not present in the original image, and it bothers me because I know its approximation is not quite correct. For most people, it’s good enough, I guess. Once I begin to notice DLSS artifacts in any game, that kills its use for me.
Is there something particular about DLSS 'artifacts' that bothers you more than other graphics flaws in games (which exist in literally every game ever made)? Cuz this sounds like a purely psychological thing, rather than you actually not liking the image quality that DLSS provides. A case of FOMO, essentially.

Games are all using 'imperfect' solutions for basically everything. I think the main point for DLSS would be that, at worst, you might get a very slightly softer image and a few minor elements that don't always play nice with a given game, all for a huge boost in performance overhead. And in best-case scenarios, you get near identical-to-native image quality with the same huge boost in performance overhead.

Honestly, this idea that DLSS/reconstruction is just some temporary crutch that will go away once we get more power will age badly. Because we're gonna be getting games built with reconstruction in mind, as standard. With any given hardware, whether today's or in 10 years, you will get that performance overhead if you use it. Remember this isn't all just about pushing framerates. Performance overhead can go into improving the core graphics/ambitions of a game as well. Which is exactly what we're gonna see happen. Meaning that without reconstruction, the games would overall be less impressive.

It's just gonna be a thing going forward from now on. There's no reason to think it will go away no matter how much more powerful GPUs get. The pros/cons aspect is so heavily weighted towards the pros already, and that will only increase with time as the pros improve and the cons are reduced.
 
DLSS is what it is. It's a toggled setting that you are free to turn off. So if you are using it, whatever artifacts it's injecting into the image, it's good enough for you to choose it over a lower "native" resolution.
 
Only thing I can really notice is the change to DOF?

I can't imagine there being much of a difference from the PS5 version in terms of quality; there's perhaps only one place I can think of in the game that could show off a large difference. It's when you're going up one of the wooden elevators: the draw distance is low on PS4/PS5 when at the top, so this could be an area where the PC version looks better.

The game had a really good reconstruction implementation too, so even jumping to native 4k won't offer the benefits it would over a poor implementation.
 
There is no PS5 version, sadly.
 
What Sony do best is create these blockbuster games that have really high production values - not many companies finance these types of games.

It's nothing but excellent that they're choosing to release these titles on PC.
 
Seems to be running at higher settings on the PC version compared to the PS5 footage in that video. Can't wait for the DF in-depth analysis.

I like that they're actually planning on implementing DLSS for the PC version too, even though it'll never be present on PlayStation. Shows they're not messing about with these conversions, they want to please the PC crowd (and Dictator, hopefully).
 