Digital Foundry Article Technical Discussion [2020]

It was not measurable on XSX because XSX then had a lower cap for dynamic resolution. That cutscene could just as easily have become more expensive on every system, but it was only measured on two, and one of those systems hid the new cost of the cutscene under dynamic resolution and a 60 fps cap. I think this cutscene is perfectly representative and not unfair to PS5.
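(To illustrate the mechanism being argued here, a toy model in Python with made-up frame costs: a dynamic resolution system plus a 60 fps cap can absorb an added cost by lowering resolution, so the cost never shows up in a frame-rate capture until the resolution floor is hit. None of the numbers below come from the video.)

```python
# Toy model (illustrative only, made-up numbers): dynamic resolution absorbs an
# added GPU cost by lowering resolution, and the reported frame rate only drops
# once the resolution floor is reached, so a 60 fps capped capture hides the cost.

FPS_CAP = 60
RES_FLOOR, RES_CEILING = 0.50, 1.00   # fraction of native resolution

def capture(native_frame_cost_ms: float) -> tuple[float, float]:
    """Return (resolution fraction, reported fps) for a given per-frame GPU cost,
    assuming cost scales roughly linearly with the resolution fraction."""
    budget_ms = 1000.0 / FPS_CAP                       # ~16.7 ms at 60 fps
    scale = min(RES_CEILING, max(RES_FLOOR, budget_ms / native_frame_cost_ms))
    actual_ms = native_frame_cost_ms * scale
    return scale, min(FPS_CAP, 1000.0 / actual_ms)

for cost in (15.0, 20.0, 40.0):   # a cutscene getting progressively more expensive
    res, fps = capture(cost)
    print(f"native cost {cost:.0f} ms -> {res:.0%} resolution, capture shows {fps:.0f} fps")
```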
It is measurable on version 1.00 for PS5 and XSX because both versions have a 1440p lower cap. Why not use version 1.00?
Don't you think version 1.00 is more representative of PS5 performance, since it has 6% more performance?
 
It is measurable on version 1.00 for PS5 and XSX because both versions have a 1440p lower cap. Why not use version 1.00?
Don't you think version 1.00 is more representative of PS5 performance, since it has 6% more performance?

That would completely change the narrative and conclusion of the video. Just pretend that the game never had the 1.00 version to begin with.
 
It is measurable on version 1.00 for PS5 and XSX because both versions have a 1440p lower cap. Why not use version 1.00?
Don't you think version 1.00 is more representative of PS5 performance, since it has 6% more performance?

That would completely change the narrative and conclusion of the video. Just pretend that the game never had the 1.00 version to begin with.

What evidence do you have to suggest that the PC version's performance is also not negatively impacted in that scene by the patch? Surely the fairest way of comparing performance is on the same patch level across all systems. That level being the latest, since it's clearly how the game developers intend you to play the game, and how you will be playing the game if you have updated to the latest patch, as you should for stability reasons if nothing else.
 
What evidence do you have to suggest that the PC version's performance is also not negatively impacted in that scene by the patch? Surely the fairest way of comparing performance is on the same patch level across all systems. That level being the latest, since it's clearly how the game developers intend you to play the game, and how you will be playing the game if you have updated to the latest patch, as you should for stability reasons if nothing else.

I think the 'evidence' is that the Xbox performance was not negatively impacted. It's still worse than PS5 (which is... now considered normal, I guess?), but PS5 was the only gimped version, if I'm not mistaken.

Also, I think the 5700XT and the 2080 are, or could be, much better than the PS5, and comparing systems' potential based on a frame drop in a motherf- UBISOFT (!!!) title seems a bit extreme.
 
I think the 'evidence' is that the Xbox performance was not negatively impacted. It's still worse than PS5 (which is... now considered normal, I guess?), but PS5 was the only gimped version, if I'm not mistaken.

Didn't the patch lower resolution on the Series X to improve its performance?

Also, I think the 5700XT and the 2080 are, or could be, much better than the PS5, and comparing systems' potential based on a frame drop in a motherf- UBISOFT (!!!) title seems a bit extreme.

But the DF video shows performance over time, not just a single moment of frame drop.
 
I think the 'evidence' is that the Xbox performance was not negatively impacted. It's still worse than PS5 (which is... now considered normal, I guess?), but PS5 was the only gimped version, if I'm not mistaken.

Also, I think the 5700XT and the 2080 are, or could be, much better than the PS5, and comparing systems' potential based on a frame drop in a motherf- UBISOFT (!!!) title seems a bit extreme.

This is the only way it was possible to compare PS5 and PC, because there aren't enough dynamic resolution options to make the comparison like for like at more moments. This is a great way to do it.
 
Didn't the patch lower resolution on the Series X to improve its performance?



But the DF video shows performance over time, not just a single moment of frame drop.

Make no mistake, it's a great and informative video.

But just to give an example: maybe the PS5 is running at 90fps whereas a 3080 Super is only at 84fps. Because the PS5 is capped at 60 though, we would never know.
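(Purely illustrative, using only the hypothetical 90 fps and 84 fps figures from the post above: a 60 fps cap clips away any headroom beyond it, so two systems with quite different uncapped performance can look identical in a capped capture.)

```python
# Illustrative only, with the hypothetical numbers from the post above: a 60 fps
# cap clips away headroom, so two systems with different uncapped performance can
# look identical in a capped capture.

FPS_CAP = 60

def reported_fps(uncapped_fps: float, cap: float = FPS_CAP) -> float:
    """What a capped performance capture would show."""
    return min(uncapped_fps, cap)

for name, uncapped in [("PS5 (hypothetical)", 90.0),
                       ("high-end PC GPU (hypothetical)", 84.0)]:
    print(f"{name}: uncapped {uncapped:.0f} fps -> capture shows {reported_fps(uncapped):.0f} fps")
```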
 
WRT RT, I agree it comes down to devs. Given that the consoles aren't too performant in that metric, I don't think it's a certainty that RT becomes too pervasive. It's entirely possible usage will be sparse in most titles, leaving heavier usage to the occasional Nvidia-sponsored title.
Yeah, this is what I suspect will largely happen: you'll have 'ultra' RT effect versions on the PC where the resolution of those effects is improved over the console, but I'm skeptical that, even with a huge RT advantage, developers will actually take advantage of it by adding new RT features wholly unique to the PC version.
 
Also, I respect that Gennadiy Korol takes his interviews seriously and has a professional-looking camera setup and a real microphone. That extra mile is really appreciated. It makes the interview much more of a pleasure to watch. He's also just a good interviewee. Very well spoken.
 
Yeah, this is what I suspect will largely happen: you'll have 'ultra' RT effect versions on the PC where the resolution of those effects is improved over the console, but I'm skeptical that, even with a huge RT advantage, developers will actually take advantage of it by adding new RT features wholly unique to the PC version.

Watch Dogs has already done that. Although it's possible that it's an outlier, I guess. Of course, the huge RT advantage doesn't have to be used to add new features and effects (although that would be my preference). It may simply be used to give more performance (and thus resolution) for a given level of effects.
 
Also, I respect that Gennadiy Korol takes his interviews seriously and has a professional-looking camera setup and a real microphone. That extra mile is really appreciated. It makes the interview much more of a pleasure to watch. He's also just a good interviewee. Very well spoken.

I couldn't work out if the good setup was because of the team's work-from-home nature or because he's an AV nerd. :D
 
Also, I respect that Gennadiy Korol takes his interviews seriously and has a professional-looking camera setup and a real microphone. That extra mile is really appreciated. It makes the interview much more of a pleasure to watch. He's also just a good interviewee. Very well spoken.

Yeah, this. Not everyone is going to have access to that equipment but it makes such a difference in interviews as opposed to using a laptop webcam.
 
When he was explaining how they lower the resolution of several of the layers because of DOF, and how you wouldn't be able to tell, I said to myself, "hmm, VRS would make this a lot easier and more straightforward," only to hear him say towards the end that VRS is one of those features he is looking forward to, as their layering system looks like a poor man's version of VRS lol.
 
PS5's average performance leads the 5700 by 25-30% in the video, and generally the 5700XT leads the 5700 by 15% or so. Therefore the PS5 should lead the 5700XT by 10% or more.
Percentages don't work exactly like that. If PS5 is the baseline, we would consider that 100%. If 5700 is 25% slower, it would be 75% of PS5. If 5700XT is 15% faster than 5700, that would put it at 86.25% of PS5, or almost 14% slower.
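(For what it's worth, here is a quick Python sketch using only the figures quoted in the two posts above, showing how much the answer shifts depending on how "leads by 25%" is read; neither reading is being endorsed here.)

```python
# Back-of-the-envelope check of two ways to read the quoted percentages
# (all figures come from the posts above; purely illustrative).

ps5 = 100.0  # treat PS5 as the 100% baseline

# Reading 1: "PS5 leads the 5700 by 25%" read as "the 5700 is 25% slower than PS5".
r5700_a = ps5 * (1 - 0.25)        # 75.0% of PS5
r5700xt_a = r5700_a * 1.15        # 86.25% of PS5, i.e. ~13.75% slower

# Reading 2: "PS5 leads the 5700 by 25%" read as "PS5 = 1.25 x the 5700".
r5700_b = ps5 / 1.25              # 80.0% of PS5
r5700xt_b = r5700_b * 1.15        # 92.0% of PS5, i.e. ~8% slower

for label, xt in [("'5700 is 25% slower' reading", r5700xt_a),
                  ("'PS5 = 1.25 x 5700' reading", r5700xt_b)]:
    print(f"{label}: 5700XT ends up at {xt:.2f}% of PS5 ({100 - xt:.2f}% slower)")
```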

Turing does FP and INT ops concurrently in the same clock, though. I don't think it's correct to assume the RTX 2060S is only 8.1 TFLOPS as it's shown in the DF comparison. Depending on the game, the INT rate hovers between 20-35%. I don't have ACV on my PC to check a frame in Nsight, but assuming the INT rate is at least 20%, the RTX 2060S's combined rate rises to 8.1 TFLOPS + 1.6 TOPS, which is about 5% shy of PS5's theoretical numbers.
INTs will never increase your FLOPs because INTs are integer while FLOPs are floating point.
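(For reference, a quick check of the arithmetic being disputed above, written in Python. This just reproduces the calculation from the Turing post under its own assumption of roughly 20% co-issued INT work; it is not an endorsement of adding INT throughput to FLOPS. The 10.28 TFLOPS PS5 figure is its advertised theoretical peak.)

```python
# Checking the arithmetic from the Turing post above (illustrative only; whether
# co-issued INT ops should be counted at all is exactly what is being disputed).

rtx_2060s_fp32 = 8.1      # TFLOPS, figure quoted above
int_share = 0.20          # assumption from the post: ~20% extra INT work co-issued with FP

combined = rtx_2060s_fp32 * (1 + int_share)   # ~9.7 T combined FP+INT ops per second

ps5_peak = 10.28          # TFLOPS, PS5's advertised theoretical peak

shortfall_pct = (1 - combined / ps5_peak) * 100
print(f"8.1 TFLOPS + 20% INT = {combined:.2f} vs PS5 {ps5_peak} TFLOPS "
      f"-> about {shortfall_pct:.1f}% shy")    # roughly the '5% shy' figure in the post
```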
Because the software created by developers may not necessarily expose it. Nvidia's higher geometry throughput has never materialized into anything outside of their over-tessellated GameWorks effects. It's not certain to me that their RT advantage will either.
Nvidia's tessellation advantage was mitigated by AMD "optimizing" their drivers to cap the maximum tessellation level to about a quarter of what Nvidia uses. There is a case to be made that those optimizations are valid because there is little to no visual difference, but as is often the case in PC games, when you start increasing resolutions beyond the scope of what was available at the time, you start seeing things you may not have seen at a lower resolution. I'm curious what some of those games look like with AMD's altered tessellation on and off at 8K. It's a clever trick, but it's also a false equivalency, because both cards aren't doing the same amount of work.

An example of something similar historically is when 3dfx had a feature called mipmap dithering. It dithered the transitions between mipmaps when bilinear filtering was enabled, giving you less noticeable steps between mip levels without the performance penalty of trilinear. This was fine for plenty of games when I had an older Voodoo card, but when I upgraded to a Voodoo3, that thing supported resolutions much higher than before, and while things weren't what I would call playable at those resolutions, I did mess around with some games at extreme resolutions for the time. I don't remember what the maximum 3D resolution was for the Voodoo3, but I do remember that my monitor could handle it. It might have been 1600x1200, or maybe just 1280x1024, but it was much higher than before, and one of the things I noticed was that the mipmap dithering was much more... identifiable, I think, is the right word. It still looked better than plain bilinear, but also clearly not as good as true trilinear, and there was a telltale pattern where the mip transitions were, whereas at lower resolutions trilinear and dithered bilinear had looked nearly identical.
 