Digital Foundry Article Technical Discussion [2021]

Question for @Dictator: do the 2060S or 2070S drop frames in scenarios where the consoles do not? It looks like the video shows a worst-case scenario where bandwidth contention is causing drops on the consoles, but no equivalent drops are presented from the PC side.

The video would be more representative if the cards you qualify as matching the consoles received similar analysis in areas where other contentions may be in play.

In the COD video you did (which I liked very much) you ended by comparing the PS5 against a PC running DLSS at a lower internal resolution, compared directly to the console. It struck me as a false equivalence; I wouldn't expect the same treatment for a checkerboard upscale comparison from console either.

Anyway, love your work. Just a couple of minor criticisms.
 
I'm still amazed at the amount of content @Dictator manages to get into these videos in such a short timespan. A superb level of detail as ever.

Really interesting observations on the memory bandwidth and just how much the UMA of the consoles must be impacting it.
I think this generation, probably more than the last one, overall system bandwidth is a little on the low side. That is especially the case for the PS5 and the XSS.
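
Out of curiosity about what sharing that bus could cost the GPU, here is a minimal back-of-the-envelope sketch; the CPU traffic figure and the contention penalty are pure assumptions, not measured values:

```python
# Toy model of unified-memory (UMA) bandwidth contention.
# The CPU traffic and contention penalty are illustrative assumptions, not measured figures.

TOTAL_BW_GBPS = 448.0      # PS5 GDDR6 peak bandwidth (public spec)
CPU_TRAFFIC_GBPS = 40.0    # assumed CPU/audio/IO traffic during a busy scene
CONTENTION_PENALTY = 0.10  # assumed efficiency loss from mixed CPU/GPU access patterns

def effective_gpu_bandwidth(total, cpu_traffic, penalty):
    """Bandwidth left for the GPU once shared clients and arbitration overhead are accounted for."""
    usable = total * (1.0 - penalty)   # lost to page conflicts / arbitration
    return usable - cpu_traffic

gpu_bw = effective_gpu_bandwidth(TOTAL_BW_GBPS, CPU_TRAFFIC_GBPS, CONTENTION_PENALTY)
print(f"GPU sees roughly {gpu_bw:.0f} GB/s of the {TOTAL_BW_GBPS:.0f} GB/s peak "
      f"({gpu_bw / TOTAL_BW_GBPS:.0%})")
```

Even with modest assumed numbers, a shared bus leaves the GPU meaningfully less than the headline figure, which is the point being made about the UMA designs.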
 
He touched on it briefly in the video. He said the PS5 maintained a better framerate than some of those other GPUs in other parts of the Miami stage, but he didn't profile it extensively. It is still possible to conclude that, at least based on the characteristics of this game/engine and where development tools are at the moment, the PS5/XSX will have situations where their shared/overall bandwidth adversely affects the performance you would expect from them. Devs will come up with ways to remedy this (reduced alpha effect resolution, fewer particles, etc.), ways to make these effects more performant, or alternatives such as GPU/compute-based particles.
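
To put a rough number on why reduced-resolution particles are such a common remedy, here is a toy fill-cost sketch; the layer count, screen coverage and bytes-per-blend values are illustrative assumptions, not engine data:

```python
# Rough bandwidth cost of alpha-blended particles at full vs. quarter resolution.
# Layer count, coverage and bytes-per-blend are illustrative assumptions only.

WIDTH, HEIGHT = 3840, 2160   # native 4K target
LAYERS = 8                   # assumed average transparent layers over covered pixels
COVERAGE = 0.5               # assumed fraction of the screen the effect covers
BYTES_PER_BLEND = 16         # assumed read + write of an FP16 render target per layer

def blend_gb_per_frame(width, height):
    """Approximate GB moved per frame just blending the particle layers."""
    covered_pixels = width * height * COVERAGE
    return covered_pixels * LAYERS * BYTES_PER_BLEND / 1e9

full = blend_gb_per_frame(WIDTH, HEIGHT)
quarter = blend_gb_per_frame(WIDTH // 2, HEIGHT // 2)   # half resolution on each axis

print(f"full-res particles   : {full:.2f} GB/frame (~{full * 60:.0f} GB/s at 60 fps)")
print(f"quarter-res particles: {quarter:.2f} GB/frame (~{quarter * 60:.0f} GB/s at 60 fps)")
```

Rendering the particle buffer at quarter resolution cuts that blending traffic by roughly 4x, which is why it shows up so often as an optimisation on bandwidth-limited hardware.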
 

He did look at other areas later in the video and highlighted that the 2060S sees lower performance elsewhere, although the 2070S still appears faster at the same point.


As for the DLSS comparison in the COD video: checkerboarding is unambiguously worse image quality than native, while DLSS Quality mode isn't.

I do agree that we should be aware of the non-DLSS performance level, but it's really only relevant as an academic comparison; with DLSS on you're getting both higher performance and arguably equal or better image quality, so why would you ever play with it off?

Perhaps the more important reason why DLSS should be considered valid is that Turing and Ampere are only designed to reach their peak performance levels with it enabled; this isn't a simple software solution but an aspect of their hardware design. Die space that could have been allocated to more CUDA cores, for example, has been given over to tensor cores, which sit idle without DLSS, so comparing against a system that is able to use all of its available resources to render the scene is arguably unfair.

Essentially, where DLSS is available, Turing is taking a different hardware-acceleration approach to reach (for all intents and purposes) the same end result, so I don't really see why it should be considered an invalid additional comparison point.
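
As a quick sanity check on where DLSS's performance headroom comes from, here is a pixel-count sketch; the ~2/3 per-axis scale for Quality mode is the commonly cited figure, so treat it as an assumption rather than a confirmed value for any particular game:

```python
# Shaded-pixel comparison: native 4K vs. DLSS Quality's internal resolution.
# The 2/3 per-axis scale factor is the commonly cited Quality-mode value (assumption).

NATIVE_W, NATIVE_H = 3840, 2160
QUALITY_SCALE = 2 / 3

internal_w = round(NATIVE_W * QUALITY_SCALE)   # ~2560
internal_h = round(NATIVE_H * QUALITY_SCALE)   # ~1440

native_pixels = NATIVE_W * NATIVE_H
internal_pixels = internal_w * internal_h

print(f"internal render target : {internal_w}x{internal_h}")
print(f"shaded pixels per frame: {internal_pixels / native_pixels:.0%} of native 4K")
```

Shading well under half the pixels and reconstructing the rest on the tensor cores is where the framerate advantage comes from, which is the crux of the "different hardware approach, same end result" argument above.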
 

I disagree; this is by far the best point at which to do a comparison. Another thing to take into account: we don't have unlocked framerate modes on consoles, which would be the perfect way to compare PC and consoles. I hope that in the future, once the PS5 gets its VRR patch, we will see unlocked framerate options more and more often.

This is the second time, after AC Valhalla, that @Dictator has used the lowest point of performance to compare PC and consoles, and that is perfect for this beginning of the generation. It will be easier later in the generation, when the consoles suffer more framerate drops, even if devs don't offer an unlocked framerate option. With VRR and backward compatibility modes, unlocked framerates are really an option I hope devs will use on consoles.
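
If unlocked modes do become common, the comparison described above is straightforward to do from captured frame times; a minimal sketch, with invented sample data:

```python
# Minimal comparison of two platforms from frame-time captures (milliseconds per frame).
# The sample data below is invented purely to show the calculation.
from statistics import mean

def summarise(frame_times_ms):
    """Average / minimum / maximum fps from a list of per-frame times in ms."""
    fps = [1000.0 / ft for ft in frame_times_ms]
    return mean(fps), min(fps), max(fps)

console_ms = [16.7] * 500 + [20.0] * 20    # mostly locked near 60, a few drops to 50
pc_ms      = [14.2] * 400 + [18.0] * 120   # unlocked, averaging in the mid-60s

for name, capture in (("console", console_ms), ("pc", pc_ms)):
    avg, lo, hi = summarise(capture)
    print(f"{name:7s}: avg {avg:5.1f} fps, min {lo:5.1f}, max {hi:5.1f}")
```

With both platforms unlocked, you get average, minimum and maximum in one pass instead of only being able to compare at the worst-case dips.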
 
Checkerboarding is a worse technique than DLSS, I agree. I'm yet to be convinced that the difference is noticeable at a normal viewing distance. And checkerboard rendering looks much better than earlier implementations, if the latest Resident Evil demo is anything to go by.

Regardless, I think it's important to have directly equivalent tests. I do like the comparisons between CBR and DLSS, though; those are definitely comparable.
 

I think you're wrong here. The Assassin's Creed comparison video wasn't showing where bandwidth was constrained; it was showing where the rendering load caused framerate drops. That meant we could see the difference in rendering between the cards.

The latest video demonstrates that the consoles may have situations where bandwidth is constrained, not where the rendering load is too high. It was essentially a comparison of bandwidth only.
 

But at the end he said the 2060 Super has lower performance in some places. An unlocked framerate would make it possible to compare properly against a PC GPU: we could take an average and see what the minimum and maximum framerates are.
 
I will respond more in full later, but the 2070S offered flat-out better performance than the PS5, not just in the bandwidth-constrained area that I highlight (PS5 drops frames on each camera cut while the 2070S does not). The 2060S goes below PS5, as I show in the video. XSX bests both the 2060S and 2070S, though not by a great deal over the 2070S, and the 2070S actually has less drastic camera-cut frame drops.
The reason I find the comparison in this video so compelling is that, like Watch Dogs Legion, we have most of the important settings that greatly affect framerate in the scene we are looking at being 100% there on PC - a mirror match for the big settings. It also just happens to be a scene where the bandwidth-sapping particles are fully on screen. The Ass Creed comparison, I unfortunately think, was another of those instances where particle quality was affecting performance greatly (fire effects on screen there). I also think the same for Call of Duty. So yes, I find this Hitman comp to be the truest measure of performance I have produced yet, next to WDL. It just happens to also be measuring overdraw performance. Maybe if more devs are nice to us we will get a game with an unlocked framerate and exact settings at some point, to do this again and also show more of the differences.
I think Blops and Ass Creed were disadvantaging PC in ways that cannot be overlooked, especially given the scenes I was realistically able to choose from to compare performance.
 
Quaz51, who posted on this site long ago, talked at an IGDA event with Christophe Balestra, back when he was working at Naughty Dog, about console development and first-party development. He said the job of first parties is to make the console shine and hide its weaknesses. I suppose compute-based particles and quarter- or half-resolution particles will be used by first-party titles. When I saw the memory bandwidth of the PS5 I was shocked; unlike many people, I wasn't concerned by the variable frequency at all. I was hoping for 512 GB/s.

It will be very interesting to compare the 6700 and 6700 XT to the PS5 and Xbox Series X; they will have the advantage of some Infinity Cache. If the PS5 GPU had some Infinity Cache, the 448 GB/s of memory bandwidth would have been fine. Rumors put the 6700 and 6700 XT at 384 GB/s of memory bandwidth, but with Infinity Cache.

EDIT:
Another point from the video: the impact of the shadow quality setting is so low that I am surprised to see a difference between PS5 and Xbox Series X there. Even before the video I suspected it might be a bug.
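
For anyone wondering how Infinity Cache is supposed to compensate for a narrower bus, here is a toy effective-bandwidth calculation; the hit rate and on-die cache bandwidth are assumptions, not AMD figures for these specific parts:

```python
# Toy effective-bandwidth model for a GPU with a large last-level cache.
# Hit rate and on-die cache bandwidth are assumptions for illustration.

DRAM_BW_GBPS = 384.0    # rumoured 6700-class GDDR6 bandwidth (from the post above)
CACHE_BW_GBPS = 1600.0  # assumed on-die Infinity Cache bandwidth
HIT_RATE = 0.55         # assumed fraction of GPU traffic served from the cache

effective = HIT_RATE * CACHE_BW_GBPS + (1.0 - HIT_RATE) * DRAM_BW_GBPS
print(f"effective bandwidth ~ {effective:.0f} GB/s "
      f"vs {DRAM_BW_GBPS:.0f} GB/s of raw DRAM bandwidth")
```

Even a moderate hit rate makes the blended figure look much larger than the raw DRAM number, which is why a cache-equipped 384 GB/s part isn't directly comparable to the consoles' straight GDDR6 buses.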
 
IOI gave the consoles' settings to DF; it is not a bug. Consoles have always been about tradeoffs. Something not measured in this video, but endemic last gen compared to PCs of the time, was anisotropic filtering: AF was always very low on consoles, but increasing its quality on PC barely made a difference.

Console development will always be a balancing act, making the best of the resources available for the optimal visual return. In this game at least, the consoles fall along the line of what we expect given the capabilities of their GPUs compared to the estimated PC equivalents: the PS5 sits somewhere between the 2060S and the 2070S, or in other words around a 2070/5700 XT, and the XSX sits above the 2070S, essentially ~2080. Some engines will favor AMD's architecture, and in such cases the consoles will swing above their Nvidia equivalents; some games will favor Nvidia's architecture and the opposite will be the case.
You are right in one area, which is that console exclusives will present these systems in the best light by working around their weaknesses while amplifying their strengths. This will be more so for the PS5, as Xbox is now a platform that extends to PCs, so the XSX/XSS will not have exclusives. But even then, most games are developed on PCs these days anyway. These consoles and their devkits are more or less customized PCs; we are way past the days of truly custom hardware that cannot be found elsewhere.
 
It seems the PS5 has performance similar to that of a 5700 XT in a closed box, with RT and less total bandwidth, which is something a lot of us had assumed and is far from disappointing.

It also seems Hitman is a better fit for Nvidia GPUs than we have seen lately in other games, hence the 2060S being that close; lately the 5700 XT is closer to a 2070S (in raster performance) than to a 2060S.
 
Yeah, the new DF video confirmed my thesis that the 5700 XT is faster than the PS5 in H3 ;d In Valhalla and the new COD it was the opposite situation; bandwidth could be the reason. Beyond this minimum-fps comparison I would also compare the average fps in scenes with drops and show a percentage table of it (I'm also interested in whether there are drops on the 5700 XT and the 2060/2070 Super in the scenes with the sniper zoom and in the Mendoza mission, where the PS5 holds 60 and the XSX drops even to 40).
 

It will probably depend on the engine, and once games begin to use the new feature set it will be to the advantage of the PS5 and Xbox Series X compared to the 5700 XT.
 
BTW, one more remark: I know that Dictator benchmarked GPU-limited scenes, but it's not as if a slower CPU affects minimum framerates by 0%.
 
Yes, that would have been an interesting comparison. If they had used only this alpha-heavy scene it would actually have made the PS5 look like better hardware than the XSX: the PS5 has a 46% better framerate (minimum; it's probably higher) while the XSX has 44% better resolution (a fixed number). It would also have been interesting to compare the XSX against PC (as native 4K is easily testable on PC) and deduce the PS5's performance against PC GPUs from that.

I am guessing they would have found the PS5 performing at about the level of a 2080, like in Valhalla.
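
One way to sanity-check those percentages is to fold resolution and framerate into a single pixels-per-second figure; the resolutions and minimum framerates below are approximations drawn from the discussion, and this obviously measures throughput in one worst-case scene rather than the hardware overall:

```python
# Pixels-per-second comparison for the alpha-heavy scene described above.
# Resolutions and minimum framerates are approximations taken from the discussion.

def pixel_throughput(width, height, fps):
    return width * height * fps

ps5 = pixel_throughput(3200, 1800, 60)   # PS5 reportedly holds 60 at a lower resolution
xsx = pixel_throughput(3840, 2160, 41)   # XSX renders native 4K but dips to ~40 fps here

print(f"PS5: {ps5 / 1e6:.1f} Mpix/s, XSX: {xsx / 1e6:.1f} Mpix/s")
print(f"ratio PS5/XSX in this worst case: {ps5 / xsx:.2f}x")
```

With those assumed numbers the two consoles land almost level on raw pixel throughput in that one scene, which is roughly what the 46% vs 44% framing implies.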
 
Checkerboarding is a worse technique than DLSS, I agree. I'm yet to be convinced that the difference is noticeable at a normal viewing distance.

What's a normal viewing distance though? At what screen size? Is it the same for PC gamers as for console gamers? I don't disagree that CBR and DLSS could be indistinguishable from one another beyond a certain screen size/distance limit, but the same can also be said of 1440p vs 4K, or even 1080p vs 4K, and we would never argue equivalence between those resolutions on that basis. Asking whether the higher image quality is worth it, or noticeable at a certain distance, is a completely different argument from asking whether there is an image quality difference in the first place. The reason I'd argue that DLSS (Quality mode specifically) is valid to compare with native rendering is that there is arguably no image quality difference on average, or if there is, it can often be in favour of DLSS.
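
Since "normal viewing distance" is doing a lot of work in that argument, here is a small angular-resolution sketch; the screen sizes and distances are arbitrary examples, and the ~60 pixels per degree often quoted for 20/20 vision is only a rule of thumb:

```python
# Pixels per degree of visual angle for a given screen size, resolution and distance.
# Example screens and distances are arbitrary; ~60 px/deg is the usual 20/20 rule of thumb.
import math

def pixels_per_degree(h_pixels, diagonal_in, distance_m, aspect=16 / 9):
    """Horizontal pixels divided by the horizontal field of view the screen subtends."""
    width_in = diagonal_in * aspect / math.hypot(aspect, 1)   # screen width from diagonal
    width_m = width_in * 0.0254
    fov_deg = 2 * math.degrees(math.atan(width_m / (2 * distance_m)))
    return h_pixels / fov_deg

for label, px, diag, dist in [
    ("4K TV, 65in, 3 m couch",       3840, 65, 3.0),
    ("4K monitor, 27in, 0.7 m",      3840, 27, 0.7),
    ("1440p monitor, 27in, 0.7 m",   2560, 27, 0.7),
]:
    print(f"{label:28s}: {pixels_per_degree(px, diag, dist):5.1f} px/deg")
```

The couch-and-TV case sits far above the eye's resolving limit while the desktop cases sit much closer to it, which is exactly why "normal viewing distance" means something different for console and PC players.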
 

If you feel it's fair to compare DLSS to native rendering, then I assume you're also okay with comparing CBR to native rendering; both are comparable to native. Leadbetter has said he was challenged by Cerny and was only able to tell the difference once he had his nose up to the screen.

If we accept some upscaling techniques that are platform-specific and not others, then we're shifting into confirmation bias.
 
The reason I'd argue that DLSS (Quality mode specifically) is valid to compare with native rendering is that there is arguably no image quality difference on average, or if there is, it can often be in favour of DLSS.
That strongly depends on the title; in Cyberpunk, DLSS is far from native resolution. And in the case of benchmarking, there is no reason to compare different resolution settings.
 
On a Jaguar CPU, maybe in some scenes, but not on Zen 2 CPUs. The game runs mostly at 60fps on the Pro at 1080p, and this is a PSVR-ready game. As for the severe drops seen on the XSX when plenty of alpha effects are on screen, I find them very surprising, and I'm surprised DF didn't take any issue with them. This is an XB1 game and the XSX should have no trouble running this part at a locked 60fps. Those drops clearly show some kind of hardware bottleneck on the XSX when there are plenty of alphas (and not even that many; it's still an XB1 game). We already saw similar problems with alpha effects in AC Valhalla and COD (on XSX the smoke coming from the gun is significantly reduced, almost nonexistent, compared to the PS5 version).

I was thinking the same thing with regard to alpha effects in this title and ACV. Given the bandwidth advantage of the XSX, it doesn't feel like it should drop so significantly in these areas compared to the PS5. Is it possible these early games are dipping into the "slow" pool of the XSX memory, either because developers haven't had time to optimize or because the "tools" are still not mature enough in this regard?
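
As a rough illustration of how spilling GPU traffic into the slower pool could hurt, here is a crude linear blend of the two published Series X bandwidth figures; the traffic split is a pure assumption, and real behaviour depends on how accesses interleave across the bus:

```python
# Toy blended-bandwidth model for the Series X split memory pools.
# 560/336 GB/s are the published pool speeds; the traffic split is an assumption.

FAST_BW, SLOW_BW = 560.0, 336.0   # GB/s: 10 GB "GPU optimal" pool / 6 GB standard pool
for slow_fraction in (0.0, 0.1, 0.25):
    blended = (1.0 - slow_fraction) * FAST_BW + slow_fraction * SLOW_BW
    print(f"{slow_fraction:4.0%} of GPU traffic in the slow pool -> ~{blended:.0f} GB/s blended")
```

Even a modest share of GPU accesses landing in the standard pool pulls the blended figure noticeably below the 560 GB/s headline, which would fit the theory that early titles are not yet keeping all bandwidth-heavy resources in the fast pool.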
 