Digital Foundry Article Technical Discussion [2025]

In another comparison video, it looks like the PS5 Pro performance mode has a lower grass draw distance than the XSX performance mode.
Hmm, is this also the case versus the standard PS5? If so, then we can assume it's deliberate for the 5Pro. If it's the same as the PS5, then it may have been an oversight on this one, and they simply borrowed the settings from the PS5.
 
Ray-traced global illumination makes a big difference in Assassin's Creed Shadows on consoles, though it locks the frame rate to 30fps.

As usual, great work here by DF. Looking forward to the rest of their analysis of the various platforms. For readers just creeping on this forum: the reason they don't release all the videos at once is that analyzing so many platforms is a ton of work, so they have to break it up like this.

I think I finally realized why DF does a better job of marketing games than the companies themselves: we get to see the gameplay thoroughly analyzed and to appreciate details most people would have run right past. That helps, because how a game presents itself is about much more than just fps and resolution.
 
5Pro Analysis from Ollie
Disagree that it's one of the best upgrades on Pro. In the 60fps mode (even though it's nice to have RTGI), the resolution is low and the image quality is not good enough. The two other modes, which are the way to go, are very similar to the base PS5.
 
5Pro Analysis from Ollie
One really needs to ask questions of the AC devs.... In the balanced and quality modes, why is the performance of the base PS5 relatively similar to that of the Pro? The resolutions are relatively similar, and the settings are allegedly similar. The Pro is purportedly 2-4x faster in ray tracing and 45% faster in raster, yet the outcomes in those modes are eerily similar. It's a real headscratcher....
 
One really needs to ask questions of the AC devs.... In the balanced and quality modes, why is the performance of the base PS5 relatively similar to that of the Pro? The resolutions are relatively similar, and the settings are allegedly similar. The Pro is purportedly 2-4x faster in ray tracing and 45% faster in raster, yet the outcomes in those modes are eerily similar. It's a real headscratcher....
It comes down to where the bottlenecks are. You can't make full use of your raster and ray tracing hardware if you don't have the bandwidth to feed it.
 
It comes down to where the bottlenecks are. You can't make full use of your raster and ray tracing hardware if you don't have the bandwidth to feed it.
Are we sure that bandwidth is the limiting factor? I don't know, just asking... The hardware performance differential doesn't manifest itself in the game outside of the perf mode, so it's certainly puzzling. Another question worth asking is what work, if any, was done to mitigate/alleviate bottlenecks. I can't imagine they spent too much time on the Pro version with all the versions of the game they needed to put out.
 
Are we sure that bandwidth is the limiting factor? I don't know, just asking... The hardware performance differential doesn't manifest itself in the game outside of the perf mode, so it's certainly puzzling. Another question worth asking is what work, if any, was done to mitigate/alleviate bottlenecks. I can't imagine they spent too much time on the Pro version with all the versions of the game they needed to put out.
We don't know. I'm just saying that compute can't do anything without bandwidth. So despite having way more compute, if you don't have the bandwidth to take advantage of it, the compute sits idle. Typically speaking, compute sits idle anyway: it takes far more cycles to move data from memory into cache and then into registers and back out than it ever does for the compute to do its job.

The 5Pro has about the same bandwidth as the XSX, so any rendering path that hits VRAM hard is going to hamstring the utilization of its increased compute power.
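To put some rough numbers on that (purely a back-of-the-envelope sketch: the TFLOPS and bandwidth figures below are approximate public specs, not measurements, and real workloads are far messier), here's the roofline-style "machine balance" each box would need to stay compute-bound:

```python
# Rough roofline-style comparison: how many FLOPs per byte of memory
# traffic does each GPU need before compute, rather than bandwidth,
# becomes the limiting factor? Spec figures are approximate public
# numbers (single-issue FP32 for the Pro), not measurements.
SPECS = {
    "PS5":     {"tflops": 10.3, "bw_gbps": 448},
    "XSX":     {"tflops": 12.2, "bw_gbps": 560},  # fast 10GB pool only
    "PS5 Pro": {"tflops": 16.7, "bw_gbps": 576},
}

for name, s in SPECS.items():
    # Machine balance: FLOPs available per byte moved. Workloads below
    # this arithmetic intensity are bandwidth-bound and leave the extra
    # compute idle, exactly the scenario described above.
    balance = (s["tflops"] * 1e12) / (s["bw_gbps"] * 1e9)
    print(f"{name:8s} needs ~{balance:.0f} FLOPs/byte to stay compute-bound")
```

The Pro comes out worst on that ratio: versus the base PS5, compute is up far more than the roughly 28% bandwidth bump, so the more a rendering pass leans on VRAM, the less of the compute uplift it can actually cash in.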
 
Disagree that it's one of the best upgrades on Pro. In the 60fps mode (even though it's nice to have RTGI), the resolution is low and the image quality is not good enough. The two other modes, which are the way to go, are very similar to the base PS5.

It's not using PSSR by the sounds of it? That could make a huge difference to image quality even though it's roughly the same base res.
 
It's not using PSSR by the sounds of it? That could make a huge difference to image quality even though it's roughly the same base res.
It's not using it, and I'm not sure it would be a good idea: 1) other Ubi games like Avatar and Star Wars Outlaws didn't scale well with PSSR; 2) RTGI seems to eat the Pro's whole compute advantage in performance mode.
 
While Assassin's Creed's lighting advances with each game, the animations seem to regress, to the point where they look like they came from the PS360 era.
 
One really needs to ask questions of the AC devs.... In the balanced and quality modes, why is the performance of the base PS5 relatively similar to that of the Pro? The resolutions are relatively similar, and the settings are allegedly similar. The Pro is purportedly 2-4x faster in ray tracing and 45% faster in raster, yet the outcomes in those modes are eerily similar. It's a real headscratcher....

It was hard for me to get good pixel counts in the balanced mode owing to capture limitations, but the resolutions on Pro were definitely coming back higher - just can't say how much. For the quality mode, you are getting additional RT there as well, even if the resolutions are similar.
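For anyone wondering what pixel counting actually involves: on a clean nearest-neighbour-style upscale you can, in principle, measure the runs of repeated pixels along an edge and divide them out. The sketch below is purely illustrative (it's not DF's tooling, the function name is made up, and modern temporal upscalers defeat naive counting like this, which is partly why hand-picked edges and clean captures matter):

```python
import random

# Toy pixel counter: infer the internal horizontal resolution of a
# nearest-neighbour upscale by measuring how long the runs of repeated
# pixels are along one row. Hypothetical sketch only: real counting is
# done by hand on high-contrast edges in carefully chosen frames.
def estimate_source_width(row, output_width):
    runs = 1  # count distinct runs of identical pixel values
    for a, b in zip(row, row[1:]):
        if a != b:
            runs += 1
    avg_run_length = len(row) / runs
    return output_width / avg_run_length

# Synthetic example: a 640-wide row of noise scaled 3x to 1920 wide.
src = [random.randrange(256) for _ in range(640)]
scaled = [px for px in src for _ in range(3)]
print(estimate_source_width(scaled, 1920))  # prints roughly 640
```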
 
It was hard for me to get good pixel counts in the balanced mode owing to capture limitations, but the resolutions on Pro were definitely coming back higher - just can't say how much. For the quality mode, you are getting additional RT there as well, even if the resolutions are similar.
Thanks for the additional clarification.
 
I've asked this before, but no one has ever been able to answer it. On this forum during the PS4 days, someone posted a Sony slide of PS4 bandwidth showing a non-linear reduction in GPU bandwidth as CPU bandwidth use increases. My question is: has this been improved, or has the faster CPU compounded the issue? I don't have access to data to answer that, but just like on PC, there are pros and cons to the consoles' architecture as well, and these days, with exclusive/dedicated engines being used less, maybe efforts to work around the cons are also being neglected.

It also makes me raise an eyebrow when we see something struggle performance-wise while the resolution drops to something like 720p. If it appears to be both CPU- and GPU-bound, is it actually bandwidth-bound? Does ML work compound it? Or RT work, which we know is bandwidth-heavy on PC and hits the CPU harder? We see comparisons along the lines of "this console has as much bandwidth as GPU X, so it should perform like it" (and vice versa on other metrics) while completely neglecting the different memory systems, which may help or hinder certain workloads. AMD and Nvidia increasing cache sizes probably also makes bandwidth-vs-bandwidth comparisons less useful. Either way, these questions are above my pay grade, and we can't pop Linux on the boxes to try to answer them ourselves.

edit: sigh, I'm ashamed I just missed the chance to use bespoke in the DF thread. I failed Rich. Anyway, a toy sketch of the contention idea is below.
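Here's that toy sketch: a made-up model of the kind of non-linear falloff the PS4 slide showed, where GPU-visible bandwidth drops faster than the raw bytes the CPU consumes. The penalty factor is invented purely for illustration; the real arbitration behaviour isn't public:

```python
# Toy model of CPU/GPU contention on a shared memory bus. The PS4-era
# slide suggested GPU-usable bandwidth falls faster than the raw bytes
# the CPU consumes, plausibly because CPU accesses (small, latency-
# sensitive) break up the GPU's long efficient bursts. The penalty
# factor below is invented for illustration, not a measured value.
TOTAL_BW = 448.0   # GB/s, base PS5 figure
PENALTY = 2.5      # hypothetical: each CPU GB/s costs this much GPU BW

for cpu_bw in (0, 5, 10, 15, 20):
    gpu_bw = TOTAL_BW - PENALTY * cpu_bw
    print(f"CPU {cpu_bw:2d} GB/s -> GPU sees ~{gpu_bw:.0f} GB/s "
          f"({PENALTY * cpu_bw - cpu_bw:.0f} GB/s lost beyond the CPU's own share)")
```

If anything like that still holds on current machines, a faster CPU generating more traffic per frame would compound the issue rather than improve it, which is exactly the open question.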
 
I've asked this before, but no one has ever been able to answer it. On this forum during the PS4 days, someone posted a Sony slide of PS4 bandwidth showing a non-linear reduction in GPU bandwidth as CPU bandwidth use increases. My question is: has this been improved, or has the faster CPU compounded the issue? I don't have access to data to answer that, but just like on PC, there are pros and cons to the consoles' architecture as well, and these days, with exclusive/dedicated engines being used less, maybe efforts to work around the cons are also being neglected.

It also makes me raise an eyebrow when we see something struggle performance-wise while the resolution drops to something like 720p. If it appears to be both CPU- and GPU-bound, is it actually bandwidth-bound? Does ML work compound it? Or RT work, which we know is bandwidth-heavy on PC and hits the CPU harder? We see comparisons along the lines of "this console has as much bandwidth as GPU X, so it should perform like it" (and vice versa on other metrics) while completely neglecting the different memory systems, which may help or hinder certain workloads. AMD and Nvidia increasing cache sizes probably also makes bandwidth-vs-bandwidth comparisons less useful. Either way, these questions are above my pay grade, and we can't pop Linux on the boxes to try to answer them ourselves.
It is an unavoidable side effect of shared memory. Faster CPUs tend to use more bandwidth, as you stated. Infinity Cache or 3D V-Cache would help quite a bit, but it's prohibitively expensive in terms of die size. Short of a breakthrough in figuring out how to further shrink SRAM, I wouldn't expect it to materialize in a console.
 
A faster CPU in itself wouldn't necessarily use more bandwidth than a slower one for a given workload. If it's computing the same workload over a similar time frame, then in theory the data access rate would be similar to the slower CPU's over that same time frame.

An alternative would be to move away from a completely shared memory pool. The PS5 Pro does this to an extent with access to dedicated DDR5 for the CPU, although I'm not aware of how isolated that access actually is in terms of resource contention.
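A quick back-of-the-envelope version of that first point, with every number invented for illustration:

```python
# If the frame budget is fixed, average bandwidth is set by how many
# bytes the workload touches per frame, not by how fast the CPU runs.
# All numbers here are invented for illustration.
BYTES_PER_FRAME = 0.5e9   # hypothetical CPU-side memory traffic per frame
FRAME_TIME = 1 / 60       # seconds, for a 60fps budget

avg_bw = BYTES_PER_FRAME / FRAME_TIME
print(f"average: {avg_bw / 1e9:.0f} GB/s")  # identical for fast or slow CPU

# A faster CPU finishes the same work earlier in the frame, so the same
# traffic arrives as a shorter, more intense burst: the contention
# profile changes even though the per-frame average does not.
busy_fraction = 0.5       # hypothetical: the fast CPU is busy half the frame
burst_bw = avg_bw / busy_fraction
print(f"burst while busy: {burst_bw / 1e9:.0f} GB/s")
```

Of course, if the faster CPU is used to do more per frame rather than finish the same work sooner, the average goes up too.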
 