Digital Foundry Article Technical Discussion [2024]


I don't believe Rich mentions the CPU he used, however.
This kind of testing must be difficult and I really appreciate it. Unfortunately it's hard to find a "console equivalent" CPU. Even a Ryzen 3600 tends to clobber the console CPUs.
 

I don't believe Rich mentions the CPU he used, however.
Why didn't they use a 4700S on PC? This is a totally useless comparison when fps is >= 60, as it's very likely the game is CPU limited here.

Comparing when games are < 30fps is OK (like in Cyberpunk, where the PS5 actually trashes the 6700), but otherwise it's a very flawed comparison. Professional GPU benchmarks have been done for decades on PC, the first rule obviously being to use similar CPUs...
 
PS5 enjoys the same performance advantage in terms of fill rate over the 6700 as it does over the XSX.

And I would assume that's what's helping it to beat out the 6700 at higher resolutions.
 
Why didn't they use a 4700S on PC?

Because it's not possible to run that CPU/platform at full performance with modern GPUs. Also, this is clearly a GPU performance comparison, and so GPU performance should be isolated as much as possible.

This is a totally useless comparison when fps is >= 60, as it's very likely the game is CPU limited here.

Why is it "very likely" to be CPU limited on PS5? Are any of these games running in native 4K in their 60fps modes on PS5? Because if they are CPU limited like you say - they should be. Or are they in fact running at a reduced resolution compared to their quality modes? And if they are, why would developers choose a resolution (especially if DRS is involved) that leaves excess GPU performance on the table at CPU limited framerates?

Comparing when games are < 30fps is OK (like in Cyberpunk, where the PS5 actually trashes the 6700),

It trashes the 6700 in that scene because of the 6700's clearly broken RT implementation there (i.e. heavily optimised for Nvidia on the PC side). Turn RT off on both platforms and the 6700 is faster. In the only other RT based game in the suite (Avatar) the 6700 is actually a bit faster.

but otherwise it's a very flawed comparison. Professional GPU benchmarks have been done for decades on PC, the first rule obviously being to use similar CPUs...

When it's possible to use identical CPUs... which it obviously is in the same PC. When comparing across platforms that's not possible, so isolating GPU performance is the best approach. The whole point of a fixed platform like a console is that it allows the developer to scale the game to match the hardware capabilities. And scaling graphical settings, specifically resolution, is a relatively trivial way of ensuring that the GPU is always working to its maximum potential.
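To make that concrete, here's a minimal, purely illustrative sketch of the kind of DRS loop that spends any spare GPU time on resolution (the function name, thresholds and step sizes are all made up for illustration, not any engine's actual implementation):

```python
# Hypothetical sketch of a console-style dynamic resolution scaler: if the GPU
# finishes well under its frame budget, raise the render resolution; if it
# overruns, lower it. All names and numbers here are illustrative only.

TARGET_FRAME_MS = 16.6           # 60 fps frame budget
MIN_SCALE, MAX_SCALE = 0.7, 1.0  # fraction of the maximum render resolution

def update_resolution_scale(scale, gpu_frame_ms):
    """Nudge the render-resolution scale so GPU time tracks the frame budget."""
    if gpu_frame_ms > TARGET_FRAME_MS:          # GPU over budget -> drop resolution
        scale -= 0.02
    elif gpu_frame_ms < TARGET_FRAME_MS * 0.9:  # GPU headroom -> spend it on pixels
        scale += 0.01
    return max(MIN_SCALE, min(MAX_SCALE, scale))

# A GPU-bound frame taking 18 ms pulls the scale down; a 13 ms frame pushes it up.
print(round(update_resolution_scale(0.90, 18.0), 2))  # 0.88
print(round(update_resolution_scale(0.90, 13.0), 2))  # 0.91
```

The point being: as long as a loop like this (or the developer's fixed choice of resolution) is doing its job, the console GPU is rarely left idling behind the CPU.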
 
PS5 enjoys the same performance advantage in terms of fill rate over the 6700 as it does over the XSX.

And I would assume that's what's helping it to beat out the 6700 at higher resolutions.

I would assume it's more the PS5's significant memory bandwidth advantage. Yes, the 6700 has the Infinity Cache, but particularly given its limited capacity compared to the high-end parts, its utility at higher resolutions is going to be quite limited. After all, in the PC space this card is designed for lower resolutions and its IC is sized accordingly.
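For rough context, using the commonly quoted specs (treat the exact figures as my assumption rather than gospel), the PS5's 256-bit GDDR6 at 14Gbps against the 6700's 160-bit at 16Gbps works out to roughly a 40% raw bandwidth advantage, before the Infinity Cache enters the picture:

```python
# Back-of-the-envelope raw bandwidth from commonly quoted specs (assumed, not
# verified here): GB/s = (bus width in bits / 8) * data rate in Gbps.
def gddr6_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

ps5_bw    = gddr6_bandwidth_gbs(256, 14)  # ~448 GB/s, shared with the CPU
rx6700_bw = gddr6_bandwidth_gbs(160, 16)  # ~320 GB/s, plus the Infinity Cache
print(ps5_bw, rx6700_bw, round(ps5_bw / rx6700_bw, 2))  # 448.0 320.0 1.4
```

Of course the PS5's pool is shared with its CPU and the 6700 has its cache to lean on, so the real-world gap is messier than 1.4x, but at higher resolutions where the cache hit rate drops, that raw number starts to matter.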
 
If Richard performed this comparison with different CPUs, this video has no value. If you're going to run a test, you should control all the parameters you can control to isolate the test subject.
 

I don't believe Rich mentions the CPU he used, however.

More interesting than I would have assumed - some big variances, especially in the PC's favour, that I wouldn't have expected.

Why didn't they use a 4700s on PC?

Because they likely wanted to test 'equivalent' PC components that PC gamers would actually, you know, have. Even at the time of the PS5's debut, no PC gamer was going to be using a 4700S. Might as well complain the PC isn't using an APU.

If Richard performed this comparison with different CPUs, this video has no value. If you're going to run a test, you should control all the parameters you can control to isolate the test subject.

Complaining about the CPU in this video as if it's a PC hardware enthusiast channel comparing the latest GeForce/Radeon releases is missing the point of the video; this is not Hardware Unboxed. With the games tested at the specific resolutions and settings they used, there is going to be no difference between an older 6-core/12-thread CPU and a 7800X3D. I don't see why this is tripping up so many people in this thread - look at the games and the settings/resolutions used, people. It's absolutely not a factor!

What this video did do for me, though, is illustrate that even with roughly 'equivalent' hardware, there can be significant variability in the performance of individual releases - you've got TLOU's poor PC performance on one hand, and Hitman 3 for the 6700 on the other. Which probably just goes to show you shouldn't use the quality of individual ports to make proclamations about that difference 'revealing' supposed architectural advantages of one platform over the other (such as, say, explaining HZD's early PC performance as being due to the bottlenecks of PCIe).
 
Complaining about the CPU in this video as if it's a PC hardware enthusiast channel comparing the latest GeForce/Radeon releases is silly. For the games tested at the specific resolutions and settings they used, there is going to be no difference between an older 6-core/12-thread CPU and a 7800X3D. I don't see why this is tripping up so many people in this thread - look at the games and the settings/resolutions used, people. It's absolutely not a factor!
My whole problem with this post is the complete disregard for the scientific method of testing. I'm not even going to address a claim as shocking as the one in bold. The onus is on you to prove it. You can't make such a claim and expect us to accept it as fact. I can find several 6-core/12-thread CPUs that would perform worse with the RX 6700 than the same GPU paired with a 7800X3D.
What this video did do for me, though, is illustrate that even with roughly 'equivalent' hardware, there can be significant variability in the performance of individual releases - you've got TLOU's poor PC performance on one hand, and Hitman 3 for the 6700 on the other. Which probably just goes to show you shouldn't use the quality of individual ports to make proclamations about that difference 'revealing' supposed architectural advantages of one platform over the other (such as, say, explaining HZD's early PC performance as being due to the bottlenecks of PCIe).
This video had the potential to be useful if Richard had chosen to focus on the CPU and API differences between consoles and PC. He could have used the 6700 as a control, locking it to both the upper and lower bounds of its frequency. Then it may have been rather interesting. However, as a GPU test, it's a waste of time. If the testing methodology is flawed, it's safe to assume that the data could be flawed. As a result, the data is worth nothing. What you're inferring from this video reads to me as confirmation bias - nothing more, nothing less, as we cannot derive that conclusion from potentially flawed data.
 
My whole problem with this post is the complete disregard for the scientific method of testing. I'm not even going to address a claim as shocking as the one in bold. The onus is on you to prove it. You can't make such a claim and expect us to accept it as fact. I can find several 6-core/12-thread CPUs that would perform worse with the RX 6700 than the same GPU paired with a 7800X3D.

Not in the games at the settings they tested. You actually regged a new account for this?

This video had the potential to be useful if Richard had chosen to focus on the CPU and API differences between consoles and PC.

So you wanted a CPU test then, not really a GPU test - and by "CPU test", in this case you basically wanted Rich to test a CPU old enough, or far enough down the stack, that it would bottleneck a 6700 in the games they tested. That's going to be a pretty old CPU!

I agree that it would be interesting to some degree to see at what point, with certain games, a PC CPU would start becoming a bottleneck for PS5-equivalent-class GPUs, but you've even got people in this thread complaining that the deck was stacked unfairly in favour of the PC because even some of the cheapest value PC CPUs, like the 5600, are still faster than the PS5 CPU. No one is going to be surprised that a 4-core CPU, or even a 6-core without RT, is a major bottleneck in modern games on the PC, so I'm not sure of the utility of such an investigation. Rather, the utility here is in showing what a low/midrange ~$300 GPU got you in the PS5's timeframe, and how modern equivalents (such as the 4060) compare now.
 
Not in the games at the settings they tested.
I strongly disagree. There were several games running at or above 60fps with no upper bound set. The video below shows a 6700 XT being CPU limited when comparing a 3600 to a 5600. Like I said, I cannot accept your previous claim as true when it can be so easily disproven.

 
But the PS5 still outpaces the 3060 with RT; in fact you need the 4060 just to equal the PS5's RT performance. It's an odd one.

I'd probably put this down to a combination of the base (non-RT) performance of the PS5 being higher, the PS5-level RT effects being quite light and so not really stretching the Nvidia RT hardware, and potentially a more efficient RT implementation on the console side thanks to the previously discussed API advantages (which I do think can be relevant in the real world in the case of RT, given DXR's well-publicised limitations).

All that said though, I wouldn't expect for a moment that those performance deltas would hold if it were possible to ramp the console's RT settings up to the higher settings available on PC.
 
My whole problem with this post is the complete disregard for the scientific method of testing. I'm not going to even address the claim as shocking as the one in bold. The onus is on you to prove it. You can't make such a claim and expect us to accept it as fact. I can find several 6 core 12 thread cpus that would perform worse with the RX6700 than with the same gpu paired with a 7800x3d.
The scientific method isn't the only way to test things and produce useful comparisons. If Richard is wanting to compare just the GPU, he needs to eliminate other contributory factors from the PC. That'd mean using a more powerful CPU to ensure games were GPU limited before they were CPU limited.

In short, it's not an attempt to compare a similarly specced PC in its entirety - it's focusing on just the GPU, and so a different methodology is required, which makes perfect sense.

(Where is the CPU even mentioned? I didn't catch a description of the whole rig in the write-up or when skimming the vid.)
 
The scientific method isn't the only way to test things and produce useful comparisons. If Richard is wanting to compare just the GPU, he needs to eliminate other contributory factors from the PC. That'd mean using a more powerful CPU to ensure games were GPU limited before they were CPU limited.

In short, it's not an attempt to compare a similarly specced PC in its entirety - it's focusing on just the GPU, and so a different methodology is required, which makes perfect sense.

(Where is the CPU even mentioned? I didn't catch a description of the whole rig in the write-up or when skimming the vid.)
The CPU was never mentioned, I believe, and the absence of that information speaks loudly... What you're saying is correct, but it still adheres to the scientific method of testing. Unfortunately, you can't actually verify whether the console is CPU limited or GPU limited at all. You can guess, but that's it. Because console CPUs are so weak, the API is so different, and the memory subsystem is different, you just cannot say at all. Those are three potential variables you cannot control. Not only can you not control them, you can't even properly monitor those elements to know when they're affecting the experiment. The test methodology was not good, and I was surprised because DF usually nails these things.
 
I strongly disagree. There were several games running at or above 60fps with no upper bound set. The video below shows a 6700 XT being CPU limited when comparing a 3600 to a 5600. Like I said, I cannot accept your previous claim as true when it can be so easily disproven.


I'm not sure what you're trying to show here. The only game from the video that you posted which was also used in Richard's comparison is Cyberpunk. Your video shows that the game is CPU limited (i.e. not GPU limited) on a Ryzen 3600 at 1080p on a 6700 XT.

How does that prove that a weaker GPU (6700) would also not be GPU limited at the much higher resolution of 1800p?

The likelihood that a 6700 will be GPU limited (i.e. not CPU limited) at 1800p is far higher than the likelihood that the more powerful 6700 XT will be GPU limited at 1080p.
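To put rough numbers on that (simple pixel arithmetic, assuming straight 16:9 frames at those output resolutions and ignoring DRS):

```python
# Per-frame pixel counts behind the 1080p vs 1800p comparison (16:9 assumed).
pixels_1080p = 1920 * 1080   # ~2.07 million pixels
pixels_1800p = 3200 * 1800   # ~5.76 million pixels
print(round(pixels_1800p / pixels_1080p, 2))  # ~2.78x the pixel work per frame
```

So the weaker card is being asked to shade nearly three times the pixels per frame that the 6700 XT was handling in your video, while the CPU-side work per frame stays broadly the same. That's why a CPU limit at 1080p says very little about 1800p.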
 
I strongly disagree. There were several games running at or above 60fps with no upper bound set. The video below shows a 6700 XT being CPU limited when comparing a 3600 to a 5600. Like I said, I cannot accept your previous claim as true when it can be so easily disproven.


So you wanted a different video - you wanted one of those "Let's build the cheapest PS5 PC equivalent we can find in Nov 2020" videos. There's some utility (I guess) in those kinds of videos, and heck, Rich kind of did that at the end by showing how (badly) a 270X aged in relation to the PS4, but this was not really the point of the video.

This was comparing the GPU with the most similar architecture to the one used in the PS5. Secondly, this GPU has very similar-performing counterparts available on the market today in its price class (sadly), and even its bigger brother - from that same generation! - is still available on the market in that $300 category. There is no point in bottlenecking it with a CPU like the 3600, which has not been available without massive markups (due to it being discontinued) for years.

An R5 5600 is $130 today - now if you want to argue that the games tested, at the settings/resolutions Rich used, were also bottlenecked to any significant degree by a 5600, I'd need to see some actual evidence, but if so I would at least accept that as a fair critique if even that CPU is skewing the results. But I doubt it, and if you want to point to a video where a far older CPU - one that you cannot buy today and that can be beaten by quad-core, sub-$100 CPUs anyone would balk at using in a CPU comparison - is somehow more illustrative of the 'scientific method', be my guest. But I think that's pure pedantry and not within the scope of what this video was trying to show.

If you want to argue the PS5's GPU was given short shrift by this video because of a potential limitation by its CPU, then I might have accepted that if Rich had done a bunch of tests with 120fps modes in games that support them and the PS5 routinely failed to hit 120 where the PC breezed past it - that could have indicated a CPU bottleneck. But he didn't do that; in fact the only console game that actually reaches beyond 60fps was Monster Hunter Rise, which is running at 4K. I guess it's possible it's not hitting 120 at that res because of the CPU, but I'm pretty comfortable assuming not.
 
The CPU was never mentioned, I believe, and the absence of that information speaks loudly...

It doesn't speak loudly about anything. We can safely assume that Digital Foundry used a CPU on the PC side that was sufficiently powerful to isolate GPU performance - which is what they were testing.

What you're saying is correct, but it still adheres to the scientific method of testing. Unfortunately, you can't actually verify whether the console is CPU limited or GPU limited at all. You can guess, but that's it. Because console CPUs are so weak, the API is so different, and the memory subsystem is different, you just cannot say at all. Those are three potential variables you cannot control. Not only can you not control them, you can't even properly monitor those elements to know when they're affecting the experiment.

You don't need to control these elements. Game developers will control them for you. As I noted above, if the game is CPU limited on console, the developers would set the resolution to take advantage of the excess GPU power. Or DRS will do that automatically. It's a pretty safe assumption that games on console are generally at, or close to, their GPU's limitations, because it's so trivial (relatively speaking) to soak up excess GPU power with increased resolution.
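And on the PC side you can at least sanity-check which side is limiting in a given scene with the crudest of tests: drop the render resolution and see whether the frame rate moves. A toy illustration (arbitrary threshold, made-up numbers, nothing like a real profiler):

```python
# Toy bottleneck check: if cutting the pixel count barely changes the frame
# rate, the CPU (or engine/API overhead) is probably the limit; if the frame
# rate jumps, the GPU was the limit. Threshold and numbers are illustrative.
def likely_bottleneck(fps_native: float, fps_lower_res: float, tol: float = 0.05) -> str:
    if fps_lower_res <= fps_native * (1 + tol):
        return "CPU / engine limited"
    return "GPU limited"

print(likely_bottleneck(72, 74))  # barely moves       -> CPU / engine limited
print(likely_bottleneck(45, 78))  # scales with pixels -> GPU limited
```

You can't run that experiment on the console, which is exactly why assuming the developer has already soaked up the GPU headroom is the sensible default.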
 
I'm not sure what you're trying to show here. The only game from the video that you posted which was also used in Richard's comparison is Cyberpunk. Your video shows that the game is CPU limited (i.e. not GPU limited) on a Ryzen 3600 at 1080p on a 6700 XT.

How does that prove that a weaker GPU (6700) would also not be GPU limited at the much higher resolution of 1800p?

The likelihood that a 6700 will be GPU limited (i.e. not CPU limited) at 1800p is far higher than the likelihood that the more powerful 6700 XT will be GPU limited at 1080p.
I believe @Flappy Pannus said:
For the games tested at the specific resolutions and settings they used, there is going to be no difference between an older 6-core/12-thread CPU and a 7800X3D
The burden of proof lies with @Flappy Pannus to prove that this statement is true for all games tested. The video I posted is simply meant to highlight that there exist scenarios where you can be CPU limited even on a GPU as weak as the 6700. In Richard's video, we see Hitman 3 showing a 40% difference between two GPUs that have a 10% difference in clocks, at a high resolution. That result seems rather interesting. It's then followed by Monster Hunter Rise, where we see a 37% difference on a 10% GPU overclock. I mean, if that doesn't scream CPU limited, I don't know what would.
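Just to spell out the arithmetic behind that: if frame rate scaled purely with GPU clock (the most generous assumption for the faster card), a ~10% clock advantage buys you at most ~10% more fps, which leaves most of a 37-40% gap unexplained by the GPUs themselves:

```python
# Rough illustration: how much of the observed gap clocks alone could explain,
# assuming (generously) that fps scales linearly with GPU clock.
clock_delta = 0.10      # ~10% clock difference between the two GPUs
observed_delta = 0.40   # ~40% fps gap reported for Hitman 3

unexplained = (1 + observed_delta) / (1 + clock_delta) - 1
print(f"{unexplained:.0%} of the gap is left unexplained by clocks")  # ~27%
```

Whether that remainder is the CPU, bandwidth, API overhead or the port itself is exactly what this methodology can't tell you - which is my point.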
 
I believe @Flappy Pannus said:

The burden of proof lies with @Flappy Pannus to prove that this statement is true for all games tested. The video I posted is simply meant to highlight that there exist scenarios where you can be CPU limited even on a GPU as weak as the 6700. In Richard's video, we see Hitman 3 showing a 40% difference between two GPUs that have a 10% difference in clocks, at a high resolution. That result seems rather interesting. It's then followed by Monster Hunter Rise, where we see a 37% difference on a 10% GPU overclock. I mean, if that doesn't scream CPU limited, I don't know what would.

...and we also see examples go the other way. Is TLOU then CPU limited on the PC, since there's a huge difference in performance but a very similar GPU?
 
...and we also see examples go the other way. Is TLOU then CPU limited on the PC, since there's a huge difference in performance but a very similar GPU?
The Last of Us on PC is a known broken port. Again, my point of contention is not the results but the test methodology. Using a known broken port like The Last of Us highlights poor test methodology because it'll drastically skew the data. If the video was "how does a PC with a PS5-equivalent GPU perform", then that's fine. However, if you're attempting to do an actual GPU comparison like Richard, then this test methodology is wrong.
 