Digital Foundry Article Technical Discussion [2023]

Status
Not open for further replies.
With the possibility of ending up with the wrong idea that you don't need a strong GPU, when the majority won't have as powerful a CPU and will be more limited in that respect.

I don't understand what you're saying here. They are isolating GPU performance specifically so it's impossible to get the wrong impression of what level of GPU power you will need. Having a weaker CPU that may be a bottleneck in your system in no way changes how much GPU power you will need to achieve a certain result.

I've written it many times so I won't repeat myself, but using a mid-range GPU with a high-end CPU can create a false representation of what real performance will be on a mid-range PC.

You've just said it yourself: "mid range pc". Any review that uses a high-end CPU to isolate GPU performance is not testing the PC as a whole; they are specifically testing the GPU. There are plenty of tests out there which do the opposite and use a 4090 on weaker CPUs to isolate CPU performance. And there are also tests which do both together; DF are particularly good at this with their "mid range PC tests", i.e. the 3600X + 2070S that Alex regularly uses.
 
You've just said it yourself: "mid range pc". Any review that uses a high-end CPU to isolate GPU performance is not testing the PC as a whole; they are specifically testing the GPU. There are plenty of tests out there which do the opposite and use a 4090 on weaker CPUs to isolate CPU performance. And there are also tests which do both together; DF are particularly good at this with their "mid range PC tests", i.e. the 3600X + 2070S that Alex regularly uses.
Without repeating myself: the change to the 3600X for mid-range PC tests was a good idea.
 
To be fair, there were plenty of DF PC-GPU-vs-console comparisons using the top CPU on the market ;d
Lol yes, but to be fair we wouldn't call that critical thinking. It's the best we can do unfortunately; it's ballparking at best.
 
Without repeating myself: the change to the 3600X for mid-range PC tests was a good idea.

It's a good idea if you want to test how a "mid range" PC (as defined by a 3600X + 2070S) compares to the PS5. But it's not so great if you want to determine exactly what level of GPU you need to match the PS5's performance, because you don't know from game to game whether the GPU is being bottlenecked by the CPU.

In an ideal world, sites would first isolate CPU performance to determine exactly what you need to match a console on that front, and then do the same with GPU performance, then put the two together - on a per game basis. But that's a lot of work for every game.
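A crude way to see why combining the two isolated measurements per game makes sense: to a first approximation, the delivered frame rate is simply the slower of the two limits. A minimal sketch of that model (my own illustration with made-up numbers, not anyone's actual test methodology):

```python
# First-order bottleneck model: delivered frame rate is capped by
# whichever of the CPU or GPU is the slower limit in a given game.
# cpu_fps: frame rate measured with the GPU bottleneck removed (e.g. very low res)
# gpu_fps: frame rate measured with a CPU fast enough not to be the limit
def effective_fps(cpu_fps: float, gpu_fps: float) -> float:
    return min(cpu_fps, gpu_fps)

# Hypothetical per-game numbers, purely illustrative:
games = {
    "Game A": (70.0, 90.0),   # CPU limited: extra GPU headroom is wasted
    "Game B": (120.0, 65.0),  # GPU limited: a faster CPU wouldn't help
}
for name, (cpu_fps, gpu_fps) in games.items():
    bottleneck = "CPU" if cpu_fps < gpu_fps else "GPU"
    print(f"{name}: ~{effective_fps(cpu_fps, gpu_fps):.0f} fps ({bottleneck} limited)")
```

This ignores real-world interactions (frame pacing, shared memory bandwidth, DRS), but it captures why an isolated GPU result and an isolated CPU result together predict more than either alone.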
 
It's a good idea if you want to test how a "mid range" PC (as defined by a 3600X + 2070S) compares to the PS5. But it's not so great if you want to determine exactly what level of GPU you need to match the PS5's performance, because you don't know from game to game whether the GPU is being bottlenecked by the CPU.

In an ideal world, sites would first isolate CPU performance to determine exactly what you need to match a console on that front, and then do the same with GPU performance, then put the two together - on a per game basis. But that's a lot of work for every game.
The assumption is that console optimizations (the API, hUMA, now the faster I/O subsystems, and asynchronous compute on the GPU) probably ensure that the bottleneck is usually the GPU, at least on console.

It would then be reasonable to ensure that the bottleneck is also on the GPU on the PC side, to get an idea of what's going on.

It may not always be the case, of course, but vsync is likely to do more damage to the interpretation of results than ensuring the GPU is the bottleneck on PC does.
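To illustrate why vsync muddies interpretation: with double-buffered vsync and no VRR, a frame that misses a refresh deadline waits for the next vblank, so measured frame times round up to multiples of the refresh interval. A hypothetical sketch (assumed 60 Hz display and made-up render times, just to show the quantization effect):

```python
import math

REFRESH_HZ = 60.0
VBLANK_MS = 1000.0 / REFRESH_HZ  # ~16.67 ms refresh interval

def vsynced_frame_time(render_ms: float) -> float:
    """Double-buffered vsync: a frame is displayed at the next vblank
    after rendering finishes, so the observed frame time rounds UP to
    a whole multiple of the refresh interval."""
    return math.ceil(render_ms / VBLANK_MS) * VBLANK_MS

# Two hypothetical GPUs, less than 10% apart in raw render time:
for render_ms in (16.0, 17.5):
    shown = vsynced_frame_time(render_ms)
    print(f"render {render_ms:.1f} ms -> displayed every {shown:.2f} ms "
          f"({1000.0 / shown:.0f} fps)")
```

Here a GPU only ~9% slower in raw render time measures as half the frame rate (30 vs 60 fps) once vsync quantizes its output, which is exactly the kind of distortion that makes vsynced comparisons hard to read.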
 
With the possibility of ending up with the wrong idea that you don't need a strong GPU, when the majority won't have as powerful a CPU and will be more limited in that respect. I've written it many times so I won't repeat myself, but using a mid-range GPU with a high-end CPU can create a false representation of what real performance will be on a mid-range PC.

Um, if you are CPU limited, it doesn't matter how powerful your GPU is, it's not going to make it run any better at those settings.

A less powerful CPU means you don't need as much GPU power ... because you become more likely to be limited by the CPU and not the GPU.

Regards,
SB
 
Um, if you are CPU limited, it doesn't matter how powerful your GPU is, it's not going to make it run any better at those settings.

A less powerful CPU means you don't need as much GPU power ... because you become more likely to be limited by the CPU and not the GPU.

Regards,
SB
I think most users of this forum know this ;) It's hard to find a modern game that's purely GPU limited.
 
I honestly like these videos much better than the console comparisons which have gotten stale by now. Usually, SX and PS5 perform 99% the same and only console warriors nitpick the differences to declare a "winner". This is what DF should do more often. Tech deep dives, dev interviews, videos about rendering techniques, etc. But I guess it wouldn't generate as many clicks as comparisons which is a shame.
Technology deep dives are fine, but most of the time I'm more interested in how the games I'm interested in perform. So of course that gets a lot more traffic.

Game technology is definitely interesting when they give it a once-over. John stopped doing his DF Retro videos, which is sad. And there haven't been enough major games released recently for big once-overs like this.
 
The assumption is that console optimizations (the API, hUMA, now the faster I/O subsystems, and asynchronous compute on the GPU) probably ensure that the bottleneck is usually the GPU, at least on console.

It would then be reasonable to ensure that the bottleneck is also on the GPU on the PC side, to get an idea of what's going on.

It may not always be the case, of course, but vsync is likely to do more damage to the interpretation of results than ensuring the GPU is the bottleneck on PC does.
You cannot assume that console games are not CPU limited and then compare them using high-end PC GPUs. Sure, consoles have dedicated hardware, but remember how weak their CPUs are in the first place. Awfully weak: about a Zen 2 2700.

As we don't really know when those games are CPU or GPU limited, any comparison against a PC using a high-end CPU is not fair. What is actually being tested is console APU + bandwidth vs whatever CPU + GPU + bandwidth on PC. The problem is that most of those benchmarks boldly assume many things and in the end wrongly pretend to fairly compare the console GPU vs a PC GPU, when in most cases it's impossible to do so properly.
 
You cannot assume that console games are not CPU limited and then compare them using high-end PC GPUs. Sure, consoles have dedicated hardware, but remember how weak their CPUs are in the first place. Awfully weak: about a Zen 2 2700.

As we don't really know when those games are CPU or GPU limited, any comparison against a PC using a high-end CPU is not fair. What is actually being tested is console APU + bandwidth vs whatever CPU + GPU + bandwidth on PC. The problem is that most of those benchmarks boldly assume many things and in the end wrongly pretend to fairly compare the console GPU vs a PC GPU, when in most cases it's impossible to do so properly.
We assume that console games are optimized to hit their target frame rate, however. That's about all you need to know from that profile. If DRS were insufficient to hit the frame rate, they would optimize further.

You can't compare equivalent PC hardware for obvious reasons: the OS, the API, the split memory, etc. There's too much to compare, and it becomes a pointless exercise, because at the end of the day the PC owner is buying PC parts and not a console.

If a PC owner wants to know the GPU required to obtain console performance then you’re going to put the bottleneck on the GPU. PC players often are not provided the same DRS settings as console either, so they have less choice in the matter. Outside of proprietary hardware, PC owners always pay more and get slightly less out of it.

I would say, instead of looking at benchmarking as a way to poop on console, look at it as a method of a PC owner to know what they need to build to hit parity.
 
The 3600/3700X are low end from now on, in my perspective. They reliably fall below 50 FPS average in certain chapters, and to 35 FPS 1% lows in general, in games like Hogwarts Legacy (Hogsmeade), The Last of Us (in general), A Plague Tale: Requiem (50-55 FPS average in a mild town, and 30-40 FPS average with rats), and Gotham Knights (four recent big AAA games). It's a telltale sign of a CPU becoming low end when it cannot even push a reliable 60 FPS lock. To me, CPUs capable of a 60 FPS lock are mid range (5600X/12400F and co.) and CPUs capable of 100+ FPS are high end (for newer games: Zen 4 and 13th-gen Intel). I'm purely talking about games going forward, of course. I'm sure the 5600X is capable of pushing 100+ FPS in many 2015-2022 games. But that has hugely changed and will keep changing.

At least Zen 2 owners have a chance to upgrade to Zen 3. But Zen 3 too will reliably and constantly fall below 60 FPS with UE5, I'm sure of it. Maybe the 5800X3D with its raw cache power can weather the storm, but the others, eh, not so much.
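The tiers proposed above can be written down as a simple rule of thumb (thresholds taken from the post itself; this is one poster's working definition, not any established classification):

```python
def cpu_tier(sustained_fps: float) -> str:
    """Classify a CPU by the frame rate it can reliably sustain in
    current AAA games, using the thresholds proposed above:
    below 60 -> low end, 60-99 -> mid range, 100+ -> high end."""
    if sustained_fps >= 100:
        return "high end"
    if sustained_fps >= 60:
        return "mid range"
    return "low end"

# Illustrative figures echoing the examples in the post:
print(cpu_tier(50))   # a 3600/3700X in Hogsmeade -> low end
print(cpu_tier(60))   # a reliable 60 FPS lock -> mid range
print(cpu_tier(120))  # newer Zen 4 / 13th-gen parts -> high end
```

The point of encoding it this way is that the tier of a given CPU shifts over time: the same chip drops a bracket as new games push its sustained frame rate down.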
 
The 3600/3700X are low end from now on, in my perspective. They reliably fall below 50 FPS average in certain chapters, and to 35 FPS 1% lows in general, in games like Hogwarts Legacy (Hogsmeade), The Last of Us (in general), A Plague Tale: Requiem (50-55 FPS average in a mild town, and 30-40 FPS average with rats), and Gotham Knights (four recent big AAA games). It's a telltale sign of a CPU becoming low end when it cannot even push a reliable 60 FPS lock. To me, CPUs capable of a 60 FPS lock are mid range (5600X/12400F and co.) and CPUs capable of 100+ FPS are high end (for newer games: Zen 4 and 13th-gen Intel). I'm purely talking about games going forward, of course. I'm sure the 5600X is capable of pushing 100+ FPS in many 2015-2022 games. But that has hugely changed and will keep changing.

At least Zen 2 owners have a chance to upgrade to Zen 3. But Zen 3 too will reliably and constantly fall below 60 FPS with UE5, I'm sure of it. Maybe the 5800X3D with its raw cache power can weather the storm, but the others, eh, not so much.

I have a 3700X, and while you're right that it can drop below 60 fps in a few games, 60 fps is kind of an arbitrary target on PCs, at least where VRR is a factor. In TLOU and Forspoken, for example, I just lock the frame rate to 50 fps and get a perfectly flat frame time graph. In A Plague Tale and The Witcher 3 I use frame generation, which puts frame rates well north of 60 fps.

I would like more performance (still considering a 5800X3D drop-in replacement), but there's nothing even remotely unplayable on this CPU, and for the most part I'm getting awesome performance. It's just semantics, but for me that puts it more towards the lower end of mid range. Actual low end would be things like Zen+ and Intel's old quad cores, i.e. CPUs where you have to make genuine and significant compromises in the game experience. I'm not really seeing that with this CPU, which is why I've been so on the fence for a while on whether to upgrade or not. Do I really need it? Is it worth £300 for a few extra frames in a small handful of games that already play perfectly well? Would I even notice the difference? etc.
 