CPU Limited Games, When Are They Going To End?

Cyberpunk, Battlefield 2042, Hogwarts Legacy, Flight Simulator, Watch Dogs: Legion, Gotham Knights, Forspoken, etc. There are plenty of games that demand huge amounts of CPU performance.
In Battlefield, that's only if you aim for 120 fps; 60 fps also works on old CPUs.

However, Flight Simulator needs very fast CPUs. Even the fastest CPUs sometimes drop to 30 fps on approach into Frankfurt. Frame Generation helps a lot in this game.
 
It won't be that high. Intel spent many years offering 3-4% IPC increases until Ryzen showed up and forced them to start delivering decent gen-on-gen IPC gains.

But in modern games that's not enough for a quad-core to overcome two extra physical cores.
I still hate them for that. Perhaps if they had tried to push the industry ahead sooner, the entire gaming industry would be that much further ahead in having properly threaded engines to take advantage of extra cores.
 
Perhaps if they had tried to push the industry ahead sooner
The gaming industry has always lagged in multi-core coding; games are harder to extract parallelism from, because a game is mostly serial in nature (it runs on constant user input). Large single-core gains are indispensable, as they accelerate the whole stack, and Intel couldn't deliver them, nor can AMD.
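
To make the "mostly serial" point concrete, here's a minimal sketch of the classic game loop. The stage names are illustrative stand-ins, not any real engine's API:

```cpp
// Minimal game-loop sketch: the stages form a serial dependency chain.
void poll_input();        // sample this frame's user input
void update_simulation(); // depends on this frame's input
void render();            // depends on this frame's simulation state

void game_loop(bool& running) {
    while (running) {
        // Each stage needs the previous stage's output, so the chain
        // itself stays serial. Engines parallelize *inside* a stage
        // (physics jobs, culling, etc.), but a faster single core is
        // what shortens the critical path through the chain.
        poll_input();
        update_simulation();
        render();
    }
}
```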

I remember back in the day I thought Intel was sandbagging due to lack of competition, but then Intel came out and admitted they can't do it: to achieve major single-threaded gains, they would need to redesign the CPU from scratch, as clock-speed gains are not enough anymore. Gains from clock speeds are offset by pipeline bottlenecks (which is why increasing clocks doesn't give as much performance as it used to), and that kind of redesign is expensive as hell and could interfere with backward compatibility.
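
One concrete example of such a bottleneck, with round numbers assumed purely for illustration (not measured figures): DRAM latency has stayed roughly flat across CPU generations, so a higher clock just means more cycles stalled per cache miss.

```latex
% Round numbers assumed for illustration only.
\[
\begin{aligned}
t_{\text{miss}} &\approx 80\,\text{ns (DRAM latency, roughly constant)}\\
\text{at } 3\,\text{GHz:}\quad 80\,\text{ns} \times 3\,\text{cycles/ns} &= 240\ \text{cycles stalled per miss}\\
\text{at } 5\,\text{GHz:}\quad 80\,\text{ns} \times 5\,\text{cycles/ns} &= 400\ \text{cycles stalled per miss}
\end{aligned}
\]
```

A 67% clock increase therefore speeds up only the compute between misses; the stalls cost the same wall-clock time, so the overall gain gets diluted.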

In short: we are stuck. The best we can hope for is an anemic 15% increase every couple of years, if that even happens.

 
Skylake is 23% faster than Sandy Bridge (geomean) on this set of benchmarks: https://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/9
They used low-spec DDR4. Add in the DDR4-3200 commonly used in gaming builds and the gap increases.

In Battlefield, that's only if you aim for 120 fps; 60 fps also works on old CPUs.

However, Flight Simulator needs very fast CPUs. Even the fastest CPUs sometimes drop to 30 fps on approach into Frankfurt. Frame Generation helps a lot in this game.
60 fps with drops to <20 happening every few seconds anytime you are engaged in battle.
 
A good and popular example is Office. I remember it running very well on a Pentium 4, yet new Office versions require significantly more CPU power despite not offering major upgrades or features over the old ones.

Try to do concurrent multi-user editing on that version of MS Office. Or add really high res images.
 
60 fps with drops to <20 happening every few seconds anytime you are engaged in battle.
There must be something wrong with that PC configuration. I'm still playing, and by now I have 250 hours in it and don't see that. Funnily enough, I've seen someone with an ancient PC consistently getting over 60 fps.


Where does it constantly drop below 20 fps? 128 players, ray tracing, very high settings, 1080p DLSS, with more than 180 fps. I see no point in mentioning Battlefield 2042 in the same league as MSFS in terms of CPU limits. There is hardly a better-optimised game, especially when you consider that some single-player corridor games run worse without ray tracing.
 
There must be something wrong with that PC configuration. I'm still playing, and by now I have 250 hours in it and don't see that. Funnily enough, I've seen someone with an ancient PC consistently getting over 60 fps.


Where does it constantly drop below 20 fps? 128 players, ray tracing, very high settings, 1080p DLSS, with more than 180 fps. I see no point in mentioning Battlefield 2042 in the same league as MSFS in terms of CPU limits. There is hardly a better-optimised game, especially when you consider that some single-player corridor games run worse without ray tracing.
He's running a 7950X3D. Not sure how you consider it well optimized when virtually all the destruction from past BF titles has been axed, map detail has been hugely reduced, and it still regularly drops to 40 fps on the fastest CPU in existence.
 
There is some irrational resentment here. If you look at other benchmarks on the channel, a linear title like Resident Evil 4 runs worse when CPU-limited. The same goes for Forza, Atomic Heart, etc. Even Call of Duty is hardly any better in the 1% lows, and there is much less going on. I always looked at roughly 1080p resolution, max settings, ray tracing enabled.

[Embedded video; watch from 5:00]

[Embedded video; watch from 8:50]

[Embedded video; watch from 7:00, no ray tracing]
 
Without ray tracing one might still get around 60 fps, but with ray tracing activated the CPU will drop below 30 fps when driving.
 
Without ray tracing one might still get around 60 fps, but with ray tracing activated the CPU will drop below 30 fps when driving.

It'll be platform-bound rather than CPU-bound: while the CPUs have aged well, the X79 platform hasn't, and its PCIe connection is only 2.0 (PCIe 2.0 x16 tops out around 8 GB/s, a quarter of PCIe 4.0's bandwidth).
 
Another reason why CPU limitations are rampant.


I would find this pretty surprising for game studios. You can use intrinsics, and it's honestly not that crazy to do. I think the more difficult case is relying on the compiler to SIMD your code for you: the generated asm can be affected by small changes to your loops, which might suddenly cause it to stop being SIMD.
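
A minimal sketch of the two approaches, assuming AVX is available (function names are illustrative, not from any particular engine):

```cpp
#include <immintrin.h>  // AVX intrinsics; compile with -mavx or /arch:AVX
#include <cstddef>

// Explicit intrinsics: vectorization is guaranteed by construction.
void scale_add_avx(float* dst, const float* src, float k, size_t n) {
    __m256 vk = _mm256_set1_ps(k);
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 v = _mm256_loadu_ps(src + i);
        __m256 d = _mm256_loadu_ps(dst + i);
        _mm256_storeu_ps(dst + i, _mm256_add_ps(d, _mm256_mul_ps(v, vk)));
    }
    for (; i < n; ++i) dst[i] += src[i] * k;  // scalar tail
}

// Auto-vectorized version: correct, but fragile. Without __restrict
// (a common compiler extension) the compiler must assume dst and src
// may alias and can emit runtime checks or give up; adding a branch or
// a function call inside the loop can also silently drop it to scalar.
void scale_add_auto(float* __restrict dst, const float* __restrict src,
                    float k, size_t n) {
    for (size_t i = 0; i < n; ++i)
        dst[i] += src[i] * k;
}
```

The intrinsics version costs more up front but can't quietly regress; the auto-vectorized one has to be re-checked in the disassembly after every refactor.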
 
Jedi Fallen Order got over 100 FPS on an i9-9900 with DX11: https://www.computerbase.de/2019-11...hnitt_benchmarks_in_full_hd_wqhd_und_ultra_hd

Jedi Survivor gets <60 FPS with a Ryzen 5900X and DX12. The Ryzen processor is 20% or so faster, and DX12 should be much more efficient. So Respawn has achieved worse performance with a much better CPU and a more efficient API...

As a guess: a really bad CPU sync-point failure in Survivor. Devs seem to have gotten used to "make it run on consoles at all and PC can just brute-force it" as a strategy.

But now consoles have fairly modern, decent CPUs with the added benefit of being memory-unified SoCs. So you really need to pay attention to CPU optimization on PC instead of just ignoring it.
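
For illustration, a hedged sketch (not any shipping game's code) of what a bad CPU/GPU sync point looks like in D3D12-style code: waiting for the GPU to fully drain every frame serializes CPU and GPU work instead of letting them overlap.

```cpp
#include <d3d12.h>
#include <dxgi.h>
#include <windows.h>

void present_with_full_stall(ID3D12CommandQueue* queue,
                             IDXGISwapChain* swapChain,
                             ID3D12Fence* fence, UINT64& fenceValue,
                             HANDLE fenceEvent) {
    swapChain->Present(1, 0);
    queue->Signal(fence, ++fenceValue);
    // Block until the GPU has finished *everything* just submitted.
    if (fence->GetCompletedValue() < fenceValue) {
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);  // CPU idles every frame
    }
}
```

The standard fix is keeping two or three frames in flight and only waiting when the CPU gets that far ahead, so the wait almost never fires.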
 
Jedi Fallen Order got over 100 FPS on an i9-9900 with DX11: https://www.computerbase.de/2019-11...hnitt_benchmarks_in_full_hd_wqhd_und_ultra_hd

Jedi Survivor gets <60 FPS with a Ryzen 5900X and DX12. The Ryzen processor is 20% or so faster, and DX12 should be much more efficient. So Respawn has achieved worse performance with a much better CPU and a more efficient API...

But the rendering is a lot more ambitious. Maybe it would have the same problem under DX11, or worse.
 
In the 3DMark API Overhead test, DX12 is 10.8x faster than DX11 MT on my 13600K, so there are no excuses anymore for CPU overhead. I get less than 60 FPS with ray tracing in Jedi, while at the same time Cyberpunk achieves over 100 FPS with path tracing.
We are in the 10th year of DX12 and it seems to get worse with every year.
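
For context on why that gap is even possible, here is a hedged sketch (illustrative, not 3DMark's actual code) of the D3D12 pattern the API Overhead test exercises: state is baked into a pipeline state object up front, so each draw is nearly free on the CPU, and command lists can be recorded across threads.

```cpp
#include <d3d12.h>

void record_draws(ID3D12GraphicsCommandList* cl, ID3D12PipelineState* pso,
                  UINT drawCount) {
    cl->SetPipelineState(pso);          // state validated once, not per draw
    for (UINT i = 0; i < drawCount; ++i)
        cl->DrawInstanced(3, 1, 0, 0);  // a few bytes written, near-free
    cl->Close();                        // list is ready for submission
}

// Each worker thread records its own list (with its own command
// allocator); the main thread then submits them all in one call:
//   queue->ExecuteCommandLists(numLists, lists);
```

DX11, by contrast, funnels every draw through a single validated immediate-context path, which is broadly where a 10x gap can come from.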

/edit: Jedi Survivor is a mostly static game. There are one or two enemies and then nothing.
 