Why do some console ports run significantly slower on equivalent PC hardware?

Horizon Zero Dawn runs at 1080p and low settings on an HD 7970 and can barely achieve 30fps, despite that card being measurably faster than the PS4's GPU (which is roughly equivalent to an HD 7850), and despite running at significantly lower visual settings than the PS4.


What is the reason for this? DX12? Bad optimizations? (Secret console sauce doesn't explain this large deficit).
 
Miscellaneous OS overhead on PCs, additional background tasks and processes, and thicker layers in the graphics APIs. On consoles they can smush together the API and driver layers (as Microsoft describes it for the Xbox).
 
"Favor Performance" is higher quality settings than the console version of HZD...

But yeah... I mean, given equivalent hardware, why would you not expect the device streamlined specifically for gaming to outperform the PC? lol.. You literally have development studios designing the games around those devices, and there are tons of optimizations, better dev tools, and lower-level access to everything.
 
New "Favor Performance" is higher quality settings than the console version of HZD...
It is not.
"Favor Performance" is a lower preset than "Original", which is the preset used in the consoles. In fact "Favor Performance" is the absolute lowest preset available in the game.
 
Death Stranding runs badly on an HD 7850: at 1080p with the absolute lowest visual settings, it dips regularly below 30fps. The PS4 performs much better.

 
It is not.
"Favor Performance" is a lower preset than "Original", which is the preset used in the consoles. In fact "Favor Performance" is the absolute lowest preset available in the game.
My bad, for some reason I read it as "favor quality" when I watched the first part of the video.

Anyway, the 7970 only has 3GB of VRAM, and I think even on low settings HZD can eat up quite a bit of it; maybe a bottleneck there?

But yeah, it's mostly overhead and the port simply not being as optimized in general. (Also, the port was initially done by another team, so despite being improved massively since launch, it's likely not in the best shape it could be.)
 
Porting in general isn't easy even if your engine supports multiple platforms. It requires some serious problem solving to work out how to do things in a way that holds up on every platform while building a multi-platform title. Even then, there are likely to be differences in rendering between platforms just due to API differences.

Going from a release candidate to another platform is much harder, because some decisions made on one platform may simply not carry over to the other. You have to rebuild those functions in a way that runs at all, and in doing so they may not be as performant.
 
The PS4 has more in common with later GCN versions, especially in having many more ACEs than the 7970, so the game may simply be optimized much better for those. It would be interesting to compare it running on a Radeon R9 285 vs. a 7970.
 
It would be interesting to see the results on the rare 6GB version of the 7970.
In the integrated benchmark it's just above 40 fps average (score of 7473) with the "Original" detail setting. In-game, at the start of the open-world segment of the prologue, it's just above 30, but I have a hunch that's because of the large swathes of vegetation there. I have no savegames from later levels handy, unfortunately.
 
How is this news? This has been a thing for generations, going as far back as the original Xbox.

Equivalent hardware specs and raw compute performance are necessary conditions. Software is the sufficient condition.

EDIT: Look at the M1 Pro/Max. When running apps optimised for it, it smokes the 3080 Mobile. When it has to resort to emulation or a brute-force "sledgehammer" approach (e.g., trying to game on the M1 Pro/Max), it struggles to even match a 3060/3050 Ti Mobile.
 
I doubt that there's any sort of measurable "overhead" which impacts performance here.
It's just that the D3D12 renderer isn't as fast as the GNM renderer on PS4, which is built specifically for the hardware.
AFAIK D3D12 doesn't expose a lot of things which GNM has, simply because no other vendor on the market would support them even in the future.

VRAM can also play its part of course.
 
This brings into focus the matter of pushing GPUs to their maximum capabilities.

The HD 7850 in the PS4 (if we are allowed to call it that) is being pushed to its absolute limits in Horizon Zero Dawn, Death Stranding and Days Gone. Its utilization is 100% at all times, while the HD 7850 or even the HD 7970 on the PC is either not pushed to its limits, or it is indeed being stretched to the max by junk code in the form of API overhead, driver management, etc. I think it's a combination of both: they are not really being pushed to their absolute 100%, and they are wasting many of their cycles executing unnecessary code.

In the current landscape of modern GPUs, I don't think they would be operating at their maximum boost frequency if pushed to their max. I think IHVs are exploiting this fact to advertise comfortably higher boost clocks: they know the chips wouldn't sustain those boost clocks under absolute 100% utilization, so they set them high anyway, and the chip falls back to the standard clocks, the guaranteed original spec, if pushed too hard.
 
HZD in particular heavily leverages the fact that the PS4 is an APU and not a separate CPU/GPU. When the engine was architected, I suspect it was only ever intended to be PS4-only, and on that hardware moving data between CPU and GPU is 'free', as they share the same memory pool.

Once it was ported to PC, it was found to be a very strong outlier in regards to performance scaling with PCIe bandwidth; on a PC with a separate CPU/GPU, all of the data transfer that the engine was architected around being 'free' and instant with an APU and shared memory pool now burns through:

1) CPU memory bandwidth
2) PCIe memory bandwidth
3) GPU memory bandwidth

And probably a few compute cycles for each on top of that.
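
To make that concrete, here's a minimal sketch of the kind of staging copy a D3D12 port has to do for CPU-generated data on a discrete GPU. This is not Decima's actual code, just the generic upload-heap path; it assumes an already-created ID3D12Device* and ID3D12GraphicsCommandList*, and the function name and error-free flow are mine:

```cpp
#include <d3d12.h>
#include <cstring>

// Sketch: copy CPU-produced data into VRAM on a discrete GPU.
// Error checking (HRESULTs, fences) is omitted for brevity.
void UploadToVram(ID3D12Device* device,
                  ID3D12GraphicsCommandList* cmdList,
                  const void* srcData, UINT64 size,
                  ID3D12Resource** outStaging,  // CPU-visible buffer in system RAM
                  ID3D12Resource** outVram)     // final buffer in VRAM
{
    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width            = size;
    desc.Height           = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels        = 1;
    desc.Format           = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    // 1) Write into a CPU-visible UPLOAD heap: costs CPU memory bandwidth.
    D3D12_HEAP_PROPERTIES uploadHeap = {};
    uploadHeap.Type = D3D12_HEAP_TYPE_UPLOAD;
    device->CreateCommittedResource(&uploadHeap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_GENERIC_READ, nullptr,
                                    IID_PPV_ARGS(outStaging));
    void* mapped = nullptr;
    (*outStaging)->Map(0, nullptr, &mapped);
    std::memcpy(mapped, srcData, size);          // CPU -> system RAM
    (*outStaging)->Unmap(0, nullptr);

    // 2) Create the destination buffer in the DEFAULT heap (VRAM).
    D3D12_HEAP_PROPERTIES defaultHeap = {};
    defaultHeap.Type = D3D12_HEAP_TYPE_DEFAULT;
    device->CreateCommittedResource(&defaultHeap, D3D12_HEAP_FLAG_NONE, &desc,
                                    D3D12_RESOURCE_STATE_COPY_DEST, nullptr,
                                    IID_PPV_ARGS(outVram));

    // 3) GPU copy: system RAM -> PCIe -> VRAM. On the PS4's unified pool this
    //    whole round trip simply doesn't exist; on a discrete card it burns
    //    PCIe and GPU memory bandwidth every time data crosses over.
    cmdList->CopyBufferRegion(*outVram, 0, *outStaging, 0, size);
}
```

An engine built around a unified pool just writes the data once and points the GPU at it; a port has to bolt this staging round trip onto every such transfer.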
Death Stranding, being initially a PS4 exclusive, also scales strongly with PCIe bandwidth, suspected for the same reason.


I'll try to find the exact graphs for both that I'm remembering in a bit.
 
The benchmark runs higher than in-game performance. Performance also drops in later areas of the game, once you are no longer a child.
Yeah, I realized that. A bit further into the game and no longer being a child, at the end of the Point of the Spear quest, performance is around 34 fps in "original" quality and around 41 with "favor performance" (both in 1080p).
 
Yeah, I realized that. A bit further into the game and no longer being a child, at the end of the Point of the Spear quest, performance is around 34 fps in "original" quality and around 41 with "favor performance" (both in 1080p).
So it seems the 6GB version maintains higher fps due to the bigger VRAM, yet it still barely matches PS4 performance and visuals. The 7970 is a 50% faster GPU than the 7850, and even faster if it's the GHz edition. It should be capable of far higher performance than this.

the engine was architected around being 'free' and instant with an APU and shared memory pool
Keep in mind that shared pools suffer from memory bandwidth overhead as a result of competition between the CPU and GPU, which effectively reduces the available bandwidth for both when they run simultaneously.
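
As a back-of-the-envelope illustration (the 176 GB/s figure is the PS4's published GDDR5 bandwidth; the 20 GB/s CPU share is just an assumed number for the sketch, and real contention costs more than simple subtraction because mixed CPU/GPU access patterns also hurt DRAM efficiency):

```cpp
#include <cstdio>

int main() {
    // PS4's unified GDDR5 pool (published figure) and an assumed, illustrative
    // amount of CPU traffic while a game is running.
    const double total_bw_gbs  = 176.0;  // total pool bandwidth, GB/s
    const double cpu_traffic   = 20.0;   // assumed CPU share, GB/s
    const double gpu_effective = total_bw_gbs - cpu_traffic;

    std::printf("GPU is left with at most ~%.0f GB/s of the %.0f GB/s pool\n",
                gpu_effective, total_bw_gbs);
    return 0;
}
```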


Death Stranding, being initially a PS4 exclusive, also scales strongly with PCIe bandwidth, suspected for the same reason.
What about Days Gone then? It's UE4, an engine tailored for PCs with separate memory pools.
 