Digital Foundry Article Technical Discussion [2023]

I suspect that's because on a console the developer will specifically code around the CPU's weaknesses (smaller cache, high-latency memory), which is coupled with the lower-overhead OS and APIs of the console.
Yeah, I theorized earlier that bandwidth plays a major role in this, but apparently I was wrong: even with 16GB of GDDR6 RAM, the Ryzen 4800S is way behind, keeping in mind that on PC it has no memory contention problem or reserved OS cores.

So what we are left with is probably a combination of inefficient, horrible PC APIs (like DirectX 12), an inefficient OS, and some careful, extreme tailoring of settings on consoles. I suspect consoles might employ dynamic settings that are adjusted on the fly to keep CPU performance in check, like dynamic LOD, dynamic physics, etc., which are not available on PC, just like dynamic resolution scaling.

I suspect some dynamic load balancing across CPU cores is even present on consoles, in combination with clever use of the high GDDR bandwidth, which helps maximize core utilization, as opposed to the cluttered, often single-threaded mess of core utilization we have on PC.
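For what it's worth, a minimal sketch of what such a frame-time-driven settings controller could look like is below. This is purely speculative illustration: the knobs (LOD bias, physics tick rate), thresholds and step sizes are all invented, not anything a console SDK is confirmed to expose.

```python
# Hypothetical frame-time-driven dynamic settings controller, in the spirit
# of the dynamic LOD / dynamic physics speculated above. All names and
# numbers are invented for illustration.

TARGET_MS = 16.6   # frame budget at 60fps
HEADROOM = 0.9     # start scaling down before the budget is actually blown

class DynamicSettings:
    def __init__(self):
        self.lod_bias = 0        # 0 = full detail; higher = coarser models
        self.physics_rate = 60   # simulation ticks per second

    def update(self, cpu_frame_ms):
        if cpu_frame_ms > TARGET_MS * HEADROOM:
            # CPU over budget: coarsen LOD first, then throttle physics.
            if self.lod_bias < 4:
                self.lod_bias += 1
            elif self.physics_rate > 30:
                self.physics_rate -= 10
        elif cpu_frame_ms < TARGET_MS * 0.7:
            # Plenty of headroom: restore quality gradually.
            if self.physics_rate < 60:
                self.physics_rate += 10
            elif self.lod_bias > 0:
                self.lod_bias -= 1

settings = DynamicSettings()
for ms in (14.0, 17.2, 18.1, 15.0, 11.0):   # fake per-frame CPU times (ms)
    settings.update(ms)
    print(f"{ms:5.1f} ms -> lod_bias={settings.lod_bias}, physics={settings.physics_rate}Hz")
```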
 
Most of us, including Digital Foundry, assumed the PS5 CPU and the Ryzen 3600 were close enough in performance to run head-to-head comparisons. The results of today's video analysis likely come as a surprise to all of us. Labeling a reasonable hypothesis, that prior comparisons may have understated the PS5's GPU performance, as console warring is insane.

The PS5 CPU is a Zen 2 8-core/16-thread CPU at a variable 3.5GHz. The Ryzen 3600 is a 6-core/12-thread 3.6GHz CPU. How would these be comparable?

Is DF saying that because they can't afford a 3700X, which is an 8-core/16-thread Zen 2 at 3.6GHz?
 
The PS5 CPU is a Zen 2 8-core/16-thread CPU at a variable 3.5GHz. The Ryzen 3600 is a 6-core/12-thread 3.6GHz CPU. How would these be comparable?

Is DF saying that because they can't afford a 3700X, which is an 8-core/16-thread Zen 2 at 3.6GHz?
No, they are comparable, because they have tested it (in many, many benchmarks over time) and found that you reach approximately the same or better performance with a 3600.
There are also benchmarks on the web that point to even lower CPUs. The problem with the CPUs in the consoles is that they normally have no direct counterpart in the PC space, so you can only get an approximation of their speed.

E.g. the cache of both consoles is halved (as in the mobile CPU space). But not only that: GDDR is much less efficient for CPU tasks (the video shows that quite well), the chips are speed-limited, and in the case of the PS5 some parts have been altered (well, Sony cut some parts of the CPU)...
This all leads to quite different CPUs. In the case of the PS5-based PC kit that came out years ago, it wasn't even comparable to a 2700X...

More efficient APIs, a smaller OS... this all leads to the performance target that DF is currently using. That might change in the future, especially if game optimization on PC continues to be as bad as in some current titles.
 
The PCIe 4.0 x4 connection to the GPU will be having some effect, depending on the game.

In Metro Exodus, dropping to PCIe 2.0 x16 / PCIe 4.0 x4 bandwidth at 1080p can hit the game for 10%+, even without RT (so no updating of the BVH tree over PCIe).


[Chart: Metro Exodus, 1920x1080, performance across PCIe configurations]


Higher resolutions hurt the slower PCIe setups progressively more.

[Chart: Metro Exodus, 3840x2160, performance across PCIe configurations]


While most of what we're seeing in the DF video is going to be the CPU, at least some part of it, in at least some games, is likely due to the very slow PCIe connection. The extent to which PCIe is influencing the results probably can't be determined without running the PC tests at PCIe 2.0 x16 to get a similar PCIe bandwidth.
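For reference, the two links really are near-identical in usable bandwidth, which is why PCIe 2.0 x16 works as a stand-in for PCIe 4.0 x4. A quick back-of-the-envelope check using the published per-lane rates and encoding overheads:

```python
# Usable PCIe bandwidth: transfer rate (GT/s) * encoding efficiency / 8 bits * lanes.
def pcie_bandwidth_gb_s(gt_per_s, encoding_eff, lanes):
    return gt_per_s * encoding_eff / 8 * lanes

# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (80% efficient)
print(f"PCIe 2.0 x16: {pcie_bandwidth_gb_s(5.0, 0.8, 16):.2f} GB/s")       # 8.00 GB/s
# PCIe 4.0: 16 GT/s per lane, 128b/130b encoding (~98.5% efficient)
print(f"PCIe 4.0 x4:  {pcie_bandwidth_gb_s(16.0, 128/130, 4):.2f} GB/s")   # 7.88 GB/s
```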
 
A clean sweep of Sony news stories this week, as John, Oliver and Rich react to the alleged PlayStation 5 Pro specs and discuss the apparent debut of the Project Q handheld device in the flesh in the least flattering manner possible. At the same time, Sony is slashing the price on PS5 consoles... so could a new, replacement model be arriving soon? After that, there's more footage from Ratchet and Clank: Rift Apart running off a launch PlayStation 4 hard drive. Plus: are GPU teraflop metrics now essentially useless?

00:00:00 Introduction
00:00:50 News 01: PS5 Pro detailed in new rumours
00:29:08 News 02: Big PS5 summer discounts: what could they mean?
00:37:48 News 03: Project Q handheld potentially leaked
00:46:09 News 04: Ratchet & Clank: Rift Apart vs PS4 HDD!
00:59:13 Supporter Q1: Why don’t developers include an uncapped mode in games, for the benefit of more powerful future consoles?
01:03:58 Supporter Q2: Are teraflop figures for GPU hardware pointless nowadays?
01:08:53 Supporter Q3: Could Rich be given the high-end PC version in system comparison videos for once?
01:10:49 Supporter Q4: Oliver operates in a totally different timezone to everyone else at DF. Is this ever helpful?
01:12:26 Supporter Q5: What are the benefits of playing on original hardware if the console can be emulated very effectively?
 
00:46:09 News 04: Ratchet & Clank: Rift Apart vs PS4 HDD!

The min-spec R&C test was interesting, and for the purposes of the three-way comparison it was right to have the frame rate unlocked.

However, I suspect a much superior experience could be had if the game were locked to 30fps with DRS. The experience would likely have been much smoother, the average resolution higher, there would have been fewer stutters, and it might even have helped with the loading sequences too. I'd certainly be interested to see how well the game plays under those circumstances.
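To make that concrete, a rough sketch of the kind of proportional DRS loop I mean, targeting the 33.3ms budget of a 30fps cap, is below; the gain and clamp range are made up for illustration.

```python
# Toy dynamic resolution scaler: hold GPU frame time near the 30fps budget
# by trading resolution instead of dropping frames. Numbers are illustrative.

TARGET_MS = 1000.0 / 30.0            # 33.3 ms budget at 30fps
MIN_SCALE, MAX_SCALE = 0.5, 1.0      # allowed render-scale range

def next_resolution_scale(scale, gpu_frame_ms):
    # Proportional controller: positive error means we're over budget.
    error = (gpu_frame_ms - TARGET_MS) / TARGET_MS
    scale -= 0.5 * error * scale
    return max(MIN_SCALE, min(MAX_SCALE, scale))

scale = 1.0
for gpu_ms in (36.0, 35.1, 33.8, 31.0, 29.5):   # fake per-frame GPU times (ms)
    scale = next_resolution_scale(scale, gpu_ms)
    print(f"{gpu_ms:5.1f} ms -> render scale {scale:.2f}")
```

The point is that resolution absorbs the load spikes instead of frame time, which is why a capped mode would likely feel smoother.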
 
They cite PS5 vs XBSX as indicative of the TF difference not manifesting. Do we have an idea why yet?

The tools!

Seriously, going by all the developer tweets from the launch of these systems, the PS5 dev tools and environment were more familiar (i.e., PS4-like) to them, which in some instances allowed certain games to look and/or perform better than on Series X. Now they're most often matched... or one has a slight edge in resolution, or one sticks closer to a given framerate cap. Plus, PS5 being the lead platform for most third-party games doesn't really give developers a lot of time and space to fully stretch the Series X's prowess.
 
I wonder if Richard hurts his back because of his tilted posture in the chair?

Other than that, Sony has tremendous potential to release a decently specced handheld instead of that crappy cloud console. If a major gaming company like Sony releases a handheld console, it will sell in droves, and you can have cloud gaming on such a device as well.
 
On PS5 Pro speculation: I've got to wonder if Kraken is still going to be a thing, and whether they are going to use Kraken in future UE5 games, in which case PS5 might have an advantage over Xbox Series X, especially taking into account that Sony put $200 million into Unreal Engine 5 development.
 
A PS5 Pro would be amazing for PC gaming, as it would push developers to offer good scalability above the base PS5, which would filter over to PC too.

So I'm all for it.
 
On PS5 Pro speculation: I've got to wonder if Kraken is still going to be a thing, and whether they are going to use Kraken in future UE5 games, in which case PS5 might have an advantage over Xbox Series X, especially taking into account that Sony put $200 million into Unreal Engine 5 development.

I'm not sure if you're misunderstanding what Kraken is. It's just a compression format that is licensable for use on any platform, including PC and Xbox. It's a good one, and very efficient on the CPU; I think it's used by Cyberpunk on PC, among likely many other games.

On PS5, there is a hardware decompression unit that can decompress the Kraken-encoded data stream rather than having to rely on the CPU, as you would on PC or Xbox if using Kraken.

However, Xbox has its own equivalent (BCPACK) with a hardware decompressor, and now PC does too in the form of GDeflate and GPU decompression (although, as we've seen, it's not working as expected in R&C).

There's nothing fundamentally advantageous to PS5 (that we know of) about using Kraken vs the other solutions I mention above.

It's an absolute certainty that the PS5 Pro will also use Kraken, as the compression format has to be the same across both consoles or devs would have to ship two different packages. Kraken is also pre-licensed for all devs making PS5 games, so they don't have to license a different solution.
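To illustrate that the format and the decode path are separate concerns, here's a toy sketch; it is not any real SDK API, and every function in it is a hypothetical stand-in.

```python
# The on-disc payload is Kraken-encoded on every platform; what differs is
# who performs the decode. All functions are hypothetical placeholders.

def kraken_decode(data):
    # Placeholder: real Kraken decoding requires the proprietary Oodle library.
    return data

def hw_decompress(data):
    # Stand-in for PS5's fixed-function decompression unit: near-zero CPU cost.
    return kraken_decode(data)

def cpu_kraken_decode(data):
    # Stand-in for the licensed Oodle software decoder running on PC/Xbox CPUs.
    return kraken_decode(data)

def load_asset(compressed, platform):
    # Identical package on every platform; only the decode route changes.
    if platform == "ps5":
        return hw_decompress(compressed)
    return cpu_kraken_decode(compressed)
```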
 
They cite PS5 vs XBSX as indicative of the TF difference not manifesting. Do we have an idea why yet?

Presumably the bottlenecks are elsewhere (at least at the moment). Last-gen and cross-gen workloads are probably not going to fully take advantage of such wide compute, combined with limitations in other areas, e.g. ROPs, shader engines, L1 GPU cache.

RT, mesh shaders, more emphasis on compute shaders, and generally longer and more complex pixel shaders should allow more use to be made of all that compute. At least that's what I've been expecting to slowly happen as the generation progresses.

I'm looking forward to seeing how, for example, performance in Cyberpunk: Phantom Liberty differs from the cross-gen base game. That's assuming it does, of course.

(As others have indicated, PS5 having roughly a 4:1 sales advantage over Series X, and having a better dev environment especially early on, probably doesn't help either!)
 
They cite PS5 vs XBSX as indicative of the TF difference not manifesting. Do we have an idea why yet?
Yes we do, and we have from the start. XSX uses a unique 14-WGP-per-shader-engine design, while PS5 (like almost all RDNA GPUs) uses 10 WGPs per shader engine. As expected, 14 WGPs per SE is less efficient for gaming. It's more complex than just narrow vs wide.
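For context, the back-of-the-envelope numbers behind the wide-vs-narrow point, using the publicly known console GPU configurations:

```python
# Peak FP32 throughput: CUs * 64 lanes * 2 ops per clock (FMA) * clock (GHz).
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

# PS5: 36 active CUs (18 WGPs, 10 WGPs per shader engine) at up to 2.23 GHz
# XSX: 52 active CUs (26 WGPs, 14 WGPs per shader engine) at a fixed 1.825 GHz
print(f"PS5 peak: {tflops(36, 2.23):.2f} TF")    # ~10.28 TF
print(f"XSX peak: {tflops(52, 1.825):.2f} TF")   # ~12.15 TF
```

The TF gap comes almost entirely from width rather than clocks, and width is exactly what the front end has to keep fed.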

 