Digital Foundry Article Technical Discussion [2024]

Do we know that the cache hierarchies in PS5/Pro & XSX have similar bandwidths?
The only thing known for certain is that the L2 is 5MB on XSX and 4MB on the PS5 Pro.

L0 is tied per active CU.
[Image: Xbox Series X Hot Chips GPU overview, RDNA 2 CU architecture]


So I couldn't tell you how much L0 there is per CU. And I don't know how much L1 there is, as it is undisclosed.
 
Interesting point. AMD went from 128KB L1 per 5 WGPs in RDNA 1&2 to 256KB per 4 WGPs in RDNA 3. PS5 is at 128KB per 5 WGPs going to 256KB per 8 WGPs in the PS5 Pro.

Why should we assume that MS allocated a relatively stingy 128KB per 7 WGPs? That would be pretty inconsistent with what everyone else has done.
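
As a rough illustration of that point, here is the L1-per-WGP arithmetic implied by the figures above (a minimal sketch; the 128KB-per-7-WGPs Series X entry is the speculative case being questioned, not a confirmed spec):

```python
# L1 cache per shader array divided by the WGPs sharing it (KB per WGP).
# Figures are those quoted above; the Series X entry is hypothetical.
configs = {
    "RDNA 1/2 desktop":           (128, 5),
    "RDNA 3 desktop":             (256, 4),
    "PS5":                        (128, 5),
    "PS5 Pro":                    (256, 8),
    "Series X (if 128KB per SA)": (128, 7),
}

for name, (l1_kb, wgps) in configs.items():
    print(f"{name}: {l1_kb / wgps:.1f} KB of L1 per WGP")
```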

It may be that 128KB per shader array was what AMD was offering at the time MS locked down Series specs, and having AMD make something larger wasn't worth it (delay, cost, performance gain or whatever).

By RDNA3 256KB per SA was a standard part of RDNA with all the libraries and debugging done. It may be that some aspect of RDNA3 helped to drive this, perhaps the dual issue FP32 stuff.

In the same vein, it appears that the Pro's doubled up L0 is taken from RDNA 4, and is to support a newer, wider BVH8 structure.
 
It may be that 128KB per shader array was what AMD was offering at the time MS locked down Series specs, and having AMD make something larger wasn't worth it (delay, cost, performance gain or whatever).

By RDNA3 256KB per SA was a standard part of RDNA with all the libraries and debugging done. It may be that some aspect of RDNA3 helped to drive this, perhaps the dual issue FP32 stuff.

In the same vein, it appears that the Pro's doubled up L0 is taken from RDNA 4, and is to support a newer, wider BVH8 structure.

128KB per SA makes sense but it doesn’t make sense to just throw more CUs in there for no benefit.
 
No, it has nothing to do with Cerny.

I'm saying it has never been a valid way to judge the performance of a GPU. People were just using it to compare the consoles and point out that in this one particular metric the PS5 was superior.
But bigger GPUs with more bandwidth have always performed better and had worse bandwidth ratios. That has always been the case except in situations where the GPU is bandwidth starved. Until we know that bandwidth is the bottleneck, you can keep upping the compute profile.
So, at the end of the day, what is the reason the Series X isn't performing up to spec in the majority of titles?

Compared with the PS5 and RDNA 2 GPUs on PC? I think the Series X is the only GPU with 14 WGPs per SE. No other RDNA 2 GPU has that many. Kepler has talked about it on Twitter.

In the end I'm not an AMD engineer, so what do I know :???:
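
For context, a quick comparison of WGPs per shader engine across RDNA 2 designs, using commonly cited CU and shader engine counts (treat these as assumptions rather than official breakdowns; a sketch, not a spec sheet):

```python
# Physical WGPs (CUs / 2) divided by shader engines for several RDNA 2 designs.
# Counts are commonly cited figures, not official AMD breakdowns.
rdna2_parts = {
    "Navi 21 (RX 6800/6900 XT)": (40, 4),  # 80 CUs physical, 4 shader engines
    "Navi 22 (RX 6700 XT)":      (20, 2),  # 40 CUs physical, 2 shader engines
    "PS5":                       (20, 2),  # 40 CUs physical (36 active)
    "Xbox Series X":             (28, 2),  # 56 CUs physical (52 active)
}

for name, (wgps, shader_engines) in rdna2_parts.items():
    print(f"{name}: {wgps / shader_engines:.0f} WGPs per shader engine")
```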
 
So, at the end of the day, what is the reason the Series X isn't performing up to spec in the majority of titles?

Compared with the PS5 and RDNA 2 GPUs on PC? I think the Series X is the only GPU with 14 WGPs per SE. No other RDNA 2 GPU has that many. Kepler has talked about it on Twitter.

In the end I'm not an AMD engineer, so what do I know :???:
It’s just coming down to platform priority and optimization.
 
So, at the end of the day, what is the reason the Series X isn't performing up to spec in the majority of titles?

Compared with the PS5 and RDNA 2 GPUs on PC? I think the Series X is the only GPU with 14 WGPs per SE. No other RDNA 2 GPU has that many. Kepler has talked about it on Twitter.

In the end I'm not an AMD engineer, so what do I know :???:

BW per CU is meaningless because it ignores how fast the CUs are clocked and how fast they can consume data.

If you compare BW per [CU x clock speed], you can see the Series X is actually probably better provided for than the PS5, and even the Series S is doing okay (and is massively ahead of any mobile AMD APU).
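
A back-of-the-envelope version of that comparison using the headline specs (assuming the Series X's 560 GB/s fast pool; real workloads also touch its slower 336 GB/s pool, so treat this purely as a sketch):

```python
# Memory bandwidth per (CU x GHz) as a crude "how well fed is each CU" metric.
# Headline specs; Series X uses its 10 GB / 560 GB/s pool here (an assumption).
consoles = {
    "PS5":           {"bw_gbps": 448, "cus": 36, "clock_ghz": 2.23},
    "Xbox Series X": {"bw_gbps": 560, "cus": 52, "clock_ghz": 1.825},
    "Xbox Series S": {"bw_gbps": 224, "cus": 20, "clock_ghz": 1.565},
}

for name, s in consoles.items():
    ratio = s["bw_gbps"] / (s["cus"] * s["clock_ghz"])
    print(f"{name}: {ratio:.2f} GB/s per (CU x GHz)")
```

On those numbers the Series X comes out slightly ahead of the PS5 and the Series S ahead of both, whereas plain bandwidth per CU would rank the PS5 first.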

My thought is that Series X was intended to excel at high compute and high bandwidth workloads. RT is one such workload, and games with micro geometry using mesh shader (or something else compute driven like Nanite) are another. Lots of geometry on the fly, and lots of long running, complex shaders keeping compute units busy.

Last-gen geometry using vertex shaders and last-gen pixel shaders are probably not a great way to get the most out of Series X. The machine is doing great with the likes of Alan Wake 2, Avatar and UE5 games though.

Another issue, as iroboto points out, is that not many people can afford to focus too heavily on Xbox. Even MS have chosen not to.
 
The "developers don't optimize for series x" angle doesn't convince me anymore. I don't think many developers are even making platform specific optimizations anymore. In the digital foundry interview with remedy, they said that all optimizations apply to all platforms. Even remedy, a company that wants to excel in tech, doesn't optimize more for one or the other. They just write generic code that works everywhere.

And that's what's probably happening across the whole multiplatform industry, for the most part.
 
The "developers don't optimize for series x" angle doesn't convince me anymore. I don't think many developers are even making platform specific optimizations anymore. In the digital foundry interview with remedy, they said that all optimizations apply to all platforms. Even remedy, a company that wants to excel in tech, doesn't optimize more for one or the other. They just write generic code that works everywhere.

And that's what's probably happening across the whole multiplatform industry, for the most part.

You mean Remedy talking about Alan Wake 2, which uses Mesh Shaders and runs quite a bit better on Series X? I don't think Xbox would be having any issues if everyone treated it like Remedy does, even when they weren't using e.g. the amplification shader.

And there are always platform-specific optimisations, they just aren't normally sexy things that people shout about. Just look how much Nvidia performance improved in Starfield, and how higher FPS modes are coming to that game on Xbox. Optimisation is a vast subject and it definitely still happens for specific hardware.

Edit: just differences in how you use APIs on the same hardware can make a big difference too.
 
The "developers don't optimize for series x" angle doesn't convince me anymore. I don't think many developers are even making platform specific optimizations anymore. In the digital foundry interview with remedy, they said that all optimizations apply to all platforms. Even remedy, a company that wants to excel in tech, doesn't optimize more for one or the other. They just write generic code that works everywhere.

And that's what's probably happening across the whole multiplatform industry, for the most part.
There's a whole slide deck on UE5 optimization, and you can do a quick comparison of how the Series consoles and the PS5 will approach the same problem differently despite having essentially the same hardware. And that just comes down to an API difference.

So I disagree fully with that statement. If code that is optimal for PS5 is ported to Xbox (meaning you replicate the GNM functionality and code but with DX12), that doesn't mean it's optimal for Xbox.
 
You mean Remedy talking about Alan Wake 2, which uses Mesh Shaders and runs quite a bit better on Series X? I don't think Xbox would be having any issues if everyone treated it like Remedy does, even when they weren't using e.g. the amplification shader.

And there are always platform-specific optimisations, they just aren't normally sexy things that people shout about. Just look how much Nvidia performance improved in Starfield, and how higher FPS modes are coming to that game on Xbox. Optimisation is a vast subject and it definitely still happens for specific hardware.

Edit: just differences in how you use APIs on the same hardware can make a big difference too.

Timestamp with the discussion.

The mesh shader implementation varies only in meshlet size; the optimizations apply to all platforms, but some of them are more effective on one platform than on the others.

Should we take a Bethesda game as an example? Wouldn't surprise me if the Nvidia performance at launch was improved by fixing a bug more than anything else.

If they are improving the Series X frame rate (and not just removing the 30fps cap), it will probably apply to PC too.
 
There's a whole slide deck on UE5 optimization, and you can do a quick comparison of how the Series consoles and the PS5 will approach the same problem differently despite having essentially the same hardware. And that just comes down to an API difference.

So I disagree fully with that statement. If code that is optimal for PS5 is ported to Xbox (meaning you replicate the GNM functionality and code but with DX12), that doesn't mean it's optimal for Xbox.
Are developers going to follow those guidelines, or will they choose the path that gets them a working game on all platforms to save money and time?

I think we are too quick to assume that PS5 gets all the care and attention while PC and Xbox get unoptimized code. Most of the time it's probably the easiest and quickest implementation that gets shipped.
 
This finally puts to rest a lot of partisan nonsense about how PS5 eschewed wide shader arrays because Cerny smart, how PS5 eschewed VRS because Cerny smart, that PS5 didn't support int4 and int8 because Cerny smart, that PS5 has a special Geometry Engine that's super custom unique (designed by Cerny, he smart).

Cerny is very smart, but PS5 is the way it is because that was the best Sony could do at the time. Now that they have access to tier 2 VRS, full Mesh Shader equivalence, AI acceleration instructions etc., they've got it all. They probably have Sampler Feedback too. And now that the best way to push compute further is to go wider and, if anything, a little slower on balance, they are doing that too.

There's a reason that the PS5 Pro is looking similar to the Series X at a high level - it's because they're both derived from the same line of technology, and they both face the same pressures on die area and power, and they both have very smart people deciding what to do with what's available.

Bit of a bummer that PS5 Pro doesn't have any Infinity Cache, but understandable given that it eats up die area. Being 2 ~ 3x faster at RT in the absence of any IC is cool though, and makes me quietly optimistic for RDNA4 and any possible AMD based handheld console.

@Dictator do you know how many ROPs the PS5 Pro has? Is it still 64?
Because of hardware bugs, the primitive shaders on Vega and RDNA 1 are not programmable by game developers, but on PS5 they are. That's the difference between RDNA 1 and PS5.
 
Are developers going to follow those guidelines, or will they choose the path that gets them a working game on all platforms to save money and time?

I think we are too quick to assume that PS5 gets all the care and attention while PC and Xbox get unoptimized code. Most of the time it's probably the easiest and quickest implementation that gets shipped.
No. The PS5 is the largest platform, so it gets the most attention. This is common knowledge in the industry. PS5 versions almost always ship with fewer issues than PC and Xbox.

Why optimize for Xbox when the market is so much smaller? It has taken a while to get the Series consoles performing where they should be. With each wave it gets better, but it's certainly not reflective of launch.
 
No. The PS5 is the largest platform, so it gets the most attention. This is common knowledge in the industry. PS5 versions almost always ship with fewer issues than PC and Xbox.

Why optimize for Xbox when the market is so much smaller? It has taken a while to get the Series consoles performing where they should be. With each wave it gets better, but it's certainly not reflective of launch.
I'm not saying that developers are optimizing for Xbox, I'm saying they aren't optimizing for anyone.

Ps: of course there are edge cases, but I'm talking about the majority of games.
 
I see this narrative still being pushed about this damn pesky PS5 lacking features (defined as consumer-facing names), hush hush, it's probably their market share's fault that Xbox isn't optimised more.

Meanwhile, in the real world:

Most games are prototyped, built and shown on PC until the last month before release. When asked, most devs don't know exact console performance characteristics until the very end. I bet this is that PS5 optimisation ruining everything.

As for the mentioned "lacking" features themselves, they are to this day mostly not used even in Xbox first-party games:

VRS - Not exactly setting the world on fire, but... one of, if not the best, implementations is the software one from Call of Duty, which even runs on last-gen hardware.

Sampler Feedback - I have seen a dev asked about the lack of usage comment shortly: "too much overhead".

Mesh shaders - Sebbi says they're mostly the same as primitive shaders, Alan Wake got almost the same results, and here we are, another week, another circle of doubt.

I would tend to agree; about 98% of devs don't dive an inch beyond the path of least resistance and the sliders/plugins provided. PC is the center of gravity, not PS5.

As for the PS5 "lacking" something handy for optimisation and dragging everyone down, we have one direct example in that recent Unreal presentation.

Current status:

- PS: already implemented in Sony's SDK

- XB: will be implemented in the SDK

- PC: ...well, looking at how to do it.
 
UE 5.4 has been tested by Digital Foundry in Fortnite and the City Sample demo and compared against UE 5.0:

- CPU performance is a lot higher (50% to 80% higher depending on the scene/system).
- Shader stutters and traversal stutters are still a major problem that affects frame pacing.
- Hardware Ray Tracing now allows for high-resolution reflections and more bounces for emissive lights.

 