Digital Foundry Article Technical Discussion [2024]

Something like Lego Star Wars on PS5 makes a lot more sense if there is a PS4 API mode on PS5. And from the results, it doesn't look like a very efficient translation.

Series X using the PC ray tracing code in some games explains a lot too.
How many games run better on PS5? Because in this fiasco of a generation I can only count a few; they are sooooo similar. Tbh, I've just seen a bunch of games running better on the PS5 while most games run equal on both, and of those that do run better on one console, can we really say more than 60% run slightly better on XSX?
 
The difference between the two consoles is more a curiosity than anything else, because it's so negligible. Also, Series X has a split memory setup where one pool is faster and the other is slower, while the PS5 has a single pool running at one speed. So when you mostly use the faster pool the Xbox has a slight advantage, but when you have to use the slower pool it creates a bottleneck compared to PS5.
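To put rough numbers on that: the spec-sheet figures are 10 GB at 560 GB/s plus 6 GB at 336 GB/s on the Series X, versus a single 16 GB pool at 448 GB/s on the PS5. The little mixing model below is just my own back-of-the-envelope sketch (it assumes traffic to the two pools is time-sliced on the shared bus rather than overlapped), but it shows how quickly the effective number falls below PS5's flat 448 GB/s once a chunk of GPU traffic has to live in the slow pool.

```python
# Toy model of the split-pool effect (my own back-of-the-envelope, not from DF).
# Spec-sheet figures: XSX has 10 GB @ 560 GB/s and 6 GB @ 336 GB/s on a shared bus;
# PS5 has 16 GB @ 448 GB/s. Assumes accesses to the two pools are time-sliced
# rather than overlapped, which is a simplification.

XSX_FAST_GBPS = 560.0   # 10 GB "GPU-optimal" pool
XSX_SLOW_GBPS = 336.0   # remaining 6 GB pool
PS5_GBPS = 448.0        # single unified pool

def xsx_effective_bandwidth(slow_fraction: float) -> float:
    """Harmonic mix: per-byte time = fast share / fast BW + slow share / slow BW."""
    fast_fraction = 1.0 - slow_fraction
    return 1.0 / (fast_fraction / XSX_FAST_GBPS + slow_fraction / XSX_SLOW_GBPS)

for slow_fraction in (0.0, 0.2, 0.4, 0.6):
    eff = xsx_effective_bandwidth(slow_fraction)
    verdict = "above" if eff > PS5_GBPS else "below"
    print(f"{slow_fraction:.0%} of traffic in the slow pool -> ~{eff:.0f} GB/s ({verdict} PS5's 448 GB/s)")
```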

Kudos to Cerny.
The problem for Microsoft is that they are losing more per console compared to Sony while getting pretty much the same results. The chip is, what, 30% bigger? And that is a huge amount of money spent for invisible advantages.
 
How many games run better on PS5? Because in this fiasco of a generation I can only count a few; they are sooooo similar. Tbh, I've just seen a bunch of games running better on the PS5 while most games run equal on both, and of those that do run better on one console, can we really say more than 60% run slightly better on XSX?
I'm sure that if we were to take only games that have a clear advantage on one or the other (so no ambiguous situations, like better framerate on one but better settings on the other, or better framerate on one but slightly higher dynamic res on the other), we could count them on the fingers of one or two hands. The final output is 99% of the time the Pam "they're the same picture" meme.
 
The problem for Microsoft is that they are losing more per console compared to Sony while getting pretty much the same results. The chip is, what, 30% bigger? And that is a huge amount of money spent for invisible advantages.
It actually depends on what time frame we're talking about. Most likely, Sony pushing for high clocks in 2020 meant their first wave of consoles was quite expensively binned. That would decrease over time, though.

XSX most likely went for its lower clock for binning purposes more than anything else (to save money, again).
 
Not surprised PS5 can extract better performance with its shader compiler and API. It's like using Mantle vs DirectX.

I wonder if AMD always has a native API for development or if it's something they only design when a new console generation comes around. Like, was Mantle something they always had internally and fancied up for release?

Sony also has their own shading language. I wonder how much of the compiler is maintained and optimized by Sony vs AMD. But you can see how a shading language could produce more optimal results if it only has to target a single GPU.

Edit: More accurately the API and shader language only have to target a single ISA.
 
The baggage of keeping things PC-compatible with DirectX keeps Xbox consoles from ever hitting the pinnacle of optimization. The API being lower level than DX probably makes the compiler's job easier, since it has to guess less about what the developer is trying to do.
 
It actually depends on what time frame we're talking about. Most likely, Sony pushing for high clocks in 2020 meant their first wave of consoles was quite expensively binned. That would decrease over time, though.

XSX most likely went for its lower clock for binning purposes more than anything else (to save money, again).
What's strange to me is that it's literally the same company engineering the chips. They know what's the best combination of CUs and frequency for yields and cost. How did Sony come to the conclusion of using a smaller chip with higher, dynamic frequencies while Microsoft went in the other direction?

Especially since RDNA 2 GPUs are clocked like the PS5 or higher, while the XSX is so much lower. Unless AMD and Sony have horrible yields at those frequencies, the Series X chip is the odd one out. The PS5 is also cheaper to manufacture (already in 2021 they were selling at cost, then costs rose again), so I don't understand how it all happened.
 
Not surprised PS5 can extract better performance with its shader compiler and API. It's like using Mantle vs DirectX.

I wonder if AMD always has a native API for development or if it's something they only design when a new console generation comes around. Like, was Mantle something they always had internally and fancied up for release?

Sony also has their own shading language. I wonder how much of the compiler is maintained and optimized by Sony vs AMD. But you can see how a shading language could produce more optimal results if it only has to target a single GPU.
Mantle existed because the other PC gfx API vendors at the time weren't entirely sold, or were on the fence, about ideas like: a monolithic pipeline model, which makes it easier to apply inlining optimizations for static state without the driver secretly compiling its own internal PSOs behind the developer's back; persistently mapped device upload memory (AKA pinned memory); more advanced GPU-driven rendering APIs; and the biggest elephant in the room, bindless, which was fundamentally incompatible with driver-side automatic hazard tracking compared to the resource binding models of prior gfx APIs ...
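A purely illustrative toy of that last point, with no real API in it: under slot binding the driver sees a draw's exact working set at submit time and can insert barriers on its own, while under bindless the shader picks indices out of a big descriptor table at runtime, so driver-side automatic hazard tracking has nothing to go on.

```python
# Purely illustrative, not a real graphics API: why automatic hazard tracking
# works with slot binding but breaks with bindless.

class SlotBindingDriver:
    """Old model: resources are bound to numbered slots before each draw."""
    def __init__(self):
        self.slots = {}              # slot index -> resource name
        self.pending_writes = set()  # resources written by earlier passes

    def bind(self, slot, resource):
        self.slots[slot] = resource

    def draw(self):
        # The driver knows the exact working set, so it can insert barriers itself.
        for resource in self.slots.values():
            if resource in self.pending_writes:
                print(f"driver inserts barrier for {resource}")
                self.pending_writes.discard(resource)

class BindlessDriver:
    """Bindless model: shaders index into one big descriptor table at runtime."""
    def __init__(self, descriptor_table):
        self.table = descriptor_table

    def draw(self):
        # Indices are computed on the GPU, so the CPU-side driver has no idea
        # which entries will be read. Hazard tracking becomes the app's job.
        print(f"driver cannot tell which of {len(self.table)} resources get read")

slot_driver = SlotBindingDriver()
slot_driver.pending_writes.add("shadow_map")
slot_driver.bind(0, "shadow_map")
slot_driver.draw()                                   # barrier inserted automatically

BindlessDriver([f"texture_{i}" for i in range(10000)]).draw()
```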

Sony's graphics libraries and shading languages are entirely designed in-house without much input (if any) from AMD. Chances are they have a custom graphics kernel (AMDGPU predates modern PS hardware and the radeon kernel driver has been abandoned), since they aren't looking to support too many different sets of hardware and they want nothing to do with DX/GL/VK/etc., so their userspace interfaces (GNM/AGC) are entirely original machinations of their own ...
The baggage of keeping things PC-compatible with DirectX keeps Xbox consoles from ever hitting the pinnacle of optimization. The API being lower level than DX probably makes the compiler's job easier, since it has to guess less about what the developer is trying to do.
Direct3D on Xbox systems usually has console-specific 'extensions' for the developers that do want to apply similar optimizations, and you can even bypass the shader compiler with custom HLSL intrinsics that often borderline resemble native ISA!
 
What's strange to me is that it's literally the same company engineering the chips. They know what's the best combination of CUs and frequency for yields and cost. How did Sony come to the conclusion of using a smaller chip with higher, dynamic frequencies while Microsoft went in the other direction?

Especially since RDNA 2 GPUs are clocked like the PS5 or higher, while the XSX is so much lower. Unless AMD and Sony have horrible yields at those frequencies, the Series X chip is the odd one out. The PS5 is also cheaper to manufacture (already in 2021 they were selling at cost, then costs rose again), so I don't understand how it all happened.
We don't really know how well NEW first-party games made specifically for Xbox will use XSX's capabilities; maybe those games would run less well on ps5. Maybe HB2, if it were available for ps5, wouldn't run as well. Titles using more modern technology also run a bit better on XSX, see AW2 and Avatar. It will be interesting to see how newer games run on ps5 and XSX.

unfortunately, I couldn't write the number 5 in lowercase, but I think it's still understandable :)
 
We don't really know how well NEW first-party games made specifically for Xbox will use XSX's capabilities; maybe those games would run less well on ps5. Maybe HB2, if it were available for ps5, wouldn't run as well. Titles using more modern technology also run a bit better on XSX, see AW2 and Avatar. It will be interesting to see how newer games run on ps5 and XSX.

unfortunately, I couldn't write the number 5 in lowercase, but I think it's still understandable :)
Absolutely, compute-heavy games tend to benefit from that 20% higher teraflop number, even if there have been some mixed results even among those. But making a console that mostly shows improvements only for engines that aren't that common wasn't the right move, if that was their intention. Maybe they expected more "next gen games" sooner? I mean, they are coming out with something in 2026...
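For reference, that "roughly 20%" paper gap is just the standard peak-FP32 arithmetic (CUs x 64 lanes x 2 ops per clock x clock); with the public clock and CU figures it actually comes out closer to 18%:

```python
# Standard peak-FP32 arithmetic: CUs x 64 lanes x 2 ops/clock x clock.
# CU counts and clocks are the public figures; PS5 is at its variable max clock.

def peak_fp32_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

xsx = peak_fp32_tflops(52, 1.825)   # ~12.15 TF, fixed clock
ps5 = peak_fp32_tflops(36, 2.23)    # ~10.28 TF, max boost clock
print(f"XSX {xsx:.2f} TF vs PS5 {ps5:.2f} TF -> ~{100 * (xsx / ps5 - 1):.0f}% on paper")
```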
 
Mantle existed because the other PC gfx API vendors at the time weren't entirely sold, or were on the fence, about ideas like: a monolithic pipeline model, which makes it easier to apply inlining optimizations for static state without the driver secretly compiling its own internal PSOs behind the developer's back; persistently mapped device upload memory (AKA pinned memory); more advanced GPU-driven rendering APIs; and the biggest elephant in the room, bindless, which was fundamentally incompatible with driver-side automatic hazard tracking compared to the resource binding models of prior gfx APIs ...

Sony's graphics libraries and shading languages are entirely designed in-house without much input (if any) from AMD. Chances are they have a custom graphics kernel (AMDGPU predates modern PS hardware and the radeon kernel driver has been abandoned), since they aren't looking to support too many different sets of hardware and they want nothing to do with DX/GL/VK/etc., so their userspace interfaces (GNM/AGC) are entirely original machinations of their own ...

Direct3D on Xbox systems usually has console-specific 'extensions' for the developers that do want to apply similar optimizations, and you can even bypass the shader compiler with custom HLSL intrinsics that often borderline resemble native ISA!

Good on Sony for investing in rolling their own programming model. Seems to be giving them great results.
 
What's strange to me is that it's literally the same company engineering the chips. They know what's the best combination of CUs and frequency for yields and cost. How did Sony come to the conclusion of using a smaller chip with higher, dynamic frequencies while Microsoft went in the other direction?

Especially since RDNA 2 GPUs are clocked like the PS5 or higher, while the XSX is so much lower. Unless AMD and Sony have horrible yields at those frequencies, the Series X chip is the odd one out. The PS5 is also cheaper to manufacture (already in 2021 they were selling at cost, then costs rose again), so I don't understand how it all happened.
Calculated risk for Sony. You don't know whether yields will improve, or by how much, over time. Fast silicon costs more than slower silicon just due to yield. There is only a 20% difference in size between the two chips at launch (300 mm² vs 360 mm²). MS may have played it overly safe from the beginning to ensure that their costs were in line, and Sony has more freedom to play with given their position in the market.

Ultimately you pay per wafer. A 20% SoC area difference can be made up by a 20% yield difference. Sony gets more chips per wafer, but they're hoping that yield improves so that they can claw their money back. There was a very specific reason why you didn't see many PS5 Digitals in the wild at launch: it was sold at a loss, and even by today's standards it likely still is.

Given how they were both priced fairly equally (even today), we can assume that their SoCs cost about the same at launch. PS5 was likely sitting at around 73% yield, XSX closer to 88%. With these numbers both would produce about 172-173 good chips per wafer.
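Those per-wafer figures line up with simple area division on a 300 mm wafer. This is my reconstruction of the arithmetic, ignoring edge dies, scribe lines, and proper defect-density modelling, with the 73%/88% yields taken as the assumptions above:

```python
# Reconstruction of the arithmetic above: simple area division on a 300 mm wafer,
# ignoring edge dies, scribe lines, and proper defect-density modelling.
import math

WAFER_DIAMETER_MM = 300.0
WAFER_AREA_MM2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2   # ~70,686 mm^2

def good_dies_per_wafer(die_area_mm2: float, yield_rate: float) -> float:
    return WAFER_AREA_MM2 / die_area_mm2 * yield_rate

ps5 = good_dies_per_wafer(300.0, 0.73)   # smaller die, assumed lower yield at high clocks
xsx = good_dies_per_wafer(360.0, 0.88)   # bigger die, assumed higher yield at lower clocks
print(f"PS5: ~{ps5:.0f} good dies per wafer, XSX: ~{xsx:.0f} good dies per wafer")
```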
 
Still interesting where graphics will be better and by how much. :)
Interesting, how is it possible that the XSX chip is only 20% larger yet has 50% more transistors?
 
It's impressive that Sony's development tools are both more performant and apparently easier to use.

It's a legacy of the Emotion Engine and Cell days: counting every cycle, MB, latency, DMA and so on, now channelled into so-called off-the-shelf hardware. There is still quite a lot of that know-how, and of those people, at Sony ATG / SN Systems / Worldwide Studios, for whom this is the default mode of doing things. Add to that the luxury of only a few hardware configs, working as extensions of each other, and here are the results. Nice to have such an example in the wider IT world, which day by day gets more and more abstracted and "performance last".
 
What's strange to me is that it's literally the same company engineering the chips. They know what's the best combination of CUs and frequency for yields and cost.


Xbox has fixed clocks - during even the most demanding game conditions the frequency shouldn't budge. This means leaving a lot of peak, or even typical, clock speed on the table.

These are the clocks for the 6700XT, a top bin of the 40 CU Navi 22 RDNA2 chip. Ignore the min/max numbers in the box for a moment, because those are actually the 99th and 1st percentiles, with outliers excluded.

[image: clock-vs-voltage.png]


There are points where the frequency drops into the 1900~2000 MHz range during gaming. This is despite the 6700XT on its own having a higher power draw than the entire Series X at the wall.

The Series X has significantly less electrical power available for the GPU than the 6700XT, and it has 30% more CUs to spread it amongst. They also can't bin for different models, and Series X chips had to be locked down and in mass production a little earlier too. A fixed clock of 1825 MHz is pretty damn good for the Series X, and in line with what you'd expect from year 2020 RDNA2.

The idea that Series X clocks are low is unfounded - it's the fixed nature of the clocks that makes them appear low relative to PS5 and particularly PC RDNA2 parts.

How did Sony come to the conclusion of using a smaller chip with higher, dynamic frequencies while Microsoft went in the other direction?

The PS5 initially did not have dynamic clocks. According to Cerny in Road to PS5, before dynamic clocks they couldn't maintain 2 GHz.

My guess would be that Cerny decided to add AMD Smartshift (or some version of it) to the PS5 when it was delayed to add RDNA2's ray tracing (or a slightly earlier version of it).

You'll probably have noticed that despite the liquid metal TIM and high power draw, the PS5 boosts well below typical RDNA2 clocks, but rather better than RDNA1. IMO this is almost certainly because the design sits somewhere between RDNA1 and RDNA2.

Adding RT and PC APU style boost/power sharing to the PS5 are the two things that almost completely nullified Xbox's hardware advantage, even if it delayed the PS5. It's stuff like this that really shows how very good Mark Cerny is at his job, and not nonsense internet stuff about "fully custom Cerny Geometry engines" or whatever.

Edit: Year 2020, not year 2000. 😬
 
Xbox has fixed clocks - during even the most demanding game conditions the frequency shouldn't budge. This means leaving a lot of peak, or even typical, clock speed on the table.

These are the clocks for the 6700XT, a top bin of the 40 CU Navi 22 RDNA2 chip. Ignore the min/max numbers in the box for a moment, because those are actually the 99th and 1st percentiles, with outliers excluded.

[image: clock-vs-voltage.png]


There are points where the frequency drops into the 1900~2000 MHz range during gaming. This is despite the 6700XT on its own having a higher power draw than the entire Series X at the wall.

The Series X has significantly less electrical power available for the GPU than the 6700XT, and it has 30% more CUs to spread it amongst. They also can't bin for different models, and Series X chips had to be locked down and in mass production a little earlier too. A fixed clock of 1825 MHz is pretty damn good for the Series X, and in line with what you'd expect from year 2020 RDNA2.

The idea that Series X clocks are low is unfounded - it's the fixed nature of the clocks that makes them appear low relative to PS5 and particularly PC RDNA2 parts.



The PS5 initially did not have dynamic clocks. According to Cerny in Road to PS5, before dynamic clocks they couldn't maintain 2 GHz.

My guess would be that Cerny decided to add AMD Smartshift (or some version of it) to the PS5 when it was delayed to add RDNA2's ray tracing (or a slightly earlier version of it).

You'll probably have noticed that despite the liquid metal TIM and high power draw, the PS5 boosts well below typical RDNA2 clocks, but rather better than RDNA1. IMO this is almost certainly because the design sits somewhere between RDNA1 and RDNA2.

Adding RT and PC APU style boost/power sharing to the PS5 are the two things that almost completely nullified Xbox's hardware advantage, even if it delayed the PS5. It's stuff like this that really shows how very good Mark Cerny is at his job, and not nonsense internet stuff about "fully custom Cerny Geometry engines" or whatever.
Dynamic clocks have always made sense; I wonder if the Switch 2 will have them.
 
My guess would be that Cerny decided to add AMD Smartshift (or some version of it) to the PS5 when it was delayed to add RDNA2's ray tracing (or a slightly earlier version of it).
SmartShift is about shifting power between the CPU and GPU based on workload, which you would already expect to be happening in an APU. Dynamic clocks have been on the PC for a long time, though I assume Sony's algorithm is different.
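For what it's worth, here's a toy sketch of the power-sharing idea. Every number in it is made up, and the real PS5 mechanism works off deterministic activity counters rather than measured power (per Road to PS5); the point is only that because power scales much faster than frequency, handing a big chunk of budget back and forth costs surprisingly little clock.

```python
# Toy model only: all numbers are made up, and the real PS5 mechanism uses
# deterministic activity counters rather than measured power (per Road to PS5).
# Idea: fixed SoC budget, the CPU takes what its workload needs, and the GPU
# drops its clock until its estimated draw fits the remainder. GPU power is
# modelled as ~f^3 (frequency times voltage squared, with voltage scaling
# roughly alongside frequency).

SOC_BUDGET_W = 200.0      # assumed total SoC power budget
GPU_MAX_CLOCK_GHZ = 2.23
GPU_MAX_POWER_W = 160.0   # assumed GPU draw at max clock under a worst-case workload

def gpu_clock_for_budget(gpu_budget_w: float) -> float:
    """Highest clock whose estimated power fits the budget, under the f^3 model."""
    if gpu_budget_w >= GPU_MAX_POWER_W:
        return GPU_MAX_CLOCK_GHZ
    return GPU_MAX_CLOCK_GHZ * (gpu_budget_w / GPU_MAX_POWER_W) ** (1.0 / 3.0)

for cpu_draw_w in (40.0, 60.0, 80.0):
    gpu_budget_w = SOC_BUDGET_W - cpu_draw_w
    clock = gpu_clock_for_budget(gpu_budget_w)
    print(f"CPU at {cpu_draw_w:.0f} W -> GPU budget {gpu_budget_w:.0f} W -> ~{clock:.2f} GHz")
```

In this model a 40 W swing in CPU draw only moves the GPU clock by about 10%.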
 