Digital Foundry Article Technical Discussion [2023]

I doubt developers are aiming for parity across the two consoles, that’s just how the games are performing. If developers can push quality a little bit further on one console versus the other they’ll likely do it, even if it’s a minor res bump or slight increase in settings. One of the Immortals of Aveum devs confirmed that they pushed PS5 settings a bit further, since it was the better performing console for their game.

What you're describing would be very high-level, last-step optimizations requiring relatively low effort. At that point the code and game design are basically running how they are.

Developers aren't aiming for parity, but that doesn't mean they are, for the most part and on average, going to put heavy effort into pushing disparity either (realistically you'll probably find some developer bias, for whatever reason, towards a specific hardware platform).

We know, for instance, some specific advantages of the XSX, such as likely higher potential shader throughput. I'd be skeptical that devs are putting heavy emphasis on things like planning early how to leverage that for best advantage over the PS5 version, or later writing two sets of shaders, each optimal for its platform, or things like that.
 
Do you have a link to this statement? Are we talking IQ, framerate, geometry, LoD, textures, world streaming/load, etc.?


What you're describing would be very high-level, last-step optimizations requiring relatively low effort. At that point the code and game design are basically running how they are.

Developers aren't aiming for parity, but that doesn't mean they are, for the most part and on average, going to put heavy effort into pushing disparity either (realistically you'll probably find some developer bias, for whatever reason, towards a specific hardware platform).

We know, for instance, some specific advantages of the XSX, such as likely higher potential shader throughput. I'd be skeptical that devs are putting heavy emphasis on things like planning early how to leverage that for best advantage over the PS5 version, or later writing two sets of shaders, each optimal for its platform, or things like that.
I think the nature of cross-platform development means the game isn't tailored to one specific piece of hardware. On the flip side, UE5 in general could be more optimized on Xbox given the work Microsoft and The Coalition put into it.
 


I think the nature of cross-platform development means the game isn't tailored to one specific piece of hardware. On the flip side, UE5 in general could be more optimized on Xbox given the work Microsoft and The Coalition put into it.

Interesting. I wonder how many other developers are having issues with clawing back memory resources (RAM) from the Series X OS? And it sounds like async compute is faster on the PS5 as well, at least with this game.

I do wonder if PS5 is slightly more performant with UE5. :unsure:

It’s slightly more AO and slightly different SSR. The frame rate is 5-10% worse than on XSX. It’s not exactly a good trade-off.

Wasn't that 5-10% delta in XBSX's favor mentioned by Thomas at DF during the review/release period of the game? And since then, haven't there been several patches addressing these performance issues (especially the battles within the town areas) on both systems?
 
Why would it be vastly superior? The other problem with this XSX vs PS5 issue that seems to be overlooked is that the XSX's theoretical advantages aren't even all that big. The difference is much smaller than what we saw last gen with either the PS4 vs Xbox One or the later refresh, Xbox One X vs PS4 Pro.

Let's just assume the XSX were a flat 20% faster than the PS5 across the board (the theoretical TFLOPS differential between the XSX and PS5 is actually slightly less than this): how much better do you think games would look? Keep in mind also that, outside of developer focus, a lot of the actual PS5/XSX hardware gap is clawed back, as in some areas they are the same and in some areas the PS5 is more hardware capable.

In the PC space 20% faster in terms of GPU performance is generally considered the borderline of whether an upgrade is even noticeable at all. The 7900XTX has a larger hardware gap over the 7900XT for instance, and if we do apples to apples comparisons I don't think people would consider the XTX "vastly superior."

In general, what are people's expectations for something that is 20% faster? Keep in mind the difference, even based on theoretical hardware, is less than this.
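
For a rough sense of scale, here's a back-of-the-envelope check using the publicly quoted GPU configurations (52 CUs at 1825 MHz for XSX, 36 CUs at up to 2230 MHz for PS5); the PS5 clock is a variable boost cap, so treat this as theoretical peak only.

```python
# Theoretical FP32 peak: CUs * 64 shaders per CU * 2 FLOPs per clock (FMA) * clock in GHz
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000  # GFLOPs -> TFLOPs

xsx = tflops(52, 1.825)  # ~12.15 TF
ps5 = tflops(36, 2.23)   # ~10.28 TF (assuming the GPU holds its max clock)
print(f"XSX {xsx:.2f} TF vs PS5 {ps5:.2f} TF -> {(xsx / ps5 - 1) * 100:.0f}% delta")
# prints roughly an 18% delta, i.e. slightly under the 20% figure used above
```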

That was my point: there were lots of discussions around the new consoles' launch about how the XSX would crush the PS5 in games, even claims that some games would run at 30fps on PS5 and 60 on XSX, like there was a huge power differential, or that a game would load in 30s on XSX and nearly instantly on PS5, etc. Even though the devs themselves were saying the two consoles were very close in power and features.

Now we can see it's a tie, so the excuses are the tools, the lead platform, etc. Devs must feel a bit insulted when they read that they did not put much effort into a version of their game because of... reasons.
 
That was my point: there were lots of discussions around the new consoles' launch about how the XSX would crush the PS5 in games, even claims that some games would run at 30fps on PS5 and 60 on XSX, like there was a huge power differential, or that a game would load in 30s on XSX and nearly instantly on PS5, etc. Even though the devs themselves were saying the two consoles were very close in power and features.

Now we can see it's a tie, so the excuses are the tools, the lead platform, etc. Devs must feel a bit insulted when they read that they did not put much effort into a version of their game because of... reasons.

Yeah, I don't buy the whole lead platform shtick. If you look at the prior generation, developers had no problem flexing the XB1X's prowess over the Pro, and PS4/Pro led that generation as the "lead platform." If anything, Series X seems more bottlenecked (software- or hardware-wise, or a combination of both) than its market position in the console space would suggest.
 
It’s slightly more AO and slightly different SSR. The frame rate is 5-10% worse than on XSX. It’s not exactly a good trade-off.
PS5 image quality was also much higher despite the same base resolution. I’m not sure there was ever an explanation for the disparity or whether it impacted performance on PS5, but hopefully this has already been patched out on Series X.
 
PS5 image quality was also much higher despite the same base resolution. I’m not sure there was ever an explanation for the disparity or whether it impacted performance on PS5, but hopefully this has already been patched out on Series X.
The sharpening is different between the versions, annoyingly. You can see something similar on PC where the default FSR2 is VERY oversharpened.
 
The sharpening is different between the versions, annoyingly. You can see something similar on PC where the default FSR2 is VERY oversharpened.

Sharpening can be so subjective and really boils down to the user's taste or preference when judging image quality. Some users love less, while some prefer more. The vast majority of ReShade mods for (insert game of choice) and their creators love to implement or configure heavier sharpening settings. I'm looking squarely at you, Elden Ring ReShade modders... :yep2:

Me personally, I simply stick to whatever defaults are set in the game and rely more on my personal TV settings.
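
As a side note on why that knob is so subjective: sharpening passes like the one FSR2 applies (RCAS) boil down to adding back scaled high-frequency detail. A minimal, purely illustrative unsharp-mask sketch (NumPy, made-up parameter values, not the actual FSR2 filter):

```python
import numpy as np

def unsharp_mask(image, strength=0.5):
    """sharpened = original + strength * (original - blurred); strength is the taste knob."""
    img = image.astype(np.float32)
    # cheap 3x3 box blur via wrap-around shifts (good enough for a demo)
    blurred = sum(np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return np.clip(img + strength * (img - blurred), 0, 255).astype(image.dtype)
```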
 


I think the nature of cross-platform development means the game isn't tailored to one specific piece of hardware. On the flip side, UE5 in general could be more optimized on Xbox given the work Microsoft and The Coalition put into it.
but at the end of the day the PS5 just has more usable ram for us.
That's really surprising (in such a UE5 game). So evidently there is much more happening behind the "XSX 13.5GB vs PS5 ~12.5GB available RAM" mainstream story. It's not the first time a developer has complained about the XSX memory architecture. The Touryst developers already said it was one of the reasons they couldn't do 8K on the XSX version.

Async compute being faster on PS5 is not a surprise, as async scheduling does run 22% faster on PS5.
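
I'm assuming that 22% figure simply tracks the GPU clock difference; a trivial sanity check using the publicly quoted clocks (and assuming the PS5 GPU is sitting at its boost cap):

```python
ps5_clock_mhz = 2233  # PS5 GPU boost cap
xsx_clock_mhz = 1825  # Series X fixed GPU clock
print(f"PS5 clock advantage: {(ps5_clock_mhz / xsx_clock_mhz - 1) * 100:.1f}%")  # ~22.4%
```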
 
That's really surprising (in such a UE5 game). So evidently there is much more happening behind the "XSX 13.5GB vs PS5 ~12.5GB available RAM" mainstream story. It's not the first time a developer has complained about the XSX memory architecture. The Touryst developers already said it was one of the reasons they couldn't do 8K on the XSX version.

Async compute being faster on PS5 is not a surprise, as async scheduling does run 22% faster on PS5.

Besides the obvious one being cost, were there any other reasons why Xbox engineers went with a mixed memory configuration? Were (or are) there any performance benefits to such a memory setup?
 
PS5 image quality was also much higher despite the same base resolution. I’m not sure there was ever an explanation for the disparity or whether it impacted performance on PS5, but hopefully this has already been patched out on Series X.
I haven’t been following along with the latest patches, but according to DF, performance is worse as a result of going to a higher resolution.

 
That was my point: there were lots of discussions around the new consoles' launch about how the XSX would crush the PS5 in games, even claims that some games would run at 30fps on PS5 and 60 on XSX, like there was a huge power differential, or that a game would load in 30s on XSX and nearly instantly on PS5, etc. Even though the devs themselves were saying the two consoles were very close in power and features.

Now we can see it's a tie, so the excuses are the tools, the lead platform, etc. Devs must feel a bit insulted when they read that they did not put much effort into a version of their game because of... reasons.
Well, we saw how the technology changed, however. Resolutions are no longer locked, and games now ship with performance modes, which removes any chance of those types of disparities.

Those predictions came during a time in which games had a single frame rate and locked resolutions, so there could be awkward edge cases in which those particular scenarios happened, but those days are not likely to ever return. I think the last locked-resolution game was A Plague Tale.

Lastly, details about the XSX came out very late; ROP information was not revealed until Hot Chips, and most people discussing it were under the assumption that the XSX wasn't anemic in fixed-function hardware either. Microsoft made a lot more cost-cutting compromises than most predictions expected.
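
For what it's worth, the reason dynamic resolution removes those old 30-vs-60 edge cases is that the render resolution, not the frame rate, is what gives. A toy sketch of the kind of feedback loop engines use (names and thresholds are made up for illustration):

```python
def adjust_render_scale(scale, frame_time_ms, target_ms=16.6,
                        min_scale=0.6, max_scale=1.0, step=0.05):
    """Nudge the resolution scale each frame to hold the frame-time target."""
    if frame_time_ms > target_ms * 1.05:    # running slow -> drop resolution
        scale -= step
    elif frame_time_ms < target_ms * 0.90:  # plenty of headroom -> raise it
        scale += step
    return min(max(scale, min_scale), max_scale)

# a GPU-bound 19 ms spike pulls a 1.0 scale down to 0.95 instead of dropping frames
print(adjust_render_scale(1.0, 19.0))
```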
 
Besides the obvious one being cost, were there any other reasons why Xbox engineers went with a mixed memory configuration? Were (or are) there any performance benefits to such a memory setup?

Yep, higher bandwidth. MS engineers at Hot Chips stated that they wanted bandwidth to feed their GPU, and they singled out ray tracing as being particularly BW hungry.

MS could have gone with a simpler 256-bit bus with 8 x 2GB modules - it would have been cheaper than what they went for - but they wanted the bandwidth to feed their GPU.

The downside is developers have to consider what they put in the "GPU optimal" memory, and in the slower (but still fast) "other" memory.

Sadly, RT is pretty weak on consoles and it's been used sparingly (if at all) for the most part.
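
For context, the bandwidth arithmetic behind that trade-off, using the publicly listed bus widths and the 14 Gbps data rate (theoretical peaks only):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s = bus width in bytes * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(320, 14))  # XSX "GPU optimal" 10 GB pool: 560 GB/s
print(peak_bandwidth_gbs(192, 14))  # XSX "standard" 6 GB pool:     336 GB/s
print(peak_bandwidth_gbs(256, 14))  # PS5 unified 16 GB:            448 GB/s
```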
 
Depends what you're measuring against. Given both consoles are largely the same in performance, we have two different approaches to the same end. One costs more - by how much? - and the other draws more power. Or does XBSX cost less/the same along with lower power draw? It's only a really bad choice, I think, if costs are much higher for the larger slab of silicon (20% larger than PS5's) and/or it makes size reduction harder.
It's also not definitively known what the causes of higher costs are. Fabbing anything in greater volumes tends to reduce costs, and Sony is probably leveraging existing large contracts with TSMC to get good prices. It would be interesting (and impossible) to see a cost breakdown all things being equal, i.e. no commercial leveraging and both companies producing the same number of PS5 and Series X consoles a month.
 
Besides the obvious one being cost, were there any other reasons why Xbox engineers went with a mixed memory configuration? Were (or are) there any performance benefits to such a memory setup?
If you revisit the DF interview with Xbox system architect Andrew Goossen, they clearly wanted the bandwidth to be as high as possible, but the memory architecture implemented was a necessary compromise because they were running into signal integrity issues:

Digital Foundry: "It sounds like a somewhat complex situation, especially when Microsoft itself has already delivered a more traditional, wider memory interface in Xbox One X - but the notion of working with much faster GDDR6 memory presented some challenges."

Goossen: "When we talked to the system team there were a lot of issues around the complexity of signal integrity and what-not. As you know, with the Xbox One X, we went with the 384[-bit interface] but at these incredible speeds - 14gbps with the GDDR6 - we've pushed as hard as we could and we felt that 320 was a good compromise in terms of achieving as high performance as we could while at the same time building the system that would actually work and we could actually ship."

It sounds like the split memory approach was in part a way to manage this wider signal integrity challenge.
 
Yeah, I don't buy the whole lead platform shtick. If you look at the prior generation, developers had no problem flexing the XB1X's prowess over the Pro, and PS4/Pro led that generation as the "lead platform." If anything, Series X seems more bottlenecked (software- or hardware-wise, or a combination of both) than its market position in the console space would suggest.
Well, it’s a situation where the X1X had significantly more bandwidth, compute, and memory capacity to work with. No additional optimization was needed to drive additional gains.

XSX has both compute and bandwidth advantages, but it could be bottlenecked by the amount of memory capacity available, depending on how the developers want to leverage the space available.

But then I would argue this is the difference between optimizing for one platform over the other: games on PS5 are optimized to use the full space available, so more buffers and more loading for PS5. Games optimized for XSX would ideally stick to under 10GB; I'm not necessarily sure if that means longer shaders to reduce the number of buffers, or SFS-style techniques to ensure you're only loading the textures you really need.
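
As a toy illustration of that budgeting problem (pool sizes from the public specs; the allocation names and the greedy placement are entirely hypothetical), an engine targeting XSX effectively has to decide what lands in the fast pool and what spills into the slower remainder:

```python
FAST_POOL_GB = 10.0  # "GPU optimal" memory at 560 GB/s
SLOW_POOL_GB = 3.5   # what's left of the 6 GB "standard" pool after the OS reservation

def assign_pools(allocations):
    """allocations: (name, size_gb) pairs, listed most bandwidth-hungry first."""
    fast, slow, fast_used, slow_used = [], [], 0.0, 0.0
    for name, size in allocations:
        if fast_used + size <= FAST_POOL_GB:
            fast.append(name); fast_used += size
        elif slow_used + size <= SLOW_POOL_GB:
            slow.append(name); slow_used += size
        else:
            raise MemoryError(f"{name} does not fit in either pool")
    return fast, slow

print(assign_pools([("render targets", 2.0), ("textures", 7.5),
                    ("geometry", 1.5), ("audio/streaming buffers", 2.0)]))
```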

Their market position determines which way you build the game. If XSX can achieve parity, why push further? The optimization could require something else entirely and the costs could be massive.
 
What's the limiting factor on 'signal noise issues' and how do other setups with higher BW deal with it?
I think this could be in relation to the integrity of mixed calls from the CPU and GPU to various parts of the memory. The CPU would typically be accessing the slower pool, the GPU the faster one. A symmetrical pool of memory would likely be much easier to run at higher speeds.
 
But the quote says it was in relation to unified memory.

"When we talked to the system team there were a lot of issues around the complexity of signal integrity and what-not. As you know, with the Xbox One X, we went with the 384[-bit interface] but at these incredible speeds - 14gbps with the GDDR6 - we've pushed as hard as we could and we felt that 320 was a good compromise in terms of achieving as high performance as we could while at the same time building the system that would actually work"

14gbps GDDR6 introduced signal issues that warranted a more complex memory solution. PS5's GDDR6 is also 14gbps - is the 256-bit bus what enables this? So above 320 bits, GDDR6 @ 14gbps is "unstable"? What do GPUs with higher BW than XBSX do about this?
 