Digital Foundry Article Technical Discussion [2021]

Good thing they disclaimered it.

It only seems to be going off their initial videos and not any of the follow-ups, where some games were patched on both systems to near parity. For instance, Control was entirely fixed up on Series X later, as DF mentioned on podcasts.

Some points from https://forum.xboxera.com/t/games-analysis-ot-time-to-argue-about-pixels-and-frames/5015/498

It’s all very subjective, which is fine, but the logic is inconsistent.

For instance, The Medium is 'better on PS5' here because they dropped RT reflections, RT AO, and shadow quality in order to raise resolution.

OK, so the guy thinks resolution is more important than RT, a valid opinion…

… but hold on, right above that entry, we have Little Nightmares 2 being ‘better on PS5’ because it has RT reflections even though it is lower resolution. It’s the exact same difference in reverse, but in one instance it’s better to have a higher res and who cares about RT, in the other who cares about resolution, it’s the RT that matters.

Perhaps he justified this by giving PS5 a slight advantage in performance for LN2, but I just rewatched the video and we are talking about the odd single dropped frame, which DF says is not noticeable in gameplay, and performance was basically 99.9999% perfect.
 
Avengers slightly higher resolution on Xbox? Wasn't that the game that used CBR on PS5, so pixel counts were higher there (4K almost always) while it was 1800p-ish on Xbox? So the pixel counts were higher on PS5, but the image quality was much better on Xbox because the effects buffers were all based on the internal CBR resolution, making the PS5 version look much lower resolution.
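As rough back-of-the-envelope arithmetic (assuming a classic 2:1 checkerboard that shades half the output pixels each frame, and taking the resolutions above at face value):

```python
# Illustrative arithmetic only; assumes standard 2:1 checkerboard and the
# resolutions quoted above.
cbr_2160p_shaded = (3840 * 2160) // 2   # pixels actually shaded per frame on PS5 (CBR 4K)
native_1800p = 3200 * 1800              # pixels shaded per frame on Xbox (native 1800p)
print(cbr_2160p_shaded, native_1800p)   # 4147200 vs 5760000
```

So even though the counted output resolution is higher on PS5, the internally shaded pixel count (and any effects buffers tied to it) would actually be lower than Xbox's native 1800p, which lines up with the image quality observation.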

Also, this chart misses the most important leap this generation - Load times.
 
It only seems to be going off their initial videos and not any of the follow-ups, where some games were patched on both systems to near parity. For instance, Control was entirely fixed up on Series X later, as DF mentioned on podcasts.
The first response (in French) to the tweet seems to ask if updates were considered. He seems to answer (I'm paraphrasing and probably mistranslating), "Only if DF did a comparison."

Wasn’t LN2 missing RT on Xbox at launch? I’m certain DF followed up with the patch, but I can’t remember if it merited another comparison video or just a mention in a DF Direct.

It would be nice to know the difference between “slightly better” and “better,” especially when vsync is so often involved. You’d think something like The Medium would show up on both “better on” columns.
 
Good thing they disclaimered it.

It only seems to be going off their initial videos and not any of the follow-ups, where some games were patched on both systems to near parity. For instance, Control was entirely fixed up on Series X later, as DF mentioned on podcasts.

Some points from https://forum.xboxera.com/t/games-analysis-ot-time-to-argue-about-pixels-and-frames/5015/498
Yeah, there's some inconsistency here. For example, Tony Hawk 2 runs smoother on PS5. Still a nice summary though.
 
Decent summary, although it just confirms what we already knew - that they're very close to each other and will be trading blows over the course of their lives.
 
"Better" and "best" are such useless yet common words! Disclaimers aside, the visual representation makes it very clear PS5 is 'better', in no qualified way.

The infographic shouldn't have been made - it's just consolewars fodder.
 
"Better" and "best" are such useless yet common words! Disclaimers aside, the visual representation makes it very clear PS5 is 'better', in no qualified way.

The infographic shouldn't have been made - it's just consolewars fodder.

Tbh it had the opposite impact on me if the intent was to make me all frothy about a particular console. It's just a sea of 'same' or 'slightly better'. :)
 
On a technical discussion, why do we think the clearly on-paper more potent XBSX isn't showing a consistent, even if small, advantage? Are devs just not using that extra bit of GPU power, or are PS5's advantages actually more useful in real-world games? Is XBSX's split RAM topology holding back the GPU?

For scalable engines not particularly optimised to a platform, I'd have thought XBSX would have a small advantage in the majority of titles. Or... does it, and this subjective comparison of differences is just weighting the PS5 differences as relatively better? That is, neither is the best, and both offer different enough experiences (framerate versus resolution, etc.) that they are just different, not better, with what we're seeing from DF being 'better for me personally'. In which case, is there a clear distinction in the advantages of the machines, such as XBSX being better for resolution and image clarity, and PS5 being more consistent in framerates, or somesuch?
 
PS5 has some advantages over XSX and vice versa, so if an engine is designed towards one or the other's strengths, it'll show.
Add "tools": even if it has become a joke, it's part of the equation, as these are always evolving, along with dev time, resources, etc.
In the end these two consoles are pretty close, and I find it very interesting to see how each works from game to game, and how it will evolve.
Maybe the most interesting is the XSS, which from one game to another disappoints and then surprises.
An interesting gen indeed for comparisons, much more than the last gen!
 
On a technical discussion, why do we think the clearly on-paper more potent XBSX isn't showing a consistent, even if small, advantage? Are devs just not using that extra bit of GPU power, or are PS5's advantages actually more useful in real-world games? Is XBSX's split RAM topology holding back the GPU?
I think what we've been seeing so far in games is that PS5's advantages (fill rate and the advantages of higher clocked narrower design) have a more immediate advantage in the current slate of games. Not that it's winning every battle, but it's closer than the compute differential would indicate. I would guess that as time goes by and games become more compute heavy, Xbox's advantages will start to show. But I wouldn't say I'm great at predicting the future.
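To put rough numbers on that (using the commonly cited paper specs, with both GPUs usually listed at 64 ROPs, so treat this as ballpark arithmetic rather than measured data):

```python
# Ballpark figures from public spec sheets; real-world throughput will differ.
ps5_clk, xsx_clk = 2.23, 1.825   # GHz (PS5 max boost vs XSX fixed clock)
ps5_cu, xsx_cu = 36, 52
rops = 64                        # commonly cited for both consoles

def tflops(cu, ghz):
    return cu * 64 * 2 * ghz / 1000   # FP32 TF (64 lanes per CU, 2 ops per clock)

def fill_rate(rops, ghz):
    return rops * ghz                 # Gpixels/s

print(f"compute: PS5 {tflops(ps5_cu, ps5_clk):.2f} TF vs XSX {tflops(xsx_cu, xsx_clk):.2f} TF")
print(f"fill:    PS5 {fill_rate(rops, ps5_clk):.1f} Gpix/s vs XSX {fill_rate(rops, xsx_clk):.1f} Gpix/s")
```

The compute column favours XSX by roughly 18%, while the fill/front-end column favours PS5 by roughly 22%, which fits the 'wins some, loses some' pattern in the chart.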
 
Even the framerate portion is nuanced for consistency, as in: how do you take VRR (Variable Refresh Rate) into account?
 
On a technical discussion, why do we think the clearly on-paper more potent XBSX isn't showing a consistent, even if small, advantage?

Is it clearly more potent on paper though?

The only area where it's clearly more potent is memory bandwidth, but that's only for 10GB of the total 16GB, and if a game needs more than 10GB of VRAM it may cause issues with latency.
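For reference, the published memory figures (the blended number is naive arithmetic, not a measurement, since real contention depends on how the game spreads its data):

```python
# Published memory configurations; the capacity-weighted blend is purely illustrative.
xsx_fast_gb, xsx_fast_bw = 10, 560   # "GPU optimal" pool: GB and GB/s
xsx_slow_gb, xsx_slow_bw = 6, 336    # remaining pool
ps5_gb, ps5_bw = 16, 448             # unified pool

blend = (xsx_fast_gb * xsx_fast_bw + xsx_slow_gb * xsx_slow_bw) / (xsx_fast_gb + xsx_slow_gb)
print(f"XSX capacity-weighted blend: {blend:.0f} GB/s vs PS5 {ps5_bw} GB/s")   # 476 vs 448
```

A game that stays inside the 10GB pool sees a clear bandwidth advantage; one that regularly spills into the slower 6GB sees that advantage shrink.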

The CPU clock advantage is nothing, and the GPU isn't faster than PS5's across the board, as there are areas where PS5's GPU is faster due to the clock speed advantage.

I just think it's a case of:

1. Current game engines not unanimously preferring a wider GPU
2. RDNA2 CU scaling not being as efficient as MS thought it would be (makes sense looking at historic AMD PC GPUs)
3. Sony's gamble of 'A rising tide lifts all boats' seems to be paying off

It'll be interesting to see where the performance gap is in 2 years time.
 
On a technical discussion, why do we think the clearly on-paper more potent XBSX isn't showing a consistent, even if small, advantage? Are devs just not using that extra bit of GPU power, or are PS5's advantages actually more useful in real-world games? Is XBSX's split RAM topology holding back the GPU?

For scalable engines not particularly optimised to a platform, I'd have thought XBSX would have a small advantage in the majority of titles. Or... does it, and this subjective comparison of differences is just weighting the PS5 differences as relatively better? That is, neither is the best, and both offer different enough experiences (framerate versus resolution, etc.) that they are just different, not better, with what we're seeing from DF being 'better for me personally'. In which case, is there a clear distinction in the advantages of the machines, such as XBSX being better for resolution and image clarity, and PS5 being more consistent in framerates, or somesuch?
PS5 higher clocks mean less dev effort is required to keep the GPU occupied? Better API in GNM?
 
On a technical discussion, why do we think the clearly on-paper more potent XBSX isn't showing a consistent, even if small, advantage? Are devs just not using that extra bit of GPU power, or are PS5's advantages actually more useful in real-world games? Is XBSX's split RAM topology holding back the GPU?

For scalable engines not particularly optimised to a platform, I'd have thought XBSX would have a small advantage in the majority of titles. Or... does it, and this subjective comparison of differences is just weighting the PS5 differences as relatively better? That is, neither is the best, and both offer different enough experiences (framerate versus resolution, etc.) that they are just different, not better, with what we're seeing from DF being 'better for me personally'. In which case, is there a clear distinction in the advantages of the machines, such as XBSX being better for resolution and image clarity, and PS5 being more consistent in framerates, or somesuch?

IMO, I think it comes down primarily to PS5 likely being the lead development platform for most developers now. That, of course, doesn't explain everything, as something like Tales of Arise performs better on both XBS-S and XBS-X than it does on PS5 despite being made by a Japanese dev studio.

However, that, combined with the better front-end (ROP/RBE) performance, favors games that started development during the previous generation. I see things potentially playing out like this:

Performs relatively better on PS5:
  • Simpler titles where ROP/RBE performance is more important than, say, compute performance.
    • Note: this doesn't mean the game won't look absolutely gorgeous.
  • Titles that really leverage the storage subsystem. We really haven't seen anything fully exploit this yet.
    • R&C: Rift Apart took advantage of it, but didn't really exploit it, as even a slow SSD is more than enough for Rift Apart. Basically, it would likely perform almost exactly the same on XBS consoles.
Performs relatively better on XBS-X:
  • More complex titles that find creative ways to really use all that compute power.
  • Titles that are primarily bandwidth bound.
In either case, I expect any advantage to be relatively small, with the following caveat: this assumes that both console versions of a game receive relatively equal dev time and effort.

So basically, I expect that in the majority of cases where one or the other shows a relative advantage that it'll continue to be a fairly small one.

Regards,
SB
 
I think the PS5 is performing as it should: they extract the performance (10.2TF), while the XSX, although technically more capable GPU-wise (12+TF), isn't showing it for whatever reason; we don't really know why. It could be GDDR6 allocation, but that's unlikely with today's games.
What I do see is that the XSX is relatively 'slow' clocked for an RDNA2 part in that performance range. Look at the RDNA2 GPUs: the 6600 XT and up are all clocked high, higher than the PS5, even as they scale up substantially in CUs (>RX 6800).

Relative to dGPU performance, the PS5 is close to the 6600 XT's 10.2TF performance figures/benchmarks (again, ballpark). That would make the XSX somewhat of an underperformer, as it should be more akin to a 6700/XT, which is somewhat more capable than the PS5's relative performance.
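For what it's worth, the boost-clock paper figures (ballpark only; console and desktop parts aren't directly comparable given power, memory and driver differences):

```python
# Rough FP32 figures from public boost-clock specs; treat as ballpark only.
def tflops(shaders, ghz):
    return shaders * 2 * ghz / 1000

parts = {
    "PS5":        (2304, 2.230),
    "XSX":        (3328, 1.825),
    "RX 6600 XT": (2048, 2.589),
    "RX 6700 XT": (2560, 2.581),
}
for name, (shaders, ghz) in parts.items():
    print(f"{name:10s} ~{tflops(shaders, ghz):5.2f} TF @ {ghz:.3f} GHz")
```

Which is roughly the point: PS5 lands near the 6600 XT's paper figure, while the XSX sits a little under the 6700 XT's, despite being clocked well below what desktop RDNA2 typically runs at.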

It's likely there's some truth to this comment:

PS5 higher clocks mean less dev effort is required to keep the GPU occupied? Better API in GNM?

Anyway, with that said, they're close enough that it won't really matter right now. The XSX might pull ahead when we drift away from cross-gen and GPU saturation matures.
For everyone and their dog it doesn't matter at all; they're just too close.

the GPU isn't faster than PS5's across the board, as there are areas where PS5's GPU is faster due to the clock speed advantage.

It's not faster, it's more capable, on paper at least. Narrow vs wide: technically the XSX should outperform it if both GPUs are fully utilized to their strengths. There could be different factors at play, be it memory, tools/GNM, focus of devs on platform, or it being easier to extract performance out of narrow vs wide.
Could it even be that RDNA2, as mentioned before, likes to clock high? Even a 6600 XT clocks in at what, 2.6GHz?
 
Some of the same results are due to capping games at 30 or 60fps, and it's really hard to see a 20-25% GPU advantage when there is dynamic res (very popular this generation, and a good direction). Also, Sony could still have a more mature/optimized API.
 
On a technical discussion, why do we think the clearly on-paper more potent XBSX isn't showing a consistent, even if small, advantage? Are devs just not using that extra bit of GPU power, or are PS5's advantages actually more useful in real-world games? Is XBSX's split RAM topology holding back the GPU?
Typically compute shaders should largely favour the one that can put out the most TF, provided the bandwidth is available for it to use. To obtain higher and higher FPS, you still need to get past the frame setup, which is fixed for the scene resolution and geometry, before you start complicating things with compute shaders etc. in post. PS5 clearly has an advantage in setup, in both the ROP design and clock speed, and XSX needs to catch up in post. In scenarios where 30fps is the target, this should be a clear advantage for XSX, but in 60/120fps scenarios it's very difficult for it to pull too far away. Software rasterization would largely be in favour of XSX though; we see this in Doom Eternal. If I had two full video captures, I could showcase the 'average'.
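A toy frame-budget version of that argument (the 4ms setup cost is a made-up placeholder and the ~18% figure is just the paper compute gap, so this is illustrative only):

```python
# Illustrative only: a fixed per-frame setup cost (hypothetical) plus a paper
# compute advantage that only applies to the non-setup portion of the frame.
def recoverable_ms(fps, setup_ms=4.0, compute_advantage=0.18):
    budget = 1000.0 / fps
    compute = budget - setup_ms
    # Time a wider GPU could save on the compute-heavy portion, assuming
    # perfect scaling (which it won't get in practice).
    return budget, compute * (1 - 1 / (1 + compute_advantage))

for fps in (30, 60, 120):
    budget, saved = recoverable_ms(fps)
    print(f"{fps:3d} fps: {budget:5.2f} ms budget, ~{saved:.2f} ms recoverable ({saved / budget:.0%} of the frame)")
```

The higher the target framerate, the smaller the slice of the frame that extra compute can actually buy back.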

The image above is indeed console wars fodder, but it's useful enough to put to rest some of our earlier predictions (at least mine). I was fairly adamant that XSX would rule supreme (mind you, this was before knowing about the ROP changes on XSX; I would have assumed ROP count would have scaled with TF, but it didn't). So even if it's inaccurate, or biased, etc., it shows that PS5 and XSX traded enough blows that it puts to rest the most important argument, which is that XSX is superior; it is rather 'competitive', not dominant.

As for dynamic resolution, it's very difficult to showcase; as much as people want to believe they can count every frame, they can only pixel count the frames with aliasing. The words average, min and max are really just a discussion about sampling, and they are sampling what they feel is likely to be the lows and highs. Actually knowing the 'average' is very unlikely.

There are 60 frames in a second, and I have evidence of resolution morphing that quickly within a single second. It's quite interesting to watch how dynamic resolution works in various games; it's quite smooth and hard to notice.
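A quick synthetic illustration of the sampling point (made-up numbers, not captured data): if only a small fraction of frames have countable aliased edges, the sampled min/max/average is just an estimate of the real thing.

```python
# Synthetic dynamic-resolution trace; none of these numbers come from a real game.
import numpy as np

rng = np.random.default_rng(0)
frames = 60 * 30                                       # 30 seconds at 60 fps
true_res = np.clip(1800 + np.cumsum(rng.normal(0, 12, frames)), 1440, 2160)

countable = rng.random(frames) < 0.02                  # ~2% of frames show usable aliasing
sampled = true_res[countable]

print(f"true mean:    {true_res.mean():.0f}p over {frames} frames")
print(f"sampled mean: {sampled.mean():.0f}p over {countable.sum()} frames")
print(f"sampled min/max: {sampled.min():.0f}p / {sampled.max():.0f}p vs "
      f"true {true_res.min():.0f}p / {true_res.max():.0f}p")
```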

Here is some rough work I have going:
  • Red is XSX
  • Blue is PS5
  • Higher is better - the overall position of the line graph indicates resolution indirectly. There is no reconstruction TAAU happening here. I wish I had a native 4K here to showcase the differences.
  • The shape of the waveform indicates a difference in rendering.

Note: I chose this because I needed to see a difference between the graphs and I wanted to see how dynamic resolution moved. I knew in advance that XSX had higher dynamic resolution; this wasn't meant to be console fodder, just verification that my work and the confirmed results were aligned. While I am curious to see how other games perform in my tests, it's hard to get the footage to do it. Future work is to extrapolate these graphs into resolution: get ML to learn how to read it and automagic pixel counting. Lots of testing around reconstruction methods will need to happen, but yeah, it should be enough to provide an equivalency of optical resolution.
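In the same spirit, a very crude sketch of what 'automagic pixel counting' could look like (my own rough heuristic, not the method behind the graphs above, and the filenames are placeholders). It just looks for where the horizontal frequency spectrum of a frame runs out of energy, which loosely tracks the internal render width when no reconstruction is in play:

```python
# Crude heuristic, illustrative only: a frame upscaled from a lower internal
# resolution tends to carry little energy in the highest horizontal frequencies.
import numpy as np
from PIL import Image

def implied_width(path, energy_cutoff=0.999):
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    row_power = (np.abs(np.fft.rfft(gray, axis=1)) ** 2).mean(axis=0)
    cumulative = np.cumsum(row_power) / row_power.sum()
    cutoff_bin = int(np.searchsorted(cumulative, energy_cutoff))
    return 2 * cutoff_bin   # rough implied horizontal sample count

for frame in ("xsx_frame_0001.png", "ps5_frame_0001.png"):   # placeholder filenames
    print(frame, "->", implied_width(frame))
```

Reconstruction/TAAU would break a heuristic like this immediately, which is why all that testing around reconstruction methods matters.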

 
Narrow vs wide: technically the XSX should outperform it if both GPUs are fully utilized to their strengths.

Surely if a game tailors to PS5's strengths it wouldn't still perform better on XSX?

There could be different factors at play, be it memory, tools/GNM, focus of devs on platform, or it being easier to extract performance out of narrow vs wide.

It may just be that, in typical AMD fashion, CU scaling is poor.

Historically, ATI/AMD GPUs have always scaled poorly with CU count (HD 5850 vs 5870, HD 6950 vs 6970, etc.).

If you clock Vega 56 and Vega 64 at the same clocks they're within 1% of each other.

Could it even be that RDNA2, as mentioned before, likes to clock high? Even a 6600 XT clocks in at what, 2.6GHz?

That is a very big possibility as it's clearly an architecture that's built for speed.
 
Historically, ATI/AMD GPUs have always scaled poorly with CU count (HD 5850 vs 5870, HD 6950 vs 6970, etc.).

If you clock Vega 56 and Vega 64 at the same clocks they're within 1% of each other.
That's no longer the case in current games on the RDNA2 arch.
 