Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

Now something actually on topic...

The Ascent with power draw measurements. The PS5 hits 220-230 watts, the Series X around 190 watts, and the Series S around 80-90 watts.

I am surprised how the power consumption seems to hover at ~195W on the rooftop on XSX. PS5 has a more stable framerate there, but consumes about 20-30W more. Overall the game seems to run very well on both consoles.

But the funniest part is the PC comparison with RT: consumption of ~520W at ~50fps. :runaway:
 
I am surprised how the power consumption seems to hover at ~195W on the rooftop on XSX. PS5 has a more stable framerate there, but consumes about 20-30W more. Overall the game seems to run very well on both consoles.

But the funniest part is the PC comparison with RT: consumption of ~520W at ~50fps. :runaway:

Well yeah, with a 3090 and RT that's not that surprising, especially when paired with a 12900K - basically the most inefficient CPU on the market right now (admittedly, when not even reaching 60fps it's probably not being taxed). More interesting for cross-platform efficiency comparisons would be a more midrange PC, say a 12400 with a 3060 Ti, running at the same quality settings.

That said, not even being able to lock 50fps with RT and DLSS Quality on a 3090 is... not great. I don't know how The Ascent scales outside of RT, though, so I'm not sure a 3060 Ti could even match the SX/PS5 in this game. The upscaling they're showing looks pretty good as well - not quite as good as DLSS Quality, but certainly comparable.
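The efficiency angle raised above boils down to frames per watt. A quick back-of-the-envelope using the approximate figures quoted in the thread (assumed steady-state numbers, not measurements of mine):

```python
# Rough perf-per-watt comparison using the ballpark figures quoted in the thread.
# fps values assume both consoles hold their 60fps target; all numbers approximate.
platforms = {
    "PS5":      {"fps": 60, "watts": 225},   # ~220-230 W, more stable framerate
    "Series X": {"fps": 60, "watts": 195},   # ~195 W on the rooftop scene
    "PC 3090":  {"fps": 50, "watts": 520},   # ~520 W at ~50 fps with RT
}

for name, p in platforms.items():
    print(f"{name}: {p['fps'] / p['watts']:.3f} fps/W")
```

Even granting the consoles their framerate target, the gap is roughly 3x in frames per watt against the 3090 rig.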
 
I am surprised how the power consumption seems to hover at ~195W on the rooftop on XSX. PS5 has a more stable framerate there, but consumes about 20-30W more. Overall the game seems to run very well on both consoles.

But the funniest part is the PC comparison with RT: consumption of ~520W at ~50fps. :runaway:

Now compare power consumption to performance ratio with the Steam Deck
 
VGTech video about LEGO Star Wars: the PS5 switches between 1080p and 1440p while the XSX is at 2040p in one scene.
As an example in the 60fps Mode, at 1:42 the Series X renders the scene at approximately 2040p and the PS5 switches between 1440p and 1080p in this scene.
And we finally have the real minimum resolutions, as it also drops to 1080p on XSX (albeit with a performance advantage there).
In parts of the demanding Droid Factory level the resolution on both the PS5 and Series X can drop down to 1920x1080 in the 60fps Mode, though this happens less often on Xbox Series X and the Series X has a performance advantage in this area over the PS5.

As expected, there seems to be a problem with DRS on PS5 in this game - very probably a bug. One scene at 1080p vs 2040p (~4x more pixels on XSX), while in another scene both consoles run at 1080p. That's clearly not normal.
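The ~4x pixel claim can be sanity-checked from the resolutions alone (assuming both are 16:9 frames):

```python
# Pixel-count ratio between the two DRS extremes quoted above
# (16:9 aspect assumed; "2040p" -> 3627x2040, "1080p" -> 1920x1080).
def pixels(height, aspect=16 / 9):
    return round(height * aspect) * height

xsx = pixels(2040)   # ~7.40 million pixels
ps5 = pixels(1080)   # ~2.07 million pixels
ratio = xsx / ps5
print(f"{ratio:.2f}x")  # prints roughly 3.57x, i.e. the "~4x" in the post
```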

 
One scene at 1080p vs 2040p (~4x more pixels on XSX) while in another scene both consoles run it at 1080p.

Do keep in mind that they also mention that in the area where it hits 1080p on the XBS-X as well, the PS5 suffers more performance issues than the XBS-X.

So it isn't that both run at 1080p. It's that both can hit 1080p, but the PS5 will be at 1080p far more often than the XBS-X.

The title probably isn't as well optimized for the PS5 as the XBS-X. Likely the XBS-X was the lead platform. Usually, the PS5 is the lead platform which gives it some advantages, but that's obviously not the case with this title.

Regards,
SB
 
Epic told us in the interview we had that the Matrix Awakens demo is not at all maxing memory streaming from disc, by a long shot, and they said quite pointedly that it is CPU and GPU limitations for framerate. The whole point of Nanite, virtual texturing and VSM is to be low impact on memory. Implying that it is otherwise on PC is easily provably wrong.
 
Isn't NX specifically talking about asset decompression being handled by the additional logic blocks/coprocessors found in the consoles' SSD controller or I/O complex? Not necessarily raw SSD speeds/bandwidth, but freeing up CPU resources for things other than asset decompression. In theory, by doing so, the consoles could have a CPU advantage in like-for-like scenarios where console and PC CPUs have similar clock speeds and core counts (maybe performing better in single-threaded cases such as The Matrix Awakens demo).
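The CPU cost being offloaded is easy to get a feel for in software. A minimal sketch, using zlib on synthetic data as a stand-in for the consoles' hardware codecs (Kraken, BCPack), just to show that decompression is real CPU work a dedicated block would absorb:

```python
import time
import zlib

# Sketch: measure CPU time spent in software decompression - the work the
# consoles' dedicated decompression hardware takes off the CPU entirely.
# Synthetic data; real game assets and codecs behave differently.
payload = bytes(range(256)) * 40_000          # ~10 MB of compressible data
blob = zlib.compress(payload, level=6)

t0 = time.perf_counter()
out = zlib.decompress(blob)
elapsed = time.perf_counter() - t0
print(f"decompressed {len(out) / 1e6:.1f} MB in {elapsed * 1000:.1f} ms on the CPU")
```

Scale that to the GB/s streaming rates these SSDs can sustain and the CPU time saved per frame stops being negligible.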
 
Epic told us in the interview we had that the Matrix Awakens demo is not at all maxing memory streaming from disc, by a long shot, and they said quite pointedly that it is CPU and GPU limitations for framerate. The whole point of Nanite, virtual texturing and VSM is to be low impact on memory. Implying that it is otherwise on PC is easily provably wrong.
If you get the chance, you should ask Epic why PCs with more powerful hardware than the consoles are offering worse performance. Also, if you happen to get your hands on a 5800X3D, please test the demo again. I'm quite keen to see if the 3D V-Cache yields additional performance.
 
If you get the chance, you should ask Epic why PCs with more powerful hardware than the consoles are offering worse performance. Also, if you happen to get your hands on a 5800X3D, please test the demo again. I'm quite keen to see if the 3D V-Cache yields additional performance.
People are not running the City Sample on PC with settings as found on console, which will have an impact on performance, so we do not know how it scales on different HW at those console settings. Basically, we cannot say better HW is offering worse performance, as that test has not been done yet. Getting the exact settings is important to making such a statement. There are other differences below PC High beyond what is shown in the DF video: particle spawn, for example. I do not have a 5800X3D, but we will at DF soon enough.
 
If you get the chance, you should ask Epic why PCs with more powerful hardware than the consoles are offering worse performance.

My RTX 3090 setup(s) do an admirable job, handily outperforming my PS5 in raw framerate. My son's PC, which has good specs (slightly above PS5/XSX), runs the demo slightly worse than my PS5. The biggest issues with the PC running the TMA demo are the awful stutter (shader compilation), poor frame-times, and the hard framerate drops that happen often (such as going from 60ish to the mid 30s).
 
NXG at it again. Thx Alex for correcting. A similarly specced PC runs the demo just as well when considering settings and resolution. The stuttering is the usual shader compilation problem, which disappears once compiled by the engine.
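The "disappears once compiled" behavior is just first-use caching. A toy model of the idea (names and timings illustrative, not the engine's actual PSO cache):

```python
import time

# Toy model of shader/PSO compilation stutter: the first use of a pipeline
# pays the compile cost (a visible hitch), after which the cached result is
# reused for free. Illustrative only - not UE's real caching machinery.
pso_cache = {}
compiles = 0

def get_pipeline(shader_key):
    global compiles
    if shader_key not in pso_cache:
        compiles += 1
        time.sleep(0.01)              # stand-in for a multi-ms driver compile
        pso_cache[shader_key] = f"compiled({shader_key})"
    return pso_cache[shader_key]      # cache hit: effectively free

# The first frame that encounters new materials hitches; later frames do not.
for frame in range(3):
    for shader in ("opaque", "glass", "particle"):
        get_pipeline(shader)

print(f"{compiles} compiles for 9 pipeline lookups")
```

Consoles sidestep this because shaders can ship precompiled for fixed hardware; on PC the compile has to happen at least once per driver/GPU combination.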
 
People are not running the City Sample on PC with settings as found on console.

Surely some reduced virtual shadow map sizes and other small tweaks won't cause such a drastic difference? The settings that impact performance the most do seemingly appear to be set to 3 on consoles (which is Epic).

For example, compare AO between 3 and 2 for Lumen GI. Here's the Series S compared to the High preset (everything turned to 2 in the config file):

[Screenshot comparison: Series S vs PC High preset (config 2), AO behind the benches]


If you look at the AO behind these benches, it's a lot better on Series S compared to PC's High setting (config file 2), and indeed if you turn it back to Epic (3), it looks exactly the same. It does indicate the consoles, even the Series S, are running pretty high settings for Lumen GI, which is the most performance-intensive option. Reflections on Series X also look more stable compared to PC's High setting (2), indicating Epic settings.

Still, like you've said in your video, there are some other differences like roughness biases etc. While those could lead to a nice performance increase, I doubt these optimizations are the reason the demo runs the way it does on PC compared to console.

I do agree with you that we need the exact console settings before comparing it to PC for exact performance comparisons though.
 
According to the UE documentation, setting 3 is High, 4 is Epic, and 5 is Cinematic ^^ - and if you want to find the exact settings the console uses from a visual standpoint, I recommend doing the comps.

Hmm, so you are indeed infected with the 3=high virus too, I see :p Do not worry, I will cure you! :D

Is that UE documentation up to date? If you run it at Epic scalability settings in the editor, it looks and performs exactly the same as config file 3 does (and the compiled version also defaults to 3, just like the editor defaults to Epic). If you set it to Cinematic, it performs much worse, both in the editor at Cinematic and in the compiled version running at config file 4.

Also, if Epic is 4, then 2 must be Medium, which turns Lumen GI completely off (according to Epic's documentation as well). However, in the sample, Lumen GI turns off at 1 and not at 2.

I am also pretty sure there is a performance and visual quality difference between 1 and 0, but I still have to test that!
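For reference, the knobs being argued about here are UE's sg.* scalability group cvars, which the City Sample reads from a config file. A hypothetical override sketch (cvar names from UE's scalability system; the number-to-tier mapping is exactly what's in dispute above, so the comments hedge on it):

```ini
; Hypothetical DefaultScalability-style override for the City Sample.
; The values 0..4 nominally map to Low/Medium/High/Epic/Cinematic per the
; UE docs, though the posts above disagree on where the sample's "3" lands.
[ScalabilitySettings]
sg.GlobalIlluminationQuality=3   ; Lumen GI - the most expensive option
sg.ReflectionQuality=3           ; Lumen reflections
sg.ShadowQuality=3               ; virtual shadow maps
sg.PostProcessQuality=3
```

Dropping sg.GlobalIlluminationQuality by one tier is the comparison being made in the bench screenshots above.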
 
I hate these settings. But it should obviously go to 11.

Everything should be able to be dialled up to 11!
You are very, very wrong here, tch tch. Do your analysis!

Obviously, settings should go to 12! 11 sounds just... wrong!

:mrgreen:
 
My RTX 3090 setup(s) do an admirable job, handily outperforming my PS5 in raw framerate. My son's PC, which has good specs (slightly above PS5/XSX), runs the demo slightly worse than my PS5. The biggest issues with the PC running the TMA demo are the awful stutter (shader compilation), poor frame-times, and the hard framerate drops that happen often (such as going from 60ish to the mid 30s).

A 3090 is 3x the power of the consoles, yet it fails to yield 3x the performance. I've tested the demo on my PC with a 3080 and 5900X and it was awful. When I say that PC hardware is performing worse than consoles, I'm talking about expected vs actual performance. The consoles have mobile Zen 2 CPUs and modified RX 6600s. If we use that as a baseline, then it is normal to expect a significant increase in performance when using hardware that is significantly more powerful. The only time I get acceptable performance is when all the settings are at 2; then it yields close to 2x the performance with significantly worse visuals. Finally, keep in mind that Nvidia's RT hardware is more performant than AMD's, yet we're getting middling results.

Edit: When this demo initially came out on console, I was very vocal about it being nothing special due to its awful performance, and my opinion hasn't changed. There is something really wrong with this demo and UE5, imo. It badly fails to utilize hardware: GPU utilization is often low and CPU utilization is also not the best. Combine that with the awful shader compilation stutters and, frankly, I can't say I'm impressed. Visually it looks good and it has high quality assets, but that's irrelevant to me if it fails to utilize hardware properly.
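The "expected vs actual" complaint can be made concrete with a linear-scaling estimate (all numbers below are assumed placeholders for illustration, not benchmarks):

```python
# If a 3090 really has ~3x the console GPU's throughput, naive linear scaling
# from a ~30fps console baseline would predict ~90fps. Comparing that to a
# ballpark observed PC result shows the scaling shortfall being described.
console_fps = 30          # assumed rough console target for the City Sample
gpu_ratio = 3.0           # assumed 3090 vs console GPU throughput ratio
observed_fps = 50         # assumed ballpark PC result

expected_fps = console_fps * gpu_ratio
scaling_efficiency = observed_fps / expected_fps
print(f"expected ~{expected_fps:.0f} fps, got ~{observed_fps} "
      f"({scaling_efficiency:.0%} of linear scaling)")
```

Of course linear scaling is the best case; CPU limits, settings mismatches, and shader compilation all eat into it, which is the whole argument of the thread.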
 