Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

...what we probably want are metrics per frame on:
a) resolution
b) amount of noise
c) detail per frame
d) alpha effects
e) actors on screen
f) draw distance
etc. We want those to be quantified. I think just having more data sampled correctly isn't really going to change things.
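
To make the list above concrete, here is a minimal sketch of what an automated tool could emit as a per-frame record; the field names and metric definitions are hypothetical, not taken from any existing capture tool.

```python
from dataclasses import dataclass

@dataclass
class FrameMetrics:
    """Hypothetical per-frame measurements an automated analysis tool could emit."""
    frame_index: int
    render_width: int        # (a) internal resolution actually rendered this frame
    render_height: int
    noise_level: float       # (b) residual image noise estimate, 0..1
    detail_score: float      # (c) some high-frequency-detail metric for the frame
    alpha_coverage: float    # (d) fraction of pixels touched by alpha/particle effects
    visible_actors: int      # (e) actors/NPCs on screen
    draw_distance_m: float   # (f) effective draw/LOD distance in metres

def average(run: list[FrameMetrics], field: str) -> float:
    """Average one metric over a captured run, e.g. average(run, "noise_level")."""
    return sum(getattr(m, field) for m in run) / len(run)
```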

Yes, those things and more would be what I meant with "automated tools". On PC, there are tools that can let you see what the hardware is doing on any given frame as well, although I don't think they work universally across all games (maybe they do, I haven't really looked into it). Unfortunately on consoles only developers have access to those tools and sites doing benchmarks can only go by video recordings.

Regards,
SB
 
Control on consoles basically had a built-in 'benchmark', I remember Alex saying in one of their videos (the demo you could run, with no fps cap). It would be something if more games had that.
 
More ideally, you'd have one input running across multiple devices simultaneously. That way you could capture identical gameplay clips within whatever randomised world parameters are in effect.
I don't know if this would work in a lot of games with any kind of randomness. Even if you have identical inputs, games like GTA, Elder Scrolls, Fallout, Watch Dogs, Far Cry, Assassin's Creed etc. have so much random variability that I reckon you would see slight visual differences within seconds and massive ones within 30 seconds or so. Many games are rolling dice almost constantly, and I think it would be tricky to get that car in the alley to explode in the same way twice, which creates slightly different NPC paths around it, different bits of the car on fire, different particles from exploding glass, different billowing smoke, different reactions from the enemies, etc.

It feels like the butterfly effect hugely sped-up.
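
A toy sketch of that problem (purely illustrative, not any game's actual code): even with identical input streams, two runs diverge as soon as a single random draw is unseeded or consumed in a different order, and the gap compounds every tick.

```python
import random

def simulate(inputs, seed=None):
    """Toy world tick: deterministic input response plus a random 'world event' per tick."""
    rng = random.Random(seed)            # seed=None -> OS entropy, different every run
    state = 0.0
    for press in inputs:
        state += press                   # the part driven purely by player input
        state += rng.uniform(-0.1, 0.1)  # the dice roll (wind, NPC choice, debris, ...)
    return state

inputs = [1, 0, 1, 1, 0] * 100           # the same recorded input stream for both "machines"

print(simulate(inputs), simulate(inputs))            # unseeded: the two runs differ
print(simulate(inputs, 42) == simulate(inputs, 42))  # shared seed and draw order: True
```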
 
On console (like on PC), VSync on/off can make a big difference in framerate; we can see some difference in Ghostwire Tokyo. This is why, when comparing consoles to PC, the PC game must be set to the same VSync setting, otherwise the comparison will be incredibly flawed (and useless)! And this is why NXGamer has the most objective comparisons, as he is always following that rule.

[Image: eSg0A4M.png]

[Image: YOo1MeW.png]
 
On console (like on PC), VSync on/off can make a big difference in framerate; we can see some difference in Ghostwire Tokyo. This is why, when comparing consoles to PC, the PC game must be set to the same VSync setting, otherwise the comparison will be incredibly flawed (and useless)! And this is why NXGamer has the most objective comparisons, as he is always following that rule.

Indeed, I was talking about it a few times on this forum.
 
That's Ghostwire... it could run at 200 fps for the graphics we are getting. Not to bash the game, I think it looks interesting.
 
On console (like on PC), VSync on/off can make a big difference in framerate; we can see some difference in Ghostwire Tokyo. This is why, when comparing consoles to PC, the PC game must be set to the same VSync setting, otherwise the comparison will be incredibly flawed (and useless)! And this is why NXGamer has the most objective comparisons, as he is always following that rule.

Why *must* the setting be the same on PC, when the real strength of the PC as a platform is its ability to change settings to the user's liking? Why would anyone on PC who is searching for the best performance turn on a setting that degrades performance, if other options (like driver-level framerate limits, VRR settings, or smart V-Sync) exist that may be more performant?

Should PC comparisons also dial back AF settings to match the poor settings on consoles, even though there is little to no performance hit on PC? Why would anyone do that?
 
Why *must* the setting be the same on PC, when the real strength of the PC as a platform is its ability to change settings to the user's liking? Why would anyone on PC who is searching for the best performance turn on a setting that degrades performance, if other options (like driver-level framerate limits, VRR settings, or smart V-Sync) exist that may be more performant?

Should PC comparisons also dial back AF settings to match the poor settings on consoles, even though there is little to no performance hit on PC? Why would anyone do that?

That's why we usually stick to DF. Yet again it's NXG who stirs up the discussions.
 
On console (like on PC), VSync on/off can make a big difference in framerate; we can see some difference in Ghostwire Tokyo. This is why, when comparing consoles to PC, the PC game must be set to the same VSync setting, otherwise the comparison will be incredibly flawed (and useless)! And this is why NXGamer has the most objective comparisons, as he is always following that rule.


Surely the lower vsync framerate is simply because frames running above 60 fps are counted as 60 and frames below 60 fps (even by a little) are counted as 30, so the average comes out much lower.

That's why NXG's common analysis of the PS5 being x% faster based on the averages with vsync enabled can be highly misleading.

If one system is just barely holding its framerate over 60 fps and the other is 5% slower, that 5% could result in a large number of frames missing the 16.6 ms window and thus contributing to the average as 30 fps frames, resulting in an average framerate reduction of much more than 5%.

The situation is even worse if comparing a console game with adaptive vsync vs a PC game without.
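
A quick back-of-the-envelope sketch of that effect (frame times invented for illustration): with double-buffered vsync on a 60 Hz display, any frame that misses the ~16.7 ms window is held for a second refresh, so a ~5% raw frame-time gap can show up as a 2x gap in the vsynced average.

```python
import math

REFRESH_MS = 1000 / 60  # ~16.67 ms per refresh on a 60 Hz display

def vsynced_fps(frame_times_ms):
    """Average fps when every frame must wait for the next refresh
    (double-buffered vsync: miss one refresh and the frame is held until the next)."""
    presented = [math.ceil(t / REFRESH_MS) * REFRESH_MS for t in frame_times_ms]
    return 1000 * len(presented) / sum(presented)

console_a = [16.4] * 100                    # just inside the 16.67 ms budget
console_b = [t * 1.05 for t in console_a]   # ~5% slower: ~17.2 ms, just outside it

print(round(1000 / console_a[0], 1), round(1000 / console_b[0], 1))        # raw: ~61.0 vs ~58.1 fps
print(round(vsynced_fps(console_a), 1), round(vsynced_fps(console_b), 1))  # vsynced: 60.0 vs 30.0 fps
```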
 
Surely the lower vsync framerate is simply because frames running above 60 fps are counted as 60 and frames below 60 fps (even by a little) are counted as 30, so the average comes out much lower.
It isn't just that. There are games on PC where, if you enable vsync or the in-game frame limiter, there is a performance penalty. You could have a game running at 70+ fps until you enable vsync, and then it'll average 56 or something.
 
On console (like on PC), VSync on/off can make a big difference in framerate; we can see some difference in Ghostwire Tokyo. This is why, when comparing consoles to PC, the PC game must be set to the same VSync setting, otherwise the comparison will be incredibly flawed (and useless)!

If the last two are at the same graphics settings and the only exception is vsync, then it's simply a broken implementation. There is no reason for such a huge disparity if vsync is implemented properly.

And this is why NXGamer has the most objective comparisons, as he is always following that rule.

I have other issues with his videos that I'm not going to rehash here, but in the case of vsync, treating the PC version of a game like it's a console when comparing relative performance to PC hardware is just silly. It bears mentioning when a game has a borked vsync, for sure, but you should also mention if/when you can remedy it by, say, simply forcing vsync through the control panel.

That shouldn't be required, yes, but the inherent ability to usually remedy broken implementations like the one in the screenshots above is a main reason why people bother with all the hassles of the PC in the first place. You can critique it from an ease-of-use perspective, no doubt, but the issue with NXGamer's interpretation is just that: it's an interpretation, which he then tries to explain as an inherent flaw of the architecture based entirely on supposition.
 
Buffering of various stages of the pipeline can have a huge effect on framerate when vsync is enabled or disabled.

This can easily be different by platform. YouTubers don't always have insight into this kind of stuff. And that's an understatement.

Implementation can be the difference between < 60 fps being 59.9 fps or 30fps.
 
All these disputes over which of the consoles is more powerful are actually quite pointless now, and they are likely to become largely irrelevant with the arrival of games natively designed for the current generation.

This video clearly shows that GTA 5 uses far from the maximum of the raw power of both consoles (especially the XSX), as do most cross-gen games.
Obviously, in the vast majority of cases, optimization leaves much to be desired, and it only shows that it was easier to port the game to one of the platforms than to the other (due to the toolkit, an architecture more similar to the last gen, or something else), or that one of the platforms was simply given more time.

 
It isn't just that. There are games on PC where, if you enable vsync or the in-game frame limiter, there is a performance penalty. You could have a game running at 70+ fps until you enable vsync, and then it'll average 56 or something.
That is because some frames just don't reach the 16.6 ms target. You only see the framerate per second; it is more or less always the average of the last second. So if you have a few frames that needed more than 16.6 ms, you can get 56 fps even with vsync, as vsync only means that the frames are synced with the 60 Hz refresh. The display still refreshes 60 times, but in the case of 56 fps you basically see 4 of those frames a second time (or one frame 5 times, ...).
Triple buffering can fix this, but then you are at least one more frame behind. So if a frame needs a bit more time, it just won't be noticeable (except for the additional lag), as it will be compensated for by other frames that are ready a bit earlier.
With an uncapped framerate (and e.g. VRR) you see the frames when they are ready to be shown. So if one frame is ready in 10 ms and the next in 33 ms, it will still work. But on the other hand this will introduce micro-stuttering, which isn't optimal at all. So you should design everything so the framerate doesn't go up and down too much. E.g. this is why you may notice stuttering if you play a 120 fps game and the framerate drops to 80 fps and then goes back up to 120 fps (even with VRR). It might be more fluid from a technical perspective than a 60 fps game, but your mind might notice the differences in animation fluidity.
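
A rough illustration of that last point (the numbers are made up): with VRR, a capture that swings between 80 and 120 fps can have a much higher average than a locked 60 fps capture, yet its frame-to-frame cadence is far less even, which is what tends to read as judder.

```python
import statistics

# Two VRR captures: frames are shown as soon as they are rendered.
locked_60    = [16.7] * 120            # steady 60 fps
swing_80_120 = [12.5, 8.3] * 60        # alternating between 80 fps and 120 fps frames

for name, intervals in (("locked 60 fps", locked_60), ("80-120 fps swing", swing_80_120)):
    avg_fps = 1000 * len(intervals) / sum(intervals)
    jitter = statistics.pstdev(intervals)   # frame-to-frame variation in ms
    print(f"{name}: {avg_fps:.0f} fps average, {jitter:.1f} ms cadence jitter")

# The swinging capture averages ~96 fps but its frame pacing varies by ~2 ms every
# frame, while the locked 60 fps capture delivers a perfectly even cadence.
```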
 
All these disputes over which of the consoles is more powerful are actually quite pointless now, and they are likely to become largely irrelevant with the arrival of games natively designed for the current generation.

This video clearly shows that GTA 5 uses far from the maximum of the raw power of both consoles (especially the XSX), as do most cross-gen games.
Obviously, in the vast majority of cases, optimization leaves much to be desired, and it only shows that it was easier to port the game to one of the platforms than to the other (due to the toolkit, an architecture more similar to the last gen, or something else), or that one of the platforms was simply given more time.


It's just incredibly impressive how little power the XBS-X consumes relative to the PS5, and even more so relative to a PC capable of running a similar level of graphics fidelity. Hell, the entire console uses less power than my GTX 1070 alone, not to mention the power-hungry RTX 3xxx series cards.

Regards,
SB
 
It's just incredibly impressive how little power the XBS-X consumes relative to the PS5, and even more so relative to a PC capable of running a similar level of graphics fidelity. Hell, the entire console uses less power than my GTX 1070 alone, not to mention the power-hungry RTX 3xxx series cards.

Regards,
SB
What exactly is so impressive about it? It's a 7nm RDNA2 chip sitting well within its power-efficiency window, unlike the PS5. Maybe it's just me, as I'm not easily impressed. To me, nothing about a TSMC 7nm chip using so little power is impressive. It's expected.
 
That is because some frames just don't reach the 16.6 ms target. You only see the framerate per second; it is more or less always the average of the last second. So if you have a few frames that needed more than 16.6 ms, you can get 56 fps even with vsync, as vsync only means that the frames are synced with the 60 Hz refresh. The display still refreshes 60 times, but in the case of 56 fps you basically see 4 of those frames a second time (or one frame 5 times, ...).
Triple buffering can fix this, but then you are at least one more frame behind. So if a frame needs a bit more time, it just won't be noticeable (except for the additional lag), as it will be compensated for by other frames that are ready a bit earlier.
With an uncapped framerate (and e.g. VRR) you see the frames when they are ready to be shown. So if one frame is ready in 10 ms and the next in 33 ms, it will still work. But on the other hand this will introduce micro-stuttering, which isn't optimal at all. So you should design everything so the framerate doesn't go up and down too much. E.g. this is why you may notice stuttering if you play a 120 fps game and the framerate drops to 80 fps and then goes back up to 120 fps (even with VRR). It might be more fluid from a technical perspective than a 60 fps game, but your mind might notice the differences in animation fluidity.
Right, but the swings don't even have to be big. If you have, say, a 17 ms frame time (just missing the 16.6 ms sync) for 3 frames in a row with double-buffered vsync, you essentially have a 33 ms frame time for 3 frames in a row. You can have a game with fairly consistent frame times, say 14-18 ms, that feels smooth with VRR or without vsync, and if you force a 16.6 ms sync with double buffering it can feel and perform much worse. I don't think most people would choose the inconsistent performance in that case, unless they don't have a VRR display and are very sensitive to screen tearing.

The question when it comes to benchmarking or comparisons is: should you tune a PC game to what people will likely play, try to match image quality, or try to match performance? Or maybe, if you want to do console-to-PC comparisons, all of those.

This is why I appreciate the modern trend of performance-per-setting videos that @Dictator and other content creators do, where they compare console settings to those found on PC, tweak up settings that look poor on console, tweak down settings that are a bit overkill where needed, and try to find the spot where the settings make sense for the class of hardware being showcased. Forcing parity when a setting has an innate performance penalty on a platform, and is easily disabled or circumvented by a driver setting, is silly on PC. The driver-level control panel and the in-game settings menu are the strength of the platform.
 
What exactly is so impressive about it? It's a 7nm RDNA2 chip sitting well within its power-efficiency window, unlike the PS5. Maybe it's just me, as I'm not easily impressed. To me, nothing about a TSMC 7nm chip using so little power is impressive. It's expected.

You don't find it impressive that an entire console consumes less power than just a PC GPU of comparable performance, never mind the rest of the system that is required to run any content? And it's not just a little more power efficient, but multiple times more power efficient.

Architecturally, it's just the cherry on top that it is also more power efficient than another hardware design that operates in a similar performance envelope with similar design goals (low-power living-room gaming). This is not to take anything away from the PS5; the PS5 is also a remarkable design, as it too is massively more power efficient than a PC capable of running equivalent settings in a game.

Regards,
SB
 