Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

The fast I/O we have available on today's hardware will take a while before it's truly, generally utilized. We're two years in and it's still quite dire in that regard. In another year or two things will perhaps be better. It's why I think rolling generations actually make sense nowadays.
 
The fast I/O we have available on today's hardware will take a while before it's truly, generally utilized. We're two years in and it's still quite dire in that regard. In another year or two things will perhaps be better. It's why I think rolling generations actually make sense nowadays.

I'm sure MS has already planned to move into the rolling gen model, and the Series consoles are already the first iteration of that.
 
Or it's not even CPU power, just old and bad code. People need to understand that nearly no game has perfect code; there is always room for optimisation. Some games are better than others, but here it isn't only a problem on current-gen consoles, it's on last-gen consoles and PC too. Maybe in the future they can improve the code with better multithreading and load even faster for GTA 6. Don't forget GTA 5 had to run on the PS3's PPU...

Similarly, Lance McDonald found an easy way to fix the frame-pacing issue in the Souls games, and he said a few days ago that the PS4 version of Elden Ring solves the issue. But while the PS5 version is better than the DX12/Xbox versions, it is not as good as the PS4 one, because the PS5 API is new and a bit different from the PS4's. Basically, From Software has difficulty with the new API...

EDIT: But the PS5 API is easier to deal with than the Xbox one, with fewer changes. IMO there is no reason they can't run Elden Ring at a solid 60 fps on current-gen consoles, and even less reason for the performance problems on PC.

A much more interesting benchmark is The Matrix Awakens demo, where PS5 and Xbox Series X are at the same level with very modern rendering technology: Nanite, RT GI, RT reflections, and virtual shadow maps.
They are not. Performance is noticeably more stable on PS5 when driving fast through crossroads.
 

Quality Mode
[screenshot]

Performance RT Mode
[screenshot]

Performance Mode
[screenshot]
 
Funny how tech reviewers can come to different performance measurements. So what was done differently for DigitalFoundry to have PS5 drop to 40 while Series X was 50, or PS5 drop to 20 when Series X held 27?
 
Funny how tech reviewers can come to different performance measurements.
Lies, damned lies, and statistics!

Shortbread's post is really good as the data is there showing how the averages were derived. But different sampling patterns, time intervals, etc. can generate different statistical approximations, all correct if performed mathematically correctly, and yet potentially different. That's why any real understanding of the data needs an application of solid data science, and a recognition of the shortfalls of approximating data. Just dumping the raw data gives users the option to perform their own data analysis and derive their own approximations!
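
To make the point concrete, here's a minimal sketch with made-up frame-time data (not anyone's actual capture) showing how three equally valid ways of averaging the same run produce three different "average FPS" figures:

```python
# Made-up capture: mostly 60 fps with a short stutter in the middle.
frame_times_ms = [16.7] * 500 + [33.4] * 20 + [16.7] * 480

# Method A: frames delivered divided by elapsed time (time-weighted average).
total_time_s = sum(frame_times_ms) / 1000.0
avg_fps_time_weighted = len(frame_times_ms) / total_time_s

# Method B: mean of per-frame instantaneous FPS (every frame weighted equally,
# so the short frames dominate and the stutter is diluted).
avg_fps_per_frame = sum(1000.0 / ft for ft in frame_times_ms) / len(frame_times_ms)

# Method C: average of one-second window averages (roughly what an on-screen
# FPS counter sampled once per second would report).
window_avgs, frames_in_window, elapsed_ms = [], 0, 0.0
for ft in frame_times_ms:
    frames_in_window += 1
    elapsed_ms += ft
    if elapsed_ms >= 1000.0:
        window_avgs.append(frames_in_window / (elapsed_ms / 1000.0))
        frames_in_window, elapsed_ms = 0, 0.0
avg_fps_windowed = sum(window_avgs) / len(window_avgs)

print(avg_fps_time_weighted, avg_fps_per_frame, avg_fps_windowed)
```

All three are "correct" averages of the same data, and none of them agree exactly.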
 
Lies, damned lies, and statistics!

Shortbread's post is really good as the data is there showing how the averages were derived. But different sampling patterns, time intervals, etc. can generate different statistical approximations, all correct if performed mathematically correctly, and yet potentially different. That's why any real understanding of the data needs an application of solid data science, and a recognition of the shortfalls of approximating data. Just dumping the raw data gives users the option to perform their own data analysis and derive their own approximations!

I would argue that even this raw data has minimal value since we can never know if more time and manpower was devoted to Platform A vs Platform B for whatever reasons.
 
Funny how tech reviewers can come to different performance measurements. So what was done differently for DigitalFoundry to have PS5 drop to 40 while Series X was 50, or PS5 drop to 20 when Series X held 27?

Simple answer being that DF, NXGamer, VG Tech, ElAnalistaDeBits, and all the rest aren't sampling matching areas, scenes, or scenarios all the time. In the DF video as you stated, it shows PS5 dropping in certain scenes, while in other areas it outperforms XBSX.

I think the best solution for gamers is to stop focusing on random drops or the highest framerates hit... and focus more on framerate averages (and frame-times) as the main determining factor in judging 'good' performance.
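
As a rough illustration of what I mean (the frame-time log here is made up; real numbers would come from whatever capture tool is in use):

```python
import statistics

def summarise(frame_times_ms):
    """Average FPS plus frame-time percentiles; the percentiles and 1% lows
    expose the stutter that a plain average hides."""
    n = len(frame_times_ms)
    total_s = sum(frame_times_ms) / 1000.0
    ordered = sorted(frame_times_ms)                      # slowest frames at the end
    worst_1pct = ordered[max(1, int(0.99 * n)):] or [ordered[-1]]
    return {
        "avg_fps": n / total_s,
        "median_frame_time_ms": statistics.median(frame_times_ms),
        "p99_frame_time_ms": ordered[min(n - 1, int(0.99 * n))],
        "one_pct_low_fps": len(worst_1pct) / (sum(worst_1pct) / 1000.0),
    }

# Example: a mostly-60fps run with a couple of hitches.
print(summarise([16.7] * 200 + [40.0, 55.0] + [16.7] * 198))
```

The average FPS barely moves with a couple of hitches, but the 1% lows and 99th-percentile frame time immediately show them.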
 
Simple answer being that DF, NXGamer, VG Tech, ElAnalistaDeBits, and all the rest aren't sampling matching areas, scenes, or scenarios all the time. In the DF video as you stated, it shows PS5 dropping in certain scenes, while in other areas it outperforms XBSX.

Exactly. So much of it comes down to the selection process of what footage to even analyze and present to others.

I think the best solution for gamers is to stop focusing on random drops or the highest framerates hit... and focus more on framerate averages (and frame-times) as the main determining factor in judging 'good' performance.

Agreed. What's more important for gamers is if a game has any severe flaws or anti-consumer mechanics.
 
Funny how tech reviewers can come to different performance measurements. So what was done differently for DigitalFoundry to have PS5 drop to 40 while Series X was 50, or PS5 drop to 20 when Series X held 27?
DF found one scene with a clear advantage for XSX and they talked about it in their article at length (in both perf and quality as this is the same scene!). But the reality is that the others found an advantage for PS5 in all their tested scenes. Seems that one scene on PS5 is the exception, not the rule.

We'll have more data with NXGamer's analysis, but as of now it seems the PS5 is usually performing better while also consistently having better shadows.

It seems that PS5 is rendering an additional projected shadow texture: it's a technique used by Rockstar separate to screen-space ambient occlusion or typical shadows, drawing a texture to bake in shade at specific points.

Even DF say the game is usually performing worse on XSX in the non-RT mode:
Switching over to Series X, the results are also improved, though drops to performance are noticeably more frequent than PS5.
 
Exactly. So much of it comes down to the selection process of what footage to even analyze and present to others.



Agreed. What's more important for gamers is if a game has any severe flaws or anti-consumer mechanics.

So I'm wondering, would it be in gaming tech analysis sites' (or streamers') best interest to offer more complete analyses based on completed games or larger playthrough environments? That would offer readers/viewers more critical information and footage to support their findings. I mean, we have heard from some gaming tech reviewers (not mentioning any names) about not completing a game (sometimes not even half of it); as such, they can't specifically state that IQ and/or performance still holds up, or dives, across a completed game.
 
All I'm seeing is identical performance between XSX and PS5; the differences are so small and random that it really doesn't matter and probably comes down to things other than power or even optimization.
 
So I'm wondering, would it be in gaming tech analysis sites' (or streamers') best interest to offer more complete analyses based on completed games or larger playthrough environments? That would offer readers/viewers more critical information and footage to support their findings. I mean, we have heard from some gaming tech reviewers (not mentioning any names) about not completing a game (sometimes not even half of it); as such, they can't specifically state that IQ and/or performance still holds up, or dives, across a completed game.
You can't always complete a game for such tech tests; they can just use what they have, e.g. maybe a save game into the late game, but that's all. Otherwise, all they can do is play it for a few hours and judge the general performance.

Something like a "full tech review" would only work if the game had a fast-travel mechanic in the form of chapters (or something like that) you can select, and e.g. something like cheat codes to easily get to the scenes of interest. Everything else is just too time consuming (not only do you have to play the game, you must also analyze the data, repeat suspicious scenes, ...), and that across multiple consoles.
 
So I'm wondering, would it be in gaming tech analysis sites' (or streamers') best interest to offer more complete analyses based on completed games or larger playthrough environments?
No. For the time it'd take to get a complete run through of one game, they could cover multiple titles. Unless playing 5x longer brought in 5x or more money, which it wouldn't, it's wasted effort.
 
No. For the time it'd take to get a complete run through of one game, they could cover multiple titles. Unless playing 5x longer brought in 5x or more money, which it wouldn't, it's wasted effort.

Yes, it is just not feasible to do, say, a frametime analysis of an entire runthrough of a game to attempt to catch all corner cases where there's a potential performance pitfall for hardware architecture A, B, C, D, etc.

While there are a few shorter AAA games that can be completed in single-digit hours, ensuring that you cover all parts of any given map to catch any potential pitfall would multiply that length significantly. And then, of course, there are the games that take multiple tens of hours to complete, and multiple hundreds of hours if you want to ensure that you cover all areas of the game.

The audience would ideally understand this and know that any benchmark or performance review of a game is inherently imprecise, and that definitive statements can rarely be made about isolated drops in performance. Those can, of course, be interesting to discuss in a theoretical manner to attempt to come up with reasons why they might be happening, but even those are at best hypotheses. Only the developers will know with any certainty, as they are the only ones with access to the code to know exactly what the engine is attempting to do at any given point.

At best, performance benchmarks and analysis can give a generalized "feel" of a game's performance within the limited area within which the benchmark and analysis take place. Extending that outside of the tested area will become less relevant or accurate as different areas in a game may put different levels of stress on different aspects of any given hardware architecture.

While in the past there have been some unscrupulous review sites that cherry-picked which areas to use in their published reviews in order to give their hardware of choice a more favorable score, I'd like to think that this rarely if ever happens nowadays. But even then, personal bias can at times lead a person to choose area X rather than area Y, which happens to favor what they consider the more powerful hardware, because they feel it might be more "representative" of the game. We can see this to an extent in the Graphics subforums here: depending on the person, they'll feel that game X or game Y is "fair" versus being optimised for architecture A or B and thus "unfair", and it is almost impossible to get everyone to agree on which games are "fair" and which are optimised for a particular architecture, making those "unfair" to use in benchmark analysis of hardware. Hell, there are times when it's impossible to get any sort of consensus as to what area within game X should be used for benchmark analysis, because part Y of the game might "unfairly" favor one hardware architecture over another.

Regards,
SB
 
So I'm wondering, would it be in gaming tech analysis sites' (or streamers') best interest to offer more complete analyses based on completed games or larger playthrough environments? That would offer readers/viewers more critical information and footage to support their findings. I mean, we have heard from some gaming tech reviewers (not mentioning any names) about not completing a game (sometimes not even half of it); as such, they can't specifically state that IQ and/or performance still holds up, or dives, across a completed game.
There's limited time to sample and games are massive. But as long as the sampling pattern is more or less the same, then that's as honest as you can get it.
To improve the analysis etc., they need more data, and therefore more tools to provide additional insights into what they sampled. Right now they have their eyes, some rudimentary ways to count resolution, and a frame-rate counter. If you could extract more information from those sampled clips, it would be more valuable than sampling more.
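
For context, the frame-rate counters they use basically work by counting repeated frames in the captured footage. A rough sketch of the idea (OpenCV assumed; "capture.mp4" is just a placeholder clip and the threshold is a guess you'd tune per game and encoder):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("capture.mp4")
ok, prev = cap.read()
unique, total = 1, 1
while True:
    ok, frame = cap.read()
    if not ok:
        break
    total += 1
    # Mean absolute pixel difference; near-zero means the console presented
    # the same image twice, i.e. a dropped/repeated frame.
    diff = np.mean(cv2.absdiff(frame, prev))
    if diff > 1.0:
        unique += 1
    prev = frame
cap.release()

capture_fps = 60.0  # assumption: footage was captured at 60 fps
print(f"effective fps ~ {unique / total * capture_fps:.1f}")
```

Everything beyond that (image quality, reconstruction artifacts, LOD behaviour) still comes down to eyeballing the clip, which is where better tooling would help.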
 
There's limited time to sample and games are massive. But as long as the sampling pattern is more or less the same, then that's as honest as you can get it.
To improve the metrics etc., they need more data, and therefore more tools to provide additional insights into what they sampled. Right now they have their eyes, some rudimentary ways to count resolution, and a frame-rate counter. If you could extract more information from those sampled clips, it would be more valuable than sampling more.

What would be needed, IMO, to get as much coverage of a game as possible is...
  • Automated tools that can gather real-time data on various metrics while a person plays through the game "blind" (that should eliminate any potential unwanted bias).
    • Ideally you'll also want to record the playthrough.
    • Log of the data should be timestamped so that it can be compared to the playthrough footage.
    • Then you could go back and compare footage to data showing problem areas.
    • Easier to do on PC versus consoles as you can have system level tools for this.
  • A tool that collates that data.
  • Multiple people playing through X game on different hardware at different settings using the above tools.
Tested areas wouldn't be exactly like for like, but with such a massive amount of data, it should be relatively accurate with about as much objectivity as is possible.
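
A minimal sketch of what the logging half of that could look like; sample_metrics() here is a hypothetical stand-in for whatever tool actually exposes the numbers:

```python
import csv
import time

def log_run(sample_metrics, duration_s, out_path="perf_log.csv", interval_s=0.1):
    """Poll sample_metrics() (assumed to return a dict of numbers, e.g.
    {"frame_time_ms": 16.7, "gpu_util": 93}) and write timestamped rows
    that can later be lined up against the recorded playthrough footage."""
    t0 = time.time()
    with open(out_path, "w", newline="") as f:
        writer = None
        while time.time() - t0 < duration_s:
            row = {"t_since_start_s": round(time.time() - t0, 3),
                   "wall_clock": time.strftime("%H:%M:%S"),
                   **sample_metrics()}
            if writer is None:
                writer = csv.DictWriter(f, fieldnames=list(row.keys()))
                writer.writeheader()
            writer.writerow(row)
            time.sleep(interval_s)
```

The collation step is then just merging those CSVs across runs/machines and flagging the timestamps worth reviewing against the footage.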

That's still an incredibly time- and manpower-intensive task, beyond what most (potentially all) review sites would be capable of doing in order to get something that is more comprehensively accurate.

Also, something along those lines without the video recording is likely something that Microsoft and Sony already do to an extent. We know for sure that MS has done this in the past as they mentioned performance logging of games (both internally in the lab and externally on player's machines) on previous generations of hardware in order to determine the direction they would go with a newer hardware design. Of course, they wouldn't be able to do this on the competitor's hardware.

Regards,
SB
 
More ideally, you'd have one input running across multiple devices simultaneously. That way you could capture identical gameplay clips within whatever randomised world parameters are in effect. Dunno if an advanced controller could be programmed to connect to and broadcast to multiple consoles, or otherwise could trigger inputs on separate controllers.
 
What would be needed, IMO, to get as much coverage of a game as possible is...
  • Automated tools that can gather real-time data on various metrics while a person plays through the game "blind" (that should eliminate any potential unwanted bias).
    • Ideally you'll also want to record the playthrough.
    • Log of the data should be timestamped so that it can be compared to the playthrough footage.
    • Then you could go back and compare footage to data showing problem areas.
    • Easier to do on PC versus consoles as you can have system level tools for this.
  • A tool that collates that data.
  • Multiple people playing through X game on different hardware at different settings using the above tools.
Tested areas wouldn't be exactly like for like, but with such a massive amount of data, it should be relatively accurate with about as much objectivity as is possible.

That's still an incredibly time- and manpower-intensive task, beyond what most (potentially all) review sites would be capable of doing in order to get something that is more comprehensively accurate.

Also, something along those lines without the video recording is likely something that Microsoft and Sony already do to an extent. We know for sure that MS has done this in the past as they mentioned performance logging of games (both internally in the lab and externally on player's machines) on previous generations of hardware in order to determine the direction they would go with a newer hardware design.

Regards,
SB
I think DF does that already. They spool up multiple playthroughs that are largely full length, and then they import to Premiere. Their frame-rate plugin is an extension and it renders right on top. They can use Premiere to see where there are frame dips, and they snip out sections of play accordingly. Once they have their data they use that to perform analysis, and once that's complete, they add in anything extra and the voice-over, and the video is complete. Easier said than done of course, but at least this is my understanding.
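
For the "find the dips and snip those sections" step, something like this rough sketch would do it on a timestamped frame-time log (the threshold and padding are guesses, not DF's actual plugin):

```python
def dip_ranges(samples, budget_ms=16.7, pad_s=2.0):
    """samples is a list of (timestamp_s, frame_time_ms) pairs; returns
    (start_s, end_s) clip ranges around frames that blew the budget."""
    ranges, start, end = [], None, None
    for t, ft in samples:
        if ft > budget_ms * 1.5:              # frame noticeably over budget
            start = t if start is None else start
            end = t
        elif start is not None:
            ranges.append((max(0.0, start - pad_s), end + pad_s))
            start = None
    if start is not None:
        ranges.append((max(0.0, start - pad_s), end + pad_s))
    return ranges

# e.g. dip_ranges([(0.0, 16.7), (0.02, 16.6), (30.2, 50.1), (30.25, 48.0), (30.3, 16.7)])
# -> [(28.2, 32.25)]
```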

While being able to replicate a playthrough is ideal experimentation and they do their best to do that, the number of frames they collect is in the 20K-40K range, and when your results are in the 99.93% range, the difference is minuscule and not worth the effort. If they managed to run a single controller on four consoles at once, that might be ideal for saving them time, but the analysis wouldn't differ all that much. The expectation from developers here as well is that they hit near 100% of the target frame rate, so really, what we probably want are per-frame metrics on:
a) resolution
b) amount of noise
c) detail per frame
d) alpha effects
e) actors on screen
f) draw distance
etc. We want those to be quantified. I think just having more data sampled correctly isn't really going to change things.
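
To give one concrete example of such a per-frame metric, here's a hedged sketch that scores per-frame "detail" via variance of the Laplacian over a captured clip (OpenCV assumed; "capture.mp4" and the metric choice are illustrative, not how any outlet actually measures it):

```python
import cv2

cap = cv2.VideoCapture("capture.mp4")
detail_per_frame = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian: a crude sharpness/detail proxy per frame.
    detail_per_frame.append(cv2.Laplacian(grey, cv2.CV_64F).var())
cap.release()

# Plotting detail_per_frame over time would flag scenes where image quality
# (e.g. reconstruction sharpness) collapses, alongside the frame-rate graph.
```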
 