Current Generation Games Analysis Technical Discussion [2020-2021] [XBSX|S, PS5, PC]

I didn't hear Puha specifically saying the XSS would hold things back. He had a lot to say about how taking an existing game (Control) and moving it to the new gen is not going to create the best possible result. For XSS/new gen he said the hardware can be taken into account when designing the game. He also said supporting XSS is not as simple as just lowering the resolution. That's a very reasonable thing to say. At minimum you would have to deploy a build to XSS and QA the game. It's entirely possible that even with the same binary running on different hardware, different bugs would be encountered. So that means, at minimum, adding QA per console hardware. If a developer optimizes more than just resolution, they need more resources to create and test those console-specific modifications.
 
We'll know definitively when we compare Sony's first-party titles to MS/third-party titles in 2023 and beyond. If Sony's titles are doing things that are not seen in the latter (Xbox + PC), unrelated to streaming/loading, then the XSS will definitely have held the gen back.

It'll be interesting to see how SFS, VRS, mesh shaders, and/or ML (doubtful on that one) change the game for the S.
 
Maybe MS should share more on those I/O APIs

AFAIK, they're already out there for Xbox and the GDK. They're just not available for PC yet. The actual issue is likely a lack of time allocated by management for developers to revamp a code section that already functions, just to optimize it for Series X|S. Especially when you're a smaller team, you're not going to be able to spend time reworking sections that weren't budgeted for.

Take a look at the recent R* GTA Online event [ https://forum.beyond3d.com/threads/...oading-times-70-reduction.62316/#post-2196166 ] to see how I/O and loading don't get nearly the attention they should for existing titles. "Things work. Don't change it." There is a lot of legacy code in I/O sections that goes unchanged for decades. It's one area that even Epic had to largely revamp for their Unreal engine, even though it was usually the first thing larger developers threw out and rewrote to improve performance on last-gen consoles.
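To make the "legacy I/O" point concrete, here's a minimal sketch, assuming plain Win32 and invented file names and sizes, of the classic blocking read that tends to survive untouched for decades next to an overlapped read, where the request is queued early and the thread only waits when the data is actually needed:

```cpp
// Illustrative only; the file path and byte counts are made up.
#include <windows.h>
#include <vector>

// Classic blocking read: the calling thread stalls until the drive returns the data.
std::vector<char> ReadBlocking(const char* path, DWORD bytes)
{
    std::vector<char> buffer(bytes);
    HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return {};
    DWORD read = 0;
    ReadFile(file, buffer.data(), bytes, &read, nullptr); // thread blocks here
    CloseHandle(file);
    buffer.resize(read);
    return buffer;
}

// Overlapped read: issue the request, do other work, wait only when the data is needed.
std::vector<char> ReadOverlapped(const char* path, DWORD bytes)
{
    std::vector<char> buffer(bytes);
    HANDLE file = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_FLAG_OVERLAPPED, nullptr);
    if (file == INVALID_HANDLE_VALUE) return {};
    OVERLAPPED ov = {};
    ov.hEvent = CreateEventA(nullptr, TRUE, FALSE, nullptr);
    ReadFile(file, buffer.data(), bytes, nullptr, &ov); // returns immediately (ERROR_IO_PENDING)

    // ... the engine could decompress previous chunks, build command lists, etc. here ...

    DWORD read = 0;
    GetOverlappedResult(file, &ov, &read, TRUE); // wait only when the data is actually needed
    CloseHandle(ov.hEvent);
    CloseHandle(file);
    buffer.resize(read);
    return buffer;
}
```

The second version moves the same amount of data; the difference is that the CPU and the drive overlap instead of taking turns, which is exactly the kind of restructuring that doesn't happen when "things work."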
 
Maybe MS should share more on those I/O APIs they have internally,

Wait about a month and we will know. Under the Systems & Tools session:
DirectStorage for Windows
Microsoft is excited to bring DirectStorage, an API in the DirectX family originally designed for the Velocity Architecture, to Windows PCs! DirectStorage will bring best-in-class I/O tech to both PC and console, just as DirectX 12 Ultimate does with rendering tech. With a DirectStorage-capable PC and a DirectStorage-enabled game, you can look forward to vastly reduced load times and virtual worlds that are more expansive and detailed than ever. In this session, we will be discussing the details of how this technology will help you build your next-generation PC games.
https://developer.microsoft.com/en-us/games/events/game-stack-live/
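We'll get the actual PC API details in that session; conceptually, though, the pitch is batching lots of small asset reads into a queue the OS and NVMe drive can service at full depth instead of one blocking read at a time. A rough sketch of what such an interface might look like, with entirely hypothetical types that are not the real DirectStorage headers:

```cpp
// Hypothetical batched-request queue; these types are NOT the real DirectStorage API,
// they only illustrate the "enqueue many small reads, submit once" model.
#include <cstdint>
#include <string>
#include <vector>

struct StreamRequest {          // hypothetical
    std::string file;           // source archive
    uint64_t    offset;         // byte offset of the asset chunk
    uint32_t    size;           // bytes to read (possibly compressed on disk)
    void*       destination;    // where the data should land
};

class StreamQueue {             // hypothetical
public:
    void Enqueue(const StreamRequest& r) { pending_.push_back(r); }
    void Submit() { /* hand the whole batch to the OS/storage stack at once */ }
private:
    std::vector<StreamRequest> pending_;
};
```

The win over the old Win32 pattern is per-request overhead: thousands of tiny texture-tile reads per frame become a single submission.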
 
I didn't hear Puha specifically saying the XSS would hold things back. He had a lot to say about how taking an existing game (Control) and moving it to the new gen is not going to create the best possible result.

At 14m30s:
The Series S.. well.. it's no different than the previous generations where the system with the lowest specs does end up dictating a few of the things that you're going to do, because you're going to have to run on that system, right?


At ~15m40s:

It's a lot more difficult to engineer an old game to make sure it works on everything, but now that we're building the future games and hey, we know these are the systems it has to run, we take that into account from day one. And we can make sure all of the platforms have as good of an experience as possible. That's what needs to happen.

And we appreciate that there's a lower barrier entry for the next gen experience, but you know, the more hardware you have, the more you have to ultimately compromise when you are a smaller studio like us, where we just can't spend so much time making sure that all these platforms are super good.
 
I guess there was a part of me thinking they would integrate more/higher-end effects with a 30 fps mode and lower resolution. But it may be too early in the generation to do that.

Nothing prevents a developer from offering a proper 30 FPS mode along with a proper 60 FPS mode. Basically, develop like you would for PC, but on console the developer chooses the graphical effects they feel best represent their game at a 30 FPS target and does the same for a 60 FPS target. Granted, cost could become an issue, but if you are a multiplatform developer supporting PC, these things are likely already built into your engine.
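As a trivial illustration of that, the two console modes can simply be two hand-tuned presets the build selects between; the structure and values below are invented for the example:

```cpp
// Hypothetical per-mode presets; names and values are invented for illustration.
struct GraphicsPreset {
    int   targetFps;
    float resolutionScale;        // fraction of native 4K
    int   shadowMapSize;
    bool  highQualityReflections;
};

// The 30 fps target spends its frame budget on effects...
constexpr GraphicsPreset kQualityMode     { 30, 1.00f, 4096, true  };
// ...while the 60 fps target trades them for frame rate.
constexpr GraphicsPreset kPerformanceMode { 60, 0.75f, 2048, false };

// On console, the front end exposes just these two developer-tuned presets
// instead of the full PC settings menu.
GraphicsPreset SelectPreset(bool preferPerformance)
{
    return preferPerformance ? kPerformanceMode : kQualityMode;
}
```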

Regards,
SB
 
AFAIK, they're already out there for Xbox and the GDK. They're just not available for PC yet. The actual issue is likely a lack of time allocated by management for developers to revamp a code section that already functions, just to optimize it for Series X|S. Especially when you're a smaller team, you're not going to be able to spend time reworking sections that weren't budgeted for.

Take a look at the recent R* GTA Online event [ https://forum.beyond3d.com/threads/...oading-times-70-reduction.62316/#post-2196166 ] to see how I/O and loading don't get nearly the attention they should for existing titles. "Things work. Don't change it." There is a lot of legacy code in I/O sections that goes unchanged for decades. It's one area that even Epic had to largely revamp for their Unreal engine, even though it was usually the first thing larger developers threw out and rewrote to improve performance on last-gen consoles.
Yeah, I've also seen such "development errors" quite a lot, where developers made a config reader that re-reads the config file on every call and then releases its resources. Then the next method uses this config reader, and the next, and the next. Before you know it, over 100 accesses to the config reader are made, on the assumption that the reader caches the necessary data. Black-box development at its best ;)

Those things won't be noticed if everything works fine. But then, for example, multiple users/processes use the "black box" and at some point the I/O just becomes a new bottleneck in the system.
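A minimal sketch of that pattern (hypothetical config reader; "settings.cfg" is a made-up file): the naive version re-reads the file on every lookup, while the fix is to parse it once and answer lookups from memory:

```cpp
// Hypothetical example of the "black box" config reader described above.
#include <fstream>
#include <map>
#include <string>

// Naive version: every lookup re-opens and re-parses the file.
std::string GetSettingNaive(const std::string& key)
{
    std::ifstream file("settings.cfg");       // hits storage on every call
    std::string line;
    while (std::getline(file, line)) {
        auto pos = line.find('=');
        if (pos != std::string::npos && line.substr(0, pos) == key)
            return line.substr(pos + 1);
    }
    return {};
}

// Cached version: parse once, then answer every lookup from memory.
const std::map<std::string, std::string>& LoadedConfig()
{
    static const auto config = [] {
        std::map<std::string, std::string> values;
        std::ifstream file("settings.cfg");
        std::string line;
        while (std::getline(file, line)) {
            auto pos = line.find('=');
            if (pos != std::string::npos)
                values[line.substr(0, pos)] = line.substr(pos + 1);
        }
        return values;
    }();
    return config;
}
```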

Another dev "failure" is string concatenation. Because it is more readable, today you're supposed to use something like format strings, etc. This works fine if you are only formatting a few strings per second. But when used excessively (e.g. inside a loop spanning a few functions), it quickly gets to the point where the CPU does nothing but parse strings. Simple concatenation is then much faster, because the CPU doesn't have to analyze each format string and replace its placeholders. Even better would be to let the system do that work only once and never again, but that might cost a bit of memory.
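A rough illustration of the cost difference (invented labels, not from any real codebase): the first loop re-parses a format string on every iteration, the second builds the shared prefix once and just concatenates:

```cpp
// Illustrative only; the outputs differ slightly (zero padding), which is beside the point here.
#include <cstdio>
#include <string>
#include <vector>

std::vector<std::string> BuildLabelsFormatted(int count)
{
    std::vector<std::string> labels;
    for (int i = 0; i < count; ++i) {
        char buf[32];
        std::snprintf(buf, sizeof(buf), "item_%05d", i); // format string parsed every pass
        labels.emplace_back(buf);
    }
    return labels;
}

std::vector<std::string> BuildLabelsConcatenated(int count)
{
    std::vector<std::string> labels;
    labels.reserve(count);
    const std::string prefix = "item_";                  // built once, reused
    for (int i = 0; i < count; ++i)
        labels.push_back(prefix + std::to_string(i));    // plain concatenation
    return labels;
}
```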
Those problems exist because code is mostly written with the intention that it should just work to solve the current problem. And if it is reused, everybody assumes that it works because it works. As long as there isn't another problem, no one will analyze the existing code to see whether some things could be solved a bit better (with fewer resources).

And loading times were never a real problem before. Only this generation have loading times become a priority.
 
The versions tested were 1.000.001 on PS5 and 2.0.0.3 on Xbox Series X|S.

PS5 and Xbox Series X use a dynamic resolution with the highest resolution found being 3840x2160 and the lowest resolution found being approximately 3456x1944.

Xbox Series S uses a dynamic resolution with the highest resolution found being 2560x1440 and the lowest resolution found being approximately 2304x1296.

During gameplay pixel counts at 3840x2160 seem to be common on PS5 and Xbox Series X. Xbox Series S drops below its maximum resolution more often than the other two consoles. The resolution is lower on average during cutscenes than gameplay for all three consoles.

Stuttering unrelated to the frame rate can happen at some points on the Xbox consoles. This issue wasn't encountered on PS5. An example of a level where this stuttering was found is Hit The Road.

 
I had two initial reactions. The first being, yeah, the GDK likely feels and operates differently than the XDK setup they've been using forever. The second being, maybe it's all the I/O changes needed to get the most out of the NVMe drives, considering it's an entirely new set of APIs. The same old Win32-style I/O used for decades on PC and Xbox won't get the new benefits, so it's now a different codepath between the two until the Velocity Architecture is released for PC too.

I'm curious what the PS4 I/O and PS5 I/O APIs look like.
Is Smart Delivery done through the GDK? I kind of forget this part. I recall companies trying to circumvent Smart Delivery by issuing two SKUs: one SKU would be forever stuck on XBO/X1X, unable to upgrade, and the other SKU would deploy to all. I do wonder if part of the reason for the GDK is to support Smart Delivery. If so, thinking about it, the main priority for the GDK was likely seamless generational support (the BC experience) for launch.
 
Is Smart Delivery done through the GDK? I kind of forget this part. I recall companies trying to circumvent Smart Delivery by issuing two SKUs: one SKU would be forever stuck on XBO/X1X, unable to upgrade, and the other SKU would deploy to all. I do wonder if part of the reason for the GDK is to support Smart Delivery. If so, thinking about it, the main priority for the GDK was likely seamless generational support (the BC experience) for launch.

Smart Delivery was around before GDK. It's supported on XDK. It's how they delivered the One X upgrades during last-gen.

It's possible it's how they provided Xbox Play Anywhere with a single purchase giving you both Xbox and PC versions.

In general, it's more than just content delivery. It has substantial back-end tie-ins. It's a very broad system that covers purchases and entitlements, content configurations, content, and cloud saves.
 
Here's the thing though; there aren't a lot of those kinds of developers out there. If they're mid-sized teams doing mid-tier games, they probably already work on PC, and that's an environment with a lot more volatility in terms of configurations to keep in mind than Series X and S. So why would putting in some time to tune performance for Series S be a dealbreaker? The same can be said for the large dev teams; they work on PC as well, plus some of them have done ports to much weaker hardware like the Switch, so... where's the issue?
Switch ports are often handled by external teams and at a later date. Making a game that launches on Series X means having to spend time testing and optimizing for the S as well, and I think that's where much of the headache comes from. Especially considering that the Series games must be compatible with each other. In the case of a hypothetical Switch release, there is no requirement that your save files transfer or that any multiplayer works outside of the Switch ecosystem. So the opportunity for doing deep optimizations isn't there in the same way it would be with a Switch port.

Yah, could be. It's just interesting that the point of high-level APIs is to ensure some level of forward compatibility, but if you keep rewriting the APIs every time the hardware platform changes, you lose that benefit, especially if you end up rewriting things that are trivial. On the other side, Sony is supposed to be more low level, but it seems like they only changed the API in places where they absolutely had to, so it's a bit easier to adapt to from a developer perspective. That's how I interpreted it, and it's kind of interesting.
If the new API includes a superset of the old API, that wouldn't be a problem. Also, if all of your apps are virtualized, you can include a version of the API appropriate for the app to ensure compatibility.
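A toy example of the superset idea (both interfaces are hypothetical): if the new version only adds entry points, code written against the old one keeps compiling and running unchanged:

```cpp
// Hypothetical versioned I/O interface; V2 is a strict superset of V1.
struct FileIoV1 {
    virtual ~FileIoV1() = default;
    virtual int Read(const char* path, void* dst, int bytes) = 0;   // original entry point
};

struct FileIoV2 : FileIoV1 {
    // V1 calls still work; new code can opt into the added batched path.
    virtual int ReadBatch(const char* const* paths, void** dsts,
                          const int* sizes, int count) = 0;
};

// An old call site written against V1 accepts a V2 implementation untouched.
int LoadLegacyAsset(FileIoV1& io, void* dst)
{
    return io.Read("legacy_asset.bin", dst, 4096);                  // made-up asset name
}
```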
 
Is Smart Delivery done through the GDK? I kind of forget this part. I recall companies trying to circumvent Smart Delivery by issuing two SKUs: one SKU would be forever stuck on XBO/X1X, unable to upgrade, and the other SKU would deploy to all. I do wonder if part of the reason for the GDK is to support Smart Delivery. If so, thinking about it, the main priority for the GDK was likely seamless generational support (the BC experience) for launch.


The GDK, broadly speaking, is meant to unify game development between PC and Xbox (and the cloud); I presume that's why they dropped the Xbox part of the acronym (Xbox Development Kit becoming Game Development Kit). I think this is largely the reason we haven't seen many games that utilize some of the touted next-gen features like DirectStorage and the like, even though we have seen the very same games utilizing them on PS5. An example is the next-gen version of Remedy's Control, which is substantially smaller install-size-wise on PS5 than on Xbox; in comparison, the Xbox version's install size is essentially identical to the install size on PC. Since PC doesn't support it yet, using it would split the game's codebases, making one of the main reasons behind the GDK a moot point.

The GDK isn't meant for cross-generational support (even though it does support it, to be clear). It just allows devs to easily 'port' their shared-codebase game to different platforms. So you would have an identical game running on Windows, Xbox and the cloud, but different UIs for each. You could even have different versions of the UI for each. For instance, if you are making a cloud version of your game, maybe you make two versions of the UI, one with Xbox button symbols and another with Switch button symbols.
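A tiny hypothetical illustration of that shared-codebase, per-platform-UI split (platform names and glyph strings are invented for the example):

```cpp
// Hypothetical: same game logic everywhere, only the UI glyph mapping changes per target.
#include <string>

enum class Platform { Xbox, WindowsPC, CloudSwitchClient };

std::string ConfirmButtonGlyph(Platform p)
{
    switch (p) {
        case Platform::Xbox:              return "A";                    // Xbox controller face button
        case Platform::WindowsPC:         return "Enter";                // keyboard default
        case Platform::CloudSwitchClient: return "A (Switch glyph set)"; // different button art, same logic
    }
    return {};
}
```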
 
AFAIK, they're already out there for Xbox and the GDK. They're just not available for PC yet. The actual issue is likely a lack of time allocated by management for developers to revamp a code section that already functions, just to optimize it for Series X|S. Especially when you're a smaller team, you're not going to be able to spend time reworking sections that weren't budgeted for.

Take a look at the recent R* GTA Online event [ https://forum.beyond3d.com/threads/...oading-times-70-reduction.62316/#post-2196166 ] to see how I/O and loading don't get nearly the attention they should for existing titles. "Things work. Don't change it." There is a lot of legacy code in I/O sections that goes unchanged for decades. It's one area that even Epic had to largely revamp for their Unreal engine, even though it was usually the first thing larger developers threw out and rewrote to improve performance on last-gen consoles.

Oh yeah for sure team size matters a ton for this kind of stuff. IIRC the team on Control for the next-gen ports was a "budget" team, it wasn't Remedy themselves as I think the Remedy team (not sure how many teams they have internally?) is focused on Crossfire X. So if it was a smaller team contracted to handle the ports, they'd be stressed for budget and then there's also the consideration one platform may've been prioritized compared to the other. Which if that were so I'd just guess was the PS5 because that's just one design to focus on and its tools are very similar to PS4's while MS's are more of a complete reworking from what's been said.

I'm always surprised when the smallest, most insignificant files or bits of code cause hangs, crashes, freezes, etc. in any software. Even more surprised when it's software from companies with a seemingly endless amount of resources they could throw at resolving the issue. It just goes to show, to some extent, the cost-cutting measures companies take to maximize profits. I'd give more benefit of the doubt to a smaller team dreaming big and writing some big, complex software, because they actually could overlook it, and maybe that can be argued somewhat for the massive devs and publishing houses given the sheer amount of code they have to write, but they should still want to put in the investment to catch some of the more embarrassing snafus that can be easily fixed once they're found. Something like the GTA Online example isn't going to mess with the rest of the code once it's fixed up, I think xD.

Nothing prevents a developer from offering a proper 30 FPS mode along with a proper 60 FPS mode. Basically, develop like you would for PC, but on console the developer chooses the graphical effects they feel best represent their game at a 30 FPS target and does the same for a 60 FPS target. Granted, cost could become an issue, but if you are a multiplatform developer supporting PC, these things are likely already built into your engine.

Regards,
SB

"Proper" 60 FPS seems like it's always going to be harder to target because it's simply more demanding. Small changes in minute settings could mean the difference between locked 60 and 60 with random chugs and drops, but then if you drop some settings yet too much further then you get locked 60 that looks kinda ugly.

It's not so much like on PC where I think PC gamers are just more trained to tune their settings and you can always power through to stable 60 by throwing whatever hardware you want at the problem....for the most part.


Shader compile issue a la Control on Series, maybe?

Switch ports are often handled by external teams and at a later date. Making a game that launches on Series X means having to spend time testing and optimizing for the S as well, and I think that's where much of the headache comes from. Especially considering that the Series games must be compatible with each other. In the case of a hypothetical Switch release, there is no requirement that your save files transfer or that any multiplayer works outside of the Switch ecosystem. So the opportunity for doing deep optimizations isn't there in the same way it would be with a Switch port.

Yeah, these are the steps devs have to take if they want their games on Series. I wonder if there's ever a time in the future Microsoft softens up on this, i.e. devs only need to work on a Series X version of the game and can rely on Game Pass xCloud streaming for play on Series S.

Which, since they are pushing streaming for tablets, smartphones and even smart TVs (eventually), is maybe where they want to go in the future, IMO: have that be the option for new games on Series for developers who don't think they'll be able to build a native Series S version of the title. But the economics need to be there (or not be there, maybe a better way to say it) to justify it.

For whatever reason, I think Switch Pro/Switch 2 will be a deciding factor here. If it can provide performance around or better than Series S with DLSS 2.0/3.0 (and that depends on a ton of factors, like # of Tensor Cores), then I can see Microsoft providing streaming-optimized versions of Series games for Series S as an option, if they don't want to do native versions for it. MS pushing their own streaming Series device might further encourage that.

However if the next Switch still falls well short of Series S even with DLSS factored in, well it's not like devs aren't going to develop games for it! So that automatically gives them a reason to do native Series S versions of those games too, maybe using the Switch Pro/Switch 2 versions as base for that, and that in turn gives Microsoft less incentive to ease off on native versions of games for Series S since devs will be making native versions for the even less powerful Switch Pro/Switch 2.

I might just be thinking wild on this, but it's a bit amusing how this could end up revolving around Nintendo x3.
 
I doubt any portable device like a hypothetical Switch 2 could match the performance of Series S overall. Xbox One levels of performance at lower resolutions are within reach because of that console's weak CPU, memory bandwidth and slow I/O. Series S has a pretty capable CPU, fairly fast memory and fast I/O. SD cards are limited to speeds of about 100 MB/s, right? And that's a theoretical maximum; I haven't benchmarked one in years (also I think all my card readers are USB 2 at most because I hardly use them anymore). These are the things that are really going to hold back a portable, even if it can hit compute performance per pixel that's close to Series S.
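A rough back-of-envelope on that gap, assuming ~100 MB/s for a UHS-I card and the Series consoles' published 2.4 GB/s raw SSD figure:

```cpp
// Back-of-envelope: time to stream roughly a Series S memory's worth of assets.
#include <cstdio>

int main()
{
    const double dataMB       = 8.0 * 1024.0;  // ~8 GB, about the game-available RAM on Series S
    const double sdCardMBs    = 100.0;         // ~UHS-I bus ceiling
    const double seriesSsdMBs = 2400.0;        // Series X|S raw SSD throughput

    std::printf("SD card:    %.0f s\n", dataMB / sdCardMBs);      // ~82 s
    std::printf("Series SSD: %.1f s\n", dataMB / seriesSsdMBs);   // ~3.4 s
    return 0;
}
```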
 
The versions tested were 1.000.006 on PS5 and 2.2.2103.100 on Xbox Series X|S.

Timestamps:
0:00 - Performance Mode
5:37 - Quality Mode

PS5 in performance mode uses a dynamic resolution with the highest resolution found being 3840x2160 and the lowest resolution found being 2560x1440. PS5 in performance mode uses a form of checkerboard rendering to reach the stated resolutions. Checkerboard 3840x2160 seems to be a common rendering resolution on PS5 in performance mode. PS5 in performance mode can sometimes exhibit half horizontal resolution artifacts and the UI elements can sometimes show visible stippling artifacts, both seem to be related to the use of checkerboard rendering.

Xbox Series X in performance mode uses a dynamic resolution with the highest resolution found being 3840x2160 and the lowest resolution found being approximately 2304x1296. Pixel counts between 3648x2052 and 3264x1836 seem to be common on Xbox Series X in performance mode.

Xbox Series S in performance mode uses a dynamic resolution with the highest resolution found being 1920x1080 and the lowest resolution found being 1280x720. Pixel counts at 1920x1080 seem to be common on Xbox Series S in performance mode.

PS5 in quality mode uses a dynamic resolution with the highest resolution found being 3840x2160 and the lowest resolution found being 3200x1800.

Xbox Series X in quality mode uses a dynamic resolution with the highest resolution found being 3840x2160 and the lowest resolution found being approximately 3413x1920.

Xbox Series S in quality mode uses a dynamic resolution with the highest resolution found being 2560x1440 and the lowest resolution found being 2048x1152.

In quality mode the resolution seems to rarely drop below the maximum resolution found on all three consoles.

PS5 and Xbox Series X have some graphical improvements in comparison to Xbox Series S such as higher quality textures in both modes and improved water quality in performance mode.

Quality mode has some graphical improvements over performance mode such as improved screen space reflections, higher quality shadows and improved destruction.

Strangely, the character models at 0:30 on PS5 are lower quality than on both Xbox consoles, and this applies to both modes.




Notes:
* PS5 Performance Mode Dynamic Resolution: 3840x2160 - 2560x1440 (exhibits weird CBR issues)
* XBSX Performance Mode Dynamic Resolution: 3840x2160 - 2304x1296 (3648x2052 - 3264x1836 mostly)
* XBSS Performance Mode Dynamic Resolution: 1920x1080 - 1280x720 (1920x1080 mostly)
* PS5 Quality Mode Dynamic Resolution: 3840x2160 - 3200x1800
* XBSX Quality Mode Dynamic Resolution: 3840x2160 - 3413x1920
* XBSS Quality Mode Dynamic Resolution: 2560x1440 - 2048x1152
* All three rarely drop below the highest resolution boundary in quality mode.
* PS5 exhibits lower quality assets in the cutscene @0:30 (bug maybe).
* PS5 has a slight performance edge in performance mode.
 
Hard to understand what's going on from that write-up -- are they saying the PS5 is doing checkerboard reconstruction, but the XSX is running native 4K? That seems like a gigantic gap in pixel count being pushed that's kinda being brushed over?
 
Hard to understand what's going on from that write-up -- are they saying the PS5 is doing checkerboard reconstruction, but the XSX is running native 4K? That seems like a gigantic gap in pixel count being pushed that's kinda being brushed over?
PS5 has some headroom as it mostly renders at max res:
Checkerboard 3840x2160 seems to be a common rendering resolution on PS5 in performance mode.
While XSX mostly renders sub-native:
XSX...3648x2052 - 3264x1836 mostly
So, yes, CBR was probably not the best choice on PS5, particularly considering the possible CBR artifacts. But at least the framerate is the most stable on PS5, I guess.
 