Digital Foundry Article Technical Discussion [2021]

And one thing I've been thinking about is exactly that: sure, X devkits will improve, but surely the PS5 devkits will also improve in due time? I can't see how things will improve on one side, but not the other?
If we think of devkits as what allows developers to fully tap the potential of the hardware, then at best you're operating at theoretical specs.

So there is a limit as you approach 100%; the gains progressively flatten out towards an asymptote.

But the limit of how poor a devkit is can really be something. E.g. PS4 shipped with GNM, while XBO shipped with only DX11 and didn't get a GNM competitor until nearly 1.5 years later. They started off the generation with a pixel count gap of well over 100% (e.g. 720p vs 1080p is 125% more pixels), and it eventually closed to the expected ~30% deficit. But PS4 couldn't realise the same amount of gains, for obvious reasons: you can't extract more power if you're already extracting nearly all of it.
 
The thing about something like Unreal Engine is that it's a hugely complex bit of kit, and it's easy to use features in a way that can impact frame rates or cause performance drops in particular moments. There's a certain amount of setup work involved with a scene (including per frame), and while multiple cameras in a scene are common, multiple scenes aren't, and I imagine you could easily lose a lot of performance. Things like the order in which you do different steps of the different scenes, how you manage LOD as you change the balance between scenes, how GPUs are naturally less efficient at lower resolutions (multiple scenes = lower resolution per scene), potentially doubling the draw calls, being stalled at the front end of the GPU more... I dunno, stuff like that.

Optimisation isn't just about a dude looking at a screen of code and juggling some operations around - in an engine like Unreal there are a lot of performance implications hidden behind a tick box, or a slider, or linking this effect to that effect, or whatever. It's going to be tough for a small team pushing a complex engine in an uncommon way.

Even the mighty Square Enix ran into massive texture streaming issues with UE for the FFVII remake at first! And they were a huge team with a huge budget.



It's quite hard to parachute people into a team to fix things. You need to develop some familiarity with the project, be able to work with the people already on the team, not interrupt the workflow of people who are crunching for release, and also have permission from the team to do so. They may feel a delicate balance would be disrupted at a difficult time. A lot of the time simply having access to people who can answer questions is the most important thing - but you have to have worked out what it is you want to ask!



They'll surely improve all round. With the XSX the issue is that the compute units are somewhat less well utilised than on PS5, despite the architectural similarity. The question is whether better tools (which allow for insight and fine tuning) will let this improve, relatively speaking, on XSX. It always takes more work to effectively use something that's wider - but there's a point at which either you can't or it's not worth it.

In some areas PS5 will always have an advantage though. For example, fillrate where you're not bound by main memory bandwidth. No amount of squeezing more maths out of the compute units is going to change that. Unless maybe you moved from e.g. a rasterised particle system to tiled compute based one (no, I don't know how you'd do that, I just read about it from someone like sebbbi).

I think all of that's very reasonable tbh. I kind of feel my own expectations of the game (gameplay more in line with classic RE/Silent Hill, for example) not being met is leading to some disappointment, which might be causing myself and others to be a bit harsher on the game than is warranted. It's definitely true Bloober aren't a large team at all, but in some ways I wonder if team size is even particularly a factor here. Ninja Theory had a very small team for Hellblade yet were able to make something that was a visual showcase for PS4 when it came out (2017, I think?). The Medium seems pretty linear and similar in scope to Hellblade, the budgets are roughly similar, etc. But I don't recall Hellblade having some of the technical issues The Medium seems to be having.

But that said, there are other things I have to keep in consideration. The lockdowns - I don't know how tight they were in Poland, but I can see them impacting a smaller dev like Bloober Team harder than they did, say, CDPR. And travel restrictions, etc. would play a part in preventing technical assistance from travelling out, on top of the factors you mention. Though could remote assistance online have at least been an option? Actually, I've been wondering for a bit whether Microsoft has an equivalent to Sony's ICE teams and SN Systems for assisting 3P games, though I've heard they only provide those teams to games they're either publishing or have some timed/full exclusivity on. Microsoft isn't publishing The Medium, so even if they have such teams they may not have been deployed - though if they do have them, I'd think they would provide that assistance to Bloober Team, considering the position this game has in the ecosystem at the moment.

On a different note, about finding ways to offset the pixel fillrate disadvantage into something more of a strength with Microsoft's own design: couldn't the TMUs be used to do what you suggest there? Like using sprites (basically sets of animated textures) mixed in with regular particle effects. They would probably have to be canned animations, so not suitable for all types of games, but you just capture the particles as a series of images from various POVs, then make textures out of those that can be streamed in and processed by the game to kind of give a "boost" to regular particle effects, if they want to make up a bit for the lower pixel/particle fillrate compared to Sony?

It's kinda like the prerendered sprite techniques from games like DKC and RE, just taken to a much larger scale; it would need to rely a lot on streaming the data in from the SSD though, so you'd need good latency - I figure this is where things like SFS would really come in handy.
 
Why anyone keeps feeling hurt every time DF's face-off videos say the SeriesX and PS5 are virtually tied is something I have a really hard time understanding.
Or even surprised. The theoretical teraflops difference isn't significant. One GPU is wider/slower and one is narrower/faster. What were people expecting? ¯\_(ツ)_/¯.
 

Even funnier, both systems are modern AMD RDNA2 designs working within a 200-220 W power footprint. So performance similarities are bound to happen, even with their own particular bespoke implementations.
 
Yup. There will certainly be engines early in this console generation that favour wider/slower or narrower/faster, but like in all previous generations, they'll evolve to better accommodate both approaches and the differences will narrow over time.

All three new consoles are great. :yes:
 
We return to Night City armed with patch 1.10 - the first major update since launch - to see the state of the game's performance. Base PS4 and Xbox One are in the spotlight, as the two worst performing versions. Can such an early update turn around the fortunes here? Tom and John reunite to see the evidence.
 
And just today they released a new patch.
Time for another video :LOL:
They did say that two patches would be out in January. 1.10 and the 1.11 hotfix it is. The patch notes state:

Hotfix 1.11 is available on PC, consoles and Stadia!

This update addresses two issues that appeared after Patch 1.1:
  • Item randomization has been restored to the previous state.
The save/load loot exploit will be investigated further.
  • A bug in Down on the Street quest has been fixed.
It occurred for some players during a holocall with Takemura, when using a save made on version 1.06 with Down on the Street quest in progress at "Wait For Takemura's call" objective. After loading such a save on version 1.1, the holocall would lack dialogue options and block interactions with other NPCs.

14GB on PS4/PS5. Big patch for minor bug fixes.
 
In some areas PS5 will always have an advantage though. For example, fillrate where you're not bound by main memory bandwidth. No amount of squeezing more maths out of the compute units is going to change that. Unless maybe you moved from e.g. a rasterised particle system to tiled compute based one (no, I don't know how you'd do that, I just read about it from someone like sebbbi).

Which you will. UE5 is a compute rasterizer (at least it was when we saw it -- the reasonable guess is that the implementation will be specific to each platform to maximize its strengths, though). Compute-rendered particles, as you said, are becoming fairly common; compute is used to cluster and cull in meshlet-based (future) or clustered/tiled (present) renderers, which spend compute to save costs on the fixed-function side; and Frostbite just shipped a game with a new compute-based, order-independent spline renderer (and compute shader physics!) for hair...

Ultimately, the Xbox has every advantage (even if the advantage is relatively small) and a forward-looking architecture. I doubt a Hitman-style 50% res advantage will be the norm, but this first wave of games on 5+ year old engines won't be either.
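If it helps make the "spend compute to cull" part concrete, here's a very rough CPU-side sketch (Python/NumPy, with made-up names and toy data, not anyone's actual implementation): test cluster bounding spheres against the frustum planes and keep only the survivors. On a GPU this per-cluster test would run in a compute shader, with the survivors compacted into an indirect draw buffer for the fixed-function side.

```python
import numpy as np

def cull_clusters(centers, radii, planes):
    """Frustum-cull cluster bounding spheres.

    centers: (N, 3) cluster bounding-sphere centres (world space)
    radii:   (N,)   bounding-sphere radii
    planes:  (6, 4) frustum planes as (nx, ny, nz, d), normals pointing inward

    Returns a boolean mask of clusters that survive culling.
    """
    # Signed distance of every centre to every plane: shape (N, 6).
    dist = centers @ planes[:, :3].T + planes[:, 3]
    # A sphere is visible if it is not fully behind any plane.
    return np.all(dist >= -radii[:, None], axis=1)

# Toy usage with made-up data: 10,000 clusters, a crude box-shaped "frustum".
rng = np.random.default_rng(0)
centers = rng.uniform(-100, 100, size=(10_000, 3))
radii = rng.uniform(0.5, 2.0, size=10_000)
planes = np.array([
    [ 1, 0, 0,  50], [-1, 0, 0,  50],   # left / right
    [ 0, 1, 0,  50], [ 0, -1, 0,  50],  # bottom / top
    [ 0, 0, 1,   0], [ 0, 0, -1, 200],  # near / far
], dtype=float)
visible = cull_clusters(centers, radii, planes)
print(f"{visible.sum()} of {len(visible)} clusters survive culling")
```

Point being, this work scales with cluster count rather than pixel count, which is exactly the kind of job a wide, compute-heavy GPU is happy doing.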
 
4K is 44% more pixels than 1800p, not 50%. Also keep in mind that in the Mendoza level XSX has drops to 40-ish fps where PS5 is at 60 ;)
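For reference, the pixel maths behind that 44% figure (assuming the usual 3200x1800 for "1800p"):

```python
# Native 4K vs "1800p" pixel counts (assuming 3200x1800 for 1800p).
uhd   = 3840 * 2160   # 8,294,400 pixels
p1800 = 3200 * 1800   # 5,760,000 pixels

ratio = uhd / p1800
print(f"4K has {ratio:.2f}x the pixels of 1800p ({(ratio - 1) * 100:.0f}% more)")
# -> 4K has 1.44x the pixels of 1800p (44% more)
```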
 
On a different note, about finding ways to offset the pixel fillrate disadvantage into something more of a strength with Microsoft's own design: couldn't the TMUs be used to do what you suggest there? Like using sprites (basically sets of animated textures) mixed in with regular particle effects. They would probably have to be canned animations, so not suitable for all types of games, but you just capture the particles as a series of images from various POVs, then make textures out of those that can be streamed in and processed by the game to kind of give a "boost" to regular particle effects, if they want to make up a bit for the lower pixel/particle fillrate compared to Sony?

It's kinda like the prerendered sprite techniques from games like DKC and RE, just taken to a much larger scale; it would need to rely a lot on streaming the data in from the SSD though, so you'd need good latency - I figure this is where things like SFS would really come in handy.

You certainly can use animated textures, but there are probably going to be some costs in terms of memory, decompression (if stored as video and decompressed to a GPU friendly format on the fly), perhaps overhead in telling which frame to use in which area .. plus it would still need blending with the rest of the scene. Billboards and pre-canned 2D animations acting as 3D tend to break down when you get too close or move the camera around.
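To make the "which frame to use" bookkeeping concrete, here's a minimal sketch (Python; the names and layout are my own, not any engine's actual API) of how a flipbook particle typically maps its age to a tile in a sprite-sheet atlas. In practice this maths would live in a vertex or pixel shader.

```python
def flipbook_uv(age, lifetime, cols, rows, u, v):
    """Map a particle's local UV (u, v in [0, 1]) into the atlas tile
    for its current animation frame.

    age / lifetime selects which of the cols*rows pre-rendered frames
    to show; in a shader this would run per vertex or per pixel.
    """
    total = cols * rows
    frame = min(int(age / lifetime * total), total - 1)
    col = frame % cols
    row = frame // cols
    return ((col + u) / cols, (row + v) / rows)

# A particle halfway through its life on an 8x8 (64-frame) sheet
# samples frame 32, i.e. the first tile of row 4.
print(flipbook_uv(age=0.5, lifetime=1.0, cols=8, rows=8, u=0.0, v=0.0))
# -> (0.0, 0.5)
```

You can also blend between frame N and N+1 to hide the stepping, at the cost of a second texture sample.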

The best solution for XSX is probably the same as for a lot of other stuff moving forward - moving as much to compute as possible, staying in LDS and cache as much as possible, and probably also running asynchronously with the 3D pipeline. It's just in the case of Xbox it might benefit more than most other GPUs at the moment.

I was looking at an RDNA2 optimisation guide a few days ago (out of interest, not because I'm in a position to even begin to use it!), and even for a full screen pass compute is a little faster than pixel shaders running on a full screen quad. The benefits from a good compute based particle system would likely be much greater still, and much faster than using ROPs (even tiled to maximise cache hits - plus you wouldn't block other 3D from using them).
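As a very rough illustration of what a compute-style particle path looks like, here's a CPU/NumPy stand-in I made up (not how a real compute shader with per-tile LDS and async queues would actually be written): bin the particles by screen tile and accumulate them additively into a buffer, never touching the ROPs.

```python
import numpy as np

def splat_particles_tiled(xy, brightness, width, height, tile=16):
    """Additively splat point particles into a float framebuffer,
    processing them tile by tile (a stand-in for a compute shader
    that accumulates a tile's worth of particles before writing out).
    """
    frame = np.zeros((height, width), dtype=np.float32)
    px = np.clip(xy[:, 0].astype(int), 0, width - 1)
    py = np.clip(xy[:, 1].astype(int), 0, height - 1)

    # Bin particles by screen tile and sort so each tile's work is
    # contiguous - this mirrors the binning pass a GPU version would do.
    tiles_x = (width + tile - 1) // tile
    tile_id = (py // tile) * tiles_x + (px // tile)
    order = np.argsort(tile_id)
    px, py, brightness = px[order], py[order], brightness[order]

    # Accumulate additively; np.add.at handles overlapping particles.
    np.add.at(frame, (py, px), brightness)
    return frame

# Toy usage: 100,000 particles scattered over a 1280x720 buffer.
rng = np.random.default_rng(1)
xy = rng.uniform([0, 0], [1280, 720], size=(100_000, 2))
frame = splat_particles_tiled(xy, np.full(100_000, 0.1, dtype=np.float32), 1280, 720)
print(frame.shape, float(frame.sum()))   # -> (720, 1280) and roughly 10,000
```

The tile sort is the bit that maps to keeping a tile's worth of work resident in LDS/cache before writing the result out once.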

I won't pretend to know what most of this is saying, but you might also find the following page interesting! https://gpuopen.com/performance/

Which you will. UE5 is a compute rasterizer (at least it was when we saw it -- the reasonable guess is that the implementation will be specific to each platform to maximize its strengths, though). Compute-rendered particles, as you said, are becoming fairly common; compute is used to cluster and cull in meshlet-based (future) or clustered/tiled (present) renderers, which spend compute to save costs on the fixed-function side; and Frostbite just shipped a game with a new compute-based, order-independent spline renderer (and compute shader physics!) for hair...

Ultimately, the Xbox has every advantage (even if the advantage is relatively small) and a forward-looking architecture. I doubt a Hitman-style 50% res advantage will be the norm, but this first wave of games on 5+ year old engines won't be either.

I'm really looking forward to seeing UE5 in the wild. It seems very unlikely that Epic haven't been developing it keeping their conversations with Sony and MS in mind, not to mention AMD. It does look like my Kepler GTX 680 and its weak compute would finish it off though ...

I'd agree about Xbox - I don't think the differences will be large or always in its favour, but I think Xbox probably has a little more to gain from future shifts than the PS5 (which will continue to be an extremely solid console till the end, of course).
 

In the end it should - it has the more capable GPU. It would be different if, say, both were at 10 TF, with one going wide/slow and the other narrow/fast but with the same compute spec (same TF).
 
....
Ultimately, the Xbox has every advantage (even if the advantage is relatively small) and a forward-looking architecture. I doubt a Hitman-style 50% res advantage will be the norm, but this first wave of games on 5+ year old engines won't be either.
I'd agree about Xbox - I don't think the differences will be large or always in its favour, but I think Xbox probably has a little more to gain from future shifts than the PS5 (which will continue to be an extremely solid console till the end, of course).


Just as an added point on the Hitman performance comparison between Xbox and PlayStation: one big reason for this is likely Intel's sponsorship of the game (link below).

Some extracts from the partnership announcement:

"Together with Intel, we are working to optimize the game for launch and beyond, with updates, tweaks and improvements coming throughout 2021 that will improve the experience of playing on a high-end PC and multi-core CPUs."

Although that quote only directly mentions CPU optimizations, they also mention adding VRS to the PC version of the game. On top of this, Hitman 3 uses DX12, so all of these improvements made in partnership with Intel presumably largely transfer over to the Xbox versions of the game, but wouldn't necessarily transfer to the PS5 version.



IOI x Intel - IO Interactive



Nothing concrete, I know, but it might offer some insight into some of the performance differences. Or maybe not.
 
Or even surprised. The theoretical teraflops difference isn't significant. One GPU is wider/slower and one is narrower/faster. What were people expecting? ¯\_(ツ)_/¯.
I think the problem was noise around PS5 issues and Xbox fans talking those up (I remember being told of 'significant' differences in XSX's favour). Ironically, this gen the gap is smaller and 'diminishing returns' are real, so even if PS5 is running at 80% of its potential vs XSX at 50% (figures from my ass), and eventually both run at 90%, the difference will not be significant, and by then we'll be talking next gen.

4K is 44% more pixels than 1800p, not 50%. Also keep in mind that in the Mendoza level XSX has drops to 40-ish fps where PS5 is at 60 ;)
I know, people are totally ignoring the fact that the gap is actually under 44% - we just don't know the actual figure. Certainly PS5 could have run above 1800p and still matched the XSX, performance issues included.
 
We can play with the gap in this one scene from Alex's video comparing the XSX to the 5700 XT and 5700; doing the same for PS5 vs XSX, the XSX advantage is around 10%.
 

The best solution for this game would have been dynamic resolution. People say the game wasn't rushed (and I'm not saying it was) and that it wasn't too badly hit by Covid-19 issues, but the bottom line is: why didn't they include dynamic resolution? Because they didn't have the time/resources... therefore the game is not fully optimised on all systems.

That's not to knock the game, it does look very nice... it's just far from a data point that proves anything, IMHO.
 
Their engine doesn't support dynamic resolution, so that's the reason. But yeah, it should be the way to go (on the other hand, I bought the game yesterday and it's sharp on PS5, so dynamic resolution would be better but not something essential, IMHO).
 
Layman question about dynamic res:

Does an engine change this on a per-frame basis? Let's say:
- frame 1: the engine thinks it will need to render at 1800p in order to stay within 16ms
- frame 2: the engine thinks it will need to render at 1440p in order to stay within 16ms
- frame 3: can be done at 4k etc etc etc?
 

Yeah, it's typically calculated on a frame-by-frame basis. The implementations I've read about all talk about measuring how long it takes to do your current frame, then guesstimating how long you'll need to do the next. So you adjust the resolution of your next frame based on how long the last one took.

Something like a smoke bomb going off, or an object moving onto or off screen, or a flamethrower firing, tends to grow or shrink over time, so the cost grows or reduces over a number of frames. That way the current frame is somewhat indicative of the next. If you're conservative enough you'll always or almost always make the next frame in time; if you push it a bit close you might still miss completing the next frame in time, and have to drop a frame or tear (normally at the top part of the screen, so it shouldn't be too visible).

Some dynamic res games have a point they won't reduce below - a minimum threshold - at which point they'll drop frames or tear. I think it's likely that below a certain point the time savings from shrinking your res will reduce, as there are certain costs that don't reduce linearly with resolution (GPUs tend to become less efficient at lower resolutions).

Plus there's probably a point where being blurry as heck is seen as worse than losing a few frames....
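For anyone curious, here's a layman-level sketch of that heuristic (Python; the 60 fps budget, the 0.6 floor and the square-root scaling are assumptions of mine, not any engine's actual controller): measure the last frame's GPU time, scale the next frame's resolution accordingly, and clamp to a minimum.

```python
def next_resolution(last_frame_ms, current_scale,
                    budget_ms=16.6, min_scale=0.6, max_scale=1.0,
                    headroom=0.9):
    """Pick the resolution scale for the next frame from the cost of the last.

    Assumes GPU cost is roughly proportional to pixel count (scale^2),
    which is only approximately true: fixed per-frame costs mean the
    savings shrink at low resolutions, which is one reason engines
    clamp to a minimum scale rather than dropping indefinitely.
    """
    # How much headroom did the last frame leave (or overshoot)?
    ratio = (budget_ms * headroom) / max(last_frame_ms, 1e-3)
    # Pixel count scales with the square of the per-axis scale.
    new_scale = current_scale * ratio ** 0.5
    return min(max(new_scale, min_scale), max_scale)

# Example: targeting 60 fps at up to 4K. Expensive frames pull the scale
# down toward the 0.6 floor; cheap frames push it back toward native.
scale = 1.0
for gpu_ms in [16.0, 20.0, 20.0, 12.0, 10.0]:
    scale = next_resolution(gpu_ms, scale)
    print(f"last frame {gpu_ms:4.1f} ms -> next frame at {int(3840 * scale)}x{int(2160 * scale)}")
```

The clamp at min_scale is the "won't reduce below" threshold mentioned above; real implementations typically also smooth the scale over several frames so the resolution doesn't visibly pump.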
 