Digital Foundry Article Technical Discussion [2021]

Very demanding but understandable with the dual-rendering of worlds going on. Anyone remember the A Crack In The Slab mission in Dishonored 2? Pulling out your timepiece to show a window into the other time period would massively increase your GPU load.
That's just one of the reasons; the other is poor optimization (but we have to remember it's not a AAA game from a big studio).
 
I believe they explained that PC HDD throughput is actually a lot faster than the previous-gen consoles'.

I'm ok with the resolutions (although they can go pretty low), but I find the texture streaming during that selection screen pretty bad. It should load in the -1 and +1 of whatever one you're on, at least.
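(Just to picture the "-1 and +1" idea: a purely hypothetical sketch of a resident window around the current selection, in generic C++ rather than any real engine's streaming API - every name below is made up.)

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical carousel entry: one selectable item with a streamable texture.
struct CarouselEntry {
    std::string textureId;
    bool highResResident = false;
};

// Stand-ins for whatever the engine's actual streaming requests would be.
void requestHighRes(CarouselEntry& e) { e.highResResident = true;  std::cout << "stream in  " << e.textureId << "\n"; }
void releaseHighRes(CarouselEntry& e) { e.highResResident = false; std::cout << "stream out " << e.textureId << "\n"; }

// Keep selected-1, selected and selected+1 resident at full resolution,
// and let everything else fall back to low-res placeholders.
void updateStreaming(std::vector<CarouselEntry>& entries, std::size_t selected) {
    for (std::size_t i = 0; i < entries.size(); ++i) {
        const bool inWindow = (i + 1 >= selected) && (i <= selected + 1);
        if (inWindow && !entries[i].highResResident)
            requestHighRes(entries[i]);
        else if (!inWindow && entries[i].highResResident)
            releaseHighRes(entries[i]);
    }
}

int main() {
    std::vector<CarouselEntry> entries = {{"slot_01"}, {"slot_02"}, {"slot_03"}, {"slot_04"}};
    updateStreaming(entries, 1); // selecting the second entry streams in 0..2
    updateStreaming(entries, 3); // moving to the last entry drops 0..1 and streams in 3
}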

Fix the frame pacing and it looks like a solid showing.

I also assume it's not using DirectStorage; it'd be nice to know if it's using SFS (I doubt it) or any other DX12U features.

Well, if it were using SFS, we should be able to tell soon.

A game using SFS would show either a lower VRAM allocation on DX12U-compatible cards versus incompatible ones, higher-resolution textures, or both...

I doubt it's using any DX12U features beyond RT, though, as they didn't mention it. As the first next-gen-exclusive Xbox game, they would surely heavily market the usage of any DX12U feature. It's a shame; I hope we get to see games using the advanced features soon.
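(For anyone curious about the VRAM comparison on the PC side: you can at least check whether a given card reports sampler feedback support - the D3D12 feature SFS builds on - with a quick feature query. A minimal sketch; it assumes a recent Windows 10 SDK and linking d3d12.lib, and it only tells you what the hardware exposes, not what the game actually uses.)

#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0, IID_PPV_ARGS(&device))))
        return 1;

    // OPTIONS7 carries the sampler feedback tier (alongside the mesh shader tier).
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 options7 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                              &options7, sizeof(options7)))) {
        // D3D12_SAMPLER_FEEDBACK_TIER_NOT_SUPPORTED (0) means no SFS-style
        // streaming on this GPU; the 0.9 / 1.0 tiers mean the hardware can do it.
        std::printf("Sampler feedback tier: %d\n",
                    static_cast<int>(options7.SamplerFeedbackTier));
    }
    return 0;
}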
 
I'm not impressed at all. The dual POV is nice, but the graphics look nothing special imo. I guess that's because it has to run on "lower" PC hardware too?

I don't know, it's not a good look for xbsx imo...
 
I dunno. All UE4 games look the same to me, from the lowest Patreon games to the high-end multi-million-dollar stuff, with the majority of the visual difference down to asset quality (most of the time). It's fine-looking, but nothing that pops out as next gen imo.
 
That's just one of the reasons; the other is poor optimization (but we have to remember it's not a AAA game from a big studio).

Yeah, there's no reason this game should be performing at poor framerates on a 3080 of all cards, even with the two-worlds stuff going on. Apparently there are texture pop-in issues (I've seen a few in video reviews) and pixel glitches on spots of the screen at points as well. Knowing this game's coming to PS5 later on (probably in the fall?), I wonder if having to simultaneously develop for that, the Series systems, and PC impacted some of the late-stage polish the Series version should've gotten.

I also saw David Jaffe's video and he pointed out some of the animation/collision issues; maybe not so much "issues" as just the fact that they seem kind of jarring, given the amount of detail and graphical fidelity/realism the game has. And the resolution dropping to 900p at times is just kind of odd, even considering what the game's doing with its dual-worlds concept.

A lot of this stuff can be patched up in an update, but in a way I also kind of agree with Jaffe on whether MS provided enough polish/technical support to Bloober Team for this game, considering the attention it would surely be getting and how small a team Bloober are. This game is already quite good visually (particularly in regards to the ghost-world aesthetics), but it could be a genuine visual showcase if a bit more technical assistance from MS were provided.

I dunno. All UE4 games look the same to me, from the lowest Patreon games to the high-end multi-million-dollar stuff, with the majority of the visual difference down to asset quality (most of the time). It's fine-looking, but nothing that pops out as next gen imo.

Dunno, it really depends on the game. SFV and GG Xrd are UE4 games, I believe, and they look very distinct from other UE4 games.

Well, if it were using SFS, we should be able to tell soon.

A game using SFS would show either a lower VRAM allocation on DX12U-compatible cards versus incompatible ones, higher-resolution textures, or both...

I doubt it's using any DX12U features beyond RT, though, as they didn't mention it. As the first next-gen-exclusive Xbox game, they would surely heavily market the usage of any DX12U feature. It's a shame; I hope we get to see games using the advanced features soon.

Yeah, I definitely think it's not using certain parts of the SSD I/O stack like SFS; SFS is meant to prevent exactly the type of texture pop-in this game has.
 
People just have to get used to it. In most games the PS5 either performs similarly to or better than the XSX, and Hitman 3 was just an outlier.
This post will age like milk. The consoles will trade blows, but PS5 won't win most battles or will be equal most of the time. Like Yoshida once said "let them dream".
BTW, John said The Medium probably wouldn't be possible on last-gen consoles' HDDs, but an SSD is only recommended, not required, on PC.
PC minimum specs have higher RAM requirements than the last-gen consoles. More RAM -> more data/levels can be kept in memory.
 
I am describing the present reality, nothing more. Thinking XSX is destined to flat out beat PS5 in the future in most comparisons is pure fantasy.

There's a secret sauce devkit upgrade coming for the SeriesX that will give it a performance boost in multiplatform titles compared to the PS5 (whose devkit upgrades are totally incapable of doing the same).
It's coming any time now.


This post will age like milk. The consoles will trade blows, but PS5 won't win most battles or will be equal most of the time. Like Yoshida once said "let them dream".
Dreaming, yes. It's called the "denial phase".
Because having both consoles perform virtually the same, and having them compete on quality of titles + ecosystem + QoL features rather than e-peen pixel metrics no one can discern in realistic scenarios, apparently became a bad thing for consumers...
Why anyone keeps feeling hurt every time DF's face-off videos say the SeriesX and PS5 are virtually tied is something I have a really hard time understanding.


Knowing this game's coming to PS5 later on (probably in the fall?), I wonder if having to simultaneously develop for that, the Series systems, and PC impacted some of the late-stage polish the Series version should've gotten.
Looking at PC performance, it does look like this is a console-first-and-foremost title indeed. I couldn't blame the devs for this choice, especially given the really small production budget.
 
There's a secret sauce devkit upgrade coming for the SeriesX that will give it a performance boost in multiplatform titles compared to the PS5 (whose devkit upgrades are totally incapable of doing the same).
I really couldn't care less about console wars, yet I find it mildly entertaining reading about this stuff.

And one thing I've been thinking about is exactly that: sure, X devkits will improve, but surely the PS5 devkits will also improve in due time? I can't see how things will improve on one side, but not the other?
 
And one thing I've been thinking about is exactly that: sure, X devkits will improve, but surely the PS5 devkits will also improve in due time? You can't tell me that things will improve on one side, but not the other?

There's a set of beliefs that has become progressively ingrained in the console sub-forum over the past while:
- Maximum theoretical TFLOP throughput and maximum memory bandwidth are the most defining metrics for comparison.
- Mark Cerny lied / was dishonest when claiming the PS5's GPU would be running at 2.23GHz most of the time (because it goes against the values shown in a certain gospel).
- Whenever the GPU reaches 2.23GHz, the console's power consumption must be ridiculously high.
- The SeriesX's GPU is more advanced because it's more RDNA2 than the PS5's (also because Sony's particular customizations that deviate from RDNA2 must be worse than AMD's standard set).

With all that, they were expecting the SeriesX to be some 20% faster or more, depending on how dishonest Cerny/Sony were being with the 2.23GHz figure and assuming the GPU had to downclock to 2GHz or below (again, that gospel...).

Real life results are challenging all that.
In these Digital Foundry face-offs on multiplatform games, the SeriesX and the PS5 are showing similar framerates + render resolutions + effect quality levels (which should be good for everyone, but whatever), both consoles are consuming around 200W, and it turns out those preposterous 2.23GHz are actually conservative compared to AMD's own RDNA2 PC GPUs on the same process.


This latest secret sauce devkit idea is probably some form of coping mechanism until acceptance can be reached.



BTW, Mark Cerny wasn't dishonest when he mentioned the GPU clocks, nor when he warned against overvaluing max theoretical TFLOPs as a means of comparison, nor when he said the PS5 was coming with an SSD, nor any of the other times it's been suggested here that his statements carry a veil of deception.
It's almost like he's the head of Sony's console hardware architecture and not a PR/marketing talking head, and perhaps we should take his statements at face value instead of trying to determine in which way he's actually lying.
 
So much rage.
MAXX rage?

 
You seem to be very salty about the comparisons. I am fine with PS5 being equal to Series X ports. I just disagree with him saying PS5 will be better or equal most of the time.

Then we agree on the equal part and I apologize for assuming you were stating otherwise.

I have zero problems with the comparisons. I love that both consoles are getting virtually the same output (even if it makes DF's job a little more boring on that front).
It's the best for everyone.
 
Yeah, there's no reason this game should be performing at poor framerates on a 3080 of all cards, even with the two-worlds stuff going on. Apparently there are texture pop-in issues (I've seen a few in video reviews) and pixel glitches on spots of the screen at points as well. Knowing this game's coming to PS5 later on (probably in the fall?), I wonder if having to simultaneously develop for that, the Series systems, and PC impacted some of the late-stage polish the Series version should've gotten.

The thing about something like Unreal Engine is that it's a hugely complex bit of kit, and it's easy to use features in a way that impacts frame rates or causes performance drops at particular moments. There's a certain amount of setup work involved with a scene (including per frame), and while multiple cameras in a scene are common, multiple scenes aren't, and I imagine you could easily lose a lot of performance. Things like the order in which you do the different steps of the different scenes, how you manage LOD as you change the balance between scenes, how GPUs are naturally less efficient at lower resolutions (multiple scenes = lower resolution per scene), potentially doubling the draw calls, being stalled at the front end of the GPU more... I dunno, stuff like that.
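(To put a rough picture on the "multiple scenes" cost - I have no idea how Bloober actually implemented it, but the textbook UE4 way to get a second view rendered every frame is a scene capture into a render target, which re-runs culling and draw submission for that view on top of the main one. A hypothetical sketch; the class name and target size are invented.)

// Hypothetical UE4 actor - not Bloober's actual approach. It adds a second view
// of the world that is captured into an off-screen texture every frame, which is
// roughly where the extra per-frame scene setup and draw calls come from.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "DualViewActor.generated.h" // invented class name

UCLASS()
class ADualViewActor : public AActor
{
    GENERATED_BODY()

public:
    ADualViewActor()
    {
        // Second "camera": a scene capture that renders the world into a texture.
        OtherWorldCapture = CreateDefaultSubobject<USceneCaptureComponent2D>(TEXT("OtherWorldCapture"));
        RootComponent = OtherWorldCapture;
        OtherWorldCapture->bCaptureEveryFrame = true; // a full extra scene pass per frame
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();

        // Render target the second view draws into; a material or UMG widget would
        // then composite it next to (or over) the main view for the split look.
        OtherWorldTarget = NewObject<UTextureRenderTarget2D>(this);
        OtherWorldTarget->InitAutoFormat(960, 1080); // half-width target, made-up size
        OtherWorldCapture->TextureTarget = OtherWorldTarget;
    }

private:
    UPROPERTY()
    USceneCaptureComponent2D* OtherWorldCapture = nullptr;

    UPROPERTY()
    UTextureRenderTarget2D* OtherWorldTarget = nullptr;
};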

Optimisation isn't just about a dude looking at a screen of code and juggling some operations around - in an engine like Unreal there are a lot of performance implications hidden behind a tick box, or a slider, or linking this effect to that effect, or whatever. It's going to be tough for a small team pushing a complex engine in an uncommon way.

Even the mighty Squaresoft ran into massive texture streaming issues with UE for FFVII remake at first! And they were a huge team with a huge budget.

A lot of this stuff can be patched up in an update, but in a way I also kind of agree with Jaffe on whether MS provided enough polish/technical support to Bloober Team for this game, considering the attention it would surely be getting and how small a team Bloober are. This game is already quite good visually (particularly in regards to the ghost-world aesthetics), but it could be a genuine visual showcase if a bit more technical assistance from MS were provided.

It's quite hard to parachute people into a team to fix things. You need to develop some familiarity with the project, be able to work with the people already on the team, not interrupt the workflow of people who are crunching for release, and also have permission from the team to do so. They may feel a delicate balance would be disrupted at a difficult time. A lot of the time, simply having access to people who can answer questions is the most important thing - but you have to have worked out what it is you want to ask!

I really couldn't care less about console wars, yet I find it mildly entertaining reading about this stuff.

And one thing I've been thinking about is exactly that: sure, X devkits will improve, but surely the PS5 devkits will also improve in due time? I can't see how things will improve on one side, but not the other?

They'll surely improve all round. With the XSX, the issue is that the compute units are somewhat less well utilised than on PS5, despite the architectural similarity. The question is whether better tools (which allow for insight and fine-tuning) will let this improve, relatively speaking, on XSX. It always takes more work to effectively use something that's wider - but there's a point at which either you can't, or it's not worth it.
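(Rough numbers, going by the public specs: 52 CUs at 1.825GHz versus 36 CUs at up to 2.23GHz is about 44% more CUs running roughly 18% slower, which nets out to ~12.1 vs ~10.3 TFLOPs peak - so the XSX only sees its paper advantage if it can keep all of that extra width fed.)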

In some areas PS5 will always have an advantage, though. For example, fillrate where you're not bound by main memory bandwidth. No amount of squeezing more maths out of the compute units is going to change that. Unless maybe you moved from, e.g., a rasterised particle system to a tiled compute-based one (no, I don't know how you'd do that, I just read about it from someone like sebbbi).
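(Again going by the public specs - 64 ROPs on both - peak pixel fillrate works out to roughly 64 × 2.23 ≈ 143 Gpixels/s on PS5 versus 64 × 1.825 ≈ 117 Gpixels/s on SeriesX, a bit over 20% in PS5's favour whenever memory bandwidth isn't the limiter.)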
 