Current Generation Games Analysis Technical Discussion [2024] [XBSX|S, PS5, PC]

GPU demanding as expected, but excellent consistency:

[performance charts attached]

Edit: Eh, maybe spoke too soon. Looks like it has a memory leak issue.


Techspot said:
While the game menus and hardware config options are flawless—Nixxes knows what they are doing—I really hate the fact that you can't skip cutscenes or dialogues. "Compiling shaders" and stuttering have been a problem for many PC releases, but it's a total non-issue in Ghost of Tsushima. While there's a short shader compilation stage of around ten seconds when you first load into the map, the results are cached, so subsequent game loads are much faster, unless you change graphics card or the GPU driver.
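On the shader cache point, here's a minimal sketch of why changing the graphics card or driver forces a recompile: disk caches of compiled shaders are typically keyed on the shader plus the GPU/driver identity, so if any of those change the lookup misses and the slow compile path runs again. The key layout and names below are illustrative assumptions, not the game's actual scheme.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical cache key: a compiled blob is only reused if the shader
# source AND the GPU/driver identity match what produced it.
def shader_cache_key(shader_source: str, gpu_name: str, driver_version: str) -> str:
    payload = json.dumps({
        "shader": hashlib.sha256(shader_source.encode()).hexdigest(),
        "gpu": gpu_name,            # e.g. "GeForce RTX 4090" (assumed format)
        "driver": driver_version,   # e.g. "552.44" (assumed format)
    }, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def compile_shader(shader_source: str) -> bytes:
    # Stand-in for the real (expensive) driver compile step.
    return hashlib.md5(shader_source.encode()).digest()

def load_or_compile(shader_source: str, gpu_name: str, driver_version: str,
                    cache_dir: Path = Path("shader_cache")) -> bytes:
    cache_dir.mkdir(exist_ok=True)
    blob_path = cache_dir / shader_cache_key(shader_source, gpu_name, driver_version)
    if blob_path.exists():                # warm cache: fast subsequent loads
        return blob_path.read_bytes()
    blob = compile_shader(shader_source)  # slow path: first load, new GPU, or new driver
    blob_path.write_bytes(blob)
    return blob
```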

I'm sure there will be a cutscene skip mod coming in short order.

Techspot said:
Our VRAM testing shows that Ghost of Tsushima is demanding but reasonable with its memory requirements. While it allocates around 10 GB, 8 GB is enough even for 4K at highest settings, which is also confirmed by our RTX 4060 Ti 8 GB vs 16 GB results—there's no performance difference between the two cards, even when maxed out. Once you turn on Frame Generation, the VRAM usage increases by 2-3 GB; DLSS Frame Gen uses a few hundred megabytes more than FSR Frame Gen. For lower resolutions, the VRAM requirements are a bit on the high side, because even 1080p at lowest settings reaches around 5 GB, which could make things difficult for older 4 GB-class cards.

VRAM is a little high, as with most Nixxes releases, but not ridiculously so. 8GB cards will be fine, especially at the resolutions they will actually play at (once they fix that memory leak issue, that is).
 
How do you determine it's high? It's not like anyone else is making the same port using less RAM! Surely there's no basis for comparison.

I'm just summarizing Techspot's observations and noting Nixxes' general past trend with VRAM optimization; "a little high" is not exactly a withering critique.

It also has barely upgraded textures from the PS4 Pro version, a system with under 5GB allocated to devs. It's not going to cripple 8GB cards, which is the most important thing, but yeah, I think noting it's on the edge for 8GB cards at 4K is at least a fair observation considering its origins.
 
I'm not complaining. I just don't understand what the point of reference is. You mention 5 GB for PS4 Pro, but that's rendering a much lower resolution. What's the resolution delta in comparison to the RAM requirement?
 
What's the resolution delta in comparison to the RAM requirement?

It's rendering at 2160p (correction: 1800p) checkerboarded on the Pro in its resolution mode. They only have VRAM data for the 4090, which is going to allocate more VRAM than it actually needs, and Techspot didn't specify how 'VRAM usage' was actually measured anyway.

The difference between 1440p and 2160p with max details is only around 1.5 GB.

[VRAM usage chart attached]
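As a rough back-of-the-envelope check on why the 1440p-to-2160p delta is modest: only the screen-sized render targets scale with pixel count, while textures, geometry, and caches don't. The buffer list and bytes-per-pixel figures below are illustrative assumptions, not the game's actual G-buffer layout, and real totals also include streaming pools and upscaler buffers that grow with output resolution.

```python
# Rough estimate of how much VRAM scales with resolution: only the
# screen-sized render targets grow; textures and meshes stay the same size.
RESOLUTIONS = {"1440p": (2560, 1440), "2160p": (3840, 2160)}

# Assumed per-target cost of a typical deferred setup (bytes per pixel).
ASSUMED_TARGETS = {
    "gbuffer (4x RGBA8)": 4 * 4,
    "hdr color (RGBA16F)": 8,
    "depth/stencil": 4,
    "motion vectors (RG16F)": 4,
    "taa/upscaler history": 8,
    "post/bloom chain": 8,
}

def render_target_mb(width: int, height: int) -> float:
    bytes_total = width * height * sum(ASSUMED_TARGETS.values())
    return bytes_total / (1024 ** 2)

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: ~{render_target_mb(w, h):.0f} MB of screen-sized targets")

delta = render_target_mb(3840, 2160) - render_target_mb(2560, 1440)
print(f"delta 1440p -> 2160p: ~{delta:.0f} MB")  # a small slice of the total budget
```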

The real test will be 8GB cards and frametime consistency when using frame generation, considering the extra VRAM load FG adds and the fact that Nvidia advertises the 4060's performance numbers for this game with it enabled. We'll probably also have to wait and see how Nixxes addresses the memory leak issue, and what their fix does to VRAM usage.
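For the frametime consistency side of that test, here's a minimal sketch of the usual metric: average FPS plus "1% lows" computed from a per-frame capture log (PresentMon/CapFrameX style). The CSV column name is an assumption; adjust it to whatever the capture tool actually writes. A big gap between the average and the 1% lows is what VRAM-pressure spikes would show up as.

```python
import csv
import statistics

def frametime_stats(csv_path: str, column: str = "msBetweenPresents"):
    """Average FPS and '1% low' FPS from a per-frame capture log."""
    with open(csv_path, newline="") as f:
        frametimes_ms = [float(row[column]) for row in csv.DictReader(f)]

    frametimes_ms.sort()
    avg_fps = 1000.0 / statistics.fmean(frametimes_ms)

    # "1% lows": average FPS over the slowest 1% of frames.
    worst = frametimes_ms[int(len(frametimes_ms) * 0.99):]
    one_percent_low_fps = 1000.0 / statistics.fmean(worst)
    return avg_fps, one_percent_low_fps

# Example: compare runs with and without frame generation on an 8GB card.
# avg, low = frametime_stats("got_4060_fg_on.csv")
# print(f"avg {avg:.1f} fps, 1% low {low:.1f} fps")
```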
 
It's rendering at 2160p checkerboarded on the Pro in its resolution mode.
PS4 Pro is 1800p checkerboard at a lower texture setting.
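To put the resolution delta in rough numbers: checkerboard rendering shades about half the pixels of its target grid each frame, so 1800p checkerboard is a much smaller per-frame pixel load than native 4K on PC. A quick comparison, treating checkerboard as exactly half the grid (an approximation):

```python
# Shaded pixels per frame: native vs checkerboard (roughly half the grid).
def megapixels(width, height, checkerboard=False):
    px = width * height
    if checkerboard:
        px //= 2
    return px / 1e6

ps4_pro = megapixels(3200, 1800, checkerboard=True)   # 1800p CB (resolution mode)
pc_4k   = megapixels(3840, 2160)                      # native 2160p
pc_1440 = megapixels(2560, 1440)

print(f"PS4 Pro 1800p CB : {ps4_pro:.1f} MP/frame")   # ~2.9
print(f"PC native 4K     : {pc_4k:.1f} MP/frame")     # ~8.3
print(f"PC native 1440p  : {pc_1440:.1f} MP/frame")   # ~3.7
```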
 

Seems like draw distance is higher and SSRs are more defined. Some more ground detail here and there too.

Aside from that, it's the exact same though. Seems like it also has issues with particles as those look significantly better on the PS5. Likely some upscaling error.
 

Seems like draw distance is higher and SSRs are more defined. Some more ground detail here and there too.

Aside from that, it's the exact same though. Seems like it also has issues with particles as those look significantly better on the PS5. Likely some upscaling error.
Yeah, it's almost as if the bloom effect emitted from particles isn't quite as intense as it should be, due to a scaling issue between the particles and the resolution.
 
Seems like it also has issues with particles as those look significantly better on the PS5. Likely some upscaling error.

Yeah, that jumped out at me too; I don't know what reconstruction, if any, the channel is using in those comparisons. Hopefully they can remedy it; it's certainly not the norm even with FSR.
 
Yeah, something is also seriously wrong with the memory management. The game needs way more VRAM than one would expect, considering the low-res textures.


That guy analyses it nicely. And we can see from this behavior that it actually uses that much VRAM rather than only allocating it.
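For anyone who wants to watch this themselves while playing, here's a minimal sketch assuming an NVIDIA card and the nvidia-ml-py (pynvml) bindings: sample device VRAM usage at intervals and look for steady growth, which is what the suspected leak would look like. Note NVML reports device-wide used memory, not a true allocated-vs-touched breakdown, so it's only a rough proxy for what the video measures.

```python
import time
import pynvml  # pip install nvidia-ml-py

def sample_vram(interval_s: float = 5.0, samples: int = 60, device_index: int = 0):
    """Print VRAM usage over time; steady growth while standing still hints at a leak."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        for i in range(samples):
            info = pynvml.nvmlDeviceGetMemoryInfo(handle)
            used_gb = info.used / (1024 ** 3)
            total_gb = info.total / (1024 ** 3)
            print(f"[{i * interval_s:6.0f}s] {used_gb:.2f} / {total_gb:.2f} GB used")
            time.sleep(interval_s)
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    sample_vram()
```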
 
I can't watch right now, but is that person using the Ultra settings? Ultra settings are invariably not a good place to start critiquing VRAM usage from, as they are a lot higher than the PS5 settings.
 
So, Metro Exodus EE with ray-traced GI runs at ~100 FPS on a 4090, but Ghost of Tsushima with an outdated lighting system runs slower. Great example of how unoptimized software is holding modern GPUs back. This regression is very fascinating. It would be great if these publishers and developers could start to optimize their games a little bit for nVidia GPUs...
 
So, Metro Exodus EE with ray-traced GI runs at ~100 FPS on a 4090, but Ghost of Tsushima with an outdated lighting system runs slower. Great example of how unoptimized software is holding modern GPUs back. This regression is very fascinating. It would be great if these publishers and developers could start to optimize their games a little bit for nVidia GPUs...
Or, from another perspective, it shows how uniquely optimized this title is for a different API: despite running on essentially the same hardware as a PC, it can't get anywhere near the performance profile it should because it's been ported to DirectX. They would need a full rewrite of this game to get it running well.

Yet most people want to attribute this performance gap to hardware differences and not optimization differences.

And even if you optimize for D3D, as we saw with Starfield, you still need to put in the time and effort with each IHV to get performance out of the cards.

It really comes down to the number of hours put in to make it run well on each platform.
 
Metro Exodus was a PS4/Xbox One game, too. And yet the engine is flexible enough that, with a fully ray-traced GI system (plus reflections, AO, and emissive lights), it can use the full potential of a GPU released 18 months later. There's no excuse for Sony not optimizing their engines for nVidia GPUs. I don't even ask for ray tracing anymore...

/edit: Original release date was July 2020, less than a year before Metro Exodus EE on the PC...
 
Metro Exodus was a PS4/Xbox One game, too. And yet the engine is flexible enough that, with a fully ray-traced GI system (plus reflections, AO, and emissive lights), it can use the full potential of a GPU released 18 months later. There's no excuse for Sony not optimizing their engines for nVidia GPUs. I don't even ask for ray tracing anymore...

/edit: Original release date was July 2020, less than a year before Metro Exodus EE on the PC...
Porting a title is very different from updating an existing multiplatform release. If you've never done porting work, I don't think you'd understand. It's painful to find out that certain APIs can't do this or that, and suddenly you're on the hook to make everything work together.
 
Or, from another perspective, it shows how uniquely optimized this title is for a different API: despite running on essentially the same hardware as a PC, it can't get anywhere near the performance profile it should because it's been ported to DirectX. They would need a full rewrite of this game to get it running well.

Yet most people want to attribute this performance gap to hardware differences and not optimization differences.

And even if you optimize for D3D, as we saw with Starfield, you still need to put in the time and effort with each IHV to get performance out of the cards.

It really comes down to the number of hours put in to make it run well on each platform.
Recent games like Starfield, Horizon Forbidden West, and Ghost of Tsushima, or graphically high-end UE5 games in general, have very powerful GPU-driven pipelines/techniques ...

Starfield is a notable example with its awkward usage of the ExecuteIndirect API, which can potentially see it hit the slow paths on some vendors ...

It seems like they really need Work Graphs for better scheduling (UE5 uses a spinlocked persistent global work queue), or its mesh nodes extension for an alternative GPU-driven state-change API, AKA the Xbox extension of ExecuteIndirect (Starfield & Ghost of Tsushima do this) ...
 
Computerbase tests Hellblade 2.

No HWRT. Relative performance between cards is the usual picture. FSR should be avoided if possible. Shader compilation stutter is present.
Super heavy at these max settings, but we really need some idea of how performance scales with the settings and what that actually looks like (same with Ghost of Tsushima, by the way). Hopefully there are some decent settings to turn down to get better performance while still looking good, because the PC audience is gonna be less enthused about any recommendation to play this at 1080p/30fps if they don't have some $500+ modern GPU.

That said, Hellblade is UE5, so it was never gonna be some 'smooth as butter' high-flying game.
 
Super heavy at these max settings, but we really need some idea of how performance scales with the settings and what that actually looks like (same with Ghost of Tsushima, by the way). Hopefully there are some decent settings to turn down to get better performance while still looking good, because the PC audience is gonna be less enthused about any recommendation to play this at 1080p/30fps if they don't have some $500+ modern GPU.

That said, Hellblade is UE5, so it was never gonna be some 'smooth as butter' high-flying game.
Seems like the game still looks pretty good at lower settings, with a nice performance gain.

 