Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

The issue of course with such comparisons is the behaviour of DRS, which differs between the two systems. You'd really need to run the scene on the 2080Ti without DRS at both the upper and lower bounds of the PS5's DRS range and then try to infer the performance comparison from that. I believe the PS5 ranges from 1512p to 2160p in that mode.

And the different quality settings, which according to DF differ in that the PS5 version actually dials down more than the PC version does. Anyway, ports of exclusive games aren't ideal for gauging hardware performance and efficiency.....
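To make that bounding idea concrete, here's a rough sketch of the arithmetic in Python. The fps figures are placeholders rather than real measurements, and the PS5 framerate is an assumed target for whichever mode is being compared:

```python
# Placeholder 2080Ti results at the fixed bounds of the PS5's DRS range
# (these fps values are made up for illustration, not measurements).
fps_2080ti = {
    "1512p (PS5 DRS lower bound)": 72.0,
    "2160p (PS5 DRS upper bound)": 44.0,
}
ps5_fps = 60.0  # assumed PS5 framerate in the mode being compared

for label, fps in fps_2080ti.items():
    print(f"{label}: 2080Ti {fps:.0f} fps vs PS5 {ps5_fps:.0f} fps "
          f"-> PS5 is {ps5_fps / fps:.2f}x the 2080Ti at that resolution")
```

If the PS5 holds its framerate across the whole DRS range, this only gives you a bracket rather than a single figure, which is the point being made above.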
 
How does performance look in that opening cutscene on your system with NXG's Fidelity mode matched settings? I'm curious to see if he's correct in the assumption that it's performing on par with a 2080Ti/3070.

The issue of course with such comparisons is the behaviour of DRS, which differs between the two systems. You'd really need to run the scene on the 2080Ti without DRS at both the upper and lower bounds of the PS5's DRS range and then try to infer the performance comparison from that. I believe the PS5 ranges from 1512p to 2160p in that mode.
If you give me the settings he used, I can test it out.
 
Just looked at memory use on my 3060ti using GPU-Z.

At native 1080p, max settings with no ray tracing, GPU-Z reports the following:

Max VRAM used: 6786MB

Now the same max settings but with ray tracing maxed out (so very high reflections, very high geometric detail and the object range slider at 10):

Max VRAM used: 6814MB

So maxing out ray tracing barely changes VRAM use, and I still had a comfortable amount of headroom on my 8GB 3060ti.

I might test memory use at a higher resolution......be right back.

EDIT: Tested at a fixed 2715x1527 max settings and max RT.

Max VRAM used: 6860MB
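If anyone wants to log peak usage over a run instead of watching GPU-Z, here's a minimal sketch using NVML via the pynvml package. It assumes an NVIDIA card with pynvml installed, and like GPU-Z it reports total memory in use on the device, not just the game's own allocation:

```python
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

peak = 0
try:
    while True:
        used = pynvml.nvmlDeviceGetMemoryInfo(handle).used  # bytes
        peak = max(peak, used)
        print(f"current {used / 2**20:7.0f} MB | peak {peak / 2**20:7.0f} MB", end="\r")
        time.sleep(0.5)
except KeyboardInterrupt:
    print(f"\nMax VRAM used: {peak / 2**20:.0f} MB")
finally:
    pynvml.nvmlShutdown()
```

Start it before launching the game and stop it with Ctrl+C once the test run is done.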
 
If you give me the settings he used, I can test it out.

They're actually pretty similar to DF's so probably easier (and potentially more accurate) just to use those:

Setting                     | PS5 Performance RT | PS5 Fidelity | PC Optimised Settings
Texture Quality             | High               | High         | High
Anisotropic Filtering       | 4x                 | 4x           | 8x
Shadow Quality              | High               | High         | High
Ambient Occlusion           | SSAO               | SSAO         | SSAO
RT Reflections Resolution   | High               | High         | High
RT Reflections Geo Quality  | High               | High         | High
RT Reflections Object Range | 7/8                | ~10          | 8
Hair Quality                | Medium-High        | High         | High
Level of Detail             | High               | High         | High
Crowd Density               | Low-Medium         | Medium-High  | Low
Traffic Density             | Unknown            | Unknown      | Low
Depth of Field Quality      | Medium/High        | Medium/High  | Low
 
How does performance look in that opening cutscene on your system with NXG's Fidelity mode matched settings? I'm curious to see if he's correct in the assumption that it's performing on par with a 2080Ti/3070.

But the PS5 isn't on par with those GPUs; his weak CPU is just making it seem that way.

My 3060ti at a 2.1GHz overclock on the core isn't far off either of those GPUs, and I'm getting a locked 60fps in the intro sequence with the same CPU NXG uses.
 
They're actually pretty similar to DF's so probably easier (and potentially more accurate) just to use those:

Setting                     | PS5 Performance RT | PS5 Fidelity | PC Optimised Settings
Texture Quality             | High               | High         | High
Anisotropic Filtering       | 4x                 | 4x           | 8x
Shadow Quality              | High               | High         | High
Ambient Occlusion           | SSAO               | SSAO         | SSAO
RT Reflections Resolution   | High               | High         | High
RT Reflections Geo Quality  | High               | High         | High
RT Reflections Object Range | 7/8                | ~10          | 8
Hair Quality                | Medium-High        | High         | High
Level of Detail             | High               | High         | High
Crowd Density               | Low-Medium         | Medium-High  | Low
Traffic Density             | Unknown            | Unknown      | Low
Depth of Field Quality      | Medium/High        | Medium/High  | Low
Ok, but I need to know the DRS scaling target framerate they used.
 
Ok I just tested with my 2080ti..

The high detail mips take 15 seconds to load in... with RT off.. That is NOT fast enough to happen within the context of the cutscene.. so you never see that quality.


If certain cards with less VRAM are loading in these mips faster than my machine, then there's a bug...
I can test that, be right back......

EDIT

Copying your video, it took my system 5.86 seconds to load that texture on an 8GB 3060ti.
 
My 3060ti at a 2.1GHz overclock on the core isn't far off either of those GPUs

A 3060Ti being ballpark 2080Ti performance (I have a 2080Ti) is seriously impressive. That's one generation apart, comparing the lower end of the Ampere lineup against the highest-end Turing Ti (there is no 3050 Ti on desktop as far as I'm aware).
 
So why would that be the case?

What CPU and RAM specs do you have?

  • Ryzen 5 3600 stock, but it boosts to 4175MHz during Spiderman
  • Corsair DDR4 at 3400MHz
  • NVMe drive at 3.5GB/s

What CPU do you have?

Maybe it's because you used something to record the screen and it took some cycles away from somewhere?

Test it without the screen recording.
 
  • Ryzen 5 3600 stock, but it boosts to 4175MHz during Spiderman
  • Corsair DDR4 at 3400MHz
  • NVMe drive at 3.5GB/s

What CPU do you have?

Maybe it's because you used something to record the screen and it took some cycles away from somewhere?

Test it without the screen recording.
No.. it happens the same whether recording or not.

I have a 3900X @ 4.4GHz all cores, 32GB DDR4 3600, and a 7GB/s NVMe....

Wait, what resolution are you at? I'm running 3840x1600 ultrawide there.
 
No.. it happens the same whether recording or not.

I have a 3900X @ 4.4GHz all cores, 32GB DDR4 3600, and a 7GB/s NVMe....

Wait, what resolution are you at? I'm running 3840x1600 ultrawide there.

I'm at native 1080p, but as the assets are decompressed by the CPU I doubt that's an issue.

I wonder if it's the added CCX latency, as you'll have more CCXs in play on that 3900X?

The game is really sensitive to CCX latency on older Ryzens.

I'll go and use Nvidia DSR to get me to 4K.
 
I'm at native 1080p, but as the assets are decompressed by the CPU I doubt that's an issue.

I wonder if it's the added CCX latency, as you'll have more CCXs in play on that 3900X?

The game is really sensitive to CCX latency on older Ryzens.

I'll go and use Nvidia DSR to get me to 4K.
Yea, I tested, resolution makes no difference for me.

Hmm.. could be? I dunno. That wouldn't explain why NXGamer is able to get his 2070 with a 2700x to not have the issue regardless...

This is strange. I know I'm running PCIe 3.0 16x as well.. I've tried all different types of combinations and settings.

Maybe I should restart and check my RAM settings.
 
@Remij it took 11 seconds for that texture to load when rendering at 4K on my 3060ti via Nvidia DSR.

This game is broken when it comes to VRAM use. I would expect it to use noticeably more VRAM at native 4K than at native 1080p.

But look at my numbers:

Max quality settings and max ray tracing settings were used:

  • At native 1080p, max VRAM used was 6814MB
  • At a fixed 2715x1527, max VRAM used was 6860MB
  • At native 2160p, max VRAM used was 7059MB

I would expect to see much larger increases in VRAM use moving through those resolutions. The game doesn't seem to want to use more than 6.8-7GB of VRAM on my system regardless of the resolution.

So what I suspect is happening is that as the resolution increases, the game doesn't increase its overall VRAM usage to accommodate the larger frame buffer. Instead it sacrifices something else to make room, which leaves less VRAM for textures, and that results in the lower quality mips we're seeing.
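As a rough sanity check on what the larger frame buffer alone should cost, here's a back-of-the-envelope sketch. The number of full-resolution render targets and the bytes per pixel are guesses for illustration, not the game's actual setup:

```python
# Estimate render-target memory at each test resolution.
# Assumes ~6 full-resolution targets at an average of 8 bytes per pixel
# (colour, depth, G-buffer/post targets) - purely illustrative numbers.
def render_target_mb(width, height, targets=6, bytes_per_pixel=8):
    return width * height * targets * bytes_per_pixel / 2**20

for w, h in [(1920, 1080), (2715, 1527), (3840, 2160)]:
    print(f"{w}x{h}: ~{render_target_mb(w, h):.0f} MB of render targets")
```

Whatever assumptions you plug in, the measured deltas above can be compared against that estimate to judge whether the game is growing its frame-buffer allocation with resolution or clamping total VRAM use.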
 
If the issue presents itself with RT on, on a 2080Ti with 11GB of VRAM at over 600GB/s, and does not present on a 750Ti with 2GB VRAM at 86GB/s with no RT, then that is quite blatantly a bug related to memory allocation in RT mode. There is no amount of RT this game can apply that would balloon memory requirements from 2GB to over 11GB and bandwidth requirements by 7x.

Nixxes have stated the difficulties they had ensuring the PC's split memory pools were properly optimised for, with the correct pool being used at the correct time, given this game was originally developed for a UMA. This is likely just an area that needs additional tweaking.
So, what I said then, glad we agree.

The fact is, if you have more VRAM with RT (as on my RX 6800), it does not happen. Changing code, memory allocation pools, etc. will help/resolve it. But many of you here also state consoles are just PCs; if so, why would changes need to be made to the source code?
 
No.. it will not... because GPUs with more VRAM are NOT fixing the issue.... These mips don't load late... they don't load AT ALL in many cases.

In your video you SPEAK about these issues being cleared up with faster CPUs, more RAM etc. etc... but you never SHOW it.

If you're saying it works on your 2070 with RT off... and I'm telling you it doesn't on a 3090 with RT off.... then CLEARLY it's another issue and not a VRAM issue..
IT IS IN THE VIDEO:-


And I also state the same thing in the video, that late mip loading should not happen at this level, which again I show IN the video at that point.
 
But many of you here also state consoles are just PC's, if so why would changes need to be made to the source code?

It's a port of a game designed specifically around the PS5's setup, architecture and APIs/software. It's generally more appropriate to compare multiplatform titles to gauge system performance, or benchmarks, but those don't exist on PS5.
 
IT IS IN THE VIDEO:-


And I also state the same thing in the video, that late mip loading should not happen at this level, which again I show IN the video at that point.
Doesn't look like it to me....

[attached screenshot: ddsss.png]


Again, I told you CERTAIN mips load on time, and others late.... and others... never... within the context of scenes playing out.
 