Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

Thank you.

This is enough data.

I now have 3 GPUs in a similar scenario.

First, here is my RTX 3080 in two scenarios vs. the RTX 3070 Ti:

68 fps vs. 81 fps ≈ 19% faster

64 fps vs. 79 fps ≈ 23% faster

That roughly 20-23% gap is in line with what Hardware Unboxed benchmarked between the two GPUs in their test run.

When we factor in the RTX 2080 Ti in the same scene, the RTX 3070 Ti is about 6% ahead of the RTX 2080 Ti.

And if we compare that to the HUB benchmark, guess what? It is exactly 6%, so we're in business.

The RTX 3080 in the scene where Peter shows up is 86% ahead of the PS5, and 88% ahead in the other scene.
The RTX 3070 Ti is 54% ahead of the PS5 in the Peter scene and 50% in the other.
The RTX 2080 Ti has now been tested in two scenes (even though the second scene is different), showing it 41-45% ahead of the PS5.

So where does that leave the PS5's actual performance, given we have three GPUs lining up nicely with HUB's findings? Adding up how the PS5 performs puts it in the ballpark of an RTX 3060, which is also in line with Digital Foundry's estimate.
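To make the cross-check explicit, here's a quick sketch of the consistency argument. The leads over the PS5 are the percentages quoted above; the helper name `implied_gap` and the assignment of the 2080 Ti's 41%/45% to specific scenes are my own illustrative assumptions, not data from the screenshots.

```python
# Consistency check: if each GPU's lead over the PS5 is right, the implied
# GPU-vs-GPU gaps should match independent benchmarks (e.g. HUB's).
# Leads are expressed as multipliers over PS5 fps; scene order is
# ("Peter scene", "other scene"), with the 2080 Ti split assumed.

leads_over_ps5 = {
    "RTX 3080":    (1.86, 1.88),
    "RTX 3070 Ti": (1.54, 1.50),
    "RTX 2080 Ti": (1.45, 1.41),
}

def implied_gap(faster: str, slower: str, scene: int) -> float:
    """Gap between two GPUs implied by their respective leads over the PS5."""
    return leads_over_ps5[faster][scene] / leads_over_ps5[slower][scene] - 1.0

for scene, name in enumerate(("Peter scene", "other scene")):
    gap_3080_vs_3070ti = implied_gap("RTX 3080", "RTX 3070 Ti", scene)
    gap_3070ti_vs_2080ti = implied_gap("RTX 3070 Ti", "RTX 2080 Ti", scene)
    print(f"{name}: 3080 vs 3070 Ti ~{gap_3080_vs_3070ti:.0%}, "
          f"3070 Ti vs 2080 Ti ~{gap_3070ti_vs_2080ti:.0%}")
```

With those inputs both scenes come out to roughly a 20-25% gap for the 3080 over the 3070 Ti and about 6% for the 3070 Ti over the 2080 Ti, which is the same "lines up with HUB" argument made above.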

Interesting. And that's for a port with optimizations still to come, especially in the CPU department.
 
I have extra input on that situation: it does not always tank performance at the exact same spots and scenes on 8 GB cards. The performance drops whenever the game decides to breach VRAM and starts spilling into system RAM. Even in my own tests, half the time it tanked the performance a bit later, half the time a bit earlier. It will always tank if you play for more than three minutes, though.
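One way to see where the spill actually happens, rather than tying it to a spot in the level, is to just log VRAM headroom while playing. A minimal sketch, assuming an NVIDIA card and the `pynvml` bindings; this is an external monitoring idea of mine, not something the game exposes:

```python
# Log dedicated VRAM usage once a second; a sustained drop to near-zero free
# memory is the point where the game starts spilling into system RAM.
# Assumes an NVIDIA GPU and the `pynvml` package (pip install pynvml).
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(gpu)
        used_gb = mem.used / 1024**3
        free_gb = mem.free / 1024**3
        print(f"VRAM used {used_gb:5.2f} GB | free {free_gb:5.2f} GB")
        time.sleep(1.0)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```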
 

I wonder if simple system housekeeping can make a huge difference to this game's performance, i.e. if you're close to maxing out your VRAM, as would be the case at high settings and resolution on an 8 GB GPU, then what you have running in the background eating up VRAM could be making a world of difference.

Example: right now I am apparently using 379 MB of dedicated VRAM for OVR Server (Oculus), which isn't even open! A further 265 MB for Chrome with 11 open tabs, 50 MB for my monitor's control panel app, 15 MB for the Epic Games Launcher (also not open), and probably around another 50 MB on other miscellaneous processes. So overall, with not much running in the background, I'm using about 800 MB of VRAM right now. I've seen this go well over 1 GB recently too.

Then again, maybe scratch all of the above, as perhaps this is exactly why the game caps VRAM usage at 80%: to leave space for all those other processes and allow a seamless multitasking experience when alt-tabbing. I assume something like Cyberpunk, which allows the game to use all of the VRAM, would have issues when, for example, alt-tabbing from the game to a browser while the game is using everything. A better gaming experience at the expense of a worse multitasking experience.

In fact, it seems like Spider-Man's VRAM allocation is just following the console model of reserving an amount of memory for the system to guarantee a smooth user experience, whether or not it's needed. An interesting compromise, particularly given that in the PC space every game can make this decision for itself rather than having it enforced at the system level like on the consoles. There's also the additional complication on PC that you never know how much memory the other processes are going to use, so even putting 20% aside may not be enough in every situation (though it will often be too much).
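As a rough back-of-the-envelope, assuming an 8 GB card and the 80% cap described above, with the background figures from the readout treated as the assumed overhead (that snapshot is just one machine, not a universal value):

```python
# How much of an 8 GB card is actually left for the game under an 80% cap,
# once background processes take their share. Background numbers are the
# rough ones from the readout above, not universal values.
TOTAL_VRAM_MB = 8 * 1024
GAME_CAP = 0.80                      # game limits itself to 80% of VRAM

background_mb = {
    "OVR Server": 379,
    "Chrome (11 tabs)": 265,
    "Monitor control panel": 50,
    "Epic Games Launcher": 15,
    "Misc processes": 50,
}

game_budget = TOTAL_VRAM_MB * GAME_CAP      # what the game allows itself
reserved = TOTAL_VRAM_MB - game_budget      # the 20% it leaves alone
background_total = sum(background_mb.values())

print(f"Game budget:            {game_budget:6.0f} MB")
print(f"Reserved (20%):         {reserved:6.0f} MB")
print(f"Background usage:       {background_total:6.0f} MB")
print(f"Spare after background: {reserved - background_total:+.0f} MB")
```

With these numbers the 20% reservation (~1.6 GB) comfortably covers ~760 MB of background usage, but a heavier background load eats into it quickly, which is the "may not be enough, but often too much" trade-off.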
 

Jesus, ~800 MB at idle! Are you on Windows 11?
 
Sadly, housekeeping isn't the explanation here: my system is always housekept, especially when it comes to VRAM. That's why I'm one of the rare people who actually noticed this issue. Most others don't housekeep, and their idle VRAM usage may range from 500 MB to 1.5 GB (in the case of Steam, Chrome and Discord; and yes, I disable hardware acceleration for all of them, so they don't use VRAM in my configuration even when they're running). That naturally brings the total readout to somewhere close to 7-7.5 GB for most people.

Cyberpunk, as I've noted previously, can allocate more VRAM than this game, and even with hardware acceleration enabled for that software, I never noticed any multitasking problems. It was Cyberpunk that lost performance when extra programs bogged down VRAM in the background, not the other way around. In the case of Spider-Man, the game bogs itself down well before the maximum VRAM potential is reached, whether you have something open or not.


Here's a video form of the issue. In practice: FPS tanks > set textures to High > FPS recovers > set textures to Very High > FPS tanks again after a bit > set textures to High > FPS recovers.
 
Should roll these into a single response and a reply, but this one in particular I'll address: it doesn't matter how much CPU there is in a console; the console will always be framerate-limited in an SoC design. The faster your framerate, the more bandwidth you take.

If your CPU at 30 fps is regularly taking, say, 20 GB/s of bandwidth, that becomes 40 GB/s at 60 fps and 80 GB/s at 120 fps.
If your GPU at 30 fps is regularly taking, say, 100 GB/s, that becomes 200 GB/s at 60 fps and 400 GB/s at 120 fps. Combined you're at 480 GB/s, which is around the theoretical maximum, except that isn't how memory actually behaves: there are reads and writes, read/write hits, and asymmetric bandwidth losses when the CPU and GPU compete. Frankly, there's not a lot left here for consoles to use. So having more CPU isn't going to get around the fact that the GPU becomes bandwidth-starved the faster it goes, reducing the resolution dramatically; see Series S at 120 fps. If a game is properly coded, CPU bottlenecking should not happen, since the CPU has priority over memory.
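A trivial sketch of that scaling argument, using the figures above purely as illustrative assumptions (the constants and the `demand` helper are mine, not measured console numbers):

```python
# Back-of-the-envelope: how combined CPU+GPU bandwidth demand scales with
# framerate on a shared-memory SoC. Per-frame costs are the illustrative
# figures from the post above, not measurements.

CPU_BW_AT_30FPS_GBS = 20.0   # assumed CPU bandwidth use at 30 fps
GPU_BW_AT_30FPS_GBS = 100.0  # assumed GPU bandwidth use at 30 fps
PEAK_BW_GBS = 480.0          # rough theoretical peak used in the post

def demand(fps: float) -> tuple[float, float, float]:
    """Scale per-frame bandwidth cost linearly with framerate."""
    scale = fps / 30.0
    cpu = CPU_BW_AT_30FPS_GBS * scale
    gpu = GPU_BW_AT_30FPS_GBS * scale
    return cpu, gpu, cpu + gpu

for fps in (30, 60, 120):
    cpu, gpu, total = demand(fps)
    headroom = PEAK_BW_GBS - total
    print(f"{fps:>3} fps: CPU {cpu:5.0f} GB/s + GPU {gpu:5.0f} GB/s "
          f"= {total:5.0f} GB/s (headroom {headroom:+.0f} GB/s)")
```

At 120 fps the combined demand hits the assumed peak with zero headroom, before accounting for read/write contention losses, which is the point being made.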

tl;dr: you could never benchmark a GPU on console the way you would on PC, i.e. by making the CPU push frames faster than the GPU can render them. They share the same resources and the CPU has priority, so the GPU will always be the bottleneck in this scenario. Quite frankly, 120 fps on console is very difficult to maintain.
I wonder how much Infinity Cache would have helped in these new consoles. On paper it seems like they have plenty of bandwidth, but I wonder how IC would behave with the shared memory?
 

On the VRAM issue: I think even dropping the textures to High is not enough, seeing how the RTX 2080 Ti with its 11 GB VRAM buffer is performing.

However, I did find that changing settings in-game still gives you wonky performance; you have to restart the game to get stable, better performance.
 

On the Infinity Cache question: it would depend on how much of it they had, but it would certainly help for ray tracing.
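A minimal sketch of why the amount matters, assuming a simple effective-bandwidth model where some fraction of memory accesses hit the on-die cache. The cache bandwidth and the hit rates below are illustrative placeholders of mine, not AMD or console figures; the 448 GB/s is the PS5's quoted DRAM peak, used only for scale.

```python
# Toy effective-bandwidth model for an on-die cache in front of shared DRAM:
#   effective_bw = hit_rate * cache_bw + (1 - hit_rate) * dram_bw
# All figures below are illustrative assumptions, not measured console specs.

DRAM_BW_GBS = 448.0    # shared GDDR6 peak (PS5's quoted figure, for scale)
CACHE_BW_GBS = 1500.0  # assumed on-die cache bandwidth

def effective_bandwidth(hit_rate: float) -> float:
    """Blend cache and DRAM bandwidth by hit rate (0.0-1.0)."""
    return hit_rate * CACHE_BW_GBS + (1.0 - hit_rate) * DRAM_BW_GBS

# Bigger caches -> higher hit rates -> more effective bandwidth left over
# for the CPU and for divergent ray tracing accesses.
for hit_rate in (0.0, 0.3, 0.5, 0.7):
    print(f"hit rate {hit_rate:.0%}: ~{effective_bandwidth(hit_rate):.0f} GB/s effective")
```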
 
Because there are hardly any seriously challenging AAA games, Nvidia still has to fall back on the two-year-old Cyberpunk 2077. Let's see how much better the new ray tracing mode will be in Cyberpunk 2077; the RTX 4090 seems to manage only 22 fps in UHD with Overdrive ray tracing.

With the exception of a few ray tracing titles, this generation is really slow to get going.

 
Honestly, the Overdrive mode looks exactly the same as current CP2077 with RT on. Seems like it's just a mode that destroys performance for no reason on non-Ada GPUs.
 
My brother, who's a pure console gamer (PS5), just said that for all that performance, the 4090 still doesn't have a game with better graphics than Horizon Forbidden West.

I lol'd, because he kind of has a point.
 
Which is not true. When the PlayStation 4 was shown, there were also naysayers in some forums claiming that the graphics of Killzone Shadow Fall were barely better than Killzone 3, which is of course bullshit.

It is true that there has never been so much computing power with so few elaborate games to use it. The RTX 4090 will largely sit idle; for many gamers there is little reason to upgrade.
 