This has my attention
imagine a VR mod
Thank you
This is enough data.
I now have 3 GPUs in a similar scenario.
First, here is my RTX 3080 in two scenarios vs the RTX 3070 Ti:
68 fps → 81 fps = ~20%
64 fps → 79 fps = ~25%
View attachment 6975
View attachment 6976
View attachment 6974
View attachment 6977
That ~25% gap is in line with what Hardware Unboxed benchmarked between the two GPUs in their test run.
When we factor in the RTX 2080 Ti in the same scene, the RTX 3070 Ti is about 6% ahead of the RTX 2080 Ti.
And if we compare that to HUB's benchmark, guess what? It is exactly 6%, so we're in business!
View attachment 6978
In the scene where Peter shows up, the RTX 3080 is 86% ahead of the PS5, and 88% ahead in the other scene.
The RTX 3070 Ti is 54% ahead of the PS5 in the Peter scene and 50% in the other.
The RTX 2080 Ti has now been shown in two scenes (even though the second scene is different), coming out 41-45% ahead of the PS5.
So where does that leave the PS5's actual performance, given that we have three GPUs lining up nicely with HUB's findings once you add in how the PS5 performs?
That puts the PS5 in the ballpark of an RTX 3060, which is also in line with Digital Foundry.
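For anyone who wants to sanity-check the arithmetic, here's a quick Python sketch. The fps values are the ones from my captures, and pairing the 81 fps scene with the 86% figure is my assumption, so the implied PS5 number is only rough:
```python
# Rough sanity check of the relative-performance math above.
# fps values are the ones from my captures; pairing the 81 fps scene with
# the 86% PS5 figure is an assumption, so the implied PS5 number is approximate.

def lead(fast_fps: float, slow_fps: float) -> float:
    """How far ahead fast_fps is of slow_fps, as a percentage."""
    return (fast_fps / slow_fps - 1.0) * 100.0

# RTX 3080 vs RTX 3070 Ti in the two scenes
print(f"3080 vs 3070 Ti, scene 1: {lead(81, 68):.0f}%")   # ~19%
print(f"3080 vs 3070 Ti, scene 2: {lead(79, 64):.0f}%")   # ~23%

# Back out an implied PS5 framerate from the "3080 is 86% ahead" figure.
implied_ps5 = 81 / 1.86
print(f"Implied PS5 fps:        {implied_ps5:.1f}")             # ~43.5 fps
print(f"3070 Ti lead over that: {lead(68, implied_ps5):.0f}%")  # ~56%, close to the 54% above
```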
I have some extra input on that situation: it does not "always" tank performance at the exact same spots and scenes on 8 GB. The performance drops whenever the game decides to breach VRAM and start using system RAM. So even in my own tests, half the time it tanked performance a bit later, half the time a bit earlier. It will always tank when you play for more than 3 minutes, though.
I wonder if simple system housekeeping can make a huge difference in this game's performance, i.e. if you're close to maxing out your VRAM, as would be the case at high settings and resolution on an 8GB GPU, then what you have running in the background eating up VRAM could be making a world of difference.
Example: right now I am apparently using 379MB of dedicated VRAM for OVR Server (Oculus), which isn't even open! A further 265MB for Chrome with 11 open tabs, 50MB for my monitor's control panel app, 15MB for the Epic Games Launcher (also not open), and probably around another 50MB on other miscellaneous processes. So overall, with not much running in the background right now, I'm using up about 800MB of VRAM. I've seen this go well over 1GB recently too.
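If anyone wants to check their own idle VRAM overhead instead of eyeballing Task Manager, a small sketch with the pynvml bindings works on NVIDIA cards. Note that the per-process numbers can come back empty on some Windows driver setups, so treat it as best-effort:
```python
# Best-effort dump of total VRAM use plus per-process usage on an NVIDIA GPU.
# Requires the nvidia-ml-py package (import name: pynvml). Per-process figures
# can be unavailable under Windows WDDM, in which case only the totals help.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)       # first GPU
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"VRAM used: {mem.used / 2**20:.0f} MiB of {mem.total / 2**20:.0f} MiB")

    for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
        used = proc.usedGpuMemory
        label = f"{used / 2**20:.0f} MiB" if used is not None else "n/a"
        print(f"  pid {proc.pid}: {label}")
finally:
    pynvml.nvmlShutdown()
```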
Then again, maybe scratch all of the above, as perhaps this is exactly why the game caps VRAM usage at 80%, i.e. to leave space for all those other processes and allow a seamless multitasking experience when alt-tabbing. I assume something like Cyberpunk, which lets the game use all of the VRAM, would have issues when, for example, alt-tabbing from the game to a web browser. Not a great multitasking experience, but in exchange you get a better gaming experience.
In fact it seems like Spiderman's VRAM allocation is actually just following the console model of reserving an amount of memory for the system to guarantee a smooth user experience, whether or not it's needed. An interesting compromise, particularly given that in the PC space, every game can make this decision for itself rather than it being enforced at the system level like on the consoles. There's also the additional complication in the PC space that you never know how much RAM the other processes are going to use so even putting 20% aside may not be enough for all situations (but will often be too much).
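Just to illustrate the idea (this is a made-up sketch of an 80% budget rule, not how Insomniac actually implemented it; the function name and numbers are hypothetical), the logic is roughly:
```python
# Hypothetical illustration of an "80% of VRAM" budget, similar in spirit to
# what the game appears to do. Function name and numbers are made up.

HEADROOM_FRACTION = 0.20   # slice left for the OS, browser, overlays, etc.

def streaming_budget_mib(total_vram_mib: int, background_mib: int = 0) -> int:
    """How much VRAM the game would allow itself for textures/streaming."""
    budget = int(total_vram_mib * (1.0 - HEADROOM_FRACTION))
    # A flat console-style reservation ignores actual background usage;
    # subtracting it here is the "you never know how much the other
    # processes will use" problem described above.
    return max(budget - background_mib, 0)

# On an 8 GB card:
print(streaming_budget_mib(8192))        # 6553 MiB with the flat 80% cap
print(streaming_budget_mib(8192, 800))   # 5753 MiB if ~800 MiB is already in use
```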
I wonder how much Infinity Cache would have helped in these new consoles. On paper, it seems like they have plenty of bandwidth, but I wonder how IC would act with the shared memory?
I should roll these into a single response and a reply, but this one in particular I'll address. It doesn't matter how much CPU there is in a console; the console will always be framerate-limited in an SoC design. The faster your framerate, the more bandwidth you take.
If your CPU at 30 fps is regularly taking, say, 20 GB/s of bandwidth, that becomes 40 GB/s at 60 fps and 80 GB/s at 120 fps.
If your GPU at 30 fps is regularly taking, say, 100 GB/s, that becomes 200 GB/s at 60 fps and 400 GB/s at 120 fps. The two combined put you at 480 GB/s, which is right around the theoretical maximum, except that isn't how memory actually works in terms of performance: there are reads and writes, read/write hits, and asymmetrical bandwidth losses when the CPU and GPU compete. Quite frankly, there's not a lot available here for consoles to use. So having more CPU isn't going to get around the fact that the GPU becomes more bandwidth-starved the faster it goes, reducing resolution dramatically. See Series S at 120 fps. If a game is properly coded, CPU bottlenecking should not happen, since the CPU has priority over memory.
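Here's that back-of-the-envelope scaling in code. The 30 fps baselines are the illustrative numbers from above, not measurements, and the linear scaling is an assumption; 448 GB/s is the PS5's paper bandwidth, and real contention makes the usable figure lower:
```python
# Back-of-the-envelope scaling of shared-memory bandwidth demand with framerate.
# The 30 fps baselines are the illustrative figures from the post, not
# measurements; 448 GB/s is the PS5's paper bandwidth.

CPU_AT_30FPS_GBS = 20.0
GPU_AT_30FPS_GBS = 100.0
PEAK_GBS = 448.0   # contention and read/write turnaround make the usable figure lower

def demand_gbs(fps: float) -> float:
    """Combined CPU+GPU demand, assuming it scales linearly with framerate."""
    return (CPU_AT_30FPS_GBS + GPU_AT_30FPS_GBS) * (fps / 30.0)

for fps in (30, 60, 120):
    d = demand_gbs(fps)
    print(f"{fps:>3} fps: ~{d:.0f} GB/s demanded ({d / PEAK_GBS:.0%} of the paper peak)")
```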
tl;dr: you could never benchmark a GPU on console the way you would on PC, i.e. by making the CPU push frames faster than the GPU can render them. They share the same resources and the CPU has priority, so the GPU will always be the bottleneck in that scenario. Quite frankly, 120 fps on console is very difficult to maintain.
Sadly no, my system is always being housekept, especially in relation to VRAM. This is why I'm one of the rare people who actually noticed this issue. Most others do not housekeep, and their idle VRAM usage may range from 500 MB to 1.5 GB (in the case of Steam, Chrome and Discord; and yes, I disable hardware acceleration for all of them, so they don't use VRAM in my configuration even when they're open). This naturally brings the total readout to somewhere close to 7-7.5 GB for most people.
Cyberpunk, as I've noted previously, can allocate more VRAM than this game, and even with hardware acceleration enabled for other software, I never noticed any multitasking problems. It was Cyberpunk that lost performance with extra programs bogging down VRAM in the background, not the other way around. In the case of Spider-Man, the game bogs itself down well before its maximum VRAM potential is reached, whether you have anything else open or not.
Here's the issue in video form. Basically: FPS tanks > set textures to High > FPS recovers > set textures to Very High > FPS tanks again after a bit > set textures to High > FPS recovers.
That's me - I'm not real!
Not sure if I'm watching a frame graph or a flex. I watched VGtech just dominate a bunch of matches for 15 minutes. Whoever was playing is clearly a beast.
Honestly, it looks exactly the same as the current CP2077 with RTX on. Seems like that's just a mode that destroys performance for no reason on non-Ada GPUs.
Because there are hardly any seriously challenging AAA games, Nvidia still has to fall back on the two-year-old Cyberpunk 2077. Let's see how much better the new ray tracing mode in Cyberpunk 2077 will be. The RTX 4090 seems to get only 22 fps in UHD.
With the exception of a few ray tracing titles, this generation is really slow to get going.