Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

May not make any difference, but your video looks like it was at sunset actually.



I'm sure the extra cores help quite a bit.
Problem with that guy is that he probably initially thought my rig was running the game at 45-50 FPS in Times Square even with PS5-equivalent settings. He got his "gotcha" moment when I mistakenly said I was getting a locked 60 (because to me, since the game runs at 60 FPS 99% of the time even there, it feels like a locked 60 fps experience). If I had never mentioned that, things would never have gone this far. I will set the time to noon for you.
 
The lighting conditions can affect the RT performance in the Times Square area greatly I find.

Granted, my i5-12400 is somewhat bottlenecked by its RAM, which is only 2800MHz. However, in every benchmark I've seen the Alder Lake CPUs outpace Zen 2 (and certainly Zen+) models, so even with that stipulation I don't believe the effect of -400MHz on the RAM is that crippling.

So console settings (object detail at 8), 1080p, very high textures (RTX 3060 12GB).

In Times Square at night, I had a couple of drops to 59fps; most of the time it was 65-70+. Even so, I wouldn't say this would necessarily be a good experience with vsync: there were too many spots where it sat right at 60 without vsync, and sure enough, when I've tried playing with vsync and RT on in this area there are quite a few stutters.

In daytime at noon, however, it's far worse (note the video may still be processing). There's a particular corner that drops my fps into the 40s, and there are numerous drops into the 50s. Definitely nothing close to a locked 60fps with object detail at 8 and RT, in an area that isn't even a GPU stress case.

Make of that what you will.

It would seem reasonable that lighting conditions and camera position could affect frame rate in this game. It looks like you get some drops when you spin the camera and rapidly swing round to look deep down long roads with lots of things along them. Would that be a fair characterisation?

Also crowd density seems to be pretty high in your video. Is that the low setting? I bet high would be pretty beastly!
 
Your original claim, when you mentioned Times Square, was that it was a locked 60fps.

But we've seen enough here now to not trust what you say moving forward as you just make stuff up.
That's a bit extreme when yamaci17 already apologised for using a dubious definition of 'locked 60'. I think you need to chill out a bit and stop looking for enemies. Talk sensibly about what people mean in their choice of words rather than bickering over semantics when English is 1) often ambiguous and 2) not everyone's native language.
 
Can you provide some citations for this, please?
Citation 1

First time it resurfaced in a mainstream tech channel;

Practically, if you're CPU bound with a lowish-end CPU, your theoretical maximum CPU-bound performance is approximately 20-25% higher with an AMD GPU. Some believe this does not apply to high-end CPUs. I don't know about that.

[attached image]


Considering a 4790k is not that far away from a 3600/2700x in terms of pure gaming performance, I'd say the 2700x/3600 is highly affected by it.


This post is also useful for seeing its potential effect on the 2700x, the CPU that is constantly being compared to the PS5.

For example, a 2700x bottlenecks at around 79 FPS with a 3090, while bottlenecking at around 96 FPS with a 6900 XT.

FH4 is 124 vs 144 etc.

It all depends from game to game.
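Just as a rough sanity check on those quoted numbers (a quick sketch dividing the CPU-bound framerates cited above, nothing more):

```python
# Rough check of the CPU-bound framerate gap quoted above for a 2700x.
# These are just the numbers cited in this post, nothing measured here.
pairs = {
    "first example": (79, 96),   # RTX 3090 vs RX 6900 XT, CPU bound
    "FH4":           (124, 144), # same comparison
}

for game, (nv_fps, amd_fps) in pairs.items():
    gain = (amd_fps / nv_fps - 1) * 100
    print(f"{game}: AMD CPU-bound ceiling is ~{gain:.0f}% higher")

# Prints roughly 22% and 16%, which is in the same ballpark as the
# 20-25% figure mentioned earlier, give or take per game.
```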

If you watch the 1st video by nerdtechgasm, it goes all the way back to the GTX 700 series, where the hardware scheduler was removed from NV cards to reduce power consumption compared to competing cards at the time. I think they stayed with that design since it was not that useful to begin with. NV then started using their own software scheduler, which actually is great for DX11 titles: it practically has the ability to make the code more "multi-threaded" by design, without the developer doing anything. As discussed in the video, that was the reason why some games actually performed better on Nvidia in CPU-bound terms back in the DX11 days.

DX12/Vulkan by design can make use of hardware schedulers, which AMD cards and consoles have. NVIDIA however kept using their software path, which practically adds the extra CPU cost that is being discussed above.

It is not a myth, or a lie, or hyperbole. It's practically irrelevant if you're GPU bound. That's why it was never discussed much, since it never made sense that someone would pair a Ryzen 2600 with a 3060 Ti. The problem starts with unbalanced combos like that. Zen 2 is really better; maybe I was too harsh saying Zen 3 at a minimum. But Zen/Zen+ really feels horrible for Ampere, and even for high-end Turing products.

I think this thing may get traction once games get more CPU bound as we move into the next gen. But maybe NVIDIA will re-add that hardware scheduler. We will see tomorrow.
 
May not make any difference, but your video looks like it was at sunset actually.



I'm sure the extra cores help quite a bit.
Okay, this time I went a bit far: it practically never dropped below 60 in the entire video. Despite sudden turns, high-speed dives and swings, it never dropped below 60; I literally couldn't force it to.


As I said, I use PS5-equivalent Low crowd and traffic density. Eurogamer/DF themselves suggest using those settings; other settings murder CPU performance. But I respect everyone's choice. I could play with higher settings too, and I would have no trouble with VRR and all that.

But even with them set to Low, Times Square is full of NPCs and cars, so those higher settings are not worth it for me.
 
It would seem reasonable that lighting conditions and camera position could affect frame rate in this game. It looks like you get some drops when you spin the camera and rapidly swing round to look deep down long roads with lots of things along them. Would that be a fair characterisation?

That's the 'corner of death', which Alex also mentioned in his video - the BVH structure of that segment is particularly complex apparently.

"Spinning the camera and rapidly moving" is kind of par for the course when web-slinging through a city though. Hard to avoid. :) Yeah though, one of the biggest problem areas with this game I've find that I've mentioned is when occluded geometry is suddenly brought into view, when swinging around and at least having a good part of the city in your viewport it's not an issue - the problem occurs when you're running up a tall building and suddenly catapult yourself over the roof and entire city is immediately brought into view. On my system that causes a few frames of stutter almost every time, RT or not. It's pretty annoying and I hope they can address this.

Also crowd density seems to be pretty high in your video. Is that the low setting? I bet high would be pretty beastly!

Whoops! Good catch. My apologies, I had crowd/car density set to High from my time playing without RT. Re-testing it now with both set to Medium and in daytime, it's much better: no drops below 60. At night, though, I still get drops into the high 50s - but certainly no 40s. 58 seems to be the lowest.

Made an extensive video of the area, showing my settings, and cycling from noon to night. Just started processing so it will be a while.
 
If you watch the 1st video by nerdtechgasm, it goes all the way back to the GTX 700 series, where the hardware scheduler

Yeah, that was the source I was thinking of but I couldn't remember his name. I actually posted that video here ages ago and it got some pushback, particularly on that claim that Nvidia doesn't have a 'hardware scheduler' of any sort. It was made in 2017, mind you.

That was an extensive list of games in the Anandtech thread though, and it's pretty damning with respect to Nvidia low-end CPU performance. Aside from just general price competition, it's another indication of the problems of one player being so dominant: situations like this are just not going to be exposed that often. I would love to see non-RT performance in Spider-Man on something like a Zen+ CPU with modern AMD vs Nvidia cards and recent drivers, though.
 
Yeah, that was the source I was thinking of but I couldn't remember his name. I actually posted that video here ages ago and it got some pushback, particularly on that claim that Nvidia doesn't have a 'hardware scheduler' of any sort.

That was an extensive list of games in the Anandtech thread though, and it's pretty damning with respect to Nvidia low-end CPU performance. Aside from just general price competition, it's another indication of the problems of one player being so dominant: situations like this are just not going to be exposed that often. I would love to see non-RT performance in Spider-Man on something like a Zen+ CPU with modern AMD vs Nvidia cards and recent drivers, though.
If only I had more hardware on hand. I'd test specifically CPU-bound locations (like we've been discussing, Times Square). Most benchmarks on review channels are done in arbitrary, random locations. Cyberpunk's Jig Jig Street would be a great place to test this on higher-end CPUs: for example 1440p/no RT/modest raster settings, a 3080 pitted against a 6800 XT in Jig Jig Street with, say, a 5800X.

The reason is that Hardware Unboxed made a mistake by testing at 1080p/medium, so of course people focused on that instead. But we run into bottlenecks at higher settings and resolutions too. This whole discussion spawned from saying Spider-Man is CPU bound even at 4K; if that's the case, this situation should affect that too. But Spider-Man is a weird case: it actually performs worse on AMD cards in terms of CPU-bound performance, which is even weirder.


High-end NV GPUs getting higher CPU-bound framerates at lower resolutions is an interesting situation that runs counter to this whole overhead argument. Maybe Nvidia optimized something on their end for Spider-Man? Could be. The whole software scheduler thing is software wizardry in itself. But it could be a one-time thing too. If that's the case, my theory of the NV driver adding CPU costs on top of things could very well be wrong in this specific case. :)

I mean, I still get CPU-bound performance around 60-70 FPS even in Times Square with my 2700x, so the game is not super bad in that regard. I think the problem is really that crowd density setting; it really is too costly in some cases.
 
Okay, this time I went a bit far: it practically never dropped below 60 in the entire video. Despite sudden turns, high-speed dives and swings, it never dropped below 60; I literally couldn't force it to.


As I said, I use PS5-equivalent Low crowd and traffic density. Eurogamer/DF themselves suggest using those settings; other settings murder CPU performance. But I respect everyone's choice. I could play with higher settings too, and I would have no trouble with VRR and all that.

But even with them set to Low, Times Square is full of NPCs and cars, so those higher settings are not worth it for me.

It could be an aspect of how the video is being recorded, but there are definitely areas in that video that are dropping frames. It doesn't really look "smooth", and if you manually step through the frames you can see areas where frames are being duplicated.

There's no frame time graph in this video, so I can't see whether this is in the game's rendering or some issue with the recording or YouTube or something. If you have the time, would you be able to repeat this kind of noon example with the frame time graph on? I'm interested to see how the frame times are working out. Averaged fps over time can hide a multitude of peculiarities!
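For what it's worth, if you happen to have a per-frame log from a capture tool (the column name below is just an assumption based on PresentMon-style CSVs, and the file name is hypothetical), a quick script like this shows what an average fps figure can hide:

```python
# Quick frame-time summary from a per-frame capture log. Assumes a CSV with
# a "MsBetweenPresents" column (PresentMon-style); adjust the column name to
# whatever your capture tool actually writes.
import csv
import statistics

def summarize(path: str, column: str = "MsBetweenPresents") -> None:
    with open(path, newline="") as f:
        frametimes = [float(row[column]) for row in csv.DictReader(f)]

    frametimes.sort()
    avg_fps = 1000 / statistics.mean(frametimes)
    # "1% low": average of the slowest 1% of frames, a common way to expose
    # stutter that an average fps figure smooths over.
    worst = frametimes[-max(1, len(frametimes) // 100):]
    one_pct_low_fps = 1000 / statistics.mean(worst)

    print(f"avg: {avg_fps:.1f} fps, 1% low: {one_pct_low_fps:.1f} fps, "
          f"worst frame: {max(frametimes):.1f} ms")

# summarize("times_square_noon.csv")  # hypothetical capture file
```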

Low for traffic and NPCs would seem to be a sensible tradeoff. It looks like maybe (maybe?) PS5 is a little above that, but I'm not sure.
 
Granted, my i5-12400 is somewhat bottlenecked by its RAM, which is only 2800MHz. However, in every benchmark I've seen the Alder Lake CPUs outpace Zen 2 (and certainly Zen+) models, so even with that stipulation I don't believe the effect of -400MHz on the RAM is that crippling.

It's probably having a bigger impact than you think in this game. Check out the massive performance improvement the 12900K sees when moving from DDR4 3200MHz (already faster than yours) to DDR5 6400MHz:


We're looking at more than 1/3 extra performance there!
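Part of why the jump is so large is simply raw bandwidth; as a back-of-the-envelope sketch (theoretical peak only, ignoring latency and everything else):

```python
# Theoretical peak bandwidth: channels x 8 bytes per 64-bit channel transfer
# x transfers per second. Real-world gains also depend on latency and how
# memory-sensitive the game is.
def peak_bandwidth_gb_s(mt_per_s: int, channels: int = 2, bytes_per_transfer: int = 8) -> float:
    return mt_per_s * channels * bytes_per_transfer / 1000

for name, speed in [("DDR4-2800", 2800), ("DDR4-3200", 3200), ("DDR5-6400", 6400)]:
    print(f"{name}: ~{peak_bandwidth_gb_s(speed):.1f} GB/s")

# ~44.8, ~51.2 and ~102.4 GB/s respectively: double the raw bandwidth, which
# lines up with a big CPU-bound uplift in a memory-heavy game like this one.
```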

Okay, this time I went a bit far: it practically never dropped below 60 in the entire video. Despite sudden turns, high-speed dives and swings, it never dropped below 60; I literally couldn't force it to.


As I said, I use PS5-equivalent Low crowd and traffic density. Eurogamer/DF themselves suggest using those settings; other settings murder CPU performance. But I respect everyone's choice. I could play with higher settings too, and I would have no trouble with VRR and all that.

Fair play, that is some very solid performance: 60fps+ for as much of the video as I watched. Very impressive you managed that on a 2700X, although I'm struggling to understand where the difference lies between your setup and DF/NXG's, who were seeing quite heavy CPU drops at very similar settings. Also worth noting that where you have object range set to 8, the console in performance mode is slightly below this, and given it's a very CPU-heavy setting, even that slight difference is probably worth 2-3fps, which would shore up that performance even more.

Whoops! Good catch. My apologies, I had crowd/car density set to High from my time playing without RT. Re-testing it now with both set to Medium and in daytime, it's much better: no drops below 60. At night, though, I still get drops into the high 50s - but certainly no 40s. 58 seems to be the lowest.

Made an extensive video of the area, showing my settings, and cycling from noon to night. Just started processing so it will be a while.

Excellent performance there too. I'd be fairly confident that even moderately clocked DDR5 would clear up those drops below 60fps with ease.
 
It could be an aspect of how the video is being recorded, but there are definitely areas in that video that are dropping frames. It doesn't really look "smooth", and if you manually step through the frames you can see areas where frames are being duplicated.

YouTube consistently fucks up the frame pacing in my uploaded videos, especially from Shadowplay and especially at 4K. I've never been able to figure out why (clearly other people can get around this, as I've seen perfectly paced videos), but just FYI, eyeballing YouTube videos isn't that reliable.

Low for traffic and NPCs would seem to be a sensible tradeoff. It looks like maybe (maybe?) PS5 is a little above that, but I'm not sure.

I think DF basically came to the conclusion that they were some custom mix between Low and Medium. There isn't a single equivalent setting (I always appreciate when ports have a simple console-equivalent preset, like 'original').
 
It could be an aspect of how the video is being recorded, but there are definitely areas in that video that are dropping frames. It doesn't really look "smooth", and if you manually step through the frames you can see areas where frames are being duplicated.

There's no frame time graph in this video, so I can't see whether this is in the game's rendering or some issue with the recording or YouTube or something. If you have the time, would you be able to repeat this kind of noon example with the frame time graph on? I'm interested to see how the frame times are working out. Averaged fps over time can hide a multitude of peculiarities!

Low for traffic and NPCs would seem to be a sensible tradeoff. It looks like maybe (maybe?) PS5 is a little above that, but I'm not sure.
Probably due to the recording; it is a 30 FPS recording, after all. I can re-enable the frametime graph and re-capture a video later on.

[attached image: crowd density comparison vs PS5]


Low still spawns more NPCs on the grey road, but no NPCs on the middle pavement. Medium spawns both, while having more NPCs overall. Sadly this makes us unable to match the PS5 like for like. If we count, Low actually still has more NPCs in this scene. And Very Low just goes and destroys it.

See how, even with Low, there are a lot of NPCs at the far end of the road, and a minimal number of NPCs on the PS5.

Very unbalanced setting overall. :D
 
It's probably having a bigger impact than you think in this game. Check out the massive performance improvement the 12900K sees when moving from DDR4 3200MHz (already faster than yours) to DDR5 6400MHz:


We're looking at more than 1/3 extra performance there!

Oh yeah, I'm aware it has an outsized impact in Spider-Man, which is why I mentioned it, although DDR4->DDR5 is a massively larger jump than my 400MHz deficit. Regardless, my problem was using PS5 Fidelity Mode crowd settings and trying to get 60fps.

That's such an incredible jump with DDR5 though. I hope Nixxes can still improve CPU performance, but even if this is the peak, there's at least a huge performance uplift that will become more accessible as prices continue to drop. I doubt I will buy new DDR4 again.

Excellent performance there too. I'd be fairly confident that even moderately clocked DDR5 would clear up those drops below 60fps with ease.

Did you mean to say DDR4? If so, yes, just going to a proper 3200MHz RAM speed would most likely keep me at 60fps.
 
I practically got my super cheap 3000 CL15 kits to 3400 CL14 merely by giving them 1.4V. That could be part of the reason why my 2700x performs better than most users' do. Shame I lost the IMC lottery; mine cannot even boot at 3533, so 3600MHz is out of the question. We put the exact same cheap kits into my friend's 3600 system and got an easy 3600 CL16 overclock.

Zen+/Zen 2 benefit greatly from RAM OC; it's a cheap way to claw back some performance. Sad that the Ballistix line has been discontinued.
 
Did you mean to say DDR4? If so, yes, just going to a proper 3200MHz RAM speed would most likely keep me at 60fps.

I did actually mean DDR5. I don't know what speed it starts at, but whatever that is, I assume you'd see a big performance uplift, i.e. it's not necessary to go up to the dizzy heights of 6400MHz. But yeah, even DDR4 3200MHz would probably be a healthy increase.
 
I practically got my super cheap 3000 CL15 kits to 3400 CL14 merely by giving them 1.4V.

Believe me, I tried extensively to tweak it to get higher than 2800. It's actually rated at 3200MHz (it's G.Skill Trident), it's just years old. It was in an i5-9400 B-series motherboard, so I just had it at 2666MHz; if I had noticed it couldn't do 3200MHz as advertised, I would have used the return window while I could.

(Also this is a very shitty Asrock motherboard)
 
Yeah, that was the source I was thinking of but I couldn't remember his name. I actually posted that video here ages ago and it got some pushback, particularly on that claim that Nvidia doesn't have a 'hardware scheduler' of any sort. It was made in 2017, mind you.

If you watch the 1st video by nerdtechgasm, it goes all the way back to the GTX 700 series, where the hardware scheduler was removed from NV cards
Guys, we have debunked this myth before. This has nothing to do with software lookup tables or scoreboarding; NVIDIA reimplemented hardware ones later in Pascal, Turing and Ampere, yet nothing has changed.

The problem is related to a limitation of the API itself; it was explained at great length here by the venerable member Lurkmass.

In essence, the DX12 resource binding model is a poor fit for NVIDIA hardware. It was designed primarily for Mantle-style hardware (AMD), so on other hardware it generates tons of extra calls and descriptor instructions, adding more CPU overhead on NVIDIA hardware.

Developers need to take care of that when coding with DX12 to circumvent these pitfalls, but few actually pay attention.

You can read about it in detail down below.



 