Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

I grabbed some RenderDoc captures at 4K if anyone is interested. It doesn't support RT, though, so keep that in mind. This is also at native resolution, so no upscaling or IGTI.

Scene 1
Stats for Spider-Man_2022.08.13_16.28.12_frame7166.rdc.

File size: 2463.44MB (4258.28MB uncompressed, compression ratio 1.73:1)
Persistent Data (approx): 137.36MB, Frame-initial data (approx): 4114.07MB

*** Summary ***

Draw calls: 938
Dispatch calls: 133
API calls: 9534
API: Draw/Dispatch call ratio: 8.90196

845 Textures - 997.11 MB (996.97 MB over 32x32), 101 RTs - 1189.56 MB.
Avg. tex dimension: 873.218x839.324 (914.449x879.03 over 32x32)
2922 Buffers - 5631.53 MB total 898.22 MB IBs 805.28 MB VBs.
7818.19 MB - Grand total GPU buffer + texture load.

Scene 2
Stats for Spider-Man_2022.08.13_16.28.12_frame9970.rdc.

File size: 2638.23MB (4603.82MB uncompressed, compression ratio 1.75:1)
Persistent Data (approx): 287.20MB, Frame-initial data (approx): 4300.38MB

*** Summary ***

Draw calls: 4620
Dispatch calls: 167
API calls: 50065
API: Draw/Dispatch call ratio: 10.4585

1836 Textures - 1239.83 MB (1239.69 MB over 32x32), 104 RTs - 1190.67 MB.
Avg. tex dimension: 590.038x571.024 (606.422x587.603 over 32x32)
3862 Buffers - 5883.21 MB total 898.22 MB IBs 805.28 MB VBs.
8313.71 MB - Grand total GPU buffer + texture load.

Scene 3
Stats for Spider-Man_2022.08.13_16.28.12_frame14894.rdc.

File size: 2893.82MB (5064.99MB uncompressed, compression ratio 1.75:1)
Persistent Data (approx): 175.07MB, Frame-initial data (approx): 4871.66MB

*** Summary ***

Draw calls: 7158
Dispatch calls: 138
API calls: 69741
API: Draw/Dispatch call ratio: 9.5588

2386 Textures - 1799.89 MB (1799.59 MB over 32x32), 129 RTs - 1203.65 MB.
Avg. tex dimension: 690.992x665.378 (709.994x684.157 over 32x32)
4404 Buffers - 6326.40 MB total 898.22 MB IBs 805.28 MB VBs.
9329.94 MB - Grand total GPU buffer + texture load.
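
As a side note, the summary ratios above are easy to reproduce from the raw counts. Here's a quick Python sanity check of the arithmetic (a sketch of how the reported figures relate, not RenderDoc's actual code):

```python
# Sanity-checking how the summary figures above relate to each other.
# A sketch of the arithmetic only, not RenderDoc's actual code.

scenes = {
    # scene: (API calls, draw calls, dispatch calls, on-disk MB, uncompressed MB)
    1: (9534,  938,  133, 2463.44, 4258.28),
    2: (50065, 4620, 167, 2638.23, 4603.82),
    3: (69741, 7158, 138, 2893.82, 5064.99),
}

for n, (api, draws, dispatches, disk_mb, raw_mb) in scenes.items():
    ratio = api / (draws + dispatches)  # "API: Draw/Dispatch call ratio"
    compression = raw_mb / disk_mb      # "compression ratio", uncompressed : on-disk
    print(f"Scene {n}: {ratio:.5f} API calls per draw/dispatch, "
          f"{compression:.2f}:1 compression")
```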
 
The only good thing about NXG's videos is that he uses a more 'typical' PC for his comparisons, so it's easier to see how the average Joe's PC would run the game.
It's a Zen1-based CPU that was released 2 years before the PS5 and is easily bested by $100 quad-core CPUs now. If you're going to compare relative performance between a modern console and even a 'budget' PC, you should use a relatively modern budget PC.

The Zen1 line wasn't even really considered a great gaming CPU on release; its main draw was that it was at least competitive and offered more cores for less money. Zen2 is when AMD really got going.
 
Buffers are taking up 6.3GB; resolution is the biggest hit here. Textures are only 1.8GB, still large however.
I tried to replicate that third scene above at 1080p. It saved about 800MB on render targets and 1GB on buffers.

I also can't help noticing that the index/vertex buffers remain constant in all these captures. I don't have an explanation for that.

Scene (1080p)
Stats for Spider-Man_2022.08.13_23.33.29_frame4695.rdc.

File size: 2419.59MB (4263.81MB uncompressed, compression ratio 1.76:1)
Persistent Data (approx): 327.00MB, Frame-initial data (approx): 3923.75MB

*** Summary ***

Draw calls: 6289
Dispatch calls: 106
API calls: 63350
API: Draw/Dispatch call ratio: 9.90618

1678 Textures - 1755.53 MB (1755.38 MB over 32x32), 96 RTs - 413.00 MB.
Avg. tex dimension: 804.176x762.843 (832.254x789.798 over 32x32)
3698 Buffers - 5397.80 MB total 898.22 MB IBs 805.28 MB VBs.
7566.33 MB - Grand total GPU buffer + texture load.
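
Putting the scene 3 capture next to this one makes those savings concrete. A quick sketch using the totals reported above; treating this capture as the 1080p re-run of scene 3 is my assumption based on the post above:

```python
# Scene 3 at 4K vs. the capture above, assumed to be the 1080p re-run.
# All figures are copied from the RenderDoc summaries in this thread.

rt_4k, rt_1080 = 1203.65, 413.00      # render target totals, MB
buf_4k, buf_1080 = 6326.40, 5397.80   # buffer totals, MB

print(f"RT saving:     {rt_4k - rt_1080:.0f} MB")    # ~791 MB ("about 800MB")
print(f"Buffer saving: {buf_4k - buf_1080:.0f} MB")  # ~929 MB ("about 1GB")

# 4K has exactly 4x the pixels of 1080p, but the RT total only scales ~2.9x,
# which fits the idea that some targets are fixed-size or below native res.
pixel_ratio = (3840 * 2160) / (1920 * 1080)
print(f"Pixel ratio: {pixel_ratio:.0f}x, observed RT ratio: {rt_4k / rt_1080:.2f}x")
```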
 
I tried to replicate that third scene above at 1080p. It saved about 800MB on render targets and 1GB on buffers.

I also can't help noticing that the index/vertex buffers remain constant in all these captures. I don't have an explanation for that.

Interesting. Well, understandably, index and vertex will stay the same; I'm not expecting them to change with resolution. I was expecting render targets to drop a lot more, however. But it's possible that most of these targets are not at full 4K until the very end, so there could be a lot of in-between resolutions on render targets until close to the final output.
 
Interesting. Well, understandably, index and vertex will stay the same; I'm not expecting them to change with resolution. I was expecting render targets to drop a lot more, however. But it's possible that most of these targets are not at full 4K until the very end, so there could be a lot of in-between resolutions on render targets until close to the final output.
I meant that all of those captures report the exact same figures, and I've never seen that before. Most of the time these numbers change simply by turning the camera, so it makes me wonder whether they could be misreported. But the geometry might also be kept memory-resident for ray tracing reasons, for example. RT is disabled in these tests, but still.

But I was also surprised that the numbers didn't go down more. Render targets often show a roughly 3x increase between 1080p and 4K, but I expected the buffers to go down by more than they did. I've noticed a bit more PCIe traffic in Sony's ports, and it makes me wonder if that's a relic of the unified memory system in their consoles. Graphics memory residing in main memory isn't normally reported by tools like MSI Afterburner, so they can be somewhat misleading in that regard.
 
Sure I am. If at this point you still don't get why the 2700X is a good hardware representation of the PS5 CPU, I'm done with this subject ;) Your imagination is stronger than any proofs. (BTW, I also posted pure CPU benchmarks without PCIe influence, but then apparently the only reason for the results was the memory type :D)
You are being incredibly insincere here, and intentionally dishonest. I spent hours going through reviews and benchmarks and, to be honest, I don't deserve to be responding to bad faith bullshit like this. Every response of yours ignores the entirety of what you were wrong about.
 
You are being incredibly insincere here, and intentionally dishonest. I spent hours going through reviews and benchmarks and, to be honest, I don't deserve to be responding to bad faith bullshit like this. Every response of yours ignores the entirety of what you were wrong about.
Dude, chill out. It's hardware talk and you're starting some shit talk about being insincere lol :D If you spent so much time analyzing, you would just accept the fact that the 2700X is a very close hardware representation of the PS5 CPU. Yeah, it's just a $399 console; it's normal for it to have a slightly underwhelming CPU, still way above last generation.
 
Dude, chill out. It's hardware talk and you're starting some shit talk about being insincere lol :D If you spent so much time analyzing, you would just accept the fact that the 2700X is a very close hardware representation of the PS5 CPU. Yeah, it's just a $399 console; it's normal for it to have a slightly underwhelming CPU, still way above last generation.

I have repeatedly said that the 2700X trades blows with the 4700S in benchmarks. I was the first person to say that, back when you denied it and falsely claimed that the 2700X was greatly ahead of the 4700S based on benchmarks you clearly were unable to understand. I actually provided the benchmarks that showed this.

You repeatedly falsely represent what I said, no matter the references and quotes I provide about what we both said. No matter how many times.

Your dishonesty is clearly intentional at this point, as you have proved many times now.
 
I tried to replicate that third scene above at 1080p. It saved about 800MB on render targets and 1GB on buffers.

I also can't help noticing that the index/vertex buffers remain constant in all these captures. I don't have an explanation for that.

It would be interesting to run at close-to-PS5-equivalent settings to get a rough idea of how much video memory is allocated on the PS5.

Edit: Ah, the program sadly has no RT support. That's a shame.
 
It's a Zen1-based CPU that was released 2 years before the PS5 and is easily bested by $100 quad-core CPUs now. If you're going to compare relative performance between a modern console and even a 'budget' PC, you should use a relatively modern budget PC.

The Zen1 line wasn't even really considered a great gaming CPU on release; its main draw was that it was at least competitive and offered more cores for less money. Zen2 is when AMD really got going.
People don't change their CPUs anywhere near as much as they do their GPUs, and having a CPU that's 2+ years old is common; budget PCs use even older CPUs.

I'm still on a 4770K (until Wednesday next week, that is).
 
People don't change their CPUs anywhere near as much as they do their GPUs, and having a CPU that's 2+ years old is common; budget PCs use even older CPUs.

I'm still on a 4770K (until Wednesday next week, that is).

That's because CPUs hadn't improved much until recently. Since Zen 3 and the newer Intel CPUs, performance has been increasing quite a bit. DF covered that in one of their Directs, I believe. The difference between a 10900 and an Alder Lake chip was very impressive.
 
That's because CPUs hadn't improved much until recently. Since Zen 3 and the newer Intel CPUs, performance has been increasing quite a bit. DF covered that in one of their Directs, I believe. The difference between a 10900 and an Alder Lake chip was very impressive.

Add to that, until now, with last-gen and cross-gen games there has been no real need to upgrade your CPU unless you wanted extreme framerates. That's starting to change with the next-gen-only games that are starting to release on PC.
 
People don't change their CPUs anywhere near as much as they do their GPUs, and having a CPU that's 2+ years old is common; budget PCs use even older CPUs.

His videos are primarily about comparing platforms, and his specific bent is to use them to extrapolate (often incorrectly) the inherent architectural advantages/disadvantages of each. When your target comparison is a new generation of consoles, you should be comparing against something relatively recent on the PC side if you're going to bring arguments about efficiency into it.
 
Here's something interesting. The highest RT setting seems to double the CPU bandwidth used over no RT while running at the same framerate.


[Image: KcxnNV3.png]
 
Here's something interesting. The highest RT setting seems to double the CPU bandwidth used over no RT while running at the same framerate.


[Image: KcxnNV3.png]

Perhaps this could present an advantage for consoles in some conditions, as barring some kind of limit within the console APUs they should have a lot more CPU memory bandwidth available.
 
Perhaps this could present an advantage for consoles in some conditions, as barring some kind of limit within the console APUs they should have a lot more CPU memory bandwidth available.

The CPU bandwidth used is certainly at the top end of what a typical DDR4-3200 setup would give you. It might be interesting to see how performance scales with faster memory.
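
For reference, the theoretical ceiling is straightforward to work out. A minimal sketch, assuming dual-channel memory with a 64-bit bus per channel (sustained real-world bandwidth lands meaningfully below these peaks):

```python
# Theoretical peak bandwidth for common desktop memory configurations.
# Peak = transfers/s * 8-byte channel width * channel count; sustained
# real-world bandwidth is meaningfully lower than this.

def peak_gb_s(mt_per_s: int, channels: int = 2, bytes_per_channel: int = 8) -> float:
    return mt_per_s * bytes_per_channel * channels / 1000

print(peak_gb_s(3200))  # 51.2 GB/s - dual-channel DDR4-3200
print(peak_gb_s(6000))  # 96.0 GB/s - e.g. dual-channel DDR5-6000

# For comparison, the PS5's unified GDDR6 pool is rated at 448 GB/s,
# shared between the CPU and GPU.
```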
 
The CPU bandwidth used is certainly at the top end of what a typical DDR4-3200 setup would give you. It might be interesting to see how performance scales with faster memory.

Yeah, this might be a good game to look at when Zen 4 and DDR5 land, to see if the performance gains from moving to them differ noticeably from other games.

Do we know if Spider-Man is using AVX2? 256-bit operations seem to be pretty demanding in terms of bandwidth on PC, though on PS5 the situation would be a bit different as it has half-width vector operations.
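
One crude way to check for AVX2 without a profiler is to disassemble the executable and look for 256-bit (ymm) register usage. A rough sketch only: the path is hypothetical, it assumes an objdump build that can read PE binaries, and finding ymm code only proves AVX-class paths exist, not that the game's hot path runs them:

```python
# Crude heuristic: scan a disassembly for 256-bit (ymm) register usage.
# A ymm hit only shows AVX/AVX2-class code exists somewhere in the binary
# (possibly a runtime-dispatched path); it says nothing about the hot path.
import re
import subprocess

def mentions_ymm(binary_path: str) -> bool:
    disasm = subprocess.run(
        ["objdump", "-d", binary_path],  # assumes objdump can read PE files
        capture_output=True, text=True, check=True,
    ).stdout
    return re.search(r"\bymm\d+\b", disasm) is not None

# mentions_ymm("Spider-Man.exe")  # hypothetical filename
```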
 
Is it normal for the CPU to be updating the BVH? I think this is done via GPGPU on consoles, IIRC.
 