Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

Dude, those tests are for a super fast discrete GPU connected over the 4700S PCI-e bus.

The 4700S desktop kit has a slow as fuck PCI-e 2.0 x4 connection. Those tests are showing the massive bottleneck of a super, super, super slow PCI-e link.

You cannot infer anything about CPU performance from these graphs.

AMD 4700S + RTX 3090 (PCIe 2.0 x4): 1.33 GB/s <------ This is what you are looking at

Ryzen 7 3700X + RTX 3090 (PCIe 4.0): 25.5 GB/s <---------- This is what a normal PC has

This is a PCI-e bottleneck test.

"Here we can see that the 4700S' maximum attainable bandwidth weighs in at 1.33 GB/s with the GTX 3090, but dropping the same GPU into an X570 motherboard results in ~10X more throughput (13.43 GB/s) for a PCIe 3.0 connection and 19X more with PCIe 4.0 (25.5 GB/s)"




It's not the same hardware being used; the PS5 is not connected to its GPU by a tiny, prehistoric, super slow PCI-e 2.0 (lmao) x4 (double lmao) bus. It's connected internally over something very fast, probably in the tens of GB/s.

You can infer absolutely nothing meaningful about the PS5 from this. This is not a configuration that would ever be used on a PS5.
There are also pure CPU tests and the 2700X is still faster.
 
There are also pure CPU tests and the 2700X is still faster.

And they show exactly what I stated, which you tried to claim was wrong by linking to a page of PCI-e 2.0 x4 benchmarks which in no way represent the conditions that exist in the PS5.

As I said:

"BTW, a 2700X is not faster across the board than the PS5 CPU. PS5 has weak FPUs, and loses out marginally there to the higher clocked 2700X (which also has weak FPUs). At other operations the PS5 CPU can pull ahead slightly despite the clock speed deficit and smaller cache. But this on its own is far from the full story, because when inside a PC vs a PS5 they are required to do different things by the various layers of software, and the impact of that can dwarf any small difference in performance."

Here are some far more meaningful benchmarks that people might find interesting:


 
"BTW, a 2700X is not faster across the board than the PS5 CPU. PS5 has weak FPUs, and loses out marginally there to the higher clocked 2700X (which also has weak FPUs). At other operations the PS5 CPU can pull ahead slightly despite the clock speed deficit and smaller cache. But this on its own is far from the full story, because when inside a PC vs a PS5 they are required to do different things by the various layers of software, and the impact of that can dwarf any small difference in performance." -
and I think this quote is very good summary, 2700x is not bad representative of ps5 cpu hw as you wanted to claim (its opposite, its very good in terms of hw performance level similarity), tough due to lower level software and strong ps5 io you have to have much better cpu on pc to get similar performance
 
And yes, this is where we get classic NxG - extrapolating something that sounds all technical-like based on...? :) He does indeed say "...and largely because it can transfer data much better" is why the PS5 can maintain a solid 60fps in RT mode. Like...what? He infers this is the cause of those 100ms spikes on the PC, but notes they are much rarer without RT - does he believe RT mode is streaming massively more data, then? Perhaps he means CPU bottlenecks, but saying it's due to "transferring data much better" is a very odd way to put it.

Regardless, we've seen that the higher-end GPUs are not getting these spikes, so it's not even necessarily a CPU bottleneck (outside of Zen 2), let alone a 'data transfer' issue, whatever the heck that is. I mean, when a quad-core can average 100+fps in a CPU-bound situation, I don't think the lack of hardware texture decompression blocks on PC CPUs/GPUs is the problem with this game right now.

I'll keep the bulk of the DF discussion in the DF thread, but since I dinged NxG here on that point I've got to give him some credit when it looks like he was correct, at least somewhat. It wasn't exactly explained well in his video, but Alex did talk to Nixxes directly and asked about the CPU limitations. In the Times Square area, which is particularly heavy with RT, a Ryzen 5 3600 running with RT object culling settings set even slightly below the PS5's RT Performance mode will see drops to 55fps swinging around that block. On that:

"I asked Nixxes why the game is so heavy on the CPU, and we learned that the BVH building there is very expensive, as well as the extra cost on PC of decompressing game assets from storage into memory by using the CPU".

So in a way, yes, it can 'transfer data much better' by alleviating the CPU of (apparently) a big part of the equation. More modern CPUs can power past this, and going from benchmarks even budget Alder Lake chips can manage 60+fps fine with RT. But older-gen CPUs, with the additional CPU load that RT brings on top of the CPU being required to decompress assets instead of the GPU handling it (and potentially asynchronous shader compilation as well!), show the current architectural detriment of the PC, at least with respect to requiring far more CPU grunt to power past it.
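
To get a feel for what software decompression costs, here's a toy single-core measurement in Python, using zlib purely as a stand-in codec (the game's actual codec and data mix aren't public, so the absolute numbers are only illustrative):

Code:
# Toy measurement of single-core software decompression throughput.
# zlib is a stand-in; the shipped codec and data mix will differ.
import os
import time
import zlib

# ~64 MB payload with mixed entropy: 4 MB incompressible + 60 MB highly compressible.
payload = os.urandom(4 * 1024 * 1024) + bytes(60 * 1024 * 1024)
compressed = zlib.compress(payload, level=6)

start = time.perf_counter()
out = zlib.decompress(compressed)
elapsed = time.perf_counter() - start

mb = len(out) / 2**20
print(f"decompressed {mb:.0f} MB in {elapsed * 1000:.0f} ms -> {mb / elapsed:.0f} MB/s on one core")

Whatever the exact figure on your machine, the point stands: every MB/s the GPU or a dedicated unit doesn't decompress is CPU time taken straight out of the frame budget.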
 
I'll keep the bulk of the DF discussion in the DF thread, but since I dinged NxG here on that point I've got to give him some credit when it looks like he was correct, at least somewhat. It wasn't exactly explained well in his video, but Alex did talk to Nixxes directly and asked about the CPU limitations. In the Times Square area, which is particularly heavy with RT, a Ryzen 5 3600 running with RT object culling settings set even slightly below the PS5's RT Performance mode will see drops to 55fps swinging around that block. On that:

"I asked Nixxes why the game is so heavy on the CPU, and we learned that the BVH building there is very expensive, as well as the extra cost on PC of decompressing game assets from storage into memory by using the CPU".

So in a way, yes, it can 'transfer data much better' by alleviating the CPU of (apparently) a big part of the equation. More modern CPUs can power past this, and going from benchmarks even budget Alder Lake chips can manage 60+fps fine with RT. But older-gen CPUs, with the additional CPU load that RT brings on top of the CPU being required to decompress assets instead of the GPU handling it (and potentially asynchronous shader compilation as well!), show the current architectural detriment of the PC, at least with respect to requiring far more CPU grunt to power past it.
So you can actually get more performance on PS5 thanks to its custom I/O. Interesting. That could explain the big performance delta in some scenes (mostly fast traveling at crossroads) in the UE5 demo between PS5 and XSX.
 
There are also pure CPU tests and the 2700X is still faster.

In latency-sensitive tests, because it's paired with GDDR6 instead of DDR4. Those tests aren't necessarily representative of games. Especially not console games, which will be optimised to work around such weaknesses and play to the GPU's strengths. It's also hobbled by an insufficient cooler which likely introduces throttling.

Overall the AMD 4700S is decent in threaded work, even though the lackluster cooler probably holds it back. Still, the disappointing tradeoff of vastly lower single-threaded performance, likely due to higher GDDR6 latency, ruins the value proposition.

So you can actually get more performance on PS5 thanks to its custom I/O. Interesting. That could explain the big performance delta in some scenes (mostly fast traveling at crossroads) in the UE5 demo between PS5 and XSX.

It shouldn't be a surprise that the PS5 using a dedicated hardware decompression unit to move that activity off the CPU results in a higher CPU load, and lower performance, on a PC CPU of similar capability.

That said, given the streaming throughput maxes out at around 500MB/s, I expect the BVH updates are by far the bigger CPU performance hog here.
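
Rough numbers on that (the per-core throughput figures below are illustrative assumptions, not measurements from the game):

Code:
# How much CPU would ~500 MB/s of streaming eat if decompressed in software?
# Per-core throughput figures are illustrative assumptions only.
PEAK_STREAM_MB_S = 500

for codec, per_core_mb_s in [("slow zlib-class codec", 400), ("fast Oodle-class codec", 1500)]:
    cores = PEAK_STREAM_MB_S / per_core_mb_s
    print(f"{codec}: ~{cores:.2f} cores busy at peak streaming")

So even pessimistically it's around one core at peak and usually much less, which is why the BVH work looks like the bigger hog.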
 
That said, given the streaming throughput maxes out at around 500MB/s, I expect the BVH updates are by far the bigger CPU performance hog here.

Yeah, that's fair to note. Nixxes saying it's a factor doesn't exactly say how much. 10%? 30%? More? We'll see with later patches, I guess; the huge performance improvements they got in RT within a week of patches may give some indication that the trouble spots aren't necessarily down to this specific hardware detriment.
 
In latency-sensitive tests, because it's paired with GDDR6 instead of DDR4. Those tests aren't necessarily representative of games. Especially not console games, which will be optimised to work around such weaknesses and play to the GPU's strengths. It's also hobbled by an insufficient cooler which likely introduces throttling.
Give me one example showing the PS5 CPU is a 2700X killer ;d I'm sure these CPUs are either close in terms of raw hardware performance (the PS5's, though on a newer arch, is lower clocked and has some FPU hardware removed) or the 2700X even has some leverage.
 
I'll keep the bulk of the DF discussion in the DF thread, but since I dinged NxG here on that point I've got to give him some credit when it looks like he was correct, at least somewhat. It wasn't exactly explained well in his video, but Alex did talk to Nixxes directly and asked about the CPU limitations. In the Times Square area, which is particularly heavy with RT, a Ryzen 5 3600 running with RT object culling settings set even slightly below the PS5's RT Performance mode will see drops to 55fps swinging around that block. On that:

"I asked Nixxes why the game is so heavy on the CPU, and we learned that the BVH building there is very expensive, as well as the extra cost on PC of decompressing game assets from storage into memory by using the CPU".

So in a way, yes, it can 'transfer data much better' by alleviating the CPU of (apparently) a big part of the equation. More modern CPUs can power past this, and going from benchmarks even budget Alder Lake chips can manage 60+fps fine with RT. But older-gen CPUs, with the additional CPU load that RT brings on top of the CPU being required to decompress assets instead of the GPU handling it (and potentially asynchronous shader compilation as well!), show the current architectural detriment of the PC, at least with respect to requiring far more CPU grunt to power past it.

Kudos for calling this out but don't give him too much credit. He still...

1. Claims the PC's CPU limitations are "largely because [PS5] can transfer data much better", whereas the reality is that this is probably the minor factor versus the heavy BVH update cost, given the game is never streaming more than 500MB/s and usually much less. The real reason he's seeing slower performance on his test beds vs the PS5 is that he's using a slower CPU, and very likely also using the max RT Reflections Object Range, which is 2-3 steps higher than the PS5's Performance mode he's comparing to (he makes no mention of lowering it, which he surely would have done given that lowering it would show an advantage for the PS5). To put that in context: at roughly PS5-matched settings but an object range of 6 (1-2 steps below PS5 Performance mode), the Ryzen 5 3600 in Alex's video is bottoming out at around 55fps, or ~92% of the demonstrated performance level, while swinging around Times Square (see the frame-time sketch after this list). A 3700X, which has 33% more cores and is arguably the closest equivalent in the PC space to the PS5 CPU, may well be able to lock that at 60fps, possibly even at the higher object range, while still dealing with the software decompression load.

2. Baselessly suggests that "a fully locked 60 fps certainly in a big part of the game of travelling will come at very expensive cost and right now may not be fully possible", while Alex's video shows that at roughly PS5 Performance mode matched settings it's possible to consistently exceed 90-100fps in heavy traversal areas on the fastest current CPUs. And there are a lot of CPUs the i9-12900K is less than 50% faster than.

3. Makes the ridiculous claim that you need an RTX 3070 to equal the PS5's graphical performance, whereas Alex's video shows a 2060 Super achieving pretty much the same level of performance at the same settings, i.e. 1440p, optimised settings, with a less aggressive form of DRS. That said, DRS does screw around with this comparison a fair bit, and it would have been nice to see some more direct performance comparisons in Alex's video, e.g. what GPU it would take to lock 60fps at 1440p + IGTI to 4K + optimised settings, and ditto at the PS5's lowest DRS resolution bound. That would give us an upper and lower GPU performance bound to match the PS5.
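
For anyone wondering what that 55 vs 60fps gap in point 1 means in CPU terms, here's the frame-time arithmetic:

Code:
# Frame-time view of the 55fps drops vs a 60fps target (point 1 above).
def frame_ms(fps: float) -> float:
    return 1000.0 / fps

drop, target = 55, 60
gap = frame_ms(drop) - frame_ms(target)
print(f"{drop}fps = {frame_ms(drop):.1f} ms/frame, {target}fps = {frame_ms(target):.1f} ms/frame")
print(f"gap = {gap:.2f} ms of CPU time to claw back per frame")

That's ~1.5ms per frame, which is well within what two extra cores, or offloading the decompression, could plausibly recover.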
 
IMO it sounded a lot more like transfer speeds were the thing being talked about in the IGN video, not CPU burdens from decompression. The context and wording give that away.

Regardless - BVH build is going to be by far more expensive than CPU decompression - source? Turn RT on and off and watch the insane hit to performance at a CPU-limited resolution.

Good thing Nixxes have told us that they are working on getting BVH building on the CPU to be much faster - it is not optimal yet according to them. Expect better CPU performance at the same settings over the next few patches.
 
IMO it sounded a lot more like transfer speeds were the thing being talked about in the IGN video, not CPU burdens from decompression. The context and wording give that away.

Regardless - BVH build is going to be by far more expensive than CPU decompression - source? Turn RT on and off and watch the insane hit to performance at a CPU-limited resolution.

Yeah good point.

Good thing Nixxes have told us that they are working on getting BVH building on the CPU to be much faster - it is not optimal yet according to them. Expect better CPU performance at the same settings over the next few patches.

Good to hear, thanks.
 
And I think this quote is a very good summary. The 2700X is not a bad representative of the PS5 CPU hardware, as you wanted to claim (it's the opposite: it's very good in terms of hardware performance level similarity), though due to lower-level software and the strong PS5 I/O you need a much better CPU on PC to get similar performance.

I didn't claim that the 2700X was a "bad representative" of PS5 CPU performance in terms of processing ability - as I have repeatedly stated, they are fairly close in the ways you can test in simple benchmarks. I even linked to benches showing this, after you mistook those PCI-e 2.0 x4 bottleneck benchmarks for being some kind of evidence that PS5 was awful in comparison. Which clearly, it isn't.

What I said was that on the PC a 2700X is going to hobble performance, and hobble performance across the board, in a way that simply does not apply to the PS5. And that's true. I also said that the 2700X is a bad CPU to use if you wish to compare performance of other aspects of the PC to the PS5. That's also true.

The PS5 CPU in the 4700S is lower clocked, has less cache, and sees higher memory latency than the 2700X, but it still manages to trade blows at 3.5 GHz (or potentially lower). That should tell you how fucking awful the 2700X is for judging the PC as a platform in its entirety vs the PS5.

NXG's PC performance is consistently poor compared to other mainstream PCs, but that doesn't stop him from drawing all kinds of conclusions (unrelentingly with a positive spin for PS5) based upon his system's shitty performance.


In latency-sensitive tests, because it's paired with GDDR6 instead of DDR4. Those tests aren't necessarily representative of games. Especially not console games, which will be optimised to work around such weaknesses and play to the GPU's strengths. It's also hobbled by an insufficient cooler which likely introduces throttling.

I'd missed the cooler thing. We can add that to the list I guess...

It shouldn't be a surprise that the PS5 using a dedicated hardware decompression unit to move that activity off the CPU results in a higher CPU load, and lower performance, on a PC CPU of similar capability.

That said, given the streaming throughput maxes out at around 500MB/s, I expect the BVH updates are by far the bigger CPU performance hog here.

Yeah, my guess is that the acceleration structures for RT are hitting the CPU hardest of all. Given that RT on the PC can be using massively higher quality assets, it's possible that the PC is having to work with more detailed assets to build its structures, plus there's possibly also some additional overhead involved in using APIs to move data across the PCI-e bus and into GPU RAM every frame.
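
To get a feel for why building acceleration structures chews CPU, here's a toy top-down BVH build in Python. It's nothing like the SAH-quality builders and refit paths a real engine uses; it's just a sketch of the sort-and-split work involved:

Code:
# Toy top-down, median-split BVH build over axis-aligned boxes.
# Real engines use far more sophisticated builders (SAH, refitting, etc.).
import random
import time

def build_bvh(boxes):
    """Recursively split boxes at the median of their widest centroid axis."""
    if len(boxes) <= 4:                                   # leaf threshold
        return ("leaf", boxes)
    centroids = [[(lo[i] + hi[i]) / 2 for i in range(3)] for lo, hi in boxes]
    # Split along the axis where the centroids are most spread out.
    axis = max(range(3), key=lambda a: max(c[a] for c in centroids) - min(c[a] for c in centroids))
    boxes = sorted(boxes, key=lambda b: b[0][axis] + b[1][axis])
    mid = len(boxes) // 2
    return ("node", build_bvh(boxes[:mid]), build_bvh(boxes[mid:]))

def random_box():
    c = [random.uniform(0.0, 1000.0) for _ in range(3)]
    return tuple(v - 1.0 for v in c), tuple(v + 1.0 for v in c)

boxes = [random_box() for _ in range(100_000)]            # lots of instances in one city block
start = time.perf_counter()
bvh = build_bvh(boxes)
print(f"built a BVH over {len(boxes)} boxes in {(time.perf_counter() - start) * 1000:.0f} ms")

Do anything like that every frame, over far more detailed geometry, and it's easy to see where the CPU time goes - and why the BVH work came up when Nixxes were asked about it.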

Maybe (maybe?) faster GPUs with more memory bandwidth can get this done and set up ready for rendering faster, and so take a proportionately smaller hit to performance?

Give me one example showing the PS5 CPU is a 2700X killer ;d I'm sure these CPUs are either close in terms of raw hardware performance (the PS5's, though on a newer arch, is lower clocked and has some FPU hardware removed) or the 2700X even has some leverage.

You've gone in a matter of posts from claiming that "we literally have benchmarks and the 2700X is winning by quite a margin", which isn't true once you stop benchmarking the PCI-e 2.0 x4 bus, to now demanding other people provide "one example showing the PS5 CPU is a 2700X killer".

Dude, you're better than this.
 
So you can actually get more performance on PS5 thanks to its custom I/O. Interesting. That could explain the big performance delta in some scenes (mostly fast traveling at crossroads) in the UE5 demo between PS5 and XSX.
If the XSX is not using DirectStorage, which can do the decompression without it happening on the CPU, then I don't really see how this explains anything - except maybe that if the hardware on the XSX is not being used, it suffers from the same pitfalls as the PC from doing decompression on the CPU. If that is the case.
 
The only good thing about NXG's videos is that he uses a more 'typical' PC for his comparisons, so it's easier to see how the average Joe's PC would run the game.
 
So you can actually get more performance on PS5 thanks to its custom I/O. Interesting. That could explain the big performance delta in some scenes (mostly fast traveling at crossroads) in the UE5 demo between PS5 and XSX.
There are a lot of assumptions that would have to hold for this to be the case. But yes, if you manage to push the bottleneck onto the decompression unit and the amount of data streamed in is high enough, you start to put pressure on areas of the engine that previously didn't have this issue.

That being said, one would have to assume that the Series consoles are not streaming in data optimally, which I think is unlikely. There is likely another reason for the performance dips, as we have found all sorts of random frame rate losses on the Series consoles.
 
You've gone in a matter of posts from claiming that "we literally have benchmarks and the 2700X is winning by quite a margin", which isn't true once you stop benchmarking the PCI-e 2.0 x4 bus, to now demanding other people provide "one example showing the PS5 CPU is a 2700X killer".

Dude, you're better than this.
Sure I am. If at this point you still don't get why the 2700X is a good hardware representation of the PS5 CPU, I'm done with this subject ;) Your imagination is stronger than any proofs (btw, I also posted pure CPU benchmarks without PCI-e influence, but then apparently the only reason for the results was the memory type :d)
 