Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

His 2700x is Zen+ while PS5 is Zen 2

So I have no idea where you get the idea that they're "quite similar" in performance, as the PS5 is closer to a 3800X, which is substantially faster in Spider-Man than the 2700X.
You know it's not full PC Zen 2, and we have pure CPU benchmarks (excluding memory bus limitations) that show they are indeed close.
 
You know it's not full PC Zen 2, and we have pure CPU benchmarks (excluding memory bus limitations) that show they are indeed close.

No, you think we have pure CPU benchmarks, when what you're actually doing is gauging the performance of a PC 'equivalent' CPU that's hampered by PC OS and API overheads that the PS5's CPU doesn't have to deal with :rolleyes:

That's like trying to gauge how an AMD HD 7850 GPU performs in a PC by testing how it performs inside the base PS4 (it's wrong)
 
This is a complete misrepresentation of the situation. Yes it's common knowledge that PC's need more system memory than consoles to run the same game and certainly you can claim that as an architectural advantage if you wish - it's hardly news. But it's no architectural disadvantage that PC's have to use a portion of VRAM for non-game related applications and services. The consoles also reserve (a bigger) chunk of VRAM for that exact same purpose.

Ultimately you've taken a PC with half the VRAM of the PS5, found it's running into massive VRAM bottlenecks in comparison to the PS5, and are now trying to blame that on some architectural deficiency in the PC that results in it wasting RAM and processor cycles. When in fact, the issue is simply that the PC you're using has less VRAM than the console! Had you even 10-12GB of VRAM (much less overall memory than the PS5) you would have mostly or entirely mitigated that VRAM limitation and the PC would have performed completely in line with where we'd expect it to given its usual relative performance in relation to the PS5. Yes, you would still need more system RAM, but that's a given, and always has been.

As to UMA vs separate memory pools. This is an architectural advantage from a developer point of view to be sure. But if the separate memory pools are handled properly then it's not going to cause some disproportionate loss of compute or memory performance as you seem to be suggesting. In fact split pools have a not insignificant advantage over a unified pool from a raw performance perspective due to the lack of bandwidth contention, and lower latency on the CPU side.
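To make the contention point concrete, here's a toy back-of-envelope model (the bandwidth figures are paper specs for a PS5-class unified pool versus a 448 GB/s discrete card with dual-channel DDR4-3200; the CPU-traffic and penalty figures are purely illustrative assumptions, not measurements):

# Toy model: GPU-visible bandwidth with a unified pool vs. split pools.
# Bandwidth figures are paper specs; the CPU traffic and contention penalty
# below are illustrative assumptions, not measurements.

unified_bw = 448.0        # GB/s GDDR6 shared by CPU and GPU (console)
gpu_bw = 448.0            # GB/s dedicated GDDR6 on the discrete card
cpu_bw = 51.2             # GB/s dual-channel DDR4-3200 on the PC side

cpu_traffic = 20.0        # GB/s of CPU memory traffic (assumed)
contention_penalty = 1.5  # each GB/s of CPU traffic assumed to cost ~1.5 GB/s of
                          # effective GPU bandwidth due to access interleaving

console_gpu_effective = unified_bw - cpu_traffic * contention_penalty
pc_gpu_effective = gpu_bw  # CPU traffic lives in its own DDR4 pool

print(f"Console GPU-visible bandwidth under CPU load: ~{console_gpu_effective:.0f} GB/s")
print(f"PC GPU bandwidth (no contention):              {pc_gpu_effective:.0f} GB/s")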



If you acknowledge that the CPU is a bottleneck in those tests (Performance RT for the 2070 and Fidelity for the 6800) then why on Earth do you draw GPU-to-GPU performance comparisons from those tests? You literally say things like "the PS5 is x% faster than the RTX 2070 / RX 6800 across this benchmark run". And yet here you are admitting that the GPU is CPU limited.
I am done replying to things I call out and explain clearly in the video, even timestamped to save you the effort, and quoted.

"Therefore, the metrics here reflect the test of not the GPU, but the rest of the system elements"

 
Really bizarre that people attack NXGamer for using a CPU quite similar in performance to the PS5's CPU while ignoring DF's use of a high-end CPU. In the end you have benchmarks from completely different approaches that we should appreciate. It's all plastic or metal game toys, chill out people.
It is also hugely bizarre when NXGamer wants to emulate PS5 performance to a T using a similar CPU, while using a GPU with a ~7.5 GB VRAM budget. Convenient? I think not. He could use an RTX 3060 to extrapolate where the PS5 lands, for example, but he chooses not to.

Also, as others said, Zen+ was a horrible architecture. It has enormously higher inter-CCX latencies compared to Zen 2 CPUs. His claim that most people have "worse" CPUs than him is hugely misleading. The majority of mid-range gamers have Ryzen 3600s in their rigs, which outpace his 2700 by a huge margin. Zen+ also aged horribly; it only performs somewhat okay when you pair it with very fast RAM and tight timings, which a casual user usually won't do.

Worse still, even an i7 7700K from 2017 OUTPERFORMS the 2700 in many games, SPIDERMAN included. A casual i5 10400F DESTROYS the 2700's performance. That's literally a $120 budget CPU.



[Image: Vd2RJpE.jpg - CPU benchmark chart]



For starters, his CPU is, at its core, still a 2700 at 3.8 GHz. He also paired it with 2800 MHz RAM with unknown timings. Most Zen PC builders were using 3000/3200 MHz kits back then, and even 3600 MHz is mostly mainstream now.

In the test above, the 2700X clocks up to 4-4.1 GHz, alongside 3200 MHz RAM. With those specs, the 2700X is 18% slower than an entry-level 10th gen i3. 10TH GEN. An almost 3-year-old ENTRY LEVEL CPU. A midrange 3-year-old i5 is also leaving it in the dust, by a 28% margin. And then comes the 3600. The 3600 is far more popular than the 2700/2700X ever was. Most people who had a 1600/2600 initially upgraded to a 3600 after seeing how badly they were getting bottlenecked here and there. The 6-CORE ZEN 2 3600 outperforms the 8-CORE ZEN+ 2700X by a huge 35% margin.

Now, he has lower clocks (3.8 GHz versus the typical 4 GHz of the 2700X) and less RAM bandwidth (2800 MHz versus the typical 3200/3600 MHz). The gap would be even wider against a Ryzen 3600 user running 3600 MHz RAM, which is quite cheap nowadays and easy to get hold of. I'm not even delving into Zen 3 territory, where a Ryzen 5600 (non-X) is decimating what the 2700 has to offer.

He loses around 6% due to his CPU clocks, and most likely another 20% due to how slow his Infinity Fabric is (going from 3200 to 2800 also hurts inter-CCX latency considerably, since the IF runs at 1.4 GHz instead of 1.6 GHz). Overall you're looking at a performance profile that is almost 45% slower than a usual, regular Zen 2 setup. Zen 2 itself is usually 15-20% faster clock-for-clock than Zen+.

There's also a huge point everyone seems to ignore: NVIDIA's software scheduler overhead. NVIDIA does not have a dedicated hardware scheduler, so its driver uses roughly an extra 15-20% of CPU resources to achieve a similar task on PC compared to AMD cards. This specifically hurts older Zen CPUs, where NVIDIA's driver constantly uses all available threads for scheduling work, which again incurs CCX latency penalties.
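Just to illustrate how those rough percentages stack (a back-of-envelope sketch using the estimates above, not measured data; how much the individual factors overlap is unknown):

# Back-of-envelope sketch of how the rough deficit estimates above compound.
# All percentages are the estimates quoted in the post, not measurements.

clock_deficit = 0.06      # ~6% from running 3.8 GHz instead of ~4.0 GHz
ram_if_deficit = 0.20     # ~20% from 2800 MHz RAM / 1.4 GHz Infinity Fabric
zen2_ipc_gain = 0.175     # Zen 2 assumed ~15-20% faster clock-for-clock than Zen+
driver_overhead = 0.15    # ~15% NVIDIA software-scheduling cost (assumed Zen+ only)

# Performance his 2700 retains relative to a well-fed 2700X:
retained = (1 - clock_deficit) * (1 - ram_if_deficit)        # ~0.75

# A stock Ryzen 3600 setup relative to that same well-fed 2700X:
zen2_setup = 1 + zen2_ipc_gain                               # ~1.175

gap = 1 - retained / zen2_setup
gap_with_driver = 1 - retained * (1 - driver_overhead) / zen2_setup

print(f"Deficit vs. a typical Zen 2 setup:            ~{gap:.0%}")
print(f"...with the driver overhead on the Zen+ side: ~{gap_with_driver:.0%}")
# Prints roughly 36% and 46%, i.e. the ballpark of the ~45% figure above.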

Overall, Zen+ and RTX 2000/3000 is not a good pairing for high refresh rate scenarios. Even Zen 2 is not optimal, but it made huge improvements over Zen+. On average you may not see huge gains, but in actual gaming scenarios Zen 2 can take the ball and run away with it.

Even a casual Ryzen 3600/2060 Super rig at 1440p would outperform his PC due to how badly CPU bottlenecked he is at 1440p. As I said, a Zen+ CPU is not representative of a "casual" gamer with "casual" specs. These CCX issues, penalties, latencies, none of them matter for a cheap, budget i5 10400F. That CPU just handles the game fine. Anyone who pairs an RTX 2070 or above with a Zen+ CPU is doing something horribly wrong. At a minimum, you would pair a 2070 or above with an 8700K, 10400F, or Ryzen 3600. If you play with a 2600, 2700 or 2700X, you're simply accepting suboptimal performance in most cases.

All in all, from NVIDIA's driver overhead specifically affecting older, latency-sensitive Zen CPUs, to most casual gamers not even using those outdated CPUs, this push for "2700 is a good match for PS5!" is pointless. He specifically, purposefully refuses to use his sweet Zen 2 3600 with his 2070, because then the CPU bottlenecked situations would largely be solved. As I said, Zen+ was never designed with high refresh rate gaming in mind. On top of that, it is still a different architecture from Zen 2, with higher latencies and architectural problems.

If most gamers had Zen+ CPUs, I would agree with him. But no, I literally have 50+ gamer friends, and only a handful of them use a decrepit Zen+ CPU. Most of them are either on 9th gen Intel CPUs or Zen 2 CPUs or higher. Even if he used a casual 10400F in his testing, he would get at least 40-50% better CPU-bound performance than he does now. There are also so many factors in his PC specifically making the performance even worse: lower clocks, lower RAM speeds and so on. The chart speaks for itself. Even at its most optimal, the 2700X falls hugely short of low-end i3s and midrange i5s.

He will call all these points a tangent or whatever. Lol.
 
Some here really overestimate how much effective video memory the PS5 has.

The CPU eats A LOT of that away. In reality, these consoles have 7-9 GB available as video memory, not over 10 GB. It's why in many games, like Control and Watch Dogs Legion, they are using high or medium textures, while a dedicated 10 GB card has no issues playing these games with ray tracing and texture quality packs enabled. The more demanding a title gets in its physics and general game logic, the more DRAM it needs, so the available video memory shrinks. PCs obviously do not have that issue as they have dedicated RAM available. I suspect that in the future, when games get much more dependent on the CPU, less video memory than now will be available on the PS5. That is one of the advantages the PC architecture has over the consoles, as all that stuff can be offloaded into DRAM so the PC can use its entire VRAM for graphics.
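As a rough sketch of how that budget breaks down (the OS reserve is a reported ballpark figure and the CPU-side range is my assumption, purely for illustration):

# Rough console memory budget sketch. The OS reserve is a reported ballpark
# figure and the CPU-side data range is an assumption for illustration only.

total_memory = 16.0      # GB of GDDR6 in the console
os_reserve = 3.5         # GB reportedly held back for the OS (ballpark)
game_budget = total_memory - os_reserve           # ~12.5 GB for the game

for cpu_side in (3.5, 5.5):   # assumed CPU-side data: logic, physics, audio, streaming
    gpu_side = game_budget - cpu_side
    print(f"CPU-side data {cpu_side:.1f} GB -> ~{gpu_side:.1f} GB left for graphics")
# Prints roughly 9 GB and 7 GB, which is where the 7-9 GB range above comes from.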

If you compare the texture quality of dozens of games cross-platform, you will find that in many games the PS5 and XSX are running shockingly low texture quality, even in comparison to an 8 GB card on PC.

Now, Spider-Man shows different behaviour. We cannot tell whether 8 GB is enough at PS5-equivalent settings, as the game only uses around 6.4 GB of VRAM. We will see when Nixxes patches it.
 
We need to stop the attacks on @Michael Thompson.
Also against each other, as I'm seeing these flare up as well.
Just stick to the arguments presented and do not attack him as a person, please. He is like any other forum member here; we need to stay consistent. It is impossible for him to defend both his character and his work at the same time. So just do him a favour and leave his character out of it.
Present good arguments and posts, with good counterpoints/facts/evidence, or just sit on the sidelines and read. We do not want to be in a situation where we encourage piling on and things getting unnecessarily hostile.

This may need to be moderated more heavily if the thread can't settle into formal discussion.
 
I think the discussion here got too far out of hand. People are giving too much credit to Sony ports on PC. Too many conclusions should not be drawn from them.
So far, I don't think their implementation on PC is great. Just because these ports didn't exist before does not mean they should get a free pass.

I think it's a solid looking port (I don't have it or RT hardware), and it's very encouraging, but yeah I think too much has been made of it. @Michael Thompson describes the port as "maximising the PC platform", but I really don't think it does.

It's a great start, and I can see why he's enthusiastic, and yeah it's brilliant to see more next gen games stretch PC hardware again after last gen, but the PC port at this time is leaving a lot on the table.

For example, it doesn't scale well beyond about six cores, it prematurely limits the physical VRAM the GPU uses, and past an arbitrary threshold it pushes that back even further, driving PCIe usage up. And as the game is already sending a fair bit of traffic over PCIe in RT mode, this is an issue. Definitely one people should know about.

Going further, the game isn't using DS, and it doesn't appear to be using SFS, which might, along with high-speed, low-latency streaming, be able to reduce the VRAM footprint of very high-quality textures and improve performance on 8 GB cards. And I'm not saying it should be using those things, I'm just saying that they're available now (SF for a while, actually) and it doesn't.

So it looks to me like a great start and bodes well for the future (of both ports and of Nixxes), but the game isn't some supreme benchmark of what the PC platform can do.

Nevertheless, some good contributions came out of the discussion.

Yeah, it's been great especially to see some new posters dropping in with some really interesting data to share. Seeing users highlight LOD bugs across a range of hardware has been informative too.

I think if Michael can perhaps be a little more open to feedback (which can admittedly be very hard if hostility has built up), he can get some benefit from being here, just as people can benefit from his observations.

Edit: and as @iroboto points out above, we need to maintain a proper approach to handling conversation here, and I say that as someone who has been critical of some of NXG's claims in the past, and who goes off on one from time to time too.

Really bizarre that people attack NXGamer for using a CPU quite similar in performance to the PS5's CPU while ignoring DF's use of a high-end CPU. In the end you have benchmarks from completely different approaches that we should appreciate. It's all plastic or metal game toys, chill out people.

Nothing wrong with a range of configurations being tested, but the lower-end your CPU, the less meaningful your GPU scaling extrapolations will be. Especially if you're testing a game that struggles on your CPU.
 
Some here really overestimate how much effective video memory the PS5 has.

The CPU eats A LOT of that away. In reality, these consoles have 7-9 GB available as video memory, not over 10 GB. It's why in many games, like Control and Watch Dogs Legion, they are using high or medium textures, while a dedicated 10 GB card has no issues playing these games with ray tracing and texture quality packs enabled. The more demanding a title gets in its physics and general game logic, the more DRAM it needs, so the available video memory shrinks. PCs obviously do not have that issue as they have dedicated RAM available. I suspect that in the future, when games get much more dependent on the CPU, less video memory than now will be available on the PS5. That is one of the advantages the PC architecture has over the consoles, as all that stuff can be offloaded into DRAM so the PC can use its entire VRAM for graphics.

If you compare the texture quality of dozens of games cross-platform, you will find that in many games the PS5 and XSX are running shockingly low texture quality, even in comparison to an 8 GB card on PC.

Now, Spider-Man shows different behaviour. We cannot tell whether 8 GB is enough at PS5-equivalent settings, as the game only uses around 6.4 GB of VRAM. We will see when Nixxes patches it.
I was also surprised when they omitted ray tracing on PS5/Xbox in Far Cry 6. FC6 has a super light ray tracing implementation where even a 6600 XT with its wonky bandwidth gets 1080p/60 FPS. I too believe that they ran into some kind of memory limitation, and it became a choice between ultra textures and ray tracing, and they chose ultra textures. WD Legion is quite the opposite: it has ultra textures on PC and ray tracing. Funnily enough, consoles do not use the ultra textures but do use ray tracing, despite WD Legion's RT being much, much heavier than FC6's. So this tells me that consoles also do not have enough memory to handle both RT and ultra textures at 4K/upscaling modes in the case of WD Legion and FC6. Whether it is warranted or justified, however, is beyond me.

By the way, you won't get a good experience in FC6 with 10 GB with ultra textures at 4K and ray tracing enabled, even with upscaling. The 3080 has the grunt, but either performance will tank after a certain amount of playtime, or the game engine will downgrade textures, which defeats the purpose of having those textures. At 1440p, 10 GB is actually enough for both. Consoles use 4K LODs, despite having lower resolution dips, so we have to take 4K as the metric here for memory capability.
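For a sense of why ultra/HD texture packs chew through memory so quickly at 4K, here's a rough footprint calculation (the format, map counts and material counts are illustrative assumptions; real games mix many formats and resolutions):

# Rough VRAM footprint of 4K textures in a block-compressed format.
# BC7-class formats store 16 bytes per 4x4 block, i.e. 1 byte per texel.

bytes_per_texel = 1.0         # BC7-class compression
mip_overhead = 4.0 / 3.0      # a full mip chain adds roughly one third

def texture_mib(size):
    return size * size * bytes_per_texel * mip_overhead / (1024 ** 2)

single_4k = texture_mib(4096)       # ~21.3 MiB per 4096x4096 map
maps_per_material = 3               # assumed: albedo + normal + mask
materials_resident = 150            # assumed number of resident unique materials

total_gib = single_4k * maps_per_material * materials_resident / 1024
print(f"One 4K map: ~{single_4k:.1f} MiB")
print(f"{materials_resident} materials x {maps_per_material} maps: ~{total_gib:.1f} GiB")
# Several GiB of texture data alone, before frame buffers, geometry and
# RT acceleration structures are counted.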
 
I was also surprised when they omitted ray tracing on PS5/Xbox in Far Cry 6. FC6 has a super light ray tracing implementation where even a 6600 XT with its wonky bandwidth gets 1080p/60 FPS. I too believe that they ran into some kind of memory limitation, and it became a choice between ultra textures and ray tracing, and they chose ultra textures. WD Legion is quite the opposite: it has ultra textures on PC and ray tracing. Funnily enough, consoles do not use the ultra textures but do use ray tracing, despite WD Legion's RT being much, much heavier than FC6's. So this tells me that consoles also do not have enough memory to handle both RT and ultra textures at 4K/upscaling modes in the case of WD Legion and FC6. Whether it is warranted or justified, however, is beyond me.

By the way, you won't get a good experience in FC6 with 10 GB with ultra textures at 4K and ray tracing enabled, even with upscaling. The 3080 has the grunt, but either performance will tank after a certain amount of playtime, or the game engine will downgrade textures, which defeats the purpose of having those textures. At 1440p, 10 GB is actually enough for both. Consoles use 4K LODs, despite having lower resolution dips, so we have to take 4K as the metric here for memory capability.
Yup, that's also the reason why Cyberpunk only uses RT local shadows in fidelity mode on the consoles. I found that RT local shadows are much lighter on VRAM compared to the other options. The headroom should be there to run at least sun shadows and maybe even medium RT lighting at a lower resolution, but then memory use skyrockets, which is very likely why you're only getting local shadows.
 
Yup, that's also the reason why Cyberpunk only uses RT local shadows in fidelity mode on the consoles. I found that RT local shadows are much lighter on VRAM compared to the other options.
Funny you mention Cyberpunk, a game arguably looking almost a gen ahead of Spider-Man while running RT reflections, shadows, and GI, all while still being able to adhere to an 8 GB VRAM budget.

Reminds me of Crysis 3 back then, which ran perfectly on 2 GB cards. We had tons of games that chewed through 4-6 GB of VRAM while looking marginally worse than C3. VRAM itself cannot be a huge constraint, especially if you optimise for it. But I respect developers who leave lower VRAM amounts behind.
 
DF's approach has been used by damn near every PC reviewer for the last 20+ years. Why do you think Hardware Unboxed uses a test rig with 32GB of DDR5, a beefy CPU, a high-end motherboard, and other top-tier components to run their GPU tests? Why do you think they use a 3090 Ti at 1080p to test CPUs? This is to get rid of bottlenecks and test specific pieces of hardware. Almost no one online will test an entire system because there are just too many variables to take into account.

Criticize DF by all means for that but then, criticize the whole PC review industry of the last two decades.
That doesn't make it right for every situation, though. I think testing new-release GPUs with high-end systems, even if the GPU is of a lower performance tier, is a good idea so you can judge its performance against other GPUs, but if you are testing a specific PC to compare it to a console, you have to find some point of data to normalize to. NXG has tried to normalize for graphics settings, and that's a fine approach. If I was going to be critical of his work, I would say that we do know some settings aren't like for like because the settings don't allow for it.

Personally, I would try to normalize for performance. What settings can we have on and get 60fps, 30fps, etc. Want RT, turn these settings on to get the desired framerate. And then compare image quality.

Now, he has lower clocks (3.8 GHz versus the typical 4 GHz of the 2700X) and less RAM bandwidth (2800 MHz versus the typical 3200/3600 MHz).
I have an aging Ryzen 5 2600X with 3200 MHz memory, and I can't remember a CPU I've ever owned that's more affected by memory timings. It used to be that adjusting timings would get you 2-5% performance in most cases, and maybe 10% in edge cases on other CPUs, but when I built this system I remember testing Tomb Raider (Rise of the Tomb Raider, I think) and getting something like 20% better framerates in the benchmark. I would suspect that better memory explains why others here have stated they get better performance using the same CPU.

Some here really overestimate how much effective video memory the PS5 has.

The CPU eats A LOT of that away. In reality, these consoles have 7-9 GB as video memory available, not over 10 GB. It's why in most games like Control and Watch Dogs Legion, they are using high or medium textures, while a dedicated 10 GB card has no issues playing these games with Raytracing and texture quality packs enabled. The more demanding a title gets in its physics and general game logic department, the more DRAM it needs so the available video memory shrinks, while PCs obviously do not have that issue as they have dedicated RAM available. I suspect in the future when games get much more dependend on the CPU, less video memory compared to now will be available on the PS5. That is one of the advantages the PC architecture has in comparison to the consoles, as all that stuff can be offloaded into DRAM so the PC can use it's entire VRAM for graphics.

If you compare the texture quality of dozens of games cross plattform, you will find in many games the PS5 and XSX are running shockingly low texture quality, even in comparison to a 8 GB card on PC.

Now, Spiderman shows a different behaviour. We cannot tell if 8 GB are enough or not in the game at PS5 equivalent settings, as the game only uses around 6.4 GB VRAM. We will see though when Nixxes patches it.
Spider-Man has last-gen roots and a current-gen remaster, though. Its CPU workload is firmly in the last generation, while its GPU workload is in the current one. So on PS5, you would have proportionally more memory available to the GPU if you simply ported the game, and those resources would be used for better textures, higher-res buffers, and of course RT.

Since I don't own Spider-man on PC, I can only judge it based on 2nd hand accounts. But from where I'm sitting it looks more like a combination of user error (settings too high) and settings menu confusion (max texture size not warning of vram limitations, some setting causing unexpected performance penalties, etc) causing many of the issues people are seeing. Maybe some day I'll pick it up and find out for myself.
 
That doesn't make it right for every situation, though. I think testing new-release GPUs with high-end systems, even if the GPU is of a lower performance tier, is a good idea so you can judge its performance against other GPUs, but if you are testing a specific PC to compare it to a console, you have to find some point of data to normalize to. NXG has tried to normalize for graphics settings, and that's a fine approach. If I was going to be critical of his work, I would say that we do know some settings aren't like for like because the settings don't allow for it.
It makes it right for situations in which the objective is to measure the full performance of a specific component. As I said before, almost no one tries (and I don't think DF does) to test entire systems because there are too many things that might differ. Once the viewer/reader knows how his component should perform in an ideal scenario, they decide what's best for them. Testing a system with a 2600+2070 is useful for those with that config but doesn't tell much to someone with a 10900K+2070, hence why most reviewers review pieces of hardware, not rigs.
 
I have an aging Ryzen 5 2600X with 3200 MHz memory, and I can't remember a CPU I've ever owned that's more affected by memory timings. It used to be that adjusting timings would get you 2-5% performance in most cases, and maybe 10% in edge cases on other CPUs, but when I built this system I remember testing Tomb Raider (Rise of the Tomb Raider, I think) and getting something like 20% better framerates in the benchmark. I would suspect that better memory explains why others here have stated they get better performance using the same CPU.
I'm glad AMD made huge improvements to that architecture. Nowadays you could plop 2133 MHz RAM on a Ryzen 5600 and it would most likely decimate my 2700X running 3466 MHz RAM. That was a brief transition era where they tried something new, and they also got very good yields from the way they produced those CPUs. Zen 2 still used a double-CCX layout for its 6-core and 8-core parts, but they made huge improvements to internal latencies. The latencies are still there, but Zen 3 6-core and 8-core CPUs are finally free of the dreaded CCX split. I had hoped that if AMD kept pushing double-CCX on their consumer CPUs, developers on PC would have to work around it. Now that it's a thing of a bygone era, I believe the situation will only get worse with time. I don't think developers will sit and optimize around double CCX, trying to avoid cross-CCX communication, when most current consumer CPUs do not have to be optimized around such a design.

I think AMD gave users a clear upgrade route, so they also have the cure to it. AM4 is one hell of a socket.
 
Some here really overestimate how much effective video memory the PS5 has.

The CPU eats A LOT of that away. In reality, these consoles have 7-9 GB available as video memory, not over 10 GB. It's why in many games, like Control and Watch Dogs Legion, they are using high or medium textures, while a dedicated 10 GB card has no issues playing these games with ray tracing and texture quality packs enabled.

I think in Control, though, there is actually no difference in resolution between Medium and Very High textures; like the Resident Evil games, it's just a streaming budget setting to reduce texture pop-in. Maybe I've missed it, but I've never seen a texture in Control on PS5 that's actually lower-res than the PC version - the only difference is they actually present the highest-res version much quicker than the PC version does out of the box. Thankfully, the mod community came to the rescue, to the point where the extra VRAM allocated to that streaming budget can actually be taken advantage of and get those high-res textures displayed even quicker than on the PS5. Ridiculous that it's needed at all, but it works.

If you compare the texture quality of dozens of games cross-platform, you will find that in many games the PS5 and XSX are running shockingly low texture quality, even in comparison to an 8 GB card on PC.

Do you have other examples? I'll grant you Watch Dogs Legion for sure, going by DF's video, but actual native PS5/XSX titles? PS4 titles with PS5 frame rate unlock patches, of course, but I'm just not aware of that many native titles that have worse textures than the PC version.

There is a good point though that I've poked fun at before - there's this great tendency to make statements about architectures based on the last available port, either PC->console or vice versa. I mean, Yakuza: Like a Dragon can run at 4K/60 on my PC, whereas on the PS5 it's 1440p for 60 fps. Wow! That's more than double the performance, PC architecture ftw!!!!
 
In multiplatform titles, 8 GB of VRAM seems to be a good match for the current-gen consoles, if not more generous on PC. It shouldn't be too far off at least, considering the total memory budget on consoles allocated to games (both VRAM and CPU RAM).
Ports are again a bit so-so, be it PC>PS5 or vice versa; it's going to swing the original platform's way no matter how well you optimize the port in like-for-like situations on equal hardware.
 
Really bizarre that people attack NXGamer for using a CPU quite similar in performance to the PS5's CPU while ignoring DF's use of a high-end CPU. In the end you have benchmarks from completely different approaches that we should appreciate. It's all plastic or metal game toys, chill out people.

I really don't see the equivalence here at all. DF are doing exactly what they should be doing and exactly what PC reviewers have been doing for decades when trying to directly compare GPU performance. And that is to remove all traces of a CPU bottleneck by using the most powerful CPU possible to isolate GPU performance.

It's not as if DF don't also show how the game performs on a more typical system, just like NXG is doing (Alex uses a 3600X for this). But when they do this, they make it very clear they're testing overall system performance on that system only, they call out the CPU bottlenecks clearly where they exist, and they don't try to draw conclusions about GPU performance which they then extrapolate to higher-end GPUs.

I think the discussion here got too far out of hand. People are giving too much credit to Sony ports on PC. Too many conclusions should not be drawn from them.
So far, I don't think their implementation on PC is great. Just because these ports didn't exist before does not mean they should get a free pass.

Nevertheless, some good contributions came out of the discussion.

I wouldn't say it's a bad port at all. Nixxes are one of the best porting studios in the business and know the PC architecture very well. For the most part this seems like a great port, at least now that it's had a couple of patches. The problem though is that it was originally a 'to the metal' PS4 game designed for a unified memory architecture, with no thought whatsoever put into designing the engine for use on split memory pools and the PC API stack. So it must have been a mammoth undertaking to get it working (as Alex's video demonstrates). It's had a few issues but most of them seem to have been addressed or improved within a couple of weeks of launch. But yes, the texture streaming bug and the under-allocation of VRAM are still two pretty serious bugs that need to be dealt with asap.

And you think you are not biased? On a PC forum? Everybody has preferences.

B3D isn't a PC forum ;) In fact, doesn't it have much higher footfall in the console sections than the PC ones? Many of the people posting in this "console technology" thread are console gamers too. I think there's just a higher standard here for technical accuracy, regardless of your platform preference, than you see at most other forums.

I am done replying to things I call out and explain clearly in the video, even timestamped so save you the effort and quoted.

"Therefore, the metrics here reflect the test of not the GPU, but the rest of the system elements"


Thanks for taking the time to identify that, and apologies if I or anyone else here has come across too aggressively. I think several people have strong concerns with the framing you use in many of your videos and, in some cases, the technical content presented, but I'm sure we're all glad you've taken the time to come here and discuss those concerns directly, even if we're unable to come to agreement on them.

Regarding the timestamps above, yes, fair enough, you do call out a couple of times that the game is CPU bottlenecked in Performance RT mode. But as I said earlier, at other points in the video you draw direct GPU performance conclusions from this section. Take this for example:


Here you state "you can close this up with much faster gpu and cpu motherboard memory and the end result of this is the ps5 is already able to push ahead on this largely last generation based title and the perceived higher specification pc hardware is struggling".

So you're stating, in a CPU limited situation, that you need a faster GPU to equal the PS5. You also talk about perceived higher specs when in fact your PC is lower spec than the PS5 in most regards.

And then here, mere seconds after the point you linked above, you present a whole section on the relative GPU specs and even use these to justify why you don't think the RX 6800 is performing more than a few percent higher than the PS5 - in a CPU limited sequence :confused: :


In your words here: "but yet again we do have moments that can be two percent faster on the rtx machine and 65 faster on the playstation 5. and this will be because of the various requirements and strengths of each machine on a frame by frame basis bandwidth is close between the two gpus but this is one metric with fill rate being considerably high on the ps5 thanks which drops in higher frequency interestingly even against the overclock 2070 the ps5 has around 10 fill rate gain on pixels and textures on paper at least and in reality this is likely going to be even bigger and in comparison the rx 6800 has between 35 to 41 higher fill rate and 12 percent higher bandwidth which explains why a full 4k resolution at the same settings as ps5 infidelity mode are only around the 45 fps level on average and in fact due to the other areas can only be three percent faster across the same test run and settings aside at fixed 4k output"

You are quite clearly linking the PC's performance here to the specific specifications of the GPU, despite only moments earlier stating that this sequence would be CPU bound.
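For reference, the fill-rate deltas being quoted there can be reproduced from paper specs (a sketch assuming reference boost clocks, which is my assumption; real sustained clocks vary by card and workload):

# Paper pixel fill rate = ROPs x clock. Clocks below are reference boost
# figures (assumed); actual sustained clocks vary.

gpus = {
    "PS5":                (64, 2.23),  # 64 ROPs @ 2.23 GHz
    "RTX 2070 (FE)":      (64, 1.71),  # 64 ROPs @ ~1.71 GHz boost
    "RTX 2070 (OC ~2.0)": (64, 2.00),  # typical overclock, assumed
    "RX 6800":            (96, 2.10),  # 96 ROPs @ ~2.10 GHz boost
}

ps5_fill = gpus["PS5"][0] * gpus["PS5"][1]
for name, (rops, clock) in gpus.items():
    fill = rops * clock
    print(f"{name:20s} {fill:6.1f} Gpixels/s ({fill / ps5_fill - 1:+.0%} vs PS5)")
# The PS5 lands roughly 10% ahead of an overclocked 2070 and the RX 6800 roughly
# 40% ahead of the PS5, which is the ballpark cited in the video.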

That doesn't make it right for every situation, though. I think testing new-release GPUs with high-end systems, even if the GPU is of a lower performance tier, is a good idea so you can judge its performance against other GPUs, but if you are testing a specific PC to compare it to a console, you have to find some point of data to normalize to. NXG has tried to normalize for graphics settings, and that's a fine approach. If I was going to be critical of his work, I would say that we do know some settings aren't like for like because the settings don't allow for it.

If you're just testing how a specific PC compares to the console at console-matched settings, and nothing else, then I agree there's no problem with using a weaker CPU. It's when you try to make direct GPU-to-GPU performance comparisons in CPU limited scenarios, or when you frame a specific specification bottleneck in that system as a more general architectural deficiency or inefficiency, that it becomes a problem.

Personally, I would try to normalize for performance. What settings can we have on and get 60fps, 30fps, etc. Want RT, turn these settings on to get the desired framerate. And then compare image quality.

Yes, I agree this would also be an interesting approach. I do like the exact setting matches too, but where we see a massive mismatch in performance, as we do here, it might be better to also investigate a different approach like the one above for a more balanced view of the situation.
 