Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

There are two issues here:

1. You state that you're simply comparing how your specific machine (a common combination of CPU and GPU) performs vs the PS5 at PS5-like settings. That would be completely fair IMO if you actually presented it that way in the videos. But you don't. You show whole reels of footage (RT Performance mode) of the 2070 underperforming in CPU limited scenarios while providing commentary on how many percent faster the PS5 is than that GPU. That's simply wrong, because you are not measuring GPU performance there, yet you are giving the impression of the GPU itself being heavily outperformed by the PS5. I'll grant that you do also discuss CPU limitations during those scenes, but you repeatedly flip-flop the messaging back to direct GPU performance comparisons. At one point you even criticize "other" content producers for using high end CPUs (specifically the 12900K, so no prizes for guessing who that criticism was targeted at) in their GPU comparison pieces, which is in fact exactly what they should be doing, and what you should be doing if you want to make direct GPU to GPU performance comparisons. You isolate the tested component's performance by ensuring no other component acts as a bottleneck.

I know you also added a whole section which is likely not CPU limited in the Fidelity matched settings comparison, but that entire comparison is seemingly invalidated by the VRAM limitation, which I'll give you the benefit of the doubt on and assume you simply didn't realize at the time of putting out the video. Sure, you can argue that you're still showing the relative performance of the systems, because that's simply how much VRAM the component you're testing has. But if you're going to do that then you need to make it very clear to the viewer that the 2070 isn't being fully utilised at these particular settings because it's VRAM limited. You don't present it that way. In fact you never once mention VRAM limiting the GPU's frame rate there, and instead frame it as the PS5 GPU simply being more powerful and/or more efficient due to running in a console environment.

2. If you're truly only wanting to show the relative experience possible on two specific systems, as opposed to a deeper architectural analysis of those systems' performance potential, then the basis of the comparison is unfair to begin with. If you want to show how the experience on your machine compares in Spiderman vs the PS5, then limiting the testing to PS5-matched settings which are suboptimal for that machine and ideal for the PS5 is skewing the result. The VRAM limitation which has been demonstrated spectacularly in this thread is a perfect example of that. It's a fact that the 2070 lags the PS5 in VRAM capacity, but it's also (generally speaking) matching or exceeding it in RT capability, and has tensor cores to provide superior upscaling quality at a given internal resolution. So while showing the matched PS5 settings is valid, it should be balanced by showing what experience is possible at more PC-favoring settings. In this case that may have been something like High textures with Very High ray tracing resolution and geometry, 16x AF and DLSS Performance, which should provide a mix of graphics and performance better suited to the 2070's strengths and compare far more favorably with the PS5 experience (see the sketch below).
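To make that last suggestion concrete, here's a rough sketch of the two settings profiles being contrasted. Only the 4x AF and Very High textures on the PS5-matched side are established in this thread; the PC-leaning profile is simply the combination proposed above, not a tested preset:

```python
# Sketch of the two profiles discussed: PS5-matched settings vs. a PC-leaning mix
# that trades texture quality (VRAM pressure) for RT quality, AF and DLSS.
# Values are illustrative, not a verified in-game preset.
ps5_matched = {
    "textures": "Very High",        # the setting implicated in the 2070's VRAM limit
    "anisotropic_filtering": "4x",  # what PS5 Fidelity mode uses per this thread
    "ray_tracing": "PS5-equivalent resolution/geometry",
    "upscaler": "temporal injection at PS5-matched internal resolution",
}
pc_leaning = {
    "textures": "High",             # eases the 8 GB card's VRAM pressure
    "anisotropic_filtering": "16x",
    "ray_tracing": "Very High resolution and geometry",
    "upscaler": "DLSS Performance", # leans on the 2070's tensor cores
}

for name, profile in (("PS5-matched", ps5_matched), ("PC-leaning", pc_leaning)):
    print(f"{name}:")
    for setting, value in profile.items():
        print(f"  {setting:<24} {value}")
```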



So call it out as a bug specific to the PC version. Give the PS5 version all the kudos you want for not having that bug, but don't frame that bug as some kind of general architectural deficiency of the PC platform which "requires a $500-$600 CPU to address".



That's fair enough, but see above. It's not the fact that you call attention to the difference, it's how you frame it as a platform architecture advantage rather than what it is: a software bug.



But framing this as the solution to the problem (which it isn't, because VRAM often has no impact on the issue), thus reinforcing the false argument that this is simply an architectural deficiency that needs to be brute-forced past on the PC... is wrong.



VRAM has been proven incontrovertibly within this thread to be a limitation on the 2070 at the settings you are using.

I don't think anyone has claimed that the CPU is a bottleneck in GPU bound scenarios, only pointed out that you make GPU performance comparisons in CPU bound scenarios.

Personally I think the game is a great port, but it was also a mammoth undertaking and is open to much closer scrutiny than the vast majority of ports, so inevitably a few fairly serious bugs have been identified. As you state, Nixxes seem to be quick to fix them; hopefully that will be the case with the VRAM under-allocation and mip loading issues. Until then, the onus is on testers like you to ensure the public is properly informed where these bugs impact testing, and not to use them to make potentially misleading claims about platform architectural advantages or raw performance.



That's a really good find. I didn't even know that column existed in Task Manager, but I'll be using it in future! iroboto beat me to it, but I just wanted to second his statements about your excellent contributions in this thread. It's certainly changed the way I think about VRAM. Very informative!
I would agree that the CPU is bottlenecking the 2070 by quite a bit. The problem is you have a double standard on where that applies: wouldn't the PS5 GPU, which is paired with the same class of CPU as used in the video, also be bottlenecked in that same situation, and in essence still leave the GPUs in an equivalent position? What it sounds like you want is for the PC GPU to be alleviated from CPU bottlenecks while still allowing the PS5 GPU to be CPU bottlenecked, and I'm sorry, but that's a travesty. We wouldn't do a GPU benchmark between a 3090 and a 3060 where one has a 12900K and the other a 12600K, would we?
 
PS5 does use IGTI in Fidelity Mode with 4K VRR, correct? It's also dynamic, so I'm not sure how close it is. Here is how it runs on my 2080 Ti at native 4K/TAA Fidelity Mode settings in that scene. My 2080 Ti is paired with an i9-9900K.

View attachment 6912

In that scene, it fluctuates between 61-68fps.

Here is how it runs at native 4K with IGTI Ultra Quality.

View attachment 6913

Once again, in that scene, it fluctuates between 75-81fps. There is no DSR in any of those examples. Whatever the case, I think that the PS5 being equivalent to a 3070/2080 Ti is a gross exaggeration. The 2080 Ti is A LOT faster in those scenes.
You have a better CPU. Try the test again with a 2700X or 3700X and watch it fall to the low 50s.
 
Which is true, but that's a bit disingenuous because it's using settings tailored for the PS5. For instance, it runs the game with a paltry 4x AF even in Fidelity Mode, whereas the 2070 has absolutely no issue with 16x. That'd be like taking a scenario with heavy RT effects where the 2070 shines over the PS5 and using those results to prove that RTX 2070 > PS5 and calling it a day. You have to be honest and point out the differences in their strengths and structures. The 2070 is the closest match to a PS5 in the NVIDIA camp, but it still isn't a PS5 GPU, so trying to argue which one is more powerful overall is a fool's errand because they're not 100% the same and are from different manufacturers. It's simply agreed that in general circumstances, where one doesn't have a specific aspect bottlenecked by the game they're running, they should perform in the same ballpark.

In this game specifically, with PS5 settings, the PS5 performs much better than the 2070 due to using settings that take advantage of the PS5's strengths, not because of some insane architectural deficiency inherent to the PC environment.
Isn't the 2070 Super/2080 a closer match in rasterization?
 
There is absolutely some truth in 'console magic/optimization', even if some people here are very reluctant to acknowledge it.

The problem with NXGamer is that he uses very flawed methodology to go about demonstrating things. Really, more to the point, he has a strong bias to start with that leads him to want to demonstrate certain things in the first place, instead of just doing some objective testing and then drawing conclusions from that.

I might also accuse him of simply not understanding things well enough to know how to do good testing or good analysis, but I honestly don't believe that. I think he's entirely knowledgeable enough to avoid this if he wanted to. He just can't drop the PlayStation fanboy mindset, though. It always pokes through and colors his judgements.
It's not even console optimization; the construction of a lot of PC ports makes them more demanding for a similar task. Basically, the 3070 has to bench 20 pounds while the PS5 benches 17, and they both get 30 reps in.
 
This is definitely true, but I also think you're far more likely to find those advantages on the CPU side than on the GPU side. My 2080 Ti in the same scene outperforms his 2070 by over 45fps (248%).

I think a much more interesting exercise would have been to investigate the CPU-specific optimizations and how the hardware-accelerated decompression helps. You need an extremely beefy CPU for this game. The CPU in the PS5 isn’t all that strong so there is something happening there. Much more so than with the GPU.
The CPU in the PS5 is easily the worst part of the system; it's not just "not that strong". It's why I go in on people using 12900Ks with their GPUs when benching against the consoles. I highly suspect the PS5 is running into CPU bottlenecks in a lot of games targeting high framerates.
 
Still, it doesn't matter that we can have endless system RAM; the game is tailored around the console's total memory budget.


I said from 150 MB to 1 GB. 150-300 MB is for 1080p, 300-400 MB for 1440p and 500-1000 MB for 4K. It can also grow with more elements on screen. It depends. So you never seeing it close to 1 GB does not make it any less true regarding DWM's VRAM usage on higher end monitors. Even now, as of today, my completely clean desktop at 4K uses 427 MB of VRAM. I'm sure other 1440p/4K users can back me up in this regard.
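For anyone who wants to sanity-check their own desktop's VRAM usage outside Task Manager, here's a minimal sketch using NVIDIA's NVML Python bindings. It assumes an NVIDIA GPU and the pynvml package, and it reports total device usage rather than DWM's share specifically, so an otherwise idle desktop is the closest proxy:

```python
# Minimal sketch: report total VRAM usage on the first NVIDIA GPU via NVML.
# Requires `pip install pynvml`. On an otherwise idle desktop this gives a
# rough proxy for desktop/DWM overhead at your monitor's resolution.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"Used:  {mem.used / 1024**2:,.0f} MB")
print(f"Free:  {mem.free / 1024**2:,.0f} MB")
print(f"Total: {mem.total / 1024**2:,.0f} MB")
pynvml.nvmlShutdown()
```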


60% is a brutal number. What I care about is AAA games. I can use a sample of 20 big AAA games and that would be "personally" enough for me to see a "pattern" where devs cap the allocation. If you want to see 20 big 2019-2022 AAA games having artificial caps around 85-90%, I can show them to you. I personally said that Cyberpunk 2077 and Doom Eternal use almost all available VRAM, so DL2 might be in that group.

I'm using the latest patch and it still isn't raised.


That's the trick with NVIDIA: they give just enough VRAM that it appears to be fine, until it isn't. Of course 8 GB is going to be fine, even for 1-2 years down the line. In the end, though, horrible things could happen to it 5 years later. You may even think you could get away with small texture quality drops, but that may not be the case here. I look at GTX 770 / AC Unity requirements back in 2014 and see how funny some people's remarks were, saying the GTX 770 has enormously speedy VRAM compared to the PS4, that it wouldn't matter that it doesn't have much of it, and that it would still destroy the PS4. Fast forward to today: the card is enormously bottlenecked by its tiny buffer in games the PS4 casually handles, like Spiderman, God of War and RDR 2, and in all recent AAA games you practically have to use super ugly low-res textures to fit anything meaningful into its small buffer.



I will factor them in once I see them in action. I'm sceptical, since I've never seen Windows or NVIDIA giving special care or optimizations to older hardware. They invent something new and always leave old hardware behind. To my knowledge Sampler Feedback Streaming could be a huge VRAM optimization that Xbox consoles will use, but only the sampler feedback part of it exists on DX12 Ultimate compliant cards currently; for the "streaming" part you need dedicated hardware, a special texture streaming block that exists on Xbox consoles but not on any consumer GPU. That is problematic. NVIDIA and AMD also don't have any decompression block on their GPUs. Even sampler feedback is not fully supported on Ampere, forcing Microsoft to leave it at "feature level 0.9" whereas RDNA2 cards have it at "feature level 1.0". Even before the thing is used, Ampere is already lagging behind, probably due to planned... you know what. It is just not pleasant to see these oddities. It feels like Pascal-Async all over again. So I'm sceptical that those next-gen features will produce useful results on older-gen cards. Even older desktop motherboards could prove a challenge in this regard, if they decide to invent a new tech for newer-gen motherboards. Who knows?
Do you feel like there will be a situation where the PC uses more memory?
 
Feel free to point me to where you did this in the video, but I don't recall you calling out at any point in that video that the 2070 is underutilized because of a VRAM limitation. It's all well and good saying you noted it might be an issue in the future, and this appears to be a good example of it. But instead of calling that out, you've framed it as a general 2070 performance deficiency vs the PS5.

Look at it from the other side. What if we could change the console settings, and someone came along and did the comparison with High textures, 16x AF and Very High RT geometry and reflections, leading to what would likely be lower performance on the PS5, and then subsequently claimed the PS5 is weaker than a 2070. Would you consider that a fair conclusion? Because that's effectively what you're doing.



Again, none of this is an issue. The issue is that you don't call this out as the reason for the much lower than expected performance, and instead frame it as a more general PS5 efficiency/architectural advantage which you extrapolate to other GPUs.



Good point, so perhaps you could have pointed out that the 3060, a GPU essentially equivalent to your own but featuring 12GB of VRAM, would have performed much better in the comparison. You could even have simulated it by dropping the texture setting down on your GPU.



Well, this is another issue entirely, but you compared the RX 6800 at a locked 4K to the PS5 at a dynamic 4K and then drew the general conclusion that the RX 6800 is only 15-20% faster. Why would you do that?



Like for like tests are interesting. But whether you admit it or not, they are always skewed in the console's favour, because those settings are optimised for the console's specific architecture and thus suboptimal for the different architecture of the PC. That's why PC games come with settings the user can change in order to optimise for their own specific configuration. You could very easily have commented on this, or demonstrated it in the video, if you truly cared about balance. Instead, you used this result, which is a pretty extreme outlier example of the suboptimal settings having a disproportionately negative impact on the PC, as a springboard to make more generalised claims that PCs of "equivalent" or even better specs can be significantly outperformed by consoles due to architectural and efficiency advantages. When in actual fact, the 2070 is simply lacking in a key spec for this game compared with the PS5 (VRAM), and that's having a disproportionate (and rare) negative impact on its performance in this game.



Given you are already skewing the results by using a weaker CPU than the PS5's anyway, I really don't see the need to be over-concerned about reducing CPU load. Reducing CPU load is exactly what you should be doing if you're trying to compare GPU performance.
Isn't the 2700 equal to the PS5's CPU?
 
DF's approach has been used by damn near every PC reviewer for the last 20+ years. Why do you think Hardware Unboxed uses a test rig with 32GB of DDR5, a beefy CPU, a high-end motherboard, and other top-tier components to run their GPU tests? Why do you think they use a 3090 Ti at 1080p to test CPUs? This is to get rid of bottlenecks and test specific pieces of hardware. Almost no one online will test an entire system because there are just too many variables to take into account.

Criticize DF by all means for that, but then criticize the whole PC review industry of the last two decades as well.
People in the PC space DO NOT bench two different GPUs with different CPUs; they use the same one. You're arguing for them not to do that.
 
It makes it right for situations in which the objective is to measure the full performance of a specific component. As I said before, almost no one tries (and I don't think DF does) to test entire systems because there are too many things that might differ. Once the viewer/reader knows how their component should perform in an ideal scenario, they can decide what's best for them. Testing a system with a 2600+2070 is useful for those with that config but doesn't tell someone with a 10900K+2070 much, hence why most reviewers review pieces of hardware, not rigs.
I don't think there are many people on the planet pairing a 12900K with a 2070 (I personally think they're dumb if they are).
 
I was also surprised when they omitted ray tracing on PS5/Xbox in Far Cry 6. FC6 has a super light ray tracing implementation, where even a 6600 XT with its wonky bandwidth gets 1080p/60 FPS. I too believe that they ran into some kind of memory limitation and it became a choice between ultra textures and ray tracing, and they chose ultra textures. WD Legion is quite the opposite: it has ultra textures on PC plus ray tracing. Funnily enough, the consoles do not employ the ultra textures but do use ray tracing, despite WD Legion's RT being much, much heavier than FC6's. So this tells me that the consoles also do not have enough memory to handle both RT and ultra textures at 4K/upscaling modes in the case of WD Legion and FC6. Whether that is warranted or justified, however, is beyond me.

By the way, you won't get a good experience in FC6 with 10 GB with ultra textures at 4K and ray tracing enabled, even with upscaling. The 3080 has the grunt, but it will either tank in performance after a certain playtime or the game engine will downgrade textures, which defeats the purpose of having those textures. At 1440p, 10 GB is actually enough for both. Consoles use 4K LODs despite having lower resolution dips, so we have to take 4K as the metric here for memory capability.
I thought Far Cry 6 was 4K on the Series X; both NXGamer and DF mentioned it.
 
I really don't see the equivalence here at all. DF are doing exactly what they should be doing and exactly what PC reviewers have been doing for decades when trying to directly compare GPU performance. And that is to remove all traces of a CPU bottleneck by using the most powerful CPU possible to isolate GPU performance.

It's not as if DF don't also show how the game performs on a more typical system just like NXG is doing (Alex uses a 3600X for this). But when they do this they make it very clear they're testing overall system performance on that system only, they call out the CPU bottlenecks clearly where they exist, and they don't try to draw conclusions about GPU performance which they then extrapolate to higher end GPUs.



I wouldn't say it's a bad port at all. Nixxes are one of the best porting studios in the business and know the PC architecture very well. For the most part this seems like a great port, at least now that it's had a couple of patches. The problem though is that it was originally a 'to the metal' PS4 game designed for a unified memory architecture, with no thought whatsoever put into designing the engine for use on split memory pools and the PC API stack. So it must have been a mammoth undertaking to get it working (as Alex's video demonstrates). It's had a few issues, but most of them seem to have been addressed or improved now within a couple of weeks of launch. But yes, the texture streaming bug and the under-allocation of VRAM are still two pretty serious bugs that need to be dealt with asap.



B3D isn't a PC forum ;) In fact doesn't it have much higher footfall in the console sections than the PC ones? Many of the people posting in this "console technology" thread are console gamers too. I think there's just a higher standard here for technical accuracy, regardless of your platform preference, than you see at most other forums.



Thanks for taking the time to identify that, and apologies if I or anyone else here are coming across too aggressively. I think several people have strong concerns with the framing you use in many of your videos and, in some cases, the technical content presented, but I'm sure we're all glad you've taken the time to come here and discuss those concerns directly, even if we're unable to come to agreement on them.

Regarding the timestamps above, yes, fair enough, you do call out a couple of times that the GPU is being bottlenecked in Performance RT mode. But as I said earlier, at other points in the video you draw direct GPU performance conclusions from this section. Take this for example:


Here you state: "you can close this up with much faster GPU and CPU, motherboard, memory, and the end result of this is the PS5 is already able to push ahead on this largely last generation based title and the perceived higher specification PC hardware is struggling".

So you're stating, in a CPU limited situation, that you need a faster GPU to equal the PS5. You also talk about perceived higher specs when in fact your PC is lower spec than the PS5 in most regards.

And then here, mere seconds after the point you linked above, you present a whole section on the relative GPU specs and even use these to justify why you don't think the RX 6800 is performing more than a few percent higher than the PS5, in a CPU limited sequence :confused: :


In your words here: "but yet again we do have moments that can be two percent faster on the RTX machine and 65[%] faster on the PlayStation 5, and this will be because of the various requirements and strengths of each machine on a frame by frame basis. Bandwidth is close between the two GPUs, but this is one metric, with fill rate being considerably high on the PS5 thanks which drops in higher frequency. Interestingly, even against the overclocked 2070 the PS5 has around 10[%] fill rate gain on pixels and textures, on paper at least, and in reality this is likely going to be even bigger. In comparison the RX 6800 has between 35 to 41[%] higher fill rate and 12 percent higher bandwidth, which explains why a full 4K resolution at the same settings as PS5 in Fidelity mode are only around the 45 fps level on average, and in fact due to the other areas can only be three percent faster across the same test run, settings aside, at fixed 4K output".

You are quite clearly linking the PC's performance here to the specific specifications of the GPU, despite only moments earlier stating that this sequence would be CPU bound.
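For context on those fill rate percentages, paper pixel fill rate is just ROP count multiplied by clock speed. A rough sketch with public spec-sheet ROP counts and approximate clocks (the 2070 overclock figure in particular is an assumption, so treat the percentages as ballpark):

```python
# Paper pixel fill rate = ROPs * clock (Gpixels/s). ROP counts are spec-sheet
# figures; clocks are approximate and the 2070 "OC" value is an assumed overclock.
gpus = {
    "RTX 2070 (ref boost)":  (64, 1.62),
    "RTX 2070 (assumed OC)": (64, 1.95),
    "PS5":                   (64, 2.23),
    "RX 6800 (boost)":       (96, 2.10),
}

ps5_rops, ps5_clk = gpus["PS5"]
ps5_fill = ps5_rops * ps5_clk
for name, (rops, clk) in gpus.items():
    fill = rops * clk
    delta = fill / ps5_fill - 1
    print(f"{name:<24} {fill:6.1f} Gpix/s ({delta:+.0%} vs PS5)")
```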



If you're just testing how a specific PC compares to the console at console-matched settings, and nothing else, then I agree there's no problem with using a weaker CPU. It's when you try to make direct GPU to GPU performance comparisons in CPU limited scenarios, or when you frame a specific specification bottleneck in that system as a more general architectural deficiency or inefficiency, that it becomes a problem.



Yes, I agree this would also be an interesting approach. I do like the exact setting matches too, but where we see a massive mismatch in performance, as we do here, it might be better to also investigate a different approach like the one above for a more balanced view of the situation.
This logic goes out the window when you consider the PS5 GPU may not be living up to its full potential because of the CPU it has.
 
I don't think that has much to do with there being 'more console users', or the other way around. It's the forum's structure, which some have pointed out and requested to be altered a while ago. Perhaps there are more PC-oriented users due to the platform (it's always evolving, more capable, you can tweak, new tech is there first, etc.), but many of them also game on console, and the other way around. The forum's layout plays a part too: with DF topics being in the console sections alongside other main topics, discussions happen there. Also, the graphics forums are mostly PC-centric.

And yeah, most here, including myself, have both consoles and PCs. I've generally always had PC+PS as it was the killer combo for decades, until now, where things start to become more and more spread across all platforms, which is a good thing really.

As for NXG's YT channel, I think its content has been covered now. We don't have to keep bashing him, I think. It spawned two new quality posters who really inject some good old-fashioned technical discussion without all the platform warring/bias attached, which makes for healthier discussions and less irritation for other posters (including myself). It's the 'PS5 is better because of cache scrubbers and NVMe' etc. claims, without really substantiating that that's the case, starting about two years ago, that made these discussions unhealthy in my eyes.
These videos have stirred up the discussions every damn time. This time I think it was worth it, due to said new posters and the quite nice tech discussion that followed, which others can learn from. And again, I'm glad we have DF and one of its members active here.
You're DF?
 
This is what I'm saying, though. NXG has tested with his system for a while now. He has maybe 3 or 4 configs plus some data from IGN staff to pull from. The sample size isn't that large. NXG isn't reviewing video cards, he's reviewing games based on their performance across different hardware configurations. It's not the same as testing a slew of video cards against a benchmark suite. He's essentially testing his hardware against current software and trying to normalize visual settings. There's nothing inherently wrong with that, but not all settings can be dialed to equivalent values in some games (Spider-Man being one of them), and his conclusions aren't always ones I agree with.

Alex at DF does test a few different CPU/GPU configurations for many of his settings review videos. That's one of the reasons they are so good. If you have this class CPU, these settings affect it. If you have this class GPU, you want to change these. There is a bit of user imagination required to combine all of the data points if you have a mismatched system with an overly powerful GPU or CPU for the rest of your configuration, of course.


And this is the real issue I have with NXG's work product. He is essentially testing his system against the PS5 while trying to achieve settings parity, and he has the hardware he has. That's all fine. It's when he goes on about having a top-4% machine and makes statements about the PC platform in general that aren't necessarily true for cases outside his own that it becomes a problem. But in the case of Spider-Man, it isn't like the game is without performance issues and curiosities on PC.

On the Spider-Man topic, and reviews aimed at "what settings make it playable", here's an example of a YouTuber who does just that with an entry level system.
His desktop is a Ryzen 3600X with a GTX 1650S, but he also tests on some old laptops. He doesn't do any ini tweaks or hex edits, just uses the settings menu to see if a game is playable on the laptop, and what settings are best for the desktop.
I actually agree with this post, and it's the main criticism I have of NXGamer's work. It's appreciated that you're remaining objective.
 
The CPU in the PS5 is easily the worst part of the system; it's not just "not that strong". It's why I go in on people using 12900Ks with their GPUs when benching against the consoles. I highly suspect the PS5 is running into CPU bottlenecks in a lot of games targeting high framerates.
You should roll these into a single response rather than a string of separate replies. But this one in particular I'll address. It doesn't matter how much CPU there is in a console; the console will always be framerate limited in an SoC design. The faster your framerate, the more bandwidth you take.

If your CPU at 30fps is regularly taking, say, 20 GB/s of bandwidth, it becomes 40 GB/s at 60fps and 80 GB/s at 120fps.
If your GPU at 30fps is regularly taking, say, 100 GB/s of bandwidth, it becomes 200 GB/s at 60fps and 400 GB/s at 120fps. The two combined put you at 480 GB/s, which is already at or beyond the maximum theoretical (the PS5's peak is 448 GB/s), except that isn't even how memory actually works in terms of performance. There are reads and writes, read/write hits, and asymmetrical losses in bandwidth when the CPU and GPU compete. Quite frankly, there's not a lot available here for consoles to use. So having more CPU isn't going to get around the fact that the GPU will be bandwidth starved the faster it goes, reducing the resolution dramatically. See Series S at 120fps. If a game is properly coded, CPU bottlenecking should not happen, since the CPU has priority over memory.

tl;dr: You could never benchmark a GPU on console as you would on PC, i.e. by making the CPU push frames faster than the GPU can render them. They share the same resources and the CPU has priority, so the GPU will always be the bottleneck in this scenario. Quite frankly, 120fps on console is very difficult to maintain.
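To put some numbers on that scaling, here's a back-of-envelope sketch using the illustrative per-frame figures from the post above (20 GB/s and 100 GB/s at 30fps) against the PS5's 448 GB/s theoretical peak. It's deliberately naive and ignores read/write mix and contention losses:

```python
# Back-of-envelope: combined CPU + GPU bandwidth demand scaling with framerate on a
# shared-memory SoC. Figures are illustrative, not measurements of any real game.
PS5_PEAK_GBS = 448  # theoretical peak; sustained real-world bandwidth is lower

def combined_demand(fps, cpu_gbs_at_30=20, gpu_gbs_at_30=100):
    # Scale per-second bandwidth demand linearly with framerate.
    scale = fps / 30
    cpu, gpu = cpu_gbs_at_30 * scale, gpu_gbs_at_30 * scale
    return cpu, gpu, cpu + gpu

for fps in (30, 60, 120):
    cpu, gpu, total = combined_demand(fps)
    print(f"{fps:>3} fps: CPU {cpu:5.0f} + GPU {gpu:5.0f} = {total:5.0f} GB/s "
          f"(headroom vs {PS5_PEAK_GBS} GB/s peak: {PS5_PEAK_GBS - total:+.0f} GB/s)")
```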
 
Wo Long: Fallen Dynasty | Xbox Series S/X vs PS5 | Graphics Comparison Demo | Analista De Bits - YouTube

PS5: Resolution Mode: 1440p/60fps; FPS Mode: dynamic 1440p/60fps (commonly 1332p)
Xbox Series S: 1080p/60fps
Xbox Series X: Resolution Mode: 1440p/60fps; FPS Mode: dynamic 1440p/60fps (commonly 1296p)
- This is a demo and the game still has a long way to go to finish its development, so its quality will differ from the final version.
- All versions use temporal reconstruction rendering.
- Xbox Series S has lower quality lighting, shadows, textures, draw distance and some post-processing effects.
- Loading times are slightly faster on PS5.
- The framerate on Xbox Series S is considerably more stable than on the other 2 consoles due to its lower graphical demands. On PS5/XSX, I recommend FPS mode.
- Xbox Series X has a slightly lower average resolution in FPS mode compared to PS5.


The demo is out; from personal experience it constantly crashes/gives me errors on XSX. The workaround for some people is to kill the game, start a new one and skip the tutorial. Unfortunately it doesn't work for me, I get a popup every minute or so with info that something went wrong.
 