Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

No they won't because, once again, your CPU is piss-poor in Spider-Man and does not represent the performance people with better CPUs will have.

Why are you still not understanding that?

Me above, earlier "Final piece is the CPU - The 2070 is hardly ever CPU bound in the Fidelity mode test but I do state that it is almost always either Memory bound or CPU bound in the others."
 
I am sorry, but your reply has gone off on a tangent and just sounds like "yeah well, in 3 years the PS5 will be low end with the 2070" and other things I have not said or implied.

e.g. the bolded above: I say the EXACT opposite in both the video and my post. You can POWER PAST the issue with more VRAM etc., as has always been the case for PC. But the reality is that an 8GB GPU owner is going to have to lower settings because of their hardware. The fact that you and others do not like that fact, or play semantics, does not make it less true; my videos are based on evidence-based reporting and analysis, not selling a product, making things look better or playing an angle.
It still does not change the fact that I can use fidelity-matched settings at 1440p and get great performance, always above 60. If you're at 1440p, you don't even have to lower any settings. Lowering settings, specifically the texture setting, is something suited to 4K resolution. I'm just telling you the reality of gamers on PC: I'm pretty sure a huge majority of people who have a 2070 or 3070 have run-of-the-mill 1440p screens. I'm not saying they will always be able to keep parity at 1440p; I'm not quite sure about that. But for now, it is possible to keep parity at 1440p in terms of textures. That might change.

It is a tangent because we're actually discussing different things. You and I are looking at this situation from very different perspectives; I'm just trying to explain mine to you. Sorry, but I simply disagree with your perspective; there's nothing I can do about that. I cannot use my 3070 8GB outside of its targeted spec. 8GB of VRAM means NVIDIA purposefully aimed this card at the 1440p market. It can do entry-level 4K gaming for pre-2020 games, but it does nothing remarkable in newer-gen games. Even DL2 at native 4K without RT doesn't get upwards of 45 frames without DLSS. DLSS helps, but you seem to completely ignore that too, since it would be a "tangent". DLSS is the highest-quality upscaler we have in modern video games; it would also help a 3070 or 2070 in Spider-Man, as it helps in other games, and it reduces VRAM consumption a bit. But of course then it wouldn't be a proper match for your video, so it becomes a tangent.

It still does not change the fact that any "niche" RTX owner who uses a 2070 or 3070 at 4K will always utilize DLSS, at least Quality mode. The PS5 can have its special super native 4K mode, and can have enough VRAM for it.
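For reference, here's a quick sketch of what DLSS actually renders internally in those modes (nominal per-axis scale factors of 1/1.5 for Quality and 1/2 for Performance; exact factors can vary per title, so treat the outputs as approximate):

```python
# Nominal internal render resolutions for DLSS modes.
# Quality scales each axis by 1/1.5, Performance by 1/2 (approximate figures).
def dlss_internal(width, height, scale):
    return int(width * scale), int(height * scale)

for out_w, out_h in [(2560, 1440), (3840, 2160)]:
    for mode, scale in [("Quality", 1 / 1.5), ("Performance", 1 / 2)]:
        w, h = dlss_internal(out_w, out_h, scale)
        ratio = (w * h) / (out_w * out_h)
        print(f"{out_w}x{out_h} {mode}: renders {w}x{h} "
              f"({ratio:.0%} of output pixels)")
```

So a 2070 running 4K DLSS Quality is really shading a 1440p-class pixel load, which is exactly why it stays viable there.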
 
VRAM - Yes, it is an issue, and I covered this years ago in multiple videos: how it CAN and WILL impact more than just performance. I am actually glad that so many here are now talking about that point, which is something I have talked about for so long and which never seemed to sink in. Even those here who disagree with me are really agreeing, including yourself, which is a positive. I covered it in this video and others (such as my God of War and PS5 launch videos): VRAM makes a difference, RAM makes a difference, and data is paramount in all computer work, of which games are likely among the most demanding and performance-impacting, bar rocketry, aviation, etc.

Feel free to point me to where you did this in the video, but I don't recall you at any point calling out in that video that the 2070 is underutilized because of a VRAM limitation. It's all well and good saying you said it might be an issue in the future, and this appears to be a good example of it. But instead of calling that out, you've framed it as a general 2070 performance deficiency vs the PS5.

Look at it from the other side. What if we could change the console settings, and someone came along and did the comparison with high textures, 16x AF and very high RT geometry and reflections, leading to what would likely be lower performance on the PS5, and then subsequently claimed the PS5 is weaker than a 2070? Would you consider that a fair conclusion? Because that's effectively what you're doing.

But we need to keep in focus that VRAM is hardware and is part of the GPU. My test(s) are designed to test the GPU at the same settings as the console (or as close as possible), which WILL and SHOULD include the VRAM. As such, the results here (within the Fidelity mode only, mind, as I clearly signpost in the video) represent what all 2070 owners, and to some degree 3070 owners, will see when matching those settings.

Again, none of this is an issue. The issue is that you don't call this out as the reason for the much-lower-than-expected performance, and instead frame it as a more general PS5 efficiency/architectural advantage which you extrapolate to other GPUs.

If a 12GB 2070 existed I would have tested, so long as I had it.

Good point, so perhaps you could have pointed out that the 3060, a GPU essentially equivalent to your own but featuring 12GB of VRAM, would have performed much better in the comparison. You could have even simulated it by dropping the texture setting down on your GPU.

I did test and show this with the 16GB RX 6800, which will allocate 13.5GB of VRAM for the game, showing that it can use more and that it does not suffer anywhere near the level the 2070 does.

Well, this is another issue entirely, but you compared the RX 6800 at a locked 4K to the PS5 at a dynamic 4K and then drew the general conclusion that the RX 6800 is only 15-20% faster. Why would you do that?
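To make the objection concrete, here's a rough pixel-work sketch (the ~1800p average for the PS5's dynamic 4K is an assumed figure for illustration, not a measured one):

```python
# Rough pixel-work comparison: locked 4K vs a dynamic 4K that averages ~1800p.
# The 1800p average is an assumption for illustration, not a measured value.
locked_4k = 3840 * 2160
dynamic_avg = 3200 * 1800  # an 1800p 16:9 frame
print(f"Locked 4K shades {locked_4k / dynamic_avg - 1:.0%} more pixels per frame")
```

Under that assumption, the locked-4K card is doing roughly 44% more shading work per frame before the comparison even starts.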

The results here are not meant to show the PS5 in a positive light (although many think that is my aim, it is not), or to show the PC in a bad light (again, not my aim, although many think it is). As with everything I do, it is just data and information for owners, and it reflects how THIS game, in THIS mode, on THIS machine will run for all (give or take). Also, dropping settings, be they textures, hair, particles, whatever, means it is no longer a like-for-like test.

Like-for-like tests are interesting. But whether you admit it or not, they are always skewed in the console's favour, because those settings are optimised for the console's specific architecture and are thus suboptimal for the different architecture of the PC. That's why PC games come with settings that the user can change in order to optimise for their own specific configuration. You could very easily have commented on this, or demonstrated it in the video, if you truly cared about balance. Instead, you used this result, a pretty extreme outlier example of suboptimal settings having a disproportionately negative impact on the PC, as a springboard to make more generalised claims that PCs of "equivalent" or even better specs can be significantly outperformed by consoles due to architectural and efficiency advantages. When in actual fact, the 2070 is simply lacking in a key spec for this game compared with the PS5 (VRAM), and that is having a disproportionate (and rare) negative impact on its performance in this game.

Had I dropped textures to High, the IQ impact would still be there, and then the reductions in CPU load (texture decompression is reduced), bandwidth and cache pressure would all help. Which brings me on to my next point: why the VRAM "bug", as you call it, exists on PC.

Given you are already skewing the results by using a weaker CPU than that in the PS5 anyway, I really don't see the need to be over-concerned about reducing CPU load. Reducing CPU load is exactly what you should be doing if you're trying to compare GPU performance.
 
Me above, earlier "Final piece is the CPU - The 2070 is hardly ever CPU bound in the Fidelity mode test but I do state that it is almost always either Memory bound or CPU bound in the others."

Unfortunately for you, every single site that's done a better standard of testing in Spider-Man disproves your claim (please see the attached).

With the game tested at less-than-PS5 ray tracing settings, the 2700X is terrible.

In the graphics test, an RTX 2070 'Super' at native 1440p with 'high' ray tracing can run at 70fps minimum and 79fps on average.

But at the exact same settings, your CPU is only good for 41fps minimum and 58fps average, meaning it is completely and utterly incapable of running an RTX 2070 'Super' to its full potential; heck, it can't even run a non-'Super' variant to full potential.

And remember, that's using a ray tracing LOD of only 6; testing with increased ray tracing LOD to match the PS5, like you did, will result in even lower frame rates due to a larger CPU bottleneck.
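The arithmetic behind this is simple: the delivered frame rate is capped by the slower of the CPU-limited and GPU-limited rates. A minimal sketch using the figures quoted above from the attached benchmarks:

```python
# Delivered fps is capped by the slower of the two limits:
# effective_fps = min(cpu_limited_fps, gpu_limited_fps).
# Figures are the ones quoted above from the attached 1440p RT benchmarks.
gpu_2070_super = {"min": 70, "avg": 79}  # GPU-limited fps
cpu_2700x = {"min": 41, "avg": 58}       # CPU-limited fps

for key in ("min", "avg"):
    fps = min(gpu_2070_super[key], cpu_2700x[key])
    print(f"{key}: {fps} fps delivered, "
          f"{fps / gpu_2070_super[key]:.0%} of what the GPU could do")
```

On those numbers, the 2700X leaves roughly 25-40% of the 2070 Super's throughput on the table.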

So please stop saying shit like this...

As such, the results here (within the Fidelity mode only, mind, as I clearly signpost in the video) represent what all 2070 owners, and to some degree 3070 owners, will see when matching those settings.

Because it is statistically and factually wrong.

I'm done with you now, and just happy that people on Twitter are seeing this thread and starting to see the holes in your claims.
 

Attachments

  • 1440p-RT.png (164.9 KB)
  • 1440p-High.png (169.4 KB)
Like-for-like tests are interesting. But whether you admit it or not, they are always skewed in the console's favour, because those settings are optimised for the console's specific architecture and are thus suboptimal for the different architecture of the PC. That's why PC games come with settings that the user can change in order to optimise for their own specific configuration.

You are correct, btw. Here is my 3070 at native 1440p with DLAA (much better than TAA, and it incurs a 10% performance penalty), with Very High textures (matching PS5), Very High geometry (which creates hugely better reflections than PS5) and 16x aniso. I'm practically getting a perfectly locked 60 with my own special "fidelity" mode, while the GPU has 15-20% headroom in most cases (okay, I admit it, I'm running into CPU bottlenecks above 65 FPS! Lol!)

The fact that I can still get an almost locked 60 with a 2700X with very high geometry, a heavily CPU-bound setting that improves reflections over PS5, proves that PC is a different thing from consoles: you can tailor your own experience to your own strengths. Very high textures are used, so the "but very high textures need better CPU, decompression and stuff :((" argument is also made irrelevant by this single video. I use both VH textures and VH geometry at native 1440p and get a perfect performance profile out of my low-end 2700X. (Actually, not a 2700X; I downclocked it back to 3.7 GHz for fun and giggles, lol.)

It is entirely VRAM-bottlenecked. The fact that the enormous frame drops he's having are not happening to me is strong evidence that the whole thing is choking heavily on VRAM at 4K and becomes a complete non-issue at 1440p. And at 1440p I have so much performance headroom that I can enable very high geometry while retaining the 60 FPS target, plus DLAA, 16x aniso and various other enhancements. Looking back at the video, I could spend that 15-20% GPU headroom on weather particle settings, for example. Just food for thought.
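The headroom claim checks out arithmetically, for what it's worth: if the GPU is only 80-85% busy at a locked 60, its GPU-limited ceiling sits right where the CPU wall is reported (a sketch, taking the 15-20% headroom figure at face value):

```python
# If the GPU is ~80-85% busy at a locked 60 fps, its GPU-limited ceiling is
# roughly 60 / utilisation -- consistent with hitting a CPU wall around 65+ fps.
for utilisation in (0.80, 0.85):
    ceiling = 60 / utilisation
    print(f"{utilisation:.0%} busy at 60 fps -> GPU ceiling ~{ceiling:.0f} fps")
```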
 


Yet he keeps rambling on about Mark Cerny, the SSD, IO and cache scrubbers. My 3070M 8GB laptop with a 5800H outpowers the PS5 version. I consider the 3070M to be more like a 3060/3060 Ti, but an RTX 2070, even though it's an ancient 2018 GPU, should be good enough to match and exceed the PS5 version, bar the VRAM/texture setting, and in general deliver a greater experience when tailored to it.
And that's for a port.

I have not even tested on my 2080 Ti PC yet. With its 11GB of VRAM and even more raw processing power, it shouldn't be a problem either.

Edit: your video btw? Certainly looks impressive.
 
Which brings me to my penultimate point.

Architecture - I see that you are saying this has nothing to do with the PS5 architecture or console vs PC, but it really does. The point is that consoles have a unified RAM architecture (hUMA) and the PC does not; well, dGPUs mostly do not. If they did, then changes would not need to be made to the source to compensate (plus whatever work the driver is doing, although DX12 now puts memory allocation almost entirely on the developer compared to DX11). The fact is the PC has to 'waste' RAM, bandwidth, CPU and GPU cycles etc. where the console does not, and the results show this. That is not to say things cannot be changed or improved, but they cannot be solved, only worked around or powered past (within reason). Hence why more VRAM helps, faster PCIe helps, a faster CPU helps, faster system RAM helps, etc.
I think as a forum post this is fairly acceptable, but presenting this on IGN, for instance, is misleading imo.

Mainly, when we think of architecture, we think of pillars that hold something up: the shape, the multiple systems working together to create performance. The only challenge with this title has been this VRAM bottleneck, for a video card that was never marketed as a 4K card. And as a reviewer in particular, your role in evaluating hardware should be to evaluate it based on how well it achieves what it set out to do, and the 3070 was never marketed as a native 4K card. I should be clear here: the ALU count is quite high for what it is, but the VRAM amount is fairly paltry, and Nvidia has been rightly criticized for this. So to ask a card to do something it shouldn't be capable of, and then say this is where the PS5 has better architecture, is misleading. There's nothing particular about architecture when we are discussing a bottleneck, so if you're going to talk about why a card doesn't perform, there should have been a large segment on hitting VRAM limitations. If you want to see how a card performs without VRAM limitations, you need to set up for that and explain that experiment to the viewers.

I think saying things like "the PC has to waste RAM, bandwidth, CPU and GPU cycles where consoles do not" is not the full story either. hUMA architectures prioritize CPU bandwidth over GPU bandwidth, meaning contention, and in particular asymmetrical contention, for bandwidth. It's a large reason why we're seeing such low AF settings in comparison to PC. Consoles, in particular the PS5, also have to share power between the CPU and GPU, which doesn't happen in the PC space, where both CPU and GPU are free to clock as high as the silicon can handle. PCIe bandwidth is effectively higher than the PS5's SSD if you're looking purely at bringing textures in. The only challenge is that there are so many memory configurations that most developers won't target 16-32GB of system memory on PC in terms of using system memory that way. But really, we're talking about a PORT here: if they wanted this to run well on PC, they would have had to rewrite a large portion of the streaming system to accommodate smaller VRAM amounts paired with huge system RAM amounts, and they didn't want to do that. And that's okay, but it's precisely why we need to bring VRAM limitations front and center. If developers won't code for it, maybe we shouldn't be buying, and they shouldn't be selling, video cards with such paltry VRAM.
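To illustrate the streaming point with a toy byte count (all numbers are assumptions; this is a sketch of the copy paths described above, not profiled data):

```python
# Toy byte-accounting for streaming one 100 MB (decompressed) asset.
# PC path: read compressed data into system RAM, decompress on the CPU,
# then upload the result to VRAM over PCIe (keeping a system-RAM mirror).
# Console path: the hardware decompressor writes once into the unified pool.
asset_mb = 100.0       # decompressed size (assumed)
compression = 2.0      # compression ratio (assumed)

pc_moved = asset_mb / compression   # disk -> system RAM (compressed)
pc_moved += asset_mb                # CPU decompression writes into system RAM
pc_moved += asset_mb                # PCIe upload of the decompressed copy
console_moved = asset_mb            # decompressor -> unified RAM, once

print(f"PC touches ~{pc_moved:.0f} MB (plus CPU decompression time); "
      f"console touches ~{console_moved:.0f} MB")
```

That per-asset overhead is real, but it's a streaming-design problem, which is the point above about having to rewrite the streaming system for the port.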

But we need to keep in focus that VRAM is hardware and is part of the GPU. My test(s) are designed to test the GPU at the same settings as the console (or as close as possible), which WILL and SHOULD include the VRAM.
Sure, but once again, the VRAM allocation provided to the xx70-series cards is designed for 1440p resolutions, and games that require more VRAM at certain settings make those settings impossible for it, regardless of the other components on the GPU. It doesn't matter how much ALU, how many rasterizers, or how many triangles a GPU can push when all that hardware is sitting idle waiting for work to arrive. And that's a pretty critical piece in explaining performance here. Talking about a GPU sitting idle due to a bottleneck, and saying that a GPU has hit its hardware limit at these settings, are two profoundly different messages.
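A toy model of that idleness (numbers assumed for illustration; real paging behaviour is far messier):

```python
# Toy model: when the working set exceeds VRAM, the overflow has to be paged
# over PCIe while the GPU waits. All figures below are assumptions.
def frame_ms(render_ms, working_set_gb, vram_gb,
             pcie_gb_per_s=12.0, touched_share=0.25):
    """Add PCIe paging time for the slice of the overflow touched each frame."""
    overflow_gb = max(0.0, working_set_gb - vram_gb)
    paging_ms = overflow_gb * touched_share / pcie_gb_per_s * 1000
    return render_ms + paging_ms

for vram in (8, 12):
    ms = frame_ms(render_ms=16.7, working_set_gb=10.5, vram_gb=vram)
    print(f"{vram} GB card: {ms:.1f} ms/frame -> ~{1000 / ms:.0f} fps")
```

The ALU never got slower; it just spends most of the frame waiting on transfers, which is exactly the distinction being drawn here.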

I can go through the first pages of NeoGAF, or other pro-Sony sites, and it's clear the messaging there (from your video) is that the PS5 is outperforming these cards, without any real understanding of why. As much as you are having a tough time here with certain posters, I think you need to ask yourself whether those people got the messaging right over there. If you're comfortable with that, then so be it. But if you're not, and you feel like people aren't getting the message, then you should consider which messages you are glossing over and which messages you are driving home in those videos.

I don't know if it's clear, but your messaging in your videos is showing how the PS5 is punching above its weight. I think most PC people would disagree with that messaging; if you think that is the wrong message for us to be receiving, then it makes sense to look at what you said that got us thinking that way. I think VRAM limitations are something that needs to be brought forward in the messaging, especially to Nvidia and the PC community. That needs to be a full discussion, so that people are not mistaking VRAM bottlenecks for ALU performance. You claim you bring it up, but I didn't get that from your video; it's there in passing, but it's pretty clear the main message is not that.

I wish you the best of luck, honestly; whether or not you care what we think doesn't matter. But each time you make a video, there is an opportunity to improve or rectify. I don't know if you want to go back and address this; it's up to you. Your videos cause our forums to come to life, perhaps for the wrong reasons, but they come to life, so I don't mind it. Certainly glad you are here to discuss your points. I don't see another forum that is going to give you good shop-talk feedback.
 
You stated it better than me. Thanks. He, Michael, might have good intentions. But judging by the comments he gives a special "like" to, and by the Twitter replies he gets and positively answers, the majority of people who follow him take away something different from what he intends (if he indeed had good intentions making that video, that is). That is my entire problem: how the information is presented. Most people look at the PS5 destroying a 2070 with 45 frames versus 19 frames and will say "see, DF are PC fanboys, they said the PS5 was a 2070 and 2060 Super, now the PS5 destroys their precious 2070!" That is the takeaway most of his followers get, and that's where I have a problem with him. If he had also shown how, say, a 3060 performs against a 2070, demonstrating how VRAM is actively hurting the 2070 compared to an equivalent-performance card, at least some people would see the actual reason behind it, and I wouldn't have any issue with the video.
 
I think saying things like "the PC has to waste RAM, bandwidth, CPU and GPU cycles where consoles do not" is not the full story either. hUMA architectures prioritize CPU bandwidth over GPU bandwidth, meaning contention, and in particular asymmetrical contention, for bandwidth. It's a large reason why we're seeing such low AF settings in comparison to PC. Consoles, in particular the PS5, also have to share power between the CPU and GPU, which doesn't happen in the PC space, where both CPU and GPU are free to clock as high as the silicon can handle. PCIe bandwidth is effectively higher than the PS5's SSD if you're looking purely at bringing textures in. The only challenge is that there are so many memory configurations that most developers won't target 16-32GB of system memory on PC in terms of using system memory that way. But really, we're talking about a PORT here: if they wanted this to run well on PC, they would have had to rewrite a large portion of the streaming system to accommodate smaller VRAM amounts paired with huge system RAM amounts, and they didn't want to do that. And that's okay, but it's precisely why we need to bring VRAM limitations front and center. If developers won't code for it, maybe we shouldn't be buying, and they shouldn't be selling, video cards with such paltry VRAM.

Indeed, it's a port. Ports usually run best on the system they were coded and optimized for, and given that, it's surprising to see Spider-Man and other Sony titles run so well (thanks to Nixxes). However, it's still a port, and in this case Nixxes isn't even done optimizing it; as they promised earlier, they are still working on the two memory pools and CPU usage.

Also, while NV and its 3070 should get hammered for the relatively low amount of VRAM for a 2020 mid/high-range GPU, aside from it being a 1440p GPU there was also the notion of DLSS in mind and its improvements, giving you results very close to native 4K. Going forward, DLSS/ML will certainly only improve from here.

AMD have had much larger VRAM pools for a while; their 2020 RX series goes from 12 to 16GB, plus Infinity Cache. NV will almost certainly ship higher VRAM allocations this time around, along with improved tensor/AI (DLSS) performance and quality.
 
"It is common knowledge lowering textures quality will result in better performance, even if the CPU is the bottleneck."
"As I showed you above, it's capable of utilizing above 7GB of VRAM, so clearly utilizing 6GB of VRAM means you're not VRAM starved."
"Like come on dude, your VRAM point is invalid. If the 3070 only had 6GB of GDDR, than your point would have been valid."
"So how can it be VRAM starved if it has a GB it can utilize?"

This is what I have to deal with now. This is how far some people will go to argue that the 2070 is not VRAM-starved. They so badly want to believe that the PS5 is natively overpowering the 2070 without a VRAM bottleneck that they will now claim textures themselves cause a performance impact.


Here is proof that at 1440p, alternating between high and very high textures does not incur a performance drop. I'm really getting tired of seeing people go to extremes to invalidate my findings.
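For anyone wondering why the texture tier moves VRAM so dramatically in the first place, the block-compression maths is straightforward (a sketch; the mapping of quality tiers to 4K/2K source resolutions is an assumption for illustration):

```python
# BC7-compressed textures cost 1 byte per texel (16 bytes per 4x4 block),
# and a full mip chain adds roughly one third on top.
def bc7_mb(side_px):
    return side_px * side_px * (4 / 3) / 2**20

hi = bc7_mb(4096)  # e.g. a "very high" tier using 4K source textures (assumed)
lo = bc7_mb(2048)  # e.g. a "high" tier using 2K source textures (assumed)
print(f"4K texture: ~{hi:.1f} MB, 2K texture: ~{lo:.1f} MB ({hi / lo:.0f}x)")
```

One tier down quarters the footprint per texture, which is why it's the go-to lever when an 8GB card runs out of room at 4K output.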
 
Architecture - I see that you are saying this has nothing to do with the PS5 architecture or console vs PC, but it really does. The point is that consoles have a unified RAM architecture (hUMA) and the PC does not; well, dGPUs mostly do not. If they did, then changes would not need to be made to the source to compensate (plus whatever work the driver is doing, although DX12 now puts memory allocation almost entirely on the developer compared to DX11). The fact is the PC has to 'waste' RAM, bandwidth, CPU and GPU cycles etc. where the console does not, and the results show this. That is not to say things cannot be changed or improved, but they cannot be solved, only worked around or powered past (within reason). Hence why more VRAM helps, faster PCIe helps, a faster CPU helps, faster system RAM helps, etc.

This is a complete misrepresentation of the situation. Yes, it's common knowledge that PCs need more system memory than consoles to run the same game, and you can certainly claim that as an architectural advantage if you wish; it's hardly news. But it's no architectural disadvantage that PCs have to use a portion of VRAM for non-game applications and services. The consoles also reserve a (bigger) chunk of VRAM for that exact same purpose.

Ultimately, you've taken a PC with half the VRAM of the PS5, found it running into massive VRAM bottlenecks in comparison to the PS5, and are now trying to blame that on some architectural deficiency in the PC that results in it wasting RAM and processor cycles. When in fact, the issue is simply that the PC you're using has less VRAM than the console! Had you even 10-12GB of VRAM (much less overall memory than the PS5), you would have mostly or entirely mitigated that VRAM limitation, and the PC would have performed completely in line with where we'd expect it to, given its usual performance relative to the PS5. Yes, you would still need more system RAM, but that's a given, and always has been.

As to UMA vs separate memory pools: this is an architectural advantage from a developer's point of view, to be sure. But if the separate memory pools are handled properly, they're not going to cause some disproportionate loss of compute or memory performance, as you seem to be suggesting. In fact, split pools have a not-insignificant advantage over a unified pool from a raw performance perspective, due to the lack of bandwidth contention and the lower latency on the CPU side.
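A minimal sketch of that contention point, with the contention factor as an assumed illustrative figure (the pool bandwidths are the published ones: 448 GB/s GDDR6 on both the PS5 and the RTX 2070, ~51 GB/s for dual-channel DDR4-3200):

```python
# Unified pool: CPU traffic comes out of the single GDDR6 pool, and mixed
# CPU/GPU access patterns cost extra (the contention factor is an assumption).
# Split pools: the GPU keeps its whole pool; the CPU has its own.
unified_bw = 448.0      # PS5 unified GDDR6, GB/s (published figure)
cpu_traffic = 40.0      # assumed CPU demand, GB/s
contention = 1.5        # assumed penalty on interleaved CPU/GPU access

gpu_share = unified_bw - cpu_traffic * contention
print(f"Unified: GPU keeps ~{gpu_share:.0f} of {unified_bw:.0f} GB/s")

vram_bw, sys_bw = 448.0, 51.2   # RTX 2070 GDDR6; dual-channel DDR4-3200
print(f"Split: GPU keeps the full {vram_bw:.0f} GB/s; "
      f"CPU has {sys_bw:.1f} GB/s to itself")
```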

Final piece is the CPU - The 2070 is hardly ever CPU bound in the Fidelity mode test but I do state that it is almost always either Memory bound or CPU bound in the others. I also state the RX 6800 is nearly always CPU bound, even in the Fidelity mode, due to the high CPU demands. Which brings me back to the point above: as I stated, and to be clear Mark Cerny did at the reveal, the dedicated decompression blocks, aligned SSD speeds and the entire architectural design here mean the CPU demands on PC, in the current technical sphere in that space, will be far higher than in previous generations. I discussed this before the PS5 launched, stating that achieving PS5 results (as in identical) will need higher requirements, and that doubling a 30fps title to 60fps would demand extremely high specs.

If you acknowledge that the CPU is a bottleneck in those tests (performance RT for the 2070 and Fidelity for the 6800), then why on Earth do you draw GPU-to-GPU performance comparisons from them? You literally say things like "the PS5 is x% faster than the RTX 2070 / RX 6800 across this benchmark run", and yet here you are admitting that the GPU is CPU-limited.
 
I think the discussion here got a bit out of hand. People are giving too much credit to Sony's ports on PC; we shouldn't draw too many conclusions from them.
So far, I don't think their implementation on PC is great. Just because they didn't exist before does not mean they should get a free pass.

Nevertheless, some good contributions came out of the discussion.
 

This is where I'm excited to see how the TLOU Remake performs; since it was announced for PC early on, I'm expecting the engine to have been tweaked to work well with PCs.
 
@Michael Thompson I think the way you framed the entire video was poor.

What I recommend for your future comparisons: make it very clear what the bottleneck is if such a situation arises again. A VRAM bottleneck isn't like most bottlenecks; it absolutely tanks performance, and you should have put a huge red flag over that fact. No one here who watched that video came away with that impression. It seemed you really meant to show how much better the PS5 was. Now, I know you say your objective was to compare against the 2070 because it's the PS5's closest competitor on the NVIDIA side, but that's just that: it's the closest. It's not the PS5's GPU, and it therefore has different limitations. By using PS5 settings, you're basically throwing it onto the PS5's court and going "see how badly it performs" when the PS5 has 50% more VRAM.

Offer alternatives: the PS5 uses 4x AF in Fidelity mode, when 16x AF has been basically free on PC for god knows how long. You never brought it up in your video, and I don't think you touched upon DLSS either. You could have said, "At matched PS5 settings, the 2070 just doesn't cut it, but here is what you can do instead:" and then gone over cranking up AF, using DLSS, lowering texture quality to alleviate the bottleneck, etc. Those who own a rig similar to yours only came away with the conclusion that their performance is garbage compared to the PS5. They didn't come away knowing how to mitigate these issues or improve performance to get a console-like experience that takes advantage of their hardware (albeit not identically).
 
Really bizarre that people attack NXGamer for using a CPU quite similar in performance to the PS5's CPU, yet ignore DF using a high-end CPU. In the end you have benchmarks from completely different approaches, which we should appreciate. It's all plastic or metal game toys; chill out, people.
It's not about what hardware he uses. It's about how he interprets the results he gets and the conclusions he draws from the data. They're very clearly biased.
 
Really bizarre that people attack NXGamer for using a CPU quite similar in performance to the PS5's CPU, yet ignore DF using a high-end CPU. In the end you have benchmarks from completely different approaches, which we should appreciate. It's all plastic or metal game toys; chill out, people.
DF's approach has been used by damn near every PC reviewer for the last 20+ years. Why do you think Hardware Unboxed uses a test rig with 32GB of DDR5, a beefy CPU, a high-end motherboard and other top-tier components to run their GPU tests? Why do you think they use a 3090 Ti at 1080p to test CPUs? It's to eliminate bottlenecks and test a specific piece of hardware. Almost no one online tests an entire system, because there are just too many variables to take into account.

Criticize DF by all means for that, but then criticize the whole PC review industry of the last two decades.
 