Current Generation Games Analysis Technical Discussion [2022] [XBSX|S, PS5, PC]

Come on, claiming you "have never noticed any sort of bias from them" is just ridiculous. HUB is the definition of bias.

But I don't see a problem with Modern Warfare 2. It's just a console-optimized game that got a cheap port to PC. Even ray tracing was discarded after three games.

Lovelace runs okay. The 4090 uses ~380W while delivering ~70% more performance than the 3090 Ti:
 
The 4090 is 78% faster than the 3090 Ti, 93% faster than the vanilla 3090, and 114% faster than the 3080. Call of Duty has the highest differential between Lovelace and Ampere of all games tested; it's the same with Vanguard. It seems whatever is broken on Ampere with this engine works fine on Lovelace. Since Nvidia is underperforming massively in this title outside of Lovelace, I'm sure a (ocd) 2070 is working in overdrive as we speak.
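
For reference, those percentages are just ratios of the average frame rates; here's a minimal sketch with placeholder fps values (not the actual review numbers), purely to show how the uplift figure is derived:

```cpp
#include <cstdio>

int main() {
    // Placeholder average frame rates -- assumed values for illustration only.
    const double fps_4090   = 178.0;
    const double fps_3090ti = 100.0;

    // Relative uplift in percent: how much faster one card is than the other.
    const double uplift_pct = (fps_4090 / fps_3090ti - 1.0) * 100.0;
    std::printf("4090 is %.0f%% faster than the 3090 Ti\n", uplift_pct);
    return 0;
}
```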
 
Come on, claiming you "have never noticed any sort of bias from them" is just ridiculous. HUB is the definition of bias.
Ya know what I hear when people say this in cases like this? "I'm biased myself and am incapable of dealing with data points that go against it, so I accuse others of bias to reconcile reality".

No, I genuinely have not noticed any bias whatsoever. But that's cuz I have no dog in the fight, either. I've certainly disagreed with HUB's conclusions/talking points/testing methods on various things, but that is not the same as saying they are inherently trying to favor any one brand, much less building their ENTIRE BENCHMARK SUITE around making a single GPU from 2019 look good. lol When you're saying silly stuff like that, you've got to start thinking maybe your original premise just isn't correct to begin with.

Again, it's a bit embarrassing to see this narrative being so popular on here. I'd expect better.
 
The 4090 is 78% faster than the 3090 Ti, 93% faster than the vanilla 3090, and 114% faster than the 3080. Call of Duty has the highest differential between Lovelace and Ampere of all games tested; it's the same with Vanguard. It seems whatever is broken on Ampere with this engine works fine on Lovelace. Since Nvidia is underperforming massively in this title outside of Lovelace, I'm sure a (ocd) 2070 is working in overdrive as we speak.

Power consumption is a very good indicator that the engine and its DX12 path aren't optimized for Nvidia GPUs. Same with AC: Valhalla, which uses less than 340W on a 4090 at 4K...
Nvidia fixed two of Ampere's problems with Lovelace: low GPU clocks and not enough bandwidth. Both help in these games.
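
To put the power argument in concrete terms, perf/W is just average fps divided by board power draw; a rough sketch with assumed (not measured) numbers:

```cpp
#include <cstdio>

// Assumed fps and board-power figures -- illustrative only, not measured data.
struct Result { const char* gpu; double avg_fps; double watts; };

int main() {
    const Result results[] = {
        {"RTX 4090 (assumed)",    170.0, 380.0},
        {"RTX 3090 Ti (assumed)", 100.0, 450.0},
    };
    for (const Result& r : results) {
        // fps per watt is equivalent to frames rendered per joule of board power.
        std::printf("%-22s %.2f fps/W\n", r.gpu, r.avg_fps / r.watts);
    }
    return 0;
}
```

A card drawing well under its power limit in a GPU-bound scene is usually a hint that its shaders aren't being kept busy, which is the point being made above.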
 
Those FH5 results from PCGH are a result of VRAM. It has nothing to do with "next-gen geometry and lighting", which Forza doesn't even have. The 5700XT has aged just fine. In the majority of engines it performs better than its Turing competitor. You are mostly limited to UE4 titles when looking for bad performance.
We've debunked this deceptive comparison video before. Essentially, he modified his testing methodology this time: he used High settings and avoided Ultra or Max settings, which padded the margins. The video was deceiving; the guy is so self-indulgent and so defensive about his flawed recommendation of the 5700XT over Turing cards that he is willing to cheat his audience over it.

For example, last time he did that same comparison, he tested Cyberpunk at High settings, Valhalla at Very High, and Watch Dogs at Ultra, and he tested dozens of other games at their max settings. This time he lowered the quality settings a notch or two: Cyberpunk at Medium, Valhalla at High, and Watch Dogs at Very High, while the other dozens of games were tested at significantly lower than max settings.

You don't lower settings in games you've previously tested at higher settings (Cyberpunk, Valhalla, Watch Dogs). Secondly, you don't lower settings in games that are already achieving 200+ fps (such as Siege). He is trying hard to prop up the shrinking margin to uphold his narrative. He also introduced ReBAR for the 5700XT this time, which wasn't available last time and which very few users will actually manage to get working on the 5700XT.

Last time the 5700XT had a 15% lead over the 2060 Super; now it has a 13% lead despite the lowered quality settings. So the margin is shrinking, which by Steve's own logic would count as "fine wine" aging for the 2060 Super, hence the deceptive move to soften that reduction and uphold his narrative of the 5700XT being the superior choice.
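
Worth noting how an aggregate "15% lead" is usually calculated: it's typically the geometric mean of the per-game performance ratios, so no single outlier game dominates. A small sketch with invented ratios (not HUB's actual data):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // Invented per-game fps ratios (5700XT / 2060 Super) -- not real benchmark data.
    const std::vector<double> ratios = {1.20, 1.05, 1.18, 1.10, 1.12};

    // Geometric mean of the ratios: the usual way a multi-game average lead is built.
    double log_sum = 0.0;
    for (double r : ratios) log_sum += std::log(r);
    const double geomean = std::exp(log_sum / ratios.size());

    std::printf("Aggregate lead: %.1f%%\n", (geomean - 1.0) * 100.0);
    return 0;
}
```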

Modern Warfare is a joke, a 6800XT beating a 3090 Ti... the game is AMD-optimized to the fullest.
And Steve has the audacity to call this game a fine example of the 5700XT "aging well"... and some people call him unbiased!
 
Anytime a game is a lot faster on Nvidia, it's because of Nvidia's superior hardware. When a game is a lot faster on AMD, it's because it's poorly optimized for Nvidia.

It doesn't really matter what the settings are as long as the VRAM doesn't become a bottleneck. The 5700XT is just faster than its Turing competitor in the vast majority of engines.
 
You just sound super paranoid to me. HUB has been growing just fine without any specific 5700XT cult following.

It's such a bizarre conspiracy theory that you should probably ask yourself why you're going down this hole and getting upset over it.

And it's quite sad to see this sort of commentary in general and so many people agreeing with it. Wild accusations of bias just because they say something you disagree with or don't like hearing is the kind of piddle I expect to see in a YouTube comment section, not here. I've never noticed any sort of bias from them, personally. People take their relatively non-enthusiast views on ray tracing as some hatred of Nvidia, when I think they're being pretty reasonable myself. I don't think ray tracing is a huge deal at the moment; it makes little difference for too much cost in the vast majority of implementations. I'm sure that'll change in time, but for now, you don't need to be some AMD fanboy like y'all are claiming to think like this at all. I'm certainly not. I've never even owned an AMD GPU.
It's not an accusation that HW Unboxed is AMD-biased, it's a fact. We have so much evidence. I already brought some evidence with me, and you can find much more in this thread. That DLSS 2.0 vs FSR 1.0 video is the perfect example.


Here they say "neither DLSS nor FSR 1.0 look good at 1080p", basically throwing FSR 1.0 and DLSS 2.0 in the same basket, which is completely, utterly false; of course DLSS is leagues ahead of FSR 1.0. If you look at the video, DLSS actually looks better than native 1080p in the very data they showed: the balloon lines are more detailed, and the image is sharper, more stable, and has less aliasing. https://imgsli.com/NjE4NDI

So that claim is wrong based on the very data they showed. DLSS 2.0 is of course miles ahead of FSR 1.0, especially at such a low resolution, yet they throw both upscaling methods in the same basket, disregarding the benefit of DLSS 2.0. Why? Most likely so that RDNA1 owners don't feel like they're missing out, since most gamers play at 1440p and 1080p with this card, though that part is speculation. I pointed out the flaw in this video to Steve and he insulted me, calling me the worst kind of degenerate fanboy. I'm not the first one, either; Steve regularly attacks people on Twitter when they disagree with his views. That guy is super unprofessional.

Their thumbnails are also super biased. When it's about Intel or Nvidia, they use words like "fail", "terrible", etc., but as soon as they cover AMD stuff, they highlight the positive aspects of it. The thumbnail they made for their DLSS 3 video is basically a DLSS 3 artifact. Same with XeSS. But with FSR 2.0 they don't highlight the negative aspects, only the positive one (that it's nearly on par with DLSS). There was a photo somewhere in this thread highlighting this.

I am sorry, but their bias is super clear, I'm honestly surprised you are not seeing it.

BTW, I'm not a Nvidia fanboy at all. I'm rooting for AMD this generation and I can't wait for tomorrow.

Anytime a game is a lot faster on Nvidia, it's because of Nvidia's superior hardware. When a game is a lot faster on AMD, it's because it's poorly optimized for Nvidia.

It doesn't really matter what the settings are as long as the VRAM doesn't become a bottleneck. The 5700XT is just faster than its Turing competitor in the vast majority of engines.

Turing has HW-accelerated ray tracing, DP4a, mesh shaders, sampler feedback, and VRS. If Turing gets faster as more current-gen-only games come out, there is a very valid reason for it.
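
Side note: all of those are queryable feature tiers in D3D12, so an engine can check at startup what the driver exposes. A minimal sketch (Windows, recent D3D12 SDK headers, link d3d12.lib); the tier queries are standard API, the surrounding program is just an illustration:

```cpp
#include <d3d12.h>
#include <cstdio>

int main() {
    // Create a device on the default adapter at feature level 12_0.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 feature level 12_0 device available.\n");
        return 1;
    }

    // DX12 Ultimate feature tiers: DXR, VRS, mesh shaders, sampler feedback.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opt5 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opt6 = {};
    D3D12_FEATURE_DATA_D3D12_OPTIONS7 opt7 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &opt5, sizeof(opt5));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &opt6, sizeof(opt6));
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7, &opt7, sizeof(opt7));

    std::printf("Raytracing tier:       %d\n", static_cast<int>(opt5.RaytracingTier));
    std::printf("VRS tier:              %d\n", static_cast<int>(opt6.VariableShadingRateTier));
    std::printf("Mesh shader tier:      %d\n", static_cast<int>(opt7.MeshShaderTier));
    std::printf("Sampler feedback tier: %d\n", static_cast<int>(opt7.SamplerFeedbackTier));

    device->Release();
    return 0;
}
```

(DP4a comes in with Shader Model 6.4 rather than a separate tier enum, so it isn't queried here.)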

And don't turn it into an Nvidia vs. AMD fight. Of course that also applies to RDNA2. I suspect even the lowest RDNA2 GPUs, like the 6600 and perhaps even the 6500XT, will destroy the 5700XT when cross-gen finally ends.
 
And Steve has the audacity to call this game a fine example of the 5700XT "aging well"... and some people call him unbiased!
I don't think that's a fair criticism in this case. Steve does state multiple times that Nvidia needs to get their drivers up to speed. The 5700XT is an old card now and is running this game very well, so I'd say it certainly has aged well, especially since it initially competed with the 2070 but is now closer to the 2080 in most benchmarks.
 
How so?! It's still close to the 2060 Super in performance according to Steve's own mass benchmark.
You're right. I looked it up and it's only 6% faster than the 2060S and on par with the 2070S. I thought it had supplanted the 2070S and was almost neck-and-neck with the 2080.
 
Those FH5 results from PCGH are a result of VRAM. It has nothing to do with "next-gen geometry and lighting", which Forza doesn't even have. The 5700XT has aged just fine. In the majority of engines it performs better than its Turing competitor. You are mostly limited to UE4 titles when looking for bad performance.
Not sure why you're saying they're a result of VRAM when every single card that you'd compare the 5700/XT to also has 8GB of VRAM.

Even the 2060 non-Super with only 6GB of VRAM has a more consistent presentation in that title and better 1% lows.
You lose 3fps on the 'average' versus the 5700XT, but gain 3fps on the 1% lows, so the overall experience is probably a wash.
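
For clarity on what those numbers mean, "average" and "1% low" both come from the frame-time capture. A small sketch with made-up frame times; note that one common definition of the 1% low is the 99th-percentile frame time converted to fps, while some outlets instead average the slowest 1% of frames:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    // Made-up frame times in milliseconds -- illustrative only, not a real capture.
    std::vector<double> frame_ms = {16.2, 15.9, 17.1, 16.4, 33.0,
                                    16.0, 16.5, 18.2, 16.1, 16.3};

    // Average fps: total frames divided by total capture time.
    double total_ms = 0.0;
    for (double ms : frame_ms) total_ms += ms;
    const double avg_fps = 1000.0 * frame_ms.size() / total_ms;

    // 1% low: take the frame time at the 99th percentile of the sorted capture
    // and convert it to fps. With only ten samples this is simply the worst frame.
    std::sort(frame_ms.begin(), frame_ms.end());
    std::size_t idx = static_cast<std::size_t>(0.99 * frame_ms.size());
    if (idx >= frame_ms.size()) idx = frame_ms.size() - 1;
    const double low_1pct_fps = 1000.0 / frame_ms[idx];

    std::printf("Average: %.1f fps, 1%% low: %.1f fps\n", avg_fps, low_1pct_fps);
    return 0;
}
```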
 
Not sure why you're saying they're a result of VRAM when every single card that you'd compare the 5700/XT to also has 8GB of VRAM.

Even the 2060 non-Super with only 6GB of VRAM has a more consistent presentation in that title and better 1% lows.
You lose 3fps on the 'average' versus the 5700XT, but gain 3fps on the 1% lows, so the overall experience is probably a wash.
Because the writers of that article specifically pointed it out. Nvidia and AMD handle VRAM differently.

Turing has HW-accelerated ray tracing, DP4a, mesh shaders, sampler feedback, and VRS. If Turing gets faster as more current-gen-only games come out, there is a very valid reason for it.

And don't turn it into an Nvidia vs. AMD fight. Of course that also applies to RDNA2. I suspect even the lowest RDNA2 GPUs, like the 6600 and perhaps even the 6500XT, will destroy the 5700XT when cross-gen finally ends.
Those features are irrelevant right now, as no developer even uses them, not to mention this sentiment was around long before they even existed.
 
It doesn't really matter what the settings are as long as the VRAM doesn't become a bottleneck. The 5700XT is just faster than its Turing competitor in the vast majority of engines.

Of course the settings matter. If you demand a game uses its DXR setting then Turing is infinitely faster. If you enable DLSS - which can look better than native even at 1080p output - then Turing can have a relative gain over even the inferior FSR.

If settings don't matter, then why don't we just max out DXR, turn on DLSS, and see what these aggregate benchmarks of modern, high-profile games show for these pieces of hardware? Oh, of course, because settings matter!

And once again, it should be noted that tens of millions of console gamers are enjoying RT with systems well below 2060S RT performance.

Bias: "cause to feel or show inclination or prejudice for or against someone or something". It's okay to not think RT is worth it or that you don't like the look of DLSS. But please don't present those personal inclinations as some kind of objective basis for judging graphics cards.

Depending upon what you want either the 5700XT or the 2060S could have been clearly the better choice, or equally good value.
 
Of course the settings matter. If you demand a game uses its DXR setting then Turing is infinitely faster. If you enable DLSS - which can look better than native even at 1080p output - then Turing can have a relative gain over even the inferior FSR.

If settings don't matter, then why don't we just max out DXR, turn on DLSS, and see what these aggregate benchmarks of modern, high-profile games show for these pieces of hardware? Oh, of course, because settings matter!

And once again, it should be noted that tens of millions of console gamers are enjoying RT with systems well below 2060S RT performance.

Bias: "cause to feel or show inclination or prejudice for or against someone or something". It's okay to not think RT is worth it or that you don't like the look of DLSS. But please don't present those personal inclinations as some kind of objective basis for judging graphics cards.

Depending upon what you want either the 5700XT or the 2060S could have been clearly the better choice, or equally good value.
I assumed it was obvious I’m referring to settings that are supported and produce identical IQ on both GPUs.
 
I assumed it was obvious I’m referring to settings that are supported and produce identical IQ on both GPUs.

But those are IQ settings that are in some definite ways lower than tens of millions of consoles are actually using, and that tens of millions of graphics cards might well be surpassing.

And why the ever living fuck is DLSS excluded from judgements about cards that have DLSS available to them, when specifically talking about games that support DLSS?
 
And why the ever living fuck is DLSS excluded from judgements about cards that have DLSS available to them, when specifically talking about games that support DLSS?
Yeah, I felt this way about GameWorks features in the past. I remember playing the Arkham games with PhysX on for the first time, or seeing The Witcher 3's HairWorks, and wondering why no one benchmarks with these features on. I get wanting settings parity, but if you want to know if a graphics card is the right one for you, you have to know what it's going to bring to the table.

This complaint isn't specifically about HUB. It's been hard to find benchmarks with those features on, period. And if you already had a card with these features, you might want to know if a new generation would bring enough performance to run with those features on.
 
But those are IQ settings that are in some definite ways lower than tens of millions of consoles are actually using, and that tens of millions of graphics cards might well be surpassing.

And why the ever living fuck is DLSS excluded from judgements about cards that have DLSS available to them, when specifically talking about games that support DLSS?
They should be considered, but that's an entirely different argument than the 5700XT aging poorly from a performance perspective. The lack of DX12 Ultimate features in particular has been completely irrelevant and likely will be for several more years.
 
Yeah, I felt this way about GameWorks features in the past. I remember playing the Arkham games with PhysX on for the first time, or seeing The Witcher 3's HairWorks, and wondering why no one benchmarks with these features on. I get wanting settings parity, but if you want to know if a graphics card is the right one for you, you have to know what it's going to bring to the table.

This complaint isn't specifically about HUB. It's been hard to find benchmarks with those features on, period. And if you already had a card with these features, you might want to know if a new generation would bring enough performance to run with those features on.

I can totally understand that, but for me it wasn't hard to find reviews that benchmarked PhysX or HairWorks or whatever else and then decide it wasn't worth it to enable either of those features.

So, with my 1070, I already knew that HairWorks was completely irrelevant to me, since the hit to performance was far too large for the very limited benefit that it brought.

Nowadays, I'm fine with a baseline parity benchmark and then mentions of whether or not X proprietary feature reduces performance, increases performance, reduces IQ a lot, reduces IQ a little, etc.

It helps that, for me, DLSS 2.x is irrelevant in the vast majority of games because in most cases the performance it brings isn't worth the artifacts that it introduces. I can, however, certainly see how some users might feel differently and either don't care about the additional artifacts or honestly don't notice them while playing.

IMO, it all comes down to a review site attempting to find a set of benchmarks that would benefit the greatest number of people. If they want to limit the appeal of their benchmarks to a certain subset of the gaming population, that's fine as well.

The nice thing about having many review sites that cater to different gaming demographics is that it allows people to find sites that cater to benchmarking games in a way that is more suitable to how they would like to play. Don't generally want to enable RT due to the performance hit? Find a review site that has benchmarks with RT disabled. Want to only see reviews of games where RT is enabled? Find a review site that ensures all games are tested with RT on. Don't like upscaling artifacts? Find review sites that benchmark with DLSS, FSR, and XeSS off. Don't mind upscaling artifacts? Find review sites with DLSS, FSR, and XeSS on. Etc.

IMO, having multiple review sites that DO NOT benchmark with identical settings is what everyone should be cheering for. That has the greatest potential to then have reviews that will be relevant to almost all of the gaming populace, even if no single review site can hope to have benchmarks that are relevant to everyone.

What I hate is people trying to discredit a review site because they don't benchmark how that person thinks games should be benchmarked. It also doesn't help that people continue to try to use benchmarks to claim X GPU is the absolute bestest for everyone based only on criteria that that person thinks is important, and then dismiss anyone who disagrees and thinks some other set of criteria is the most important. Bleh.

I mean why should it matter if X person thinks A hardware is the best for them but Y person thinks B hardware is best for them? And then either X or Y attempts to discredit the other when they obviously have different opinions as to what is important when they game and thus neither will ever agree on which hardware is the bestest in the galaxy much less the universe? Wouldn't it be better if X and Y can just be happy that the other person has gaming hardware that they enjoy playing games on?

BTW - most of that wasn't directed towards your post. :p

Regards,
SB
 
They should be considered, but that's an entirely different argument than the 5700XT aging poorly from a performance perspective. The lack of DX12 Ultimate features in particular has been completely irrelevant and likely will be for several more years.
That is because of cross-gen.

Now, as cross-gen slowly but surely ends, we can expect DX12 Ultimate features to actually be used in games.
 