No DX12 Software is Suitable for Benchmarking *spawn*

He modified his testing methodology this time: he used High settings and avoided Ultra or Max settings, which contributed to the margins. This video is deceiving; the guy is so self-indulgent and so defensive about his flawed recommendation of the 5700XT over modern Turing cards that he is willing to cheat his audience for it.

For example: last time he did that same comparison, he tested Cyberpunk at High settings, Valhalla at Very High and Watch Dogs at Ultra settings; he also tested dozens of games at their max settings.

This time he switched and lowered the quality settings a notch or two: Cyberpunk at Medium, Valhalla at High and Watch Dogs at Very High; the other dozens of games were tested at settings significantly below max.

Last time the 5700XT had a 15% lead over the 2060 Super; now it has a 13% lead despite the lowered quality settings. So the margin is decreasing, a "fine wine" aging for the 2060 Super in Steve's book, necessitating the deceptive move of lessening the reduction of the margin to try and uphold his "narrative" of the 5700XT being the superior choice.

Yeah but you have to admit, if you simply accept that they are right to:

- Ignore that for the last 18 months, the RX 5700 XT hasn't been able to match the graphics effects of next-gen consoles ...
- ... even the Series S ...
- Pretend that no-one games on 60 or 75 Hz panels, or has a VRR display where they're happy to set FPS_MAX in games where they want eye candy
- Dismiss DLSS because of the admittedly inferior FSR (even though the 2060S and 3060 can do both, which surely is better than just FSR - I mean wtf?)
- Talk about the compromises that might be required for RT ...
- ... while pretending that never even having the choice of RT in any game is somehow not a fucking compromise in your GPU
- Imply that RT at 2060S levels is too slow while...
- .... ignoring the fact it's plenty fast enough for many people in many cases ... even in some of the games they're benching...
- ... and also ignore that the 2060S and especially the 3060 are faster at RT than new-gen consoles, where tens of millions of people are enjoying games with RT

I mean, if you do all that, and more .... then you can see that this was a great video for HU to put out in 2022.
 
And if you want to look at how these cards do now (which is what was the purpose of this video AFAIU) then brushing off DLSS and RT is a very weird thing to do.
He is just trying to get "brownie points" with the AMD fan base and justify the purchase. No mention of the black screen driver issues, which are still as prevalent now as back then.
  • Enhanced Sync may cause a black screen to occur when enabled on some games and system configurations. Any users who may be experiencing issues with Enhanced Sync enabled should disable it as a temporary workaround.
 
The bug you mention is separate from the old issue, which was fixed ages ago, and it only even potentially affects people using that specific sync feature (for which there is little point these days)
Got a link to the driver release notes? AFAIK the issue still occurs and AMD is monitoring it.

AMD RX 5700-series black screen

It’s not a total fix, however, as AMD notes that “although Radeon Software Adrenalin 2020 Edition 20.2.2 resolves many black screen issues, AMD is aware that some users may still experience black screen or system hang issues during extended periods of gameplay. AMD will continue to monitor and investigate reports of these issues closely.”

AMD has, for instance, decided to remove the Enhanced Sync feature from the gaming profile specifically because it still causes a black screen issue “on some games and system configurations” despite the fix. It also states that the Instant Replay or other third-party capture tools will potentially result in stuttering performance in some games too.

So yeah, it’s still not all 100% rosy out there.
 
- Talk about the compromises that might be required for RT ...
- ... while pretending that never even having the choice of RT in any game is somehow not a fucking compromise in your GPU

- Imply that RT at 2060S levels is too slow while...
- .... ignoring the fact it's plenty fast enough for many people in many cases ... even in some of the games they're benching...
- ... and also ignore that the 2060S and especially the 3060 are faster at RT than new-gen consoles, where tens of millions of people are enjoying games with RT

Great points.
 
Suffice to say, the black screen issue on the RX 5700 remains unfixed, as indicated, without a complete fix now for years. Unfortunately HUB chose not to include in their review the negative aspects (black screens, RT and upscaling techniques) that did not age well with the product.
 
Driver issues with RDNA1 plagued the cards for at least half a year after launch, and yes, Steve should have mentioned that while talking about how great a buy the 5700XT was at its $400 launch price. But nah.
 
As someone who bought a 5700XT on launch day and used it until Feb 2021, when I upgraded to a 6900XT, I had zero black screen issues. The only driver issue I had was a short period when RDR2 in Vulkan mode would have weird effects with deformable terrain like mud/snow. The 5700XT is now in my wife's PC, still with no black screen issues.
 
He modified his testing methodology this time: he used High settings and avoided Ultra or Max settings, which contributed to the margins. This video is deceiving; the guy is so self-indulgent and so defensive about his flawed recommendation of the 5700XT over modern Turing cards that he is willing to cheat his audience for it.
I also don't get why they are benchmarking FH5 at High instead of Extreme settings. These settings look a generation apart when it comes to overall detail, and High looks much worse than the Series S in quality mode. Who would want that?

As PCGH found out, at Extreme settings, the 5700XT falls well below a 2070 at 1440p while it performs like normal at lower settings. https://www.pcgameshardware.de/Forz...ark-Test-Steam-Systemanforderungen-1382783/4/
 
a. Because it's the most used engine out there, and in the majority of non-AAA games using it AMD is actually losing to its Nv counterparts more often than not (the most recent example being Stray).
b. Because it's the most h/w agnostic engine out there. His top 5 best-performing games include only one UE4 game, and it's an AMD "sponsored" title.

Also if you want to "look back at which was the better buy when new" you really shouldn't even mention FSR.
And if you want to look at how these cards do now (which is what was the purpose of this video AFAIU) then brushing off DLSS and RT is a very weird thing to do.
It is a widely used engine which is why he benchmarked 5 titles using it. By what metric is it the most IHV agnostic engine?

And plenty of NV sponsored titles are included and no complaints about those.
 
Certain members here won't ever be satisfied if someone doesn't share their exact views and still dares to publish them. It happens every time someone dares to suggest that AMD is a good, or possibly even better, option than a specific NV card.
 
It is a widely used engine which is why he benchmarked 5 titles using it.
Does that reflect the percentage of games using the engine overall?
Also what 5? I've counted 4, of which 3 are from AMD's partner program. Does that reflect the percentage of UE games partnering with AMD?

By what metric is it the most IHV agnostic engine?
It's open source to the IHVs, meaning that any adjustments they want to make get into point releases.
It's also the least prone to sudden performance variations between IHVs according to all the benchmarks - Steve's included.

And plenty of NV sponsored titles are included and no complaints about those.
It's not about how many "sponsored" titles are included, it's about how the h/w is performing on average. Plenty of Nv sponsored titles run better on AMD h/w; it's not an indication of anything (even less so with "AMD titles", where Turing+ features tend to be blocked completely).
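
As an aside on "how the h/w is performing on average": review roundups typically boil a game suite down to something like a geometric mean of per-game fps ratios, which is where a single "13%" or "15%" margin figure comes from. A minimal C++ sketch with made-up numbers (illustrative only; not HU's data, and not necessarily their exact aggregation method):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical per-game results for two cards (fps_a vs fps_b).
struct Result { const char* game; double fps_a; double fps_b; };

int main() {
    std::vector<Result> suite = {
        {"Game 1",  92.0,  81.0},
        {"Game 2", 143.0, 139.0},
        {"Game 3",  67.0,  55.0},
    };

    // Geometric mean of the per-game ratios: average the logs, then exponentiate.
    double log_sum = 0.0;
    for (const auto& r : suite)
        log_sum += std::log(r.fps_a / r.fps_b);
    const double geomean = std::exp(log_sum / suite.size());

    std::printf("Card A leads by %.1f%% on (geometric) average\n",
                (geomean - 1.0) * 100.0);
    return 0;
}
```

The point being that one outlier "sponsored" title shifts an average like this only modestly; what matters is the spread across the whole suite.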
 
Tiny Tina’s Wonderlands, Gears 5, PUBG, Assetto and Outer Worlds if I’m not mistaken.

When have review game selections ever been mapped to the % of games using a given engine? That would be a terrible philosophy. Benchmarking as many different games/engines as your timetable allows is a better approach.

The majority of games don’t support Turing+ proprietary features. The majority of games in general don't support either IHV’s proprietary features and haven’t for many years. It’s less to do with AMD/Nvidia blocking anything and more to do with console focused game development.
 
Oh, didn't know that one was UE4.

When have review game selections ever been mapped to the % of games using a given engine?
When they announce the intention of showing how some card is doing, performance- and feature-wise, in general?

Benchmarking as many different games/engines as your timetable allows is a better approach.
Benchmarking without thinking on what you're benchmarking is a terrible philosophy.

The majority of games don’t support Turing+ proprietary features.
The only "proprietary" feature of Turing+ is DLSS, and a huge share of UE4 games support it these days.

It’s less to do with AMD/Nvidia blocking anything and more to do with console focused game development.
It has everything to do with some IHV blocking something. Because of consoles? These support all Turing+ features, including fast ML math.
 
It hasn’t been demonstrated whether the current consoles are performant enough for useful ML. Prior-gen consoles, which virtually all titles are still releasing on, don’t support much of Turing’s feature set.
 
It hasn’t been demonstrated whether the current consoles are performant enough for useful ML.
I'd think that ~40 TOPS of INT8 should be plenty for some ML, considering that it's fairly close to what the 2060 has (see the rough arithmetic below).

Prior-gen consoles, which virtually all titles are still releasing on, don’t support much of Turing’s feature set.
So? Prior-gen consoles have not been the target for PC development since circa 2018.
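
For reference, a rough sketch of where a figure in that ballpark can come from, assuming RDNA2's packed INT8 dot-product path runs at 4x the FP32 rate (which consoles and clocks the "~40 TOPS" refers to is my assumption, not something stated above):

$$
\mathrm{TOPS_{INT8}} \approx 4 \times \mathrm{TFLOPS_{FP32}}
\quad\Rightarrow\quad
\text{PS5: } 4 \times 10.3 \approx 41 \text{ TOPS},\qquad
\text{Series X: } 4 \times 12.2 \approx 49 \text{ TOPS}.
$$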
 
Ray Tracing?

Is that an actual technical limitation though?

I've brought this up before on whether or not DX12's changes are actually required or at least better for ray tracing, or if it's just a limitation by design choice.

I've had this general issue with the idea of DX12 and "low level" APIs, which essentially push more of the optimization onus onto the individual software vendors (game developers) rather than the hardware vendors, and it doesn't really make sense to me from a market-dynamics standpoint.
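
To make the "onus shifts to the software vendor" point concrete, here is a minimal C++ sketch of one such responsibility under D3D12: explicit resource state barriers. The function and variable names are assumptions for illustration, not from any shipping engine; under D3D11 the equivalent hazard tracking lived inside the driver.

```cpp
#include <d3d12.h>
#include "d3dx12.h"   // helper header shipped with Microsoft's D3D12 samples

// Record the transition needed before sampling a texture that was just
// rendered to. 'cmdList' and 'texture' are assumed to exist already.
void UseRenderTargetAsShaderInput(ID3D12GraphicsCommandList* cmdList,
                                  ID3D12Resource* texture)
{
    // The application, not the driver, declares that 'texture' stops being a
    // render target and becomes a pixel-shader input. Getting these wrong, or
    // ordering them badly, is now the game developer's bug to find and the
    // game developer's performance to tune.
    const D3D12_RESOURCE_BARRIER barrier = CD3DX12_RESOURCE_BARRIER::Transition(
        texture,
        D3D12_RESOURCE_STATE_RENDER_TARGET,
        D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE);
    cmdList->ResourceBarrier(1, &barrier);

    // Under D3D11, binding the texture via PSSetShaderResources was enough;
    // the driver tracked the hazard, i.e. the optimization burden sat with
    // the hardware vendor.
}
```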
 