Value of Hardware Unboxed benchmarking

We spent the entire last page detailing how this is never going to be true: high-end visual features will always require high-end hardware. This has been the history of PC gaming since forever.
I must have missed that, but it’ll eventually be true. I’m not sure why it will never be true, as full RT/path tracing cannot remain an exclusively “high end” feature for an indefinite amount of time. Remaining in that state will drastically impact its industry adoption.
If you buy a low-end GPU, don't expect it to run max path tracing at native res at 120fps. You may be able to do medium ray tracing at a good 60fps or so, or selective ray tracing at high fps, but expecting full RT to run on a 4060 at 60fps is not realistic at all.
The premise of my argument is that until average GPUs can run full RT at average settings, 1080p60 for example, it’s of no benefit to the average user. Usually when average GPUs can run a set of features, we see mass adoption in many games.
In the past, Anti-Aliasing remained a very high-end feature for a long time; the same was true for Shader Model 3 effects, dynamic shadow maps, soft shadows, high levels of tessellation, physics, etc. Ray Tracing is no exception.
Ok, and how does that contradict what I said earlier? When those features were high end, weren't they of no benefit to the average user?
For example, a 4060 can handle Cyberpunk with ray tracing just fine, but with path tracing? Of course not! This is a high-end feature introduced to run on high-end hardware; you can't demand that a 4060 be able to do path tracing at 60fps. However, you are guaranteed to be able to do it on a 6060 in the future.
I never specified that the 4060 needs to be the one that allows average users to run full RT. So I’m not sure why there’s a focus on the 4060 specifically, other than to highlight that, as it stands, the average user can’t run full RT.
 
Generally I agree, but Windows 11 does have some meaningful requirements. You can work around them to a point, but this isn't something I would go recommending to people; it could lead to unexpected behavior. E.g., BitLocker should always be supported, but this wouldn't be the case on old, unsupported hardware.

Windows 10 can run on just about anything.
Windows 11’s requirements are for encryption, however, and don’t really have anything to do with performance. You can also bypass them trivially with essentially no consequences outside of BitLocker. I think recently MS straight up waived the requirements for some people upgrading, but I haven’t done much research on this, only saw an article I didn’t really read lol.
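(For reference, the bypass I had in mind is the MoSetup registry flag Microsoft itself has documented for in-place upgrades. The sketch below is just my own illustration, assuming Windows, an elevated (admin) Python process, and that you accept running in an "unsupported" state; I believe you still need at least TPM 1.2 and UEFI for setup to accept it.)

```python
# Rough sketch of the documented MoSetup override, not a recommendation.
# Assumes Windows and an elevated (administrator) Python process.
import winreg

KEY_PATH = r"SYSTEM\Setup\MoSetup"
VALUE_NAME = "AllowUpgradesWithUnsupportedTPMOrCPU"

def allow_unsupported_upgrade() -> None:
    # Create the key if it doesn't exist, then set the DWORD flag to 1 so the
    # Windows 11 upgrade skips the CPU allow-list / TPM 2.0 check.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_WRITE) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    allow_unsupported_upgrade()
```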

Point is: no other GPU has system requirements like this so it’s worth investigating what’s going on here. Someone tried to argue every GPU has system requirements because Windows (and other OSes) have system requirements, which I think is a ridiculous way of looking at it since Windows system requirements might as well be “functional computer”.
 
Windows 11’s requirements are for encryption, however, and don’t really have anything to do with performance. You can also bypass them trivially with essentially no consequences outside of BitLocker. I think recently MS straight up waived the requirements for some people upgrading, but I haven’t done much research on this, only saw an article I didn’t really read lol.

Point is: no other GPU has system requirements like this so it’s worth investigating what’s going on here. Someone tried to argue every GPU has system requirements because Windows (and other OSes) have system requirements, which I think is a ridiculous way of looking at it since Windows system requirements might as well be “functional computer”.
I pretty much agree but it is what it is.

It's an unfortunate position Intel is in, where the users who would be most likely to want a GPU upgrade can't take full advantage of their hardware. Even a perfectly competent Ryzen 5600 is a problem for the B580 in some of those games. Hell even the 5700X3D + B580 gets massively choked up in Spiderman. The 4060 loses almost nothing going from 9800X3D to 5700X3D. I'd really like to know what's causing this since it doesn't seem to be ReBAR.
 
I pretty much agree but it is what it is.

It's an unfortunate position Intel is in, where the users who would be most likely to want a GPU upgrade can't take full advantage of their hardware. Even a perfectly competent Ryzen 5600 is a problem for the B580 in some of those games. Hell even the 5700X3D gets massively choked up in Spiderman.
Yeah, at 1080p no AM4 CPU is enough for the B580, which is insane. No wonder Intel promotes it as a 1440p card. I wish Steve could test with a budget Intel CPU that has some E-Cores.
 
I pretty much agree but it is what it is.

It's an unfortunate position Intel is in, where the users who would be most likely to want a GPU upgrade can't take full advantage of their hardware. Even a perfectly competent Ryzen 5600 is a problem for the B580 in some of those games. Hell even the 5700X3D + B580 gets massively choked up in Spiderman. The 4060 loses almost nothing going from 9800X3D to 5700X3D. I'd really like to know what's causing this since it doesn't seem to be ReBAR.
Inefficient driver implementation. Which honestly is nothing new for Arc or Intel iGPUs in general. Also worth remembering that the card is just PCIe x8, which means that it will suffer some performance loss on any platform with PCIe 3.0 or older. But this is also true for competing cards from both AMD and Nvidia.
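(Back-of-the-envelope math on the x8 point, using the usual nominal per-lane rates after encoding overhead; real-world throughput is a bit lower still.)

```python
# Nominal per-direction PCIe bandwidth per lane (GB/s), after 128b/130b encoding.
GB_PER_LANE = {"3.0": 0.985, "4.0": 1.969}

def link_bandwidth(gen: str, lanes: int) -> float:
    # Total one-way link bandwidth for a given generation and lane count.
    return GB_PER_LANE[gen] * lanes

for gen in ("3.0", "4.0"):
    print(f"PCIe {gen} x8 ~ {link_bandwidth(gen, 8):.1f} GB/s")
# PCIe 3.0 x8 ~ 7.9 GB/s vs PCIe 4.0 x8 ~ 15.8 GB/s: an x8 card on an older
# board gets half the bus headroom it has on a PCIe 4.0 board, and half of
# what a full x16 Gen3 card would have had.
```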
 
Inefficient driver implementation. Which honestly is nothing new for Arc or Intel iGPUs in general. Also worth remembering that the card is just PCIe x8, which means that it will suffer some performance loss on any platform with PCIe 3.0 or older. But this is also true for competing cards from both AMD and Nvidia.
To bring it back to the original topic, I think this is a pretty good illustration of the value of HUB’s benchmarking. The original testing was with ‘unsupported’ CPUs (which I’d maintain are still fairly modern, especially for budget gamers), but further investigation shows even supported configurations don’t perform properly with Intel’s hardware.

I’ll disagree with HUB on things (I believe Steve said something like DLSS adds latency, which is either poorly worded if he meant frame gen or outright incorrect if he meant upscaling), but overall I’m glad we have reviewers who still investigate things like this.
 
To bring it back to the original topic, I think this is a pretty good illustration of the value of HUB’s benchmarking. The original testing was with ‘unsupported’ CPUs (which I’d maintain are still fairly modern, especially for budget gamers), but further investigation shows even supported configurations don’t perform properly with Intel’s hardware.

I’ll disagree with HUB on things (I believe Steve said something like DLSS adds latency, which is either poorly worded if he meant frame gen or outright incorrect if he meant upscaling), but overall I’m glad we have reviewers who still investigate things like this.
Mr. Unboxed could've completely avoided the hoopla if he'd done the original testing on a 3600. It still takes a huge hit and it is supported. Then to not mention that the 2600 is unsupported makes him look like an idiot even though he was on to a real problem.
 
Mr. Unboxed could've completely avoided the hoopla if he'd done the original testing on a 3600. It still takes a huge hit and it is supported. Then to not mention that the 2600 is unsupported makes him look like an idiot even though he was on to a real problem.

It's one of those odd areas of the PC hardware review space where I find they slip outside of professionalism. A lot of these people started without formal training and kind of grew into having standards and practices. One basic thing, with any company, is that if you ask for support they'll check to make sure you're operating a supported configuration. If they say it requires a 10th gen Intel CPU and you try to use a 9th gen, then you're outside of support. It may seem stupid, but the recommendations are explicit for a reason. It's just the case that they need to bound the parameters to properly test. They can't test everything.


The requirements are explicit. So it just made a lot more sense to start with Intel 10th gen and Ryzen 3000, because they are supported configurations. They show the problem. That should have been the first video, and honestly all of the review sites kind of missed the mark, because none of them caught this in their initial reviews, where they ended up praising the product. It's kind of egg on their face as well, whether it's HUB or GN or Hardware Canucks. I get the methodology of testing every GPU with the top-end CPU to ensure you're not CPU limited, but then you can miss some scaling problems like this. They even kind of went through this when there was the Nvidia driver overhead stuff that suggested AMD drivers worked better on lower-end CPUs. So it's not like it's something they haven't seen before.
 
It's one of those odd areas of the PC hardware review space where I find they slip outside of professionalism. A lot of these people started without formal training and kind of grew into having standards and practices. One basic thing, with any company, is that if you ask for support they'll check to make sure you're operating a supported configuration. If they say it requires a 10th gen Intel CPU and you try to use a 9th gen, then you're outside of support. It may seem stupid, but the recommendations are explicit for a reason. It's just the case that they need to bound the parameters to properly test. They can't test everything.


The requirements are explicit. So it just made a lot more sense to start with Intel 10th gen and Ryzen 3000, because they are supported configurations. They show the problem. That should have been the first video, and honestly all of the review sites kind of missed the mark, because none of them caught this in their initial reviews, where they ended up praising the product. It's kind of egg on their face as well, whether it's HUB or GN or Hardware Canucks. I get the methodology of testing every GPU with the top-end CPU to ensure you're not CPU limited, but then you can miss some scaling problems like this. They even kind of went through this when there was the Nvidia driver overhead stuff that suggested AMD drivers worked better on lower-end CPUs. So it's not like it's something they haven't seen before.
It does change the calculus for a prospective buyer. The card doesn't perform like the reviews suggest it does unless you have a top-of-the-line CPU, in which case you aren't interested in a $250 GPU.

IDK how they should proceed going forward. Maybe continue testing on overpowered CPUs and do a sanity check every now and then on a more mainstream CPU, especially for budget card testing. Seems they've already identified the game (Spiderman) that can be used to check for CPU-bound performance regressions. Too bad they have to worry about this at all, since it's more of their time potentially wasted.
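A toy model of why the overpowered-CPU methodology hides this (my own illustration with made-up numbers, not anyone's actual test data): a frame roughly costs max(CPU work, GPU work), and driver overhead sits on the CPU side, so it only shows up once the CPU becomes the bottleneck.

```python
# Toy frame-time model: frame time ~= max(game CPU ms + driver ms, GPU ms).
# All numbers are invented purely to illustrate the masking effect.
def fps(cpu_ms: float, driver_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms + driver_ms, gpu_ms)

GPU_MS = 10.0  # GPU-limited cost per frame -> ~100fps ceiling

for cpu_name, cpu_ms in [("high-end CPU", 4.0), ("budget CPU", 9.0)]:
    for drv_name, drv_ms in [("lean driver", 1.0), ("heavy driver", 6.0)]:
        print(f"{cpu_name} + {drv_name}: {fps(cpu_ms, drv_ms, GPU_MS):.0f} fps")
# High-end CPU: ~100fps with either driver, so reviews see no difference.
# Budget CPU: ~100fps with the lean driver, ~67fps with the heavy one.
```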
 
@homerdog If you’re in the business of testing and verification, it’s not time wasted. It’s literally the job. It’s particularly tough in the PC space because there are so many configurations and you have to infer general performance by benchmarking a variety of software. There’s tons of ground to cover and you have to be selective. CPU scaling is a reasonable thing to test. The fact that the whole industry of reviewers missed it is somewhat damning.
 
@homerdog If you’re in the business of testing and verification, it’s not time wasted. It’s literally the job. It’s particularly tough in the PC space because there are so many configurations and you have to infer general performance by benchmarking a variety of software. There’s tons of ground to cover and you have to be selective. CPU scaling is a reasonable thing to test. The fact that the whole industry of reviewers missed it is somewhat damning.
For sure. I just mean this wasn't really necessary before, since NV/AMD didn't have such an issue with CPU scaling, or if they did it wasn't nearly this bad. So while not time wasted, it is more time. When I see those graphs my mind immediately goes to how grueling it must be to sit there for hours on end running benchmarks.
 
Remaining in that state will drastically impact its industry adoption.
I don't believe so. People were arguing ray tracing would be slowly adopted, but in practice that was never the case: it was rapidly adopted by most engines, most AAA titles, and dozens of console titles, and we have now moved on to path tracing, with the list of path-traced titles growing rapidly. Even Mark Cerny is surprised by this accelerated pace of adoption.

The premise of my argument is that until average GPUs can run full RT at average settings, 1080p60 for example
Again, that's not practical. Average GPUs can't even run rasterized games at max settings at 1080p60 (native). You need upscaling and frame generation to do that, especially with UE5 games. Why do you expect ray tracing to act differently?

I never specified that the 4060 needs to be the one that allows average users to run full RT
A 4070 can do path tracing at 1080p60 with upscaling and frame generation, if that's your definition of an average GPU. Medium ray tracing works very, very well on the 4070, enough for 1440p and high fps.

At any rate, I am gonna reiterate again: rasterization has reached its limits. Developers and console makers realise that now, and they see ray tracing as the new avenue by which they can unlock new visuals and performance levels. It's why ray tracing will keep spreading more and more.
 
Mr. Unboxed could've completely avoided the hoopla if he'd done the original testing on a 3600. It still takes a huge hit and it is supported. Then to not mention that the 2600 is unsupported makes him look like an idiot even though he was on to a real problem.
Or maybe some people on this forum could’ve actually addressed the topic or issue raised instead of trying to meme and dunk on HUB. :nope:
 
Mr. Unboxed could've completely avoided the hoopla if he'd done the original testing on a 3600. It still takes a huge hit and it is supported. Then to not mention that the 2600 is unsupported makes him look like an idiot even though he was on to a real problem.
I think there is value in testing 'unsupported' configurations. Windows 11 has requirements that are basically bullshit, used to push their new encryption paradigm, and they are easily bypassed, and we found this out via testing. If a game has a 3600 as its minimum CPU, that doesn't mean there's zero value in testing it with something older, as sometimes we are surprised by how conservative system requirements are.

Hard to address anything when the testing presented is done with painfully obvious issues. Other than that - Arc has bad drivers. Shocking news!
This is unscientific and ironic because a sentence earlier you are accusing him of having methodology issues lol. We have no idea if this is a driver issue, and this seems separate from Arc's other driver issues (mostly related to how it implements old DX9 stuff with DXVK). There's a good chance this is an architectural problem and that CPUs which don't properly support ReBAR will never get good performance on Arc (on AMD, I think everything pre-Zen 3 doesn't fully support ReBAR). But we don't know, and won't know, unless Intel reveals this or fixes it with a driver update.
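(If you want to sanity-check the ReBAR side on your own system, one option on Linux is to read the GPU's BAR sizes straight from sysfs; if one of the memory regions is roughly VRAM-sized rather than 256MB, resizable BAR is actually in effect. This is just a sketch; the PCI address below is a hypothetical placeholder, grab yours from lspci.)

```python
# Linux-only sketch: list the sizes of a PCI device's BARs from sysfs.
# A VRAM-sized memory BAR (e.g. ~12GB on a B580) means resizable BAR is active;
# a 256MB one means it is not. The address below is an example placeholder.
from pathlib import Path

PCI_ADDR = "0000:03:00.0"  # hypothetical slot; check `lspci` for your GPU

def bar_sizes(pci_addr: str) -> list[int]:
    # Each line of the sysfs 'resource' file is "start end flags" in hex;
    # a size of zero means the region is unused.
    sizes = []
    resource = Path(f"/sys/bus/pci/devices/{pci_addr}/resource").read_text()
    for line in resource.splitlines():
        start, end, _flags = (int(field, 16) for field in line.split())
        sizes.append(end - start + 1 if end else 0)
    return sizes

if __name__ == "__main__":
    for i, size in enumerate(bar_sizes(PCI_ADDR)):
        if size:
            print(f"region {i}: {size / 2**20:.0f} MiB")
```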

Like, would you rather he just not have made this series of videos? Yeah, we knew Arc historically has bad drivers, but why not investigate this specific set of driver/architecture issues?
 
This is unscientific and ironic because a sentence earlier you are accusing him of having methodology issues lol.
No, I didn't. Please don't invent things I've never said.

We have no idea if this is a driver issue, and this seems separate from Arc's other driver issues (mostly related to how it implements old DX9 stuff with DXVK).
Higher CPU load from the driver is a driver issue. What we don't know is if this issue stems from a badly written driver or is tied to some h/w issue which the driver tries to solve this way. And no amount of testing games will give you an answer to that. So on the contrary - the idea that any testing is interesting is completely "unscientific".

Like would you have rather he just not make this series of videos?
I'm not interested in Arc so I honestly don't care. I'm also not interested in anything slower than a Zen 5 X3D CPU-wise, so you can guess what I think about these benchmarks.
 
No, I didn't. Please don't invent things I've never said.
Literally in the comment I quoted:
Hard to address anything when the testing presented is done with painfully obvious issues.
Is this not claiming he has bad methodology?

Higher CPU load from the driver is a driver issue. What we don't know is if this issue stems from a badly written driver or is tied to some h/w issue which the driver tries to solve this way.
"It's a driver problem, unless its actually caused by a hardware issue thats masked by bad drivers"... so good chance it has nothing to do with drivers and can't simply be fixed by an update?

And no amount of testing games will give you an answer to that. So on the contrary - the idea that any testing is interesting is completely "unscientific".
So because it can't be fully diagnosed via benchmarks, the benchmarks are unscientific and should be ignored?

I'm not interested in Arc so I honestly don't care. I'm also not interested in anything slower than a Zen 5 X3D CPU-wise, so you can guess what I think about these benchmarks.
So you don't care about hardware and benchmarking; you care about high-end hardware and benchmarking. Do you not see the value at all for people who maybe don't want to spend $2500 on their gaming PC?
 