Value of Hardware Unboxed benchmarking

Native resolution rendering is an archaic concept and isn’t even an accurate description of how games work today. Lots of stuff already happens at lower than native resolution (shadow maps, reflections, GI).

With 8K on the horizon, upscaling will only become more prevalent. UE5 performance targets assume upscaling is enabled. DRS, VRS, DLSS, FSR, etc. are only going to get better as engines are built from the ground up with those features in mind.

Any reviewer not named Digital Foundry will have a tough time, as different tiers of hardware will achieve the same output resolution and performance and the difference will be how much DRS kicks in. I fully expect this to become the norm on PC in the future, just like it is today on consoles.
I don't agree that native resolution is an archaic concept and I don't think it'll ever be. All of these lower resolution effects are rendered as such because the hardware can't cope. When GI is low resolution it's very visible, and it's the same thing with shadow maps and reflections. To some it's "good enough" and saves on performance.

As far as 8K is concerned, it only has a market penetration of 1.26% in the USA, and for good reason. You need a pretty big display to appreciate 8K and I don't think it'll be relevant outside of large display sizes. In the monitor space, I don't think it'll be relevant at all. As it stands, 4K monitors only represent 2.73% adoption in the Steam hardware survey. Furthermore, broadcasters haven't even caught up with 4K content yet, so 8K adoption will be extremely slow. By the time 8K starts being relevant, we should have at least 5 more GPU architectures from Nvidia, assuming a 2-year cadence.
 
I get you, but as far as I'm concerned, it's a non-issue. Look at the Reddit link below. On average, reviewers say the 7900 XTX is faster than the 4080 and HUB's data falls in line with that...

Does it though? The Reddit link below is good, but it's essentially showing the 4080 is 3% slower than the 7900 XTX at pure raster while being 25% faster in RT. However, HUB's combined RT/raster benchmarks at the start of this thread have the 7900 XTX as 1% faster. That doesn't add up unless the games (and game settings) were pretty carefully selected to produce such a result.
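To put a rough number on that, here's a back-of-the-envelope sketch (a simple arithmetic-mean model using the deltas above, not HUB's actual methodology or a proper geometric mean):

```python
# If the 4080 is ~3% behind in pure raster but ~25% ahead with RT on,
# what share of RT-weighted results would a combined average need for
# the 7900 XTX to still come out ~1% ahead overall?
raster_delta = -0.03   # 4080 vs 7900 XTX, raster only (Reddit meta-review)
rt_delta     = +0.25   # 4080 vs 7900 XTX, RT enabled
target       = -0.01   # combined result with the 7900 XTX ~1% ahead

# Solve target = f * rt_delta + (1 - f) * raster_delta for f
f = (target - raster_delta) / (rt_delta - raster_delta)
print(f"RT-weighted share of the suite: {f:.1%}")   # ~7%
```

On that simplified model, only around 7% of the combined results could be RT-weighted for the 7900 XTX to stay ahead overall, which is why the selection looks deliberate to me.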


Like I said in an earlier post, DLSS- and FSR-supported games represent less than 1% of games on Steam. Why should any reviewer spend so much effort on the varying technologies when it's basically statistically irrelevant?

I don't agree with this logic. Counting games that were released before DLSS itself existed makes no sense. Nor does counting the thousands of indie games that have no use for it given their incredibly low performance requirements. What we need to be counting here is how many modern, performance-intensive games support upscaling, and more importantly, the trend over time. I haven't plotted that trend myself but it certainly seems like more games than ever these days are supporting upscaling, particularly if you include their own built-in upscaling techniques as well as DLSS/FSR. In fact I'd say some form of upscaling is supported in the vast majority of modern AA/AAA releases.
 
I don't agree that native resolution is an archaic concept and I don't think it'll ever be. All of these lower resolution effects are rendered as such because the hardware can't cope.
"Coping" isn't the right way to think about it. It's a tradeoff. You have some available hardware horsepower. Question is how to make best use of that horsepower. You can either burn it on number of samples or quality of each sample. In the past you had to max out sample rate to match that of the display device (with digital i.e. LCD/OLED displays), otherwise the blurring/aliasing was unacceptable. Reconstruction gives us a much gentler falloff in quality as sample rate drops, allowing for higher quality samples (and effects such as RT). The availability of this tradeoff is almost a strict win.

You are free to prefer being on the extreme edge of this Pareto frontier (i.e., native samples at all cost). But you need to stop using expletives to advance your position that others are clearly not accepting. Especially in a thread that's more about reconstruction A-v-B (actually it's even more narrow, it's about how a specific reviewer evaluates those options) rather than one that argues the benefits of reconstruction itself.
 
Does it though? The Reddit link below is good, but it's essentially showing the 4080 is 3% slower than the 7900 XTX at pure raster while being 25% faster in RT. However, HUB's combined RT/raster benchmarks at the start of this thread have the 7900 XTX as 1% faster. That doesn't add up unless the games (and game settings) were pretty carefully selected to produce such a result.
Well considering that the vast majority of games don't support RT or upscaling, is it really suspicious? Below is a link to the most-played games/apps on Steam. In the top twenty games, I think none of them support ray tracing? Maybe Warzone, but on Nvidia's website it says DLSS, not RT, so...?


I don't agree with this logic. Counting games that were released before DLSS itself existed makes no sense. Nor does counting the thousands of indie games that have no use for it given their incredibly low performance requirements. What we need to be counting here is how many modern, performance-intensive games support upscaling, and more importantly, the trend over time. I haven't plotted that trend myself but it certainly seems like more games than ever these days are supporting upscaling, particularly if you include their own built-in upscaling techniques as well as DLSS/FSR. In fact I'd say some form of upscaling is supported in the vast majority of modern AA/AAA releases.
It doesn't really matter if games existed before DLSS because DLSS was retrofitted into old games. As long as the game supports TAA, I think it can be retrofitted. Secondly, we must count the old games because people are still playing them and they're relevant. What's the point of arbitrarily discounting certain games? People buy their GPUs to play all sorts of games, and many play titles that don't support RT with those GPUs.
 
Different GPUs can have different performance bottlenecks.
Which basically shows that Steve "proving" his point by testing just one GeForce GPU isn't how it works either. This last video of his is completely useless for what he wanted to prove, skewed by weird results (which contradict their own earlier benchmarks) and test selection (showing FSR1 in comparison to DLSS is just plain misleading).

Steve's are different 50% of the time? Now I know you're lying.
To me it looks very much like you don't know anything...

My guy, you're bringing 1-2 fps "differences" as the pillar of your argument?
When said 1-2 fps results in a ~10% difference and has the same effect on the "average" performance graphs Steve likes so much - yes, I do.
Also I'm not "your guy".

You're being intentionally obtuse.
No, I'm not. It's you who asked for proof, got it, and now it suddenly doesn't matter to you and I am "being obtuse".

@BRiT why isn't this person banned yet? How many personal attacks on other members does he need to post to be banned already?
 
"Coping" isn't the right way to think about it. It's a tradeoff. You have some available hardware horsepower. Question is how to make best use of that horsepower. You can either burn it on number of samples or quality of each sample. In the past you had to max out sample rate to match that of the display device (with digital i.e. LCD/OLED displays), otherwise the blurring/aliasing was unacceptable. Reconstruction gives us a much gentler falloff in quality as sample rate drops, allowing for higher quality samples (and effects such as RT). The availability of this tradeoff is almost a strict win.

You are free to prefer being on the extreme edge of this Pareto frontier (i.e., native samples at all cost). But you need to stop using expletives to advance your position that others are clearly not accepting. Especially in a thread that's more about reconstruction A-v-B (actually it's even more narrow, it's about how a specific reviewer evaluates those options) rather than one that argues the benefits of reconstruction itself.

Yup, a tradeoff, and while it's certainly extreme to say all reconstruction techniques (DLSS, FSR, etc.) are shite in all situations, it's equally extreme to say that all reconstruction techniques are superior to "native" resolution (the understanding being that not all parts of the rendering pipeline are happening at native resolution).

For myself, in general, I still find both DLSS and FSR to be unsuitable for general gaming as they typically lack the same image stability as "native". Where they tend to have a more favorable impact is when they are better than the reconstruction technique a particular game might otherwise be using. It's still extremely rare for me to find DLSS or FSR to be a genuine improvement during actual gameplay (versus a static screenshot or static camera). That's when instability becomes glaringly obvious to me and turns into an unpleasant distraction while playing.

For others, the performance of DLSS/FSR is well worth it since they aren't significantly impacted by the areas in which reconstruction still has a ways to go.

Obviously DLSS and FSR and other such techniques continue to improve over time, so I'm constantly re-evaluating whether they provide a "good enough" experience for me to use them for a performance uplift or not.

TL;DR - too many people go to one extreme or the other ("DLSS/FSR is shite" or "DLSS/FSR is always better than native"), which will inevitably lead to a butting of heads. People need to understand that not everyone perceives things in the same way, and what one person finds visually annoying another may not.

Regards,
SB
 
For myself, in general, I still find both DLSS and FSR to be unsuitable for general gaming as they typically lack the same image stability as "native".

With my recent 'hack' that I've found for my system I'm pretty much at a level where DLSS is better than native in nearly every case.

Although I haven't tried every game.
 
"Coping" isn't the right way to think about it. It's a tradeoff. You have some available hardware horsepower. Question is how to make best use of that horsepower. You can either burn it on number of samples or quality of each sample. In the past you had to max out sample rate to match that of the display device (with digital i.e. LCD/OLED displays), otherwise the blurring/aliasing was unacceptable. Reconstruction gives us a much gentler falloff in quality as sample rate drops, allowing for higher quality samples (and effects such as RT). The availability of this tradeoff is almost a strict win.

You are free to prefer being on the extreme edge of this Pareto frontier (i.e., native samples at all cost). But you need to stop using expletives to advance your position that others are clearly not accepting. Especially in a thread that's more about reconstruction A-v-B (actually it's even more narrow, it's about how a specific reviewer evaluates those options) rather than one that argues the benefits of reconstruction itself.
Trade-offs exist because we're resource-limited, but as the hardware gets stronger, the trade-offs we once had to make fade into irrelevance and we're faced with new trade-offs. The issue is that what one deems an acceptable trade-off varies from person to person.

With regard to my aggressiveness towards a couple of members on the forum: these guys have been taking cheap shots at certain reviewers because they don't like their approach to reviewing, and they've been doing it unchecked for a long while. Frankly, it's getting annoying. It's easy to sit on the sidelines and take pot shots at others because of some perceived bias about a technology that is statistically irrelevant. Nobody is perfect and people can certainly make mistakes; however, an honest effort must be made to understand someone's position if you truly want to have discourse. They want to be given the benefit of the doubt, but they don't extend it to the reviewers they criticize. If I felt their arguments had any merit, I would give them proper consideration. The problem is that I don't think their comments have any merit. Comparing DLSS Balanced to FSR Quality, and other ridiculous propositions cannot be taken seriously.
 
Trade-offs exist because we're resource-limited, but as the hardware gets stronger, the trade-offs we once had to make fade into irrelevance and we're faced with new trade-offs. The issue is that what one deems an acceptable trade-off varies from person to person.

It's unlikely that trade-offs will ever go away. There are and have always been trade-offs. There are still many rendering techniques that could be used to improve the visuals of real-time 3D rendered games but aren't used because they are too expensive. When hardware performance improves, we see some of those techniques become usable as their performance impact becomes more tolerable.

This is how it has always been with 3D rendering and, at least for the foreseeable future, how it will always be.

And there will always be trade-off rendering techniques. In that sense, DLSS/FSR is similar to the progression of AA in games. FSAA is too expensive but looks great; some people still use it if it's available. NV's Quincunx AA was inexpensive but looked like shite, so nobody uses it anymore :p. AMD's MSAA was performant and looked great; it took NV a few years to match and eventually exceed AMD in that area. Now MSAA typically isn't even used because it's not very effective with how games are rendered nowadays.

Regards,
SB
 
Thing is, developers are going to start making games with reconstruction as the default. Or basically - they'll expect gamers to be using it, with native rendering only really for those with either lower resolutions or extreme setups that just have an abundance of GPU overhead.

I also don't understand this idea that reconstruction needs to be 'perfect' to be worthwhile. It's such a weird mentality, given the general prevalence of 'performance is king' thinking, and the idea that we've generally always accepted slight compromises in visuals in order to push performance with all these other graphics settings, yet somehow the same thinking doesn't get applied here, even when there are significant performance gains to be had in most cases.

As for the attacks on HUB, I find that in 95%+ of cases, those accusing HUB of being biased are themselves the biased ones. HUB ain't perfect (no reviewer/benchmarker is), but they don't deserve even a tiny fraction of the hate they've been getting from the completely embarrassing PC gaming community.
 
With regard to my aggressiveness towards a couple of members on the forum: these guys have been taking cheap shots at certain reviewers because they don't like their approach to reviewing, and they've been doing it unchecked for a long while.
No. It's because you like the results which this reviewer shows and want them to continue as is.
Nothing else can be the reason b/c why would you care about what other people say about some reviewer?

They want to be given the benefit of the doubt, but they don't extend it to the reviewers they criticize.
The reviewer who has been at the center of so many controversies that I can't even remember the count?
The reviewer who was notorious for refusing to use DLSS 1 because it had bad IQ but now has no issue using FSR 1 to prove his point?
The reviewer who spent a couple of years saying that ray tracing doesn't matter and that's why they won't benchmark it, only to quietly start benchmarking RT now that, wow, it does matter?
That reviewer?
Also look at the thread name.

Comparing DLSS Balanced to FSR Quality, and other ridiculous propositions cannot be taken seriously.
Of course they can. Because DLSS Balanced and even Performance often look better than FSR2 Quality.
The only ridiculous thing here is your posts telling people not to believe their eyes because you think it's ridiculous.

Done talking to you.
 
I also don't understand this idea that reconstruction needs to be 'perfect' to be worthwhile. It's such a weird mentality, given the general prevalence of 'performance is king' thinking, and the idea that we've generally always accepted slight compromises in visuals in order to push performance with all these other graphics settings, yet somehow the same thinking doesn't get applied here, even when there are significant performance gains to be had in most cases.

This is because reconstruction isn't the only way to gain performance. I will happily lower settings for performance before I use DLSS/FSR in the vast majority of games that implement DLSS/FSR.

Thus far, the only time DLSS/FSR has seemed like a viable alternative to me is when the developer has implemented an even worse form of reconstruction. Not all developer-made reconstruction is worse than DLSS/FSR in my view, as I value image stability more than the increased performance from the reconstruction used.

Regards,
SB
 
No. It's because you like the results which this reviewer shows and want them to continue as is.
Nothing else can be the reason b/c why would you care about what other people say about some reviewer?


The reviewer who has been at the center of so many controversies that I can't even remember the count?
The reviewer who was notorious for refusing to use DLSS 1 because it had bad IQ but now has no issue using FSR 1 to prove his point?
The reviewer who spent a couple of years saying that ray tracing doesn't matter and that's why they won't benchmark it, only to quietly start benchmarking RT now that, wow, it does matter?
That reviewer?
Also look at the thread name.


Of course they can. Because DLSS Balanced and even Performance often look better than FSR2 Quality.
The only ridiculous thing here is your posts telling people not to believe their eyes because you think it's ridiculous.

Done talking to you.
I like the results and that's why I bought an AMD GPU? Oh wait, I bought a 4090... That doesn't fit your narrative, does it? Anyway, I ignored your previous post and I'm not going to respond to the rest of this post because I don't care about your opinion. It has no merit.
 
For myself, in general, I still find both DLSS and FSR to be unsuitable for general gaming as they typically lack the same image stability as "native".

In comparison to native 1080p or 1440p, UHD DLSS looks better and often runs better.

DF
"RE4 Remake image quality on PS5 or PC is a reminder that pixel counts are generally not informative anymore. Native 1440p w/ TAA on PC runs and looks a heck of a lot worse than modded 4K DLSS in performance mode, even though 4K DLSS performance mode has far fewer internal pixels."

 
I find the position illogical.

DF
"RE4 Remake image quality on PS5 or PC is a reminder that pixel counts are generally not informative anymore. Native 1440p w/ TAA on PC runs and looks a heck of a lot worse than modded 4K DLSS in performance mode, even though 4K DLSS performance mode has far fewer internal pixels."

2 things. Firstly, shouldn't we be looking into why their TAA implementation is so poor? RE4 Remake has a whole host of issues on both consoles and PC, so their TAA implementation could just be broken. Secondly, what looks better is subjective. In that example, DLSS places an emphasis on the two posts. If you're walking in a dark, dimly lit environment, those posts would blend into the environment, but in the DLSS image they appear to be in the foreground. Furthermore, DLSS brightens the scene and I don't know that this is correct. This just reminds me of the artistic intent vs. what looks better debate that happens in the TV space, especially when it comes to the EOTF curve and whatnot. To me the DLSS image doesn't look better, it looks sharper/brighter, and I don't think people should conflate the two. Sharper is tangible and measurable; better is subjective.
 
In comparison to native 1080p or 1440p, UHD DLSS looks better and often runs better.

DF
"RE4 Remake image quality on PS5 or PC is a reminder that pixel counts are generally not informative anymore. Native 1440p w/ TAA on PC runs and looks a heck of a lot worse than modded 4K DLSS in performance mode, even though 4K DLSS performance mode has far fewer internal pixels."

Hm, isn't that incorrect? DLSS actually has more information because it uses temporal information and a trained network.
For example, the image quality of 4xOGSSAA and 4xSGSSAA looks different even when the same number of additional samples is used.
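Sample placement matters as much as sample count; here's a quick sketch with illustrative offsets (my own example positions in normalized pixel coordinates, not any vendor's exact pattern):

```python
# 4x ordered grid vs 4x sparse/rotated grid, same sample count per pixel
ogss_4x = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
sgss_4x = [(0.375, 0.125), (0.875, 0.375), (0.625, 0.875), (0.125, 0.625)]

for name, pattern in [("4xOGSSAA", ogss_4x), ("4xSGSSAA", sgss_4x)]:
    xs = {x for x, _ in pattern}
    ys = {y for _, y in pattern}
    print(f"{name}: {len(pattern)} samples, "
          f"{len(xs)} distinct x positions, {len(ys)} distinct y positions")
# The sparse grid covers 4 distinct positions per axis vs 2 for the ordered
# grid, so near-horizontal/vertical edges get finer gradations from the same
# number of samples.
```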
 
Hm, isn't that incorrect? DLSS actually has more information because it uses temporal information and a trained network.
For example, the image quality of 4xOGSSAA and 4xSGSSAA looks different even when the same number of additional samples is used.

It's correct, DLSS has fewer internal pixels per frame to work with.

Especially Performance mode, which is 720p internally at 1440p output.
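For reference, here's what the commonly quoted per-axis scale factors work out to per preset (a quick sketch using the usual published defaults; individual games can override them):

```python
# Approximate default per-axis render scales for the DLSS presets
presets = {
    "Quality":           2 / 3,
    "Balanced":          0.58,
    "Performance":       1 / 2,
    "Ultra Performance": 1 / 3,
}

for label, out_w, out_h in [("1440p", 2560, 1440), ("4K", 3840, 2160)]:
    for name, scale in presets.items():
        w, h = round(out_w * scale), round(out_h * scale)
        share = (w * h) / (out_w * out_h)
        print(f"{label} {name}: {w}x{h} internal ({share:.0%} of output pixels)")
# e.g. 1440p Performance -> 1280x720 and 4K Performance -> 1920x1080,
# i.e. only ~25% of the output pixel count in both cases.
```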
 
Well considering that the vast majority of games don't support RT or upscaling, is it really suspicious?

It is if you look at how many of the games in HUB's own test suite support RT. But then they choose to benchmark with it off... Or twice, once on, once off, even though both GPUs are more than capable of running with it on.

Below is a link to the most-played games/apps on Steam. In the top twenty games, I think none of them support ray tracing? Maybe Warzone, but on Nvidia's website it says DLSS, not RT, so...?


I don't see why most of those games would be relevant to modern GPU benchmarking though. Most of them are online multiplayer titles that will run very well on 6+ year old GPUs.

If you're testing the performance of a modern GPU, you want to use modern games that will at least somewhat stress it. And a good proportion of those games these days support RT.

It doesn't really matter if games existed before DLSS because DLSS was retrofitted into old games. As long as the game supports TAA, I think it can be retrofitted. Secondly, we must count the old games because people are still playing them and they're relevant. What's the point of arbitrarily discounting certain games? People buy their GPUs to play all sorts of games, and many play titles that don't support RT with those GPUs.

Because old games don't need upscaling on modern GPUs? I don't think it's at all reasonable to expect developers to go back and retrofit years- or even decades-old games with DLSS when they already run perfectly well at native resolution on modern GPUs.

It's just senseless to say a graphical feature isn't relevant in a contemporary context by looking at how regularly it wasn't used before it was even invented. What matters is what proportion of games being released now use it.
 
It is if you look at how many of the games in HUB's own test suite support RT. But then they choose to benchmark with it off... Or twice, once on, once off, even though both GPUs are more than capable of running with it on.
Wait, so the issue is that you don't agree with their approach? Their results fall in line with other review outlets in both raster and ray tracing. In their reviews Nvidia is faster in ray tracing, and in the case of the 7900 XTX, it's faster than the 4080 in raster. To me, it doesn't appear suspicious. That being said, I'm not assuming a motive behind their actions.
I don't see why most of those games would be relevant to modern GPU benchmarking though. Most of them are online multiplayer titles that will run very well on 6+ year old GPUs.

If you're testing the performance of a modern GPU, you want to use modern games that will at least somewhat stress it. And a good proportion of those games these days support RT.
Why wouldn't it be relevant? Those are the games people are playing frequently. Not everyone is interested in one-and-done AAA games. Why should that be the metric by which GPUs are judged when it only accounts for a subset of games? People buy new GPUs and continue playing CS:GO or Apex or Warzone, etc. At least as far as regular consumers are concerned, they're buying GPUs to play games.
Because old games don't need upscaling on modern GPUs? I don't think it's at all reasonable to expect developers to go back and retrofit years- or even decades-old games with DLSS when they already run perfectly well at native resolution on modern GPUs.

It's just senseless to say a graphical feature isn't relevant in a contemporary context by looking at how regularly it wasn't used before it was even invented. What matters is what proportion of games being released now use it.
I'm not asking that they go do that, although Nvidia is surely going out of their way to assist in making it a reality. Old games don't need upscaling because today's GPUs are powerful enough to run them without it. The games that struggle on modern hardware today will soon be old games. Future GPUs will run them without needing upscaling because they're powerful enough to do so. Then upscaling will become irrelevant in those games. This is how technology advances and I see no reason why it should change. As far as I'm concerned, I cannot understand why people are clinging to these techniques, which are clearly here to bridge the gap as we transition from raster -> hybrid RT -> full path tracing. If not for this transition, I see absolutely no need for upscaling technologies like DLSS or FSR.
 