Value of Hardware Unboxed benchmarking *spawn

Is it a better option than the RX 6800 XT? Usually the fidelity settings heavily affect the overall price/performance result, which will vary from review to review.
6800XT is a good deal while supplies last. But with the introduction of Navi 32, Navi 21 has likely been out of production for a bit now.

I'm talking about properly new GPUs. Old GPUs being discounted to match the value of new GPUs has been a thing forever, cuz they otherwise wouldn't be able to sell.
 
They benched Hogwarts Legacy without ray tracing. Can you explain why somebody wouldn't play this game at 74 FPS on a 4070 but has no problem with Starfield running at 76 FPS on a 7800 XT?
Can you show where HUB said that 74fps is 'unplayable' in Hogwarts Legacy?

Cuz if not, it just seems like you're flailing.
 
Right, gamers aren't allowed to use non-Ultra settings or they get cast into hell.
Does this work with a 4060, too? I mean, this card costs only $299. Save $200 and just play on medium instead of ultra.

I looked at the RT benchmarks: Hogwarts was replaced with Jedi Survivor (yes, the game with useless ray tracing), Spider-Man was run with "RT High" instead of "Very High", and Cyberpunk was benched with the "RT Medium" preset instead of "RT Ultra". So not only did they count these games twice, they used reduced settings, too.

Can you show where HUB said that 74fps is 'unplayable' in Hogwarts Legacy?

Cuz if not, it just seems like you're flailing.
Why would they bench games twice if FPS don't matter?
 
Imagine Nvidia owning 80% of the market and Nvidia fans feeling persecuted... Imagine living that life. As someone who has a significant amount of money invested in Nvidia, I can confidently say that the current lineup offers objectively bad value for money. The play has never been to buy their GPUs; it's been to buy their shares. That kind of brand loyalty, the kind where people allow themselves to be fleeced, only means one thing: big profits...
 
Does this work with a 4060, too? I mean, this card costs only $299. Save $200 and just play on medium instead of ultra.
I mean that's why this card exists, is it not?

The claim that a card can't be considered good (or the "best") value because it doesn't hit some arbitrary performance metric would apply to both AMD and Nvidia. In fact, your own arguments appear to invalidate the 4060 Ti, which is doubly ironic considering the discussion earlier in this thread.
 
I mean that's why this card exists, is it not?

The claim that a card can't be considered good (or the "best") value because it doesn't hit some arbitrary performance metric would apply to both AMD and Nvidia. In fact, your own arguments appear to invalidate the 4060 Ti, which is doubly ironic considering the discussion earlier in this thread.
The 4060 Ti is just bad. That's the reason why I used the 4060.

At 1440p the 7800 XT is 73% faster and costs 66% more: https://www.techpowerup.com/review/amd-radeon-rx-7800-xt/32.html
Reduce one or two settings on a 4060 and the price/performance ratio will switch. Does this make the 4060 "the obvious choice"?
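
For what it's worth, a quick back-of-the-envelope sketch of the performance per dollar implied by those figures (the prices are assumptions: $299 for the 4060 and roughly 66% more, about $499, for the 7800 XT):

```python
# Rough perf/$ comparison using the quoted figures; prices are assumed
# ($299 for the 4060, ~66% more for the 7800 XT), not measured data.
perf_4060, price_4060 = 1.00, 299            # 4060 as the baseline
perf_7800, price_7800 = 1.73, 299 * 1.66     # "73% faster, 66% more expensive"

ppd_4060 = perf_4060 / price_4060
ppd_7800 = perf_7800 / price_7800

print(f"4060:    {ppd_4060:.5f} relative perf per $")
print(f"7800 XT: {ppd_7800:.5f} relative perf per $")
print(f"7800 XT perf/$ advantage: {ppd_7800 / ppd_4060 - 1:.1%}")  # roughly +4%
```

At stock settings the perf-per-dollar gap is only a few percent, which is why dropping a setting or two on the cheaper card can flip the ratio.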
 
The 4060 Ti is just bad. That's the reason why I used the 4060.

At 1440p the 7800 XT is 73% faster and costs 66% more: https://www.techpowerup.com/review/amd-radeon-rx-7800-xt/32.html
Reduce one or two settings on a 4060 and the price/performance ratio will switch. Does this make the 4060 "the obvious choice"?
Yes, if you're not going to benefit substantially from the extra performance (because lower quality settings look "good enough"), the 4060 is the obvious choice in this comparison.

That, again, is why the 4060 exists as an option. It seems like you're not considering that the worth of a given "quality level" is entirely subjective. Tim was simply giving his preference, given the better value (measured in performance/dollar) provided by the 7800 XT. He wasn't trying to speak for all users, and if the 7800 XT doesn't hit your particular quality/performance bar, that's perfectly fine.

I see a lot of time being dedicated to a handful of words taken out of context and no real attempt to understand the broader conclusions of the review.
 
Reduce one or two settings on a 4060 and the price/performance ratio will switch.
It's only a proper ratio when the same settings are used on both cards. Otherwise we are comparing apples to oranges. Lowering the settings on one card and then comparing it to the other to somehow change the performance ratio deliberately introduces bias into the comparison.

And then there are additional features, which are separate from the above.
As of right now, RT, upscaling, frame generation and so on are all considered additional features and are not the baseline for price/performance comparisons. They are generally tested separately with their own price/performance ratios, as they should.
Just like in the past where AA was tested separately (I still remember those days), and then tessellation was tested separately, it's now the turn for the likes of RT to be tested separately. The majority of games do not support RT yet, nor is the hardware actually up to snuff to the degree that it should be. Nobody questions testing with AA nowadays, and the same will ultimately happen with RT. But we are not there yet, and we will not be there until a ~$300 card can comfortably run RT at 1080p/60 without upscaling.

And in my opinion, frame generation and upscaling should ALWAYS be tested separately, because it is pretty much fake performance.
 
As of right now, RT, upscaling, frame generation and so on are all considered additional features and are not the baseline for price/performance comparisons. They are generally tested separately with their own price/performance ratios, as they should.
Just like in the past where AA was tested separately (I still remember those days), and then tessellation was tested separately, it's now the turn for the likes of RT to be tested separately. The majority of games do not support RT yet, nor is the hardware actually up to snuff to the degree that it should be.

The vast majority of games do support reconstruction, however. This is exactly why AMD's nonsense with DLSS exclusion gained such a backlash; it's expected now. It's generally regarded as a cock-up when reconstruction isn't in a modern game.

And in my opinion, frame generation and upscaling should ALWAYS be tested separately, because it is pretty much fake performance.

Frame generation? Sure, it's not widely supported enough, and it has more limited scenarios that can narrow its utility for the general public (starting frame rate, VRR display, etc.).

But calling upscaling 'fake performance' is absolutely ridiculous. TAA is only referred to as 'native' because that's what came before, but it's also using data from previous frames to impart the impression of a higher-resolution image. Games are filled to the brim with render targets that vary widely in base resolution. There's no such thing as a 'real' starting image; there are always numerous techniques involved to mitigate the limitations of rendering on the hardware of the time. It's just a question of which smoke-and-mirrors technique is most effective.

DLSS/XeSS is utilizing the transistor budget of a GPU to accelerate a graphical effect like any other - the fact that AMD didn't invest in this area like Intel/Nvidia did is a detriment to their architecture, just as skimping on VRAM and bus width can be a detriment to Nvidia's.

Yes, I still think we should have separate benchmarks for native vs. reconstruction, but those reconstruction results actually have to be taken into account for the final conclusion - and I don't mean simply a performance bar chart. There is little point in equating, say, FSR Quality in a game to DLSS Quality and calling them equal simply because they both give the same relative performance uplift. If FSR Quality looks decidedly worse than DLSS, then all you're doing is benchmarking the names each GPU manufacturer gives to its tech. That misses the entire point of reconstruction, which is to present an image that should be indistinguishable from (or even better than) the same scene rendered at a much higher base resolution. You're effectively just parroting marketing.

Like if one card delivers a superior visual result in its Balanced mode vs. another in its Quality mode, that needs to actually be mentioned and weighed into the final score. If the majority of games in your test suite have reconstruction, then as a reviewer it's on you to make the argument as to why you believe you should run these games at native. Maybe there are some artifacts with reconstruction, and it definitely happens in some games more than others, sure! But if you're heavily weighing, or exclusively weighing, your final scores on a card's native performance when virtually all the games in your test suite support reconstruction, then your methodology may not be accurately reflecting the way people actually play these games.
 
But if you're heavily weighing, or exclusively weighing, your final scores on a card's native performance when virtually all the games in your test suite support reconstruction, then your methodology may not be accurately reflecting the way people actually play these games.
Benchmarks were never intended to match how people ultimately play the games. Considering most people are running 1060s, 1650s or 3060s, we can assume that most of them are not running max settings in most games, yet benchmarks are pretty much always done at max settings. They are there to assess a card's power and performance. What other way is there to measure the viability of a card compared to others? The default falls on native performance to guarantee apples-to-apples comparisons. How else can you compare Intel and AMD GPUs to nVidia's? Any other approach might be unknowingly giving an edge to one over the other (although some of you would obviously love that). If all games support reconstruction, that should be looked at, but separately, at least until there is one standard for it that everyone can use.

Upscaling tech by default falls outside of standard benchmarks, because there is no standard upscaling tech (yet). Considering DLSS is proprietary, it never will be the standard one either. That will end up being FSR or XeSS, or some derivative of them.

And yes, upscaling is fake performance. It's what the consoles used when the hardware was not good enough to run natively. Consoles allegedly supported 1080p, but in reality rendered at 720p and upscaled to pretend to support 1080p. Graphics cards doing the same thing as the consoles is exactly what supports my argument that modern hardware is not up to snuff for rendering RT. If hardware was actually good enough for RT, DLSS probably wouldn't have been created in the first place, but that's another tangent.

Upscaled 4K is not native 4K, no matter how you slice it. If you argue that the scaling looks better than native 4K, it still doesn't make it native 4K. You can use the tech at native 4K, upscale it to, for example, 8K, and downsample it back to 4K to get even better image quality. Then you can say it is a feature that improves image quality at said resolution, and only then was it rendered at 4K and only then can the card's performance be assessed at that resolution. 4K DLSS is native 1440p (depending on the setting) with post-processing, and 4K FSR is native 1440p (depending on the setting) with post-processing.
Better post-processing is a feature that should not be ignored, but it is not a standard for benchmarking, especially if it's proprietary.
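
To put rough numbers on the "4K DLSS/FSR is internally 1440p" point, here's a minimal sketch; the per-axis scale factors are the commonly cited approximate values for DLSS 2 / FSR 2 quality modes and can vary by game and version:

```python
# Approximate per-axis render scale for common DLSS 2 / FSR 2 modes
# (assumed round numbers; exact factors differ per game/version).
SCALE = {
    "Quality": 1 / 1.5,
    "Balanced": 1 / 1.7,
    "Performance": 1 / 2.0,
    "Ultra Performance": 1 / 3.0,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Internal render resolution before the upscaler reconstructs the output."""
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

for mode in SCALE:
    print(f"{mode:17s} -> {internal_resolution(3840, 2160, mode)}")
# Quality           -> (2560, 1440)  i.e. "4K Quality" renders at ~1440p internally
# Balanced          -> (2259, 1271)
# Performance       -> (1920, 1080)
# Ultra Performance -> (1280, 720)
```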
 
Upscaled 4K is not native 4K, no matter how you slice it. If you argue that the scaling looks better than native 4K, it still doesn't make it native 4K. You can use the tech at native 4K, upscale it to, for example, 8K, and downsample it back to 4K to get even better image quality. Then you can say it is a feature that improves image quality at said resolution, and only then was it rendered at 4K and only then can the card's performance be assessed at that resolution.

Even if you did that silly hypothetical, it absolutely does not change the value proposition of a GPU with a more powerful upscaling hardware component. The GPU with the inferior upscaling will still require more power to deliver the equivalent visual result of the card that can use a lower scaling value to reconstruct it at 8K.

Also, a card performing 8K reconstruction is, in fact, rendering at that resolution - this is the precise difference between reconstruction and simple upscaling, despite the terms being used interchangeably now. That 8K reconstructed image has new detail added to it.

As I mentioned as well, even that 'native' 4K image is likely composed of many different render targets with varying levels of internal resolution.

4K DLSS is native 1440p (depending on the setting) with post-processing, and 4K FSR is native 1440p (depending on the setting) with post-processing.

And the end result can be quite different, that's the point. It's not actually 'like for like' if the upscaling algorithm on one card is delivering a better result due to it using the hardware resources allocated to that GPU for that express purpose of making the rendering more efficient wrt final output image quality.

Better post-processing is a feature that should not be ignored, but it is not a standard for benchmarking, especially if it's proprietary.

I never said it should become the 'standard' way of benchmarking, in the sense that it should be the only way. What I did say, however, is that it does need to factor into the final conclusions and overall value assessment in reviews. The numbers always have to be put into context, and it works both ways - you wouldn't simply refer to frame generation benchmarks either when comparing the performance.

You're not actually evaluating the reconstruction abilities of competing GPUs if you simply take the names given to each quality level and benchmark them against the equivalently named mode from the competitor. If they actually behave quite differently in delivering the final image they're designed to produce, that matters. It's their entire purpose. If it didn't, then AMD wouldn't even need FSR2; we could just use FSR1. Most reviewers at least, thankfully, recognize that would be ridiculous, as the quality levels are so drastically different despite AMD also naming the internal res settings of FSR1 the same.

As for the bizarre stipulation that you can't include proprietary technologies in reviews, if the actual implementation is widespread enough, then it being 'proprietary' is completely irrelevant when evaluating the value of hardware. We buy these GPUs to play games; if the games commonly support a feature of a card that can deliver a superior experience with the same base resolution input, it doesn't matter if it's open source or not. You are not making a political statement with what gaming GPU you buy.

We compare the overall value proposition of hardware utilizing proprietary technologies all the time; in fact, one of the reasons this generation of mid/low-end cards is so derided is the relatively weak value proposition they have when compared to the price of consoles.
 
Lol. Lowering resolution is fake performance :ROFLMAO:
You can :ROFLMAO: all you want. It is fake performance if you claim that it renders at a higher resolution when it's not. Just like frame generation produces fake frames, upscaling produces fake pixels. Simple as that.

Even if you did that silly hypothetical, it absolutely does not change the value proposition of a GPU with a more powerful upscaling hardware component. The GPU with the inferior upscaling will still require more power to deliver the equivalent visual result of the card that can use a lower scaling value to reconstruct it at 8K.
Yes, and that would be measurable.

Also, a card performing 8K reconstruction is, in fact, rendering at that resolution - this is the precise difference between reconstruction and simple upscaling, despite the terms being used interchangeably now. That 8K reconstructed image has new detail added to it.
Is it really rendering?
Remember interlacing? Filling in the missing lines was also considered reconstruction, but doing that was never actually considered rendering.

As I mentioned as well, even that 'native' 4K image is likely composed of many different render targets with varying levels of internal resolution.
That has always been the case; textures and effects can all have different resolutions. Ultimately they all come together, which is called rasterization. DLSS/FSR/XeSS - none of them is rasterization.

So if we are measuring rasterization performance, DLSS/FSR/XeSS is an additional variable that distorts results, meaning it should be measured separately.
If we are measuring rasterization performance, RT is an additional variable that distorts results, meaning it should be measured separately.
If we are measuring rasterization performance, frame generation is an additional variable that distorts results, meaning it should be measured separately.

For the record, I did not say that it should not be measured, or that it should not be taken into account. But rasterization is the main feature for the vast majority of games, and it is only fair to use that as a baseline and consider the rest bonuses.
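
Concretely, "measured separately" might look something like this; the numbers below are made-up placeholders, purely to illustrate keeping each workload in its own bucket with its own perf-per-dollar figure instead of blending everything into one chart:

```python
# Placeholder data, NOT real benchmark results - just to show the structure:
# each workload type keeps its own average fps and its own fps-per-dollar.
results = {
    "card_a": {"price": 500, "raster": 100.0, "rt": 45.0, "upscaled": 130.0},
    "card_b": {"price": 600, "raster": 95.0, "rt": 60.0, "upscaled": 140.0},
}

for card, data in results.items():
    for workload in ("raster", "rt", "upscaled"):
        fps = data[workload]
        print(f"{card}  {workload:9s} {fps:6.1f} fps  {fps / data['price']:.3f} fps/$")
```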

And the end result can be quite different, that's the point. It's not actually 'like for like' if the upscaling algorithm on one card is delivering a better result due to it using the hardware resources allocated to that GPU for that express purpose of making the rendering more efficient wrt final output image quality.
No disagreement here. That doesn't mean that it should be the basis for choosing a graphics card. And that statement doesn't mean that it should be ignored either.

I never said it should become the 'standard' way of benchmarking, in the sense that it should be the only way. What I did say, however, is that it does need to factor into the final conclusions and overall value assessment in reviews. The numbers always have to be put into context, and it works both ways - you wouldn't simply refer to frame generation benchmarks either when comparing the performance.
Glad to see we can agree.

You're not actually evaluating the reconstruction abilities of competing GPUs if you simply take the names given to each quality level and benchmark them against the equivalently named mode from the competitor. If they actually behave quite differently in delivering the final image they're designed to produce, that matters. It's their entire purpose. If it didn't, then AMD wouldn't even need FSR2; we could just use FSR1. Most reviewers at least, thankfully, recognize that would be ridiculous, as the quality levels are so drastically different despite AMD also naming the internal res settings of FSR1 the same.
That is exactly why separate dedicated articles & videos are created to point out these differences. Besides, some people claim to need Ferraris and that everyone else needs Ferraris, and that if you don't have a Ferrari you can't drive. But most people simply need a car to get around.

As for the bizarre stipulation that you can't include proprietary technologies in reviews,
If that is what you got from what I said, that was not my intended message. It can be included, but it should not be the core of the benchmarking, in the interest of comparing apples with apples.
When you measure a car's performance, you measure acceleration and steering. That the other car has a backup camera is nice and might make you get that car instead, but it's not the primary measurement of a car's performance.

if the actual implementation is widespread enough, then it being 'proprietary' is completely irrelevant when evaluating the value of hardware. We buy these GPUs to play games; if the games commonly support a feature of a card that can deliver a superior experience with the same base resolution input, it doesn't matter if it's open source or not. You are not making a political statement with what gaming GPU you buy.
Actually, you are making a political statement, whether you are aware or not. Voting with money always works. If you buy shoes that were created by child slavery, even if it was never your intention, and even if you are unaware, you are literally funding child slavery. Whatever we buy influences the world, because where the money goes, the same things grow.
But I guess that is too raw for most. Everyone prefers to see themselves as good, rather than contributing to atrocities.

How does that relate to GPUs? Simple. The market adapts to what is bought, what is required, what is desired and what is available. By nature, FSR and XeSS have a larger potential market than DLSS. At some point, anyone who bought an nVidia GPU is going to have to live with the fact that DLSS will no longer be used, just as happened with PhysX, HairWorks, Ansel and G-Sync. Obviously, if one pays a premium, one wants such features to be included and used as much as possible. People don't want to think they are throwing away their money, so obviously they were mad when Starfield wouldn't have DLSS.
But it's they themselves who constantly buy products with features that are literally planned obsolescence. And as AMD and Intel compete more, there will be more and more instances where proprietary features go unused. Out of the three, it's probably XeSS that's going to be the winner, while FSR and DLSS fade away.

Your purchases literally influence what nVidia, AMD and Intel do with their market policy. Choose wisely.

We compare the overall value proposition of hardware utilizing proprietary technologies all the time; in fact, one of the reasons this generation of mid/low-end cards is so derided is the relatively weak value proposition they have when compared to the price of consoles.
I don't see how things are different from the past regarding consoles. The PS5 is about a 5700 XT, or a 2070. The $269 RX 7600 is 20% faster than the PS5's GPU, and a PS5 costs about as much as an RX 7800 XT. The Arc A770 is also about 15% faster than a PS5 for about $300.

Of course, if only nVidia is used as a point of reference, I can understand why this generation is derided. But I can't help thinking that it's gamers themselves who created this situation by monetarily supporting constant price hikes for shiny new features with a short lifespan.

Hardware Unboxed benchmarks are ultimately useful, and close to how they should be. I've not always agreed with them, but the fact that both AMD and nVidia fanboys find them biased is actually a good indication of their neutrality.
 
You can :ROFLMAO: all you want. It is fake performance if you claim that it renders at a higher resolution when it's not. Just like frame generation produces fake frames, upscaling produces fake pixels. Simple as that.

Lol, they're all generating pixels from samples, the samples are just collected differently. If I switch back and forth between trilinear filtering and anisotropic filtering, which one generates the real pixels, as they're sampling differently?

That has always been the case; textures and effects can all have different resolutions. Ultimately they all come together, which is called rasterization. DLSS/FSR/XeSS - none of them is rasterization.

Screen-space effects (depth of field, motion blur, AO, GI, shadows, etc.) are not rasterization, ray/path tracing is not rasterization, and fragment/pixel shading is not rasterization. Rasterization in hardware is just one step that takes polygons (post-culling) and generates fragments to be shaded. If I play a game at 1440p "native" with full-resolution screen-space effects or a game at 4K "native" with quarter-resolution screen-space effects, which one has the "real" pixels?

The way rendering works is that you sample data (geometry, textures, materials) and you produce pixels. You feed the data into a pipeline (a collection of algorithms) and pixels come out the other side. You can change the sampling rates, and you can sample spatially and temporally. No matter what, the pixels that come out the other side are as "real" as any others. Subjectively, quality will vary.
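
As a toy illustration of that point (not any real renderer, just a hypothetical analytic "scene" sampled at different rates), both paths below end in an ordinary grid of pixels; only the number of samples differs:

```python
import numpy as np

def shade(u, v):
    # Hypothetical "scene": a smooth analytic pattern standing in for
    # sampling geometry/textures/materials.
    return 0.5 + 0.5 * np.sin(20 * u) * np.cos(20 * v)

def render(width, height):
    # One spatial sample per output pixel.
    v, u = np.meshgrid(np.linspace(0, 1, height), np.linspace(0, 1, width), indexing="ij")
    return shade(u, v)

native   = render(1920, 1080)                 # one sample per 1080p pixel
quarter  = render(960, 540)                   # a quarter of the samples...
upscaled = np.kron(quarter, np.ones((2, 2)))  # ...duplicated back up to 1920x1080

# Both are 1080x1920 arrays of pixels; the difference is how they were
# sampled (and thus subjective quality), not whether the pixels are "real".
print(native.shape, upscaled.shape)  # (1080, 1920) (1080, 1920)
```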
 