Value of Hardware Unboxed benchmarking

Why on earth should the expectations for a card that's nearly 50% more expensive ($269 vs $399) be the same as for the cheaper one?
Because you're not comparing them to each other? You're comparing each of these cards to what's available on the market at the same price. Thus the expectations for them should be the same: if 8 GB isn't enough for the 4060 Ti, then the same goes for the 7600, especially since the two cards aren't that far apart in performance, and since the 7600 isn't improving the VRAM situation in its pricing tier either, just like the 4060 Ti.
 
Because you're not comparing them to each other? You're comparing each of these cards to what's available on the market at the same price. Thus the expectations for them should be the same: if 8 GB isn't enough for the 4060 Ti, then the same goes for the 7600, especially since the two cards aren't that far apart in performance, and since the 7600 isn't improving the VRAM situation in its pricing tier either, just like the 4060 Ti.
But that's the point. If the 4060 Ti isn't massively better performance-wise than the 7600 (which itself is uncompetitively priced) and it comes with the same VRAM compromises, why should I pay almost 50% more?
 
They started saying that once AMD released FSR 1. It was bad before that because, I don't know, you couldn't use it on AMD hardware? But then their stance suddenly changed from "DLSS is bad" to "DLSS and FSR are okay".
No. Their first video was before FSR; I checked.
 
I don't recall anyone ever making hard statements about DLSS being bad, or FSR being bad. To my memory, the entire conversation revolved around the general merits of both -- FSR being able to run on basically anything as a general-purpose upsampler, with all the potential cons that implies, and DLSS's AI training model, unique to each supported app, giving it a bit of a leg up, but only in those apps which had been profiled, making it unusable as a general-case solution.

The only people I saw outright panning either solution were fanatics; fringe elements who need "their team" to "win." And sometimes "their team" wasn't about the manufacturer of the GPU...
 
I doubt anyone is buying the 6800 XT for AI performance.
People are buying the 6800 XT for its raw performance. What people, and reviewers like HW Unboxed, tend to forget is that in the future, ray tracing performance and AI performance will effectively add to raw performance, because games will use both as standard. Even AMD shares this vision: https://videocardz.com/newz/amd-talks-future-of-rdna-wants-to-make-game-npcs-smarter-with-ai

So if a game runs AI operations in real time, competing RTX GPUs with a similarly large VRAM buffer will age far better than the 6800 XT because of how much faster tensor cores are at matrix multiplication.
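For a rough sense of what that means at the code level, here's a minimal sketch (assuming CUDA with cuBLAS; the function name and square matrix sizes are made up for illustration) of handing a half-precision matrix multiply to the library, which dispatches it to tensor cores on hardware that has them:

```cpp
#include <cublas_v2.h>
#include <cuda_fp16.h>

// Multiply two n x n FP16 matrices on the GPU: C = A * B.
// cuBLAS selects a tensor-core kernel on Volta/Turing/Ampere/Ada;
// on hardware without tensor cores it falls back to regular CUDA cores.
void fp16_gemm(cublasHandle_t handle, int n,
               const half* dA, const half* dB, half* dC) {
    const half alpha = __float2half(1.0f);
    const half beta  = __float2half(0.0f);
    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                 &alpha,
                 dA, CUDA_R_16F, n,    // A: FP16 input, leading dimension n
                 dB, CUDA_R_16F, n,    // B: FP16 input
                 &beta,
                 dC, CUDA_R_16F, n,    // C: FP16 output
                 CUBLAS_COMPUTE_16F,   // accumulate in FP16
                 CUBLAS_GEMM_DEFAULT); // let cuBLAS pick the kernel
}
```

Real-time inference in a game ultimately reduces to batches of matrix multiplies like this one, which is the single operation tensor cores are built to accelerate.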
 
People are buying the 6800 XT for its raw performance. What people, and reviewers like HW Unboxed, tend to forget is that in the future, ray tracing performance and AI performance will effectively add to raw performance, because games will use both as standard. Even AMD shares this vision: https://videocardz.com/newz/amd-talks-future-of-rdna-wants-to-make-game-npcs-smarter-with-ai

So if a game runs AI operations in real time, competing RTX GPUs with a similarly large VRAM buffer will age far better than the 6800 XT because of how much faster tensor cores are at matrix multiplication.
Well, we're still waiting for "native" UE5 titles, let alone titles making extensive use of AI for content generation. Are such titles even in development right now?
 
Is it still the case that game developers can't even access tensor cores through any currently available APIs?
 
But that's the point. If the 4060 Ti isn't massively better performance-wise than the 7600 (which itself is uncompetitively priced) and it comes with the same VRAM compromises, why should I pay almost 50% more?
The 4060 Ti is up to 2x as fast. And since when do performance and price scale linearly?! The 7900 XTX costs $600 more and in so many games fails to hit even 60 FPS at 1440p.

For every topic, you can just use their thumbnails.
 
People are buying the 6800 XT for its raw performance. What people, and reviewers like HW Unboxed, tend to forget is that in the future, ray tracing performance and AI performance will effectively add to raw performance, because games will use both as standard. Even AMD shares this vision: https://videocardz.com/newz/amd-talks-future-of-rdna-wants-to-make-game-npcs-smarter-with-ai

So if a game runs AI operations in real time, competing RTX GPUs with a similarly large VRAM buffer will age far better than the 6800 XT because of how much faster tensor cores are at matrix multiplication.

Well, the category of competing RTX GPUs with a similar or larger VRAM buffer from the RTX 30 series includes exactly two GPUs: the 3090 and the 3090 Ti.

In hindsight, I don't think reviewers made a mistake by not making a big deal about the AI performance of the 2020 GPUs. Does anything other than DLSS and Frame Generation take advantage of AI acceleration in games? And even there, you're out of luck and won't be able to use FG if you have an RTX 30 series GPU.

AI will definitely make it into games more and more, but the longer it takes, the less relevant the 2020 GPUs will be. If we see one or two novel AI implementations (neural radiance cache?) in some games by the end of this year, those GPUs will already be 3 years old, and AI acceleration will still be nowhere near relevant to gaming in the big picture.
 
Is it still the case that game developers can't even access tensor cores through any currently available APIs?
For Turing, NVIDIA changed how standard FP16 operations were handled. Rather than processing them through their FP32 CUDA cores, as was the case for GP100 Pascal and GV100 Volta, NVIDIA instead started routing FP16 operations through their tensor cores.

The tensor cores are of course FP16 specialists, and while sending standard (non-tensor) FP16 operations through them is major overkill, it's certainly a valid route to take with the architecture. In the case of the Turing architecture, this route offers a very specific perk: it means that NVIDIA can dual-issue FP16 operations with either FP32 operations or INT32 operations, essentially giving the warp scheduler a third option for keeping the SM partition busy.

Ampere and Ada also follow the same path as Turing.
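To be fair on the API question above: tensor cores have been directly programmable from CUDA since Volta through the WMMA intrinsics. A minimal sketch (one warp computing a single 16x16 tile, FP16 inputs with FP32 accumulation; not production code, just the shape of the API):

```cpp
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp (32 threads) computes one 16x16 tile of C = A * B
// using a single tensor core matrix-multiply-accumulate.
__global__ void wmma_tile(const half* a, const half* b, float* c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);      // zero the accumulator
    wmma::load_matrix_sync(a_frag, a, 16);  // load 16x16 FP16 tiles
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag); // the tensor core op
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}
```

The catch for game developers is that this is CUDA, not Direct3D or Vulkan. From the graphics APIs the routes are, as far as I know, vendor extensions (e.g. VK_NV_cooperative_matrix) or DirectML rather than core D3D12, so "no access through the standard graphics APIs" is roughly accurate.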
 
The 4060 Ti is up to 2x as fast. And since when do performance and price scale linearly?! The 7900 XTX costs $600 more and in so many games fails to hit even 60 FPS at 1440p.

For every topic, you can just use their thumbnails.
It's not a linear scale, and generally you get worse performance per dollar below $300 and above, say, $700. The MSRP "sweet spot" last generation was the 3060 Ti, since you got 30% more performance for 21% more money vs. the 3060, and even better results vs. AMD cards, which were priced way too high.
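(Quick math on that: 1.30 / 1.21 ≈ 1.07, so at MSRP the 3060 Ti worked out to roughly 7% better performance per dollar than the 3060.)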

As far as the thumbnails go, can you tell me from the thumbnails what HU's conclusions were regarding DLSS 3.0 and FSR 2.0/1.0?
 
The 4060 Ti is up to 2x as fast. And since when do performance and price scale linearly?! The 7900 XTX costs $600 more and in so many games fails to hit even 60 FPS at 1440p.
The 7600 offers nearly 20% better performance per dollar at 1080p compared to the 4060 Ti (TPU).
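(Working backwards from the MSRPs quoted earlier, $269 vs $399: for the 7600 to come out nearly 20% ahead in performance per dollar, the 4060 Ti only needs to be about 399 / (1.2 × 269) ≈ 1.24x, i.e. roughly 24% faster at 1080p.)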
 
And some used GT 1030 offers 200,000% better perf/price compared to the 4060 Ti.
What exactly is the point of price/perf comparisons between differently priced products? It rarely tells us anything of value.
 
But that's the point. If the 4060 Ti isn't massively better performance-wise than the 7600 (which itself is uncompetitively priced) and it comes with the same VRAM compromises, why should I pay almost 50% more?
Would relative performance, ray tracing performance, energy efficiency (see TPU's reviews), or GPU features be factors worth paying the difference for? I don't like the price of either the 7600 or the 4060 Ti, but most people buying in this segment will also want something that will retain some relevance for the next 2-3 years. As TPU concluded, "At the end of the day, what matters is actual gaming performance."
 
Would relative performance, ray tracing performance, energy efficiency (see TPU's reviews), or GPU features be factors worth paying the difference for? I don't like the price of either the 7600 or the 4060 Ti, but most people buying in this segment will also want something that will retain some relevance for the next 2-3 years. As TPU concluded, "At the end of the day, what matters is actual gaming performance."
The TPU quote you've chosen contradicts your initial point...

On topic: HUB has been taking all of the above factors into account in their conclusions for both the 4060 Ti and the 7600.
 