Value of Hardware Unboxed benchmarking

AI is not really a good reason though. Even 16GB is pretty limiting.
My oncologist friend just told me today that he had to choose a 3060 12GB over a 3060 Ti 8GB because of VRAM; he does MRI brain contrast pattern recognition using TensorFlow. He had no choice but to use the slower 3060 because its larger VRAM let it process his models faster than the more powerful 3060 Ti with its smaller 8GB of VRAM. In the future he is looking at purchasing an NVIDIA GPU with 24GB.
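For anyone curious what that trade-off looks like in practice, here's a minimal TensorFlow sketch (illustrative only, not his actual pipeline): mixed precision and a smaller batch size are the usual knobs for squeezing a model into less VRAM, at the cost of some throughput.

import tensorflow as tf

# Illustrative only: shrink the memory footprint so a large model fits on a 12 GB card.
# Mixed precision roughly halves activation memory; the final layer stays float32 for stability.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same",
                           input_shape=(256, 256, 1)),  # hypothetical MRI slice size
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, dtype="float32"),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Smaller batches trade throughput for fitting in VRAM -- the same trade-off that made
# the 12 GB 3060 preferable to the 8 GB 3060 Ti here.
# model.fit(train_ds.batch(8), epochs=10)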

I am guessing lots of amateur/beginner AI devs are like my friend.
 
It's not a linear scale, and generally you get worse performance per dollar below $300 and above, say, $700. The MSRP "sweet spot" last generation was the 3060 Ti, since you got 30% more performance for 21% more money vs. the 3060, and even better results vs. AMD cards, which were priced way too high.
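To put rough numbers on that (using last-gen MSRPs of $329 and $399; street prices will vary):

# Rough performance-per-dollar sketch; relative_perf is normalized to the 3060 = 1.0.
cards = {
    "RTX 3060 12GB":   {"price": 329, "relative_perf": 1.00},
    "RTX 3060 Ti 8GB": {"price": 399, "relative_perf": 1.30},  # ~30% faster for ~21% more money
}

for name, c in cards.items():
    print(f"{name}: {c['relative_perf'] / c['price'] * 1000:.2f} relative perf per $1000")

The Ti comes out ahead on that metric, which is what made it the value pick at MSRP.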

As far as the thumbnails go, can you tell me from the thumbnails what HU's conclusions were regarding DLSS 3.0 and FSR 2.0/1.0?
You get up to 2x the performance of a 7600. The lows of the 7600 are so bad that this product is useless on the PC (like a low-end OEM card sold to end users).

Can you tell me from this in-game comparison in Bright Memory: Infinite which two upscaling techniques are "impressive and almost the same" and why FSR 2.0 is "uglier and full of artefacts"?
 
You get up to 2x the performance of a 7600. The lows of the 7600 are so bad that this product is useless on the PC (like a low-end OEM card sold to end users).

Can you tell me from this in-game comparison in Bright Memory: Infinite which two upscaling techniques are "impressive and almost the same" and why FSR 2.0 is "uglier and full of artefacts"?
Up to 2x but on average only 25% at 1080p (TPU)
 
You get up to 2x the performance of a 7600. The lows of the 7600 are so bad that this product is useless on the PC (like a low-end OEM card sold to end users).

Can you tell me from this in-game comparison in Bright Memory: Infinite which two upscaling techniques are "impressive and almost the same" and why FSR 2.0 is "uglier and full of artefacts"?
I don't see any drastic performance regressions in rasterization. Which titles are we talking about?

Regarding the comparison, if you're talking about XeSS, HU disliked the DP4a implementations they tested, but thought the XMX implementation was better than FSR 2 (the video title was "Actually Impressive - Intel XeSS Defeats AMD FSR 2.1 on Arc GPUs"). And they have always claimed that DLSS 2.0 is superior to FSR due to having fewer artifacts. In their latest analysis of 26 titles they found FSR was able to tie DLSS in only a small handful of titles and only at 4K. Everywhere else DLSS was either moderately or significantly better. (In their own words, "FSR was not able to outperform DLSS under any circumstances").
 
Would relative/ray tracing performance, energy efficiency (TPU reviews), or GPU features become factors in paying the difference? I don't like the price of either the 7600 or the 4060 Ti, but most people buying in this segment will also want something that retains some relevance for the next 2-3 years. As TPU concluded, "At the end of the day, what matters is actual gaming performance."
And lack of VRAM affects gaming performance. It does so now, and will almost certainly do so even more in the next two to three years as games get ever more demanding.

If the argument is just to play at 1080p or Medium settings or whatever, people don't want to pay $400+ in 2023 for a sub-console experience. That's ridiculous.
 
A preview of the upcoming F1 23 benchmarks:
The "Ultra-High" preset uses every ray tracing effect at the "high" quality setting.

Let's see how this "we will only test with the ultra preset" channel will handle F1 23.
 
Coming out weeks after other publications is just for the clicks. AMDUnboxed has no problem with AMD's behaviour. You will see that they won't talk about it anymore in any of the GPU reviews.
 
Coming out weeks after other publications is just for the clicks. AMDUnboxed has no problem with AMD's behaviour. You will see that they won't talk about it anymore in any of the GPU reviews.

🙄

More like one week, and they were trying to get clarification from AMD. This was the result of multiple inquiries, waiting for a response, and not getting one.

Not rushing out of the gate with the outrage, but taking their time and crafting a video that addresses the majority of the concerns and counter-arguments others have had, is basically the opposite of clickbait. You may scoff at the usual YouTube thumbnail, but it's a measured critique with obviously a good deal of time put into it.

There have been, and there will likely be, other takes from them in the future that are questionable; critique them for those when they happen. This is not one of them.
 
I kinda wonder what HUB's Steve thinks about this issue. Did he say anything on it during their last "monthly Q&As"?

He hasn't really chimed in much on this from what I can see; it's been mostly Tim. Initially Tim was a little skeptical of the implication that AMD would actually bake this into their contracts, but he recognized at least that it was starting to look pretty suspicious, especially with Jedi Survivor being a huge game and also being built on UE4, where you almost have to actively avoid using DLSS.

This last video came about due to them trying to get clarification from AMD, and AMD's PR helping this along thusly:

 
Both of them saying in this video that they think there's "no real contract saying what each party can or can't do" already made me facepalm.

Keep watching. They say that initially, but then go on to say that, all things considered, it's also not looking good. There's a difference between spitballing their suspicions and theories in a Q&A session made just after this news broke vs. doing what any outlet should do: trying to get input from the parties involved and coming out with more concrete statements once more facts - or in this case, a complete lack of response given the generous time allotted - are in.

Tim took a guess that AMD wouldn't be this dumb, then found out that, hey, they might indeed be. I think this video carries a lot more weight than one segment of a Q&A session video that would not receive nearly the same attention; you can see from the YouTube comments that it upset a portion of their fanbase quite a bit.
 
Well that's my issue with them both in these cases - they "guess" too much instead of actually knowing what they are talking about.
Nobody can KNOW for certain here, quite obviously. At the end of the day, you yourself can only 'guess' what any contract actually says. It's the most anybody can do without real inside information.

Either way, your initial claim was dishonest and misleading (which isn't at all surprising). They are not trying to defend AMD as you're painting it at all, and are quite negative about the whole situation. They've even made a whole video debunking (to the best of their ability) all the defenses surrounding the situation:


Not that you'll care. It doesn't fit your narrative, so you will find some other line of argument to keep thinking the way you do. You've got it out for them and nothing can or will change that.
 

I always appreciate a wider selection of game testing, especially of games that are a little long in the tooth, to get a more well-rounded picture of what kind of generational, er, 'uplift' a new card provides. The thing is, this isn't even as bad as it could have looked, since they didn't test 4K. I'm wondering if there would be some cases, like Doom Eternal, where the 4060 Ti would end up even slower than the 3060 at that resolution.
 
I always appreciate a wider selection of game testing, especially of games that are a little long in the tooth, to get a more well-rounded picture of what kind of generational, er, 'uplift' a new card provides. The thing is, this isn't even as bad as it could have looked, since they didn't test 4K. I'm wondering if there would be some cases, like Doom Eternal, where the 4060 Ti would end up even slower than the 3060 at that resolution.

Well, there are already scenarios in which the 3060 bests the 3070 Ti due to the VRAM difference. That also brings up an issue: if 4K testing pressures memory too much, the 3060 Ti results may well collapse too. A problem the 4060 Ti may have isn't just the cache-over-memory-bandwidth scenario, but that its narrower PCIe bus may also be hit harder if VRAM becomes an issue.

A concern with cache reliance was always the question of what happens when you test worst-case scenarios for it. I remember bringing this up when RDNA2 released as well, wondering what would happen if you looked into scenarios that hit the caches harder. But the issue is that those scenarios may not be purely related to resolution.

For example, since you brought up Doom Eternal, HUB's results do seem contrary to other reviewers, who actually show Doom scaling better on the 4060 Ti against the 3060 Ti relative to other titles -

Which suggests that there is a strong scene dependency (as I don't believe Doom has a standardized built-in benchmark) or some other factor involved here.

On another note, I've always had the feeling that just testing a wide variety of games isn't the only factor in determining comprehensiveness, and that perhaps more importance should be placed on testing multiple scenes/scenarios in a given game instead.
 
The goal is really just to find a rough average as a guide for consumers. 50 games is about as comprehensive as you'd ever need, and wouldn't likely lead to significantly different results than doing 25 games with two scenes each.

And in fact, I'd say you're actually getting less comprehensive if you did, say, 10 games with five scenes each, as you're more likely to include games that don't actually differ too drastically per scene, and the lack of a greater variety of games skews any average a bit too hard.
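A toy sketch of that point (made-up numbers, not real benchmark data): scenes from the same game tend to cluster together, so 10 games with five scenes each behaves more like ~10 independent samples, while 50 distinct games capture far more variety.

from math import prod
import random

def geomean(values):
    # Geometric mean, the usual way per-game relative results get averaged.
    return prod(values) ** (1 / len(values))

random.seed(0)
# Hypothetical per-game relative results (e.g. 4060 Ti vs. 3060 Ti, 1.0 = tie).
games = [random.uniform(0.95, 1.25) for _ in range(50)]

fifty_games = games                                       # 50 games, one scene each
ten_games_five_scenes = [g + random.uniform(-0.02, 0.02)  # scenes within a game barely differ,
                         for g in games[:10]              # so the average is driven by which
                         for _ in range(5)]               # 10 games you happened to pick

print(f"50 games:            {geomean(fifty_games):.3f}")
print(f"10 games x 5 scenes: {geomean(ten_games_five_scenes):.3f}")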
 