Value of Hardware Unboxed benchmarking


The thumbnail ;) HUB, like many/most gaming reviewers, hasn't figured out that GPU "performance" only really scales with power now. They're wondering why the gains in raster performance are so small, why there's such a big gap between the 5090 and the 5080, etc. Honestly, I'd have been more pissed if they came out and said the 5080 was going to be 500W. I'm not getting a dedicated circuit added to my apartment for my computer. There's kind of a limit to how much power your computer can use before it's impractical. There's a reason why the competition is not scaling any differently.
 

The thumbnail ;) HUB, like many/most gaming reviewers, hasn't figured out that GPU "performance" only really scales with power now.
That's just not at all true, though.

Lovelace is massively more performant than Ampere at similar power draw.

And plenty of tests have shown that you can decrease the power on these latest GPUs quite a bit without losing a lot of performance. All these high TDPs are really doing is driving up the prices of graphics cards, because manufacturers have to engineer them to support the stock TDP. It's all getting fairly out of hand, simply to juice out the last little bits of performance for benchmarks/reviews. We used to get more reasonable TDP parts, and then we'd have the higher-end graphics card variants that could support higher power draw for a premium, but now even baseline graphics cards are equivalent to one of those super-premium cards from 8-10 years ago.

I know there's been heavier spiking behavior on modern GPUs as well for a couple of generations now, which might be contributing to the rising TDPs so people don't underbuy on the PSU side, but I don't think that's the main thinking here.

Either way, the idea that we've reached a wall in terms of efficiency, core scaling and IPC scaling is definitely not true.
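
To put numbers on the "lower the power limit, lose little performance" claim, here's a minimal sketch (assuming an Nvidia card and a benchmark of your choosing) of how you could measure it yourself with nvidia-smi. The power-limit values, GPU index and benchmark command are placeholders, and setting the power limit needs admin rights.

import subprocess

GPU = "0"                                       # which GPU to test
POWER_LIMITS_W = [450, 400, 350, 300]           # hypothetical limits to sweep
BENCH_CMD = ["./my_benchmark", "--report-fps"]  # placeholder: any benchmark that prints avg FPS

def set_power_limit(watts):
    # nvidia-smi -pl sets the board power limit (requires admin privileges)
    subprocess.run(["nvidia-smi", "-i", GPU, "-pl", str(watts)], check=True)

def read_power_draw():
    # query the current board power draw in watts
    out = subprocess.run(
        ["nvidia-smi", "-i", GPU, "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return float(out.stdout.strip())

for limit in POWER_LIMITS_W:
    set_power_limit(limit)
    bench = subprocess.run(BENCH_CMD, capture_output=True, text=True, check=True)
    fps = float(bench.stdout.strip())           # assumes the benchmark prints average FPS
    print(f"{limit} W cap: {fps:.1f} fps, {fps / read_power_draw():.2f} fps/W")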
 

I think the keyword here is "now."
Ampere (consumer) is on Samsung's 8N process, while Lovelace is on TSMC 4N. These processes have huge differences in performance.
The key question is whether we are going to see similar process advances in the future. Right now all signs point to a big negative.
 
Lovelace had the benefit of a full two-node jump from Samsung's 8nm process. On the other hand, wafer prices are >3x higher...
Blackwell has the "benefit" (if you can call it that) of using the old, proven process, which means the cost of a similarly sized die should be considerably lower than it was for Lovelace at its launch. And we kind of see it in the launch prices of the 50 series SKUs. So the situation is similar but for a different reason: Lovelace had to use smaller dies at higher prices, which limited the perf/price gain despite a huge process upgrade. Blackwell is using a "cheaper" node, but it's the same old node, so perf/price is increasing only through cost improvements on that node and not perf gains from a newer node. The end results in perf/price changes seem to be fairly comparable, really.
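
A rough sketch of that trade-off, with made-up wafer prices, die sizes and performance numbers (none of these are real TSMC/Samsung figures), just to show how a >3x wafer price can cancel out a node's perf gain in perf/price terms:

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # standard first-order approximation, ignoring defects/yield
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# name: (die area in mm^2, wafer price in $, relative performance) -- all hypothetical
scenarios = {
    "old node, big die":   (600, 6000, 1.00),
    "new node, small die": (380, 17000, 1.30),
}

for name, (area, wafer_price, perf) in scenarios.items():
    cost_per_die = wafer_price / dies_per_wafer(area)
    print(f"{name}: ~${cost_per_die:.0f} per die, "
          f"relative perf/cost = {perf / cost_per_die:.4f}")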
 
@Seanspeed I'm not saying there's no improvement, but the improvements are shrinking, and honestly I don't expect the big jumps to ever come back. When you add compute, you need to add bandwidth. Memory and bandwidth require power, and memory isn't scaling. So it's not even just node shrinks, it's the other bottlenecks as well. I honestly have no idea how you could get something big like a 60% gen-over-gen jump for a 6080 over the 5080 without requiring a ton of watts. Not in two years anyway, unless there's some kind of breakthrough in how chips are made. The big giveaway is that Intel and AMD both face the same issue. If you could get these big scaling bumps, it would be easy for AMD or Intel to leapfrog Nvidia in raw computing power, since Nvidia is now focusing more and more on tensor cores.

The "value" formula of $/frame is just not going to look favourable until something big happens in manufacturing technology and memory technology.

Edit: It's one of those things where reviewers, who have been doing this for a long time, base their perception on history and not on the actual technology limits. So you'll get people saying things like, "Well 8 years ago ..." without cluing in that 8 years ago is a very long time in this space.
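
Back-of-the-envelope version of that 6080 point, taking the 5080's 360 W TGP as the baseline and treating the perf/W gains as unknowns (the uplift figures below are assumptions, not predictions):

tgp_5080_w = 360        # RTX 5080 rated TGP
target_uplift = 1.60    # hypothetical +60% gen-over-gen target

for perf_per_watt_gain in (1.00, 1.15, 1.30):   # assumed efficiency improvements
    required_w = tgp_5080_w * target_uplift / perf_per_watt_gain
    print(f"perf/W x{perf_per_watt_gain:.2f} -> ~{required_w:.0f} W for +60%")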
 
untenable to defend AMD/Intel as a noble, underrated underdog.
They’ve never done this, and I’m not even sure how you could claim they’ve done this with Intel considering their latest piece is about how Battlemage sucks lol.

I’d probably agree that FG is frame smoothing and their logic is sound (the additional frames don’t reduce latency like ‘real’ frames would, which is why people never use FG for competitive games). That said, it’s not a bad technology, I’ve used it in the past (I hate it on keyboard and mouse but it works well on controller).
 
FG has nothing to do with any kind of "smoothing". This is like saying that a CRT or plasma does not have better motion clarity because it may have higher input lag than an LCD or OLED. FG is like an advanced black frame insertion method in that it provides the benefit of interrupting the current frame on the display device. And with LCD and VRR it comes with the perk of letting the monitor run in its optimal frame-rate range.
 
They’ve never done this, and I’m not even sure how you could claim they’ve done this with Intel considering their latest piece is about how Battlemage sucks lol.

I’d probably agree that FG is frame smoothing and their logic is sound (the additional frames don’t reduce latency like ‘real’ frames would, which is why people never use FG for competitive games). That said, it’s not a bad technology, I’ve used it in the past (I hate it on keyboard and mouse but it works well on controller).

I think there needs to be a re-think about what "performance" means.

Two scenarios:
1) 75 fps on a 240 Hz screen (VRR)
2) 60 fps -> 240 fps with frame gen on a 240 Hz screen (VRR)

Scenario 1 has 25% better performance, which would be considered meaningful by almost every GPU review standard, but which scenario is a better user experience? I honestly don't know the answer. I can tell you that, depending on the quality of the frame gen and the latency penalty it adds, the best user experience could go either way. Personally, I'd hate playing both of those scenarios. So I'm not really sure how these questions get answered.

HUB is highly opinionated, and I actually think that's good and useful. If you're a person who also doesn't like frame gen and only uses upscaling within limits, then their reviews are probably right on the money. I'm just not sure what performance means if you detach it from user experience. I'm now considering a 480Hz screen as my next display, because if I can frame gen from 120 into the 300s or higher it'll be worth it. I'll have to see what the quality is.
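
For what it's worth, here's the rough arithmetic behind those two scenarios. The frame-gen latency penalty is modeled as holding back roughly one rendered frame, which is an assumption; the real overhead depends on the game and the FG implementation:

def ms(fps):
    return 1000 / fps

# Scenario 1: 75 fps native on a 240 Hz VRR screen
scenario_1 = {"presented frame interval": ms(75),     # ~13.3 ms between new images
              "render-rate latency floor": ms(75)}    # input sampled every rendered frame

# Scenario 2: 60 fps rendered, frame-generated to 240 fps on the same screen
scenario_2 = {"presented frame interval": ms(240),            # ~4.2 ms between images
              "render-rate latency floor": ms(60) + ms(60)}   # assumed ~1 extra frame held back

for name, d in (("75 fps native", scenario_1), ("60 -> 240 fps FG", scenario_2)):
    print(name, {k: f"{v:.1f} ms" for k, v in d.items()})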
 
FG has nothing to do with any kind of "smoothing". This is like saying that a CRT or plasma does not have better motion clarity because it may have higher input lag than an LCD or OLED. FG is like an advanced black frame insertion method in that it provides the benefit of interrupting the current frame on the display device. And with LCD and VRR it comes with the perk of letting the monitor run in its optimal frame-rate range.

It's pretty close to the motion smoothing or motion enhancement you find on TVs, so I think it's a pretty good description. I hate it for movies, because I don't like the soap opera effect, but I prefer massively high frame rates for first-person and third-person games. It has the byproduct of reducing motion blur, so it does have some of the benefit of BFI as well.
 
FG is like an advanced black frame insertion method in that it provides the benefit of interrupting the current frame on the display device.
You aren’t going to believe what people usually call BFI: motion smoothing.

(really, FG is more like the frame interpolation algorithms TVs have; idk why you think it's anything like BFI, there are no black frames inserted?)

This is like saying that a CRT or plasma does not have better motion clarity because it may have higher input lag than an LCD or OLED.

No, this is like saying a 60Hz plasma is equivalent to some higher frame rate on an LCD. Nobody denies that FG and plasmas and CRTs improve fluidity, but they don't improve latency the way doubling the frame rate does.

I would not call a 60Hz plasma equivalent to a 120Hz LCD just because it has better motion clarity, precisely because it still feels like 60Hz input lag.

Unlike most people who talk about plasma gaming, I actually have a plasma TV, and despite the better motion handling it still feels like 60Hz (worse, actually, since it's ancient and the input lag is off the charts).
 
Movie material runs at 24fps on a 24Hz cadence, so it will always look unnatural. With gaming it looks much better, because we do not have a proper sync between frametime and refresh rate, and seeing a rendered character stutter just feels not right.
 
First off this isn’t true unless your player supports refresh rate matching. Many internal players don’t so you need 3:2 pulldown.

Also this isn’t even true, you can achieve perfect sync: it’s called VRR and we’ve had it for around a decade now. Before that we had Vsync but that was rather rigid.
 
I think the keyword here is "now."
Ampere (consumer) is on Samsung's 8N process, while Lovelace is on TSMC 4N. These processes have huge differences in performance.
The key question is whether we are going to see similar process advances in the future. Right now all signs point to a big negative.
We are. It's just going to cost so much to engineer them that the per-dollar gains are completely negated.

AI training still hungers for increased performance density (per sq-mm, not per-$), so as long as the AI race is on we may continue to get more performant xx90s at proportionately higher prices. Meanwhile we have to rely on algorithmic shifts to get super-linear gains.
 
FG may be close to motion interpolation, but it's generally a lot better on latency and is in fact 100% usable in most games that are hitting 40+ fps without it.
It's puzzling that we're back to this discussion now, after many people have tried it on their 40 series cards, and after FSR FG and even LSFG were met with a highly positive reaction.
Is it "real" performance? No. Is it useless and shouldn't be mentioned when talking about new features of the new GPUs? No.

Fact is, the 50 series will be able to output more frames than the 40 series - and this is likely coming without any negative change in input latency.
So for a 40 vs 50 series comparison, putting FG against MFG results is entirely valid on all fronts.
HUB is just clueless again.
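
Rough sketch of why a 40 series FG vs 50 series MFG comparison is at least apples-to-apples: both start from the same rendered frame rate, so the input-latency floor is tied to the rendered frames and only the presented rate changes. The 60 fps base is purely illustrative, and this ignores the generation overhead itself:

rendered_fps = 60                    # illustrative base frame rate

for factor, label in ((1, "native"), (2, "2x FG"), (4, "4x MFG")):
    presented_fps = rendered_fps * factor
    latency_floor_ms = 1000 / rendered_fps   # still bound by real rendered frames
    print(f"{label}: {presented_fps} fps presented, "
          f"~{latency_floor_ms:.1f} ms per rendered frame")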
 
Is it "real" performance? No. Is it useless and shouldn't be mentioned when talking about new features of the new GPUs? No.
This sounds like HUB’s position, restated.

Using FG frames as ‘performance’ in a graph is at best misleading. I don’t think the tech is useless or whatever (I’ve been using frame gen from before FG even came out since Oculus implemented it years ago) but I dislike how Nvidia is blurring the line here.

Case in point: does the 5070 beat the 4090, or does it require the new 4x FG to do so?
 

I'm pretty sure Jensen put a caveat on that performance on stage, but it is getting messy. The genie is out of the bottle, so to speak, so language and reviews are going to have to adapt. There's no burying your head in the sand about the function of AI and tensor cores in GPUs now. They add something, and if it's not performance, then I don't know what the language should be. Right now when we say performance we mean all parts of the GPU except the tensors (TOPS), I guess.
 
Using FG frames as ‘performance’ in a graph is at best misleading. I don’t think the tech is useless or whatever (I’ve been using frame gen from before FG even came out since Oculus implemented it years ago) but I dislike how Nvidia is blurring the line here.
If this is a comparison of 40 series FG to 50 series MFG, then it's a valid comparison. It would be misleading to compare any FG to non-FG - like what Nvidia did back at the 40 series launch. So all in all it is an improvement.
 