Value of Hardware Unboxed benchmarking

If this is a comparison of 40 series FG to 50 series MFG, then it's a valid comparison. It would be misleading to compare any FG to non-FG - like what Nvidia did back at the 40 series launch. So all in all it is an improvement.
Well not really because 50 series can interpolate more. Doubling the interpolation doesn’t double performance just like turning on FG originally didn’t double performance.

The 50 series will have smoother motion because it can interpolate more, but I have a stubborn feeling that the 5070 only ‘beats’ the 4090 because of the 4x interpolation.
 
I have a stubborn feeling that the 5070 only ‘beats’ the 4090 because of the 4x interpolation.
It's not just a feeling but a fact. NVIDIA is stretching how far they can lie, with media parroting the lies, before the crap hits the fan. So far they've succeeded, sadly. Any scaling, framegen and whatnot should be left out of any comparison which doesn't specifically compare said feature.
 
It's not just a feeling but a fact. NVIDIA is stretching how far they can lie, with media parroting the lies, before the crap hits the fan. So far they've succeeded, sadly. Any scaling, framegen and whatnot should be left out of any comparison which doesn't specifically compare said feature.

I'm pretty sure Jensen said on stage that the 5070 matches the 4090 because of DLSS4 frame gen. I still think the comparison isn't a good one, but he did explain how they reached that conclusion. All of their slides have explicitly shown that DLSS includes frame gen and multi frame gen.
 
FG may be close to motion interpolation, but it's generally a lot better on latency and is in fact 100% usable in most games which are hitting 40+ fps without it.
Weeeeeell, that's very subjective.
If I had to choose between 40 and 40 with FG (somehow, excluding the option to turn settings down to get better framerates), I would choose 40, because the added latency is very noticeable to me.
At a minimum of 60 FPS, and with a good FG implementation (few to no noticeable artifacts), it would be a no-brainer for me.
At 120, it should be illegal not to use it...
I really like the tech, but some things are by nature very subjective.
 
Weeeeeell, that's very subjective.
If I had to choose between 40 and 40 with FG (somehow, excluding the option to turn settings down to get better framerates), I would choose 40, because the added latency is very noticeable to me.
At a minimum of 60 FPS, and with a good FG implementation (few to no noticeable artifacts), it would be a no-brainer for me.
At 120, it should be illegal not to use it...
I really like the tech, but some things are by nature very subjective.
I don't think FG really adds latency; it just doesn't remove it. So 40 with FG feels identical to 40 without, but it won't feel like 80. For many (like myself) this feels weird, because we are used to input latency decreasing as frame rates increase.
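A quick back-of-the-envelope sketch of that point, with hypothetical numbers and a deliberately simplified model that ignores the buffering cost FG itself adds (which comes up further down the thread): the display rate doubles, but the rate at which frames respond to input doesn't.

# Simplified model: frame generation doubles the displayed frames, but input is
# still only reflected in rendered frames, so "felt" latency tracks the base rate.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

base_fps = 40      # hypothetical rendered frame rate
fg_factor = 2      # 2x frame generation

displayed_fps = base_fps * fg_factor
print(f"Displayed: {displayed_fps} fps "
      f"({frame_time_ms(displayed_fps):.1f} ms between shown frames)")
print(f"Input reflected at: {base_fps} fps "
      f"({frame_time_ms(base_fps):.1f} ms between frames that react to input)")
# The screen updates every 12.5 ms, but inputs still land every 25 ms, which is
# why 40 fps + FG looks like 80 fps yet feels closer to 40.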
 
Well not really because 50 series can interpolate more. Doubling the interpolation doesn’t double performance just like turning on FG originally didn’t double performance.
Isn't that the point? Look at the Nvidia comparisons: they aren't comparing the 5070 to a 4090 there, they are comparing it to a 4070. Which means they are comparing it to a _slower_ card, and then they are adding MFG on top of that card's result with FG - which means you've already paid the FG latency penalty on the 4070 and from that you're just getting "free generated frames" on the 5070 without additional latency. So as I've said, such a comparison seems entirely valid - you're basically comparing the improvement the new FG nets you in framerate over what you would be getting without it. If they'd been comparing FG vs FG (2x mode or whatever it's called on the 5070), that would in fact make a lot less sense when you can use a 3x one and get more frames "for free".
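To make that logic concrete, here is a rough sketch of how the two comparison framings differ; the base frame rates are made up purely for illustration, not actual benchmark figures.

# Hypothetical base (no-FG) frame rates, for illustration only.
base_4070 = 60            # assumed rendered fps on the 4070
base_5070 = 75            # assumed rendered fps on the 5070 (faster card even without FG)

fg_4070  = base_4070 * 2  # 40 series frame generation (2x)
mfg_5070 = base_5070 * 4  # 50 series multi frame generation (4x)

print(f"4070: {base_4070} fps rendered -> {fg_4070} fps displayed with FG")
print(f"5070: {base_5070} fps rendered -> {mfg_5070} fps displayed with MFG")
# The displayed-fps gap is dominated by the higher interpolation factor, while the
# rendered-frame advantage of the newer card is comparatively modest.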

Weeeeeell, that's very subjective.
It's less subjective than it is title-to-title dependent. Some titles are fine even with a 30 fps baseline and for some you'd want 60+. On average though, if it's not a twitch shooter and the baseline frametime health is good, my own estimates show that a 40 fps baseline is fine for DLSS FG.
 
Weeeeeell, that's very subjective.
If I had to choose between 40 and 40 with FG (somehow, excluding the option to turn settings down to get better framerates), I would choose 40, because the added latency is very noticeable to me.
At a minimum of 60 FPS, and with a good FG implementation (few to no noticeable artifacts), it would be a no-brainer for me.
At 120, it should be illegal not to use it...
I really like the tech, but some things are by nature very subjective.

Yep. For me, 90 fps minimum (with upscaling). Anything below 90, I'm not playing. 120 is where it gets good and anything beyond that is ideal. So if I'm turning on frame gen, I probably want to be starting at 120, though maybe from 90.
 
Isn't that the point? Look at the Nvidia comparisons: they aren't comparing the 5070 to a 4090 there, they are comparing it to a 4070. Which means they are comparing it to a _slower_ card, and then they are adding MFG on top of that card's result with FG - which means you've already paid the FG latency penalty on the 4070 and from that you're just getting "free generated frames" on the 5070 without additional latency. So as I've said, such a comparison seems entirely valid - you're basically comparing the improvement the new FG nets you in framerate over what you would be getting without it. If they'd been comparing FG vs FG (2x mode or whatever it's called on the 5070), that would in fact make a lot less sense when you can use a 3x one and get more frames "for free".


It's less subjective than it is title-to-title dependent. Some titles are fine even with a 30 fps baseline and for some you'd want 60+. On average though, if it's not a twitch shooter and the baseline frametime health is good, my own estimates show that a 40 fps baseline is fine for DLSS FG.
Sure but there’s going to be an even larger latency disconnect between 2x and 4x. No, there won’t be added latency beyond the usual FG penalty, but going from 40->80 will feel less jarring than 40->160 as now we’re seeing 160 while feeling 40.

Also we don’t know if these extra 2 frames are of the same quality as the original extra 1 frame we get from old FG.

Overall I think it’s just incorrect to treat generated frames the same as rendered frames. It would be like saying “I play at 4k native” but you’re really using DLSS performance. It’s all good tech but blurring the lines is just manipulating the numbers at this point.
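One way to see why the lines blur: as the interpolation factor rises, a growing share of what's on screen never saw your input. A tiny sketch, with the factor as the only input:

# Fraction of displayed frames that are generated rather than rendered.
for factor in (2, 3, 4):                    # 2x FG, 3x and 4x MFG
    generated_share = (factor - 1) / factor
    print(f"{factor}x: {generated_share:.0%} of displayed frames are interpolated")
# 2x: 50%, 3x: 67%, 4x: 75% -- at 4x, three out of every four frames you see carry
# no new input, which is the "seeing 160 while feeling 40" effect described above.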
 
I think the keyword here is "now."
Ampere (consumer) is on Samsung's 8N process, while Lovelace is on TSMC 4N. These processes have huge differences in performance.
The key point is whether we are going to have similar advances in process technology in the future. Right now all signs are pointing to a big negative.
Obviously leaps like Samsung 8nm -> TSMC 5nm aren't likely to happen again with major GPU releases, but there are definitely still a couple more generations of reasonable improvements to go before I think we're gonna face something of a nightmare scenario for high performance consumer processors.

I expect that RDNA5/UDNA will be on a high performance and mature version of TSMC 3nm, and probably whatever comes after Blackwell, too. And then we'll have some further scope starting from TSMC's 2nm and advancements in GAA+BSPD, and hopefully Intel's processes become competitive enough to keep costs reined in for another generation.

Beyond that, sure, I think we're pretty much screwed, largely for cost reasons more than anything, but even in that situation, I don't think power scaling is gonna be the primary source of new performance improvements. That's just not at all sustainable. Architecture IPC, AI, and wide chiplet strategies are gonna get us further than simply cranking up the power limits. It's not like current GPUs are straining against their power limits by any means. In most cases, you can drop power draw with minimal performance loss.
 
Sure but there’s going to be an even larger latency disconnect between 2x and 4x. No, there won’t be added latency beyond the usual FG penalty, but going from 40->80 will feel less jarring than 40->160 as now we’re seeing 160 while feeling 40.

Also we don’t know if these extra 2 frames are of the same quality as the original extra 1 frame we get from old FG.

Overall I think it’s just incorrect to treat generated frames the same as rendered frames. It would be like saying “I play at 4k native” but you’re really using DLSS performance. It’s all good tech but blurring the lines is just manipulating the numbers at this point.
Well DF just put out a video showing that from a base of about 50ms with original 2x frame generation, going to 4x only incurs another 6.5ms or so of input penalty. That's very acceptable if you can take advantage of the extra visual fluidity.
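Taking those reported figures at face value, the trade-off is easy to put in perspective with a line of arithmetic on the numbers quoted above:

latency_2x_ms = 50.0   # reported end-to-end latency with 2x frame generation
extra_4x_ms   = 6.5    # reported additional penalty when switching to 4x

latency_4x_ms = latency_2x_ms + extra_4x_ms
print(f"4x latency: {latency_4x_ms} ms (+{extra_4x_ms / latency_2x_ms:.0%} over 2x)")
# Roughly a 13% latency increase in exchange for twice as many displayed frames.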
 
Sure but there’s going to be an even larger latency disconnect between 2x and 4x.
What's "latency disconnect"? You're not getting worse latency, you're just getting more frames. And latency in general isn't inherently tied to the framerate. You could be getting high latency even with high framerates in some titles, so getting the same with MFG would look just like an engine-specific quirk. The only way to see that it's not would be to run the game at a similar framerate without FG - which for MFG modes will likely be impossible to do even by going to the lowest in-game setting.

Also we don’t know if these extra 2 frames are of the same quality as the original extra 1 frame we get from old FG.
I see no reason why they would be any worse, and that 1 frame in FG was basically indistinguishable from a "real" frame when viewed at high fps. The only FG artifact that is sometimes visible is HUD interpolation, but it's not a big deal.

Overall I think it’s just incorrect to treat generated frames the same as rendered frames.
But they are not. Again, look at the comparisons. All the 50 series cards in them are _faster_ than the 40 series cards they are being compared to, even without FG.
 
It does add a frame of latency at a minimum.
This is true, but it's also one frame at the "new" framerate, not the original framerate. So long as the FG framerate is at least double the original, the latency will be at least equal to the original. There's a strange gray area where the FG rate is less than double the OG framerate, where latency does suffer a bit. What I'm not entirely sure of is whether latency continues to drop with FG framerate > 2x the OG framerate... I suspect this becomes a property of the app code's input sampling.
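A small sketch of that model, assuming (as described above) that the extra buffering is exactly one frame at the FG output rate; whether the real implementation behaves this way is a separate question.

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

base_fps = 40                                 # hypothetical rendered frame rate
for fg_factor in (2, 4):
    fg_fps = base_fps * fg_factor
    added_ms = frame_time_ms(fg_fps)          # one frame of delay at the FG output rate
    print(f"{fg_factor}x FG: output {fg_fps} fps, "
          f"added delay ~{added_ms:.1f} ms "
          f"(base frame time is {frame_time_ms(base_fps):.1f} ms)")
# Under this model the penalty is ~12.5 ms at 2x and ~6 ms at 4x, i.e. a fraction
# of one original frame time once the FG rate is at least double the base rate.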
 
What's "latency disconnect"?
When the FPS counter says 160 FPS but it feels like 40 FPS, because 3 in every 4 frames are interpolated and your input has no effect on them.


And as for latency this in general isn't inherently tied to the framerate.
This is just factually incorrect. Given the same setup, with the same controllers and display, a higher frame rate will almost always give you lower latency, unless the display bugs out at higher refresh rates (fortunately not a modern issue).


But they are not. Again, look at the comparisons. All 50 cards in them are _faster_ than 40 cards which they are being compared to even without FG.
But they are? Most of their comparisons are with framegen on. The 5070 will almost certainly not match the 4090 without the new 4x FG, yet it's the literal tagline Nvidia used when introducing it lol.

The 5070 probably isn’t going to be as fast as a 4090, but it will interpolate twice as many frames so Nvidia is treating this as equivalent.
 
Well DF just put out a video showing that from a base of about 50ms with original 2x frame generation, going to 4x only incurs another 6.5ms or so of input penalty. That's very acceptable if you can take advantage of the extra visual fluidity.
I agree, the extra fluidity will be nice, but it will never feel like an actual high frame rate setup.

Keep in mind HUB seems to focus a lot on competitive gaming (for one reason or another), where higher frame rates are not just for fluidity but also for decreasing latency. I can obviously feel the difference between 60 and 120 in latency, but FG turning 60 into 120 (or even 240) is still going to feel like 60 even if it looks higher. This is why I don't think we can consider it the same as 'real' performance.
 
This is just factually incorrect
This is factually correct. Latency is a product of the engine more than the framerate. You can make it smaller by improving the framerate, but at any given framerate the latency can be wildly different between different engines, and even between titles on the same engine. So just saying that 150 fps with 30ms latency would somehow "look incorrect" is completely wrong, because you can easily get the very same result in some title without any frame generation. In other words, you will not be able to tell from latency alone that a game is using framegen unless you start changing settings.
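A toy end-to-end latency model makes the point that framerate is only one term in the sum; all the stage timings below are invented for illustration.

def end_to_end_latency_ms(fps: float, input_sample_ms: float, sim_ms: float,
                          queued_frames: int, display_ms: float) -> float:
    frame_ms = 1000.0 / fps
    # input polling + game simulation + frames waiting in the render queue + display
    return input_sample_ms + sim_ms + queued_frames * frame_ms + display_ms

# Same 150 fps, two very different (hypothetical) pipelines:
lean     = end_to_end_latency_ms(150, input_sample_ms=1, sim_ms=5,  queued_frames=1, display_ms=5)
buffered = end_to_end_latency_ms(150, input_sample_ms=8, sim_ms=10, queued_frames=3, display_ms=8)
print(f"150 fps, lean pipeline:     ~{lean:.0f} ms")
print(f"150 fps, buffered pipeline: ~{buffered:.0f} ms")
# Identical framerate, very different latency -- a latency figure on its own doesn't
# tell you whether frame generation is in use.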

But they are? Most of their comparisons are with framegen on.
On both the 40 and 50 series cards they are comparing, yes. So it's not an FG vs no FG comparison, it's an FG vs MFG one. And that is logically a much better one.

The 5070 probably isn’t going to be as fast as a 4090, but it will interpolate twice as many frames so Nvidia is treating this as equivalent.
The only place where they've made that comparison is the marketing slide in the keynote. You won't find it anywhere on their website.

This whole "Internet rage" stems from a one-liner Jensen dropped during the announcement. And I think it's funny as hell (again), because honestly he dropped a lot more questionable lines during the keynote.
 
You can make it smaller by improving the framerate but at any given framerate the latency can be wildly different between different engines and even just titles on the same engine.
I think it's very clear we are talking about within a given game, so I don't know what different engines have to do with this. Inside a game, given the same setup, increasing frame rate decreases latency. Why do you think CS pros play at like 400 FPS?
The only place where they've made that comparison is the marketing slide in the keynote. You won't find it anywhere on their website.
So we agree that Nvidia was being deceptive, and that using 4x FG vs 2x FG to claim the 5070 is faster than a 4090 is wrong?
On both the 40 and 50 series cards they are comparing, yes. So it's not an FG vs no FG comparison, it's an FG vs MFG one. And that is logically a much better one.
It's better, but it's still misleading and incorrect.


So just saying that 150 fps with 30ms latency would somehow "look incorrect" is completely wrong, because you can easily get the very same result in some title without any frame generation. In other words, you will not be able to tell from latency alone that a game is using framegen unless you start changing settings.
If you take a single game and run it at 40 FPS interpolated to 80 FPS, it will feel like 40 FPS and not 80 FPS, due to latency. To get a real 80 FPS, yes, you'd have to lower settings, but I think we're making this hypothetical more complicated than it needs to be: with FG you get more fluidity at the same latency, with more raw frames you get more fluidity at lower latency. This is the point I am making, and that's why they aren't directly comparable.
 
Inside a game, given the same setup, increasing frame rate decreases latency.
Sure, but you're talking about some inherently obvious latency from using FG - which isn't a thing at all. You can use FG or not use FG and get a similar latency at similar performance, which means the idea that MFG would somehow produce a clear "latency disconnect" is just wrong. The only thing that matters for latency is for it to be low enough for you to be able to play the game without input issues. The actual figure can be whatever, as it depends on the game, your own perception, your input method and a bunch of other things.

And yes, we've discussed all of this back in 2022 already. MFG doesn't add anything worth returning to that discussion for; it just shows you more frames - which is a clear positive change.

So we agree that Nvidia was being deceptive, and that using 4x FG vs 2x FG to claim the 5070 is faster than a 4090 is wrong?
It's marketing ¯\_(ツ)_/¯

It's better, but it's still misleading and incorrect.
No, it's not. You will be seeing more frames on a 50 series card. Yes, you won't get better latency on it, but since you're comparing FG to MFG that is expected already.
 
When the FPS counter says 160 FPS but it feels like 40 FPS, because 3 in every 4 frames are interpolated and your input has no effect on them.

I'm trying to understand what you mean by this. If your brain is confused by seeing the number 160, why not just turn off the FPS counter?

Also, you need to define what "feel" means. Is feel based on the number of individual frames your eyes see? The animation framerate? Mouse latency?

Frame gen only improves visual smoothness and doesn't significantly impact "gameplay feel". It's not a performance improvement in the classic sense, but it seems like a net win to me, assuming those generated frames are any good, of course.
 
I'm pretty sure Jensen put a caveat on that performance claim on stage, but it is getting messy. The genie is out of the bottle, so to speak, so language and reviews are going to have to adapt. There's no burying your head in the sand about the function of "AI" and tensor cores in GPUs now. They add something, and if it's not performance, then I don't know what the language should be. Right now, when we say performance we mean all parts of the GPU except the tensors (TOPS), I guess.

I still feel this is a problem with the "old heads/old guard" (maybe someone can suggest a better term for this?). If we ignore GPUs for a moment, performance in any other context - for reviewers, consumers, etc. - refers to how capably the device being discussed accomplishes the intended task.

Now, if we go back to GPUs with respect to gaming, what are they actually used for by consumers? Are consumers just looking for the raw frames put out and to compare those numbers (or just the number in the corner)? Or is it more than that?

I'll be frank again: I feel people need to come to terms with the fact that things aren't that easy anymore. We've gotten lazy in simply describing GPUs in terms of the user experience with a few simple numbers. I can see how this is scary for some of the public and for reviewers; it is, after all, much easier to just look up those numbers or generate them (often via scripts) than to actually use the products in the scenarios they are used for and provide some qualitative analysis of that experience.

And I'll stick to my point that what the current methodology measures isn't accurately described as performance either; we should use the term raw frames if we want to be accurate.

It's going to get fuzzier, and therefore harder, to discuss GPUs going forward, and I think that is okay.

This is more of a reviewer thread, so I'll stick to that perspective, but reviewers in the past relying on those numbers and not doing direct qualitative analysis was actually detrimental to people using those reviews for purchasing decisions. I don't know if people remember, for example, the frame time consistency issue that led to capture and review data looking at things like frame times and percentile FPS. The fact that we went what was basically years without any mention of it, even though it was noticeable in actual use of said products, is telling in terms of the problems with what we term "reviews."
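For context, the metric that shift introduced is straightforward to compute from captured frame times; here's a minimal sketch on synthetic data (no particular capture tool assumed):

# Average fps and "1% low" fps from a list of frame times in milliseconds.
def fps_metrics(frame_times_ms: list[float]) -> tuple[float, float]:
    avg_fps = 1000.0 / (sum(frame_times_ms) / len(frame_times_ms))
    worst_1pct = sorted(frame_times_ms)[-max(1, len(frame_times_ms) // 100):]
    low_1pct_fps = 1000.0 / (sum(worst_1pct) / len(worst_1pct))
    return avg_fps, low_1pct_fps

# Synthetic example: mostly smooth 10 ms frames with occasional 40 ms stutters.
frame_times = [10.0] * 990 + [40.0] * 10
avg, low = fps_metrics(frame_times)
print(f"Average: {avg:.0f} fps, 1% low: {low:.0f} fps")
# The plain average (~97 fps) hides the stutter; the percentile metric (25 fps) exposes it.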
 