Value of Hardware Unboxed benchmarking

As expected, MFG is rubbish. This comes as no surprise since 2x frame gen was rubbish. I really appreciate Tim going into detail and showing more examples. It's also nice to put to rest all the flawed latency arguments that have been going on. Using frame gen always provides an inferior game feel, and even Nvidia's own numbers validate that.
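To put a rough number on why that is, here's a simplified back-of-envelope model (purely illustrative, not Nvidia's measured pipeline): interpolation-based frame gen has to hold back the newest real frame until its successor is rendered, so click-to-photon delay tracks the base render rate no matter how many frames are inserted.

```python
# Simplified, illustrative model only (not Nvidia's actual pipeline): with
# interpolation-based frame generation, the newest real frame is held back
# until the following real frame exists, so input-to-photon latency follows
# the base render rate and can only get worse, regardless of the output multiplier.

def latency_ms(base_fps: float, fg_enabled: bool, fg_overhead_ms: float = 1.0) -> float:
    frame_time = 1000.0 / base_fps
    if not fg_enabled:
        return frame_time                     # show each real frame as soon as it finishes
    return 2 * frame_time + fg_overhead_ms    # hold frame N until frame N+1 is rendered

for base in (40, 60, 120):
    print(f"base {base:>3} fps: FG off ~{latency_ms(base, False):5.1f} ms, "
          f"FG on (2x/3x/4x alike) ~{latency_ms(base, True):5.1f} ms")
```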

Last cycle with the 4000 series, people were overhyping frame gen. Then when I finally got a 4000 series GPU, I could see all these artifacts, and it made a lot of reviews come across as Nvidia "paid" advertising. MFG has not only failed to improve frame gen image quality, it magnifies the problems that already exist.

All of this is due to Nvidia failing to meet their performance goals for the new generation. This frame gen noise is just like when monitor manufacturers started listing "dynamic contrast": misleading marketing designed to fool consumers into purchasing products based on false claims. Very few reviewers are talking about that; instead they act as an extension of Nvidia's marketing arm, offering excuses to justify the failure to meet performance targets rather than calling a spade a spade. There is almost zero IPC improvement, and very few people are talking about that either.

Frame gen would be fine if it was discussed as a value add. The way people are talking about it now just invites negative discussion about the technology.

A real discussion needs to be had about the quid pro quo that goes on nowadays with reviews. The idea that receiving a free 5090, 5080, 5070, etc. to review won't influence the nature of the review is detached from reality. If I got a free 5090, I'd be singing Nvidia's praises at every turn. It's hard to take seriously the opinions of those who do not pay for their products, and it's especially annoying when you spend your hard-earned money on the product only to see a huge deviation in experience from the "reviewers".

Other than HUB, who have been temporarily blacklisted, and Gamers Nexus, there aren't many reviewers who talk honestly about the products. There are way too many influencers masquerading as reviewers nowadays, Optimum Tech for example.
 
My opinion is that pausing the video on a generated frame is not very representative of the persistent errors frame gen may have: it exaggerates things that your eyes cannot see in motion, and is more academic than reflective of the actual use case.
There's a whole host of people who make these exact same jokes about the nitpicking DF often does in y'alls videos. lol

"And here when we zoom in closely we can see how this reflection is not actually entirely accurate..." :p
 
There's a whole host of people who make these exact same jokes about the nitpicking DF often does in y'alls videos. lol

"And here when we zoom in closely we can see how this reflection is not actually entirely accurate..." :p
DF don't zoom to highlight things that are otherwise imperceptible though, it's to prevent YouTube compression from mangling everything, and for viewers on lower res screens or phones. It's not at all the same.
 
There's a whole host of people who make these exact same jokes about the nitpicking DF often does in y'alls videos. lol

"And here when we zoom in closely we can see how this reflection is not actually entirely accurate..." :p
lol
Well, as mentioned below by @Qesa, zooming is for phones and for didactic purposes, for things that are visible at 100% frame size at PC monitor viewing distance. But the important thing I was trying to say in my comment is something I mentioned back in our original DLSS 3 review. The strobing nature and low persistence of error in frame generation means analysing it has to be different from how we analyse image reconstruction. In image reconstruction, error persists in every frame over a sequence, so it becomes identifiable rather easily. The strobing of frame generation changes the persistence of error subjectively, so you end up not seeing a whole lot of stuff that just would not fly if it were in every frame.

The physical size and active contrast of an error in frame generation have to be a lot larger and more common for it to be readily visible in motion at the frame rates frame generation is advised at (80-240 Hz), a lot larger than the errors we see in image reconstruction.

Another thing we are struggling with right now is communicating the effect of frame generation being on versus off for the same content. So, for example, a 5090 running Outlaws at 4K DLSS Performance mode with FG off, versus the same with FG on. The FG-on view on a 240 Hz screen has far lower persistence blur than the 70-80 fps of FG off... yet how do we tell an audience watching a video at 60 fps on YouTube that multi frame generation makes games a lot less blurry, when captured images do not show off this human-eye artifact?
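For what it's worth, both effects can be roughed out with the usual sample-and-hold approximation; a small sketch with illustrative numbers (not captured data), assuming a steady camera pan:

```python
# A rough sketch of the two effects above, using the common sample-and-hold
# approximation: perceived smear during a steady pan is roughly
# (pan speed in pixels per second) / (displayed frames per second).
# The pan speed and frame rates below are illustrative, not measurements.

PAN_SPEED_PX_PER_S = 1920.0   # e.g. a one-screen-width-per-second pan at 1080p

def smear_px(displayed_fps: float) -> float:
    return PAN_SPEED_PX_PER_S / displayed_fps

def frame_visible_ms(displayed_fps: float) -> float:
    return 1000.0 / displayed_fps

for fps, label in [(75.0, "FG off, ~75 fps"), (240.0, "MFG on, 240 Hz output")]:
    print(f"{label:22s}: smear ~{smear_px(fps):5.1f} px, "
          f"each frame on screen ~{frame_visible_ms(fps):4.1f} ms")
```

The same arithmetic also explains why a flaw confined to a generated frame is hard to spot: at 240 Hz it is on screen for only about 4 ms before being replaced.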
 
In the age of the 5090 being CPU limited at 4K, and CPUs not getting faster quickly enough... how can one gain smoother fps without frame generation?
 
In the age of the 5090 being CPU limited at 4K, and CPUs not getting faster quickly enough... how can one gain smoother fps without frame generation?
Of course there are titles where it can be CPU limited, as any GPU can be for sure, but declaring it outright CPU limited is just BS.
In fact, here's ComputerBase's benchmark, where every single title gained FPS from overclocking, which wouldn't happen if it were CPU limited.
And here's Guru3D's ROG Astral, which is already OC'd, being faster than the reference card and still gaining more from additional OC.
And would you look at that, the same thing happens at TechPowerUp.
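One hedged way to read that kind of OC scaling data (the figures below are hypothetical placeholders, not numbers pulled from those reviews): compare the relative FPS gain to the relative clock gain; a ratio near 1 means the GPU is the bottleneck, a ratio near 0 points at a CPU or engine limit.

```python
# Sanity-checking a "CPU limited" claim from overclocking data: if FPS scales
# with GPU clock, the GPU is the bottleneck; if it barely moves, it isn't.
# All numbers below are hypothetical placeholders, not review figures.

def gpu_bound_fraction(fps_stock, fps_oc, clk_stock_mhz, clk_oc_mhz) -> float:
    fps_gain = fps_oc / fps_stock - 1.0
    clk_gain = clk_oc_mhz / clk_stock_mhz - 1.0
    return fps_gain / clk_gain if clk_gain > 0 else 0.0

# Hypothetical example: +8% core clock yielding +6% FPS suggests a mostly
# GPU-bound scenario; +8% clock yielding only +1% FPS would point at a CPU limit.
print(f"{gpu_bound_fraction(100, 106, 2600, 2808):.2f}")   # ~0.75 -> largely GPU-bound
print(f"{gpu_bound_fraction(100, 101, 2600, 2808):.2f}")   # ~0.12 -> largely CPU-bound
```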
 
lol
Well, as mentioned below by @Qesa, zooming is for phones and for didactic purposes, for things that are visible at 100% frame size at PC monitor viewing distance. But the important thing I was trying to say in my comment is something I mentioned back in our original DLSS 3 review. The strobing nature and low persistence of error in frame generation means analysing it has to be different from how we analyse image reconstruction. In image reconstruction, error persists in every frame over a sequence, so it becomes identifiable rather easily. The strobing of frame generation changes the persistence of error subjectively, so you end up not seeing a whole lot of stuff that just would not fly if it were in every frame.

The physical size and active contrast of an error in frame generation have to be a lot larger and more common for it to be readily visible in motion at the frame rates frame generation is advised at (80-240 Hz), a lot larger than the errors we see in image reconstruction.

Another thing we are struggling with right now is communicating the effect of frame generation being on versus off for the same content. So, for example, a 5090 running Outlaws at 4K DLSS Performance mode with FG off, versus the same with FG on. The FG-on view on a 240 Hz screen has far lower persistence blur than the 70-80 fps of FG off... yet how do we tell an audience watching a video at 60 fps on YouTube that multi frame generation makes games a lot less blurry, when captured images do not show off this human-eye artifact?
I wasn't totally disagreeing with you or anything, just making a joke.

Tim from HUB is a fairly performance- and image-quality-sensitive person though, so I do trust it if he says that these things can be noticeable at full speed. But you're right that it makes it very hard to show people so they can make up their own minds. I'm probably less sensitive to a lot of these things, and think regular 2x FG is pretty great technology at the very least.
 

A very good video showing how the 5080 is indeed a 5070 in all but name. Again, it's quite shocking how few people are talking about this, especially in an era where Nvidia's gross margin is 75%. Granted, data center is doing most of the heavy lifting in the margin department, but it's now ridiculous. This is what happens when there is no competition.
 

A very good video showing how the 5080 is indeed a 5070 in all but name. Again, it's quite shocking how few people are talking about this, especially in an era where Nvidia's gross margin is 75%. Granted, data center is doing most of the heavy lifting in the margin department, but it's now ridiculous. This is what happens when there is no competition.
It's effectively a refresh, like the 4080 Super, since it is the same size chip on the same process. So I think it's more reasonable to think of it as a 4080 Super Super, with a new generation of features of course.

We can see the exact same thing in the AMD space with the 6600 XT > 7600 XT and even the 6800 XT > 7800 XT, where AMD benefited from a new process and packaging paradigm, but delivered ~5% more performance.
 
It's effectively a refresh, like the 4080 Super, since it is the same size chip on the same process. So I think it's more reasonable to think of it as a 4080 Super Super, with a new generation of features of course.

We can see the exact same thing in the AMD space with the 6600 XT > 7600 XT and even the 6800 XT > 7800 XT, where AMD benefited from a new process and packaging paradigm, but delivered ~5% more performance.
If you have to use AMD as a point of comparison, that should immediately raise red flags... AMD hasn't delivered a competitive product in a long, long time.
 
If you have to use AMD as a point of comparison, that should immediately raise red flags... AMD hasn't delivered a competitive product in a long, long time.
AMD was competitive in performance with RDNA 2, and looks to be competitive with RDNA 4 also. Their failure has been on the software side, and in making the required feature investments in hardware. But that's not the topic of discussion here. The point is that AMD and Nvidia have both gone through periods of refreshing their architecture, and periods where they made more radical changes.
 
AMD was competitive in performance with RDNA 2, and looks to be competitive with RDNA 4 also. Their failure has been on the software side, and in making the required feature investments in hardware. But that's not the topic of discussion here. The point is that AMD and Nvidia have both gone through periods of refreshing their architecture, and periods where they made more radical changes.
My point was that AMD should not be used as a point of comparison; instead, Nvidia should be compared against themselves historically. This generation is a historical anomaly for Nvidia and gives cause for concern. Finally, RDNA 2 was not a competitive product, and it sold about as well as an uncompetitive product could. As a package, it was completely inferior to Ampere, which delivered similar raster performance, superior ray tracing performance, and had the software suite to back all the hardware changes. The industry leader does not often look to a stumbling competitor for inspiration on what performance goals to target. If they do that, it won't be long until they're no longer the leader.
 
My point was that AMD should not be used as a point of comparison; instead, Nvidia should be compared against themselves historically. This generation is a historical anomaly for Nvidia and gives cause for concern. Finally, RDNA 2 was not a competitive product, and it sold about as well as an uncompetitive product could. As a package, it was completely inferior to Ampere, which delivered similar raster performance, superior ray tracing performance, and had the software suite to back all the hardware changes. The industry leader does not often look to a stumbling competitor for inspiration on what performance goals to target. If they do that, it won't be long until they're no longer the leader.
Your history books must be really short.
 
My point was that AMD should not be used as a point of comparison; instead, Nvidia should be compared against themselves historically. This generation is a historical anomaly for Nvidia and gives cause for concern. Finally, RDNA 2 was not a competitive product, and it sold about as well as an uncompetitive product could. As a package, it was completely inferior to Ampere, which delivered similar raster performance, superior ray tracing performance, and had the software suite to back all the hardware changes. The industry leader does not often look to a stumbling competitor for inspiration on what performance goals to target. If they do that, it won't be long until they're no longer the leader.
AMD has been close to Nvidia in performance in the sectors they've competed in for the last 20+ years. Over some periods they fell behind in performance per area and performance per watt, and they haven't always competed at the high end. They are substantially behind in marketing, mindshare and features, but that's not relevant to my argument.

My point is not that Nvidia should be following AMD or using them as inspiration, but that both companies are facing the same set of limitations. So if Nvidia really is "missing" something with their designs, AMD should be able to capitalize on that. I presume that Nvidia wants to sell as many 5000 series cards as possible, so they wanted to make the fastest product within their target margins. So then the question is really, within the same transistor budget, the same process, and the allocated engineering resources, could they have made a substantially faster product? And I don't see how you can answer that question, simply by pointing to advancements that Nvidia has made in the past, as those advancements are already baked in.

That's when you would typically talk about what competitors are doing in the same space, though you seem to want to say that such discussion is off limits.
 
I think we need some perspective here with RTX 4xxx and RTX 5xxx. Ampere -> Ada benefitted from the move from Samsung 8nm to TSMC 4N (really a 5nm iteration). That was by far the largest single process jump, in terms of iteration count, of any GPU generation. So you had significantly more efficient and performant transistors, and the ability to pack more of them into any given space, though not necessarily at any given cost. Remember, the chief complaint about the 4xxx series was price, except for the 4090, which for some reason was very generously (if we want to use that word) priced relative to the previous generation; the rest of the stack was much less so, with a combination of price increases and perf/$ regressions in some cases.

Could the 5xxx series have been "better"? At least somewhat, if they had designed it for 3nm, but I think people should be cognizant that this would have increased costs that Nvidia would have wanted to pass on, and a "better" product would also have been priced higher just due to desirability.

Instead, what we have, if you look at the chip design from GB203 downwards, seems rather cost optimized, and the maybe ~10% perf/transistor and efficiency gains on the same node shouldn't just be dismissed.

Look, I know what people are really thinking: they would have wanted something like Blackwell on 3nm with higher gains (maybe on the order of at least 35% in performance and efficiency) at the same prices they actually set. That's fine as ideal wishful thinking, but was that really a plausible alternative? It's fine that everyone wants better products for cheaper, but at some point it just turns into wishful thinking and ranting rather than any kind of meaningful discourse.
 
It's effectively a refresh, like the 4080 Super, since it is the same size chip on the same process. So I think it's more reasonable to think of it as a 4080 Super Super, with a new generation of features of course.

We can see the exact same thing in the AMD space with the 6600 XT > 7600 XT and even the 6800 XT > 7800 XT, where AMD benefited from a new process and packaging paradigm, but delivered ~5% more performance.
Blackwell is not a refresh. It's a new architecture with new hardware features.
 
Blackwell is twice as fast with FP4. Performance is not the only metric. Blackwell has a much better display engine, a new encoder and decoder, better Tensor Cores, a better memory controller, new RT Core features, a new AI core processor, etc.

I think Blackwell is a bigger leap from Lovelace than Ampere was from Turing.
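The FP4 figure is essentially a datapath argument; a tiny illustrative calculation, using a made-up FP8 baseline rather than any published RTX 50 specification:

```python
# Illustrative only: at a fixed tensor datapath width and clock, peak MAC rate
# scales roughly with how many elements fit per cycle, so 4-bit FP4 lands at
# about twice the dense FP8 rate. The FP8 baseline below is a made-up
# placeholder, not an RTX 50 series specification.

def peak_dense_tops(fp8_tops: float, bits_per_element: int) -> float:
    return fp8_tops * (8 / bits_per_element)

ASSUMED_FP8_TOPS = 800.0   # placeholder baseline for the example
for bits in (8, 4):
    print(f"FP{bits}: ~{peak_dense_tops(ASSUMED_FP8_TOPS, bits):.0f} dense TOPS (illustrative)")
```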
 
Blackwell is twice as fast with FP4. Performance is not the only metric. Blackwell has a much better display engine, a new encoder and decoder, better Tensor Cores, a better memory controller, new RT Core features, etc.

I think Blackwell is a bigger leap from Lovelace than Ampere was from Turing.
I never said performance was the only metric. The topic was not the Blackwell architecture in general, but GB203 specifically, and HU's claim that it delivers a disappointing performance improvement, and one not in line with previous '80 class cards.
 
How nVidia names their products has nothing to do with the performance improvement. It is based on the price point, which logically has to increase with inflation.

BTW: This channel has dropped another "RTX 5080 is bad" video. That must be the third or fourth video they have released since the end of the NDA.
 