Value of Hardware Unboxed benchmarking

As expected, MFG is rubbish. This comes as no surprise since 2x FG was rubbish. I really appreciate Tim going into detail and showing more examples. It's also nice to put to rest all the flawed latency arguments that had been going on. Using frame gen always results in an inferior game feel, and even Nvidia's own numbers validate that….
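For anyone wondering where that latency hit actually comes from, here's a rough back-of-envelope sketch (a simplified model with assumed numbers, not Nvidia's figures): interpolation-based frame gen has to generate and display an in-between frame before the newest real frame, so render-to-display delay grows even while the fps counter climbs.

```python
# Back-of-envelope sketch of why interpolation-style frame gen adds latency.
# All numbers here are assumed for illustration; this is a simplified model,
# not Nvidia's pipeline or any measured data.

def frame_time_ms(fps: float) -> float:
    """Time one frame occupies at a given frame rate, in milliseconds."""
    return 1000.0 / fps

native_fps = 70.0   # assumed base render rate with frame gen off
gen_cost_ms = 3.0   # assumed cost to generate one interpolated frame

# FG off: the newest rendered frame can be displayed as soon as it finishes.
latency_off = frame_time_ms(native_fps)

# FG on (interpolation): the generated in-between frame has to be created and
# shown before the newest real frame, so roughly an extra half native frame
# interval plus generation cost is added in this simplified model.
latency_on = latency_off + 0.5 * frame_time_ms(native_fps) + gen_cost_ms

print(f"FG off: ~{latency_off:.1f} ms to display the newest frame at {native_fps:.0f} fps")
print(f"FG on : ~{latency_on:.1f} ms, even though the fps counter roughly doubles")
```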

Last cycle, with the 4000 series, people were overhyping frame gen. Then, when I finally got a 4000-series GPU, I could see all these artifacts. It made a lot of reviews come across as Nvidia "paid" advertising. MFG has not only failed to improve the frame gen experience as it relates to image quality, it magnifies the problems that already exist.

All of this is due to Nvidia failing to meet their performance goals for the new generation. This frame gen noise is just like when monitor manufacturers started listing "dynamic contrast": misleading marketing designed to fool consumers into purchasing products based on false claims. Very few reviewers are talking about that; instead they act as an extension of Nvidia's marketing arm. Lots of excuses are offered to justify the failure to meet performance targets instead of calling a spade a spade. There's almost zero IPC improvement, and very few people are talking about that either….

Frame gen would be fine if it were discussed as a value add. The way people are talking about it now just invites negative discussion about the technology.

A real discussion needs to be had about the quid pro quo that goes on nowadays with reviews. The idea that receiving a free 5090, 5080, 5070, etc. to review won't influence the nature of the review is detached from reality. If I got a free 5090, I'd be singing Nvidia's praises at every turn. It's hard to take seriously the opinions of those who do not pay for their products. It's especially annoying when you spend your hard-earned money on the product only to see a huge deviation in experience from the "reviewers".

Other than HUB, who has been temporarily blacklisted, and Gamers Nexus, there aren't many reviewers who talk honestly about the products. There are way too many influencers masquerading as reviewers nowadays, for example Optimum Tech…..
 
My opinion is that pausing the video on a generated frame is not really representative of the persistent errors frame gen may have: it exaggerates things that your eyes cannot see, and is more academic than it is about the actual use case.
There's a whole host of people who make these exact same jokes about the nitpicking DF often does in y'alls videos. lol

"And here when we zoom in closely we can see how this reflection is not actually entirely accurate..." :p
 
There's a whole host of people who make these exact same jokes about the nitpicking DF often does in y'alls videos. lol

"And here when we zoom in closely we can see how this reflection is not actually entirely accurate..." :p
DF don't zoom in to highlight things that are otherwise imperceptible, though; it's to prevent YouTube compression from mangling everything, and for viewers on lower-res screens or phones. It's not at all the same.
 
There's a whole host of people who make these exact same jokes about the nitpicking DF often does in y'alls videos. lol

"And here when we zoom in closely we can see how this reflection is not actually entirely accurate..." :p
lol
Well, as mentioned below by @Qesa, zooming is for phones and for didactic purposes, for things that are visible at 100% frame size at PC monitor viewing distance. But the important point I was trying to make in my comment is something I mentioned back in our original DLSS 3 review: the strobing nature and low persistence of error in frame generation means analysing it has to be different from how we analyse image reconstruction. In image reconstruction, error persists in every frame over a sequence, so it becomes identifiable rather easily. The strobing of frame generation changes the persistence of error subjectively, so you end up not seeing a whole lot of stuff that just would not fly if it were in every frame.

The physical size and active contrast of an error in frame generation has to be a lot larger and more common for it to be readily visible in motion at the frame rates frame generation is advised at (80–240 Hz). A lot larger than the error we see in image reconstruction.

Another thing we are struggling with right now is communicating the effect of frame generation being on versus off for the same content. So, for example, a 5090 running Outlaws at 4K DLSS Performance mode with FG off, vs. the same with FG on. The FG-on view on a 240 Hz screen has way lower persistence blur than the 70–80 fps of FG off... yet how do we tell an audience watching a video at 60 fps on YouTube that multi frame generation makes games a lot less blurry, if images captured of it do not show off this human-eye artifact?
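If it helps to put rough numbers on that last point (an illustrative sketch only, assuming an ideal sample-and-hold display and a made-up pan speed): the smear your eye sees is roughly how far a tracked object travels while a single frame is held on screen, so it's the presented frame rate that shrinks it, regardless of the underlying render rate.

```python
# Illustrative sketch of sample-and-hold persistence blur (assumed numbers,
# not measurements). On a sample-and-hold display, an eye-tracked object
# smears across roughly (tracking speed) x (time each frame stays on screen).

def smear_px(pan_speed_px_per_s: float, presented_fps: float) -> float:
    hold_time_s = 1.0 / presented_fps      # how long one frame is held
    return pan_speed_px_per_s * hold_time_s

pan_speed = 2000.0   # assumed camera pan speed in pixels per second

for label, fps in [("FG off, ~75 fps presented", 75.0),
                   ("MFG on, ~240 fps presented", 240.0)]:
    print(f"{label}: ~{smear_px(pan_speed, fps):.0f} px of smear per frame")

# ~27 px vs ~8 px of smear: a difference a viewer sees live on a 240 Hz
# screen, but which a 60 fps YouTube capture simply cannot convey.
```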
 
In the age of the 5090 being CPU limited at 4K, and CPUs not getting faster quickly enough... how can one gain smoother fps without frame generation? ...
Of course there are titles where it can be CPU limited, like any GPU can be, for sure, but declaring it outright CPU limited is just BS.
In fact, here's ComputerBase's benchmark, where every single title gained FPS from overclocking, which wouldn't happen if it were CPU limited.
And here's Guru3D's ROG Astral, which is already OC'd, being faster than reference and still gaining more from additional OC.
And would you look at that, the same thing happens at TechPowerUp.
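That's the sanity check those benchmarks amount to: if fps scales when only the GPU clock changes, the GPU is the limiter; if it stays flat, the CPU is. A minimal sketch of that logic, with hypothetical numbers rather than anything taken from those reviews:

```python
# Minimal sketch of the bottleneck check implied above (hypothetical numbers,
# not from ComputerBase / Guru3D / TechPowerUp): if average fps rises along
# with the GPU clock, the game is GPU-limited at those settings; if fps stays
# flat despite the clock bump, something else (usually the CPU) is the limiter.

def classify(fps_stock: float, fps_oc: float, clock_gain_pct: float,
             threshold: float = 0.5) -> str:
    fps_gain_pct = (fps_oc / fps_stock - 1.0) * 100.0
    # Call it GPU-limited if fps gains track a meaningful share of the clock gain.
    return "GPU-limited" if fps_gain_pct > threshold * clock_gain_pct else "CPU-limited"

print(classify(fps_stock=95.0,  fps_oc=101.0, clock_gain_pct=8.0))  # scales -> GPU-limited
print(classify(fps_stock=144.0, fps_oc=145.0, clock_gain_pct=8.0))  # flat   -> CPU-limited
```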
 
lol
Well, as mentioned below by @Qesa, zooming is for phones and for didactic purposes, for things that are visible at 100% frame size at PC monitor viewing distance. But the important point I was trying to make in my comment is something I mentioned back in our original DLSS 3 review: the strobing nature and low persistence of error in frame generation means analysing it has to be different from how we analyse image reconstruction. In image reconstruction, error persists in every frame over a sequence, so it becomes identifiable rather easily. The strobing of frame generation changes the persistence of error subjectively, so you end up not seeing a whole lot of stuff that just would not fly if it were in every frame.

The physical size and active contrast of an error in frame generation has to be a lot larger and more common for it to be readily visible in motion at the frame rates frame generation is advised at (80–240 Hz). A lot larger than the error we see in image reconstruction.

Another thing we are struggling with right now is communicating the effect of frame generation being on versus off for the same content. So, for example, a 5090 running Outlaws at 4K DLSS Performance mode with FG off, vs. the same with FG on. The FG-on view on a 240 Hz screen has way lower persistence blur than the 70–80 fps of FG off... yet how do we tell an audience watching a video at 60 fps on YouTube that multi frame generation makes games a lot less blurry, if images captured of it do not show off this human-eye artifact?
I wasn't totally disagreeing with you or anything, just making a joke.

Tim from HUB is a fairly performance- and image-quality-sensitive person, though, so if he says these things can be noticeable at full speed, I trust that. But you're right that it makes it very hard to show to people so they can make up their own minds. I'm probably less sensitive to a lot of these things, and think regular 2x FG is pretty great technology at the very least.
 