Value of Hardware Unboxed benchmarking

You are both correct. BFI improves the perception of motion. Unlike CRTs, LCDs are sample-and-hold, so within a 16.67ms window the LCD pixel will continue to hold a color static for the entire duration, and then instantly jump to a different frame at refresh. This is horrible. BFI reduces the sample duration. By inserting black frames, it tricks our brain into interpolating the space between the two visible frames, thereby creating the illusion of smoother motion. It's exactly why scanlines have an antialiasing effect on low-resolution content, except that is over space instead of time. Frame generation is actually inserting interpolated frames instead of having to trick our brain into doing it. With sufficient interpolated frames, the sample-and-hold artifacting will become almost imperceptible.
But I don't think that BFI makes it smoother. It cuts the transition time between frames. 60 FPS at 60 Hz is just 60 FPS. More frames are always better. Otherwise why would we play at more than 30 FPS, 60 FPS, 120 FPS...?!

I don't understand this. Let's take FG to its extreme and say that hypothetically it's interpolating 10000 frames every 16.67ms. Motion is now perfect, but the system still responds to inputs every 16.67ms. Are you saying that would feel weird? A higher sampling rate would be obviously and perceptibly better, but you're saying that 10000 fps motion with 60fps sampling feels worse than 60fps motion at 60fps input sampling? I won't challenge you, but have you actually done A/B comparisons? Are you sure it isn't input lag (which is horrible) that's bothering you rather than the input sampling rate?
Nvidia's G-Sync Pulsar could be a great example of how disconnected FPS and motion clarity really are. Nvidia claims that Pulsar provides 4x the clarity, so at 100 FPS the display behaves as if it were displaying 400 FPS.

In the end it doesn't matter how we get to better motion clarity and smoothness. If somebody wants the lowest input latency, they would have to buy the best hardware available. But better hardware (displays) and better software (FG) can provide cheaper ways.
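As a rough illustration of how persistence, rather than raw frame count, drives that clarity claim, here's a back-of-the-envelope sketch (the function name and the duty-cycle numbers are my own assumptions, not vendor specs):

```python
# Back-of-the-envelope persistence math (toy numbers, not vendor specs).
# "Persistence" here means how long each frame actually stays lit on the panel.
def persistence_ms(fps: float, visible_fraction: float = 1.0) -> float:
    # visible_fraction = 1.0  -> plain sample-and-hold
    #                    0.5  -> BFI (half of each refresh is black)
    #                    0.25 -> a 4x-shorter strobe, like the claimed Pulsar figure
    return 1000.0 / fps * visible_fraction

print(f"{persistence_ms(60):.2f} ms")         # 16.67 ms: classic sample-and-hold
print(f"{persistence_ms(60, 0.5):.2f} ms")    #  8.33 ms: 60 fps with BFI
print(f"{persistence_ms(100, 0.25):.2f} ms")  #  2.50 ms: 100 fps with a 4x strobe
print(f"{persistence_ms(400):.2f} ms")        #  2.50 ms: same hold time as real 400 fps
```

The last two lines are the point: a strobed 100 fps image and a real 400 fps image can have the same persistence, which is the "behaves like 400 FPS" claim, even though only one of them updates the world 400 times a second.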
 
100%. I would give you more % if I could but sadly in this context I am limited to 100.
Change your baseline. Rather than give percentage relative to your maximum, give a percentage relative to something else, such as "enthusiasm felt by a dung beetle who's noticed he's picked a little extra poop just rolling his ball home." Then small-print that.

Yes, 36,000,000%!

Relative to a baseline enthusiasm felt by a dung beetle who's noticed he's picked a little extra poop just rolling his ball home.

Or, if you're a politician, just leave the explanation out and have an unqualified metric that sounds good ("I give more than the paltry 100% of my rivals") and can be rationalised if queried.
 
I actually played the Elden Ring DLC with Lossless Scaling; it's locked to 60 fps, so I was doing the 60-to-120 thing. Early on I had a particularly good fight where I landed more parries than I normally ever do, so I turned FG off to see how I went, and I didn't do as well. For some reason I parry better with FG than native. I don't really get it; maybe the smoother motion makes it easier to notice the tells I'm looking for when deciding to parry? Because latency obviously hasn't improved.
That is absolutely what is happening and is honestly a great point in all this.

While the inherent input lag benefits of higher 'real' framerates can be noticeable for your general 'second to second' gameplay feel, we shouldn't discount that higher visual smoothness and clarity makes it easier to read what's going on, and can give us a leg up on reaction time. So yeah, better player performance in games at higher framerates isn't necessarily all about the input lag alone. And so in this case, assuming you've got a display that can actually show the extra frames, frame generation could totally still provide some input response benefit.

And I think Souls games are one of the better examples where the better readability helps, because ever since Bloodborne, From Software have had this predilection for very visually noisy bosses with lots of exaggerated and lavish animations that, combined, can be difficult to read in the moment. Being able to read what's happening more easily and quickly is definitely going to be an advantage to some degree.
 
No, lag and rate are different. For example, if I have a really crappy peripheral it could take 10 seconds for the first button press to trigger an event in the game engine, but every subsequent button press could get queued up every 16ms after your first press and trigger events at a steady 16ms rate at the game engine, except that each event is delayed 10 seconds from when you pressed the button. That would feel awful, but it's really an input lag issue and not a sampling rate issue. In computer architecture terms it's a latency problem not a throughput/rate problem. For most people a "small" amount of *consistent* lag is something that the brain can adapt to and learn to ignore. A/B'ing will remind you immediately, but play with a small lag for a while and your brain will acclimatize to it. But of course, lower lag is strictly better.

Since your A/B comparison was with FG on-vs-off I suspect (but of course can't confirm) that what you're perceiving is just vanilla input lag, and it's not a sample rate issue. If true, then the "floatiness" you feel should be no worse with 3x or 4x FG compared to 2x FG because the lag doesn't change much. But all FG will feel more disconnected than no-FG (all other factors being equal) and especially so if you're playing a mouse-based game.
Possibly so, if I eventually get a 50 series card I can test this.

That said though, my peripherals are fairly low latency (let's just assume this is the case, I haven't actually tested it lol). If I run a game at 120 fps vs 60 fps, latency should be drastically lowered, correct? I would imagine the game would be responding to input twice as fast, since it's drawing frames in half the time.
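To make the latency-versus-rate distinction from the post above concrete, here's a toy sketch (the 10-second peripheral lag is the deliberately absurd example from that post; every number is invented purely for illustration):

```python
# Toy illustration of latency vs. rate: a hypothetical peripheral adds a fixed 10 s
# delay to every press, yet presses still reach the engine at a steady 16 ms cadence.
PERIPHERAL_LAG_S = 10.0     # absurd made-up lag (latency)
PRESS_INTERVAL_S = 0.016    # presses queued every 16 ms (rate / throughput)

press_times = [i * PRESS_INTERVAL_S for i in range(5)]        # when you press
event_times = [t + PERIPHERAL_LAG_S for t in press_times]     # when the engine reacts

for pressed, seen in zip(press_times, event_times):
    print(f"pressed at {pressed:6.3f} s -> engine event at {seen:6.3f} s")

gap_ms = (event_times[1] - event_times[0]) * 1000
print(f"event-to-event gap is still {gap_ms:.0f} ms; only the delay is bad")

# And the 60 vs 120 fps frame-time question: the wait between real simulation
# updates roughly halves (ignoring render-queue and display effects).
print(f"60 fps frame time: {1000/60:.2f} ms, 120 fps: {1000/120:.2f} ms")
```

So yes, doubling the real framerate roughly halves the frame time the game is working with, which is the part generated frames don't touch.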
 
I can’t help but think you’ve missed the info on Reflex 2. It makes the input polling rate the same as the display frame rate - including polling and updating a generated frame just before the frame buffer swap.
Yes, Reflex 2 should help with this a lot, but having used async space warp before, I'm still a bit skeptical.
 
What does Reflex 2 do? Are there any good videos explaining it?

It's the same style of optical warp that's done to improve VR latency.


This is the link to the earlier research that is shown in the video.


But other companies have been doing this for a while. They refer to one of the fixes as "guard bands," which is what VR does. It's rendering an image that's larger than the screen, so when you warp the image you can fill in the gaps from the pixels in the guard bands. It seems like Nvidia is doing a different approach and filling in the gaps with AI stuff, which to me seems more prone to issues, but I guess we'll find out.

Edit: Notice they show Valorant, which is a game you can easily run at 250+ fps, even on lower-end PCs. Even if you're flicking fast, the GPU is rendering so many frames that it's the ideal case for warping, because any gaps after warping will be very small, even on fairly fast flicks. It will be more interesting to see the results at, say, 120 fps or less.
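For anyone curious what the guard-band trick looks like mechanically, here's a deliberately crude sketch of the idea (my own toy, not how any shipping VR runtime implements it; the viewport width, band size, and function name are made up):

```python
import numpy as np

# Crude guard-band sketch: render wider than the viewport, then let a late camera-yaw
# correction slide the crop window into the spare pixels instead of exposing a blank strip.
VIEW_W, GUARD = 1920, 192                     # viewport width, guard band per side (~10%)
rendered_row = np.arange(VIEW_W + 2 * GUARD)  # stand-in for one row of an oversized render

def crop_with_late_yaw(shift_px: int) -> np.ndarray:
    """Take the viewport-sized window, offset by the late rotation (in pixels)."""
    start = GUARD + shift_px
    if not 0 <= start <= 2 * GUARD:
        raise ValueError("rotation exceeded the guard band; the gap has to be invented")
    return rendered_row[start:start + VIEW_W]

print(crop_with_late_yaw(0)[:3])      # no late rotation: centered crop
print(crop_with_late_yaw(150)[:3])    # fast flick, still covered by the guard band
try:
    crop_with_late_yaw(300)           # flick bigger than the guard band
except ValueError as err:
    print("fallback needed:", err)
```

Which is also why the fill-the-gap-with-AI approach is the interesting part: it's what you fall back on once the motion is bigger than whatever margin you rendered.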
 
It's the same style of optical warp that's done to improve VR latency.


This is the link to the earlier research that is shown in the video.


But other companies have been doing this for a while. They refer to one of the fixes as "guard bands," which is what VR does. It's rendering an image that's larger than the screen, so when you warp the image you can fill in the gaps from the pixels in the guard bands. It seems like Nvidia is doing a different approach and filling in the gaps with AI stuff, which to me seems more prone to issues, but I guess we'll find out.

Edit: Notice they show Valorant, which is a game you can easily run at 250+ fps, even on lower end pcs. Even if you're flicking fast, the gpu is rendering so many frames that it's the ideal case for warping, because any gaps after warping will be very small, even on fairly fast flicks. Will be more interesting to see the results at say 120 fps or less.
So it has to guess what should be there? What about when it's guessing from an AI generated frame that was generated from a frame that was upscaled with AI? Lots of guesswork. Seems this could be taken too far but we'll see. Maybe it's not noticeable on the edges of the screen if your monitor is big enough. Or maybe like you said it's intended for competitive shooters with super high framerates.
 
Edit: Notice they show Valorant, which is a game you can easily run at 250+ fps, even on lower end pcs. Even if you're flicking fast, the gpu is rendering so many frames that it's the ideal case for warping, because any gaps after warping will be very small, even on fairly fast flicks. Will be more interesting to see the results at say 120 fps or less.

Notice they say Valorant is running at 800 fps 😱
 
So it has to guess what should be there? What about when it's guessing from an AI generated frame that was generated from a frame that was upscaled with AI? Seems this could be taken too far but we'll see. Maybe it's not noticeable on the edges of the screen if your monitor is big enough. Or maybe like you said it's intended for competitive shooters with super high framerates.

It’s not guessing. Watch the video.
 
It’s not guessing. Watch the video.
I watched it. At least around the edges of the screen it is literally guessing.

guess
verb
to give an answer to a particular question when you do not have all the facts and so cannot be certain if you are correct:

Mind you it's probably pretty damn good at guessing. I couldn't really see any obvious artifacting in those clips.
 
I watched it. At least around the edges of the screen it is literally guessing.



Mind you it's probably pretty damn good at guessing. I couldn't really see any obvious artifacting in those clips.

Also, how much of the screen edge does it typically need to fill in each frame? 1% vs 10% could make quite a difference.
 
Notice they say Valorant is running at 800 fps 😱

Yep. Valorant is a game that was designed to be competitive and run on just about anything, so high-end GPUs will run it at rates like that. The higher the frame rate, the less noticeable the warping is going to be. Reflex 2 on a game like Cyberpunk or something might not be so nice. It feels like tech specifically for competitive games and high frame rates.
 
Also, how much of the screen edge does it typically need to fill in each frame? 1% vs 10% could make quite a difference.
Like @Scott_Arm said, it would depend on the framerate and how fast you're flicking. At 30 FPS a fast flick could lead to much more than 10% of the frame being completely hallucinated (that's a guess :) ; rough numbers below). I doubt it's intended for that use case, and it might have some limit to how much it can pad out on the edges.
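Here are those rough numbers as a sketch (the flick speed and FOV are assumptions I picked, not measurements, and the linear degrees-to-width mapping is a simplification):

```python
# Rough estimate of how much of the frame a warp would have to invent at the edge.
# Flick speed and FOV are assumed values; real projection isn't perfectly linear.
FLICK_DEG_PER_S = 360.0   # a fast 180-degrees-in-half-a-second flick
HFOV_DEG = 103.0          # a typical-ish horizontal field of view

for fps in (30, 60, 120, 250, 800):
    degrees_per_frame = FLICK_DEG_PER_S / fps
    exposed_fraction = min(degrees_per_frame / HFOV_DEG, 1.0)
    print(f"{fps:4d} fps: ~{exposed_fraction:6.1%} of the width newly exposed per frame")
```

At 30 fps that's over a tenth of the screen width per frame during a hard flick, while at the 800 fps Valorant numbers it drops to a fraction of a percent, which lines up with why they demoed it there.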

I was highly skeptical of DLSS and framegen but both I see as essential now, so I'm not too worried. They've been on point with most of this stuff so far.

Perhaps the thing that excites me the most is being able to override DLSS with the new version in the NV App. At least I think that's what they were saying. That would be incredible. There are hundreds of games that already support DLSS and this would be a free upgrade for everyone in all those games, going back to Turing users. Hopefully this can't trigger any game bans like that AMD thing did a while back.

All in all I went from expecting NVIDIA to fuck us to being pleasantly surprised on pretty much all fronts. They could have really twisted the knife and still maintained ~90% market share IMO.
 
I watched it. At least around the edges of the screen it is literally guessing.



Mind you it's probably pretty damn good at guessing. I couldn't really see any obvious artifacting in those clips.

It uses past and current frame data, plus movement data, and feeds it into an AI transformer that calculates the pixels that have to be filled in. It's no more guessing than DLSS Super Resolution is. Or DLAA, DLDSR, Frame Generation or any other deep learning/neural network feature.
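A minimal sketch of that accumulate-and-reuse idea (my own toy, not Nvidia's actual pipeline; the buffer sizes, pan amounts, and the -1 "needs synthesis" marker are all invented):

```python
import numpy as np

# Toy 1D warp-with-history: the camera pans right, the previous frame is shifted left,
# and the newly exposed strip on the right is filled from pixels accumulated from even
# older frames where such history exists. Anything not covered must be synthesized.
W = 16
prev_frame = np.arange(100, 100 + W)          # last rendered frame (pixel "values")
history    = np.arange(100 + W, 100 + W + 8)  # pixels accumulated from older frames

def warp_right(pan_px: int) -> np.ndarray:
    warped = np.empty(W, dtype=prev_frame.dtype)
    warped[:W - pan_px] = prev_frame[pan_px:]        # reproject what we already have
    fill = history[:pan_px]                          # reuse accumulated history
    warped[W - pan_px:W - pan_px + len(fill)] = fill
    warped[W - pan_px + len(fill):] = -1             # -1 = no data anywhere, must be synthesized
    return warped

print(warp_right(4))    # small pan: history covers the exposed strip completely
print(warp_right(12))   # big pan: history runs out, trailing -1s have to be synthesized
```

Whether you call that last bit "guessing" or "inference" is basically the argument happening in this thread.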
 
It uses past and current frame data, plus movement data, and feeds it into an AI transformer that calculates the pixels that have to be filled in. It's no more guessing than DLSS Super Resolution is. Or DLAA, DLDSR, Frame Generation or any other deep learning/neural network feature.
I'm talking about the edges of the screen. It has literally no information about something that is outside of the frame, unless maybe you're shaking the camera back and forth really fast.
 
I'm talking about the edges of the screen. It has literally no information about something that is outside of the frame, unless maybe you're shaking the camera back and forth really fast.

Of course it does. Do you think that pixel and motion data for pixels that have moved off the screen can't be tracked and accumulated from prior frames? How do you explain the presented video, which is obviously showing deterministic and not "guessed" output? It's just like DLSS, or Ray Reconstruction.

Read what Andrew posted, techniques like this have been used in the VR space for a long time.
 
It uses past and current frame data, plus movement data, and feeds it into an AI transformer that calculates the pixels that have to be filled in. It's no more guessing than DLSS Super Resolution is. Or DLAA, DLDSR, Frame Generation or any other deep learning/neural network feature.

I don't think this is correct. DLSS Super Resolution renders an entire frame before upscaling. At worst you would have a low-resolution image with a naive upscale and aliasing. Then you take history and heuristics to try to enhance the image.

Frame gen is just interpolating between two frames that you have data for. It can have disocclusion problems too. If they really are doing extrapolation, then there's definitely an issue of guessing what fills in the gaps.

Warping literally has no information about what's off screen, or what's occluded behind objects in a frame. Sure, you have a depth buffer, but if you have an object in front of the camera, you have no idea what's behind it, and moving could disocclude that area; then what do you fill that disoccluded space in with? Usually for VR warping they render a frame that's larger than the camera's view. It would be, say, 10% larger at the top, bottom, left and right edges (that's just a guess at the size of the guard bands). Then when you warp, you have data from off the screen edges to pull into the frame, with some warping because of rotations and whatnot. But disocclusion is a common failure case, and I'm not sure what kind of data you would have to know what's behind an object the camera hasn't looked behind before.
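To make the disocclusion failure case concrete, here's a tiny toy (the depths, camera shift, and nearest-pixel splat are all invented for illustration):

```python
import numpy as np

# Toy disocclusion demo: a near object sits in front of a far wall and the camera moves
# sideways. Parallax moves near pixels much further than far ones, so the patch of wall
# that was hidden behind the object receives no source pixel at all.
W = 12
depth = np.full(W, 10.0)      # far wall
depth[4:7] = 2.0              # near object covering columns 4..6
color = np.array([f"wall{i}" for i in range(W)], dtype=object)
color[4:7] = "object"

CAMERA_SHIFT = 3.0            # sideways camera move, arbitrary units
warped = np.full(W, None, dtype=object)

for x in range(W):
    parallax = int(round(CAMERA_SHIFT / depth[x] * 10))  # nearer pixels move further
    new_x = x + parallax
    if 0 <= new_x < W:
        warped[new_x] = color[x]   # crude splat; a real warp would resolve depth conflicts

print(warped)  # the None entries are holes: newly exposed edge pixels plus the
               # disoccluded wall behind where the object used to be
```

Nothing in that single frame says what belongs in the None slots, which is exactly the gap that guard bands, accumulated history, or a neural fill has to cover.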
 