Value of Hardware Unboxed benchmarking

Of course it does. Do you think pixel and motion data for pixels that have moved off the screen can't be tracked and accumulated from prior frames? How do you explain the presented video, which is obviously showing deterministic and not "guessed" output? It's just like DLSS, or Ray Reconstruction.

Read what Andrew posted, techniques like this have been used in the VR space for a long time.
Past pixels are one thing. The problem is pixels that haven't been in view yet.
 
In Nvidia’s words:

Through our research, NVIDIA has developed a latency-optimized predictive rendering algorithm that uses camera, color and depth data from prior frames to in-paint these holes accurately.

We’ll have to wait on Nvidia to release their research papers. They did mention they have been working on it for five years.
 

What happens if your prior frames do not contain any data about where you're now looking?

Edit: How many frames of data are you going to keep? What if I'm playing Valorant, holding an angle for 2 s, and then flick 180 degrees to look behind me? SUPER common scenario. At, say, 240 fps, which is low for that game, what happens when I flick to look behind me? The frame history will have no data outside my field of view, which is only 103 degrees horizontally in that game (I think).
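A quick back-of-the-envelope check of that flick scenario. All numbers here are assumptions for illustration: a flick covering 180 degrees in roughly 100 ms, 240 fps, and a 103-degree horizontal FOV.

```python
# Rough arithmetic for the 180-degree flick scenario (all inputs assumed).
flick_degrees = 180.0
flick_time_s = 0.100           # assumed duration of a fast flick
fps = 240
hfov_degrees = 103.0

frames_during_flick = flick_time_s * fps                  # 24 frames
degrees_per_frame = flick_degrees / frames_during_flick   # 7.5 deg/frame

# Crude estimate (treats degrees as linear across the screen): the
# fraction of the view that is brand-new each frame, with no history
# available to warp from.
new_fraction = degrees_per_frame / hfov_degrees

print(f"{frames_during_flick:.0f} frames, "
      f"{degrees_per_frame:.1f} deg of new content per frame, "
      f"{new_fraction:.1%} of the view with no frame history")
```

So even at 240 fps, a fast flick reveals a strip of screen every frame that no amount of frame history can cover.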


Downsides of ASW


Just as with ATW, ASW is active and enabled for all applications without any developer effort.

There's no completely free lunch, however. ASW doesn't scale well below half the display's refresh rate. Depending on what's being displayed, there may be visual artifacts present as a result of imperfect extrapolation. Typical examples include:
  1. Rapid brightness changes. Lightning, swinging lights, fades, strobe lights, and other rapid brightness changes are hard to track for ASW. These portions of the scene may waver as ASW attempts to find recognizable blocks. Some kinds of animating translucency can cause a similar effect.
  2. Object disocclusion trails. Disocclusion is a graphics term for an object moving out of the way of another. As an object moves, ASW needs something to fill the space the object leaves behind. ASW doesn't know what's going to be there, so the world behind will be stretched to fill the void. Since things don't typically move very far on the display between 45 fps frames, these trails are generally minimal and hard to spot. As an example, if you look closely at the extrapolated image from the screenshots here you'll see a tiny bit of warping to the right of the revolver.
  3. Rapidly moving repeated patterns. An example might be running alongside an iron gate and looking at it. Since parts of the object look similar to others, it may be hard to tell which one moved where. With ASW, these mispredictions should be minimal but occasionally noticeable.
  4. Head-locked elements move too fast to track. Some applications use a head-locked cockpit, HUD, or menu. When applications attempt to do this on their own without the help of a head-locked layer, the result can be judder because the background is moving fast against the head-locked object. Some accommodation can be made with ASW, but users can move their head fast enough that they'll no longer track properly and the result won't be smooth. Using the head-locked layers (ovrLayerFlag_HeadLocked) provided by the Oculus Rift SDK will produce the ideal result.

Outside of point 4, you shouldn't avoid scenarios that produce these artifacts but rather be mindful of their appearance. ASW works well under most, but not all, circumstances to cover sub-90fps rendering. We feel the experience of ASW is a significant improvement to judder and is largely indistinguishable from full-rate rendering in most circumstances.[1]

So any time you have disocclusion, the world behind the occluding object is stretched to fill the gap. It's an automatic artifact. That's likely what they'll do in the cases where they don't have historical frame data to infill: stretch the image to fill the screen edges or disoccluded areas. So this will likely only be useful at higher framerates, since your mouse movements are going to be a lot faster than turning your head in a VR headset.
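A minimal sketch of that edge-stretch idea, assuming the simplest possible case: a pure horizontal pixel shift with border clamping. This is my illustration of the general technique, not NVIDIA's or Oculus's actual implementation.

```python
import numpy as np

def warp_with_edge_stretch(frame: np.ndarray, shift_px: int) -> np.ndarray:
    """Warp a frame to a new camera yaw by shifting pixels horizontally.

    frame: (H, W) or (H, W, C) array; shift_px > 0 pans the camera right.
    Columns uncovered by the shift are filled by clamping to the nearest
    valid source column, i.e. the border pixels get "stretched".
    """
    h, w = frame.shape[:2]
    # For each output column, find the source column it came from,
    # clamped into range so revealed columns repeat the border pixel.
    src_cols = np.clip(np.arange(w) + shift_px, 0, w - 1)
    return frame[:, src_cols]

# Tiny demo: a 1x6 "image" with the camera panned right by 2 pixels.
frame = np.array([[0, 1, 2, 3, 4, 5]])
warped = warp_with_edge_stretch(frame, 2)
print(warped)  # [[2 3 4 5 5 5]] -- the right edge is stretched
```

The real systems warp with full camera, depth, and motion data rather than a flat shift, but the clamp-at-the-border behavior is the "stretching" artifact the ASW docs describe.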
 
That would be the case for stuff moving around within a frame. It should be obvious that it won't have any camera, color, or depth data for things that were never rendered (outside the viewport). If you're panning the camera along a wall and come across a picture, do you really think it would know about that picture in advance? It will still be a couple of generations before Jensen invents time travel.

I still think the problems might not be super noticeable unless you're playing at low fps on something like a Steam Deck (small screen).
 
This is good -
Latency of 30 ms Benefits First Person Targeting Tasks More Than Refresh Rate Above 60 Hz

Post-Render Warp with Late Input Sampling Improves Aiming Under High Latency Conditions
I found an old personal message from Dr. Joohwan Kim who led the Reflex and Reflex2 research.
(Google translated)
Yesterday, a line of gaming software and hardware products called Nvidia Reflex was released. It is a collection of technologies, inspired by my research, that minimize the latency between input and output.

In particular, the latency analyzer is a product I invented myself. It is a function built into the monitor: when you plug your mouse directly into the monitor, it accurately measures the time from mouse input to the corresponding image appearing on the screen. Input-to-output latency is the most important system characteristic for gamers, but there are many variables and devices that have to be tuned to minimize it (CPU, GPU, display settings, in-game settings, etc.), so it is easy to get wrong, which makes it essential to measure and confirm whether latency has actually been minimized. However, the measurement method is not simple, so until now only those with specialized equipment and electronics expertise could publish measurement results.


Now that this feature is available, gamers can minimize latency much more easily and verify that it has actually been minimized. As a gamer and as a scientist, this is a product I am really proud of, and the market response has been good as well. As an employee, the experience of turning an idea into a product is really special and rewarding.
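To illustrate why click-to-photon latency has to be measured end to end rather than read off any single spec, here is a toy breakdown of the pipeline into stages. Every number below is an illustrative assumption, not a measurement of real hardware.

```python
# Toy model: end-to-end latency is the sum of many stages, each tuned
# by a different setting (mouse, CPU, GPU, display). All values are
# assumed for illustration only.
stages_ms = {
    "mouse polling (1000 Hz, avg)":   0.5,
    "game/CPU simulation":            4.0,
    "render queue + GPU":             6.0,
    "display scanout (240 Hz, avg)":  2.1,
    "pixel response":                 1.0,
}

total_ms = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:32s} {ms:5.1f} ms")
print(f"{'total click-to-photon':32s} {total_ms:5.1f} ms")
```

Any one stage can quietly dominate the total, which is why a built-in analyzer that measures the whole chain at once is valuable.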
 
My expectation for Reflex 2 is some minor glitching at the sides of the viewport, likely similar to what FG produces in similar circumstances. The question is: will it be worth it for the input latency improvements? The answer will likely be "it depends on the title you're playing".
 
My expectation for Reflex 2 is some minor glitching at the sides of the viewport, likely similar to what FG produces in similar circumstances. The question is: will it be worth it for the input latency improvements? The answer will likely be "it depends on the title you're playing".
I feel like with ultra-competitive games the visuals are an afterthought anyway, so a bit of nonsense at the edges of the screen is probably no big deal.

However for those who like their competitive games looking good and consistent (myself for example) I hope they keep regular Reflex 1 as an alternative.
 
I feel like with ultra-competitive games the visuals are an afterthought anyway, so a bit of nonsense at the edges of the screen is probably no big deal.

However for those who like their competitive games looking good and consistent (myself for example) I hope they keep regular Reflex 1 as an alternative.
Except that in ultra-competitive play those screen edges are especially important: when someone enters your field of view, every pixel counts. If there are hallucinations at the edges, it's a disadvantage if anything.
 
Except that in ultra-competitive play those screen edges are especially important: when someone enters your field of view, every pixel counts. If there are hallucinations at the edges, it's a disadvantage if anything.
How is this a "disadvantage" if you won't see anything at all without it (since the frame won't change in the absence of ATW)?
Also if we're talking about e-sports then we're talking about fps well in excess of 200. I have doubts that any artifacts will be very visible at such frequencies, while the added responsiveness may in fact be welcome.
 
Except that in ultra-competitive play those screen edges are especially important: when someone enters your field of view, every pixel counts.
Yea, this matters to people, and it's a reason why most hardcore competitive gamers use 24" displays. They don't want the sides of the screen too far into their periphery, where vision is really terrible; they want as much of the screen's information as straight ahead as possible. They don't want to have to turn their heads at all, or even shift their eyes too much.

24" is considered a general sweet spot for achieving this while keeping things "big enough" in view at the same time.
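That sweet-spot claim can be sanity-checked with a little geometry. The 60 cm viewing distance below is an assumed typical desk distance, not a cited figure.

```python
import math

# How wide does a 24-inch 16:9 screen appear at a typical desk distance?
diagonal_in = 24.0
aspect_w, aspect_h = 16, 9
viewing_distance_cm = 60.0      # assumed typical viewing distance

diag_cm = diagonal_in * 2.54
width_cm = diag_cm * aspect_w / math.hypot(aspect_w, aspect_h)

# Horizontal angle the screen subtends, centered on the eyes.
angle_deg = 2 * math.degrees(math.atan((width_cm / 2) / viewing_distance_cm))
print(f"screen width {width_cm:.1f} cm, subtends {angle_deg:.1f} degrees")
```

That works out to roughly 48 degrees of horizontal visual angle, comfortably inside central vision, versus the ~103-degree in-game FOV squeezed onto it.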

And should there be any distracting visual artifacting at the sides of the screen, I can definitely see plenty of competitive types deciding Reflex 2 isn't worth it.
 
How is this a "disadvantage" if you won't see anything at all without it (since the frame won't change in the absence of ATW)?
Also if we're talking about e-sports then we're talking about fps well in excess of 200. I have doubts that any artifacts will be very visible at such frequencies, while the added responsiveness may in fact be welcome.
Without it, you'll get actual correct pixels and information in place of 'predicted' pixels.

And there's no way to predict how much a fast framerate/refresh will help without knowing how regular or severe the artifacting is in the first place.
 

FSR 3.1 must have been really bad in this game. I can see the problems with it in this YouTube video of him pointing a camera at a monitor on the show floor.

Really hope FSR4 is good. Good upscaling is the most important feature that AMD is missing.
 