Why isn't framerate upscaling progressing when TVs already have it, even though it's a better fit in the game engine?

Shifty Geezer

uber-Troll!
Moderator
Legend
BFI aside, another thing to try is Judder Reduction (that's what Samsung calls it, under the Picture Clarity Settings). It works wonders in games. I locked Elden Ring to 30fps and first tried it with Game Mode on; the framerate was unbearable.

Then I set Game Mode to Off and switched to the regular Picture Settings: Movie (also with Filmmaker Mode). I first tried Judder Reduction at 0. There wasn't much of a change, of course.

Finally, I set Judder Reduction to the max value, 10. And boy, the difference was staggering. In fact, with animations where the camera doesn't turn, it could easily pass for a 60Hz game. 👌 If you rotate the camera the movement of course isn't as smooth, but it's much, much smoother than 30fps.
This type of frame interpolation is better handled in the game. Considering the great leaps and bounds made in resolution upscaling, why is framerate upscaling not getting the same love and progress? Why are we relying on TVs to scale down to 40 fps VRR instead of motion interpolating all games up to 60+?
 
This type of frame interpolation is better handled in the game. Considering the great leaps and bounds made in resolution upscaling, why is framerate upscaling not getting the same love and progress? Why are we relying on TVs to scale down to 40 fps VRR instead of motion interpolating all games up to 60+?

According to Andreev, the bump from 30FPS to an interpolated 60FPS is indeed "free", in that the removed motion blur code is more "expensive", taking up more system resources, than his frame-rate upscaler.


No other games do things like that? I mean, other than using Nvidia FG?
 




No other games do things like that? I mean, other than using Nvidia FG?
Yes.
I mentioned it in the DLSS thread (I think).
A game called Combat Arms: Reloaded had frame generation. Interestingly, it's a Lithtech engine game, not custom tech.
I've never actually tried it. I'm not even sure if the game is still active.
 
Yes.
I mentioned it in the DLSS thread (I think).
A game called Combat Arms: Reloaded had frame generation. Interestingly, it's a Lithtech engine game, not custom tech.
I've never actually tried it. I'm not even sure if the game is still active.
Why on earth didn't we get more games like that?
 
I haven't experienced any games that do this. I'm curious whether, for faster-paced games, this might cause a disconnect between control input and screen presentation.

To elaborate: if a game looks like it is running at a faster framerate, but isn't responding to my control inputs at that same rate, would it be jarring?
 
The context is 30 fps games not looking smooth on OLEDs, and so motion smoothing in the TV being suggested. Faster games should be higher refresh rate by default. This is more a case of: for games that play okay at 30 fps, why not provide a motion-smoothing option?

Curiously, it is standard practice to render lower than screen res and reconstruct, which is more efficient than rendering every 2160p pixel. This also requires motion vectors. It shouldn't be a stretch to add an in-between frame from that same data; the end result should be better than the TV's, with less, or certainly no worse, additional latency. In games that include 'performance' modes, this seems a fairly obvious strategy to try, but with seemingly very little investment at the moment.
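Purely to illustrate that (a naive Python/NumPy sketch with made-up names, not how an engine would actually implement it): the same per-pixel motion vectors a reconstruction pass already produces can be used to warp the current frame half-way back along its motion to approximate an in-between frame.

```python
import numpy as np

def interpolate_midframe(curr, motion, prev=None):
    """Approximate a frame halfway between the previous and current frames
    by warping the current frame back along half of its per-pixel motion.

    curr   : (H, W, 3) array, the current rendered frame.
    motion : (H, W, 2) array, per-pixel motion in pixels from the previous
             frame to the current one (what a TAA/reconstruction pass has).
    prev   : optional (H, W, 3) previous frame, blended in to soften holes.
    """
    h, w = curr.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

    # Crude backward warp: sample the current frame at the spot each pixel
    # occupied half a frame ago (nearest-neighbour, no disocclusion handling).
    src_x = np.clip(xs - 0.5 * motion[..., 0], 0, w - 1).astype(np.int32)
    src_y = np.clip(ys - 0.5 * motion[..., 1], 0, h - 1).astype(np.int32)
    tween = curr[src_y, src_x]

    if prev is not None:
        tween = 0.5 * (tween + prev)   # very cheap hole/ghosting compromise
    return tween
```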
 
In the blog post that accompanied the announcement of the FG tech in Combat Arms, the developers stated that image warping was the main drawback. I suppose this is what the Optical Flow processing in Nvidia GPUs is meant to correct.
Btw, if you never actually watched the demo video of the Force Unleashed FG technology, it's pretty interesting. Especially the 15-to-60fps upscale portion.
A youtuber has preserved it, since the links at the Digital Foundry article are dead.

 
This type of frame interpolation is better handled in the game. Considering the great leaps and bounds made in resolution upscaling, why is framerate upscaling not getting the same love and progress? Why are we relying on TVs to scale down to 40 fps VRR instead of motion interpolating all games up to 60+?
The two big reasons are 1) it adds latency (because you need two frames between which to interpolate, plus the time for the interpolation to happen) and 2) on objects that are subject to non-linear motion, it introduces weird-arse movement.

A lot of stuff looks OK, but some stuff really does not.
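A toy example of the non-linear case, with made-up numbers: a ball bounces off the floor between two rendered frames, and a linear tween puts it somewhere it never was.

```python
# Positions of a bouncing ball in two consecutive rendered frames.
prev_pos = (-1.0, 1.0)   # falling towards the floor
curr_pos = ( 1.0, 1.0)   # already bounced back up

# Linear interpolation cuts the corner of the real path through (0.0, 0.0).
mid = ((prev_pos[0] + curr_pos[0]) / 2, (prev_pos[1] + curr_pos[1]) / 2)
print(mid)   # (0.0, 1.0): hovering above where the ball actually was
```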
 
Yeah, but that's still an issue when it's done on the TV. Where 30 fps on OLED reportedly looks very juddery, you need motion smoothing. Why not do that in the game rather than in the TV? I guess the theory is that the TV is already doing it, so the console needn't bother. With that thinking, why not just render at 1080p and have the 4K TV upscale?

By now, with lots of research, the improvement on the 2010 Force Unleashed introduction of the idea should be as pronounced as the improvements we've had over Checkerboard Rendering's first take on image reconstruction, which I think was introduced in 2016.
 
It's used heavily in the Grid Legends VR game on the Quest 2, I think. There are artifacts on fast-moving objects, but overall the performance looks solid and fluid.
 
Yeah, but that's still an issue when it's done on the TV. Where 30 fps on OLED reportedly looks very juddery, you need motion smoothing. Why not do that in the game rather than in the TV? I guess the theory is that the TV is already doing it, so the console needn't bother. With that thinking, why not just render at 1080p and have the 4K TV upscale?

By now, with lots of research, the improvement on the 2010 Force Unleashed introduction of the idea should be as pronounced as the improvements we've had over Checkerboard Rendering's first take on image reconstruction, which I think was introduced in 2016.
What you are asking is basically what Sony is doing in PSVR with their 60fps to 120fps reprojection tech. But I think they can do that (without adding latency) only thanks to the features of PSVR headset tracking. Otherwise if you don't know where to reproject then you need 2 whole frames like Dsoup said.
 
What you are asking is basically what Sony is doing in PSVR with their 60fps to 120fps reprojection tech. But I think they can do that (without adding latency) only thanks to the features of PSVR headset tracking. Otherwise if you don't know where to reproject then you need 2 whole frames like Dsoup said.
PSVR only reprojects for the camera view direction, so no additional frames are needed.
It doesn't help with moving objects.
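To illustrate what rotation-only reprojection amounts to, here's a hypothetical Python sketch assuming a pinhole camera and yaw-only movement (nothing like Sony's actual implementation): the finished frame is simply resampled for the new view direction, so no second frame is needed, but anything animating in the scene stays where it was rendered.

```python
import numpy as np

def reproject_yaw(frame, yaw_delta_rad, fov_h_rad):
    """Rotation-only reprojection ('timewarp') of a finished frame to a
    newer camera yaw. Only the camera transform changes, so it is cheap
    and adds no frame of delay, but moving objects are not corrected.

    frame        : (H, W, 3) image rendered at the old camera orientation.
    yaw_delta_rad: how far the camera has yawed since the frame was rendered.
    fov_h_rad    : horizontal field of view of the virtual camera.
    """
    h, w = frame.shape[:2]
    focal = (w / 2) / np.tan(fov_h_rad / 2)          # focal length in pixels

    # Angle of each output column under the new yaw, mapped back to the
    # column it came from in the old frame (nearest-neighbour resample).
    xs = np.arange(w, dtype=np.float32) - w / 2
    angles = np.arctan(xs / focal) + yaw_delta_rad
    src_x = np.clip(np.tan(angles) * focal + w / 2, 0, w - 1).astype(np.int32)

    return frame[:, src_x]
```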
 
Yeah, but that's still an issue when it's done on the TV. Where 30 fps on OLED reportedly looks very juddery, you need motion smoothing. Why not do that in the game rather than in the TV? I guess the theory is that the TV is already doing it, so the console needn't bother. With that thinking, why not just render at 1080p and have the 4K TV upscale?

By now, with lots of research, the improvement on the 2010 Force Unleashed introduction of the idea should be as pronounced as the improvements we've had over Checkerboard Rendering's first take on image reconstruction, which I think was introduced in 2016.

I don't think doing it on the TV is a good idea either, FWIW.

If your display doesn't handle presentation of low frame rate content well it may be a preferable option, but I don't think it's a good match for interactive media in general.

The fight for smooth, consistent frame display in games has seen a lot of methods tried, each with accompanying drawbacks. I think VRR is, overall, the best solution yet. The best-playing games have the most direct path from control input to display of the result of that action, and the fewer layers in between, the better.

I'm not against frame generation in passive media, though. Depending on the source the result can be a noticeable improvement. I've messed with some AI resolution/framerate upscalers and they can achieve some astonishing results.
 
What you are asking is basically what Sony is doing in PSVR with their 60fps to 120fps reprojection tech.
Not quite, because AFAIK that just shifts the view to track motion. I think in-game animation is still 60 fps or whatever and not interpolated.

Otherwise if you don't know where to reproject then you need 2 whole frames like Dsoup said.
You shouldn't need any more data than is already present for temporal image reconstruction techniques. You have the motion from the last frame and the motion from the next/current frame. You can extrapolate that for the tween, or delay a frame and tween between known positions.
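A minimal sketch of those two options, reduced to a single scalar position (e.g. an object's x coordinate) with made-up names:

```python
def extrapolated_tween(prev_pos: float, curr_pos: float) -> float:
    """Predict half a frame ahead from known motion: no extra latency,
    but it overshoots if the motion changes between frames."""
    velocity = curr_pos - prev_pos        # movement over the last frame
    return curr_pos + 0.5 * velocity

def interpolated_tween(curr_pos: float, next_pos: float) -> float:
    """Tween between two known frames: positions are always plausible,
    but the next frame must already exist, so display lags by a frame."""
    return 0.5 * (curr_pos + next_pos)

# e.g. an object at x=10 last frame and x=12 now:
print(extrapolated_tween(10.0, 12.0))   # 13.0 (predicted)
print(interpolated_tween(12.0, 14.0))   # 13.0 (known, but delayed)
```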

I think VRR is, overall, the best solution yet.
That's not a solution where a low 30 fps framerate looks juddery. Rather than looking for a fix for high-framerate games that sometimes drop low, this is looking at a fix for displays that can't handle low-framerate games well and need a 60 fps stream. I mean, devs could just ditch 30 fps outright, problem solved. ;) But given the overheads of that, just as upscaling a lower resolution to 2160p is overall a better compromise than rendering a full 2160p, rendering at lower framerates and upscaling seems a smart option.

I dunno. Are those TVs that struggle with 30 fps fine with 45 fps? Although devs can't really set 45 fps as a minimum yet because too many gaming displays are 30/60 Hz.
 
why is framerate upscaling not getting the same love and progress?
I think we need to study the example that actually works to maybe figure out why it took so long, and by that example I mean DLSS3, or DLSS Frame Generation.

Theoretically, NVIDIA could've introduced this 4 years ago with DLSS1, or 3 years ago with DLSS2, but they waited several years to introduce it, so maybe they needed to develop AI algorithms that do it reasonably well or with sufficient accuracy? NVIDIA also needed to develop Reflex to reduce latency to "levels comparable to native latency", and they needed hardware fast enough to work in real time. Even with all of that, DLSS3 still has some teething issues: yes, it beats all offline AI frame generation methods in accuracy and speed, but it still increases latency a bit, and it still has issues with the UI in several games.

So I am postulating that a number of factors needed to come together for frame generation to work in NVIDIA's case, which probably made it very hard for others to do it without those factors.
 
Btw, what's preventing devs from implementing async time warp in 2D games?

Basically all VR games have had it for years.
 
Are those TVs that struggle with 30 fps fine with 45 fps? Although devs can't really set 45 fps as a minimum yet because too many gaming displays are 30/60 Hz.

That's why we need high refresh rate displays. Not for the top end, but for all the subdivisions.
A 360Hz standard would give you access to perfect frame-pacing for framerates of 24, 30, 36, 40, 45, 60, 72, 90, 120, 180, and 360Hz. And any brief deviation over or under those targets would not be visible, because time-to-refresh is so small at 360Hz.
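For what it's worth, that list is exactly the set of framerates that divide 360 evenly, so each frame is held for a whole number of refreshes:

```python
# Framerates that pace perfectly on a 360Hz panel: every rate from 24 up
# that divides the refresh rate evenly.
refresh = 360
rates = [fps for fps in range(24, refresh + 1) if refresh % fps == 0]
print(rates)   # [24, 30, 36, 40, 45, 60, 72, 90, 120, 180, 360]
```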
 
Yeah, but that's still an issue when it's done on the TV. Where 30 fps on OLED reportedly looks very juddery, you need motion smoothing. Why not do that in the game rather than in the TV?

I would also like to know this. When the TV is doing the work, the real hit is latency, i.e. to smooth 30fps (~33ms per frame) you need to introduce at least an extra ~33ms of latency whilst you wait for the bounding frames from which to produce your interpolated frame, plus whatever X ms it takes to generate the frame - which is all on top of any input/display lag.
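Rough numbers for that, with the TV's processing time as a placeholder since it varies from set to set:

```python
# The interpolator has to hold frame N until frame N+1 arrives, so the
# displayed image lags by at least one source frame plus processing time.
source_fps = 30
frame_time_ms = 1000 / source_fps     # ~33.3 ms between source frames
processing_ms = 5                     # placeholder for the 'X ms' above

added_latency_ms = frame_time_ms + processing_ms
print(f"extra latency: ~{added_latency_ms:.0f} ms on top of input/display lag")
```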

Some higher-end TVs - which I think includes Sony Bravia's 'X-Reality Pro' tech - use more historic frames, which might reduce some of the motion artefacts, although if something is so non-linear as to completely change direction/vector then it's still going to look a bit funky.

It does seem like the game itself could do a better job of alleviating these issues. I have a PSVR, which uses asynchronous reprojection, and that seems to work really well. Because of the sensory focus, any weirdness in VR tends to be more pronounced.
 
I think we need to study the example that actually works to maybe figure out why it took so long, and by that example I mean DLSS3, or DLSS Frame Generation.

Theoretically, NVIDIA could've introduced this 4 years ago with DLSS1, or 3 years ago with DLSS2, but they waited several years to introduce it, so maybe they needed to develop AI algorithms that do it reasonably well or with sufficient accuracy? NVIDIA also needed to develop Reflex to reduce latency to "levels comparable to native latency", and they needed hardware fast enough to work in real time. Even with all of that, DLSS3 still has some teething issues: yes, it beats all offline AI frame generation methods in accuracy and speed, but it still increases latency a bit, and it still has issues with the UI in several games.

So I am postulating that a number of factors needed to come together for frame generation to work in NVIDIA's case, which probably made it very hard for others to do it without those factors.
DLSS3 is a particular solution using ML. The 2010 example used a more traditional algorithm and, like FXAA, TAA, etc., could have been developed since then without an ML solution. Conceptually, given past frame pixel and motion data, and current frame pixel and motion data, interpolating something a bit motion-blurry-but-appropriate shouldn't be too tricky. It wasn't really enabled before games started including per-object motion vectors, and they probably didn't start that until per-object motion blur was feasible, with no-one thinking to go that route for frame interpolation.

Indeed, none of the TV frame-interpolation systems use DLSS3-type ML, and they've provided motion upscaling for some years now.
 