Lossless Scaling. FGx2-x20 & upscaling on ANY GPU! Miniguide page 11

Now, with the addition of Adaptive Frame Generation, you can also use LS to output "negative" frame generation. So you can run a game internally at 60fps but use a 0.5X multiplier and output 30fps through LS, or use a 0.4X multiplier to get 24fps from a base of 60fps, and so on.

In this regard, it's interesting what Alex from Digital Foundry commented on what a former AMD employee, now at Intel, said about the future of Frame Generation.

Said Intel employee commented that Frame Generation could produce extremely high framerates to match the maximum refresh rate of your monitor, regardless of the base framerate of the content (e.g. going from 40fps to 1000fps on a 1000Hz display).

In this scenario, simply knowing your monitor's refresh rate would allow frame generation technology to handle the rest.

The idea of FG dynamically outputting frames to match the maximum refresh rate of your monitor, regardless of the base framerate, could revolutionize how we experience games and visuals.

It would mean smoother gameplay and a more seamless experience, even on hardware that might not natively achieve those high framerates, and it would take full advantage of your display.
 

One downside of this scheme is that it could introduce extra latency (more than native, but not more than current frame-gen tech).
Personally I think the smoothness outweighs the downside, provided that the interpolation is done well.
To be more extreme, maybe it's possible to do something like using AI to fill in missing pixels (a bit like what NVIDIA is doing with Reflex 2). Then we could render a scene with randomized pixels in some way, and it'd be much easier to keep the rendering time fixed (provided the CPU is fast enough). This is probably more suitable for raytracing, as in traditional rendering there are too many dependencies (e.g. you need to render shadow maps before actually rendering the main scene). This way we can have both smooth gameplay and low latency.
 
I wonder if this new approach can get rid of frametime and frame pacing issues in games, because of what @Dictator said about his conversation with a former AMD employee, now at Intel, who mentioned that the idea behind it was getting rid of VRR, which is apparently a nightmare for them.

I haven't tested this since I've been playing at 4K 60fps as of late, but I will when I connect the 360Hz monitor I have once again.
 

A fixed target frame rate approach like what Lossless Scaling is doing now will certainly help with frame pacing issues, as the output frame rate will be fixed and, in theory, the AI interpolator should be able to smooth out the inconsistencies. If a game does not do something very weird, such as tying its game logic to the frame rate, the player experience should be mostly fine.

For example, imagine an object moving to the right at 10 pixels per ms. If the frame time is not very consistent, say 30 ms, 50 ms and then 40 ms, the object would move 300 pixels, then 500 pixels, then 400 pixels across those frames. Although the motion is correct in terms of time passed, from the player's point of view the object's movement is not smooth and will look jumpy.
Now imagine an AI frame interpolator generates frames in between, say 10 ms apart, so we'll have 12 frames all with a 10 ms frame time, and it can interpolate the object to have moved 100 pixels by the first frame, 200 pixels by the second frame, and so on. Then across all these frames the object will move 100 pixels per frame, so it will look very smooth and consistent.
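To make the arithmetic concrete, here's a rough Python sketch of that example (the numbers and the fixed 10 ms output cadence come straight from the paragraph above; this has nothing to do with how a real interpolator is implemented):

# Rendered frames arrive with uneven frame times while the object
# moves at a constant 10 pixels per ms.
frame_times_ms = [30, 50, 40]      # uneven rendered frame times
speed_px_per_ms = 10

# Object position at each rendered frame: 300, 800, 1200 px,
# i.e. per-frame jumps of 300, 500 and 400 px.
positions = []
elapsed = 0
for ft in frame_times_ms:
    elapsed += ft
    positions.append(elapsed * speed_px_per_ms)

# A fixed 10 ms output cadence over the same 120 ms gives 12 frames,
# each advancing the object by exactly 100 px.
total_ms = sum(frame_times_ms)
interpolated = [t * speed_px_per_ms for t in range(10, total_ms + 1, 10)]
print(positions)       # [300, 800, 1200]
print(interpolated)    # [100, 200, ..., 1200]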

Then we'll have to consider the problem of user input. If a game ties user input to frames it will still feel a bit jumpy, because although the user is seeing smooth frames, inputs are only sampled at the actual rendered frame times, so inputs during a longer frame will feel later than inputs during a shorter frame. This can't be completely solved, because an interpolated frame can't really reflect the user's input; only actual rendered frames are derived from user inputs. So in the long run it's still better to have consistent rendered frame times with fixed input sampling times. That's why I think it'd be better if frame gen is used to generate partly rendered frames instead of interpolating frames between rendered frames.
 
What do you mean by games that tie user input to frames? Games like Rock Band or Guitar Hero? Just curious...
 
That's why I think it'd be better if frame gen is used to generate partly rendered frames instead of interpolating frames between rendered frames.
That sentence doesn't make a lot of sense. If it doesn't generate entire frames, it's just traditional super-resolution before framegen. For increasing fps without increasing the rate of the render loop, there is only frame interpolation and frame extrapolation ... there are no other options.

As for extrapolation, frameless rendering is in my opinion still the best term to describe it. The main render loop samples should never be shown as is; every displayed frame should be generated by a scan-locked extrapolation algorithm, taking into account camera movement from inputs. NVIDIA will do it eventually, it's inevitable. I hope Intel doesn't sit on their research in the meantime though; they should just integrate their extrapolation tech into Unreal and try to get some twitch games to use it. Lossless Scaling can't really do it, because you need to overrender the edges and get real-time inputs to determine the view matrix for the extrapolated frame. It needs to be integrated into the game engine.
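A toy sketch of the "real-time inputs for the view matrix" part, assuming the mouse delta since the last rendered frame is available (everything here is made up for illustration and the reprojection is only a crude small-angle approximation, not any real engine API):

import math

def extrapolated_yaw(last_render_yaw, mouse_dx, sensitivity=0.002):
    # The last rendered frame was produced with last_render_yaw.
    # The extrapolated display frame uses the *latest* input instead of
    # waiting for the next render, so camera motion stays responsive.
    return last_render_yaw + mouse_dx * sensitivity

def reproject_x(x_ndc, yaw_render, yaw_display, fov=math.radians(90)):
    # Very rough horizontal reprojection: shift screen-space x by the
    # fraction of the field of view the camera has rotated since render.
    return x_ndc - (yaw_display - yaw_render) / (fov / 2)

yaw_display = extrapolated_yaw(last_render_yaw=0.0, mouse_dx=40)
print(reproject_x(0.25, 0.0, yaw_display))   # pixel slides left as we turn right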
 
What do you mean by games that tie user input to frames? Games like Rock Band or Guitar Hero? Just curious...

Some games do their main loops like this:
Take user input -> render frame -> take user input -> render frame -> ...
I have no data but I think it's relatively rare these days.
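A bare-bones sketch of the two loop styles being contrasted (the function names are made up, it's only to show where input sampling happens):

import time

# Input tied to rendering: input is only sampled once per rendered
# frame, so a slow frame also delays when the next input is read.
def coupled_loop(read_input, update, render):
    while True:
        cmd = read_input()      # sampled at whatever rate rendering allows
        update(cmd)
        render()

# Fixed-timestep style: input/simulation run on their own cadence and
# rendering just shows the latest state, so generated frames between
# rendered frames don't change when inputs take effect.
def fixed_timestep_loop(read_input, update, render, dt=1 / 120):
    acc, prev = 0.0, time.perf_counter()
    while True:
        now = time.perf_counter()
        acc += now - prev
        prev = now
        while acc >= dt:
            update(read_input())   # sampled every dt regardless of render speed
            acc -= dt
        render()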
 
That sentence doesn't make a lot of sense. If it doesn't generate entire frames, it's just traditional super-resolution before framegen. For increasing fps without increasing the rate of the render loop, there is only frame interpolation and frame extrapolation ... there are no other options.

As for extrapolation, frameless rendering is in my opinion still the best term to describe it. The main render loop samples should never be shown as is; every displayed frame should be generated by a scan-locked extrapolation algorithm, taking into account camera movement from inputs. NVIDIA will do it eventually, it's inevitable. I hope Intel doesn't sit on their research in the meantime though; they should just integrate their extrapolation tech into Unreal and try to get some twitch games to use it. Lossless Scaling can't really do it, because you need to overrender the edges and get real-time inputs to determine the view matrix for the extrapolated frame. It needs to be integrated into the game engine.

I think it's probably a bit like what you mentioned. I wasn't talking about traditional super resolution, because there you still need to decide on a target resolution before each frame, which can be quite messy. What I imagine is something like randomized rendering, where the renderer renders random pixels (probably not truly random, for memory coalescing reasons), and when it gets very close to the allocated frame time, uses AI to interpolate all the remaining pixels.
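A crude sketch of that idea: shade a random subset of pixels until the frame budget is nearly gone, then fill the rest with a cheap stand-in for the "AI" step (a nearest-shaded-neighbour copy here, purely for illustration; the budget and resolution are invented):

import random, time

W, H = 64, 36
FRAME_BUDGET = 0.004               # seconds allotted to real shading (made up)

def shade(x, y):
    # Stand-in for an expensive ray-traced sample.
    return (x * 7 + y * 13) % 256

def render_sparse_then_fill():
    frame = [[None] * W for _ in range(H)]
    order = [(x, y) for y in range(H) for x in range(W)]
    random.shuffle(order)          # "randomized" pixel order (a real renderer
                                   # would pick a coalescing-friendly pattern)
    deadline = time.perf_counter() + FRAME_BUDGET
    shaded = []
    for i, (x, y) in enumerate(order):
        if i > 0 and time.perf_counter() >= deadline:
            break                  # budget nearly gone, stop real shading
        frame[y][x] = shade(x, y)
        shaded.append((x, y))
    # Stand-in for the "AI" fill: copy each missing pixel from the
    # nearest pixel that was actually shaded.
    for x, y in order:
        if frame[y][x] is None:
            nx, ny = min(shaded, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
            frame[y][x] = frame[ny][nx]
    return frame, len(shaded)

frame, n_shaded = render_sparse_then_fill()
print(n_shaded, "pixels shaded,", W * H - n_shaded, "filled in afterwards")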
 
Using golden pixels which must be shown is likely not worth it, it complicates everything. Consistent framerates, consistent quality, everything. Never show rendered samples, fudge everything.

Hallucination will be a lot less objectionable if it doesn't clash with golden pixels. Sometimes being consistently slightly wrong is better than correcting yourself ;)
 

It does not necessarily have to be golden pixels, but I think it's probably better to start somewhere simpler. For example, I remember that in the discussion of Jensen Huang's "all pixels will be AI generated" comment people were somewhat "panicked" about it, but this is actually a very similar idea. I can imagine that, at least in the beginning, we might need some "good" full frames as references; then, in the middle, we may just need the geometry data (could be something without shading at all, only where the object boundaries are) and some pixels with shading or lighting information, and let the AI try to fill in the rest. In the future there's probably no longer any need for the reference frames at all.
Now the interesting question is: how do we migrate to this future? It does not seem to work well with a traditional renderer, but a new render engine supporting two modes might be able to do the trick. A game would just use the engine "normally" and let the engine handle the rest. Although I suspect this will have to wait until raytracing becomes the default, so game engines no longer rely on shadow maps and reflection maps.
 
Here's the section of this week's Digital Foundry where they talk about LS. The comments on the video are kinda descriptive.

 
That's why I think it'd be better if frame gen is used to generate partly rendered frames instead of interpolating frames between rendered frames.

So, using AI to do what a second GPU did in SLI and Crossfire? AI acting as part of SFR; interesting concept.

That way you would get proper user input response with each frame while saving performance by having AI handle half the rendering.
 
If I understood him correctly, Jensen Huang hinted at a future where GPUs wouldn't render the scenes; they'd just describe the scene and the AI would generate the frames, so a ton of raster power wouldn't be necessary. But maybe that's just me.
 

Future GPUs that render like that would still need a ton of raster and RT performance to maintain backwards compatibility.
 
What you mentioned about having a second GPU is really curious, 'cos LS allows users with dual GPUs (a discrete GPU and an integrated GPU) to use one of them, normally the iGPU, just to render the frame generation frames, while the other, normally the discrete GPU, renders the game normally, so both are used very efficiently and neither is performing at its limit.
 

I didn't really mean having a second physical GPU, just that you could have the raster hardware rendering half the frame and the tensor cores rendering the other half with AI.

SFR (split frame rendering) was something that SLI and Crossfire offered, which is why I made the comparison, not that there needs to be another dedicated GPU.
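A toy sketch of that kind of split, with the bottom half "predicted" from the previous frame instead of rasterized (nothing here resembles real tensor-core inference; it's only to show the frame being divided between two kinds of work):

H, W = 8, 8

def rasterize_row(y, t):
    # Stand-in for real raster work on one scanline at time t.
    return [(x + y + t) % 10 for x in range(W)]

def predict_row(prev_row, motion_px=1):
    # Stand-in for an AI predictor: shift last frame's row by the motion.
    return prev_row[motion_px:] + prev_row[:motion_px]

def render_split(prev_frame, t):
    frame = []
    for y in range(H):
        if y < H // 2:
            frame.append(rasterize_row(y, t))          # raster half
        else:
            frame.append(predict_row(prev_frame[y]))   # "AI" half
    return frame

frame0 = [rasterize_row(y, 0) for y in range(H)]
frame1 = render_split(frame0, 1)
print("raster row:   ", frame1[0])
print("predicted row:", frame1[-1])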
 
Normal rendering is inherently hard to predict the speed of; you can try to dynamically trade quality for speed, but within a frame it's still hard to meet a render budget.

Motion compensation and "AI" filtering, on the other hand, are fairly predictable, so it's easy to finish pixels just in time for display, even a couple of scanlines ahead.

That's why it makes the most sense to never rely on rendering being finished to create a display frame. If rendering is finished, it gets used for filtering; if not, then not. Either way the display frame is ready on time.
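A small sketch of that scheduling idea: the display frame is always produced at the scanout deadline from whatever the filter has, whether or not the renderer finished in time (all names and timings here are invented for illustration):

import random, time

SCANOUT = 1 / 120            # fixed display cadence

def filter_frame(render, motion):
    # Stand-in for the predictable motion-compensation / "AI" filter pass.
    return {"base": render, "motion": motion}

def display_frames(try_get_finished_render, n=5):
    last_render = try_get_finished_render() or "initial_render"
    for i in range(n):
        deadline = time.perf_counter() + SCANOUT
        finished = try_get_finished_render()
        if finished is not None:
            last_render = finished           # a fresh render made it in time
        # Whether or not a new render finished, the filtered frame is
        # built from last_render and presented at the deadline.
        frame = filter_frame(last_render, motion=i)
        time.sleep(max(0.0, deadline - time.perf_counter()))
        print("scanout", i, frame["base"])

# The renderer "finishes" only about half the time, yet every scanout gets a frame.
display_frames(lambda: "new_render" if random.random() < 0.5 else None)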
 
There is a guide out there for people having issues with crashes in the new version of LS; the fix seems to be going into config.ini and changing flush = 1 to flush = 0.

[rendering]
flush = 0

[capture]
frametime_buffer_size = 15
frametime_buffer_reset_multiplier = 6
queue_draining_momentum = 0.01

[lsfg]
ui_detection_rate = 1
real_timestamp_tolerance = 0.05
base_framerate_threshold = 10

[fps_counter]
scale = 1.0
color = 0xD3D3D3
font = Consolas
offset_x = 0
offset_y = 0
show_captured = 1

 
The first comment on the video below explains how the latency options work in LS, and it managed to surprise me.

I'm the person doing the official latency tests for Lossless Scaling. I just wanted to say that by setting max frame latency to 1, you are not going to see latency improvements; in fact, MFL=1 is often the highest-latency option you can choose. What MFL=1 does is that it will not let the CPU submit multiple frames for rendering at the same time, which is not a good idea when using multi frame gen. Leave MFL at the default 3, or set it to 10 (which is the lowest-latency option by a hair). MFL=1 compromises performance on the CPU side and can lead to stutters, so it's not recommended. You can also increase the size of the LS overlay (the one saying 100 / 360) from the config file in the app's directory. You can change the color and the font as well.

 