Why isn't framerate upscaling being pursued when TVs have it, but it's a better fit in the game engine?

What is (or was, at least) very noticeable, though, is the conversion between different framerates, e.g. NTSC, PAL, 24 fps movies. So movies on TV are no longer smooth.
Movies on PAL were judder-free because they were simply run faster, at 25 fps. No need for pull-down. Nowadays there's a role reversal: PAL content on 60 fps displays has judder. PAL content should be ditched and replaced with 60 fps filming, given that all displays are 60 Hz but not all can drop to 50 Hz (e.g. PAL content on a PC or mobile).
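To make the judder concrete, here's a tiny sketch (plain C++, nothing engine-specific) that prints how many display refreshes each source frame stays on screen; an uneven repeat pattern is exactly what we perceive as judder:

```cpp
// Minimal sketch: how many display refreshes each source frame occupies when
// showing content of a given frame rate on a fixed-rate display.
#include <cstdio>

void printCadence(double sourceFps, double displayHz, int frames = 10)
{
    double t = 0.0;          // ideal presentation time of the next source frame, in refreshes
    int shownRefreshes = 0;  // display refreshes consumed so far
    for (int i = 0; i < frames; ++i)
    {
        t += displayHz / sourceFps;                     // ideal refresh index of the next frame
        int repeats = (int)(t + 0.5) - shownRefreshes;  // refreshes this frame stays on screen
        shownRefreshes += repeats;
        printf("%d ", repeats);
    }
    printf("\n");
}

int main()
{
    printCadence(24.0, 60.0); // 3 2 3 2 ...      classic 3:2 pulldown
    printCadence(25.0, 60.0); // 2 3 2 3 2 2 3 .. irregular cadence -> PAL judder on 60 Hz
    printCadence(25.0, 50.0); // 2 2 2 2 ...      perfectly even on a 50 Hz display
}
```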
 
Maybe this is only relevant because variable rate shading can't become the norm when the writing is on the wall about GPU-driven rendering, nor will there be PC titles to profit from VRS, with UE4 titles drying up and UE5 going all-in on GPU-driven/compute? Yet they spent years hyping VRS that's supposedly superior to motion estimation? Oh well...
 
Maybe this is only relevant because variable rate shading can't become the norm when the writing is on the wall about GPU-driven rendering, nor will there be PC titles to profit from VRS, with UE4 titles drying up and UE5 going all-in on GPU-driven/compute? Yet they spent years hyping VRS that's supposedly superior to motion estimation? Oh well...

?????
wut
 
VRS should give wiggle room wrt. framerate. Except VRS seems to be on its way out because https://vkguide.dev/docs/gpudriven/gpu_driven_engines/
In my experience, these aren't equivalent features. VRS is for varying the shading rate, which in particular alleviates GPU compute bottlenecks; GPU-driven engines reduce the CPU cost of sending many draw calls to the GPU, which in turn resolves bottlenecks around many small draw calls.

VRS via compute shaders could replace VRS hardware. However, they are both still useful, as one uses the compute pipeline and the other uses the 3D pipeline. Having VRS as an option in both pipelines is fairly useful for developers in terms of flexibility.
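For reference, the hardware path is what Vulkan exposes via VK_KHR_fragment_shading_rate; a minimal per-draw use looks roughly like this sketch (assuming the extension and the pipelineFragmentShadingRate feature are enabled, and the pipeline was created with the corresponding dynamic state):

```cpp
// Sketch of the hardware VRS path in Vulkan (VK_KHR_fragment_shading_rate).
// Assumes the pipeline was created with VK_DYNAMIC_STATE_FRAGMENT_SHADING_RATE_KHR.
#include <vulkan/vulkan.h>

void drawWithCoarseShading(VkCommandBuffer cmd)
{
    // Shade once per 2x2 pixel block for everything drawn after this call.
    VkExtent2D fragmentSize { 2, 2 };
    VkFragmentShadingRateCombinerOpKHR combiners[2] = {
        VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR,  // combine with per-primitive rate
        VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR   // combine with attachment rate
    };
    vkCmdSetFragmentShadingRateKHR(cmd, &fragmentSize, combiners);

    // ... vkCmdBindPipeline / vkCmdBindDescriptorSets / vkCmdDraw as usual ...
}
```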
 
In my experience, these aren't equivalent features. VRS is for varying the shading rate, which in particular alleviates GPU compute bottlenecks; GPU-driven engines reduce the CPU cost of sending many draw calls to the GPU, which in turn resolves bottlenecks around many small draw calls.

VRS via compute shaders could replace VRS hardware. However, they are both still useful, as one uses the compute pipeline and the other uses the 3D pipeline. Having VRS as an option in both pipelines is fairly useful for developers in terms of flexibility.
I'd expect software "VRS" to lead into stochastic and intertwined implementations and become a new entry of its own, with HW VRS falling out of favor as we speak.
 
I'd expect software "VRS" to lead into stochastic and intertwined implementations and become a new entry of its own, with HW VRS falling out of favor as we speak.
Until the 3D pipeline falls out completely, there will always be a place for hardware-accelerated features.
Compute shaders are certainly incredible tools, but as we have seen, pure compute renderers have some challenges with specific tasks that are ultimately better suited to the traditional 3D pipeline. Thus having these features available in the 3D pipeline, like VRS, is still important. As long as it works well and performs well, there is a place for it. It doesn't need to be leveraged excessively, and it's not meant to be a game changer, but it can certainly help smooth out situations.
 
Can you explain that more? It goes way over my head but sounds interesting
Yeah, I thought it was obvious.
I'll explain using the outdated example of OpenGL accumulation buffers, which were used in the last century, I would say.
The typical application was AA and motion blur, and it worked by rendering many frames. Each frame has the same subpixel jitter we now use for TAA to achieve AA, and each frame is also rendered at its own point in time to subdivide the duration between two shown frames.
The accumulation buffer was then used to sum up all those (sub)frames and calculate the averaged result, which was then displayed (or stored, since this was more useful for offline rendering than realtime).
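For anyone who never saw it, the legacy API really was that simple. A minimal sketch, with renderScene() as a hypothetical callback that draws the scene with the given subpixel jitter at the given point in time:

```cpp
// Legacy OpenGL accumulation buffer sketch: average N jittered subframes
// for AA + motion blur. Assumes a pixel format with an accumulation buffer.
#include <GL/gl.h>

// Radical inverse (Halton sequence), used here for subpixel jitter.
static float halton(int i, int base)
{
    float f = 1.0f, r = 0.0f;
    while (i > 0) { f /= base; r += f * (i % base); i /= base; }
    return r;
}

void renderAccumulated(int n, float frameStart, float frameDuration,
                       void (*renderScene)(float jitterX, float jitterY, float time))
{
    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < n; ++i)
    {
        float jitterX = halton(i + 1, 2) - 0.5f;           // subpixel offset for AA
        float jitterY = halton(i + 1, 3) - 0.5f;
        float time    = frameStart + frameDuration * (i + 0.5f) / n; // subdivide the frame's duration

        renderScene(jitterX, jitterY, time);
        glAccum(GL_ACCUM, 1.0f / n);   // add this subframe, pre-weighted
    }
    glAccum(GL_RETURN, 1.0f);          // write the averaged result back to the framebuffer
}
```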

I have used the same method for offline renderings at work. 3ds Max had a very fast renderer using rasterization,
so it could not jitter time per pixel like a raytracer can. So to get motion blur, the same method of accumulating whole frames at subdivided times was used.
Usually I used 10 subframes to get a single final frame, which was smooth enough. It causes banding between the subframes for fast-moving objects, which was visible but no real issue.

Nowadays, using high-end GPUs, we often see frame rates of 200 - 300 fps, at least for older games. New ones are still often over 120.
And with a 60 Hz display there is no benefit from that.
But if we accumulated 2-4 frames, each one having fake post-process motion blur and TAA, we would get smoother motion and better AA on our low-Hz displays, for the games where the GPU can do this.
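As a sketch of that idea (renderSubframe() and present() are hypothetical engine hooks; only the loop structure matters):

```cpp
// Sketch: spend surplus GPU headroom on accumulated subframes instead of
// uncapped fps. The blend is a plain average, same as the old accumulation buffer.
#include <algorithm>
#include <vector>

struct Image { std::vector<float> rgb; };

void presentAccumulated(double gpuFps, double displayHz,
                        Image (*renderSubframe)(double time, int index),
                        void (*present)(const Image&),
                        double frameStart, double frameDuration)
{
    // e.g. a 240 fps GPU on a 60 Hz display -> 4 subframes per displayed frame.
    int k = std::clamp((int)(gpuFps / displayHz), 1, 4);

    Image accum = renderSubframe(frameStart + frameDuration * 0.5 / k, 0);
    for (int i = 1; i < k; ++i)
    {
        Image sub = renderSubframe(frameStart + frameDuration * (i + 0.5) / k, i);
        for (size_t p = 0; p < accum.rgb.size(); ++p)
            accum.rgb[p] += sub.rgb[p];
    }
    for (float& c : accum.rgb)
        c /= (float)k;

    present(accum);
}
```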

It's low effort, and people like me would be happy, since I'll not replace my old display before it's broken.
And in general I want a constant framerate first, and a high constant framerate second. I'm not really convinced about VRR. Though I have not yet seen it in action at all. :)
 
VRS should give wiggle room wrt. framerate. Except VRS seems to be on its way out because https://vkguide.dev/docs/gpudriven/gpu_driven_engines/
This post about GPU-driven rendering and culling has zero overlap with VRS -- different techniques, targeting different bottlenecks, running in different parts of the pipeline. GPU-driven culling reduces CPU load, it (profoundly) reduces the time spent sending information between the CPU and the GPU, and the compute-shader culling significantly reduces the vertices rendered. If your shaders are significantly complicated, you'd still need a technique to address fragment shading, like VRS, a lower base resolution, etc.

(Great post series if you want to learn Vulkan, though!)
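To make the contrast concrete: the CPU side of the GPU-driven part boils down to something like the sketch below. A culling compute shader fills the draw and count buffers earlier in the frame, and none of it touches fragment shading, which is exactly why it doesn't replace VRS. Buffers and pipeline setup are assumed to exist already.

```cpp
// Sketch of the CPU side of GPU-driven rendering: one indirect draw consumes a
// draw list written by a culling compute shader. drawBuffer holds
// VkDrawIndexedIndirectCommand entries, countBuffer holds the number of
// surviving draws; both are assumed to be filled by the compute pass.
#include <vulkan/vulkan.h>

void recordGpuDrivenDraws(VkCommandBuffer cmd,
                          VkBuffer drawBuffer, VkBuffer countBuffer,
                          uint32_t maxDraws)
{
    // ... barrier here so the compute-written buffers are visible to the draw ...

    vkCmdDrawIndexedIndirectCount(cmd,
                                  drawBuffer, 0,   // command buffer + offset
                                  countBuffer, 0,  // draw count + offset
                                  maxDraws,
                                  sizeof(VkDrawIndexedIndirectCommand));
}
```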
 
VRS should give wiggle room wrt. framerate. Except VRS seems to be on its way out because https://vkguide.dev/docs/gpudriven/gpu_driven_engines/
I did not read the whole link, but I don't see how VRS conflicts with GPU-driven rendering (assuming that's what you think).
Afaict, it's totally possible to do all related tasks on the GPU: analyzing the frame and setting up the VRS granularity per tile for the next frame, assuming it will be similar.
I also think that's the general way to use it, no matter how much GPU-driven the engine is.
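Roughly, the per-tile analysis I mean would look like this sketch; in practice it runs as a compute shader writing the shading-rate attachment, and the tile metrics and thresholds here are made-up illustration values:

```cpp
// Sketch of the per-tile analysis feeding a VRS rate image: look at how much
// each tile moved (or how low-contrast it is) in the previous frame and pick a
// coarser rate where nobody will notice.
#include <cstdint>
#include <vector>

void buildRateImage(const std::vector<float>& tileMotionPixels, // avg motion per tile, in pixels
                    const std::vector<float>& tileContrast,     // avg luma contrast per tile
                    std::vector<uint8_t>& rateImage)            // 0 = 1x1, 1 = 2x2, 2 = 4x4
{
    for (size_t t = 0; t < rateImage.size(); ++t)
    {
        if (tileMotionPixels[t] > 16.0f || tileContrast[t] < 0.02f)
            rateImage[t] = 2;        // fast-moving or flat tile: shade 4x4
        else if (tileMotionPixels[t] > 4.0f || tileContrast[t] < 0.1f)
            rateImage[t] = 1;        // shade 2x2
        else
            rateImage[t] = 0;        // full rate
    }
}
```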

But recently there was the case of some game, iirc Dead Space Remake, where the use of VRS caused degraded quality for upscalers.
So after a patch, VRS is now disabled in case upscalers are enabled.
I imagine the lower resolution confuses things like DLSS / FSR, and if you use those upscalers, they give bigger wins than VRS.
So that's maybe the reason why the hype about VRS has gone completely quiet. But I'm not sure about anything.
 
So after a patch, VRS is now disabled in case upscalers are enabled.
I imagine the lower resolution confuses things like DLSS / FSR, and if you use those upscalers, they give bigger wins than VRS

That's my point: both just add up as undersampling, and I'd expect more from a compute implementation. A rather localized variant vs. frame-wide; the former property would make HW VRS preferable to dynamic scaling, except the results weren't there (and the push for variable framerate maybe even hurt this as a goal).
 
That's my point: both just add up as undersampling, and I'd expect more from a compute implementation. A rather localized variant vs. frame-wide; the former property would make HW VRS preferable to dynamic scaling, except the results weren't there (and the push for variable framerate maybe even hurt this as a goal).
Do you think frame interpolation will be one of the techniques in the coming consoles? Let's say lock a game at 60 and interpolate any dropped frames, instead of lowering graphical features to reach 60 fps?
 
Do you think frame interpolation will be one of the techniques in the coming consoles? Let's say lock a game at 60 and interpolate any dropped frames, instead of lowering graphical features to reach 60 fps?
No, because that's frame-wide. I'd expect some local preservation vs. current methods, something along the lines of:
Ray Tracing Gems II, chapter: Temporally Reliable Motion Vectors (...)


With the "reliability" / "deeper exploration" being lost if a frameskip occurred, so instead you lose out on local features.
 
We're not talking about the same thing. I'm not talking about geometry triangles or edges; I'm talking about the final rendered image in a frame buffer, stored as a bitmap, being transformed into a vector image instead. This would kill aliasing, since vectors have infinite resolution.
It would kill aliasing only as well as your reconstruction works. Notice that you generate your vector data from aliased input, so the aliasing won't go away just because you now represent the same data differently.
 
I think you misunderstand. No-one wants the hit to latency. However, if you want smoother visuals and the game is only capable of managing 30 fps, the only solution is frame interpolation, and at the moment the only place that happens is on the TV. If choosing between motion upscaling on the TV and in the game, surely the latter is better? Obviously a higher initial framerate is the best option, but where you have to render at lower framerates, interpolation is better than plain 30 fps.

Doesn't work for VR, though. Whether it is overhyped or not, I'd really prefer game devs not to descend into development patterns which are fundamentally incompatible with it.

I'd rather see IBR based methods using up to date motion vectors to create intermediate frames.
 
I'm confused. What are you suggesting in 'IBR based methods using motion vectors' that's different to what's being asked for from in-game frame interpolation?
 
Up-to-date motion vectors from the engine for the intermediate frame. So the engine does input, animation and physics; only the rendering is half-assed. Extrapolation, not interpolation.
 
Up-to-date motion vectors from the engine for the intermediate frame. So the engine does input, animation and physics; only the rendering is half-assed. Extrapolation, not interpolation.
If a game were super fragment-shader bound and the vertex shader and CPU were just sitting around idle for most of the frame, then running gameplay code, physics, and vertex shaders every other frame to get updated motion vectors would be a very effective use of the hardware. But there just aren't any games bottlenecked in that way. Extrapolation is the best thing affordable.
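A sketch of the loop structure being discussed, with all engine hooks hypothetical: simulation runs every display frame, full rendering only every other frame, and the in-between frame reprojects the last rendered image with fresh motion vectors:

```cpp
// Sketch of extrapolation with up-to-date motion vectors. All functions are
// hypothetical engine hooks; only the loop structure is the point.
struct Image {};
struct MotionVectors {};

Image         renderFull(double time);              // full raster/shade pass
MotionVectors renderMotionVectorsOnly(double time); // cheap geometry-only pass
Image         reproject(const Image& src, const MotionVectors& mv);
void          simulate(double time);                // input + animation + physics
void          present(const Image& img);

void runLoop(double displayDt)
{
    double time = 0.0;
    Image lastFull{};
    for (long frame = 0; ; ++frame, time += displayDt)
    {
        simulate(time);                              // never skipped, so latency stays low

        if (frame % 2 == 0)
        {
            lastFull = renderFull(time);
            present(lastFull);
        }
        else
        {
            MotionVectors mv = renderMotionVectorsOnly(time);
            present(reproject(lastFull, mv));        // extrapolated in-between frame
        }
    }
}
```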
 
Interpolation is not affordable for VR. Extrapolation is hard and not just an easy bolt-on for lazy developers.

Extrapolation will probably have to wait for pure ray-tracing engines; those are far more suited to sample reuse than hybrid engines. You can throw up-to-date samples at deocclusion, use a small set of real samples to detect when image-based approaches to shading/shadow movement screw up and selectively add more samples, etc.
 
Extrapolation will probably have to wait for pure ray-tracing engines; those are far more suited to sample reuse than hybrid engines. You can throw up-to-date samples at deocclusion
This idea could already be explored with hybrid: rasterize every 2nd frame, extrapolate the frames in between, and use RT to resolve failures.
But I'm not sure if that's practical; it depends on the cost of rasterizing one frame vs. reprojection, BVH update and tracing for the second. But it would be interesting to see.
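A sketch of the "RT resolves failures" part, with traceAndShade() and the threshold purely hypothetical: after reprojection, only pixels with no valid source (deocclusion) or an implausible depth get a real traced sample:

```cpp
// Sketch: fill in the places where extrapolation broke by tracing rays there.
#include <cmath>
#include <cstdint>
#include <vector>

struct Pixel { float r, g, b; bool valid; float depth; };

void fillExtrapolationFailures(std::vector<Pixel>& extrapolated,
                               const std::vector<float>& expectedDepth,
                               Pixel (*traceAndShade)(uint32_t pixelIndex))
{
    const float depthTolerance = 0.01f;   // made-up threshold
    for (size_t i = 0; i < extrapolated.size(); ++i)
    {
        Pixel& p = extrapolated[i];
        bool hole      = !p.valid;                                        // nothing reprojected here
        bool depthFail = std::fabs(p.depth - expectedDepth[i]) > depthTolerance;
        if (hole || depthFail)
            p = traceAndShade((uint32_t)i);   // spend a real sample only where reuse broke
    }
}
```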

Regarding sample reuse, other and more robust caching techniques are still an option too, e.g. texture-space shading.
If we could decouple lighting properly, generating frames would ideally be so cheap that upscaling isn't needed. But we can still do both, ofc.

Foveated rendering is also related, and it might enable a much larger reduction of pixels to be rendered than upscaling.
But the question is how acceptable aliasing and flicker are in peripheral regions. We might need high quality there, and temporal smoothing contradicts our quick perception of movement in the periphery.
So the win is surely much smaller than hoped, but still interesting.
If it works, maybe cameras on flatscreens to detect eye focus wouldn't be silly either.
 