Realtime frame interpolation upscaling 30fps to 60fps

At this point it's more to see whether such a system can be developed to the point where the typical user does not notice any artifacts. A proof of concept, basically.
If you peeps remember the old Killer Instinct arcade games, you'll recall that these had pre-rendered side-scrolling backgrounds for most stages with (incidentally, also pre-rendered) sprites layered on top.

Now, as it happened, the game updated sprites (mostly) at 60fps, because that's what you need for a fast-paced fighting game, but the game didn't have the memory and storage to hold a separate frame of pre-rendered background animation for every single pixel of player movement. So it would essentially treat each frame like a regular fighting game background and scroll it horizontally within a small span of movement, then flip to a new frame when the players moved outside these bounds.

Old tech, and not similar fundamentally, although similar enough in practice. :)
 
Have you guys ever considered a hybrid approach where you split the scene in two, near-z stuff and far-z stuff, and render the far-z stuff at a much lower frame rate and interpolate that to 60fps, but render the near-z stuff at full 60fps?
Yes, we tried that too in Trials HD. But it doesn't produce a very stable frame rate, since a nearby object can suddenly cover up the whole screen, and you must process all the costly material/light equations for a huge number of pixels. The number of far-away pixels is not constant, so you cannot rely on it if you want to achieve sync-locked 60 fps (or 30 fps).

Far-away pixels are generally easier to re-project, because of less perspective distortion. However, the huge majority of nearby pixels are also easy to re-project, once you know which ones. Basically you must be able to distinguish these pixels fast, and regenerate only the parts of the screen that exceed the allowed error metric. Of course, if you knew the target exactly, you would already have your perfect data, and you could use it instead of some approximate re-projection :) . So you must somehow (quickly) approximate the target and compare against it, and you must be able to regenerate small regions of data quickly. Lots of requirements, but not impossible ones for current highly programmable GPUs.
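To make the idea concrete, here is a rough CPU-side sketch in Python/NumPy (purely illustrative; the tile size, threshold, and function names are my own assumptions, not anything from a real engine) of flagging only the screen tiles whose reprojection error exceeds the allowed metric, so that just those small regions get regenerated:

```python
import numpy as np

TILE = 8  # hypothetical regeneration granularity (8x8 pixel tiles)

def tiles_to_regenerate(reprojected, approx_target, max_error=0.05):
    """Flag the screen tiles whose reprojection error exceeds the allowed metric.

    reprojected   : HxW luminance of the previous frame warped to the new camera
    approx_target : HxW cheap approximation of the new frame (e.g. a low-res render)
    Returns a boolean mask with one entry per tile; True = re-render this tile.
    """
    h, w = reprojected.shape
    error = np.abs(reprojected - approx_target)          # per-pixel error estimate
    error = error[:h - h % TILE, :w - w % TILE]          # crop to whole tiles
    error = error.reshape(h // TILE, TILE, w // TILE, TILE)
    tile_error = error.mean(axis=(1, 3))                 # average error per tile
    return tile_error > max_error
```

Only the flagged tiles would be fully re-rendered; the rest keep their reprojected pixels.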
 
MfA said:
AR is still very much a hardware problem ... projecting images at arbitrary DoF and high FoV through a pair of extremely light see through glasses? Science fiction.
I've seen that science fiction at work in 2000 at an amusement park. Granted, the lightweight glasses were wired to some sort of receiver belt you had to wear, and working with a square room and humans restricted to the center of it simplifies a lot of AR problems, but it was over a decade ago - and the illusion of being inside (and somewhat interacting with) a hologram was impressive.
 
AR is still very much a hardware problem ... projecting images at arbitrary DoF and high FoV through a pair of extremely light see through glasses? Science fiction.
Narrow DoF (more blurred area) should be faster than wide DoF (no depth blurring at all), since a more blurred signal requires fewer samples. The same is true for motion blur. In a well-designed future engine both should reduce processing costs instead of increasing them. It's just silly how current engines first generate a perfectly sharp scene (spending a huge amount of cycles and BW doing it), and then intentionally blur most of it :)
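As a hedged illustration of "more blur needs fewer samples" (the mapping and the numbers below are made up, just to show the shape of the idea, not a production heuristic):

```python
def shading_samples(coc_radius_px, base_samples=8):
    """Pick a per-pixel shading sample count from the circle-of-confusion radius.

    Heavily blurred pixels carry only low-frequency content, so they can get by
    with fewer shading samples; in-focus pixels keep the full count.
    """
    if coc_radius_px <= 0.5:              # effectively in focus
        return base_samples
    # halve the sample count each time the blur radius doubles, floor at 1 sample
    reduction = int(coc_radius_px).bit_length()
    return max(1, base_samples >> reduction)
```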
 
May I ask whether there is any obvious advantage or disadvantage to doing frame interpolation on the game console rather than on the HDTV? Can the PS4 do better interpolation than the Xreality pro of a Bravia TV? Thx!
 
May I ask whether there is any obvious advantage or disadvantage to doing frame interpolation on the game console rather than on the HDTV? Can the PS4 do better interpolation than the Xreality pro of a Bravia TV? Thx!
The game has all the scene data, including depth and motion vectors. The TV only has 2D images to compare and interpolate/extrapolate. A game tweening its own frames can achieve substantially better quality.
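A minimal sketch of what that console-side advantage looks like, assuming access to the engine's per-pixel velocity buffer (all names are illustrative, and forward warping with unfilled holes is a stand-in for whatever a real implementation would do):

```python
import numpy as np

def tween_frame(prev_frame, motion_vectors, t=0.5):
    """Warp the previous frame toward the next one along game-supplied motion vectors.

    prev_frame     : HxWx3 color buffer of the last fully rendered frame
    motion_vectors : HxWx2 screen-space motion in pixels per frame, read from the
                     engine's velocity buffer rather than estimated from images
    t              : how far toward the next frame to move (0.5 = midpoint)
    A TV has to estimate the motion from two 2D images; the game already has the
    exact vectors, which is where the quality advantage comes from.
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dst_x = np.clip((xs + motion_vectors[..., 0] * t).round().astype(int), 0, w - 1)
    dst_y = np.clip((ys + motion_vectors[..., 1] * t).round().astype(int), 0, h - 1)
    out = np.zeros_like(prev_frame)
    out[dst_y, dst_x] = prev_frame[ys, xs]   # forward scatter; holes still need filling
    return out
```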

My question is, following on from Sebbbi's comments, how far are we from progressive shaders that can adapt to dynamic quality settings? Could a game be shaded with shaders that automatically tone down the visuals when depth exceeds certain DoF and motion blur thresholds?
 
Could a game be shaded with shaders that automatically tone down the visuals when depth exceeds certain DoF and motion blur thresholds?

Not if you can't spawn distinct shaders per screen-pixel (after rasterizer).
If you branch in a pixel-shader, then you have to be lucky enough that all the instances of that shader take the same branch on the whole current wavefront, otherwise all instances pay _both_ branches, which makes such a shader in effect even slower.
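A toy cost model of that divergence penalty (the numbers are assumed, just to illustrate why a per-pixel branch can end up paying for both sides):

```python
def wavefront_cost(branch_taken, cost_a, cost_b):
    """Toy cost model for a 64-thread wavefront hitting an if/else.

    branch_taken : list of 64 booleans, True = thread takes branch A
    If every thread agrees, only one side is paid; any divergence means the
    wavefront executes both sides back to back, with inactive lanes masked off.
    """
    if all(branch_taken):
        return cost_a
    if not any(branch_taken):
        return cost_b
    return cost_a + cost_b  # divergent: both paths are serialized
```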
You have the possibility of splitting depth into "planes" through the stencil buffer, and then iterating over them using a different pixel shader for each "plane". But the stencil buffer is worth more than gold; it's unlikely that a renderer doesn't already occupy it for other things (and have other, more important algorithms just waiting to take that spot). And it's still multi-pass, thus slower than optimal.

If the hardware could spawn distinct pixel shaders based on conditions on-the-fly (after rasterizer), a whole lot of nice algorithms would become feasible.
 
If the hardware could spawn distinct pixel shaders based on conditions on-the-fly (after rasterizer), a whole lot of nice algorithms would become feasible.
It would be nice to select the pixel shader per pixel. The simplest way to sidestep this limitation is to select the pixel shader (branch) at 8x8 tile granularity (64 threads in a GCN wave). This way you don't need to worry about branch serialization costs. It's also quite handy to be able to generate processing path masks at 8x8-tile resolution.
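A small sketch of that tile-granularity selection, assuming a per-pixel flag already exists (the helper name and the "any pixel promotes the tile" rule are my own choices for illustration):

```python
import numpy as np

TILE = 8  # matches a 64-thread GCN wavefront (8x8 pixels)

def tile_path_mask(per_pixel_flag):
    """Collapse a per-pixel branch decision into one decision per 8x8 tile.

    per_pixel_flag : HxW boolean, True = pixel would like the expensive path
    A tile takes the expensive path if any of its pixels needs it, so every
    thread in the corresponding wavefront executes the same branch and no
    divergence penalty is paid. The mask is 8x smaller in each dimension.
    """
    h, w = per_pixel_flag.shape
    tiles = per_pixel_flag[:h - h % TILE, :w - w % TILE]
    tiles = tiles.reshape(h // TILE, TILE, w // TILE, TILE)
    return tiles.any(axis=(1, 3))
```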

Of course you can also do better than that. Bin (or sort) pixels by branch, and process each branch separately with a big compute kernel. This might of course require more data movement or cause worse data access patterns (since it doesn't keep screen locality that well).
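And a rough sketch of the binning variant (again illustrative; a real implementation would build these lists on the GPU with prefix sums rather than a CPU-side sort):

```python
import numpy as np

def bin_pixels_by_branch(per_pixel_branch):
    """Sort pixel coordinates into one bin per shading path.

    per_pixel_branch : HxW integer map, value = which shading path a pixel wants
    Each bin can then be dispatched as one big compute kernel over a flat list
    of pixel coordinates, at the cost of losing some 2D screen locality.
    """
    h, w = per_pixel_branch.shape
    flat = per_pixel_branch.ravel()
    order = np.argsort(flat, kind='stable')       # group identical branch ids together
    ys, xs = np.divmod(order, w)                  # back to 2D pixel coordinates
    bins = {}
    for branch in np.unique(flat):
        sel = flat[order] == branch
        bins[int(branch)] = np.stack([ys[sel], xs[sel]], axis=1)
    return bins
```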
 
Makes you wonder what a fully programmable architecture like Larrabee could do as regards foveated, adaptive, economical rendering. GPUs are all about brute-forcing the solution, and we're still tied to very limited rendering practices in reality.
 
If you re-read my post you'll see I already made a distinction between interpolation and extrapolation ...

My bad, I misunderstood you badly, now I see what you meant. Seems like we agree then.
 
VR tech is advancing all right, but it's the focus distance that's the currently unsolvable issue that, in a sense, "breaks" the immersion. I'd like to see somebody find a solution for that some day.
 
Bumping my favourite topic: frame interpolation!

At 60fps, the differences between frames are often so minuscule that it begins to make no sense to me to recalculate ALL those intricate shaders you ran the last frame. I would have hoped the hardware had specialized paths for this type of work (e.g. rendering only the parts that require correction by 1-bit masking the framebuffer). Is there even such a technique? If there were, I guess it would also allow for adaptive MSAA approaches (again, is there adaptive MSAA?) at high-frequency shader detail instead of just polygon edges.
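For what it's worth, the 1-bit mask itself is easy to express in software; a hedged sketch of the idea (names and threshold invented purely for illustration, and what feeds the per-pixel change estimate is left open):

```python
import numpy as np

def dirty_pixel_mask(input_delta, threshold=0.01):
    """Build a 1-bit-per-pixel 'needs reshading' mask from per-pixel input change.

    input_delta : HxW estimate of how much each pixel's shading inputs moved
                  between frames (motion vector length, depth delta, light change, ...)
    Returns the boolean mask and a packed 1-bit-per-pixel copy (8 pixels per byte),
    the sort of compact framebuffer mask speculated about above. Pixels not set in
    the mask would simply reuse last frame's shaded value.
    """
    mask = input_delta > threshold
    packed = np.packbits(mask, axis=1)
    return mask, packed
```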
 
wha? In order to interpolate frame n, you need n-1 and n+1, which puts you 1 frame behind...how is this even logical...
 
wha? In order to interpolate frame n, you need n-1 and n+1, which puts you 1 frame behind...how is this even logical...

As mentioned a few times in the thread, and in the technique linked in the OP, you don't need n+1, because you can actually extrapolate quite well, since the game has access to much more motion information than a TV does.
 
I'm wondering how the game could know before a frame is rendered which pixels change enough that they need to be re-rendered... That seems to me as if it would require the computer to be omniscient. ;)

I suppose it could simply run the geometry workload and re-texture the scene using pixels from the previous frame (somehow identifying areas where image parallax reveals new pixels of an object and re-texturing only those - sounds complicated), but such a technique would be problematic with things like foliage (or anything else that moves independently of the player - including other actors, vehicles and so on, basically anything but the ground), so you'd have to separate all that stuff out first, requiring extra passes and additional blending/scene compositing... Would you really gain all that much in the end?
 
I'm wondering how the game could know before a frame is rendered which pixels change enough that they need to be re-rendered... That seems to me as if it would require the computer to be omniscient. ;)

I suppose it could simply run the geometry workload and re-texture the scene using pixels from the previous frame (somehow identifying areas where image parallax reveals new pixels of an object and re-texturing only those - sounds complicated), but such a technique would be problematic with things like foliage (or anything else that moves independently of the player - including other actors, vehicles and so on, basically anything but the ground), so you'd have to separate all that stuff out first, requiring extra passes and additional blending/scene compositing... Would you really gain all that much in the end?

It could be possible to use some sort of motion prediction. I've seen this used in online games to smooth out movement.

I'm curious though, are the new consoles capable of frame interpolation natively? Perhaps it was a factor in both Sony and MS not going with more powerful graphics hardware lol
 
It could be possible to use some sort of motion prediction.
Yes, but would this be precise enough? If it estimates wrong you'd have weird crawling textures and shimmering along edges and all sorts of weirdness.

I'm curious though, are the new consoles capable of frame interpolation natively?
It's never been mentioned on any spec sheet for either console, nor by any developers I think. It looks like they support it in the sense that if you program the hardware to do it, then they support it. :)
 
Even extrapolation would only make sense if the compute cost is significantly lower than a real frame, but then again the design is always trying to budget roughly the same amount of time per frame, so saving time on each even frame only makes sense if each odd frame's rendering time is not impacted... I think.
 
Even extrapolation would only make sense if the compute cost is significantly lower than a real frame, but then again the design is always trying to budget roughly the same amount of time per frame, so saving time on each even frame only makes sense if each odd frame's rendering time is not impacted... I think.
Yes, the uneven cost of interpolated frames is a real problem. Games should always aim for a stable frame rate (not the best average frame rate). If you program your game to interpolate every other frame, your game will have lots of micro stutter (just like bad SLI/crossfire implementations). Delaying the cheap frame (vsync) helps a bit, but that pretty much spoils the idea, because you are just waiting for the vsync in the cheap frame (there's no gain in idling the GPU for most of the frame). Of course you can do tricks, and issue some processing for the next frame at the end of the cheap frame, but it's quite hard to load balance everything perfectly.

It's better to reproject every frame partially (the sections with a low error metric) and refresh (fully render) only the parts that need it. This results in a much smoother frame rate, assuming you can dynamically adjust the error metric to produce a constant processing cost.
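One simple way to picture that dynamic adjustment is a small feedback controller on the error threshold (a sketch under assumed names; the gain and limits are arbitrary):

```python
def adjust_error_threshold(threshold, refreshed_tiles, budget_tiles, gain=0.1):
    """Nudge the reprojection error threshold toward a constant per-frame cost.

    threshold       : current allowed error before a tile is fully re-rendered
    refreshed_tiles : how many tiles were re-rendered this frame
    budget_tiles    : how many tiles the frame budget can afford
    A proportional controller: refreshing more tiles than the budget allows
    raises the threshold (tolerate more error); coming in under budget lowers
    it, spending the spare time on image quality.
    """
    overshoot = (refreshed_tiles - budget_tiles) / max(budget_tiles, 1)
    new_threshold = threshold * (1.0 + gain * overshoot)
    return max(1e-4, new_threshold)  # keep the threshold positive
```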
 