Realtime frame interpolation upscaling 30fps to 60fps

Not if you interrupt/flip during vblank. (Of course, displays don't have vblank anymore, but you know what I mean...)
But then you're not flipping at an arbitrary point.

That is, if you wait for vsync, you are tied to refresh periods but have no tearing. If you flip buffers at any point during scan-out, you aren't tied to refresh periods but you tear the screen. You can't have it any other way.
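To make the trade-off concrete, here's a minimal sketch using pygame (my choice of library, not anything from the posts above; note the vsync flag is only a request, so whether it's honoured depends on the driver):

```python
import pygame

# Minimal sketch of the trade-off: vsync=1 asks the driver to block the flip
# until the next refresh boundary (no tearing, frame rate tied to a divisor of
# the refresh rate); vsync=0 flips immediately (untied from refresh, but
# scan-out can switch buffers mid-screen and tear).
pygame.init()
screen = pygame.display.set_mode((1280, 720), pygame.SCALED, vsync=1)

running = True
hue = 0
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    hue = (hue + 1) % 256
    screen.fill((hue, 64, 128))   # full-screen colour change: tears if unsynced
    pygame.display.flip()         # blocks until vblank when vsync is honoured
pygame.quit()
```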
 
Slightly off-topic, but I found out yesterday that the WPF drawing engine for .NET on Windows platforms has a vsync hook, that is, an event you can subscribe to. If you draw things in that event, they are nicely 'vsynced' (e.g. changing the bg color there won't cause tearing).

The framerate is variable and can change dynamically depending on what you're doing. E.g. I was drawing an animated path, and I set it to progress one step for each draw. As the UI is well multi-threaded in WPF, I can at the same time start recording a new path (I'm using a stylus for this, with pressure detection and everything, pretty cool), and this sped up the 'framerate': to improve stylus input and UI feedback, it would draw (more than twice as) often.

What I did to properly time my animations, then, is record the last time I got the event, measure how much time has passed, and draw the timed animation steps that match that elapsed time. Not a perfect solution, but it gives a smooth playback that matches the speed at which I drew the animation quite well.
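Roughly what I mean, as a minimal sketch in plain Python rather than the actual WPF event handler (the class and names are made up for illustration):

```python
import time

class TimedPlayback:
    """Advance a recorded animation by however much wall-clock time has passed
    since the last draw event, so playback speed stays correct even when the
    callback rate varies."""
    def __init__(self, steps_per_second):
        self.steps_per_second = steps_per_second
        self.last_time = None
        self.position = 0.0            # fractional index into the recorded steps

    def on_draw_event(self):
        now = time.perf_counter()
        if self.last_time is not None:
            elapsed = now - self.last_time
            self.position += elapsed * self.steps_per_second
        self.last_time = now
        return int(self.position)      # which recorded step to draw this frame
```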

Anyway, it made me wonder if you could do something like that in a game rendering engine - probably not, so carry on. :D
 
But then you're not flipping at an arbitrary point.
My question wasn't about flipping at any arbitrary point, but whether you could flip (or rather, write to GPU registers) while the GPU was otherwise preoccupied running draw calls. And, as an extension, generate hardware interrupts as a function of "beam" position...

After all, you could very well have shader programs and whatnot running during the period traditionally known as vertical blank. And also, it's good not to have to spend a core spinning in a loop manually checking for blank, so we can flip buffers without tearing the screen.
 
120Hz will be the standard output for Project Morpheus even when the game is 60Hz. This has me wondering if Sony could do the same for non-VR games and make 60Hz the standard output for PS4 games, or even reproject 60FPS games to 120FPS for 120Hz monitors?

 
I think there have been a lot of discussions about reprojection on the forum before. I can't remember who was posting about it. Maybe sebbbi. Not sure who else.

I don't see why it wouldn't be doable for standard games if it's doable for VR.
 
I think there have been a lot of discussions about reprojection on the forum before. I can't remember who was posting about it. Maybe sebbbi. Not sure who else.

I don't see why it wouldn't be doable for standard games if it's doable for VR.

The 60FPS SharePlay that's coming with the next FW update also has me wondering if reprojection is being used to get the 60FPS after the data is sent to your PS4.
 
Reprojection error is tied to frame rate. The faster base frame rate you have, the smaller the difference between the frames. Reprojection for 30 fps source data is doable, but needs much more game specific trickery and tuning to look good compared to 60 fps+ source data. For example at 30 fps you clearly notice how the specular highlights and shadow edges are incorrectly reprojected for moving surfaces (as these things are not moving with the surface).
 
Btw, what exactly is reprojection? Is it just shifting the image using motion as a cue? Can the motion cue only come from an external device (like headset motion data), or can it also come from the game itself? If it takes cues from the game, does it only take the motion, or does it also consider the 3D world itself (using the depth/z buffer or anything else)? When I hear reprojection, the first thing I think of is creating separate images for the left and right eye (stereo rendering), but instead of doing double the workload, the engine renders once and creates the left and right images by reprojecting that render with a slight shift, using cues from the game engine.
As you can see, I'm confused. Can anyone help me with the terminology and what constitutes reprojection?
 
The Morpheus VR lead guy explained in an interview that it's like warping the previous frame to a predicted new frame, where the prediction comes from sampling the head movement very fast.

But take this post with a bucket of salt; my English is not top notch.
 
Reprojection error is tied to frame rate. The faster base frame rate you have, the smaller the difference between the frames. Reprojection for 30 fps source data is doable, but needs much more game specific trickery and tuning to look good compared to 60 fps+ source data. For example at 30 fps you clearly notice how the specular highlights and shadow edges are incorrectly reprojected for moving surfaces (as these things are not moving with the surface).

How would something like 48 fps rendering with a reprojected frame after every 4th frame for a 60 fps output work out? Or would that be too little of a gain to go through the whole reprojection thing? If so, would 40 fps rendering with a reprojected frame after every 2nd frame be worth it?

Depending on how much processing is needed for reprojecting, this seems like it could save about 1/4 or 1/3 of processing time for something most people probably wouldn't notice.
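Just to sanity-check the arithmetic behind those cadences (frame counting only, nothing about how the reprojection itself would be done):

```python
# Render N real frames per second and insert one reprojected frame after every
# k-th real frame; the display then shows N * (k + 1) / k frames per second.
def displayed_fps(rendered_fps, reproject_after_every_kth):
    k = reproject_after_every_kth
    return rendered_fps * (k + 1) / k

print(displayed_fps(48, 4))  # 60.0 -> 48 fps rendered, every 5th displayed frame reprojected
print(displayed_fps(40, 2))  # 60.0 -> 40 fps rendered, every 3rd displayed frame reprojected
print(displayed_fps(30, 1))  # 60.0 -> classic every-other-frame reprojection
```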
 
I think inserting a reprojected frame less often than every other frame wouldn't work (at least not for real-time rendering), because then the cost of rendering each real frame would have to be the same as if you weren't doing reprojection at all. For example, if I take a 10fps game and make it 20fps, I'm basically inserting the reprojected frame while the engine is busy rendering the next real frame. If I want to turn 15fps into 20fps, I need to insert 1 reprojected frame for every 3 real frames. The problem is that those 3 real frames would need to be rendered at 20fps cost anyway, otherwise it judders: frames 1 to 3 are displayed at 15fps pacing and frames 3 to 5 at 30fps pacing (because the gap between real frames stays the same, while the reprojected frame locally doubles the rate). See the little timing sketch below.
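A tiny sketch of the uneven pacing I mean (just counting timestamps; the numbers are only for illustration):

```python
# 15 fps source with one reprojected frame inserted after every 3rd real frame
# gives 20 displayed frames per second, but the gaps between displayed frames
# are uneven.
real_interval = 1.0 / 15               # time between real rendered frames
times = []
t = 0.0
for i in range(6):                     # six real frames
    times.append(t)                    # real frame shown
    if (i + 1) % 3 == 0:
        times.append(t + real_interval / 2)   # reprojected frame splits this gap
    t += real_interval

gaps = [round(b - a, 4) for a, b in zip(times, times[1:])]
print(gaps)   # mixes ~66.7 ms and ~33.3 ms gaps -> visible judder
```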
 
I think inserting a reprojected frame less often than every other frame wouldn't work (at least not for real-time rendering), because then the cost of rendering each real frame would have to be the same as if you weren't doing reprojection at all. For example, if I take a 10fps game and make it 20fps, I'm basically inserting the reprojected frame while the engine is busy rendering the next real frame. If I want to turn 15fps into 20fps, I need to insert 1 reprojected frame for every 3 real frames. The problem is that those 3 real frames would need to be rendered at 20fps cost anyway, otherwise it judders: frames 1 to 3 are displayed at 15fps pacing and frames 3 to 5 at 30fps pacing (because the gap between real frames stays the same, while the reprojected frame locally doubles the rate).

Could you go the way of Killzone and reduce the render time by striping the frames rendered, with the missing parts reprojected in? That way you still render at 60fps, just not at your target resolution.

This still means the CPU has to work to the target framerate, so it would be limited by your existing constraints.

Rendering more than 50% of the image would, I assume, make the effect more seamless and robust while still achieving the desired final framerate, CPU permitting.
 
For Morpheus: I can't imagine the system would be artificially limited to 60fps of rendering. Would it not make more sense for the PlayStation to render as fast as it's able (be that 60fps or 73fps) and have the external device add all the interim frames?
 
Shuu-san said that they encourage devs to render at 120 fps; the frame interpolation will still be active and give a better result.
 
I'd guess that 'better result' was for those occasions where the fps drops below 120. Otherwise what else can the interpolation be doing?
 
The Morpheus VR lead guy explained in an interview that it's like warping the previous frame to a predicted new frame, where the prediction comes from sampling the head movement very fast.

But take this post with a bucket of salt; my English is not top notch.
OK, that's great then, if true. It should mean no additional input lag.
 
I'd guess that 'better result' was for those occasions where the fps drops below 120. Otherwise what else can the interpolation be doing?
If they use the latest head orientation information for it, it should reduce head-tracking latency quite a bit even when rendering at 120Hz (it fixes the orientation change that happens during rendering).
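Something like this rotation-only warp, as a minimal numpy sketch of the general idea (not Sony's actual implementation; the intrinsics, poses and rotation convention here are assumptions for illustration):

```python
import numpy as np

# Rotation-only reprojection: warp the frame rendered with head orientation
# R_render so it matches the newer orientation R_display sampled just before
# scan-out. For pure rotation the image-space warp is a homography
# H = K @ R_delta @ K^-1, with K the intrinsics used to render the frame.
def rotation_reprojection_homography(K, R_render, R_display):
    R_delta = R_display @ R_render.T           # rotation from render pose to display pose
    return K @ R_delta @ np.linalg.inv(K)

# Example: 90-degree horizontal FOV at 1920x1080, head yawed 1 degree between
# render time and display time.
w, h = 1920, 1080
f = (w / 2) / np.tan(np.radians(90) / 2)
K = np.array([[f, 0, w / 2],
              [0, f, h / 2],
              [0, 0, 1.0]])
a = np.radians(1.0)
R_render = np.eye(3)
R_display = np.array([[np.cos(a), 0, np.sin(a)],
                      [0, 1, 0],
                      [-np.sin(a), 0, np.cos(a)]])
H = rotation_reprojection_homography(K, R_render, R_display)
p = H @ np.array([w / 2, h / 2, 1.0])          # where the old image centre lands
print(p[:2] / p[2])                            # shifted ~17 px horizontally
```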
 
If they use the latest head orientation information for it, it should reduce head-tracking latency quite a bit even when rendering at 120Hz (it fixes the orientation change that happens during rendering).
Given that advance updates due to the head moving will display parts of the image that were previously off screen, wouldn't that be hard to predict, or will they use some overscan to give them extra edge data to make this possible?

Or, with this being peripheral vision, is this a non-issue?
 
Given that advance updates due to the head moving will display parts of the image that were previously off screen, wouldn't that be hard to predict, or will they use some overscan to give them extra edge data to make this possible?

Or, with this being peripheral vision, is this a non-issue?

I think you can only move your head so much within a 60th of a second, and any new part that's not rendered will be in your peripheral vision. Any "black" part requiring an actual render can be filled with color from adjacent pixels, thereby eliminating any flicker (which would be very brief, as it would be a 120Hz flicker, but we detect flicker better in our peripheral vision).

I may have gotten this wrong, but from what I understood from the very little I've read on the subject, the extra frame inserted to reach 120Hz is just a translated/rotated version of the previous frame based on your head position. So the animations would still run at 60Hz if that's the game's output, but your "perceptual presence" in the game would be 120Hz. An extreme example would be moving your head around in a stop-motion animation that ran at 10fps: you would be seeing the identical animation of the characters, but you'd have the ability to look in and around, giving the feeling of presence even though the animation ran at 10fps.

So I think this technique used in Morpheus for upping to 120Hz is different from the technique that interpolates actual animation frames, and would not be used for upping 30fps games to 60fps, unlike the 30-to-60fps frame upscaling technique used in an unreleased Star Wars game prototype.

(This is not to say the basis of this reprojection technique could not be used for moving parts of the image; you'd need to either predict which parts will keep moving or interpolate between two fully rendered frames, with the latter adding latency.)
 