Sony VR Headset/Project Morpheus/PlayStation VR

They should patch Knack for Morpheus. The camera behaviour makes it an ideal candidate for third-person VR.
 
If they do 60 to 90, they get half a frame of additional maximum frame age. Since 60 to 120 already has some artifacts on Morpheus, I would guess that 45 would be pushing it too far.

60 is already pushing it too far according to Valve and Oculus; that's why both are aiming for a native 90fps refresh. So yeah, I agree 45fps average would be too little, and I don't think re-projection is a perfect substitute for properly rendered frames. My point, though, was that this isn't a technology unique to Sony. They are simply using movement data to recalculate an updated view of the previous frame. While on the PC it sounds like the target will be a native 90fps using re-projection (aka timewarp) to fill in the occasional missed frame, Sony intends to use it to push 60fps (and less) all the way up to 120fps by the sounds of it. An interesting approach born of having no alternative, I'd imagine.
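To illustrate, the core of that recalculation can be sketched in a few lines. This is a minimal orientation-only reprojection under a pinhole-camera assumption; the function and variable names are mine, not from any SDK, and real implementations warp a distortion mesh per eye rather than applying a single homography:

```python
import numpy as np

def reprojection_homography(K, render_rot, latest_rot):
    """3x3 warp that re-aims the previous frame at the newest head pose.

    K          -- eye-buffer camera intrinsics (3x3)
    render_rot -- camera-from-world rotation sampled when the frame was rendered
    latest_rot -- camera-from-world rotation sampled just before scan-out
    """
    r_delta = latest_rot @ render_rot.T  # rotation between the two poses
    # Translation is ignored entirely, which is why reprojected frames are
    # imperfect: anything with parallax (i.e. close objects) lands slightly wrong.
    return K @ r_delta @ np.linalg.inv(K)
```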

60 to 120: the additional age will alternate between 0ms and 16.7ms. Scene judder is 1:1 on a 60Hz cycle.
8.33ms average plus the 8.33ms scan-out delay to the OLED = 16.7ms

60 to 90: the additional age will alternate between 0ms, 22.2ms and 11.1ms. Scene judder is 2:1 on a 30Hz cycle.
11.1ms average plus the 11.1ms scan-out delay to the OLED = 22.2ms
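Those cadences are easy to sanity-check: the judder pattern repeats at the greatest common divisor of the render rate and the refresh rate. A quick sketch (judder_profile is my own naming):

```python
from math import gcd

def judder_profile(render_hz, refresh_hz):
    cycle_hz = gcd(render_hz, refresh_hz)  # judder pattern repeats at this rate
    refreshes = refresh_hz // cycle_hz     # refreshes per cycle
    frames = render_hz // cycle_hz         # unique rendered frames per cycle
    return cycle_hz, refreshes, frames

print(judder_profile(60, 120))  # (60, 2, 1) -> 1:1 cadence on a 60Hz cycle
print(judder_profile(60, 90))   # (30, 3, 2) -> 2:1 cadence on a 30Hz cycle
```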

I couldn't say how relevant the above would be, given that in both cases you're always getting a unique frame on every refresh; the only difference is that, on average, 2/3rds of them are real rendered frames in the 90fps example and only half are real in the 120fps example. It's not really relevant in either case though, since 60->90 isn't the intended use case for timewarp (although there's nothing stopping it being used that way as far as I'm aware, and the end result would likely be better than a locked or variable 60fps on a 90Hz screen).

Also, I don't know how sensitive we will be to scene judder; it should only affect close objects in VR. I know I can't stand 3:2 pulldown on TVs that can't do 24Hz correctly. This is going to be interesting.

There are quite a few issues with re-projection vs a properly rendered frame that make it questionable to use in every game to "double" the framerate, IMO. As a last-ditch effort to ensure no missed vsyncs when the framerate occasionally dips a little below the target it's a great idea, but I'm not so sure about the constant doubling of frame rates. But then I guess it's still better than a 60fps refresh that may still drop some frames.

https://www.oculus.com/blog/asynchronous-timewarp/
 
Didn't Sony say that devs must use the reprojection library all the time, even when rendering 120 to 120?
If I remember correctly, the goal was not necessarily to "double the frame rate" for free; it was to lower the input-to-photon latency, "input" being the HMD head movement.

There are two separate latencies:

1. Scene Setup Movement: input + scene setup + render + scan out + display
- Morpheus at 120 is fastest
- Oculus at 90 is slower
- Morpheus at 60-to-120 is slowest

2. HMD Angular Movement: input + reprojection + scan out + display
- Morpheus at 120-to-120 or 60-to-120 are equally fast
- Oculus at 90-to-90 is slower

Regardless, at the same async rendering frame rate, Morpheus will have an advantage of 2.8ms because of its faster scan-out.

This small 2.8ms is not a big enough advantage to compensate for the difference between 60fps and 90fps rendering. So 90 is better for scene movement, but it is inferior for HMD angular movements. The reprojection algorithms are not known, and Sony said there are ways to mitigate the artifacts caused by the scene movement and/or close objects. This remains to be seen, and it is sure to require more processing.

So far only Morpheus can double the frame rate in a way that doesn't induce visible judder. Judder occurs at the common denominator of the render cycle and the scan-out cycle, and it is visible if that frequency is below 60Hz. There might also be algorithms to reduce this, I don't know.

Sync output without any reprojection:

90Hz Oculus: 11.1ms render + 11.1ms scan-out = 22.2ms
60Hz Morpheus: 16.7ms render + 8.3ms scan-out = 25ms
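The same arithmetic in a few lines, assuming one full render period for the scene path, one output refresh for the reprojection path, and the scan-out times quoted above (11.1ms for the 90Hz panel, 8.3ms for the Morpheus OLED):

```python
def scene_path_ms(render_hz, scanout_ms):
    # 1. scene-setup movement: waits for a whole rendered frame
    return 1000.0 / render_hz + scanout_ms

def hmd_path_ms(output_hz, scanout_ms):
    # 2. HMD angular movement: the pose is re-sampled every refresh, so it
    # sits only one reprojected refresh plus scan-out away from photons
    return 1000.0 / output_hz + scanout_ms

print(scene_path_ms(90, 11.1))  # ~22.2ms, Oculus at 90
print(scene_path_ms(60, 8.3))   # ~25.0ms, Morpheus at 60
print(hmd_path_ms(120, 8.3))    # ~16.6ms, Morpheus 60-to-120 or 120-to-120
print(hmd_path_ms(90, 11.1))    # ~22.2ms, Oculus 90-to-90
```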
 
Didn't Sony say that devs must use the reprojection library all the time, even when rendering 120 to 120?

Yep, and that's also how timewarp will work as far as I'm aware, although I suspect the option to turn it off for sufficiently powerful systems will also exist.

If I remember correctly, the goal was not necessarily to "double the frame rate" for free; it was to lower the input-to-photon latency, "input" being the HMD head movement.

I'm not sure I see a distinction. The method they are using to reduce input latency (which is certainly required for VR at only 60fps, so no doubt the driving factor for the tech) is to "double the frame rate for free". It's likely that most PS4 games are going to run at 60fps, so the doubling of that through reprojection in order to reduce input latency is what we're going to get regardless of what the goal may have been. Except it's not really doubling the frame rate, because they're not new frames; they're the same frame reprojected at a different angle in a non-perfect way.

There are two separate latencies:

1. Scene Setup Movement: input + scene setup + render + scan out + display
- Morpheus at 120 is fastest

Yes but only if you ignore the massive visual compromises that would have to be made for a PS4 game to run at a solid, native 120fps.

2. HMD Angular Movement: input + reprojection + scan out + display
- Morpheus at 120-to-120 or 60-to-120 are equally fast
- Oculus at 90-to-90 is slower

Yes but only if you ignore the lower quality of the 60-120 solution and the inherent pitfalls of using it.

I don't think there's any doubt that the ideal situation is 120 real frames per second, and in a world where massive visual compromises weren't required to achieve it we'd all choose that. But in the real world, where you do need to compromise on visuals for higher frame rates, I'd argue that 120fps is not a good target to aim for when 90fps is sufficient to achieve "presence", at least not when you're struggling to hit 60fps in modern games. And I'd wager that the only reason Morpheus has a 120Hz display is so that they can conveniently double up on the 60fps target which the vast majority of developers will be aiming for because of its use on TVs.

Regardless, at the same async rendering frame rate, Morpheus will have an advantage of 2.8ms because of its faster scan-out.

Which might be an issue if Oculus were recommending games target 60fps and get re-projected to 90fps, but they're not: they're recommending 90fps. Sony on the other hand seem to have a choice between 60fps reprojected to 120fps (inferior) or 120 real fps, which, while better, is arguably not worth the graphical compromises that would need to be made given the system's limited resources. I suppose you could always argue for 90 real fps reprojected to 120fps, but I imagine that would result in an overall worse image than a native 90fps at 90Hz.

This small 2.8ms is not a big enough advantage to compensate for the difference between 60fps and 90fps rendering. So 90 is better for scene movement, but it is inferior for HMD angular movements.

I'm not sure how you can separate the two given they both contribute to the overall presence. So if you're arguing that 60 reprojected to 120 is superior for VR to native 90, then I'd strongly disagree. Putting technical arguments aside (real frames vs reprojected frames, the required frame rate for presence, the visual issues reprojection introduces, etc.), pure common sense should indicate that's not the case. Both Oculus and Vive chose 90Hz on systems that are relatively unrestricted; 60/120 was chosen on PS4 (IMO) because of performance limitations and synchronization with TV refresh rates.

The fact that 90 real fps was chosen over 60 reprojected to 120 by two vendors who have access to the same (or at least very similar) reprojection technology, and who are less limited by system performance, is pretty telling IMO.

So far only Morpheus can double the frame rate in a way that doesn't induce visible judder.

I'd need to see some proof of that, but regardless, as I said above, timewarp isn't designed to double the framerate. It's designed to stabilize a native 90fps frame rate that isn't quite perfect. It can be used to double the frame rate (à la Sony), but Oculus and Valve certainly don't seem to be recommending it be used for that.
 
You're putting words in my mouth. I didn't say 60 to 120 is equal to 90 to 90, I said it has advantages and disadvantages. It is scene dependent and algorithm dependent, and this remains to be seen.

The last "common sense" argument we had was that Morpheus would look like a PS2. Oculus founders dismissed PS4/XB1 as incapable of VR early on... now Sony is right up there with them. The recent Epic demo on UE4 optimized for PS4 is showing us once again that reality is counter-intuitive.
 
Sync output without any reprojection:
90Hz Oculus: 11.1ms render + 11.1ms scan-out = 22.2ms
60Hz Morpheus: 16.7ms render + 8.3ms scan-out = 25ms

But... what about low persistence on both?

Oculus crew already said that they switched to displays that use global scan-out [all pixels illuminated at the same time], so I presume they would keep the screen dark for the majority of the time. Sony also mentioned that they will use low persistence on their new and upgraded OLED screen.

What timings for actually keeping the image on screen are needed for the 90/120Hz modes? 2-3ms?
 
But... what about low persistence on both?

Oculus crew already said that they switched to displays that use global scan-out [all pixels illuminated at the same time], so I presume they would keep the screen dark for the majority of the time. Sony also mentioned that they will use low persistence on their new and upgraded OLED screen.

What timings for actually keeping the image on screen are needed for the 90/120Hz modes? 2-3ms?
Good question, I don't know. Early in this thread I was talking about my wish for BFI, which is black frame insertion; it's pretty much the same thing. The compromise seems to be how much brightness is left if they leave it black too long. At 60Hz a 50% black frame is nice, if subtle (my projector has this option), but I don't think it would be a big deal at 120Hz. 50% sounds like almost a freebie for even better motion, I guess; no reason not to do it?

An ideal display would be flashing the image in the shortest flash possible, but I want brightness too!
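Some back-of-envelope numbers for that question, assuming persistence is simply the duty cycle times the refresh period (a simplification; the actual drive schemes aren't public):

```python
def lit_time_ms(refresh_hz, duty):
    # how long the panel stays lit per refresh at a given duty cycle
    return duty * 1000.0 / refresh_hz

def duty_for(persistence_ms, refresh_hz):
    # duty cycle implied by a target persistence
    return persistence_ms * refresh_hz / 1000.0

print(lit_time_ms(120, 0.5))  # 50% BFI at 120Hz -> ~4.2ms lit per refresh
print(duty_for(2.0, 90))      # a 2ms target at 90Hz  -> 18% duty (82% dark)
print(duty_for(3.0, 120))     # a 3ms target at 120Hz -> 36% duty
```

Average brightness scales with the duty cycle, which is exactly the brightness compromise mentioned above.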
 
I'm assuming that when they talk about running that UE4 demo on the Morpheus they're not using the ~1.7-1.9x oversampled render target that's the default for Oculus/Valve? Do we know if the PS4's VR implementation is still doing its VR-related post-processing on an external/breakout box? My understanding was they were doing the warping for the lenses there such that an undistorted image could be mirrored to the TV first. If the lens warping is not done natively by the PS4 itself, then whatever output is getting piped out over HDMI to the box will be the limiter on whatever sampling can be done.
 
We don't know if the external processor warps or unwarps the image.
Morpheus also has more pixel density in the center than at the edges, or at least v2 increased it compared to the 2014 v1.
We don't know what the optical projection function is, so we can't compare. It could be denser than Oculus, or less dense, or it could be a stereographic projection. Or anything, really.
If the API renders a rectilinear image at the center's resolution, it's a waste of processing power for most of the rest of the image. Is this what Oculus does?
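To put a number on that waste, here's a back-of-envelope sketch assuming an ideal rectilinear projection, x = f*tan(theta): the pixels spent per degree grow toward the edges, so an image sampled 1:1 in the center is progressively oversampled off-axis (the angles below are illustrative, not any headset's actual FOV):

```python
from math import tan, radians

def pixels_per_degree(theta_deg, f=1.0):
    # width on the image plane of a 1-degree slice centered on theta
    return f * (tan(radians(theta_deg + 0.5)) - tan(radians(theta_deg - 0.5)))

center = pixels_per_degree(0)
for deg in (0, 20, 40, 50):
    print(f"{deg:2d} deg off-axis: {pixels_per_degree(deg) / center:.2f}x center density")
# 0 -> 1.00x, 20 -> ~1.13x, 40 -> ~1.70x, 50 -> ~2.42x
```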
 
If the API renders a rectilinear image at the center's resolution, it's a waste of processing power for most of the rest of the image. Is this what Oculus does?

That's what everything has done thus far, as far as I know (all Oculus, Valve prototypes, devkits, and future releases - and I presumed Morpheus as well).

edit:
Not sure what you mean by this: "Morpheus also has more pixel density in the center than at the edges"

Are you saying that the Morpheus panel is not made of square pixels on a regular grid?

edit2:

Just to be clear: hardware thus far (by default; configurable in the SDKs and/or UE4) renders to a render target of sufficient resolution to reach 1:1 samples in the center of the screen. In the Oculus SDK for the DK2 this is something around 1.7x (total area, not each axis), and for Valve's Vive devkit the recommended number is actually higher (~1.96x). UE4 has a command called "hmd sp xxx", where "xxx" is the percentage increase/decrease from the panel res.
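Since those multipliers are total area while UE4's "hmd sp" is per-axis, they're related by a square root, which is why ~1.7x-1.96x in area lines up with sp values around 130-140:

```python
from math import sqrt

# per-axis sp percentage implied by an area multiplier
for area in (1.7, 1.96):
    print(f"{area:.2f}x area -> hmd sp {100 * sqrt(area):.0f}")
# 1.70x area -> hmd sp 130  (DK2)
# 1.96x area -> hmd sp 140  (Vive)
```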
 
I just meant higher density because of the effect of the optics (the 1.7x or 1.96x scaling factors you mentioned for Oculus and Vive). No doubt Morpheus is the same, but we don't know how much yet.

From the show reports, the Vive is supposed to have a slightly worse display than Oculus; maybe they pushed it too far and 1.7 is better?
And Sony said they increased it between 2014 and 2015; maybe they were too conservative?

I suppose that, later on, they will have tricks to put more GPU resources into the center of the render than at the edges. But I agree with what you said: if Sony offloaded the warping to the external processor, they'd be in trouble; they'd be stuck at 1:1 max.
 
From the show reports, the Vive is supposed to have a slightly worse display than Oculus; maybe they pushed it too far and 1.7 is better?
And Sony said they increased it between 2014 and 2015; maybe they were too conservative?

Pretty sure that both Oculus CB/CV1 and Vive DK/CV are using identical panels so whatever visual difference there has been or will be, will be down to the quality of optics used and whatever diffuser they might add to break up the screendoor (I've seen "linen" referenced in more than a few places to describe the CB prototype's screen.) The '~1.7' number is specifically relating to the older DK2, while the Valve number is from their GDC rendering talk which is referencing the Vive. Epic's UE4 Oculus Connect presentation last year also makes reference of an 'HMD SP 130-140' setting range (that's 130-140% on each axis), which corresponds to the same DK2->Vive range in area. The difference between the DK2 and the Vive could very well just be down to minor FOV and lens differences since we're only talking about a 10% per-axis difference there. The earliest Oculus SDKs used a shader to distort the framebuffer, and that's now switched to mapping the render target to a finely tessellated curved mesh (Valve/Vive being the same method) - but either way you're left with an image that's undersampled in the center and oversampled in the periphery. There's been a fair bit of talk with regards to rendering a natively curved output to match the lens profile, but so far the only cases I've seen this actually done have been with ray tracing.

But yeah - the entire scheme is super wasteful - VR rendering up until now has basically been "innovation" by way of GPU sledgehammer. You've got the oversampling required to compensate for the lens distortion, you've got the timewarp/reprojection which necessitates a larger-than-visible frustum and frame buffer so your POV retransform has some wiggle room to move around, and most of the VR "optimizations" that exist in UE4/Unity are really just quick-n-dirty optimizing for latency at the expense of throughput (disabling CPU frame queuing, etc).
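For a sense of how much wiggle room that is, a back-of-envelope sketch (my own numbers, not a vendor figure): the extra field of view the frustum needs is roughly the fastest head rotation you want to cover times the worst-case frame age:

```python
def extra_fov_deg(head_deg_per_s, max_frame_age_ms):
    # angular margin the oversized frustum must cover on each side
    return head_deg_per_s * max_frame_age_ms / 1000.0

# assuming a brisk ~200 deg/s head turn (illustrative, not a spec)
print(extra_fov_deg(200.0, 16.7))  # ~3.3 deg margin for a 60fps-old frame
print(extra_fov_deg(200.0, 11.1))  # ~2.2 deg margin for a 90fps-old frame
```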

Definitely room for innovation whenever VR rendering actually becomes an economically sensible investment. Until then it's probably up to Oculus, Valve, Sony, and Nvidia/AMD to come up with the solutions, because I can't see EA/Ubi/Activision/Epic/Unity/etc restructuring their renderers for 0.1% of the user market. Oculus's recent SDK rolled out support for a few new features that allow for layer compositing, which opens up the ability to have arbitrary sampling levels for different elements on the screen - I believe Carmack referenced doing this for GearVR's virtual cinema in order to get the video on the theater screen fully sampled according to the panel's native 1440p even though the rest of the environment is rendered at 1080p for performance reasons. Whether or not it's reasonable to use this to tile the entire screen in different sample resolutions I can't say (have been lazy the last month), but it's almost certainly the way forward for rendering small text, huds, etc as you can pretty well throw 4x or 8x supersampling at it without a whole lot of concern. Personally I'm tempted to throw my hands up and just say 'F- the whole graphics pipeline' and spend my time in path tracing with the hopes that eye tracking and foveated rendering will finally make it reasonable :p
 
Thanks a lot for the details.

Wow, it's even worse than I thought. But it's still early. :LOL:
 
The Morpheus headset needs to be for the PS5. I just don't see the PS4 having enough horsepower to provide realistic graphics at 120fps. When I see VR for the first time, I want it to be like being in a "real" world. I think the uncanny valley will kick in hard with Morpheus, and anything less than very well done human characters will look strange.
 
Pretty sure that both Oculus CB/CV1 and Vive DK/CV are using identical panels so whatever visual difference there has been or will be, will be down to the quality of optics used and whatever diffuser they might add to break up the screendoor (I've seen "linen" referenced in more than a few places to describe the CB prototype's screen.) The '~1.7' number is specifically relating to the older DK2, while the Valve number is from their GDC rendering talk which is referencing the Vive. Epic's UE4 Oculus Connect presentation last year also makes reference of an 'HMD SP 130-140' setting range (that's 130-140% on each axis), which corresponds to the same DK2->Vive range in area. The difference between the DK2 and the Vive could very well just be down to minor FOV and lens differences since we're only talking about a 10% per-axis difference there. The earliest Oculus SDKs used a shader to distort the framebuffer, and that's now switched to mapping the render target to a finely tessellated curved mesh (Valve/Vive being the same method) - but either way you're left with an image that's undersampled in the center and oversampled in the periphery. There's been a fair bit of talk with regards to rendering a natively curved output to match the lens profile, but so far the only cases I've seen this actually done have been with ray tracing.

But yeah - the entire scheme is super wasteful - VR rendering up until now has basically been "innovation" by way of GPU sledgehammer. You've got the oversampling required to compensate for the lens distortion, you've got the timewarp/reprojection which necessitates a larger-than-visible frustum and frame buffer so your POV retransform has some wiggle room to move around, and most of the VR "optimizations" that exist in UE4/Unity are really just quick-n-dirty optimizing for latency at the expense of throughput (disabling CPU frame queuing, etc).

Definitely room for innovation whenever VR rendering actually becomes an economically sensible investment. Until then it's probably up to Oculus, Valve, Sony, and Nvidia/AMD to come up with the solutions, because I can't see EA/Ubi/Activision/Epic/Unity/etc restructuring their renderers for 0.1% of the user market. Oculus's recent SDK rolled out support for a few new features that allow for layer compositing, which opens up the ability to have arbitrary sampling levels for different elements on the screen - I believe Carmack referenced doing this for GearVR's virtual cinema in order to get the video on the theater screen fully sampled according to the panel's native 1440p even though the rest of the environment is rendered at 1080p for performance reasons. Whether or not it's reasonable to use this to tile the entire screen in different sample resolutions I can't say (have been lazy the last month), but it's almost certainly the way forward for rendering small text, huds, etc as you can pretty well throw 4x or 8x supersampling at it without a whole lot of concern. Personally I'm tempted to throw my hands up and just say 'F- the whole graphics pipeline' and spend my time in path tracing with the hopes that eye tracking and foveated rendering will finally make it reasonable :p
Thanks for the summary!
14/16nm GPUs with HBM will be welcome indeed. My piggy-bank just shivered pitifully anticipating the inevitable slaughter. :)
 
SCE published yet another gaze-tracking patent; this one is more about the software, low computing requirements, and precision with the corneal-reflection technique... they mention it has the precision required to use it in first-person shooters.
http://www.freshpatents.com/-dt20150514ptan20150130714.php?imgpr=1

Still not sure if I'm too optimistic that it will make it to Morpheus, but SCE seems to be putting a lot of effort into it.
Not a single mention of foveated rendering, it's all 100% for UI. :???:
 
Not a single mention of foveated rendering, it's all 100% for UI. :???:
There's no need for that in a patent about how to track the eyes rather than what to do with that data in software. A patent on foveated rendering would have to cover some fancy technique or an extra bit of hardware or something to enable it.
 
True, but the way patents are written they always list the possible applications. So it looks like a weird omission.

This is just my OCD phase before E3, I'm reading too much into it. Where the hell are the rumors? We have no rumors about anything!
 
Sure, they had that big unit showcased that worked fine with Infamous Second Son, but I don't expect foveated rendering in Morpheus.
 