If they do 60 to 90, they get half a frame of additional maximum frame age. Since 60 to 120 already has some artifacts on Morpheus, I would guess that 45 would be pushing it too far.
For 60 to 120, the additional age will alternate between 0ms and 16.7ms. Scene judder is 1:1 on a 60Hz cycle.
8.33ms average plus the 8.33ms scan out delay to the OLED = 16.7ms.
For 60 to 90, the additional age will alternate between 0ms, 22.2ms and 11.1ms. Scene judder is 2:1 on a 30Hz cycle.
11.1ms average plus the 11.1ms scan out delay to the OLED = 22.2ms.
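To make the cadence concrete, here's a little back-of-envelope sketch of my own (not anything from the SDKs) that walks the refresh schedule and counts how many display refreshes each rendered frame stays on screen; it reproduces the 1:1 pattern for 60-to-120 and the 3:2-pulldown-like 2:1 pattern for 60-to-90:

```python
from math import gcd

def hold_counts(render_hz: int, display_hz: int):
    """How many display refreshes each rendered frame stays on screen over
    one repeating cycle, assuming each refresh shows the newest frame that
    was finished by the time it scans out."""
    cycle_hz = gcd(render_hz, display_hz)
    refreshes = display_hz // cycle_hz      # display refreshes per cycle
    renders = render_hz // cycle_hz         # rendered frames per cycle
    shown = [0] * renders
    for i in range(refreshes):
        t = i / display_hz                  # refresh time within the cycle
        newest = int(t * render_hz)         # index of the newest finished frame
        shown[newest] += 1
    return shown

print(hold_counts(60, 120))  # [2]    -> every frame held 2 refreshes, 1:1
print(hold_counts(60, 90))   # [2, 1] -> 2:1 cadence repeating at 30Hz
```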
Also, I don't know how sensitive we will be to scene judder; it should only affect close objects in VR. I know I can't stand 3:2 pulldown on TVs that can't do 24 correctly. This is going to be interesting.
Didn't Sony say that devs must use the reprojection library all the time, even when rendering 120 to 120?
If I remember correctly, the goal was not necessarily to "double the frame rate" for free; it was to lower the input-to-photon latency, the input being HMD head movement.
There are two separate latencies:
1. Scene Setup Movement: input + scene setup + render + scan out + display
- Morpheus at 120 is fastest
2. HMD Angular Movement: input + reprojection + scan out + display
- Morpheus at 120-to-120 or 60-to-120 are equally fast
- Oculus at 90-to-90 is slower
Regardless, at the same async rendering frame rate, Morpheus will have an advantage of 2.8ms because of its faster scan out (11.1ms at 90Hz versus 8.3ms at 60Hz).
This small 2.8ms is not a big enough advantage to compensate for the difference between 60fps and 90fps rendering. So 90 is better for scene movement, but it is inferior for HMD angular movements.
So far only Morpheus can double the frame rate in a way that doesn't induce visible judder.
Sync output without any reprojection:
90Hz Oculus : 11.1ms render + 11.1ms scan out = 22.2ms
60Hz Morpheus : 16.7ms render + 8.3ms scan out = 25ms
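For what it's worth, a tiny sketch of that arithmetic (my own framing of latency as one render period plus scan out, not an official figure from either vendor):

```python
# Synchronous pipeline, no reprojection: latency ~ render period + scan out.
def motion_to_photon_ms(render_hz: float, scan_out_ms: float) -> float:
    return 1000.0 / render_hz + scan_out_ms

print(motion_to_photon_ms(90, 11.1))  # Oculus:   ~22.2ms
print(motion_to_photon_ms(60, 8.3))   # Morpheus: ~25.0ms
# The scan out gap (11.1 - 8.3 = 2.8ms) is the advantage mentioned earlier.
```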
Good question, I don't know. Early in this thread I was talking about my wish for BFI, which is black frame insertion; it's pretty much the same thing. The compromise seems to be how much brightness is left if they leave it black too long. At 60Hz a 50% black frame is nice, if subtle (my projector has this option), but I don't think it would be a big deal at 120Hz. 50% sounds like almost a freebie for even better motion, I guess, so no reason not to do it?
But... what about low persistence on both?
The Oculus crew already said that they have switched to displays that use global scan out [all pixels illuminated at the same time], so I presume they keep the screen dark for the majority of the time. Sony also mentioned that they will use low persistence on their new and upgraded OLED screen.
What timings are needed for actually keeping the image on screen in 90/120Hz modes? 2-3ms?
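If it helps, a quick duty-cycle calculation; the assumption that perceived brightness scales roughly with the lit fraction of each refresh is mine:

```python
# Fraction of each refresh the pixels stay lit for a given persistence time.
def duty_cycle(persistence_ms: float, refresh_hz: float) -> float:
    return persistence_ms / (1000.0 / refresh_hz)

for hz in (90, 120):
    for p_ms in (2.0, 3.0):
        print(f"{hz}Hz, {p_ms}ms lit -> {duty_cycle(p_ms, hz):.0%} duty cycle")
# 90Hz:  2ms -> 18%, 3ms -> 27%
# 120Hz: 2ms -> 24%, 3ms -> 36%
```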
If the API renders a rectilinear image at the center's resolution, it's a waste of processing power for most of the rest of the image. Is this what Oculus does?
From the show reports, the Vive is supposed to have a slightly worse display than Oculus; maybe they pushed it too far and 1.7 is better?
And Sony said they increased it between 2014 and 2015; maybe they were too conservative?
Thanks for the summary!

Pretty sure that both Oculus CB/CV1 and Vive DK/CV are using identical panels, so whatever visual difference there has been or will be will come down to the quality of the optics used and whatever diffuser they might add to break up the screendoor (I've seen "linen" referenced in more than a few places to describe the CB prototype's screen). The '~1.7' number relates specifically to the older DK2, while the Valve number is from their GDC rendering talk, which references the Vive. Epic's UE4 Oculus Connect presentation last year also refers to an 'HMD SP 130-140' setting range (that's 130-140% on each axis), which corresponds to the same DK2->Vive range in area. The difference between the DK2 and the Vive could very well just be down to minor FOV and lens differences, since we're only talking about a 10% per-axis difference there.

The earliest Oculus SDKs used a shader to distort the framebuffer, and that has now switched to mapping the render target onto a finely tessellated curved mesh (Valve/Vive use the same method) - but either way you're left with an image that's undersampled in the center and oversampled in the periphery. There's been a fair bit of talk about rendering a natively curved output to match the lens profile, but so far the only cases where I've seen this actually done have been with ray tracing.
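To put the per-axis versus area numbers side by side (the 1080x1200-per-eye panel below is just an assumption for illustration):

```python
# Per-axis render-target scale vs. the resulting pixel-count multiplier;
# 130-140% per axis squares to roughly the 1.7-2.0x area range discussed above.
def render_target(panel_w: int, panel_h: int, per_axis_scale: float):
    w, h = round(panel_w * per_axis_scale), round(panel_h * per_axis_scale)
    return w, h, round(w * h / (panel_w * panel_h), 2)

print(render_target(1080, 1200, 1.3))  # (1404, 1560, 1.69) ~ the DK2-ish 1.7
print(render_target(1080, 1200, 1.4))  # (1512, 1680, 1.96) ~ the Vive-end figure
```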
But yeah - the entire scheme is super wasteful - VR rendering up until now has basically been "innovation" by way of GPU sledgehammer. You've got the oversampling required to compensate for the lens distortion, you've got the timewarp/reprojection, which necessitates a larger-than-visible frustum and frame buffer so your POV retransform has some wiggle room to move around, and most of the VR "optimizations" that exist in UE4/Unity are really just quick-n-dirty optimizing for latency at the expense of throughput (disabling CPU frame queuing, etc.).
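As a feel for what that wiggle room costs, a sketch under a planar-projection assumption with made-up angles (nothing here comes from the Oculus or Valve SDKs):

```python
import math

# Extra render-target width per axis if a symmetric FOV is padded so a late
# reprojection can rotate the view; width follows tan(fov/2) for a planar
# projection. The 100-degree base FOV and the padding amounts are invented.
def fov_padding_cost(base_fov_deg: float, pad_deg: float) -> float:
    base = math.tan(math.radians(base_fov_deg / 2))
    padded = math.tan(math.radians(base_fov_deg / 2 + pad_deg))
    return padded / base

print(round(fov_padding_cost(100, 5), 2))   # ~1.2x wider per axis
print(round(fov_padding_cost(100, 10), 2))  # ~1.45x wider per axis
```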
Definitely room for innovation whenever VR rendering actually becomes an economically sensible investment. Until then it's probably up to Oculus, Valve, Sony, and Nvidia/AMD to come up with the solutions, because I can't see EA/Ubi/Activision/Epic/Unity/etc. restructuring their renderers for 0.1% of the user market.

Oculus's recent SDK rolled out support for a few new features that allow for layer compositing, which opens up the ability to have arbitrary sampling levels for different elements on the screen - I believe Carmack referenced doing this for GearVR's virtual cinema in order to get the video on the theater screen fully sampled at the panel's native 1440p even though the rest of the environment is rendered at 1080p for performance reasons. Whether or not it's reasonable to use this to tile the entire screen in different sample resolutions I can't say (have been lazy the last month), but it's almost certainly the way forward for rendering small text, HUDs, etc., as you can pretty well throw 4x or 8x supersampling at it without a whole lot of concern. Personally I'm tempted to throw my hands up, just say "F- the whole graphics pipeline", and spend my time on path tracing in the hope that eye tracking and foveated rendering will finally make it reasonable.
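A rough budget sketch of that layer idea, with entirely invented layer sizes and sample factors, just to show why supersampling a small text/HUD layer is affordable while supersampling the whole frame isn't:

```python
# Shaded-pixel budget when compositing layers at different sample densities.
# layers: list of (fraction_of_screen_covered, supersample_per_axis).
def shaded_pixels(panel_px: int, layers) -> int:
    return int(sum(panel_px * frac * ss * ss for frac, ss in layers))

panel = 2160 * 1200                           # assumed combined panel, ~2.6M px
world_only    = [(1.0, 1.0)]                  # whole scene at native sampling
world_and_hud = [(1.0, 1.0), (0.05, 4.0)]     # plus 4x/axis on a 5%-of-screen layer
world_4x      = [(1.0, 4.0)]                  # 4x/axis on everything, for contrast

print(shaded_pixels(panel, world_only))       # ~2.6M
print(shaded_pixels(panel, world_and_hud))    # ~4.7M  (small layer: manageable)
print(shaded_pixels(panel, world_4x))         # ~41.5M (whole frame: not happening)
```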
Foveated rendering doesn't need to be in a patent about how to track the eyes, since it's about what to do with that data in software; a patent on foveated rendering would have to cover some fancy technique, an extra bit of hardware, or something else to enable it. There's not a single mention of foveated rendering in this one; it's all 100% for UI.