Sony VR Headset/Project Morpheus/PlayStation VR

The display planes combine for the video out. There's only one signal down the HDMI which is the final composite. But I guess if they use HDMI+Ethernet, they can send the UI layer separately.
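Conceptually something like this, if it helps (just a NumPy illustration of alpha-blending two planes into one output, not how the PS4's display hardware actually does it):

```python
import numpy as np

# Toy illustration of "the display planes combine for the video out":
# the game plane and the UI overlay are alpha-blended into one composite,
# and only that composite frame goes down the HDMI cable.
game_plane = np.zeros((1080, 1920, 3), dtype=np.float32)   # rendered scene, RGB
ui_plane = np.zeros((1080, 1920, 4), dtype=np.float32)     # overlay, RGBA

alpha = ui_plane[..., 3:4]                                  # per-pixel opacity
composite = ui_plane[..., :3] * alpha + game_plane * (1.0 - alpha)
# 'composite' is the single signal that reaches the headset.
```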

Ah, but Sony don't need to work within the limits of HDMI. Aside from the connector pin layout, HDMI is largely a software stack, and it's a standard that exists for interoperability between different devices, which is a luxury they don't need for the PS4-to-Morpheus link (or its breakout box).

HDMI is also something of a compromise to allow support of different things requiring different bandwidth - the video in 2D or 3D, an ethernet channel, the audio, the audio return channel, control lines and miscellaneous data. HDMI 1.4 has a raw bandwidth of 10Gbps; I wonder if this is enough to send two distinct 1080p frames at 60Hz - probably, if you're not sending complete full frames (e.g. if the cockpit overlay only occupies the bottom 25% of the screen) and you're not simultaneously trying to send 5.1 or 7.1 lossless audio. :runaway: Although if you throw out HDMI you basically rule out PC compatibility.

But I'm more curious how an approach like this would complicate the rendering pipeline. For example, if you're producing two distinct planes but there's light, shadows or reflections etc. from one (the world) affecting the other (the cockpit overlay), how would that work? Will this complicate AA?
 
HDMI is also something of a compromise to allow support of different things requiring different bandwidth - the video in 2D or 3D, an ethernet channel, the audio, the audio return channel, control lines and miscellaneous data. HDMI 1.4 has a raw bandwidth of 10Gbps, I wonder if this is enough to send two distinct 1080p frames at 60Hz -
6 MB per frame at 24-bit colour. 373 MB/s for a single 1080p stream @ 60Hz, 746 MB/s for stereoscopic video. There'd still be enough left over for a full-screen 2D overlay.
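Quick back-of-the-envelope version in Python, for anyone who wants to check the working (assumes 24-bit colour and ignores HDMI overheads like blanking intervals and TMDS encoding, so real usable bandwidth is somewhat lower):

```python
# Rough bandwidth check for uncompressed 1080p video over HDMI 1.4.
# Assumes 24-bit colour; ignores blanking intervals and TMDS overhead.

width, height = 1920, 1080
bytes_per_pixel = 3                 # 24-bit RGB
refresh_hz = 60

frame_bytes = width * height * bytes_per_pixel        # ~6.2 MB per frame
mono_rate = frame_bytes * refresh_hz                   # one 1080p stream @ 60Hz
stereo_rate = 2 * mono_rate                            # two distinct 1080p frames

hdmi14_raw_mbps = 10.2e9 / 8 / 1e6                     # HDMI 1.4 raw link, in MB/s

print(f"per frame:    {frame_bytes / 1e6:.1f} MB")     # ~6.2 MB
print(f"mono 60Hz:    {mono_rate / 1e6:.0f} MB/s")     # ~373 MB/s
print(f"stereo 60Hz:  {stereo_rate / 1e6:.0f} MB/s")   # ~746 MB/s
print(f"HDMI 1.4 raw: {hdmi14_raw_mbps:.0f} MB/s")     # ~1275 MB/s
```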
 
Cool. And it's likely you wouldn't need to send two full 1920x1080 frames, not unless you had a very intricate overlay that covered the entirety of the screen.
 
The display planes combine for the video out. There's only one signal down the HDMI which is the final composite. But I guess if they use HDMI+Ethernet, they can send the UI layer separately.

I think I've got totally the wrong end of the stick then, as I was under the impression I saw a slide where Sony had this task running on the PS4 GPU, and as such it would have access to this and to the movement data coming back from the HMD. So I had assumed (and this is probably fatal) that the GPU did all of the game and frame work, and the display unit just dealt with the CA and lens correction.

This seemed logical to me for cost reasons, and because we know Sony has a fair chunk of compute and many queues to slip this into, so it may be practically invisible performance-wise, which seems to fit with their quote about it being best to leave it always on.

BTW thanks all for being tolerant of a lay person who struggles with some of the tech jargon.
 
From hands-on impressions, most people feel that the image is sufficiently sharp compared to other devices.
But shouldn't reprojection technology introduce artifacts and motion blur?
How accurate can a frame be, based on previous frames and sensor data?
 
From hands-on impressions, most people feel that the image is sufficiently sharp compared to other devices.
But shouldn't reprojection technology introduce artifacts and motion blur?
How accurate can a frame be, based on previous frames and sensor data?

The London Heist demo (the Time Crisis-like demo) was 60 fps reprojected to 120 fps. The robot demo was 120 fps.
 
The best HUDs I've seen in VR demos were transparent and at the peripheral edges, so you have to actively look at them to really see them.
 
BTW thanks all for being tolerant of a lay person who struggles with some of the tech jargon.
I think it's fair to say that when it comes to specifics, the unknowns far outnumber the knowns. Much of this is just spitballing or, because this is B3D, educated speculation ;)
 
... But shouldn't reprojection technology introduce artifacts and motion blur?
How accurate can a frame be, based on previous frames and sensor data?


As I understand it, it's not actually a new frame; it's repositioning the last frame with the latest movement data from the HMD. That's what it sounded like in the Eurogamer article, anyway.
 
As I understand it, it's not actually a new frame; it's repositioning the last frame with the latest movement data from the HMD. That's what it sounded like in the Eurogamer article, anyway.
Well it's at least that. How much smarter it is than that remains to be seen! :yep2:
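At its simplest that would be a rotation-only warp of the previous image. Something like this sketch, conceptually (illustrative Python/NumPy only; the function name and pinhole-camera setup are assumptions on my part, and the real thing would be a GPU shader rather than a CPU loop):

```python
import numpy as np

def reproject_rotation_only(frame, K, R_render, R_latest):
    """Warp the last rendered frame to the latest head orientation.

    For a pure rotation the warp is a homography H = K * R_delta * K^-1,
    so no depth buffer is needed. Hypothetical sketch, not Sony's method.
    """
    h, w = frame.shape[:2]
    R_delta = R_latest @ R_render.T            # rotation between render pose and display pose
    H = K @ R_delta @ np.linalg.inv(K)         # image-space homography

    # Inverse-map every output pixel back into the source frame.
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = np.linalg.inv(H) @ dst
    src = (src[:2] / src[2]).round().astype(int)

    out = np.zeros_like(frame)                 # pixels that fall outside the old view stay black
    valid = (0 <= src[0]) & (src[0] < w) & (0 <= src[1]) & (src[1] < h)
    out[ys.ravel()[valid], xs.ravel()[valid]] = frame[src[1][valid], src[0][valid]]
    return out
```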
 
From hands-on impressions, most people feel that the image is sufficiently sharp compared to other devices.
But shouldn't reprojection technology introduce artifacts and motion blur?
How accurate can a frame be, based on previous frames and sensor data?


No one has used one right after the other yet, so it's really hard to compare: there could be hours between uses, or in the case of Crystal Cove, months.
 
Oculus will probably add reprojection too before launch. It's applicable to any VR and it helps with the most sensitive aspect, which is the head rotation.
 
Oculus will probably add reprojection too before launch. It's applicable to any VR and it helps with the most sensitive aspect, which is the head rotation.
Your display has to refresh at that rate though. You can't reproject at 180 fps when your display only does 90 fps.
 
As I understand it, it's not actually a new frame; it's repositioning the last frame with the latest movement data from the HMD. That's what it sounded like in the Eurogamer article, anyway.

If it's better to interpolate frames from old data than to show bad or stale frames, isn't there a chance that the brain could perceive the interpolated frames as wrong? Are those frames close enough to what the user should actually see? If they are, then Sony has some pretty impressive tech there.

No one has used one right after the other yet, so it's really hard to compare: there could be hours between uses, or in the case of Crystal Cove, months.

Yeah, that is true. Some have the Rift dev kits though, and most said that the differences are minimal.
 
Your display has to refresh at that rate though. You can't reproject at 180 fps when your display only does 90 fps.

I think Sony were making the point that even if your game runs at the native panel speed, it can still help.

I read it as: the frame takes x time to create, and it uses the best headset position available at the start, which itself has some lag. By the end of that frame the headset might have a better position, so you tweak the frame for position as the very last thing, to try to reduce the delay that the sensors and frame generation have introduced.

You will always have the motion sensor delay, but you might be able to shave most of the frame creation time off, so you're working more in parallel than in series.
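In rough terms, what I'm imagining is a loop like this (the function names and numbers are just placeholders I made up, not anything from a real SDK):

```python
import time

# Placeholder stand-ins for the real tracker/renderer/compositor (hypothetical,
# just so the loop below runs); a real system would talk to hardware here.
def sample_hmd_pose():             return time.time()          # pretend a pose is just a timestamp
def render_scene(pose):            time.sleep(0.008); return {"rendered_with": pose}
def warp_to_pose(frame, old, new): frame["corrected_to"] = new; return frame
def present(frame):                print(frame)

# The interesting part: "late" reprojection. Render with the pose known at
# frame start, then re-sample the tracker just before scan-out and warp the
# finished image to the newest orientation, so only the cheap warp sits
# between the last pose sample and the panel.
def vr_frame_loop(frames=3):
    for _ in range(frames):
        pose_at_start = sample_hmd_pose()       # already slightly stale (sensor lag)
        frame = render_scene(pose_at_start)     # eats most of the frame time (~8 ms here)

        pose_at_scanout = sample_hmd_pose()     # much fresher reading
        present(warp_to_pose(frame, pose_at_start, pose_at_scanout))

vr_frame_loop()
```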
 
Your display has to refresh at that rate though. You can't reproject at 180 fps when your display only does 90 fps.
The way Richard Marks explained it is that the reprojection is asynchronous. It's not "interpolating" between frames; it lets the game engine run at a different rate than the display, and it reduces latency to the minimum possible. All frames being displayed are reprojected, so even if the game renders at 90fps or 120fps, it's still an advantage to reproject every frame.

The motion data of the rendered frame is already old once it's finished rendering and ready to display. So reprojecting a 120fps game at 120fps is cutting latency in half.
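Putting rough numbers on that (one simplified reading, ignoring sensor read-out and panel response; the figures are assumptions):

```python
# One simplified reading of the "cuts latency in half" point at 120 Hz.
# All figures are assumptions; sensor and panel delays are ignored.

frame_ms = 1000 / 120                      # ~8.3 ms per frame at 120 Hz

# Without reprojection: the pose is sampled before rendering, so by the time
# the frame has been rendered and scanned out it's roughly two frames old.
motion_to_photon_plain = 2 * frame_ms      # ~16.7 ms

# With a reprojection pass just before scan-out, the displayed orientation is
# based on a pose that is only about one frame old.
motion_to_photon_reproj = 1 * frame_ms     # ~8.3 ms

print(f"without reprojection: ~{motion_to_photon_plain:.1f} ms")
print(f"with reprojection:    ~{motion_to_photon_reproj:.1f} ms")
```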
 
It's going to be weird that a game targeting 60fps will in reality be oscillating between something like 40fps and 80fps across gameplay, yet the display will remain low-latency at 120fps with respect to the motion data. So frame rate drops in the game will be perceived completely differently from what we're used to: instead of becoming choppy, would they create weird artifacts?
 
If it's better to interpolate frames from old data than to show bad or stale frames, isn't there a chance that the brain could perceive the interpolated frames as wrong? Are those frames close enough to what the user should actually see? If they are, then Sony has some pretty impressive tech there.



Yeah, that is true. Some have the Rift dev kits though, and most said that the differences are minimal.

Yeah, but you've been able to purchase a DK2 since last summer, whereas Morpheus was first shown off in Feb of this year and won't be out till Q1 next year.

I'd wait until they're all out on the market and see what the shortcomings and pros of each unit are.
 
I'd wait until they're all out on the market and see what the shortcomings and pros of each unit are.
We are speculating based on first-hand reports, known engineering limitations, research into the basic requirements for a good VR experience, and plenty of presentations about the technology involved. There will be differences and tweaks before release, but the latest prototypes from each company won't get a complete redesign, since they're all a year or less from launch.

People who tried all three VR prototypes within the same day took extensive notes about their observations. So far the PS4 solution is said to feel competitive with the competitors' high-end PC setups. For that alone, it's undeniable that Sony has got something impressive, far beyond expectations.
 
New PC demo of Eve Valkyrie

At the upcoming Eve Fanfest, CCP will hold and stream a VR tournament with 8 teams, each with 5 players.
 