DriveClub by Evolution Studios [PS4]

Imagine drawing a line right down the centre of your TV; the left section would be for your left eye, the right section for your right, and you'd be looking straight ahead. If you had the complete screen for each eye, your eyeballs would have to go cross-eyed to see a screen that close to your face.

That's not how 3d works. You need to see the same object from both eyes (although at a different angle) for it to have depth. If you divide the TV screen straight through the middle, you don't have any overlapping information, thus no 3d. See the image that was posted right after your post and note that most of the objects/scene are present in both pictures but from a slightly different angle:

[Image: side-by-side stereo screenshot showing the same scene from two slightly different angles, one view per eye]
 
That's not how 3d works. You need to see the same object from both eyes (although at a different angle) for it to have depth. If you divide the TV screen straight through the middle, you don't have any overlapping information, thus no 3d. See the image that was posted right after your post and note that most of the objects/scene are present in both pictures but from a slightly different angle:

Yes, I'm well aware that you see 3D from two separate viewpoints, in real life and in 3D films / VR. But you are actually looking at two screens (in VR at least); left for left and right for right. Your brain and your eyes do this clever thing of bringing those two images together so that they're overlapping, which causes the effect of 3D.

Exactly how your eyes (and brain) actually work. Also with two images.
 
I’m trying to think of how best to explain it…

Your eyes are naturally a few inches apart: if you cover your right eye you see one image from the left eye, and vice versa. Those two images are not overlapped before your brain receives them; they're still two images. If you hold a pen at arm's length, both of your eyes will focus on that pen; your brain knows you're looking at a pen, so it brings those two images together so that they overlap (giving a nice sense of scale). You can make yourself go cross-eyed if you want to get an idea of how your brain interprets two images that aren't looking at the same place. It's actually still trying to match them up, and you therefore still perceive one (wonky) image.
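
To put rough numbers on that, here's a minimal Python sketch, assuming a typical interpupillary distance of about 6.5 cm (the distances are just illustrative):

    import math

    IPD = 0.065  # assumed distance between the eyes in metres (~6.5 cm)

    def vergence_angle_deg(distance_m):
        # Angle between the two eyes' lines of sight when both fixate on a
        # point straight ahead at the given distance.
        return math.degrees(2 * math.atan((IPD / 2) / distance_m))

    for d in (0.05, 0.6, 20.0):  # near the nose, arm's length, far away
        print(f"{d:5.2f} m -> {vergence_angle_deg(d):6.2f} degrees")

The closer the object, the bigger the angle between the two views, which is why the pen at arm's length looks quite different to each eye while something far away barely changes.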

3D films on TVs and at the cinema are displayed on one screen for efficiency, not because they're trying to replicate eyes. Film makers can presumably play some funny tricks by changing the viewpoints from being the standard several inches apart to being several feet apart and make you feel like a giant (not sure if that's been done, though).

What I imagine VR does especially well (I haven't tried it yet, so can't say for sure) is avoiding any kind of image ghosting from the opposite eye's view. If you're watching a 3D film, all the lenses are trying to do is block out the other eye's viewpoint, but they don't do this brilliantly well – you can never completely block out the light emitted from the TV/cinema, so you get ghosting of the other eye's image. Your brain doesn't have this problem with normal vision, because your eyes are separate. VR will be good because neither eye will ever see the other's viewpoint, and therefore there's no cross-image distortion.
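
The ghosting point can be put into a crude model: each eye sees its own image plus whatever fraction of the other eye's image the lens fails to block. The leakage figures below are made up purely for illustration.

    def perceived(own, other, leakage):
        # Brightness one eye actually sees: its own image plus the fraction
        # of the other eye's image that leaks through the lens.
        return own + leakage * other

    left_img, right_img = 0.8, 0.2  # illustrative brightness values for one pixel

    print(perceived(left_img, right_img, 0.05))  # 3D glasses, ~5% leakage: 0.81 (ghosting)
    print(perceived(left_img, right_img, 0.0))   # VR, fully separate views: 0.80 (no ghosting)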
 
Ah, gotcha. Indeed I know how 3d works; I was just a bit perplexed when you were talking about two "half" screens and dividing a TV screen through the middle. Especially this post here:

ThePissartist said:
From what I understand both the left and right lens only see half of the screen each; that's why we have 960x1080/eye. The complete screen refreshes at 120Hz; each individual refresh updates both the left and the right eye (i.e., both "screens"), so it's not doing one and then the other like a TV does.

After reading it again, I think I understand what you meant to say (or what I misunderstood it to mean). Anyway, if we have two separate screens, each at 960x1080, we only need 60Hz per lens for a 60fps game. Internally however, the engine would still be rendering the equivalent of 120fps, as each 1/60th frame would be rendered from two slightly different viewpoints at the same time (and not alternating every 1/120th). That's my guess. I would guess this is more efficient too, as the physics engine and many other parts of the game engine would not need to run twice, only once per 1/60th.

What would be a concern IMO is the very narrow FOV (960x1080 is very narrow), although if you use anamorphic pixels, you could still give the illusion of a wider FOV with less pixel information. Maybe you could even cheat a bit by doing odd lines spread across the left/right eye, but that would probably look quite weird. Not sure to what extent our brain can piece that information together without giving you a headache...
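
A minimal sketch of that guess (the class and function names here are hypothetical placeholders, not anything from an actual engine): the simulation advances once per 1/60th, but the scene is drawn twice from two offset cameras.

    class GameState:
        def __init__(self):
            self.t = 0.0
        def update(self, dt):
            self.t += dt  # stand-in for physics/game logic

    def render_view(state, camera_offset_x):
        # Stand-in for the renderer: one 960x1080 view from a camera shifted
        # sideways by camera_offset_x.
        return (round(state.t, 4), camera_offset_x)

    EYE_SEPARATION = 0.065  # assumed eye offset in metres, illustrative only

    def run_frame(state, dt=1.0 / 60.0):
        state.update(dt)                                 # game logic: once per 1/60th
        left = render_view(state, -EYE_SEPARATION / 2)   # rendering: twice per 1/60th,
        right = render_view(state, +EYE_SEPARATION / 2)  # once per eye
        return left, right

    print(run_frame(GameState()))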
 
we only need 60Hz per lens for a 60fps game. Internally however, the engine would still be rendering the equivalent of 120fps, as each 1/60th frame would be rendered from two slightly different viewpoints at the same time (and not alternating every 1/120th). That's my guess.

Unfortunately VR is not that easy. Experimentation has shown that even 60Hz is not enough for a comfortable experience; it makes it easy for people to feel motion sickness. 120Hz (some say 90Hz is good enough) is considered the minimum to avoid that.
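
For context, the per-frame time budget those refresh rates imply (simple arithmetic, nothing headset-specific):

    for hz in (60, 90, 120):
        print(f"{hz:3d} Hz -> {1000 / hz:4.1f} ms per frame")  # 16.7, 11.1, 8.3 ms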
 
if we have two separate screens, each at 960x1080, we only need 60Hz per lens for a 60fps game. Internally however, the engine would still be rendering the equivalent of 120fps, as each 1/60th frame would be rendered from two slightly different viewpoints at the same time (and not alternating every 1/120th)

The engine is unlikely to be rendering at 120fps, unless the game is native 120hz, so most often it'll be frame interpolation to bring the framerate up to the standard of the display.

I would say that two simultaneous 960x1080 refreshes at 60Hz are not equivalent to 120Hz; they're actually equivalent to 1920x1080 at 60Hz, even if the system is essentially showing 120 separate frames per second (i.e., 60/eye). Otherwise GoldenEye on the N64 would be considered to be rendering 120fps (30x4 players). Which of course, it isn't.

You're right in that they're definitely not alternating the views.
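
In case it helps, here is the pixel-throughput arithmetic behind that claim (a quick sketch; it only counts pixels pushed per second, which is the sense of "equivalent" used above):

    per_eye = 960 * 1080                 # pixels in one eye's view
    stereo_rate = 2 * per_eye * 60       # two views, 60 times a second
    full_hd_rate = 1920 * 1080 * 60      # one 1920x1080 view, 60 times a second

    print(stereo_rate, full_hd_rate)     # 124416000 124416000
    print(stereo_rate == full_hd_rate)   # True: same pixel throughput as 1080p60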
 
The engine is unlikely to be rendering at 120fps, unless the game is native 120hz, so most often it'll be frame interpolation to bring the framerate up to the standard of the display.

I would say that two simultaneous 960x1080 refreshes at 60Hz are not equivalent to 120Hz; they're actually equivalent to 1920x1080 at 60Hz, even if the system is essentially showing 120 separate frames per second (i.e., 60/eye). Otherwise GoldenEye on the N64 would be considered to be rendering 120fps (30x4 players). Which of course, it isn't.

You're right in that they're definitely not alternating the views.
No, it's more like a 1280x1080 at 60Hz buffer with left-aligned and right-aligned 960x1080 slices taken from it.
 
No, it's more like a 1280x1080 at 60Hz buffer with left-aligned and right-aligned 960x1080 slices taken from it.
That's probably closer to the truth considering the actual windows displayed.

Edit: actually I'm not sure about that...
What do you mean?
 
That's probably closer to the truth considering the actual windows displayed.

Edit: actually I'm not sure about that...
What do you mean?
So you start out with a 1280x1080 rendered display buffer and take two slices from it. For the left-eye slice you start at pixel column 41 and take 960 pixel columns to the right, i.e. columns 41 to 1000. For the right-eye slice you start at pixel column 1240 and take 960 pixel columns to the left, i.e. columns 281 to 1240. This is all done in the breakout box, which has received the 1280x1080 rendered buffer from the PS4. Why the offset/overscan? That's used by the interpolator for head-trajectory offset every other frame.
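
A minimal sketch of that slicing in Python, assuming exactly the column numbers given above (this is just to make the indices concrete, not a claim about what the breakout box actually does):

    import numpy as np

    # 1280x1080 rendered buffer: 1080 rows, 1280 columns, RGB.
    frame = np.zeros((1080, 1280, 3), dtype=np.uint8)

    # Columns are 1-based in the description above; numpy is 0-based, hence the -1.
    left_eye  = frame[:, 41 - 1:1000, :]    # columns 41..1000  -> 960 wide
    right_eye = frame[:, 281 - 1:1240, :]   # columns 281..1240 -> 960 wide

    print(left_eye.shape, right_eye.shape)  # both (1080, 960, 3)
    # Columns 281..1000 land in both slices, and 40 spare columns remain at each
    # edge of the buffer, which is the overscan attributed to the interpolator.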
 
So you start out with a 1280x1080 rendered display buffer and take two slices from it. For the left-eye slice you start at pixel column 41 and take 960 pixel columns to the right, i.e. columns 41 to 1000. For the right-eye slice you start at pixel column 1240 and take 960 pixel columns to the left, i.e. columns 281 to 1240. This is all done in the breakout box, which has received the 1280x1080 rendered buffer from the PS4. Why the offset/overscan? That's used by the interpolator for head-trajectory offset every other frame.

I'm confused, it sounds like you're suggesting that both the left and the right eyes are reusing the same pixel columns? I.e., 281-1000 are present on both, with only 41-280 reserved for the left and 1001-1240 for the right? I can understand the need to save some pixel columns to help with the interpolated frames, but surely it wouldn't be done the way you've described it... Maybe I'm not understanding you correctly.
 
So you start out with a 1280x1080 rendered display buffer and take two slices from it. For the left-eye slice you start at pixel column 41 and take 960 pixel columns to the right, i.e. columns 41 to 1000. For the right-eye slice you start at pixel column 1240 and take 960 pixel columns to the left, i.e. columns 281 to 1240. This is all done in the breakout box, which has received the 1280x1080 rendered buffer from the PS4. Why the offset/overscan? That's used by the interpolator for head-trajectory offset every other frame.

I was just thinking about that, but IMO that would not give you a 3D picture. Certainly not any real depth, as every single object within the scene would have the same offset (the 240 pixels you offset left/right in your example), so you wouldn't be able to simulate depth... certainly not forward/backward movement. I think you'd end up with a slightly strange image that is still very two-dimensional.

Remember: depth is achieved by a non-uniform overlap of objects. The closer an object is, the larger the offset needs to be; the smaller the offset, the farther away the object appears. You can see this by moving a pencil between your eyes, closer and farther away. Note that by closing one eye at a time (but still looking straight ahead) you will see that the pencil is more offset the closer you move it. I.e. when it's very close to your nose, from your left eye the pencil will be far right, and from your right eye it will be far left. When the pencil is farther away (let's say 20 metres, assuming you have arms that long :D), it will be closer to the middle from both eyes' points of view.

To achieve this, you need to render the same scene from two different angles, hence why 3d is costly to render in a realtime application.
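
A small sketch of that argument, using a simple pinhole-camera projection (the focal length and eye separation are made-up illustrative values): the on-screen offset between the two views shrinks as the point moves further away, which is exactly what a single fixed slice offset cannot reproduce.

    EYE_SEP = 0.065   # assumed eye separation in metres
    FOCAL = 800.0     # assumed focal length in pixels

    def project_x(point_x, point_z, cam_x):
        # Horizontal pinhole projection of a point seen from a camera at cam_x.
        return FOCAL * (point_x - cam_x) / point_z

    for depth in (0.3, 2.0, 50.0):  # near, mid, far (metres)
        left = project_x(0.0, depth, -EYE_SEP / 2)
        right = project_x(0.0, depth, +EYE_SEP / 2)
        print(f"depth {depth:5.1f} m -> disparity {left - right:7.2f} px")
    # Near objects get a large left/right offset, distant ones almost none;
    # a constant 240-pixel shift of one flat image can't encode that.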
 
So you start out with a 1280x1080 rendered display buffer and take two slices from it. For the left-eye slice you start at pixel column 41 and take 960 pixel columns to the right, i.e. columns 41 to 1000. For the right-eye slice you start at pixel column 1240 and take 960 pixel columns to the left, i.e. columns 281 to 1240. This is all done in the breakout box, which has received the 1280x1080 rendered buffer from the PS4. Why the offset/overscan? That's used by the interpolator for head-trajectory offset every other frame.

I created a couple of quick images in Paint (don't judge me), one being how I understand the VR headsets to work and the other being how I interpret what you're saying. Feel free to correct me if I'm being stupid here.

[Image: vr1.JPG]
Yes, I know all the proportions are wrong, but I think this shows what's happening. Your left eye is looking at the left circle and your right is looking at the right circle. Your brain then merges those two images to give the perception of 3D.

What you're describing seems more like this:

[Image: vr2.JPG]
Where the left and right eyes are sharing some pixel columns. Similar to how a 3D TV works.

Is this what you're saying?
 
I'm confused, it sounds like you're suggesting that both the left and the right eyes are reusing the same pixel columns? I.e., 281-1000 are present on both, with only 41-280 reserved for the left and 1001-1240 for the right? I can understand the need to save some pixel columns to help with the interpolated frames, but surely it wouldn't be done the way you've described it... Maybe I'm not understanding you correctly.
Yes, 281-1000 are present on both the left and right eye, but I didn't mention it to avoid confusion. :D The idea is to KISS, so yeah, the easier the better.
 
I created a couple of quick images in Paint (don't judge me), one being how I understand the VR headsets to work and the other being how I interpret what you're saying. Feel free to correct me if I'm being stupid here.

[Image: vr1.JPG]
Yes, I know all the proportions are wrong, but I think this shows what's happening. Your left eye is looking at the left circle and your right is looking at the right circle. Your brain then merges those two images to give the perception of 3D.

What you're describing seems more like this:

[Image: vr2.JPG]
Where the left and right eyes are sharing some pixel columns. Similar to how a 3D TV works.

Is this what you're saying?
The top diagram is from the screen perspective and the bottom diagram from the render buffer perspective.
 
As per my last post, I'm just going to repeat: you cannot take a single rendered frame and project two slices of it, at an offset, onto two screens for the left/right eye and get a 3D image. The whole point of creating a 3D image is that different objects are seen at a different angle from each POV and, depending on their position in the scene, need to be rendered at a larger or smaller offset to one another; that's why you need to render the scene from two angles and can't just take a static screen and split it up into slices...

The example with the pencil illustrates this rather nicely; an object at the tip of your nose would be far left from the right eye's POV, and far right from the left eye's POV. An object further into the scene (i.e. on the horizon) would be closer to the identical spot from both POVs.
 