DriveClub by Evolution Studios [PS4]

I was just thinking about that, but IMO that would not give you any 3d picture. Certainly not any real depth as then every single object within the scene would have the same offset (by the 240 pixels you offset left/right in your example), so you wouldn't be able to simulate depth... certainly not forward/backwards movement. I think you'd end up with a slightly strange image that is still very 2 dimensional.

Such is the beauty of the parallax view. :LOL:

Remember: the depth is achieved by a non-uniform offset of objects. The closer an object is, the larger the offset needs to be; the smaller the offset, the farther away the object appears. You can see this by moving a pencil between your eyes, closer and farther away. By closing one eye at a time (while still looking straight ahead) you will see that the pencil appears more offset the closer you move it. I.e. when it's very close to your nose, from your left eye the pencil will be far right, and from your right eye it will be far left. When the pencil is farther away (let's say 20 meters, assuming your arms are that long :D), it will be closer to the middle from both eyes' viewpoints.
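
To put rough numbers on the pencil example, here's a quick toy calculation (the IPD and focal-length values below are just assumptions for illustration, not anything from the thread): the left/right offset falls off roughly as 1/distance.

[code]
import math

IPD = 0.064     # metres between the eyes (a typical value, assumed)
FOCAL = 0.017   # metres, rough focal length for the toy pinhole projection (assumed)

def disparity(distance_m, ipd=IPD, focal=FOCAL):
    # Each eye sees a point straight ahead at a horizontal angle of atan((ipd/2) / distance);
    # projecting both angles through the pinhole gives the left/right offset between the views.
    angle = math.atan2(ipd / 2.0, distance_m)
    return 2.0 * focal * math.tan(angle)

for d in (0.05, 0.5, 2.0, 20.0):   # pencil at the nose ... pencil 20 metres away
    print(f"{d:5.2f} m -> offset {disparity(d) * 1000:7.3f} mm")
[/code]

The offset is large for the near pencil and almost zero at 20 metres, matching the description above.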

To achieve this, you need to render the same scene from two different angles, hence why 3d is costly to render in a realtime application.
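
As a minimal sketch of what "two different angles" means in practice (render_scene and the IPD value are placeholders, not an actual engine API):

[code]
IPD = 0.064  # metres, a typical interpupillary distance (assumed)

def eye_positions(head_pos, ipd=IPD):
    """Return (left, right) eye positions offset sideways from the head position."""
    x, y, z = head_pos
    return (x - ipd / 2.0, y, z), (x + ipd / 2.0, y, z)

def render_stereo_frame(head_pos, render_scene):
    """Render the same scene twice, once per eye; render_scene stands in for
    whatever the engine's actual draw call is."""
    left_eye, right_eye = eye_positions(head_pos)
    left_image = render_scene(camera_pos=left_eye)    # full scene pass #1
    right_image = render_scene(camera_pos=right_eye)  # full scene pass #2
    return left_image, right_image

# Stub usage: a real renderer would return an image, not a string.
print(render_stereo_frame((0.0, 1.7, 0.0), lambda camera_pos: f"view from {camera_pos}"))
[/code]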

Remember that the optics will also do some warping of those pixels from the eye's perspective, and you also get some 3d effects from the higher frame rate. You could also do pixel manipulation if you want, but why? Anything you do will be pretty subtle, and then you'd have difficulty using those pixels for the alternate display.
 
Yes to Phil. You have to have two separate cameras rendered, one for each eye, otherwise there are no depth cues and you're just seeing a flat image like a TV. Games are rendered like stereo 3D TV, two viewpoints. The display buffer puts left and right images (rendered at 960x1080 or thereabouts) side-by-side on the display. The lenses ensure each eye sees only its half of the display, and each half is a complete image. The game renders 60 fps, but has to draw two camera views per frame, so unless they can use 3D extrapolation tricks (which probably don't work for VR?), the game renders 120 camera views per second, but each at half 1080p res. So the pixel draw requirements are the same as 1080p60, but the geometry requirements are equivalent to 1080p120.
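
A toy sketch of the side-by-side packing described above, using the resolutions from the post (representing images as row-major lists of pixel rows is just a simplification for illustration):

[code]
DISPLAY_W, DISPLAY_H = 1920, 1080          # single shared panel (figures from the post)
EYE_W, EYE_H = DISPLAY_W // 2, DISPLAY_H   # 960 x 1080 per eye

def pack_side_by_side(left_image, right_image):
    """Pack two per-eye images (row-major lists of pixel rows) into one display
    buffer: left eye in the left half, right eye in the right half."""
    display = []
    for y in range(EYE_H):
        display.append(list(left_image[y]) + list(right_image[y]))
    return display

# Tiny sanity check with dummy one-character "pixels".
left = [["L"] * EYE_W for _ in range(EYE_H)]
right = [["R"] * EYE_W for _ in range(EYE_H)]
frame = pack_side_by_side(left, right)
print(len(frame), len(frame[0]))   # 1080 1920
[/code]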
 
The top diagram is from the screen perspective and the bottom diagram from the render buffer perspective.

I'm pretty sure you're wrong with this, though I'm willing to stand corrected if you're able to describe how and why that might happen. I can't see how you could share pixels like that.
 
As per my last post, I'm just going to repeat: you cannot take a static rendered frame and project two slices at an offset to two screens for the left/right eye to create a 3d image. The whole point of creating a 3d image is that different objects sit at different angles from each POV and, depending on their position in the scene, need to be rendered at a larger or smaller offset to one another; that's why you need to render the scene from two angles and can't just take a static screen and split it up into slices...

The example with the pencil illustrates this rather nicely; an object at the tip of your nose would be far left from the right eye's POV, and far right from the left eye's POV. An object farther into the scene (i.e. on the horizon) would be closer to the identical spot from both POVs.
That's DOF, not necessarily 3d, and point of focus would be getting into foveated-view territory, which is not what you get from the current VR solutions. Are there other pixel manipulations such as shading or blurring you can do to give you a better 3d effect? Sure, but that'd be done as a post process, after taking your slice of the render buffer. That would add latency though, so the effect had better be worth it!
 
You're wrong on this, upnorthsux. ;) Each eye sees a different perspective. An extreme example: hold a piece of paper perpendicular to your face directly in front of your nose - your left eye sees one side of the paper and the right eye sees the other. For objects at a moderate distance, the differences in object offsets and in what you see of them build a stereoscopic image. At greater distance there are other cues like parallax motion. These headsets work with stereoscopy, just like 3DTV and 3D cinema. What you're describing is just single-image TV or cinema, the same image seen by both eyes, and will have no more depth than that. The only depth you could get would be the focal distance of that virtual plane defined by the stereoscopic offset.

DOF is blurring before/beyond the focal point. There's no accurate DOF in VR which is one of its shortcomings - in VR, all the pixels are equally sharp to the eye regardless of virtual distance and virtual focal point.
 
...I didn't mention it to avoid confusion. :D The idea is to KISS, so yeah, the easier the better.

I have no idea what you're saying here.

Such is the beauty of the parallax view. :LOL:

Or here.

...you also get some 3d effects from the higher frame rate.

I don't think this is in any way true.

Are there other pixel manipulations such as shading or blurring you can do to give you a better 3d effect? Sure

Nor does this make any sense.
 
I don't think this is in any way true.
A high framerate on an object that moves sideways, exposing nice parallax, helps the brain get distance information about the scene.

You can try this by closing your eyes and turning 90 degrees, or until your view has changed enough to be a new view.
Stay still and open one eye; the view should be close to a 2D image.
Now move slightly sideways and suddenly, through parallax, you have some 3D sense of the view.

This also happens with fabricated views when you have moving content and 60-75fps.
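
A quick toy calculation of that motion-parallax cue (the 10 cm sideways step is an assumed figure): near objects sweep through a much larger angle than far ones for the same movement, which is what the brain reads as depth.

[code]
import math

def angular_shift(distance_m, sideways_move_m):
    """Degrees an object straight ahead appears to shift when the viewer steps sideways."""
    return math.degrees(math.atan2(sideways_move_m, distance_m))

STEP = 0.1  # a 10 cm sideways move between views (assumed)
for d in (0.5, 2.0, 10.0, 100.0):
    print(f"object at {d:6.1f} m shifts by {angular_shift(d, STEP):5.2f} degrees")
[/code]

A higher frame rate samples this shift more often, so smooth sideways motion reads as depth even from a single eye's view.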
 
It's finally here, taken from gaf. Pretty crazy to see this level of detail in a driving game.
[Screenshots: driveclub-tm-20151205105002.png, driveclub-tm-20151205103320.png]
 
I also have the impression that general IQ has improved since the last patch; it may be placebo, but I think they improved AA/AF a bit (not dramatically).
 
Yes to Phil. You have to have two separate cameras rendered, one for each eye, otherwise there are no depth cues and you're just seeing a flat image like a TV. Games are rendered like stereo 3D TV, two viewpoints. The display buffer puts left and right images (rendered at 960x1080 or thereabouts) side-by-side on the display. The lenses ensure each eye sees only its half of the display, and each half is a complete image. The game renders 60 fps, but has to draw two camera views per frame, so unless they can use 3D extrapolation tricks (which probably don't work for VR?), the game renders 120 camera views per second, but each at half 1080p res. So the pixel draw requirements are the same as 1080p60, but the geometry requirements are equivalent to 1080p120.

Don't understand the bold: you say 60fps for pixel draw and 120fps for geometry. This hypothetical talk is without the 60->120 interpolation trick, right?

Shouldn't both be at 1080p120 without the interpolation or both at 1080p60 with?
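
For reference, a back-of-envelope check of the numbers in the quoted post (all figures taken from it):

[code]
eye_w, eye_h = 960, 1080        # per-eye resolution from the quoted post
fps = 60                        # game update rate from the quoted post
views = 2                       # one camera per eye

pixels_per_second = eye_w * eye_h * views * fps
print(pixels_per_second == 1920 * 1080 * 60)   # True: fill rate matches plain 1080p60

scene_submissions_per_second = views * fps
print(scene_submissions_per_second)            # 120: geometry is submitted as often as at 1080p120
[/code]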
 
Actually, the non-overlapping circles are more realistic. The eyes have roughly a 180-degree field of view combined, but each eye separately only has, say, 100 degrees, so part of the view does not need to be rendered twice. Of course, the current headsets only render, what, 120 degrees? So that makes it less important.
 
Yeah, that's the biggest problem with headsets right now imo. I used Oculus, Google Cardboard (with a dedicated headset, not the actual cardboard thing) and Vive, and all still have that tunnel-vision feeling (like you have a box around your eyes), even though I'd assume it's a bit improved in the latest headsets. The only way I can see that problem being resolved completely is by having a 21:9 screen for VR instead of 16:9, but then the processing power needed would increase as well (1280x1080 per eye instead of 960x1080).
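
Rough arithmetic for that wider-panel idea, using the per-eye resolutions from the post:

[code]
current_eye = 960 * 1080     # per-eye pixels mentioned in the post
wider_eye = 1280 * 1080      # per-eye pixels for the hypothetical wider panel

print(wider_eye - current_eye)            # 345600 extra pixels per eye
print(round(wider_eye / current_eye, 2))  # ~1.33x the fill cost per eye
[/code]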
 
Don't understand the bold: you say 60fps for pixel draw and 120fps for geometry. This hypothetical talk is without the 60->120 interpolation trick, right?

Shouldn't both be at 1080p120 without the interpolation or both at 1080p60 with?
It's easier to think of it as double the geometry due to two very similar views, so 2 views at 960x1080 and 60fps. (This also makes the actual game clock easier to understand.)

Shadowmaps, cubemaps and some other things can be shared between the views, so you do not have to do everything twice.
Polygon shading efficiency is somewhat lower as polygons get smaller.
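
A minimal sketch of that sharing idea (all function names below are made-up stubs, not a real engine API): the view-independent work is built once per frame and reused by both per-eye passes.

[code]
def build_shadow_map(scene):
    return f"shadow map of {scene}"            # stub: depends on the light, not the eye

def build_cubemap(scene):
    return f"cubemap of {scene}"               # stub: also view-independent here

def render_view(scene, eye, shadow_map, cubemap):
    return f"{eye} view of {scene} ({shadow_map}, {cubemap})"   # stub per-eye pass

def render_frame_with_shared_maps(scene):
    shadow_map = build_shadow_map(scene)       # built once, shared by both eyes
    cubemap = build_cubemap(scene)             # built once, shared by both eyes
    return (render_view(scene, "left", shadow_map, cubemap),
            render_view(scene, "right", shadow_map, cubemap))

print(render_frame_with_shared_maps("track"))
[/code]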
 
Yeah, that's the biggest problem with headsets right now imo. I used Oculus, Google Cardboard (with a dedicated headset, not the actual cardboard thing) and Vive, and all still have that tunnel-vision feeling (like you have a box around your eyes), even though I'd assume it's a bit improved in the latest headsets. The only way I can see that problem being resolved completely is by having a 21:9 screen for VR instead of 16:9, but then the processing power needed would increase as well (1280x1080 per eye instead of 960x1080).
It's an optics and form-factor problem too. Starbreeze's 2x 1080p panel headset is fine as an enthusiast's bit of kit, but it's not going to fly as a consumer device.
 