I still don't think there's a good way to do it. After all, think of the games that have attempted to display explosions using 3D polygons. Not so good. Except now you'd have to take a 2D texture and try to make a very irregular polygon out of it. So suddenly your triangle count per scene would explode, and you'd have to reduce resolution or scene complexity even further.
Nope, that's not what I'm suggesting. There's zero extra geometry for rendering explosions.
You understand the idea of 3D from 2D, right, as used in Crysis 2 and Sony's tech? You render one single viewpoint and then derive two views from it by shifting pixels around based on depth. That's a single image: you could take a photograph of someone's head and use depth info to extrude it in 3D space. That's all it needs, so if you render a flat polygon but include some depth info in the texture, you'll have what's needed to create 3D from 2D by shifting pixels around. The pixel displacements aren't just on the flat plane of the polygon, which would leave a two-dimensional object sitting in 3D space; instead they're displaced as if the centre of the polygon were at a different distance. In the case of our explosions, each particle can have a bulge in its centre, like a displacement map, and the 3D-from-2D process will make it look 3D within the boundaries of the object. Since explosion clouds are pretty much spherical and somewhat hazy, it should be quite convincing.
The overhead would be storage for a depth map per particle (insignificant) and the 3D-ification process, which is that 2% overhead Crytek have talked about. For just rendering explosions that's a significant overhead, but if other aspects of the engine are reined in a bit it seems doable, and probably necessary, as 2D sprites are going to look awful and break the illusion.
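To make the pixel-shifting idea concrete, here's a rough sketch in Python/NumPy. It's entirely my own illustrative framing - the disparity scale, depth convention and hemisphere bulge are assumptions, not Crytek's or Sony's actual implementation - but it shows one eye's view being derived from a single image plus a depth map, and the kind of per-particle "bulge" depth an explosion sprite could carry.

```python
import numpy as np

def reproject_view(image, depth, eye, max_disparity=12):
    """Derive one eye's view from a single rendered image plus a depth map
    by shifting pixels horizontally (the '3D from 2D' idea).

    image: (H, W, 3) colour buffer of the single rendered viewpoint
    depth: (H, W) values in [0, 1]; here 0 = screen plane, 1 = nearest
    eye:   -1 for the left eye, +1 for the right eye
    max_disparity: largest pixel shift, an illustrative tuning constant
    """
    h, w, _ = image.shape
    out = np.zeros_like(image)
    shift = (eye * max_disparity * depth).astype(int)  # nearer pixels move further
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip(xs + shift[y], 0, w - 1)
        out[y, new_x] = image[y, xs]  # scatter pixels to their displaced positions
    return out

def particle_bulge_depth(size):
    """Hemispherical depth 'bulge' baked into an explosion particle's texture,
    so the reprojection rounds the flat sprite out within its own silhouette."""
    yy, xx = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    return np.sqrt(np.clip(1.0 - (xx ** 2 + yy ** 2), 0.0, None))
```

The scatter leaves small disocclusion holes that a real implementation would fill, but for hazy, roughly spherical explosion sprites the artefacts should be mild, which is the point being made above.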
For 720p/2 3D (KZ3 style):
Left eye: 640 vertical lines (set a)
Right eye: 640 vertical lines (set b)
Brain: 1280 unique lines (set a + set b) + depth information
But I think I understand what you mean.
I thought he meant effective vertical interlace due to line doubling.
The 640 vertical lines aren't interlaced to provide 1280 combined vertical lines. They are the exact same vertical lines, just shifted to the right or left depending on which eye that frame is meant for.
An interlaced scene where odd lines are shifted to the right and even lines are shifted to the left would be bizarre in the extreme. And using glasses to then provide only even lines to one eye and only odd lines to the other would be even more bizarre.
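A minimal sketch of what I mean, assuming a plain frame-per-eye pipeline (my own toy illustration, not Guerrilla's actual code): each eye gets its own complete 640-column frame rendered from a slightly offset camera, and nothing is split between odd and even lines.

```python
import numpy as np

def render(camera_x, points, width=640, height=720, focal=600.0):
    """Toy stand-in for the engine: project 3D points (x, y, z in metres) onto
    a width x height frame from a camera at (camera_x, 0, 0) looking down +z."""
    frame = np.zeros((height, width), dtype=np.uint8)
    for x, y, z in points:
        u = int(width / 2 + focal * (x - camera_x) / z)
        v = int(height / 2 + focal * y / z)
        if 0 <= u < width and 0 <= v < height:
            frame[v, u] = 255
    return frame

EYE_SEPARATION = 0.065  # metres; an illustrative value, not KZ3's actual figure
scene = [(0.0, 0.0, 2.0), (0.3, -0.1, 3.0), (-0.4, 0.2, 5.0)]

left = render(-EYE_SEPARATION / 2, scene)   # a full 640-column frame for the left eye
right = render(+EYE_SEPARATION / 2, scene)  # the same 640-column grid for the right eye
# Both frames span the exact same columns; only where things land within them
# differs (nearer points shift more).  The glasses then gate whole frames to
# each eye - odd and even lines are never divided between the eyes.
```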
Regards,
SB
I disagree, my friend: the pixels are completely different, no? Yes, on paper it is 640 lines, but the brain sees much more because each eye gets different information.
Left eye: 640 columns (set a)
Right eye: 640 columns (set b)
Brain: 1280 non-repeating columns (set a + set b) + depth information
Looks like Sony is also working on getting the "fake" 3D up to acceptable levels à la Crysis 2. The implication being that since there's less of a performance hit, they might push that in the future rather than "real" stereoscopic 3D.
I disagree, my friend: the pixels are completely different, no? Yes, on paper it is 640 lines, but the brain sees much more because each eye gets different information.
Left eye: 640 columns (set a)
Right eye: 640 columns (set b)
Brain: 1280 non-repeating columns (set a + set b) + depth information
No, that would be the same as saying the 2D version with 1280 columns would equal 2560 columns over two successive frames.
It doesn't work like that. Your left eye will see 640 columns. Your right eye then sees its 640 columns superimposed directly over where the 640 columns were for the left eye.
Your brain will only ever see 640 columns. Your TV will only ever display 640 columns (well, it'll be upscaled to 720p by the TV or console, but you get the idea). So after upscaling, each pixel in game will take up two pixels on screen (assuming a 1280x720 output). The image itself still remains 640 columns, albeit stretched out to 1280.
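As a rough way of seeing the stretching point (my own sketch; a real scaler filters rather than duplicates, but filtering can't invent new columns either): upscaling a 640-column eye image to a 1280-column panel just spreads each source column across two screen columns.

```python
import numpy as np

eye = np.random.rand(720, 640, 3)        # one eye's 640-column image

# Nearest-neighbour horizontal upscale to a 1280-column panel:
stretched = np.repeat(eye, 2, axis=1)    # shape (720, 1280, 3)

# Every source column now occupies two adjacent screen columns, so the
# stretched frame still only carries 640 distinct columns of information.
assert np.array_equal(stretched[:, 0::2], stretched[:, 1::2])
```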
Regards,
SB
Though there's an argument to be made that 3D is less... 'dependent' on resolution, I don't think these numbers are the way to go about it. Presented with two views of the same scene, left and right, the human mind will interpolate all sorts of detail - most of what you 'see' in a given moment is actually made up by the brain. HD resolutions are about fidelity and giving a scene some clarity. Higher resolution also reduces artefacts like aliasing. 3D will, in my estimation, increase information density and so decrease the perception of jaggies, but it won't affect fidelity. The end result is that 3D at a given resolution probably looks less aliased than the same resolution in 2D, but it won't look any sharper.
Looking at numbers and samples isn't an effective way to explain human perception. I imagine there are some good research papers that explore this, but I don't know of any!
On this I agree, my friend. You can't simply say one column is superimposed over the other and so it's all the same lines.
The brain recreates a new image (like dreaming) based on all the different data it receives. It is the ultimate GPU! Only problem... no HDMI output. It would be great to connect a DVR at night.
I still say you're wrong. Let's for instance imagine that you're looking at a 2D picture on a 2D screen. Now we're going to show that 2D picture on the 3D screen, but the picture stays in 2D. All we're doing is placing this 2D picture slightly in front of the physical screen's edges, at exactly the same level as the screen's edges, or slightly into the screen. The picture is still 2D. It's the same resolution it was before. You see it in the same resolution as before. Your brain interprets it the same way. Except, now it is perceived to be at a different distance from you relative to the edge of the screen (or not).
Now we should of course not undervalue the third dimension that has just been added. But you're mistaken if you believe any x or y information is added - only z information is added. The source screen still only encodes half the x or half the y information, and this information doesn't magically return. Pixel aliasing just has an additional dimension to be visible in, and various sources in fact suggest that this helps your brain identify the individual pixels and aliasing steps more accurately (you now see them in three dimensions) rather than the other way around, which is why Sony's researchers say that good AA is more important than resolution. I presume this is because your brain gets even better at edge detection in 3D space, which makes sense if you think about it.
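A tiny sketch of that thought experiment (again just my own illustration): putting an ordinary 2D picture at some chosen depth on a stereo display gives both eyes the same pixels with a constant horizontal offset, so only z information is added and no new x or y samples appear.

```python
import numpy as np

def flat_picture_stereo(picture, disparity):
    """Show a 2D picture on a stereo display at a chosen depth: both eyes get
    the same pixels, offset horizontally by a constant disparity that only
    sets the perceived distance relative to the screen plane."""
    left = np.roll(picture, disparity // 2, axis=1)
    right = np.roll(picture, -(disparity - disparity // 2), axis=1)
    return left, right

pic = np.random.rand(720, 1280, 3)
left, right = flat_picture_stereo(pic, disparity=8)

# Undoing the constant shift recovers the other eye's view exactly - the pair
# contains no extra horizontal or vertical detail, only a depth offset.
assert np.array_equal(np.roll(left, -8, axis=1), right)
```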
The difference between pixels is like 5% on average...
A 640p resolution does not look like a 1280 resolution with glasses. It looks like a 3D 640p image and that's it. I've tried this a couple of times on my PC.
What's definitely true is that resolution isn't that big a factor in 3D anymore. Neither is stuff like great black levels.