Efficiently implementing 3D support in next generation console hardware?

I wonder if an optional "3D add-on" could be shipped at launch. Essentially, developers would need to program for it as a standard feature, but the add-on would be nothing more than a second GPU box. Those who buy 3D TVs as early adopters have the cash, so why not make them pay for it if they want it? Asking for another $100 or so to get 3D support could be a way to do this. Of course, just cutting down resolution/graphics is another way.
 
I don't think they should make the consoles any more like a PC. What we have now is more than enough of that.

Linking two consoles together also doesn't sound likely; there would need to be a very fast, low-latency link between them for it to work...
 
The reverse reprojection surface pixel cache system I used in early Trials HD development builds could really easily support stereoscopic 3D rendering. The stereoscopic view would need only very little extra processing power, as the geometry passes are really cheap (they are basically just simple dual-projected texture geometry with no light calculation, no shadow calculation, etc.). Stereoscopic rendering is really a poster child for all pixel-reusing techniques.
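Not the actual Trials HD code, just a minimal CPU-side sketch of the pixel-reuse idea being described, assuming a parallel-axis stereo camera so the second eye can largely be reconstructed by shifting already-shaded pixels horizontally by a depth-dependent disparity (the buffer layout and parameter names are invented for illustration):

```cpp
#include <cmath>
#include <cstdint>
#include <limits>
#include <vector>

// Hypothetical sketch, not the real Trials HD system: reuse pixels already
// shaded for the left eye by scattering them into the right eye's buffer.
// Assumes a parallel-axis stereo rig, so reprojection is a pure horizontal
// shift whose size depends only on depth.
struct Framebuffer {
    int width = 0, height = 0;
    std::vector<uint32_t> color;  // packed RGBA, one value per pixel
    std::vector<float> depth;     // linear view-space depth, one value per pixel
};

void reprojectLeftToRight(const Framebuffer& left, Framebuffer& right,
                          float eyeSeparation, float focalLengthPixels)
{
    right.width = left.width;
    right.height = left.height;
    right.color.assign(left.color.size(), 0);
    right.depth.assign(left.depth.size(), std::numeric_limits<float>::max());

    for (int y = 0; y < left.height; ++y) {
        for (int x = 0; x < left.width; ++x) {
            const int src = y * left.width + x;
            const float z = left.depth[src];

            // Horizontal disparity in pixels; nearer surfaces shift more.
            const float disparity = eyeSeparation * focalLengthPixels / z;
            const int dstX = static_cast<int>(std::lround(x - disparity));
            if (dstX < 0 || dstX >= right.width)
                continue;

            // Simple depth test so the nearest surface wins where pixels collide.
            const int dst = y * right.width + dstX;
            if (z < right.depth[dst]) {
                right.depth[dst] = z;
                right.color[dst] = left.color[src];
            }
        }
    }
}
```

In a real implementation the disoccluded holes still need filling, which is where the cheap geometry passes mentioned above would come in.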
 
I think it was Joker who went to a private screening that you're thinking of.

Yeah, I went to a private screening as part of a job interview actually, since I've been thinking of exiting games. The process is a mix of labor and tech: objects still need to be traced out by teams of artists, but there is some substantial code tech involved to help the process. It's definitely not automatic though, it takes serious labor. I very much question how easy it would be to outsource said labor, since it's one of those things that needs to be done at a consistently high quality level and monitored closely. We're used to cleaning up outsourced art in games, but I'm not sure how well that would work as part of the 2D->3D movie process. If the work isn't done exactly right then it might need to be totally thrown away.

Console-wise, they need to avoid having to render the scene twice (once for each eye). Halving the resolution helps on the pixel side, but doesn't help on the vertex side, since all verts would still have to be processed twice. I'm guessing they could implement 3D with some sort of MRT approach, where you turn on a "3D mode" render state and the hardware automatically enables two color output buffers and uses the resultant Z values to shift the final pixels accordingly, so each shader would write out the same color to two different buffers at two different locations. That way vertex load stays the same in 2D or 3D mode, and they could render 3D at full res since pixel load would also be the same (all shaders are run only once). It should be easy to toggle 3D on/off that way. The only hit is some bandwidth.
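A minimal sketch of what that dual write-out could look like per shaded fragment, assuming the depth-to-shift mapping is exposed as a couple of tuning parameters; nothing here corresponds to a real console render state, it just illustrates why shading cost stays flat while only the color writes double:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical illustration of the "shade once, write to two eye buffers"
// idea. eyeSeparationPx and convergenceDepth are invented tuning parameters,
// not part of any real console API.
struct EyeBuffers {
    int width = 0, height = 0;
    std::vector<uint32_t> leftColor, rightColor;  // one target per eye
};

void emitStereoFragment(EyeBuffers& fb, int x, int y,
                        uint32_t shadedColor, float depth,
                        float eyeSeparationPx, float convergenceDepth)
{
    // Screen-space parallax: zero at the convergence plane, growing with
    // distance from it. Each eye gets half the shift, in opposite directions.
    const float disparity = eyeSeparationPx * (1.0f - convergenceDepth / depth);
    const int leftX  = static_cast<int>(std::lround(x + 0.5f * disparity));
    const int rightX = static_cast<int>(std::lround(x - 0.5f * disparity));

    // The fragment was shaded exactly once; only these two writes are extra.
    if (leftX >= 0 && leftX < fb.width)
        fb.leftColor[y * fb.width + leftX] = shadedColor;
    if (rightX >= 0 && rightX < fb.width)
        fb.rightColor[y * fb.width + rightX] = shadedColor;
}
```

The rounding of the shifted X positions is where the non-integer shift issue raised in the reply below shows up.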

This method should be doable on existing consoles as well, although there is little incentive to support 3D games anytime soon.
 
The only complication with that approach would be that most fragments would not be shifted by an integer number of pixels.
 
I support the console SLI idea if we get to sell two copies of each game ;-)

don't give them ideas.. :LOL:

For 3D rendering, well, for stereo 3D rendering, you don't need special hardware: case in point, nvidia's stereo drivers, which have been available for a decade and run on anything from a TNT onwards.

The interesting tricks would be done at the software level, in the API or game engine.
Such as a take on the geometry instancing idea: submit the geometry once and let the instance index decide which eye it gets rendered for, roughly as sketched below.
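A rough sketch of that instancing take (CPU-side, made-up names, no real graphics API): each mesh is drawn with two instances, and the per-vertex work uses the instance index to pick the eye's view-projection matrix and target viewport, so geometry is submitted once even though both eyes get rendered:

```cpp
#include <array>
#include <vector>

// Hypothetical illustration of stereo rendering via instancing, not a real
// driver or engine API: instance 0 is the left eye, instance 1 the right eye.
struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<float, 16>;  // column-major 4x4

Vec4 transform(const Mat4& m, const Vec4& v)
{
    return {
        m[0] * v.x + m[4] * v.y + m[8]  * v.z + m[12] * v.w,
        m[1] * v.x + m[5] * v.y + m[9]  * v.z + m[13] * v.w,
        m[2] * v.x + m[6] * v.y + m[10] * v.z + m[14] * v.w,
        m[3] * v.x + m[7] * v.y + m[11] * v.z + m[15] * v.w,
    };
}

struct StereoVertexOut {
    Vec4 clipPos;
    int viewportIndex;  // 0 = left eye target, 1 = right eye target
};

// The per-vertex work for one instance of the shared draw call.
StereoVertexOut stereoVertex(const Vec4& objectPos,
                             const std::array<Mat4, 2>& eyeViewProj,
                             int instanceId)
{
    return { transform(eyeViewProj[instanceId], objectPos), instanceId };
}

// Usage sketch: one vertex stream submitted once, two instances, two eyes.
// Draw-call and state overhead isn't doubled, although the per-vertex math
// still runs once per eye.
std::vector<StereoVertexOut> drawStereo(const std::vector<Vec4>& vertices,
                                        const std::array<Mat4, 2>& eyeViewProj)
{
    std::vector<StereoVertexOut> out;
    out.reserve(vertices.size() * 2);
    for (int instanceId = 0; instanceId < 2; ++instanceId)
        for (const Vec4& v : vertices)
            out.push_back(stereoVertex(v, eyeViewProj, instanceId));
    return out;
}
```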
 