Efficiently implementing 3D support in next generation console hardware?

Squilliam

I know all about the inefficient methods of alternate-frame rendering, which effectively split a baseline 60FPS source into two 30FPS streams, one for each eye. So assuming one of the console manufacturers decided they wanted every game next generation to support 3D glasses, how would they best implement that technology in hardware? We know the hard way, which is to simply double up the graphics hardware or halve the image quality, but what would be the most efficient, seamless and easy method of implementing 3D for the next generation?
 
"So assuming that one of the console manufacturers decided they wanted to make every game next generation support 3D glasses then how would they best implement that technology in hardware?"

They likely wouldn't; if they wanted 100% support, they'd just require it.
Although they could just double up the graphics hardware and have the API hide the camera details.
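
As a rough sketch of what that API-level abstraction could look like (the types, the half_ipd parameter and the parallel-axis convention here are all assumptions, not any vendor's actual API): the game submits a single camera, and the runtime derives the two eye cameras by sliding it along its right vector.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(Vec3 v) {
    const float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return mul(v, 1.0f / len);
}

struct Camera { Vec3 position, target, up; };

// The game submits one camera; the runtime derives both eye cameras by
// sliding position and target along the camera's right vector. Shifting
// both keeps the eye axes parallel (avoiding the vertical parallax that
// toed-in cameras introduce). half_ipd is in world units.
void deriveEyeCameras(const Camera& mono, float half_ipd,
                      Camera& left, Camera& right) {
    const Vec3 forward   = normalize(sub(mono.target, mono.position));
    const Vec3 rightAxis = normalize(cross(forward, mono.up));
    const Vec3 offset    = mul(rightAxis, half_ipd);
    left = right = mono;
    left.position  = sub(mono.position, offset);
    left.target    = sub(mono.target,   offset);
    right.position = add(mono.position, offset);
    right.target   = add(mono.target,   offset);
}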
 
What I meant was: is it possible to effectively render a game in 3D without doubling the workload? If all you're doing is switching the camera angle then, if you're smart about it, how much duplication of the rendering process can you avoid?
 
You can certainly reduce redundant work outside the graphics pipeline, but you don't need hardware to do that.

You could potentially share a lot of the vertex work, but you'd have to have some mechanism to separate things that care about eye position from those that don't. You'd probably add another step in the transformation pipeline and split the results. It's probably a lot of work for a minimal gain.
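
As a minimal sketch of that extra step, assuming a conventional model-world-clip pipeline (all names here are hypothetical): the eye-independent model-to-world transform runs once per vertex, and only the eye-dependent view-projection is done twice.

#include <cstddef>
#include <vector>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[16]; };  // column-major

// Standard column-major mat4 * vec4.
Vec4 transform(const Mat4& m, const Vec4& v) {
    return {
        m.m[0]*v.x + m.m[4]*v.y + m.m[8]*v.z  + m.m[12]*v.w,
        m.m[1]*v.x + m.m[5]*v.y + m.m[9]*v.z  + m.m[13]*v.w,
        m.m[2]*v.x + m.m[6]*v.y + m.m[10]*v.z + m.m[14]*v.w,
        m.m[3]*v.x + m.m[7]*v.y + m.m[11]*v.z + m.m[15]*v.w,
    };
}

// Split the pipeline: model-to-world doesn't care about eye position, so
// it runs once; only the per-eye view-projection is doubled.
void transformStereo(const std::vector<Vec4>& modelVerts,
                     const Mat4& modelToWorld,   // shared between eyes
                     const Mat4& viewProjLeft,   // eye-dependent
                     const Mat4& viewProjRight,  // eye-dependent
                     std::vector<Vec4>& clipLeft,
                     std::vector<Vec4>& clipRight) {
    clipLeft.resize(modelVerts.size());
    clipRight.resize(modelVerts.size());
    for (std::size_t i = 0; i < modelVerts.size(); ++i) {
        const Vec4 world = transform(modelToWorld, modelVerts[i]);  // once
        clipLeft[i]  = transform(viewProjLeft,  world);             // twice
        clipRight[i] = transform(viewProjRight, world);
    }
}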

You might also be able to do some tricks with distant objects, but I doubt it's worthwhile.

Fundamentally you're rendering two views from similar but disparate viewpoints.

I think the bigger issue with 3D is whether it can be done in a non-fatiguing, value-adding way that isn't just a gimmick.
 
Laa-yosh mentioned experiencing a technology that can take old 2D footage and render it in 3D. I think last week's Channel 4 3D week used that technology, because they showed Saw 3 which, AFAIK, wasn't ever recorded stereoscopically. If this is the case, I don't know how quick and easy such a 2D-to-3D conversion could be created in-game. Or even whether it's any good. Channel 4's 3D was very ropey. The anaglyph method doesn't help.
 
Did I really do that? What I've heard about is converting old movies to 3D, like Titanic, but I don't think that has anything to do with advanced technology. What's more plausible is to farm it out to branches of existing post-effects studios, to India and other countries in the Far East.

The job would then be to rotoscope (trace) every main object and character and try to get some 3D info from the camera movement, then somehow rebuild the scene in a simple 3D representation and project the film footage back onto the geometry. Objects further away could literally be like cardboard cutouts.

If you think this is too much work, think again. Light sabers and laser blasts in SW, or Tron's character FX, were done with hand-painted masks, and there was absolutely no digital tool for that process. Today it's a lot easier, although still quite repetitive and requiring very little creativity... but it can be done. So if there's a market for it, then it will be done, too. But obviously this won't be as good as something filmed in 3D, and it can't be used on games either.


As for the implementation in nextgen consoles, I really wonder how all of today's 2D fake particle effects would work. That can be a real problem...
 
Game developers have a fair amount of experience with this kind of work already, as it's not that different from split-screen games. That said, you should be able to do much better than that, because you're always certain you need the same textures and the same geometry/animations, and you can do the same culling (just expand the view frustum slightly to cover both eyes, but that's generally a minimal difference).
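
A minimal sketch of that single-pass culling, assuming sphere-vs-frustum tests against the centre eye's frustum (padding the radius by half the eye separation is one simple, conservative way to cover both eyes; all names are hypothetical):

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };  // normalized plane: dot(n, p) + d >= 0 is inside
struct Sphere { Vec3 center; float radius; };

// One culling pass for both eyes: test against the centre eye's frustum,
// but pad every bounding sphere by half the eye separation. Each real eye
// frustum is just the centre frustum translated by at most half_ipd, so
// the padded test is conservative - slightly looser, but done once.
bool visibleToEitherEye(const Sphere& s, const Plane frustum[6], float half_ipd) {
    for (int i = 0; i < 6; ++i) {
        const float dist = frustum[i].n.x * s.center.x
                         + frustum[i].n.y * s.center.y
                         + frustum[i].n.z * s.center.z
                         + frustum[i].d;
        if (dist < -(s.radius + half_ipd))
            return false;  // fully outside this plane for both eyes
    }
    return true;
}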

I do think there's a fair amount of optimisation possible, both in hardware and in software. The question is going to be whether or not it's worth it. I think it will be interesting to see how Sony is going to present the 3D pipeline at CES next year, given the way they are planning to support the PS3 in combination with their Bravia TVs.

I'm currently expecting that, at least on PC, we'll see 3D abstracted like it is now in the NVidia drivers, and hardware won't show up in the picture until 3D is widespread enough or it can be done really cheaply.

Did I really do that? What I've heard about is converting old movies to 3D, like Titanic, but I don't think that has anything to do with advanced technology. What's more plausible is to farm it out to branches of existing post-effects studios, to India and other countries in the Far East.

The job would then be to rotoscope (trace) every main object and character and try to get some 3D info from the camera movement...
Ahh, I didn't realise it was that pantsy. I wondered how you could possibly automatically extract 3D data, and couldn't come up with a solution myself. Applying a human brain explains how it's possible to retro-fit 2D movies with 3D.

Of course, that method would work on games. A G-buffer would provide the 3D info, and the 3D-ification could be performed on that.
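
To make that concrete, here's a naive sketch of depth-driven reprojection - one plausible reading of how such a pass could work, not Sony's actual technique; the inverse-depth disparity model and all names are assumptions:

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Build one eye's image by shifting each pixel horizontally according to
// its depth: near pixels get a large disparity, far pixels almost none.
// eye_sign is -1 for the left eye, +1 for the right.
void reprojectEye(const std::vector<std::uint32_t>& color,  // mono frame
                  const std::vector<float>& depth,          // linear depth
                  std::vector<std::uint32_t>& out,
                  int width, int height,
                  float eye_sign, float disparity_scale) {
    out.assign(static_cast<std::size_t>(width) * height, 0u);  // 0 = hole
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const int i = y * width + x;
            const float z = std::max(depth[i], 0.1f);  // avoid divide-by-zero
            const int nx = x + static_cast<int>(eye_sign * disparity_scale / z);
            if (nx >= 0 && nx < width)
                out[y * width + nx] = color[i];  // forward warp
        }
    }
    // Any pixel still 0 is a disocclusion: something this eye should see
    // but the mono frame never rendered.
}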
 
Sony showed 3D games running on standard PS3s with the TV doing some sort of processing, right? If so, I doubt the processing would be that intensive. If that's the case and this sort of processing could be done on a single SPU, say, then future games could potentially do the processing inside the PS3 so that it would work with any 3D-ready TV. Maybe ;)

Future consoles could reserve a core specifically for this from the start so that it is available for every single game, or a dedicated chip similar to what is used in the TV could be included, like an advanced version of the 360's scaling chip.
 
No, not really; think about it like cutting out the actor and placing him on a simple cardboard model with his outline, then moving him in 3D space in front of a virtual pair of cameras. Then do it with everything that you consider important enough. Sort of like 2D modeling and animation, but with several layers and parallax... Most modern compositing packages have enough 3D support to accomplish this, but it does need a human brain, which you can't yet program into a game engine.
 
Laa-yosh mentioned experiencing a technology that can take old 2D footage and render it in 3D. I think last week's Channel 4 3D week used that technology, because they showed Saw 3 which, AFAIK, wasn't ever recorded stereoscopically. If this is the case, I don't know how quick and easy such a 2D-to-3D conversion could be created in-game. Or even whether it's any good. Channel 4's 3D was very ropey. The anaglyph method doesn't help.

I think it was Joker who went to a private screening that you're thinking of.
 
No, not really; think about it like cutting out the actor and placing him on a simple cardboard model with his outline, then moving him in 3D space in front of a virtual pair of cameras. Then do it with everything that you consider important enough. Sort of like 2D modeling and animation, but with several layers and parallax... Most modern compositing packages have enough 3D support to accomplish this, but it does need a human brain, which you can't yet program into a game engine.

But didn't Sony show tech that did this? If you have the depth buffer, you know the exact depth of every object, so processing the 2D image would be a lot easier. I'm guessing that's how their tech works anyhow; I can't think of any other way.
 
The problem is that if your two viewpoints are not in the same position, then what they see of the scene is different. There will be areas in the background, covered by objects in the foreground, which should become visible once you use two cameras instead of one.

You can test it yourself: just hold up your index finger about 5 inches from your face and look at it with only your left eye, then your right eye. Now lift your hand in front of your face about a foot away and look at whatever's beyond your hand - wall, TV, or the computer screen - so that you cover up a good chunk of it, and then repeat. In both cases you get two very different images; you get to see stuff that the other eye cannot see. Your brain does a lot of image processing to deal with stuff like this when both of your eyes are open, so you don't really notice, but it's there, and stereo games and movies have to deal with it as well.

Now even if you have a full G-buffer, you don't store occluded pixels in it - but you'd need them to get this stereo effect. So what needs to be done is a full reconstruction of the background. There are techniques for this in movie VFX (think about how they painted out Andy Serkis in LOTR in order to replace him with the far thinner Gollum), which rely on 2D image editing, retouching etc. The above-mentioned fake 3D upgrade would of course have to cover this as well. But games don't have the human brain and painting skill to replicate this trick either.

Also, the above-mentioned fake 3D can't deal with the first example either: the case where you have two different images of your finger, one with the front and right side, one with the front and left side, whereas a mono view would only have the front side. It's detail that's lost from a 2D movie as well, which is why I mentioned the cardboard effect before. If you have static objects and backgrounds and a moving camera, then it is possible to use simple 3D geometry and re-project the movie's frames onto it to get rid of this, but it's quite unlikely that they'd do it for the actors, or complex stuff like trees, etc.
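
To make the disocclusion problem concrete, here's the crudest possible hole fill for a warped stereo image like the one sketched earlier (a hypothetical example; real conversion fills from the background side using depth, or has an artist paint the region by hand):

#include <cstdint>
#include <vector>

// Naive disocclusion fill: walk each scanline and smear the last valid
// colour into hole pixels (0 marks a hole, as in the warp sketch above).
// This is exactly where the "cardboard" artefacts creep in.
void fillHoles(std::vector<std::uint32_t>& img, int width, int height) {
    for (int y = 0; y < height; ++y) {
        std::uint32_t last = 0;
        for (int x = 0; x < width; ++x) {
            std::uint32_t& p = img[y * width + x];
            if (p != 0)
                last = p;   // remember the last covered pixel
            else if (last != 0)
                p = last;   // smear it into the hole
        }
    }
}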
 
A better option would be something similar to Forza 3, where you could link up two consoles, but to the same TV, so even if you're rendering twice there is no performance loss, and the base console is still powerful and cheap enough for the masses that don't have a 3D TV. Since 3D will most likely require new TVs, anyone who can afford that could also just buy a second console.
 
A better option would be something similar to Forza 3, where you could link up two consoles, but to the same TV, so even if you're rendering twice there is no performance loss, and the base console is still powerful and cheap enough for the masses that don't have a 3D TV. Since 3D will most likely require new TVs, anyone who can afford that could also just buy a second console.

It's a relatively simple way to do it, but not a great way to do it IMHO - you're definitely throwing away a lot of performance for nothing.
 
It's a relatively simple way to do it, but not a great way to do it IMHO - you're definitely throwing away a lot of performance for nothing.

And for everyone without "yet another special TV", you throw away a ton of performance by requiring games to support 3D. Do we want 2x work per pixel for almost every user just so 5% of the market can get 3D?
 
3D is gonna take off seriously, the way I see it. Sure, many customers don't even have HDTVs yet, but that just means an even greater market for 3D ready TVs ;)

Edit: of course it doesn't mean that every game's gonna waste performance... I think it'll be more like full 1920*1080 for 2D users and dual 960*1080 (possibly with 2xAA) for the stereo users. At that res, upscaling isn't gonna be as ugly as it is with 1024*576 no-AA games nowadays. In fact, most users probably won't notice the difference at all.
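
A quick sanity check of that arithmetic (a trivial sketch):

#include <cstdio>

int main() {
    const long mono   = 1920L * 1080L;      // 2,073,600 shaded pixels
    const long stereo = 2L * 960L * 1080L;  // also 2,073,600
    std::printf("mono %ld, stereo %ld, ratio %.2f\n",
                mono, stereo, double(stereo) / double(mono));
    // Prints a ratio of 1.00: halving the horizontal resolution per eye
    // keeps the total shaded pixel count of a full-HD mono frame.
    return 0;
}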
 
Resolution changes and framerate changes are possible, but then we are backing ourselves into certain corners. What happens to the dev aiming for 720p 30Hz?

As for display sales, that is a hurdle. Just as big is getting technology out that is light, works well with glasses, and doesn't mess up colors. If I have to wear something extra over my glasses, I want an HMD with head tracking. Buying yet another display and having to potentially get 4 or 5 pairs of glasses :devilish:
 
What I meant was: is it possible to effectively render a game in 3D without doubling the workload? If all you're doing is switching the camera angle then, if you're smart about it, how much duplication of the rendering process can you avoid?
If you know that you are rendering 3D, you can make things like shadow map generation take the wider FoV into account and render them only once.
Basically it is only the final screen rendering which is doubled.
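
As a sketch of the frame loop that implies (the function names are placeholders, not a real engine API): view-independent work, including light-space shadow maps, runs once, and only the final camera pass is doubled.

// Hypothetical frame loop: anything that doesn't depend on eye position
// runs once per frame; only the final camera pass is doubled.
struct Scene  {};
struct Camera {};

void updateSimulation(Scene&) {}                 // stub: game logic, animation
void renderShadowMaps(const Scene&) {}           // stub: light-space, eye-agnostic
void renderView(const Scene&, const Camera&) {}  // stub: the only per-eye pass

void renderStereoFrame(Scene& scene, const Camera& left, const Camera& right) {
    updateSimulation(scene);   // once
    renderShadowMaps(scene);   // once - shadows don't care which eye looks
    renderView(scene, left);   // the doubled part
    renderView(scene, right);
}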
 
I still wouldn't gamble billions of dollars on 3D yet, but I still like the console SLI idea. Imagine if you could hook up two consoles, each rendering at 960*1080, to get a full HD picture. Console SLI would do wonders for the longevity of a console generation.
 