Digital Foundry Article Technical Discussion Archive [2010]

In fact, details of how both consoles perform are, and will probably remain, scarce and far from definitive. That's normal: there are big companies involved, and NDAs hide aspects of the architecture and its performance. Marketing concerns (being the most capable console, in terms of processing power, is a useful slogan) reinforce that darkness.

But we have learned a bit over these years. I don't discount the possibility that some of the talk about weaknesses and strengths is very polite marketing, but in any case the truth is not as interesting as the search for it, so I'm happy with the forum as it is now. It's very interesting.

My thoughts about the question itself (is X more capable than Y?) are not clear. I agree that the quality of some games (if we use games as a rule of thumb to measure processing power) is sometimes not apparent. Sandbox games are usually cited as an example of that: they don't look as impressive as other, more closed-oriented titles, and as a result people tend to discard them as the definitive metric. But at the same time, the fact that the two "exclusive" sandbox games of each platform (Crackdown and Infamous) aren't really so impressive leaves me with a reasonable doubt. RDR, GTA, Fallout and so on are multiplatform games, and in some way the compromises between the architectures and the code show through, so it's almost unavoidable that one version of the game will run better.

On the other hand, exclusive, graphics-oriented titles with high production values are somewhat more "stunning" or "impressive" (weird words, I know, very subjective) on PS3, but there's the argument of the budgets and objectives of each IP, with Sony more inclined towards "graphics" and "ownership" and Microsoft letting third parties do their magic.

At the end of the day I think that tech-oriented, "deep" development is a bit more productive on PS3 because of the nature of the architecture itself, while XTS is a powerhouse of fast development cycles with great results and accessible, well-known, well-documented features. The problems with XTS are more of an art-design-and-assets kind of thing, with the load the hardware can handle being well understood; with PS3 the problems are a mix of art and the "original", in some cases rigid, architecture, where the limit of the platform isn't clear, even in the case of the GPU, which some people report as not exploited to its limits because of its complexity and obscurantism from NVidia.

But at the same time, the custom code the SPUs seem to allow can be innovative and rewarding, albeit somewhat "unnecessary" if the hardware were more accessible. Freedom, but a forced one. Very "Sony-ish".

The final word, for me, is that just a few more MB of EDRAM would make XTS definitively superior, and a freaking tiny GPU with a unified shader architecture would change PS3 from a powerful nightmare into a comfortable, and unexplored, platform for parallelised code. Who wins? It doesn't bother me. We learn; that's the interesting part.
 
I think he has an interesting underlying point: games like GTA just don't receive as much love as perhaps they should when talking about overall platform achievement. Even DF has toyed with calling the Uncharted series the most advanced on PS3 at times. I have a soft spot for GoW myself.

Yes, it seems so. I mean, when people talk about, say, sparks in other games with collision detection or simplistic physics, no one mentions that GTA IV has that for all sparks, plus lots of other features; but since it's multiplatform it's put in the back seat. Heck, are these kinds of tech features even named in game tech articles regarding multiplatform games at all, or is it just exclusives that get that treatment?
 
I'm curious as to why the MT Framework engine performs the way it does on the PS3. Even with the mammoth installs it still has some shoddy performance that has been discussed here, not to mention there seems to be no PS3 performance boost even with Capcom tweaking the engine.

Calling it shoddy is really stretching it. A lot. It drops into the mid twenties when a ton of stuff is going on (so does Killzone 2, a game that everyone keeps praising for its kick-ass visuals), and unless you slow it down to a crawl you wouldn't even notice some of the missing particles.
Most of the time the game runs great and looks fantastic.

By the way, I'm playing RE5 on my PC right now (which looks pretty much indistinguishable from the PS3 version during normal gameplay, despite AA and a massive resolution boost) and it still seems to me like the water isn't quite as "impressive" as on the 360.
 
Checking it out...

Aside from a somewhat "cardboard cut-out" look on explosions and the main view weapon, Killzone 3D is an impressive spectacle.

It's going to be interesting to see how games deal with things like that. Making fully 3D explosions, smoke effects, etc. isn't going to be easy.

Because of the way the 3D framebuffer is structured, the tear is mostly restricted to the left eye, but in really heavy scenes it will cascade downwards and encroach into the viewpoint of the right eye.

That's going to be a problem going forward if tearing is always concentrated in one eye. One of the captions mentioned it was a "jarring effect."

Performance improvements are something to which Sony is paying a lot of attention. The aim is to make the process of adding 3D less onerous to developers generally. In our previous Making of PlayStation 3D article we described Sony's tentative work in using a 2D image for one eye, then using a technique involving extrapolation of the depth buffer to create the secondary image, with re-projection used to fill gaps in the image and restore the full stereoscopic effect you'd otherwise be missing.

"You'll see a title in the next 12 months using that," SCEE's senior group director Mick Hocking told us during an E3 update on this new technique for generating 3D imagery on the PS3. "The technology's working. It's early days, it's in the prototype stage still, so we've still got to put it into a full title and work out all the nuances."

Looks like Sony is also working on getting the "fake" 3D up to acceptable levels, à la Crysis 2. The implication being that since there's less of a performance hit, they might push that in the future rather than "real" stereoscopic 3D.
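To make the reprojection idea concrete, here's a toy single-scanline sketch of the "2D image plus depth buffer" approach described in the article: shift each pixel horizontally by a disparity derived from its depth, then fill the resulting holes. The function name, disparity formula, and hole-filling strategy are my own illustrative assumptions, not Sony's actual technique.

```python
def reproject_eye(image, depth, max_disparity=4):
    """Derive a crude second-eye view from one scanline plus per-pixel depth.

    image -- list of pixel values for one scanline
    depth -- list of depths in (0, 1], where 1.0 is the far plane
    Nearer pixels (smaller depth) receive a larger horizontal shift.
    """
    width = len(image)
    out = [None] * width
    for x in range(width):
        # Simple linear disparity model: nearer objects shift more between
        # the eyes. Later writes win here; a real renderer would resolve
        # such occlusions by keeping the nearer pixel.
        shift = int(round(max_disparity * (1.0 - depth[x])))
        tx = x + shift
        if 0 <= tx < width:
            out[tx] = image[x]
    # Hole filling: copy from the nearest filled pixel to the left, standing
    # in for the "re-projection to fill gaps" step the article mentions.
    last = image[0]
    for x in range(width):
        if out[x] is None:
            out[x] = last
        else:
            last = out[x]
    return out
```

The holes appear exactly where the article's gap-filling step is needed: behind the edges of near objects, where the shifted view exposes pixels the first eye never rendered.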

One thing to take away from this is that Sony is dedicated to pushing 3D in games, because their TV business depends on it.

It's more to benefit their TV division than the PS3 division: even with good adoption of 3DTVs, it's still going to be a very, very small fraction of PS3 users who will be able to play the games in 3D. On the other hand, pushing 3D on PS3 is another marketing point when trying to sell an expensive, high-margin 3DTV.

Regards,
SB
 
So I assume everything would look like paper cutouts with the depth-buffer 3D then? It's like comparing Avatar (Killzone 3) vs. Clash of the Titans (Crysis 2), right?
 
Eh? The various methods to derive 3D data from depth values aren't necessarily going to produce cardboard cutouts. Crysis 2 isn't going to look like a pop-up book, for example.

What is a problem is that smoke, explosions, etc. are made with 2D textures (I'm sure my terminology isn't correct). When rendered into a 3D/stereoscopic scene, there's no way to make them 3D; they're still flat 2D objects, hence the cardboard cut-out look. I'm not sure any systems have the power to make volumetric smoke using hundreds of thousands or millions of particles.

Regards,
SB
 
What is a problem is that smoke, explosions, etc. are made with 2D textures (I'm sure my terminology isn't correct). When rendered into a 3D/stereoscopic scene, there's no way to make it 3D,...
Actually smoke and explosions are ideal candidates for the 3D-from-2D method described above. There's no detail or alternative viewpoint to worry about because the whole thing is a big blurry splodge. In theory explosions could create a 2D image with depth, and a 3D view could be derived from it. This should work very well with KZ3 and its deferred renderer, which allows more flexibility than a typical forward renderer. I'm not sure how you'd engineer such a system into a true stereoscopic renderer, but of course in the 3D-from-2D titles it'll be a drop-in effect. The only new thing to work on would be creating a 3D depth map.
 
Actually smoke and explosions are ideal candidates for the 3D-from-2D method described above. There's no detail or alternative viewpoint to worry about because the whole thing is a big blurry splodge. In theory explosions could create a 2D image with depth, and a 3D view could be derived from it. This should work very well with KZ3 and its deferred renderer, which allows more flexibility than a typical forward renderer. I'm not sure how you'd engineer such a system into a true stereoscopic renderer, but of course in the 3D-from-2D titles it'll be a drop-in effect. The only new thing to work on would be creating a 3D depth map.

Fix your quote. ;)

I'm not too sure about that. As it is right now, there's a flat object being used to display what would in reality be a fully 3D transparent object; well, actually, millions of particles forming what the eye interprets as a cohesive three-dimensional object.

When those start interacting with static geometry the disconnect is going to be even more bizarre.

It would be similar to having a 2D sprite representing a ball and then trying to reconstruct a 3D ball, with actual depth, when displayed in 3D. Except while a ball may be relatively easy, since it's a perfect sphere, an irregular smoke blob may not be, and explosions and fire even less so.

I'm not convinced there's a way to convincingly convert those into actual 3D objects.

Regards,
SB
 
It would be similar to having a 2D sprite representing a ball and then trying to reconstruct a 3D ball when displayed in 3D with actual depth. Except while a ball may be relatively easy, it's a perfect sphere, an irregular smoke blob may not be so easy. And explosions and fire even more so.

I'm not convinced there's a way to convincingly convert those into actual 3D objects.
It's not going to be particularly accurate, but it'd be infinitely preferable to clear 2D billboarded sprites! The interaction with other objects would not be a problem in a deferred renderer if the particle effects are rendered to a separate buffer and composited using the depth info for the scene. In fact, depth info for the explosion volume could be stored in the texture, as it's the same for each explosion, just as the sprites are, making it even more straightforward than I first thought.
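A minimal sketch of the compositing step being proposed here: particles live in their own buffer with their own depth, and get blended over the scene only where they sit in front of the geometry. The function name, data layout, and blend formula are assumptions for illustration, not Killzone's actual renderer.

```python
def composite_particles(scene_rgb, scene_depth, part_rgba, part_depth):
    """Per-pixel composite of a particle buffer over a scene buffer.

    scene_rgb   -- list of scene colour values (one float per pixel, for brevity)
    scene_depth -- list of scene depths (smaller = nearer)
    part_rgba   -- list of (colour, alpha) tuples from the separate particle pass
    part_depth  -- list of particle depths (None where no particle was drawn)
    """
    out = []
    for s_col, s_z, (p_col, p_a), p_z in zip(scene_rgb, scene_depth,
                                             part_rgba, part_depth):
        if p_z is not None and p_z < s_z:
            # Particle is in front of the geometry: alpha-blend it in.
            out.append(p_col * p_a + s_col * (1.0 - p_a))
        else:
            # No particle here, or it is occluded by scene geometry.
            out.append(s_col)
    return out
```

The depth comparison is what makes the interaction with static geometry "not a problem": an explosion partly behind a wall is simply clipped by the wall's depth values during the composite.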
 
I was reading the Digital Foundry article on Killzone 3's 3D feature and I was wondering about the resolution.

To achieve stereoscopic 3D on the PS3 they have to sacrifice resolution in order to maintain the framerate. The Digital Foundry analysis of Killzone 3 was based on the Sony conference 3D footage, which is of course a 2D video of what one eye would see. That footage, as expected, shows lower resolution.

Knowing that the whole 3D image is comprised of two images, one for each eye, and assuming that one image shows half of the lines and the other shows the missing lines, is it possible that the complete image the brain receives could be perceived as higher resolution, since it can possibly add up what it sees from each eye?
 
Knowing that the whole 3D image is comprised of two images, one for each eye, and assuming that one image shows half of the lines and the other shows the missing lines, is it possible that the complete image the brain receives could be perceived as higher resolution, since it can possibly add up what it sees from each eye?

Very likely not - I would expect the left/right images to be in-sync rather than interlaced, otherwise the image would get very strange, and you'd probably be able to tell from half the image as well. However, what you lose in vertical resolution, you get back in spatial information, and it may well be that together with the better AA (MLAA in this case), the x-y-z image gives your brain more information to work with and fill in the blanks overall than with a traditional x-y image.

We don't know yet of course whether or not this will be what the final game will be like, or if they do manage to get it up to 720p, but it's an interesting area of research. Sony mentioned before that native resolution wasn't as important as good AA. Killzone 3D might be a good example of this.
 
I was reading the Digital Foundry article on Killzone 3's 3D feature and I was wondering about the resolution.

To achieve stereoscopic 3D on the PS3 they have to sacrifice resolution in order to maintain the framerate. The Digital Foundry analysis of Killzone 3 was based on the Sony conference 3D footage, which is of course a 2D video of what one eye would see. That footage, as expected, shows lower resolution.

Knowing that the whole 3D image is comprised of two images, one for each eye, and assuming that one image shows half of the lines and the other shows the missing lines, is it possible that the complete image the brain receives could be perceived as higher resolution, since it can possibly add up what it sees from each eye?

If right and left were interlaced, then the interlacing would be quite evident when showing only one eye view. Either it would show one eye in full res but with alternating black bars or a condensed image where each line doesn't quite match the previous one.

Regards,
SB
 
It's not going to be particularly accurate, but it'd be infinitely preferrable to clear 2D billboarded sprites! The interaction with other objects would not be a problem in a deferred renderer if the particle effects are rendered to a separate buffer and composited using the depth info for the scene. In fact depth info for the explosion volume could be stored in the texture as it's the same for each explosion, just as the sprites are, making it even more straightforward than I first thought.

I still don't think there's a good way to do it. After all, remember how games that have attempted to display explosions using 3D polygons looked. Not so good. Except now you'd have to take a 2D texture and try to make a very irregular polygon mesh out of it, so suddenly your triangles per scene will explode and you'll have to reduce resolution or scene complexity even further.

I think it's a far trickier problem to solve than some would think. With walls, vehicles, signs, etc., the stuff to recreate for an alternate eye view is fairly predictable and was already in 3D in the first place. Plus you aren't faced with having to recreate anything the player can see.

E.g., in the case of a 3D ball you already have all the player-facing triangles rendered, and it's easier to extrapolate the differences between the right- and left-eye views. With a 2D representation of a ball, you now have to recreate the entire half of the ball facing the player, as well as the right- and left-eye differences, without being able to use the half that's already rendered (as in the original 3D ball example above) to help recreate the alternate eye view.

BTW, I think transparencies (other than explosions, fire and smoke) may face similar problems. Although if they represent things that are already naturally thin, grass and leaves for example, it's not so bad if they have a cardboard cut-out look. Grass could still be problematic if you have a flat texture depicting a blade growing out of the ground and curving toward the player, as well as blades appearing to curve away from the player, all in one texture. But for the short term, I think that's far easier to sort of ignore than explosions and such, which are generally in your face and not just frill.

Regards,
SB
 
If right and left were interlaced, then the interlacing would be quite evident when showing only one eye view. Either it would show one eye in full res but with alternating black bars or a condensed image where each line doesn't quite match the previous one.

Regards,
SB

It is effectively interlaced because it has different pixel information.

KZ3 3D is half resolution on paper, but your brain is getting 1280 different vertical lines (the same pixel variety as 2D 720p) as well as depth information, for more total information.
 
I don't quite agree with that. You're getting half the information (both eyes get the same lines of information), but in exchange you're getting depth information added to those lines.
 
From my experience with 3d on my PC, the 2-dimensional effects like smoke and explosions aren't a very big issue. You can see something is wrong if you are really looking for it, but you can also see the issue without 3d.
 
I also disagree :)

I don't quite agree with that. You're getting half the information (both eyes get the same lines of information), but in exchange you're getting depth information added to those lines.

720P/2 3D (KZ3) gives each eye half the number of lines of information that each eye would get with 720P 2D, but you must remember that all of those are completely different lines of information.

720P 2D has 1280 unique lines.

720P/2 3D has 1280 unique lines.

The difference is that with 720P 2D the 1280 lines are combined before the eyes (on the screen), but with 720P/2 3D the 1280 lines are combined by the brain.
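The counting argument above can be written out directly. This sketch assumes the two 640-column eye views sample alternating, complementary columns of a 1280-column frame; that assumption is exactly what the next posts dispute (a renderer may well give both eyes the same columns).

```python
# Hypothetical column-split model of "720P/2 3D": each eye gets every other
# vertical line (column) of a 1280-wide frame.
FULL_WIDTH = 1280  # columns in a 720p frame

left_eye_cols = set(range(0, FULL_WIDTH, 2))   # even columns ("set a")
right_eye_cols = set(range(1, FULL_WIDTH, 2))  # odd columns  ("set b")

# Each eye sees only half the columns...
assert len(left_eye_cols) == len(right_eye_cols) == 640

# ...but if, and only if, the two sets are complementary, all 1280 unique
# columns reach the viewer between the two eyes, with no overlap.
assert left_eye_cols | right_eye_cols == set(range(FULL_WIDTH))
assert left_eye_cols & right_eye_cols == set()
```

If instead both eyes rendered the same 640 columns (the in-sync case argued below), the union would also be 640 columns, and the brain would get no extra spatial samples, only depth.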
 
I mean vertical lines my friend

I think you're mixing up vertical and horizontal here.

For 720P 2D (KZ2)

left eye: 1280 vertical lines

right eye: 1280 vertical lines (same as left eye)

brain: 1280 unique lines (same as each eye)


For 720P/2 3D (KZ3 style)

left eye: 640 vertical lines (set a)

right eye: 640 vertical lines (set b)

brain: 1280 unique lines (set a + set b) + depth information



But I think I understand what you mean.

I thought he meant effective vertical interlace due to line doubling.
 