Digital Foundry Article Technical Discussion Archive [2011]

Status
Not open for further replies.
Great article, Grandmaster, I really enjoyed it.
BTW, I'm really surprised how rock-solid the 2D mode gameplay is, a great improvement over KZ2, but I'd probably hate 3D mode and couldn't play it much; 20-25 fps almost all the time is just too low for me.


Maybe it's less CPU-intensive while doing background streaming or level loading.

SPUs are great at decoding, so I don't think it's an issue.
 
Great article, Grandmaster, I really enjoyed it.
BTW, I'm really surprised how rock-solid the 2D mode gameplay is, a great improvement over KZ2, but I'd probably hate 3D mode and couldn't play it much; 20-25 fps almost all the time is just too low for me.

What I'm also trying to find out is the sensation of depth and height, e.g., jumping down from a building in 3D mode. I can only imagine the experience is "more rush" (or more vomit-inducing, depending on whether you can take the effect ^_^).


SPUs are great at decoding, so I don't think it's an issue.

They may need the SPUs for other, more critical tasks, e.g., to accelerate loading and set up the next level ASAP.

EDIT: I vaguely remember the Blu-ray drive can support (up to 12 ?) parallel streams/channels. So we can stream dialogue, music, video, and other assets from Blu-ray at the same time. Don't quote me on that though. It's very old info, and my recollection is fragmented.
 
DF, or someone else, might want to think of an objective way to measure 3D effects. Right now, it's rather haphazard and ill-defined. At least come up with a set of in-game scenarios where the 3D effect should help and measure their impact. It may be difficult or imprecise initially, but at least there would be a consistent framework to discuss this sort of thing.

I remember someone (Dolby?) has set up 3D certification standards. What exactly do they measure?
 
Yap… and comparing shots in 2D is somewhat inaccurate because the 3D effect is completely missing. The perceived resolution and depth should be better in 3D mode. Also, dynamic artifacts like pop-in are not noticeable in static screenshots; there should be more of them in 3D mode.

Perceived resolution will still be exactly the same. Thus the overall blurriness, lack of detail, jagged aliasing, etc. will all still be quite evident. The illusion of depth may allow some people to ignore it for the novelty, however.

As to benefits to gameplay, those may come about if you combine the arm motion of throwing a grenade with Move or something like Kinect, but I doubt you'd see similar benefits with a traditional controller.

Regards,
SB
 
Perceived resolution will still be exactly the same. Thus the overall blurriness, lack of detail, jagged aliasing, etc. will all still be quite evident. The illusion of depth may allow some people to ignore it for the novelty, however.

Not quite. I have read claims of increased or decreased perceived resolution in stereoscopic 3D. ^_^
I think Sony also claimed that AA is more important than resolution in 3D presentation.

As to benefits to gameplay, those may come about if you combine the arm motion of throwing a grenade with Move or something like Kinect, but I doubt you'd see similar benefits with a traditional controller.

Nope. With a traditional controller, it's common to show the grenade's throwing arc in the 3D environment. The arc would be much easier to interpret in a 3D presentation; I experienced this problem quite often. It may have little to do with Move or Kinect. Our eyes should be able to judge depth with or without our hands.

A friend working on auto-stereoscopic 3D display ported Doom to the test unit. He believes that he's much better at estimating where the grenade will land on 3D display. He can time his throw better too. At one point, he was mumbling about a special grenade/weapon that explodes in mid-air to damage people in the perimeter. That type of weapon will be better on a 3D display since there is no "landing point" per se. It explodes in mid-air. Proper and quick depth perception may be needed.

EDIT: I suspect airborne objects mixed with ground units may be another area of interest.
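The landing-point estimation being discussed can be made concrete with basic ballistics. This is a purely illustrative sketch (the function name and numbers are mine, not from any game): ignoring drag, the landing distance is quite sensitive to the throw angle and speed, which is exactly what better depth perception helps you judge.

```python
import math

def landing_distance(speed, angle_deg, g=9.81):
    # Horizontal range of a projectile launched from ground level,
    # ignoring air resistance: R = v^2 * sin(2*theta) / g
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / g

# A 5-degree change in throw angle at 15 m/s shifts the landing point
# by roughly a third of a metre:
print(round(landing_distance(15, 40), 1))  # 22.6
print(round(landing_distance(15, 45), 1))  # 22.9
```

On a flat screen, that fraction of a metre is hard to read off a projected arc; with stereoscopic depth, the landing point sits at a perceivably different distance.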
 
Is there a mistake in the article on the alpha effects buffer? Is it really SubSD? That seems odd.

I didn't check to see if Richard made the comparison gallery available, but you can more easily see that, for instance, the bloom effect covers half the distance horizontally, which is in accordance with the resolution drop from 2D to 3D. Considering that fillrate, bandwidth, ALUs, etc. are fixed commodities, it would make sense to scale the post-fx accordingly, since you're drawing things twice. Performance is already a bit unwieldy moving to 3D as is, though the bigger scaling issue is the geometry load; pixel resolution is trivial to scale.
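The fill-rate arithmetic here is easy to sketch. Assuming the commonly reported figures (1280x720 in 2D, 640x720 per eye in 3D; the half-res-per-axis effect buffer is my own illustrative assumption, not a confirmed KZ3 detail), the total pixel count stays constant — it's just drawn twice at half width:

```python
# 2D vs dual-rendered S3D pixel budgets (illustrative arithmetic only).
full_2d = (1280, 720)   # single 2D frame
per_eye = (640, 720)    # half horizontal resolution per eye in 3D

pixels_2d = full_2d[0] * full_2d[1]
pixels_3d = 2 * per_eye[0] * per_eye[1]  # two eyes rendered per frame
print(pixels_2d, pixels_3d)              # 921600 921600: same total fill

# An effect buffer at half resolution in each axis (assumed example)
# scales the same way, which is why a bloom kernel ends up covering
# half the horizontal distance in the 3D framebuffer:
fx_2d = (full_2d[0] // 2, full_2d[1] // 2)  # (640, 360)
fx_3d = (per_eye[0] // 2, per_eye[1] // 2)  # (320, 360)
print(fx_2d, fx_3d)
```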
 
Can anyone explain why games like KZ3 and many others continue to use Bink for video?

As others have noted, it must be related to simultaneous processing while the engine streams game content. Grandmaster has already done a fairly extensive analysis for Final Fantasy XIII since he had access to the files, going so far as to re-encode them, and I don't believe bitrates were the issue (I'll have to double-check). I suppose we won't know for sure until we bug some dev enough about it.

I can't imagine it's a licensing/cost issue or any lack of effort. :s

For example, here's a list of both PS3 and 360 titles that use Cri Middleware. I can vouch for the high video quality from the Halo 3 Limited Edition HD Video documentaries, but I haven't played any of the games listed:

http://www.cri-mw.com/product/adoption/platform/playstation3.html
http://www.cri-mw.com/product/adoption/platform/xbox360.html
 
As others have noted, it must be related to simultaneous processing while the engine streams game content. Grandmaster has already done a fairly extensive analysis for Final Fantasy XIII since he had access to the files, going so far as to re-encode them, and I don't believe bitrates were the issue (I'll have to double-check). I suppose we won't know for sure until we bug some dev enough about it.

I can't imagine it's a licensing/cost issue or any lack of effort. :s

For example, here's a list of both PS3 and 360 titles that use Cri Middleware. I can vouch for the high video quality from the Halo 3 Limited Edition HD Video documentaries, but I haven't played any of the games listed:

http://www.cri-mw.com/product/adoption/platform/playstation3.html
http://www.cri-mw.com/product/adoption/platform/xbox360.html
Both Crysis 1 and 2 use CRI Middleware as well. AFAIK, it's paired with Scaleform Video.
 
The biggest problem with 3D is not the slowdowns, I really did not perceive any when I played it. The low resolution is obviously perceivable, but thanks to MLAA the jaggies are minimized, and the resolution tradeoff is worth it to achieve a proper 3D effect. The biggest problem is being able to play it for more than half an hour at a time.
 
I'd be curious to know if they did user feedback trials to determine which 3D method they should use, i.e. 2D + depth at full resolution vs. half-res per eye.
 
Where is the tearing? There is none in 2D or 3D modes in KZ3.
The performance is close during normal gameplay, according to the article and my personal observations as well.
In the video comparisons you can see tearing on the right side (3D mode) but not on the left side (2D mode).
 
For comparison, a shot of Crysis 2's post-process S3D against Killzone 3's true dual-rendered S3D. Not comparing the two games here, just the methods:

http://dl.dropbox.com/u/12527604/crysis/pictures/Crysis2_S3D.jpg
http://images.eurogamer.net/assets/articles//a/1/3/3/0/0/9/7/3d_1_anaglyph.BMP.jpg

I'm pretty impressed at how well the post-process S3D works, but there are artefacts (as you'd expect) and it doesn't work ideally for some objects. True S3D is better, but not sure whether or not it's worth the resolution drop.
 
It might be as simple as licensing costs; H.264 might cost "too much". Alternatively, it could be that there isn't a good framework for integrating H.264 into a game the way developers want.
 