Image Quality and Framebuffer Analysis for Available/release build Games *Read the first post*

I don't think it's been confirmed, but some people seem to think it's the same, except they used QAA instead, which explains the slightly blurrier image.

Not quite the same... Simply removing the black borders yields a different resolution. Quincunx is making it difficult to determine whether it is rendering 720p but displaying in a smaller window (a la Army of Two) or if it is rendering 1:1 in a 720p window (no scaling); I believe it's the latter, but I'm not 100% sure.
 
Regarding the SF2 resolution, from what I understand the game is a 4:3 game and the "16:9" mode is just a zoomed-in view of it, hence the larger sprites with dodgy scaling. If you switch between 4:3 and widescreen mode you can see how zoomed in it becomes. Oh, and that would also explain the bars on the sides, because they didn't want to zoom in too much.
 
Not quite the same... Simply removing the black borders yields a different resolution. Quincunx is making it difficult to determine whether it is rendering 720p but displaying in a smaller window (a la Army of Two) or if it is rendering 1:1 in a 720p window (no scaling); I believe it's the latter, but I'm not 100% sure.

I'm not 100% sure either. A couple of facts though:

1. The letterboxing top and bottom is *slightly* larger on PS3
2. There is a black border on the left and right on PS3, not on 360.

Point 2 suggests downscaling a la Army of Two. However, this composite shot seems to show that the PS3 image isn't being scaled down by any appreciable degree. So my guess would be that the letterboxing is just ever-so-slightly larger on PS3, but unnoticeable unless you have a 1:1 display, like me...

 
Here's a theory about the weird screen-space ordered-grid dithered shadows in Far Cry 2 on the 360:

The PS3 version obviously has the PCF that is usual for NVIDIA GPUs, which is not readily available on the 360.
What is PCF? When a shadowmap is sampled somewhere in the area between texels (remember, a pixel/texel is not a little square), the GPU does the shadowmap comparison for all four surrounding texels, then filters bilinearly between the four 0/1 results.
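
In rough CPU-side terms, PCF looks like this (a sketch of the idea only - the names, clamping and sampling convention are mine, not anything from either game):

[code]
#include <algorithm>
#include <cmath>
#include <vector>

// A shadowmap is just a grid of depths as seen from the light.
struct ShadowMap
{
    int size;                    // e.g. 1024
    std::vector<float> depth;    // size * size depth values

    float At(int x, int y) const
    {
        x = std::clamp(x, 0, size - 1);
        y = std::clamp(y, 0, size - 1);
        return depth[y * size + x];
    }
};

// 2x2 percentage-closer filtering: do the depth comparison at the four texels
// around the sample point, then blend the four 0/1 results bilinearly.
// Returns a value in [0,1]: 0 = fully shadowed, 1 = fully lit.
float SamplePCF(const ShadowMap& sm, float u, float v, float fragmentDepth)
{
    float x  = u * sm.size - 0.5f;
    float y  = v * sm.size - 0.5f;
    int   x0 = static_cast<int>(std::floor(x));
    int   y0 = static_cast<int>(std::floor(y));
    float fx = x - x0;
    float fy = y - y0;

    auto lit = [&](int px, int py) {
        return fragmentDepth <= sm.At(px, py) ? 1.0f : 0.0f;
    };

    float top    = lit(x0, y0)     * (1.0f - fx) + lit(x0 + 1, y0)     * fx;
    float bottom = lit(x0, y0 + 1) * (1.0f - fx) + lit(x0 + 1, y0 + 1) * fx;
    return top * (1.0f - fy) + bottom * fy;
}
[/code]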

What I think Far Cry 2 does on the Xbox is the following: define a 2x2 array with random small offsets (smaller than one texel in the shadowmap, e.g. 1/1024). When sampling from the shadowmap to shadow a fragment, offset the shadowmap texcoord by one of the offsets in this array depending on the screenspace coords of the fragment. Fully dark areas would remain fully dark, as the offset won't be enough to get the shadowmap sampled outside them; ditto for fully lit areas. At the edges between the two, such a screendoor pattern would appear.
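
A minimal sketch of that, assuming a 1024x1024 shadowmap (the offset values and the screen-space indexing are pure guesses on my part, just to illustrate the technique):

[code]
#include <array>
#include <functional>

// Stand-in for a single point-sampled shadowmap comparison in the pixel shader:
// (u, v, fragmentDepth) -> 1.0f if the fragment is lit, 0.0f if it is shadowed.
using ShadowCompareFn = std::function<float(float, float, float)>;

// Screen-space dithered "fake PCF": each pixel picks one of four small
// (sub-texel) shadowmap offsets based on the low bits of its screen position,
// then does a single hard comparison. Pixels deep inside a shadow (or fully in
// the light) are unaffected, because a sub-texel offset never moves the sample
// out of that region; only pixels at the shadow edge flip between 0 and 1 from
// one screen pixel to the next, which is exactly the ordered-grid screendoor
// pattern seen in the 360 screenshots.
float DitheredShadow(int screenX, int screenY, float u, float v,
                     float fragmentDepth, const ShadowCompareFn& shadowCompare)
{
    constexpr float texel = 1.0f / 1024.0f;   // one shadowmap texel
    static const std::array<std::array<float, 2>, 4> offsets = {{
        {  0.25f * texel,  0.25f * texel },
        { -0.25f * texel,  0.75f * texel },
        {  0.75f * texel, -0.25f * texel },
        { -0.75f * texel, -0.75f * texel },
    }};
    const auto& o = offsets[(screenY & 1) * 2 + (screenX & 1)];
    return shadowCompare(u + o[0], v + o[1], fragmentDepth);
}
[/code]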

This is pretty nifty, in fact, can't wait to try it :)
 
I'm sticking to my original theory for now: the same shadowing method using cheap fake dithered transparency is used, and this is done after AA on 360 and before AA on PS3. The interlacing effect doesn't actually exist, I feel - where it looks like an interlaced (rather than dithered) effect, it's actually two shadows overlapping that make it look interlaced. But I'm looking forward to hearing more theories from the experts, and to assen's experimental findings nevertheless!
 
I'm sticking to my original theory for now: the same shadowing method using cheap fake dithered transparency is used, and this is done after AA on 360 and before AA on PS3. The interlacing effect doesn't actually exist, I feel - where it looks like an interlaced (rather than dithered) effect, it's actually two shadows overlapping that make it look interlaced. But I'm looking forward to hearing more theories from the experts, and to assen's experimental findings nevertheless!

Two shadows overlapping? The tree branch shadows surely are from a single shadow caster, a single twig.

I agree with the "fake dithered" part, but there's no transparency involved in shadowmapping, it's a fake dithered PCF IMHO.
 
Two shadows overlapping? The tree branch shadows surely are from a single shadow caster, a single twig.

I agree with the "fake dithered" part, but there's no transparency involved in shadowmapping, it's a fake dithered PCF IMHO.

I mention transparency because this kind of dithering is often used in lieu of transparency when there's no transparency left in the budget, and I was thinking that they do it to achieve that layered shadow effect for cheap in a similar style.

Are you sure it's a single twig? If I look all over the screenshot, there are definitely a lot of shadows that overlap rather than stemming from a single twig, and these shadows are all over the map, from the bridge right up to the car.
 
The general limit for XBLA games is indeed 350MB. There will be exceptions made on a case-by-case basis, of course. With their biggest memory card being 512MB, subtracting 128MB for the NXE leaves 384MB for profiles and other content. It seems they're cutting it close at 368MB for the game.

AFAIK, there is no limit for PSN.
The XBLA version is 368MB? The PSN version is listed as 303MB.

Does that have to do w/ how games are downloaded/decompressed on PS3 as opposed to 360? Or is the game actually larger on 360? (Sorry if this is off topic; it just seems counter to what I was expecting, even though MS said they increased the limit for the game as a special case.)
 
Regarding the SF2 resolution, from what I understand the game is a 4:3 game and the "16:9" mode is just a zoomed-in view of it, hence the larger sprites with dodgy scaling. If you switch between 4:3 and widescreen mode you can see how zoomed in it becomes. Oh, and that would also explain the bars on the sides, because they didn't want to zoom in too much.

Unless the native 4:3 resolution they used to author the art has 720 vertical lines (or 1080), the art is still stored at a non-native resolution when outputting either 720p or 1080p. In fact, if they did a straight zoom, shouldn't the backgrounds also display this artefact?

It would be interesting to know if the character artefacts also happen @ 4:3. Taking a look at various games from the SF2 series (from World Warrior through Super Hyper), they all use a native resolution of 384x224, which is 16:9.3(3). I would speculate that they re-authored the character art at twice the resolution (768x448), maintaining the same aspect ratio as the original, while the backgrounds/HUD/etc. are stored at 1080p.

It would be nice if someone who had the game could test with different aspect ratios.
 
I've moved the discussion on output quality and rendering choices to a new thread so we can focus this one on examining what games are doing, rather than getting lost in why ;)
 
Some additions to the thread start post.

Frame buffer formats

RGBA8 = 8 bits for Red, 8 bits for Green, 8 bits for Blue, 8 bits for alpha = 32 bits per pixel
FP10 = 10 bits for each of the RGB components, plus 2 bits for alpha = 32 bits per pixel
FP16 = 16 bits for each component = 64 bits per pixel (no support for hardware alpha blending on Xenos)
NAO32 = LogLuv conversion (in-depth explanation) = 32 bits per pixel (no hardware alpha blending)
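
As a rough illustration of what those bits-per-pixel figures mean for colour buffer sizes at 720p, here's a trivial calculation sketch (straight width x height x bytes x samples arithmetic, nothing console-specific, and the names are mine):

[code]
#include <cstdio>

// Colour buffer size in bytes for a given resolution, bytes per pixel and MSAA level.
unsigned long long ColorBufferBytes(unsigned width, unsigned height,
                                    unsigned bytesPerPixel, unsigned samples)
{
    return static_cast<unsigned long long>(width) * height * bytesPerPixel * samples;
}

int main()
{
    const unsigned w = 1280, h = 720;
    const double MB = 1024.0 * 1024.0;
    std::printf("RGBA8/FP10/NAO32 (4 bpp), no AA: %.2f MB\n", ColorBufferBytes(w, h, 4, 1) / MB);
    std::printf("RGBA8/FP10/NAO32 (4 bpp), 2xAA : %.2f MB\n", ColorBufferBytes(w, h, 4, 2) / MB);
    std::printf("FP16 (8 bpp),             no AA: %.2f MB\n", ColorBufferBytes(w, h, 8, 1) / MB);
    std::printf("FP16 (8 bpp),             2xAA : %.2f MB\n", ColorBufferBytes(w, h, 8, 2) / MB);
    return 0;
}
[/code]

That works out to roughly 3.52 / 7.03 / 7.03 / 14.06 MB for the colour buffer alone; depth/stencil comes on top of that.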

You forgot one important format. R11G11B10_FLOAT is a widely used back buffer format (on platforms that support it). It consumes the same amount of memory and bandwidth as the standard RGBA8 and FP10 formats (4 bytes per pixel), and is as fast as FP10 (A2R10G10B10_FLOAT) to sample and to render to. R11G11B10_FLOAT also supports blending and filtering. It's a perfect choice for HDR rendering, as the quality is pretty much comparable to FP16 while the bandwidth and memory usage are halved.

Temporal AA on PS3 (a la Quaz) - odd and even frames are rendered with a half-pixel shift. The current frame is blended with the previous frame to achieve an effect similar to supersample AA for static scenes. In a moving scene, the blending of the odd and even frames produces a persistent blurring of the image.

You can blend the new frame with the last rendered frame (half-pixel shifted offset), but this causes severe image ghosting with moving camera/objects, limiting the technique's usefulness. If the game can achieve a solid 60 fps, it's usually better just to let the eyes do the blending themselves. 60 fps (half-pixel shift every other frame) is enough to fool the human eye and no distracting shimmering can be seen. Also, all TV sets (and monitors) have some pixel lag, so the last frame's pixels will also blend slightly with the new pixels before being sent to the eye. Temporal AA looks good on static or slowly moving scenes, but the antialiasing effect is no longer visible if the camera moves more than a few pixels every frame (if blending is used you just see the ghosting and no AA at all; if no blending is used you just see the scene without AA).
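
A bare-bones sketch of the blend-based variant described above (the jitter values, the 50/50 weight and the names are all made up for illustration):

[code]
#include <cstddef>
#include <cstdint>
#include <vector>

struct Color { float r, g, b; };

// Sub-pixel offset (in pixels) to apply to the projection this frame:
// even frames render at (0, 0), odd frames at (+0.5, +0.5).
void GetTemporalJitter(uint32_t frameIndex, float& dx, float& dy)
{
    dx = dy = (frameIndex & 1) ? 0.5f : 0.0f;
}

// Resolve pass: average the current frame with the previous one.
// With a static camera this approximates 2x supersampling; with a moving
// camera the stale previous-frame pixels are what shows up as ghosting.
void ResolveTemporalAA(const std::vector<Color>& current,
                       const std::vector<Color>& previous,
                       std::vector<Color>& output)
{
    output.resize(current.size());
    for (std::size_t i = 0; i < current.size(); ++i)
    {
        output[i].r = 0.5f * (current[i].r + previous[i].r);
        output[i].g = 0.5f * (current[i].g + previous[i].g);
        output[i].b = 0.5f * (current[i].b + previous[i].b);
    }
}
[/code]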

There are many alternative temporal antialiasing methods that work perfectly with moving cameras/objects. Reverse reprojection is one of them. For each rendered pixel, you calculate the position of the same surface pixel in the last frame by utilising the last frame's model and view matrices (and sample the last frame's back buffer texture at that position). This method is comparable to 2xSSAA, but affects only surfaces (not edges). The technique is most useful in hiding pixel shader based aliasing (caused by various parallax mapping techniques, shadow map sampling, negative LOD bias shimmering, etc.). MSAA techniques do not antialias polygon surfaces at all (leaving all aliasing caused by the pixel shader and texture sampling). Like SSAA, all temporal antialiasing techniques also antialias stencil buffer based effects (stencil shadows, etc.).
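
The core of the reverse reprojection part is just transforming the current pixel's surface position with the previous frame's matrices and reading the old back buffer there. A minimal sketch (the matrix convention, names and rejection test are my own assumptions, not anyone's shipping code):

[code]
#include <array>

struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<float, 16>;     // row-major 4x4 matrix

// Row-major matrix * column vector.
Vec4 Transform(const Mat4& m, const Vec4& v)
{
    return {
        m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
        m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
        m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
        m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w,
    };
}

// Reverse reprojection: given the world-space position of the surface under the
// current pixel and the previous frame's view-projection matrix, compute the
// texture coordinate where that surface was in the previous frame's back buffer.
// The caller samples the previous frame at (prevU, prevV) and blends it with the
// current shading result, rejecting the sample when this returns false (or when
// the stored depths don't match, e.g. due to disocclusion).
bool ReprojectToPreviousFrame(const Vec4& worldPos, const Mat4& prevViewProj,
                              float& prevU, float& prevV)
{
    Vec4 clip = Transform(prevViewProj, worldPos);
    if (clip.w <= 0.0f)
        return false;                        // was behind the previous camera
    float ndcX = clip.x / clip.w;            // [-1, 1]
    float ndcY = clip.y / clip.w;
    prevU = ndcX * 0.5f + 0.5f;              // [0, 1]
    prevV = 1.0f - (ndcY * 0.5f + 0.5f);     // flip Y for texture addressing
    return prevU >= 0.0f && prevU <= 1.0f && prevV >= 0.0f && prevV <= 1.0f;
}
[/code]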
 
Hi sebbbi, when I have time I'll make some additions in the stickied thread for your explanations. Thank you for stopping by!

Hmm... it just so happens we were having a conversation about other framebuffer formats recently, specifically the R11G11B10 one you mention. From what I understood, the alpha/fourth component may not even be utilized in many cases, as it would normally describe the transparency/translucency/opaqueness of the object. There are other uses for the bits, as other devs have described with their LogLuv HDR encoding, but the 2-bit alpha component of A2R10G10B10 seems to be a mystery to some in terms of the compromise. Are 8 bits for alpha a waste in RGBA8?

How much extra range results from the extra bits associated with the R and G channels then? FP10 has been described more as a medium dynamic range format.

It also makes me wonder if we'll see a similar re-appropriation of the FP16 bits, a la FP10 etc., in future hardware if devs don't need that separate 4th channel so much, while at the same time getting ridiculously high dynamic range (of course assuming hardware doesn't shift to LogLuv encoding, where it's all different). Then again, FP16 is already a huge enough dynamic range... what other uses would there be to go beyond R16G16B16? Without being a waste, what can the 16 bits of the 4th channel be used towards (same question as earlier)? :?:


(Note to myself: I ought to clarify in the written guide that the temporal AA part was for the games observed thus far, not a general implementation as I wrote it to appear :oops: )
 
Guys, you've mentioned that FC2 on Xbox 360 is 2xAA. I'm pretty sure it's 4xAA. On my set there's barely a jaggy to be seen. I've traded in the game now, so I can't comment further.
 
BTW Gamespot has done their annual PS3/360 graphics comparison article:
http://au.gamespot.com/features/6201700/index.html?tag=topslot;title;1

The discussion is mostly non-technical (e.g. comparing SCIV: "The Xbox 360, however, has superior flowers"), but there are a few interesting tidbits.

However, in the COD5 comparison they mention that 'the Xbox 360 maintains its antialiasing advantage, which you can see in the disappearing antenna in the second set of shots', which incidentally is something I pointed out a while back.

PS3: http://img511.imageshack.us/img511/383/codps3006rx4.jpg
360: http://img409.imageshack.us/img409/6815/cod360006gi9.jpg
Look at the boot and lower leg of the dead soldier in the foreground. This pic also seems to show that the 360 has slightly better AA (perhaps through implementation); notice the slightly more complete tree branches next to the burning building in the top right of the screen. Though it could just be LOD or a one-off thing.

As both versions use 2xMSAA, is it down to differences in sample patterns between Xenos and RSX, or is it just a quirk that occurs in certain scenes?
 
Hmm... it just so happens we were having a conversation about other framebuffer formats recently, specifically the R11G11B10 one you mention. From what I understood, the alpha/fourth component may not even be utilized in many cases, as it would normally describe the transparency/translucency/opaqueness of the object. There are other uses for the bits, as other devs have described with their LogLuv HDR encoding, but the 2-bit alpha component of A2R10G10B10 seems to be a mystery to some in terms of the compromise. Are 8 bits for alpha a waste in RGBA8?

The alpha channel of the back buffer (render target) is rarely used for blending. All the most common blending techniques can be implemented with only the source alpha (object's texture alpha channel). Destination alpha is only useful for some multipass rendering tricks (and most can be implemented more efficiently with the stencil buffer).

For custom color representations (LogLuv, RGBE8, etc.), the alpha channel is used to store the color data component (exponent, color multiplier or the fourth color channel needed for the color space). When writing custom color representations to the back buffer, the alpha channel is written directly just like the RGB channels, and no alpha blending is used. This also means that source alpha cannot be used either (as the pixel shader alpha output is used for the alpha write and contains color data instead of alpha). When you are rendering with a custom color space, the only way to get alpha channel blending to work is to ping-pong between 2 buffers (sampling the pixel color from one and rendering to the other). This slows down performance considerably (the game has to copy the whole backbuffer to a texture for every alpha blended object).
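
To make the "alpha channel stores colour data" point concrete, here's a small sketch of an RGBE8-style encoding (the bias of 128 and the rounding are arbitrary choices for illustration, not any particular engine's):

[code]
#include <algorithm>
#include <cmath>
#include <cstdint>

struct RGBE8 { uint8_t r, g, b, e; };

// Encode a linear HDR colour into an 8888 target: the shared exponent goes into
// the alpha channel, which is why normal alpha blending is no longer possible.
RGBE8 EncodeRGBE8(float r, float g, float b)
{
    float maxC = std::max({r, g, b});
    if (maxC < 1e-6f)
        return {0, 0, 0, 0};

    int   exponent = static_cast<int>(std::ceil(std::log2(maxC)));
    float scale    = std::ldexp(1.0f, -exponent);            // 2^-exponent

    auto quantize = [&](float c) {
        return static_cast<uint8_t>(std::clamp(c * scale, 0.0f, 1.0f) * 255.0f + 0.5f);
    };
    return { quantize(r), quantize(g), quantize(b),
             static_cast<uint8_t>(exponent + 128) };         // assumes exponent fits in [-128, 127]
}

// Decode back to linear HDR (e.g. in the tone mapping pass).
void DecodeRGBE8(const RGBE8& p, float& r, float& g, float& b)
{
    float scale = std::ldexp(1.0f, static_cast<int>(p.e) - 128);
    r = (p.r / 255.0f) * scale;
    g = (p.g / 255.0f) * scale;
    b = (p.b / 255.0f) * scale;
}
[/code]

Blending two such values in the ROPs would operate on the raw channels and produce garbage, which is why alpha blending has to be emulated by ping-ponging between two buffers as described above.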

How much extra range results from the extra bits associated with the R and G channels then? FP10 has been described more as a medium dynamic range format.

Each channel has 5 exponent bits, and the mantissas are 6-6-5 bits. The maximum range is the same as with 16-bit floats (same number of exponent bits). The precision is however not as good (10 mantissa bits vs 6/5), and 11/10-bit floats store only positive values.

16bit float:
- 5 bit exponent
- 10 bit mantissa
- 1 bit sign
- Highest value: 2^(31-15) = 65536 (both positive and negative)

11bit float:
- 5 bit exponent
- 6 bit mantissa
- no sign bits
- Highest value: 2^(31-15) = 65536 (only positive)
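
To make that layout concrete, here's a small sketch of packing a regular 32-bit float into the unsigned 11-bit (5-bit exponent, 6-bit mantissa) format; it truncates rather than rounds and ignores denormals, so it's illustrative only, not what a GPU does bit-for-bit:

[code]
#include <cstdint>
#include <cstdio>
#include <cstring>

// Pack an IEEE 754 single into an unsigned 11-bit float (5-bit exponent,
// 6-bit mantissa, no sign). Extra mantissa bits are truncated and denormals
// are flushed to zero.
uint32_t FloatToFloat11(float value)
{
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof(bits));

    if (value <= 0.0f)                       // negative values and zero go to zero
        return 0;

    int32_t  exponent = static_cast<int32_t>((bits >> 23) & 0xFF) - 127;  // unbias
    uint32_t mantissa = (bits >> (23 - 6)) & 0x3F;                        // keep top 6 bits

    if (exponent < -14)                      // too small: flush to zero
        return 0;
    if (exponent > 15)                       // too large: clamp to largest finite value
        return (30u << 6) | 0x3F;

    return (static_cast<uint32_t>(exponent + 15) << 6) | mantissa;
}

int main()
{
    std::printf("1.0     -> 0x%03X\n", FloatToFloat11(1.0f));      // 0x3C0: exponent 15, mantissa 0
    std::printf("65024.0 -> 0x%03X\n", FloatToFloat11(65024.0f));  // 0x7BF: largest finite value
    return 0;
}
[/code]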

It also makes me wonder if we'll see a similar re-appropriation of the FP16 bits, a la FP10 etc., in future hardware if devs don't need that separate 4th channel so much, while at the same time getting ridiculously high dynamic range (of course assuming hardware doesn't shift to LogLuv encoding, where it's all different). Then again, FP16 is already a huge enough dynamic range... what other uses would there be to go beyond R16G16B16? Without being a waste, what can the 16 bits of the 4th channel be used towards (same question as earlier)? :?:

R9G9B9E5 (9 bits mantissa per channel, 5 bits shared exponent) is also a good format for HDR rendering. But so far no hardware supports it as a render target (only as a texture).

R9G9B9E5 float:
- 5 bit exponent (shared with all channels)
- 9 bit mantissa per channel
- no sign bits
- Highest value: 2^(31-15) = 65536 (only positive)

This is a very good 4 byte (32 bit) format for HDR rendering, as the quality is almost equal to FP16 (only one mantissa bit fewer), and there are no unnecessary sign bits either. The shared exponent is not a problem: if a channel is considerably brighter than the others, the reduced precision of the other channels cannot be seen by the human eye (the intense blooming of a super-bright pixel color channel dominates the pixel color).
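
And a decode sketch for that shared-exponent layout, assuming the three mantissas sit in the low 27 bits and the exponent in the top 5 (the actual bit order in hardware may differ):

[code]
#include <cmath>
#include <cstdint>

// Unpack a 32-bit word laid out as 9-9-9 mantissas plus a 5-bit shared exponent
// (R9G9B9E5 style: no implicit leading 1, exponent bias 15).
void UnpackRGB9E5(uint32_t packed, float& r, float& g, float& b)
{
    uint32_t rm = packed & 0x1FF;
    uint32_t gm = (packed >> 9) & 0x1FF;
    uint32_t bm = (packed >> 18) & 0x1FF;
    int exponent = static_cast<int>((packed >> 27) & 0x1F);

    // Each channel is a 9-bit fraction scaled by the shared power of two.
    float scale = std::ldexp(1.0f / 512.0f, exponent - 15);
    r = rm * scale;
    g = gm * scale;
    b = bm * scale;
}
[/code]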
 
The XBLA version is 368MB? The PSN version is listed as 303MB.

Does that have to do w/ how games are downloaded/decompressed on PS3 as opposed to 360? Or is the game actually larger on 360? (Sorry if this is off topic; it just seems counter to what I was expecting, even though MS said they increased the limit for the game as a special case.)
Well, I bought the PS3 version and installed it: it's 303MB. So no decompression of the DL there.

The same artifacts (upscale?) that grandmaster capped are present in the PS3 version. It looks virtually identical to the XBLA version. I can't see them in the 4:3 mode (although it does seem, when I look very closely, that the same almost-dithered look around the outline is there sometimes); no idea if that's because it's downscaled or if that's the version it's upscaled from. My eye for detail isn't good enough to tell. :???:

I was really hoping for the 1080p Capcom had promised. I didn't think there would be any hardware limitations precluding it. I really hope this wasn't due to MS limiting the number of MB, though that would seem to run counter to them specifically allowing over 350MB as an exception for this game (unless their caving in and allowing it came too late in the development cycle? Seems unlikely. :cry:)

I'm a little disappointed; I was expecting something more along the lines of the PSN SSF2HDR wallpaper. Still love the game though. :smile:
 
Made another interesting discovery regarding SF2HD Remix.

Both shots were taken from a 720p 26" Bravia:

http://www3.telus.net/public/dhwag/Cammy720P.jpg

http://www3.telus.net/public/dhwag/Cammy1080i.jpg

As you can see, 1080i just looks better, with fewer artifacts and more detail

(you'd have to force 1080i to get 1080i mode)

I was always curious about how a 720p TV is able to display 1080i when there are clearly not enough pixels

Somehow 1080i does seem to provide a better picture when the game is running natively at 1080p

I get similar results from GT5P as well
 