The other thread was getting too big for its own good, and as has been requested often enough, here is an easy-to-find compilation of discovered resolutions.
I or other mods may update this thread as time goes on. Here is how resolutions can be determined:
http://forum.beyond3d.com/showpost.php?p=1070774&postcount=273
http://forum.beyond3d.com/showpost.php?p=1070972&postcount=282
http://forum.beyond3d.com/showpost.php?p=1071006&postcount=284
http://forum.beyond3d.com/showpost.php?p=1071084&postcount=292
http://forum.beyond3d.com/showpost.php?p=1065791&postcount=225
http://forum.beyond3d.com/showpost.php?p=1065280&postcount=222
http://forum.beyond3d.com/showpost.php?p=1167507&postcount=29
In order to reduce the number of people asking about hardware scaling, please read the following:
http://forum.beyond3d.com/showpost.php?p=1022643&postcount=77
http://forum.beyond3d.com/showpost.php?p=1146396&postcount=22
http://www.beyond3d.com/content/articles/16/
In summary:
360
- Xenos is the hardware scaler, not HANA/ANA, which are just the video output chips dealing with digital/analog output to the various connections made with the display. -Dave Baumann
- AVIVO takes more samples for scaling, so it is expected that the scaling is better. [strike](the algorithm also appears to be some form of bicubic, not a simple bilinear)[/strike] Update: see below.
- Hardware scaling to the output resolution chosen in the dashboard is [strike]automagically performed[/strike].
- In upscaling to 1080p, some games may first scale to 720p in software before the GPU scales to a 1080p output. e.g. RR6, SCIV, VF5, CoD4, H3, TR:L...
- Some games let the GPU handle the scaling to 1080p directly from the arbitrary resolution. e.g. MotoGP 06, PGR3, ESIV:O...
- "lanczos is just one of the selectable filters. Ultimately each game 'could' opt to use a different filter/sample count combination (though I suspect most don't bother)." -Fafalada
PS3
- For whatever reason, the PureVideo scaler in the G7x-derived GPU (RSX) is gimped to horizontal scaling of specific front buffer resolutions. See B3D Article
- As of the January 2007 PS3 SDK, only the following front buffer resolutions are supported for hardware scaling to 1920x1080: 960x1080, 1280x1080, 1440x1080, and 1600x1080.
- Games whose back buffers are rendered at strange resolutions must upscale the front buffer (at a cost of extra memory) to either 1280x720 (720p output) or to one of the acceptable 1080p resolutions. (Note: native resolution = back buffer resolution, front buffer = image for displaying/hardware scaling)
- ShootMyMonkey mentions scaling support for a pixel height of 576. There may be more new settings available.
Back-Buffer(s) = Pixels * FSAA Depth * Rendering Colour Depth (may include multiple render targets for deferred rendering techniques)
Z-Buffer = Pixels * FSAA Depth * Z Depth (usually 32-bit depth)
Front-Buffer(s) = Pixels * Output Colour Depth (this is what you see, almost always resolved to 8-bit per component rather than 10-bit or 16-bit)
Total = Back-Buffer(s) + Z-Buffer + Front-Buffer(s)
For Xenos, the back buffer and Z-buffer must fit within the 10 MiB of eDRAM (10*1024*1024 = 10,485,760 bytes) to avoid tiling.
Update Dec. 3 2008, grabbed from the XNA documentation on MSDN (http://msdn.microsoft.com/en-us/library/bb447675.aspx):
"Render targets must be padded to certain dimensions based on the multisample mode. The padding required for 32-bits-per-pixel formats is 80×16 for 1× antialiasing, 80×8 for 2× antialiasing, and 40×8 for 4× antialiasing."
This may explain ShootMyMonkey's information on Tomb Raider: Underworld's back buffer (1024x576) and depth buffer (1040x576) dimensions, though it may also be tied to a particular rendering technique the developers are using.
For Xenos,
e.g. 1024x600, 2xMSAA, single render target FP10, 32-bit depth/Z-buffer
Given the 2xMSAA above, the horizontal resolution must be a multiple of 80 and the vertical resolution a multiple of 8. Thus 1024 is "padded" to 1040, while 600 is already a multiple of 8.
back buffer + Z = 1040*600*2*(32/8 + 32/8) = 9,984,000 bytes ≈ 9.52 MiB, which fits within the 10 MiB of eDRAM without tiling.
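For those who want to check the arithmetic, a minimal Python sketch of the above - the padding granularities are from the MSDN quote, while the 4-byte colour and 4-byte Z per sample are assumptions matching the FP10 + 32-bit depth example:
[code]
# eDRAM budget check for Xenos, following the formulas above.
EDRAM_BYTES = 10 * 1024 * 1024  # 10 MiB

PAD = {1: (80, 16), 2: (80, 8), 4: (40, 8)}  # MSAA level -> (x, y) granularity

def pad_up(value, granularity):
    return ((value + granularity - 1) // granularity) * granularity

def edram_footprint(width, height, msaa, colour_bytes=4, z_bytes=4):
    w = pad_up(width, PAD[msaa][0])
    h = pad_up(height, PAD[msaa][1])
    return w * h * msaa * (colour_bytes + z_bytes)

size = edram_footprint(1024, 600, msaa=2)  # 1024 pads up to 1040
print(size, "bytes:", "tiling needed" if size > EDRAM_BYTES else "fits")
# -> 9984000 bytes: fits
[/code]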
G-Buffers/Deferred Shading/Lighting etc.
In the case of deferred shading/lighting renderers with G-Buffers, the single framebuffer expands to Multiple Render Targets (MRTs). For example, Killzone 2 makes use of 4 MRTs plus the depth/stencil buffer (Z-buffer), i.e. four "frame buffers" worth of memory and memory bandwidth consumption. Modern post-DX10-class hardware supports up to 8 MRTs, and 32-bit and 64-bit formats can be mixed between targets.
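As a rough back-of-envelope (the exact Killzone 2 formats aren't documented here, so four 32-bit targets plus a 32-bit depth/stencil buffer at 1280x720 is an assumption):
[code]
# Rough G-buffer footprint for a KZ2-style setup; format sizes are assumptions.
width, height = 1280, 720
mrts, bytes_per_target, depth_bytes = 4, 4, 4

gbuffer_bytes = width * height * (mrts * bytes_per_target + depth_bytes)
print(f"{gbuffer_bytes / (1024 * 1024):.1f} MiB")  # -> 17.6 MiB, before any MSAA
[/code]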
HDRR & Formats
The RGB colour space ties brightness to the colour - the magnitude of the colour channels. With FP16, the representable range expands greatly over integer formats thanks to the floating point representation: 1 sign bit, a 5-bit exponent, and a 10-bit mantissa. Integer formats are limited to uniform steps over their binary range: INT8 -> 2^8 = 256 levels, INT16 -> 2^16 = 65,536 levels. But perceived brightness is non-linear: compare a step from 5 to 6 in brightness versus 200 to 201 - it's not the same. Floating point formats, whose step size grows with magnitude, rectify that to a degree.
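A quick way to see this precision behaviour, assuming numpy's half-precision support:
[code]
import numpy as np

# np.spacing gives the gap to the next representable value. FP16 steps grow
# with magnitude: fine steps in the darks, coarse steps in the highlights,
# roughly matching perception. INT8 steps are uniform everywhere.
for v in [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]:
    print(f"FP16 step near {v:>7}: {float(np.spacing(np.float16(v))):.6f}")

print(f"INT8 step anywhere (normalised): {1 / 255:.6f}")
[/code]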
With LUV encoding, colours are separated into chrominance and luminance - the absolute colour value and the brightness. A very high dynamic range is realized, and the lighting accumulation can actually be more accurate (when combining multiple lights, for instance) than with RGB.
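A hedged sketch of the idea - the matrix is the standard sRGB-to-XYZ one, and real formats (NAO32, LogLuv) differ in their exact constants, encoding curve, and bit packing:
[code]
import numpy as np

# LUV-style separation: chrominance (x, y) is low dynamic range and fits in
# 8 bits per component, while luminance is stored at high precision (here
# log-encoded).
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def encode(rgb):
    X, Y, Z = RGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    s = X + Y + Z + 1e-9
    return X / s, Y / s, np.log2(Y + 1e-9)  # chrominance (x, y), log luminance

def decode(x, y, log_y):
    Y = 2.0 ** log_y
    X, Z = Y * x / y, Y * (1 - x - y) / y
    return np.linalg.solve(RGB_TO_XYZ, [X, Y, Z])

print(decode(*encode([0.2, 5.0, 40.0])))  # round-trips an HDR colour
[/code]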
Other formats do exist, such as R11G11B10 [strike]on Xbox 360[/strike] on DX10+ hardware, as sebbbi explains:
sebbbi said: "It consumes the same amount of memory and bandwidth as the standard RGBA8 and FP10 formats (4 bytes per pixel), and is as fast as FP10 (A2R10G10B10_FLOAT) to sample and to render to. R11G11B10_FLOAT also supports blending and filtering. It's a perfect choice for HDR rendering as the quality is pretty much comparable to FP16 and the bandwidth and memory usage are halved."
Regarding the notable absence of the alpha channel in this format:
sebbbi said: "The alpha channel of the back buffer (render target) is rarely used for blending. All the most common blending techniques can be implemented with only the source alpha (the object's texture alpha channel). Destination alpha is only useful for some multipass rendering tricks (and most can be implemented more efficiently with the stencil buffer).
For custom color representations (LogLuv, RGBE8, etc.), the alpha channel is used to store a color data component (the exponent, color multiplier, or fourth color channel needed for the color space). When writing custom color representations to the back buffer, the alpha channel is directly written to just like the RGB channels, and no alpha blending is used. This also means that source alpha cannot be used either (as the pixel shader alpha output is used for the alpha write and contains color data instead of alpha). When you are rendering with a custom color space, the only way to get alpha channel blending to work is to ping-pong between 2 buffers (sampling the pixel color from one and rendering to the other). This slows down performance considerably (the game has to copy the whole backbuffer to a texture for every alpha blended object)."
Aspect Ratio
Some of you might be wondering how games like Call of Duty 4 (1.71:1), Halo 3 (1.8:1), or Metal Gear Solid 4 (4:3) can have rendering resolutions that are not 16:9 aspect ratio. All you need to learn about is anamorphic widescreen. The image is squeezed into the rendered resolution but is then stretched to the proper 16:9 presentation.
An example of this squeezing can easily be seen in any Doom 3 engine game (Quake 4/Prey/Quake Wars). If you have one of them handy on your PC (the latest version will do), try setting your resolution to 960x720 and in the console type r_aspectratio 1 for 16:9 or 2 for 16:10. All you'll see is the in-game view being squeezed/stretched horizontally. On the flip side, if you render the game at 1280x720 while still in 4:3 mode, the Mancubus just might be the fattest enemy you'll ever see. You can help it lose some weight by setting the game to 16:9. And of course, the isomorphic 1280x720 rendition will offer more image clarity than the anamorphic 960x720.
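The stretch factor is just the ratio of the display aspect to the render aspect; a minimal sketch:
[code]
# How much each pixel gets widened when an anamorphic frame is shown at 16:9.
def stretch_factor(render_w, render_h, display_aspect=16 / 9):
    return display_aspect / (render_w / render_h)

print(stretch_factor(1024, 600))  # ~1.04: a CoD4-style 1.71:1 frame
print(stretch_factor(960, 720))   # ~1.33: a 4:3 frame shown at 16:9
print(stretch_factor(1280, 720))  # 1.0: already 16:9, square pixels
[/code]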
Multisample AA - multiple geometry/sub-sample points (reddish squares in the images below) with a particular weighting, surrounding the texture sample point (green square in the images below), are used to determine the colour of the pixel being rendered. Sample positions can differ between AMD/nVidia. As RSX is based on the G70 architecture, the following sample patterns should apply. In the case of Xenos, it would not be unreasonable to assume that it uses the same patterns as ATI has used in the past (R300+).
The result for 2xMSAA is that there may be one intermediary shade in between polygon edge steps; one sample is found to be within one polygon (e.g. colour A), and the second sample is found in another polygon (e.g. colour B). If both sample points have equal weightings, the resultant pixel would be 50% colour A, 50% colour B. Obvious results are obtained when a polygon edge bisects the shortest imaginary line connecting the two geometry sample points. Hence, 2xMSAA for G70 will look slightly different to 2xMSAA for R520 (see sample positions below).
For the case of 4xMSAA, there may be more shades in between polygon edge steps due to the higher number of geometry samples, resulting in a smoother transition between steps. With equal weightings to each sub-sample, there will be three intermediary shades.
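To make the shade counting concrete, a toy resolve with equal sample weights:
[code]
# Toy resolve of one edge pixel: N geometry samples allow N-1 intermediary
# shades between colour A and colour B.
def resolve(samples_in_b, total_samples, colour_a=0.0, colour_b=1.0):
    t = samples_in_b / total_samples
    return (1 - t) * colour_a + t * colour_b

print([resolve(k, 2) for k in range(3)])  # 2xMSAA: [0.0, 0.5, 1.0]
print([resolve(k, 4) for k in range(5)])  # 4xMSAA: [0.0, 0.25, 0.5, 0.75, 1.0]
[/code]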
The easiest way to see MSAA level is to have a straight-edged object overlapping another object/background with a high contrast in colours between the two e.g. black object against a white background. Beware of JPG compressed screenshots where pixels near high frequency components (edges) can be distorted.
Quincunx AA on PS3 - two geometry sample points are used just like 2xMSAA (so the same storage cost), but the resolve also uses 3 samples belonging to neighbouring pixels (regardless of any polygon edge) to the right of and below the original texture sample point (see the sample pattern image for clarity). The result is a blurring of the entire image, but higher perceived polygon AA. Consider a texture with lots of high frequency content - lots of different colour patterns. The current pixel's two geometry sample points may indicate the pixel is entirely within one polygon, yet the three neighbouring sub-samples are still accounted for in the final pixel, hence the overall image blur.
Quincunx sample pattern
http://www.beyond3d.com/images/reviews/GF4/gf4samplepattern.jpg
Comparison between 2xMSAA/QAA & blur filters
http://upsilandre.free.fr/images/Quincunx.jpg
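For illustration, the Quincunx resolve can be approximated as a 5-tap filter over the final pixels (a simplification - see the comments):
[code]
import numpy as np

# Quincunx resolve approximated as a 5-tap filter: centre tap at weight 1/2,
# four diagonal neighbours at 1/8 each (the commonly cited weights). Each
# pixel is treated as one stored sample here for simplicity; real Quincunx
# filters over the pixel's two stored samples plus three from neighbours.
image = np.zeros((4, 6))
image[:, 3:] = 1.0  # hard black/white vertical edge

p = np.pad(image, 1, mode="edge")
resolved = (0.5 * p[1:-1, 1:-1]
            + 0.125 * (p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:]))
print(resolved)  # the edge gains 0.25/0.75 columns; detailed areas would blur
[/code]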
PS3 DMC4 Temporal AA - odd and even frames are rendered with a half-pixel shift. The current frame is blended with the previous frame to achieve a similar effect to super-sample AA for static scenes. In a moving scene, the blending of the odd and even frames produces a persistent blurring of the image. However, this is also advantageous for the edges of alpha-to-coverage primitives, because traditional multisampling does not work on them*, only super sampling.
PS3 MGS4 Temporal AA - possibly disabled when the camera moves or during quick player movement to avoid ghosting/excessive blurring.
*see transparency AA or adaptive AA settings on appropriate PC hardware.
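A one-dimensional toy of the scheme, with illustrative numbers only:
[code]
import numpy as np

# Odd/even frames use a half-pixel shift and are averaged. A static edge
# gains an intermediate shade (SSAA-like); an edge that moved between frames
# leaves a half-intensity smear - the ghost.
def render_row(edge_pos, jitter, width=8):
    centres = np.arange(width) + 0.5 + jitter
    return (centres >= edge_pos).astype(float)  # 1 past the edge, else 0

static = 0.5 * (render_row(3.75, 0.0) + render_row(3.75, 0.5))
moving = 0.5 * (render_row(3.75, 0.0) + render_row(6.75, 0.5))
print(static)  # [0. 0. 0. 0.5 1. 1. 1. 1.] - one softened step
print(moving)  # [0. 0. 0. 0. 0.5 0.5 1. 1.] - the smear where the edge moved
[/code]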
Update Dec 3 2008:
Sebbbi's take on TAA as seen so far:
sebbbi said: "You can blend the new frame with the last rendered frame (half-pixel shifted offset), but this causes severe image ghosting with a moving camera/objects, limiting the technique's usefulness. If the game can achieve a solid 60 fps, it's usually better just to let the eyes do the blending themselves. 60 fps (half-pixel shift every other frame) is enough to fool the human eye and no distracting shimmering can be seen. Also, all TV sets (and monitors) have some pixel lag, so the last frame's pixels will also blend slightly with the new pixels before being sent to the eye. Temporal AA looks good on static or slowly moving scenes, but the antialiasing effect is no longer visible if the camera moves more than a few pixels every frame (if blending is used you just see the ghosting and no AA at all; if no blending is used you just see the scene without AA).
There are many alternative temporal antialiasing methods that work perfectly on moving cameras/objects. Reverse re-projection is one of them. For each rendered pixel, you calculate the position of the same surface pixel in the last frame by utilising the last frame's model and view matrices (and sample the last frame's back buffer texture at that position). This method is comparable to 2xSSAA, but affects only surfaces (not edges). The technique is most useful in hiding pixel shader based aliasing (caused by various parallax mapping techniques, shadow map sampling, negative LOD bias shimmering, etc.). MSAA techniques do not antialias polygon surfaces at all (leaving all aliasing caused by the pixel shader and sampling). Like SSAA, all temporal antialiasing techniques also antialias stencil buffer based effects (stencil shadows, etc.)."
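A bare-bones sketch of the reverse re-projection lookup for one pixel (names and matrices are illustrative, not any engine's actual API):
[code]
import numpy as np

# A real version runs per-pixel in a shader and must reject samples that
# were occluded or off-screen in the previous frame.
def reproject_uv(world_pos, prev_view_proj):
    clip = prev_view_proj @ np.append(world_pos, 1.0)  # previous clip space
    ndc = clip[:2] / clip[3]                           # perspective divide
    return 0.5 * ndc + 0.5                             # NDC [-1, 1] -> UV [0, 1]

# Usage sketch: blend this frame's shading with last frame's colour buffer
# sampled at the reprojected position (a 2xSSAA-like result on surfaces):
#   uv = reproject_uv(world_pos, last_frame_view_proj)
#   colour = 0.5 * (current_colour + sample(prev_frame, uv))
[/code]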
Black Levels & Output
Xbox 360
Standard = 16-235
Intermediate
Expanded = 0-255
PS3
Limited Range = 16-235
Full Range = 0-255
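The two ranges map linearly onto each other; a mismatch at either end of the chain is what crushes or washes out blacks:
[code]
# Linear mapping between full (0-255) and limited/standard (16-235) ranges.
def full_to_limited(v):
    return round(16 + v * (235 - 16) / 255)

def limited_to_full(v):
    return round((v - 16) * 255 / (235 - 16))

print(full_to_limited(0), full_to_limited(255))   # 16 235
print(limited_to_full(16), limited_to_full(235))  # 0 255
[/code]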
----------------------------
UE3.0 etc. & the "missing" MSAA - AA is apparent in certain cases, and it's possible the rendering pipeline or the use of post-processing effects interferes with the MSAA resolve - e.g. drawing effects after the MSAA resolve, or HDR tone-mapping occurring after the MSAA resolve, leaving edges without AA.
Consider an AA'd frame that is resolved before the HDR tone-mapping is applied. The resolve averages sub-samples over colour ranges far exceeding the final displayable range, so the blended edge pixels no longer land between their tone-mapped neighbours and the smooth edge gradient is lost.
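Concretely, with a stand-in Reinhard operator (any real tone-mapping operator shows the same effect):
[code]
# Two HDR sub-samples straddle a polygon edge. Resolving (averaging) before
# tone-mapping yields a very different edge pixel than tone-mapping first.
def tonemap(x):                # stand-in Reinhard operator: x / (1 + x)
    return x / (1.0 + x)

bright, dark = 50.0, 0.05      # HDR values either side of the edge

print(tonemap((bright + dark) / 2))           # ~0.962: edge nearly as bright
print((tonemap(bright) + tonemap(dark)) / 2)  # ~0.514: a proper halfway shade
[/code]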
----------------------------
RSX FYI
500MHz Core/650MHz Mem
http://game.watch.impress.co.jp/docs/20060925/3d_tgs.htm
---------------------------
Alpha to Coverage (A2C) (AlStrong - do contradict when feasible please... only want the best info - May 31 2010)
This technique converts the alpha component output from the pixel shader into a coverage mask over the MSAA samples. Regular alpha-tested texels are either visible or not visible (0 or 1, hence the pixel aliasing). With A2C and MSAA applied, each fragment covers a number of samples proportional to its alpha, so the resolved pixel gets an intermediate value. The higher the MSAA level, the closer the result is to being "correct". With 2xMSAA or no MSAA, the dither pattern used to fake extra transparency levels is most readily noticed in still shots. Higher MSAA can hide that effect.
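A minimal sketch of the conversion, omitting the per-pixel dither that real hardware adds to fake extra levels:
[code]
# Alpha-to-coverage: the shader's alpha selects how many of the N MSAA
# samples are covered; the resolve then averages them into an in-between
# pixel value.
def alpha_to_coverage(alpha, samples=4):
    covered = round(alpha * samples)
    return (1 << covered) - 1  # coverage mask with `covered` low bits set

for alpha in (0.0, 0.3, 0.55, 0.8, 1.0):
    mask = alpha_to_coverage(alpha)
    print(f"alpha={alpha:>4}: mask={mask:04b} resolved={bin(mask).count('1') / 4}")
[/code]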
The reason is mainly to avoid sorting alpha transparencies back-to-front, which can be a costly and imperfect set of operations, impacting the framerate and memory bandwidth consumption much more heavily (alpha blend)... assuming one is already utilizing MSAA. MSAA works on polygon edges, but the polygon edges of alpha-tested geometry do not conform to the leaves, wires, and other fine details you actually see.
Take for example... a 4x4 inch glass plate with coloured leafy designs and edges within (the alpha test determines what is transparent glass and what is not). MSAA itself does not look at the details within the 4x4 inch "polygon" plate; it looks at the 4x4 inch glass as a polygon or quad.
Quincunx AA has the "benefit" of already blurring all screen-space pixels, which may lead to smoother looking alpha textures such as foliage and wire fences.
So-called Transparency AA or Adaptive AA on PC will selectively supersample the alpha-tested texture, or apply a less effective MSAA to it, to produce better results - e.g. render the 4x4 inch glass at 2x2 the resolution and downsample...
------------------------
Notice: devs, please correct when possible. PM either me or Shifty Geezer. Thank You.