Alternative AA methods and their comparison with traditional MSAA

Has anyone considered using subpixel rendering?

It works for anti-aliasing fonts (ClearType), so I don't see why it wouldn't work for other types of graphics.

Yes, I considered it, and started prototyping something using it a year or two ago. I never got that far, though. It would be interesting now, with the wave of post-AA techniques, to investigate whether it could be used to improve edge sharpness while keeping edges smooth. I think for techniques like GBAA, where I know exactly where the edge is, it would be fairly easy to implement. I might give that a whirl if I get time.
 
Has anyone considered using subpixel rendering?

It works for anti-aliasing fonts (ClearType), so I don't see why it wouldn't work for other types of graphics.
I also did some experiments a few years ago. Subpixel rendering works best for images that contain all color components. White and black work best, making text rendering the best case for the technique. Pure (single-channel) RGB colors do not work that well, but most games do not use pure colors that much.

We have been developing mainly for consoles for the last few years. Subpixel rendering requires knowledge of the subpixel layout, and unfortunately there are a lot of different TV sets available (with different panel technologies and subpixel layouts). Subpixel rendering also requires 1:1 native rendering, but most TV sets apply overscan compensation by default (scaling the image up a bit). Current consoles are not capable of rendering high-end game graphics at 1080p (for Full HD TVs) and thus require upscaling. Unfortunately, all these limitations make subpixel rendering a really marginal technique for console games.

I got a Galaxy Note last week, and I have been pondering subpixel rendering and various chroma optimizations since. The device has a 1280x800 PenTile display (alternating RG, GB subpixel layout). The human eye is much more sensitive to luminance than to chroma, and green affects perceived luminance more than red and blue combined. Thus the PenTile display (at 285 DPI) looks flawless (except for some corner cases, which of course I tested right away when I got the device). Most of the new graphics hardware (AMD GCN, NVIDIA Fermi, PowerVR) has scalar SIMD architectures, so it would be more efficient to calculate only luminance at the target resolution, while chroma could be rendered at a lower resolution. Most video/image compression algorithms use lower-precision chroma without problems, so it could be useful for games as well.
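The "green contributes more than red and blue combined" claim is easy to check numerically; a minimal sketch, assuming the Rec. 709 luma weights:

```python
# Rec. 709 luma: Y = 0.2126 R + 0.7152 G + 0.0722 B.
# Green alone carries more perceived luminance than red and blue
# combined, which is the property PenTile's RG/GB layout (and
# half-resolution chroma rendering) relies on.
KR, KG, KB = 0.2126, 0.7152, 0.0722

assert abs((KR + KG + KB) - 1.0) < 1e-9  # weights sum to one
print(KG, KR + KB)   # ~0.7152 vs ~0.2848 -> green dominates
print(KG > KR + KB)  # True
```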
 
I'm not so sure about that. Maybe for photorealistic games, but the vivid colours you can get in games (and not in real life) tend to suffer badly at the hands of lossy image compression. Something like snooker, or a cartoony interface, will show colour bleeding. Maybe at half res for chroma it'd work okay, but it's not an avenue I'd be particularly in favour of exploring unless absolutely necessary. I'm one of those who wishes DVDs/BRDs were used for better movie quality rather than all the extras.
 
I'm not so sure about that. Maybe for photorealistic games, but the vivid colours you can get in games (and not in real life) tend to suffer badly at the hands of lossy image compression. Something like snooker, or a cartoony interface, will show colour bleeding. Maybe at half res for chroma it'd work okay, but it's not an avenue I'd be particularly in favour of exploring unless absolutely necessary. I'm one of those who wishes DVDs/BRDs were used for better movie quality rather than all the extras.
Lower chroma resolution isn't the cause of the visible artifacts in Blu-ray video compression. JPEG at 100% quality also stores chroma at 2x2 lower resolution, and most people cannot see any difference from the uncompressed data. See here for a comparison: en.wikipedia.org/wiki/File:Colorcomp.jpg

I would be perfectly happy if we got Blu-ray image quality in our games (at 1080p resolution).
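For reference, the storage saving of that JPEG-style 2x2 chroma subsampling is easy to work out; a back-of-envelope sketch, assuming a 4:2:0-style layout:

```python
# Sample count for 4:2:0-style chroma subsampling (as in JPEG/Blu-ray):
# luma at full resolution, Cb and Cr each at 2x2 lower resolution.
w, h = 1920, 1080
full = 3 * w * h                               # R, G, B all at full res
subsampled = w * h + 2 * (w // 2) * (h // 2)   # Y full, Cb/Cr quarter

print(subsampled / full)  # 0.5 -> half the samples, little visible loss
```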
 
Lower chroma resolution isn't the cause of the visible artifacts in Blu-ray video compression. JPEG at 100% quality also stores chroma at 2x2 lower resolution, and most people cannot see any difference from the uncompressed data. See here for a comparison: en.wikipedia.org/wiki/File:Colorcomp.jpg

I would be perfectly happy if we got Blu-ray image quality in our games (at 1080p resolution).

Maybe in motion it's not as noticeable, but in these still images it's plainly obvious (to me).
 
Lower chroma resolution isn't the cause of the visible artifacts in Blu-ray video compression.
I know. It's just an example of how I feel quality is compromised where it shouldn't be.
JPEG at 100% quality also stores chroma at 2x2 lower resolution, and most people cannot see any difference from the uncompressed data. See here for a comparison: en.wikipedia.org/wiki/File:Colorcomp.jpg
In most cases JPEG isn't dealing with the vibrancy and clarity of CG graphics, though. I've created some simple designs before that are killed by JPEG - thank goodness for PNG! At 1080p, a half-res chroma component probably wouldn't look too bad. That's still 720p, and the result should look sharp on all but the largest of screens.

...

Okay, I just tested this by taking a couple of game images (including a Team Fortress .png), splitting the channels, and undersampling all but lightness. The result is impressive: no hugely detectable blurring. It probably wouldn't work too well with things like player-name text in a multiplayer game, but those should be rendered in a UI pass anyway.

Can luminance be AA'd separately to the same effect (AA the luminance, render the rest of the channels undersampled)? In the case of a deferred renderer, the albedo could be undersampled, etc., with just the luminance getting due care.
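The experiment above can be sketched roughly like this: a toy NumPy reconstruction with a synthetic image standing in for the screenshots, Rec. 709 weights assumed. The point it demonstrates is that undersampling chroma leaves luminance untouched, while undersampling everything does not:

```python
import numpy as np

KR, KG, KB = 0.2126, 0.7152, 0.0722  # Rec. 709 weights (an assumption)

def to_ycc(rgb):
    """Split RGB in [0,1] into luma Y and chroma Cb, Cr."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = KR * r + KG * g + KB * b
    return y, (b - y) / (2 * (1 - KB)), (r - y) / (2 * (1 - KR))

def to_rgb(y, cb, cr):
    """Inverse of to_ycc."""
    r = y + 2 * (1 - KR) * cr
    b = y + 2 * (1 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return np.stack([r, g, b], axis=-1)

def half_res(c):
    """Crude 2x2 box downsample followed by nearest-neighbour upsample."""
    d = 0.25 * (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2])
    return d.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(0)
img = rng.random((128, 128, 3))      # stand-in for a game screenshot

y, cb, cr = to_ycc(img)

# (a) undersample every channel -> luma detail is lost
all_half = np.stack([half_res(img[..., i]) for i in range(3)], axis=-1)
# (b) undersample chroma only -> luma is reconstructed exactly
chroma_half = to_rgb(y, half_res(cb), half_res(cr))

err_a = np.abs(to_ycc(all_half)[0] - y).max()
err_b = np.abs(to_ycc(chroma_half)[0] - y).max()
print(err_a, err_b)  # err_b is ~0: luminance survives untouched
```

The `half_res` box filter is deliberately crude; a real renderer would use proper filtering, but the luma-preservation property is independent of the filter choice.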
 
Maybe in motion it's not as noticeable, but in these still images it's plainly obvious (to me).
It depends on who is watching. I personally can't see the difference between best-quality (100%) JPEG 2000 or JPEG XR (HD Photo) and an uncompressed image (without zooming, of course).

All subpixel antialiasing techniques degrade chroma quality in order to improve perceived resolution (= luminance quality). If you take a screen capture and zoom in on ClearType-antialiased text, you will see chroma bleeding at the text edges, but the text looks perfectly good when viewed without zooming. Subpixel antialiasing and image compression are basically based on the same idea (the average human eye is pretty bad at detecting small chroma changes). Both image compression and text antialiasing compensate for the degraded chroma by modifying neighboring pixels (to average out the chroma difference).
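A minimal sketch of the idea, assuming an RGB-stripe panel: coverage is computed at 3x horizontal resolution and each triple of samples drives the R, G and B subpixels of one pixel. (Real ClearType also runs a filter over the result to tame the fringes; that step is omitted here.)

```python
import numpy as np

def subpixel_aa(coverage_3x):
    """ClearType-style mapping for black ink on a white background.

    coverage_3x: (h, 3*w) array in [0,1], 1 = fully covered by the glyph.
    Each consecutive triple of horizontal samples becomes the R, G, B
    intensities of one output pixel (RGB-stripe layout assumed).
    """
    h, w3 = coverage_3x.shape
    return 1.0 - coverage_3x.reshape(h, w3 // 3, 3)  # white minus ink

# A vertical edge that lands mid-pixel at 3x resolution: the boundary
# pixel gets unequal per-channel intensities -- exactly the chroma
# "bleeding" you see when zooming into ClearType text.
cov = np.zeros((1, 9))
cov[0, 4:] = 1.0
print(subpixel_aa(cov))
```

Here the left pixel comes out pure white, the right pure black, and the boundary pixel has only its leftmost (red) subpixel lit, producing the characteristic colour fringe at the edge.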
 
Just a note: the recently released ARMA 2: OA 1.60 and Take On Helicopters 1.03 support native FXAA 3.11.

We are also looking into SMAA and some others :) (FXAA 4.0 looks promising).

But if anyone has good tips :) (hint: I know Humus's)...
 
Has anyone ever implemented DX10.1 or DX11 selective subsample processing in a high-profile title with deferred shading and MSAA? All this talk about the performance impact of multisampling without numbers is getting on my nerves ... it's a poor excuse to stick to half-assed solutions on the PC (the PC not being worth the effort to do it right because of sales volume is a better excuse, but if that's the case I'd rather hear it outright).
 
Has anyone ever implemented DX10.1 or DX11 selective subsample processing in a high-profile title with deferred shading and MSAA? All this talk about the performance impact of multisampling without numbers is getting on my nerves ... it's a poor excuse to stick to half-assed solutions on the PC (the PC not being worth the effort to do it right because of sales volume is a better excuse, but if that's the case I'd rather hear it outright).

Battlefield 3 does it with compute shader "rescheduling", which is definitely the most modern way to do it. There should be plenty of benchmarks available.

Really, I think the biggest issue with MSAA in a modern engine is scalability with regard to implementing and maintaining separate MSAA paths for various rendering features. MSAA doesn't just affect lighting... you also have to account for it in your G-Buffer pass, fog, SSAO, particles, post-processing, etc. It has a non-trivial impact on engineering time, and makes a drop-in solution like FXAA a lot more appealing. Memory usage can also be a concern... a typical G-Buffer can consume nearly 200MB for 1920x1080 w/ 4xMSAA.
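For what it's worth, the ~200MB figure checks out with a hypothetical but plausible G-Buffer layout; the exact target formats below are my assumption, not a statement about any particular engine:

```python
# Back-of-envelope check of the "nearly 200MB" G-Buffer figure at
# 1920x1080 with 4x MSAA, assuming a hypothetical layout of three
# RGBA8 targets, one RGBA16F target, and a D24S8 depth/stencil buffer.
w, h, msaa = 1920, 1080, 4
bytes_per_sample = 3 * 4 + 8 + 4   # 3 x RGBA8 + 1 x RGBA16F + D24S8 = 24 B

total = w * h * msaa * bytes_per_sample
print(total / 1e6)  # ~199 MB -- "nearly 200MB" indeed
```

Fatter layouts (e.g. four RGBA16F targets) push this well past 280MB, which is part of why 8x MSAA on deferred renderers is so costly.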
 
Has anyone ever implemented DX10.1 or DX11 selective subsample processing in a high-profile title with deferred shading and MSAA? All this talk about the performance impact of multisampling without numbers is getting on my nerves ... it's a poor excuse to stick to half-assed solutions on the PC (the PC not being worth the effort to do it right because of sales volume is a better excuse, but if that's the case I'd rather hear it outright).

For our SMAA paper we measured an average 8x MSAA cost of 5.4 ms across various titles. Where the GPU was underused, the cost was zero; on high-profile games the cost was as high as 7.7 ms, which is almost half of the frame time of a 60 fps game.

As of today, 8x MSAA on high-profile deferred games is just not feasible, because the memory and performance required to run it are just ridiculous (I have first-hand information about this =)
 
I don't typically benchmark 8x MSAA because, frankly, current GPUs are not optimized for it. There's a massive cliff beyond 4x in a lot of situations, so that's really the most interesting place to benchmark.
 
Plus you have CSAA or the AMD equivalent (EQAA), which IMO is a much better option than straight 8x MSAA. Although I don't think CSAA works with deferred MSAA... could be wrong?
 
Has anyone ever implemented DX10.1 or DX11 selective subsample processing in a high-profile title with deferred shading and MSAA? All this talk about the performance impact of multisampling without numbers is getting on my nerves ... it's a poor excuse to stick to half-assed solutions on the PC (the PC not being worth the effort to do it right because of sales volume is a better excuse, but if that's the case I'd rather hear it outright).

There will be a new paper about deferred MSAA presented at I3D 2012 by Marco Salvi:

Marco Salvi (@marcosalvi): "@renderwonk it's about making msaa + deferred shading more efficient (faster and with lower storage) without sacrificing msaa image quality"
 
Although I don't think CSAA works with deferred MSAA... could be wrong?
Any AA method could be made to work, but with varying overhead and difficulty. I imagine you could probably do better (quality-wise) than CSAA if implemented in software; it may not make sense to do a direct implementation. Marco and Kiril's I3D paper is one example.

Indeed, though increasing the MSAA level to get better coverage information is not a particularly efficient way to scale. You almost certainly want something on the order of 4x "real" MSAA/super-sampling available with a good pattern, but beyond that you may want to start lossily compressing samples (CSAA or similar). I'd love it if 16x could be the baseline myself though ;)
 