Subpixel rendering

sebbbi

Veteran
I was playing around with various methods of implementing antialiasing for our deferred renderer (in DirectX 9), and at the same time pondering how to implement ClearType-style subpixel rendering in our font rendering algorithm (in a pixel shader). Then I thought: why not render the whole scene at subpixel accuracy and solve both the antialiasing and the font rendering problems simultaneously?

On LCD screens, the pixel layout is as follows:
[RGB][RGB][RGB][RGB] - 4 separate RGB pixels (3 channels each)

But we can also represent it like this:
[R][G][B][R][G][B][R][G][B][R][G][B] - 12 single-channel "greyscale" pixels (one screen subpixel each)

When we separate the subpixels like this, we get 3x horizontal resolution, and only need to generate one "greyscale" color channel for each pixel (the red channel for x%3==0 pixel columns, green for x%3==1 columns and blue for x%3==2 columns). Of course we need to calculate the normal vector, depth buffer value, texture coordinates, etc. for each pixel separately (we cannot share the same value across adjacent RGB channels like we normally do), and we have to sample RGB textures for each pixel (and filter out the unused channels - unless we also separate the input textures into 3 different A8 textures, each containing the separate R, G or B data).
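
For illustration only, here is a minimal sketch of that per-column channel selection as a ps_3_0 pixel shader (VPOS requires ps_3_0; the texture name and the lack of any lighting are assumptions to keep it short, not our actual shader):

sampler2D DiffuseTex : register(s0);   // RGB material texture

float4 SubpixelPS(float2 uv : TEXCOORD0,
                  float2 vpos : VPOS) : COLOR0   // VPOS needs ps_3_0
{
    float3 color = tex2D(DiffuseTex, uv).rgb;

    // Which member of the RGB triplet this column is: 0 = R, 1 = G, 2 = B.
    float sub = fmod(vpos.x, 3.0);

    // Build a mask that keeps exactly one channel.
    float3 mask = float3(sub < 0.5, abs(sub - 1.0) < 0.5, sub > 1.5);

    // One "greyscale" value per pixel: the channel this column drives.
    float lum = dot(color, mask);
    return float4(lum, lum, lum, 1.0);
}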

The human eye doesn't resolve the separate subpixels. This example shows one white "pixel" moving across the screen in subpixel steps. A dot represents a black subpixel, and R, G and B represent fully lit subpixels (of each color).

...RGB
..BRG.
.GBR..
RGB...

This way we can move the white dot on the screen with subpixel accuracy (3x horizontal resolution). The same applies to a polygon scanline of any length (as long as it's longer than a pixel): you can cut off the scanline at subpixel precision and the human eye doesn't notice it. So by rendering the scene at 3x horizontal resolution with one subchannel per pixel, we can effectively increase the screen resolution 3x in the x direction, and this technique is considerably cheaper than rendering all the channels at 3x resolution and downsampling the result (supersample antialiasing). The quality should also be better than standard downsampling (if we expect the same sort of quality that subpixel font renderers achieve).
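
The final resolve could then look something like this sketch (assuming point sampling and that the scene was rendered into a 3x-wide single-channel buffer; "SubpixelBuf" and "InvSubpixelWidth" are illustrative names):

sampler2D SubpixelBuf : register(s0);   // 3x screen width, one channel
float InvSubpixelWidth;                 // 1.0 / (3 * screen width)

float4 ResolvePS(float2 uv : TEXCOORD0) : COLOR0
{
    // With point sampling, uv lands on the middle (green) subpixel of
    // this pixel's triplet; red and blue sit one subpixel to either side.
    float r = tex2D(SubpixelBuf, uv - float2(InvSubpixelWidth, 0)).r;
    float g = tex2D(SubpixelBuf, uv).r;
    float b = tex2D(SubpixelBuf, uv + float2(InvSubpixelWidth, 0)).r;
    return float4(r, g, b, 1.0);
}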

Has anyone here experimented with a similar subpixel rendering technique? Are there any potential issues I should be aware of?
 
Hi,

Search the forums for "cleartype subpixel" -> "show posts"; I think there were some relevant posts on this topic before.
 
While you're at it, don't assume vertical subpixels - I use a pivoting monitor in portrait orientation, and my pixels are horizontal :)
 
I implemented subpixel rendering as described above. It works OK, but there is noticeable color bleeding at some edges.

I also implemented the color bleeding elimination algorithm described in this article: http://www.grc.com/cttech.htm. It correctly eliminates all color bleeding, but it also makes the image greyscale, as it's designed for single-color font rendering.

I have been experimenting with various custom color bleeding reduction algorithms, but all of them seem to mess up the image's color balance. A possible idea would be to render only the luminance at subpixel resolution (the human eye is much more sensitive to luminance), do a color bleeding reduction pass on the subpixel-precision image, and then colorize the result with a screen-resolution color image.
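
Roughly what that last idea could look like (just a sketch, untested; the buffer names and the idea of rescaling the low-resolution color by a luminance ratio are my assumptions):

sampler2D SubpixelLum : register(s0);   // 3x width: filtered subpixel luminance
sampler2D ColorBuf    : register(s1);   // normal resolution RGB color

float4 ColorizePS(float2 uv : TEXCOORD0) : COLOR0
{
    float  lum   = tex2D(SubpixelLum, uv).r;   // per-subpixel luminance
    float3 color = tex2D(ColorBuf, uv).rgb;    // per-pixel color

    // Rescale the low-resolution color so its luminance matches the
    // high-resolution value (Rec. 601 weights; max() avoids divide by zero).
    float colorLum = dot(color, float3(0.299, 0.587, 0.114));
    return float4(color * (lum / max(colorLum, 0.0001)), 1.0);
}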

I don't understand this bit:
[RGB][RGB][RGB][RGB] - 4 separate RGB pixels (3 channels each)

It just means that each pixel is treated as a completely separate entity. In traditional rendering engines, each pixel is rendered separately: you don't think about its surrounding subpixels when you determine the color of each pixel.
 
Wouldn't you need to "displace" pixels if necessary? If you have an image at 3x the resolution but don't take into account that each of those pixels only displays a primary color, then you're in trouble unless you rasterize greyscale pictures.
Think of a texture with red/green/blue stripes, rasterized so that each sampling point falls in a different stripe than its neighbours. If you sample the red stripe for a green subpixel, you only get the masked green part - i.e. nothing. So if the texture is shifted so that subpixels line up with sampling points of another color, you will always get black? Is that maybe where your color bleeding is coming from, or have you already solved that?

If you rendered full RGB pixels (instead of 1 component), you could then go through each subpixel, pick the corresponding component of the 3 pixels and calculate the final color (average, maximize, I don't know which would fit best). That way you don't completely lose information through masking like in the above example.
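
Something like this, maybe (just a sketch; it assumes point sampling and a 3x-wide full-RGB buffer, and the names are made up):

sampler2D FullRgbBuf : register(s0);    // 3x screen width, full RGB
float InvSubpixelWidth;                 // 1.0 / (3 * screen width)

float4 PickComponentPS(float2 uv : TEXCOORD0) : COLOR0
{
    // The three full-RGB samples covered by this physical pixel
    // (uv lands on the middle one with point sampling).
    float3 p0 = tex2D(FullRgbBuf, uv - float2(InvSubpixelWidth, 0)).rgb;
    float3 p1 = tex2D(FullRgbBuf, uv).rgb;
    float3 p2 = tex2D(FullRgbBuf, uv + float2(InvSubpixelWidth, 0)).rgb;

    // Each subpixel takes the matching component of the sample under it,
    // so nothing is lost to masking during rasterization. Averaging or
    // maximizing across p0/p1/p2 per component would be the variants
    // mentioned above.
    return float4(p0.r, p1.g, p2.b, 1.0);
}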
 
Yes, I know about this color undersampling issue. Subpixel rendering techniques have the best effect when all color channels are used (best seen on pure white pixels); you cannot get any advantage from subpixel rendering on pure red, green or blue pixels at all. This color undersampling is also the main source of the color bleeding in greyscale images: if a white (or any greyscale) polygon scanline is short enough and its length is not divisible by 3, there is an uneven amount of red, green and blue components, making the scanline "glow" slightly in the wrong color at the edge with the extra subpixels.

This is the filtering algorithm used in ClearType: http://research.microsoft.com/~jplatt/cleartype/sid2000.pdf. It blends the two adjacent subpixels into each subpixel (a 33%, 33%, 33% box filter at the subpixel level). As the nearest subpixels are of different colors (an RGB triplet each time, just in a different order), the image becomes greyscale. I have experimented with different filter kernels (20%, 60%, 20%, etc.), but there is no way around the issue that the pixel color becomes less colorful (more greyscale) with a filter like this. The more color you want to preserve, the more color bleeding you get. It's not an issue with font rendering, as the font is a single color.
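
As a sketch, the filter pass over the 3x-wide buffer looks roughly like this (weights in a constant so the 20/60/20 variant is a one-line change; the names are again illustrative):

sampler2D SubpixelBuf : register(s0);   // 3x screen width, one channel
float InvSubpixelWidth;                 // 1.0 / (3 * screen width)

static const float3 Weights = { 1.0 / 3.0, 1.0 / 3.0, 1.0 / 3.0 };

float4 FilterPS(float2 uv : TEXCOORD0) : COLOR0
{
    // Average each subpixel with its left and right neighbours.
    float left   = tex2D(SubpixelBuf, uv - float2(InvSubpixelWidth, 0)).r;
    float center = tex2D(SubpixelBuf, uv).r;
    float right  = tex2D(SubpixelBuf, uv + float2(InvSubpixelWidth, 0)).r;

    float filtered = dot(float3(left, center, right), Weights);
    return float4(filtered, filtered, filtered, 1.0);
}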
 