I was playing around with various methods of implementing antialiasing for our deferred renderer (DirectX 9), while also pondering how to add ClearType-style subpixel rendering to our font rendering algorithm (in the pixel shader). Then I thought: why not render the whole scene at subpixel accuracy and solve both the antialiasing and the font rendering problem at once?
On LCD screens, the pixel layout is as follows:
[RGB][RGB][RGB][RGB] - 4 separate RGB pixels (3 channels each)
But we can also represent it like this:
[R][G][B][R][G][B][R][G][B][R][G][B] - 12 single channel "grayscale" pixels (one screen subpixel each)
When we separate the subpixels like this, we get 3x horizontal resolution, and only need to generate one "grayscale" color value for each pixel (the red channel for x%3==0 pixel columns, green for x%3==1 columns and blue for x%3==2 columns). Of course we need to calculate the normal vector, depth, texture coordinates, etc. for each subpixel column separately (we cannot share the same values across adjacent RGB channels like we normally do), and we have to sample RGB textures for each column (and discard the unused channels - unless we also split the input textures into three separate A8 textures, each containing the R, G or B data).
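As a minimal CPU-side sketch of the channel mapping described above (helper names are my own, not from any API):

```cpp
#include <cassert>

// Sketch: with the standard RGB stripe layout, output pixel column x
// covers exactly one screen subpixel, and x % 3 tells us whether that
// subpixel is the R, G or B element of a physical pixel.
enum Channel { CH_RED = 0, CH_GREEN = 1, CH_BLUE = 2 };

Channel channelForColumn(int x)
{
    return static_cast<Channel>(x % 3);
}

// The shader then writes its single "grayscale" intensity into only
// that channel of physical pixel x / 3; the other two channels come
// from the adjacent subpixel columns.
int physicalPixelForColumn(int x)
{
    return x / 3;
}
```

In a pixel shader you would derive the same selection from the fragment's screen-space x coordinate, e.g. by building a per-column channel mask.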
The human eye doesn't resolve the separate subpixels. This example shows one white "pixel" on the screen: a dot represents a dark subpixel, and R, G and B represent fully lit subpixels of each color.
...RGB
..BRG.
.GBR..
RGB...
This way we can move the white dot across the screen with subpixel accuracy (3x horizontal resolution). The same applies to a polygon scanline of any length (as long as it's longer than a pixel): you can cut the scanline off with subpixel precision and the human eye doesn't notice. So by rendering the scene at 3x horizontal resolution with one subpixel channel per output pixel, we effectively triple the screen resolution in the x direction, and this is considerably cheaper than rendering all three channels at 3x resolution and downsampling the result (supersample antialiasing). The quality should also be better than standard downsampling (if we can expect the same sort of quality that subpixel font renderers achieve).
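The diagram above can be reproduced with a small sketch (hypothetical helper, assuming the standard RGB stripe layout): a "white dot" is 3 consecutive subpixels lit with their own colors, starting at an arbitrary subpixel column.

```cpp
#include <cassert>
#include <string>

// Sketch: render one white dot that is 3 subpixels wide, starting at
// subpixel column 'start', into a row of 'pixels' physical pixels.
// Physical pixel px contributes its R, G, B subpixels at columns
// px*3+0, px*3+1, px*3+2. A subpixel inside [start, start+3) is lit
// with its own color; everything else stays dark ('.').
std::string renderDotRow(int start, int pixels)
{
    const char names[3] = { 'R', 'G', 'B' };
    std::string row;
    for (int column = 0; column < pixels * 3; ++column)
    {
        bool lit = (column >= start && column < start + 3);
        row += lit ? names[column % 3] : '.';
    }
    return row;
}
```

Calling this with start = 3, 2, 1, 0 over two physical pixels yields exactly the four rows of the diagram, showing the dot sliding by one-third of a pixel per step.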
Has anyone here experimented with a similar subpixel rendering technique? Any potential issues I should be aware of?