CT is based on the ability to render subpixel effects. This works on LCDs because of the known, fixed distribution of RGB subpixels in vertical stripes, and the predictable response of those subpixels to particular color values. To see what happens when the pattern CT thinks you have differs from the actual pattern, just flip between the BGR and RGB layout settings.
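To make the mechanism concrete, here is a minimal sketch (hypothetical, not the actual ClearType pipeline): glyph coverage is sampled at 3x horizontal resolution, and each consecutive triple of coverage values drives the R, G, B subpixels of one output pixel. Swapping to BGR with the same coverage shows how a mismatched layout assumption produces the color-fringe artifact. Real implementations also filter across subpixels to suppress fringes, which this omits.

```python
def subpixel_row(coverage, order="RGB"):
    """Map 3x-resolution glyph coverage onto subpixels of one pixel row.

    coverage: list of values in 0.0-1.0, length = 3 * output pixels.
    Returns a list of (R, G, B) tuples for black text on white.
    """
    pixels = []
    for i in range(0, len(coverage), 3):
        triple = coverage[i:i + 3]
        if order == "BGR":          # wrong layout assumption: channels swap,
            triple = triple[::-1]   # producing the color-fringe artifact
        # black-on-white: full coverage drives that subpixel to 0
        pixels.append(tuple(round(255 * (1 - c)) for c in triple))
    return pixels

# A glyph stem edge falling between pixel boundaries:
cov = [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]
print(subpixel_row(cov, "RGB"))
print(subpixel_row(cov, "BGR"))  # same coverage, wrong colors
```

Note that the edge lands at subpixel granularity, one third of a pixel, which is exactly the positioning precision CT buys on a stripe panel.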
Properly implemented and calibrated CT exhibits neither visible blurring nor color halos. If your CT is exhibiting those, please use the MS CT calibration tool to fix it.
CT on CRTs doesn't work, because CRTs use phosphor triads. The result will look like bad convergence. When the font rendering engine thinks it is setting a subpixel at position (0.5, 1.0) on an LCD, on a CRT it is instead switching on a subpixel in a phosphor triad in completely the wrong position. People who report an improvement with CT on a CRT are seeing the effects of superior font antialiasing; they are not seeing the effects of ClearType proper, which is to take antialiased samples and render them to subpixels. CT on CRT = artifacts. I've tried it on two different high-end CRTs, side by side with an LCD, and it simply does not compare. I find that ordinary font smoothing with no CT works about as well, though CT may look slightly better due to an enhanced font rendering algorithm.
But it is a myth that "sharp" displays mean you need better AA. AA enhances readability; a less sharp display is less readable. *That* is the reason for AA on fonts, NOT eliminating the "sharp, jaggy" look. Blurry pixels that blend together are not a readability enhancer.
Moreover, on AA: even commercial offline-rendered CGI in films at 2K+ resolution uses high levels of AA (both spatial and temporal), and some shots use 4K. Even at high resolutions, you will see shimmering without AA. That's why film-quality CG uses up to 64x supersampling in some spots. 64x supersampling per axis at 2K resolution (2048x1536) would be akin to a 131,072 x 98,304 sample grid!
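A quick arithmetic check of those figures, assuming a 2048x1536 frame as the "2K" size (one common 4:3 2K resolution) and a 64x-per-axis supersampling factor:

```python
# Supersampling sample-grid arithmetic for the figures above.
# Assumes a 2048x1536 "2K" frame; 64x here means 64 samples per axis.
WIDTH, HEIGHT = 2048, 1536
FACTOR = 64

sample_w = WIDTH * FACTOR    # samples across
sample_h = HEIGHT * FACTOR   # samples down
print(f"{sample_w} x {sample_h}")            # 131072 x 98304
print(f"{sample_w * sample_h:,} samples")    # per frame, per channel
```

That is roughly 12.9 billion samples per frame, which is why this level of AA is only feasible offline, and only on the shots that need it.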