Moving on: I just finished up the AA shader, replacing edgeAA (the two were merged at one point, but I managed to get it producing better results on its own). The performance cost is just above edgeAA's (higher by around 0.7ms). Performance is great and so are the results:
http://a.imageshack.us/img842/1943/mlaagif.gif
The entire thing is image-based (not depth-based like edgeAA), so even sharp aliasing in textures will be removed. That can lead to some texture smoothing, but it's not too big a deal; it's a hell of a lot better than jaggies or blurred jaggies.
A little bit of info on the implementation (for anyone who stumbles upon this or is interested):
1. Sample the pixels surrounding the current one.
2. Use those samples to create a normal map from the screen image (remember that screen-normal map test I did a few months back? That. No screens left, they were on hugeup -.- Edit: see the bottom of the post for a new image).
3. Do a small weighted sum of 4 scene samples, using this new map as the offsets.
That's the gist of it. I said earlier that it was "some sort of MLAA", which isn't really true, as I'm not performing any pattern detection. Edit: got a name for it, NFAA (Normal Filter Anti-Aliasing), since we're doing a normal filter for edge detection.
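For anyone who wants to play with the idea, here's a rough GLSL sketch of those three steps. To be clear, this is my own guess at an implementation, not the shader from the build: the uniform names (uSceneTex, uInvScreenSize, uStrength), the tap pattern and the weights are all placeholders.

```glsl
#version 330 core

// Hedged sketch of the three steps above — NOT the exact shader from the
// build. Uniform names, tap pattern and weights are illustrative placeholders.
uniform sampler2D uSceneTex;      // rendered scene colour
uniform vec2      uInvScreenSize; // 1.0 / resolution in pixels
uniform float     uStrength;      // reach of the edge samples, e.g. 1.0 - 2.0

in  vec2 vUV;
out vec4 fragColor;

float luma(vec3 c) {
    return dot(c, vec3(0.299, 0.587, 0.114));
}

void main() {
    vec2 px = uInvScreenSize;

    // 1. Sample the pixels surrounding the current one, reduced to luminance.
    float tl = luma(texture(uSceneTex, vUV + px * vec2(-1.0, -1.0)).rgb);
    float tr = luma(texture(uSceneTex, vUV + px * vec2( 1.0, -1.0)).rgb);
    float bl = luma(texture(uSceneTex, vUV + px * vec2(-1.0,  1.0)).rgb);
    float br = luma(texture(uSceneTex, vUV + px * vec2( 1.0,  1.0)).rgb);

    // 2. Treat the luma like a height field and derive a screen-space
    //    "normal" from its gradients — this is the screen-normal map.
    vec2 normal;
    normal.x = (tl + bl) - (tr + br);
    normal.y = (tl + tr) - (bl + br);
    normal *= px * uStrength;         // scale into UV-space offsets

    // 3. Small weighted sum of scene samples offset along that normal
    //    (centre plus four offset taps here, equally weighted — the exact
    //    taps/weights in the real shader are a guess on my part).
    vec4 col = texture(uSceneTex, vUV)
             + texture(uSceneTex, vUV + normal)
             + texture(uSceneTex, vUV - normal)
             + texture(uSceneTex, vUV + vec2( normal.x, -normal.y))
             + texture(uSceneTex, vUV + vec2(-normal.x,  normal.y));

    fragColor = col * 0.2;
}
```

Because everything is driven by luma differences in the final image, it catches texture and shader aliasing too, which is where the slight texture smoothing mentioned above comes from.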
Edit: Here's an image of the screen-normal map, just uploaded.
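If you want to generate that kind of image yourself, the normal from step 2 can be written straight out as a colour instead of the blended scene. A minimal sketch of such a debug pass (same placeholder names as the sketch above; the [-1,1] to [0,1] remap is just the usual normal-map display convention):

```glsl
#version 330 core

// Debug view: output the detected screen-space "normal" as a colour,
// roughly the kind of image the screen-normal map screenshot shows.
uniform sampler2D uSceneTex;
uniform vec2      uInvScreenSize;

in  vec2 vUV;
out vec4 fragColor;

float luma(vec3 c) {
    return dot(c, vec3(0.299, 0.587, 0.114));
}

void main() {
    vec2 px = uInvScreenSize;

    float tl = luma(texture(uSceneTex, vUV + px * vec2(-1.0, -1.0)).rgb);
    float tr = luma(texture(uSceneTex, vUV + px * vec2( 1.0, -1.0)).rgb);
    float bl = luma(texture(uSceneTex, vUV + px * vec2(-1.0,  1.0)).rgb);
    float br = luma(texture(uSceneTex, vUV + px * vec2( 1.0,  1.0)).rgb);

    // Same gradient as the AA pass, normalized and remapped to [0,1] for display.
    vec3 n = normalize(vec3((tl + bl) - (tr + br), (tl + tr) - (bl + br), 1.0));
    fragColor = vec4(n * 0.5 + 0.5, 1.0);
}
```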