I haven't looked at this in detail, but I assume it's just recovering distance-to-edge data by looking at the slope of the depth deltas as they approach a silhouette edge, and otherwise doing the same thing as GBAA? Makes sense, and it might be a better fit for pre-DX10 hardware (or cards with very slow geometry shaders), but as you note, not all surfaces are nicely modeled with thickness. I've had no end of trouble with that in the realm of shadows.
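To make sure I'm picturing it right, here's a 1D toy of what I mean (entirely my guess at the technique, not anything from the actual implementation): treat depth as linear along each planar surface, extrapolate the depth lines of the surfaces on either side of a pixel, and intersect them to get the sub-pixel edge position.

```python
def edge_offset(d, i):
    """Guess the sub-pixel silhouette position near pixel i from a 1D
    row of depth samples d, by intersecting the linear depth
    extrapolations of the left and right surfaces.
    Returns the edge's offset from pixel i in pixels, or None if the
    two depth lines are parallel (same plane, no resolvable edge)."""
    # depth slope of the surface to the left, and to the right
    sl = d[i - 1] - d[i - 2]
    sr = d[i + 2] - d[i + 1]
    # depth lines: L(x) = d[i-1] + sl*(x-(i-1)); R(x) = d[i+1] + sr*(x-(i+1))
    denom = sl - sr
    if abs(denom) < 1e-12:
        return None
    # solve L(x) == R(x) for the intersection point x
    x = (d[i + 1] - d[i - 1] + sl * (i - 1) - sr * (i + 1)) / denom
    return x - i
```

Obviously the real thing has to pick the major axis and deal with non-planar neighbourhoods, but that's the basic intersection I have in mind.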
The biggest issue I've seen with GBAA so far is that when the scene is suitably complicated you get aliasing in the distance-to-edge buffer itself. While this is expected and no post-process technique is going to magically recover sub-pixel data, the problem with GBAA is that it then interprets this aliased data as noisy offsets, which makes both edges and occasionally surface interiors noisier than they were previously. Sometimes this actually looks worse than the original aliased image if, say, you have a clean edge that gets topologically complicated at one point: most of it is anti-aliased, but then you get weird speckles somewhere in the middle of it due to the underlying geometry.
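For context, this is roughly how I understand the resolve pass (1D toy in Python, my own paraphrase, horizontal direction only): each pixel carries a signed distance to the nearest edge, and any offset under half a pixel triggers a blend with the neighbour across the edge. When the offset buffer itself aliases, bogus offsets under that threshold still pass the test and blend in a neighbour that isn't across any real edge, which is exactly the speckling I'm describing.

```python
def gbaa_resolve(color, offset, x, y):
    """One pixel of a GBAA-style resolve (my own simplified reading).
    offset[y][x] is the signed distance (in pixels) from the pixel
    centre to the nearest silhouette edge along the x axis."""
    off = offset[y][x]
    if abs(off) >= 0.5:
        return color[y][x]  # edge isn't within this pixel: keep as-is
    # blend with the neighbour on the far side of the edge;
    # our own surface covers 0.5 + |off| of the pixel
    n = x + (1 if off > 0 else -1)
    w = 0.5 - abs(off)  # weight of the neighbour's colour
    # NOTE: if off is garbage from an aliased offset buffer, this
    # happily blends in an unrelated neighbour -> speckles
    c0, c1 = color[y][x], color[y][n]
    return tuple((1 - w) * a + w * b for a, b in zip(c0, c1))
```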
Have you played with GBAA in scenes with smaller triangles (JC2 or something, say)? The demo scene has really, really large triangles, so you don't really notice any of these issues there. Any solutions?
bigtabs: these techniques aren't really suitable for DLL injection or control panel overrides, even if those weren't the most evil things ever.
GBAA requires a geometry shader, SDAA requires an additional pass over the scene, and both require some additional storage. They are very easy to integrate into an engine IMHO, but not something you can magically "inject" into a rendering command list. Just let the game developers implement these things as necessary.