Another question someone here raised: the interview states that the CPU time of the AA method varies from scene to scene.
Here are some of my assumptions:
-So I assume there is some adaptation going on, e.g. some form of edge detection:
you have the fixed cost of the detection pass, which scales with the number of pixels and is therefore the same for every image at a given resolution
-I assume you also have some parameters you can tune to determine what actually counts as an edge (I don't believe you guys found a fully automatic algorithm, as this is basically one of the tasks we try to solve here all day long) - such a parameter lets you trade CPU time for higher quality or vice versa (more edges detected -> better quality -> more CPU time needed)
-you have an additional cost that depends on how many edges get detected, because wherever you detect an edge, you do your magic AA stuff.
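To make these assumptions concrete, here is a minimal sketch of the structure I am imagining: one fixed-cost pass over every pixel, one tunable threshold, and a variable number of detected edges. The luminance-difference test, the Image/EdgePixel types and the parameter names are all my own invention, not anything confirmed in the interview:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Guesswork to illustrate the assumed cost structure, not the actual algorithm.
struct Image {
    int width = 0;
    int height = 0;
    std::vector<float> luma;  // one luminance value per pixel
    float at(int x, int y) const { return luma[static_cast<std::size_t>(y) * width + x]; }
};

struct EdgePixel {
    int x = 0;
    int y = 0;
    float strength = 0.0f;  // how strong the discontinuity is at this pixel
};

// Fixed cost: visits every pixel once, so it scales with resolution only.
// Variable cost: the size of the returned list depends on the scene content
// and on the threshold (lower threshold -> more edges -> more AA work later).
std::vector<EdgePixel> detectEdges(const Image& img, float threshold)
{
    std::vector<EdgePixel> edges;
    for (int y = 1; y < img.height; ++y) {
        for (int x = 1; x < img.width; ++x) {
            const float dx = std::fabs(img.at(x, y) - img.at(x - 1, y));
            const float dy = std::fabs(img.at(x, y) - img.at(x, y - 1));
            const float strength = std::max(dx, dy);
            if (strength > threshold)  // the tunable "what is an edge" parameter
                edges.push_back({x, y, strength});
        }
    }
    return edges;
}
```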
Concluding based on these wild assumptions:
-This implies that the stated CPU time depends heavily on the actual game: if a scene has a lot of edges to be anti-aliased, you need more CPU time because more edges get detected...on the other hand, you can reduce the CPU time via the parameter that decides what counts as an edge!
-If a parameter is indeed used to tune the edge detection, one could adjust it adaptively to set the AA quality based on the available CPU time, i.e. based on the actual complexity of the scene: "I have a lot of spare CPU time, so I can really turn up the edge detection sensitivity and improve IQ"
-A better way would be: use a rather low threshold (i.e. high sensitivity), so that almost all edges get detected. Then sort them appropriately, "worst" edges first or something like that, which gives you a sorted list of detected edges. Now determine how much CPU time you have left; that automatically gives you the maximum number of edges you can anti-alias. Then just process the list from the worst edge down until you have no CPU time left...
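Here is a sketch of that last idea, reusing the hypothetical Image/EdgePixel types from the snippet above: detect with a permissive threshold, sort worst-first, then blend edges until the frame's CPU budget runs out. blendEdge() and the budget handling are placeholders, not the real filter:

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <vector>

// Placeholder AA step for a single edge pixel: a real implementation would
// blend along the detected edge direction; this just averages with the left
// neighbour to keep the sketch short.
void blendEdge(Image& img, const EdgePixel& e)
{
    const float left = img.at(e.x - 1, e.y);
    const float here = img.at(e.x, e.y);
    img.luma[static_cast<std::size_t>(e.y) * img.width + e.x] = 0.5f * (left + here);
}

// Process the worst edges first and stop as soon as the CPU budget for this
// frame is exhausted, so any skipped edges are the least objectionable ones.
void antialiasWithBudget(Image& img, std::vector<EdgePixel>& edges,
                         std::chrono::microseconds budget)
{
    std::sort(edges.begin(), edges.end(),
              [](const EdgePixel& a, const EdgePixel& b) { return a.strength > b.strength; });

    const auto start = std::chrono::steady_clock::now();
    for (const EdgePixel& e : edges) {
        if (std::chrono::steady_clock::now() - start >= budget)
            break;  // out of CPU time for this frame
        blendEdge(img, e);
    }
}
```

In practice you would probably check the clock only every few hundred edges instead of per pixel, but the idea stays the same: the quality degrades gracefully with the available CPU time instead of the CPU time exploding with scene complexity.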