The truth, AFAICS, is that Chalnoth has indeed consistently recognized 24-bit per component as sufficient for fragment processing, and has questioned the necessity of 32-bit per component. I don't recall him changing that position when the nv30's 128-bit support was announced, and I do remember, and have verified, that he held it from when the R300's 96-bit was established. Since I think it's well known that I'm not afraid to criticize Chalnoth, I'll take this opportunity to be lazy about posting a link and ask you to take my word for it, or search for "component" along with his name yourself.
Two things might be confusing the issue here:
1) He initially framed the R300's 24-bit per component capability as a tradeoff forced by its 0.15 micron process, in the middle of a long tirade of other criticisms of the R300 (the power connector, and his statements of "disappointment" over ATI's "without limits" phrasing).
2) He has tended to advocate 32-bit FP values being used for vertex processing.
On another note, if Doom3 has game-controlled anisotropic filtering, and with relatively efficient MSAA implementations available, why do we still think the GF FX and R300 can't run it with aniso and AA? How hard would it be for him to apply aniso only on the surfaces where it actually matters? Worst case, MSAA should not be a performance issue...it's as if people are stuck on the belief that "latest id game = crushes every card out there" regardless of how powerful today's cards are...have people just decided to forget how far performance has leapt past Carmack's initial targets?
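To make the per-surface idea concrete, here's a rough sketch of how an OpenGL engine could request anisotropy only on selected textures via the EXT_texture_filter_anisotropic extension; the function name and the wants_aniso flag are my own invention for illustration, not anything Carmack has said he's doing:

```c
/* Hypothetical sketch: per-surface anisotropic filtering using
 * EXT_texture_filter_anisotropic.  The engine requests a high max
 * anisotropy only on textures that benefit (floors, walls seen at
 * grazing angles) and leaves everything else at plain trilinear. */
#include <GL/gl.h>
#include <GL/glext.h>

void set_surface_aniso(GLuint texture, int wants_aniso)
{
    GLfloat max_supported = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_supported);

    glBindTexture(GL_TEXTURE_2D, texture);
    /* 1.0 means ordinary trilinear; higher values add filtering taps
     * (and bandwidth cost) only on the textures that ask for them. */
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT,
                    wants_aniso ? max_supported : 1.0f);
}
```

The point is simply that the cost is opt-in per texture, so "aniso on" doesn't have to mean "aniso everywhere."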
We should separate the performance concern into "what we've seen in the screenshots" and "the maximum image quality tweaks he just referred to in his plan", but the thing is, those maximum tweaks seem to me likely to be performance killers even without AA, and perhaps even without anisotropic filtering.