I asked him directly about the validity of the quote and for a few more details, and he just answered me this:
The quote is from me. Nvidia probably IS "cheating" to some degree,
recognizing the Doom shaders and substituting optimized ones, because I
have found that making some innocuous changes causes the performance to
drop all the way back down to the levels it used to run at. I do set the
precision hint to allow them to use 16 bit floating point for everything,
which gets them back in the ballpark of the R300 cards, but generally still
a bit lower.
Removing a back end driver path is valuable to me, so I don't complain
about the optimization. Keeping Nvidia focused on the ARB standard paths
instead of their vendor specific paths is a Good Thing.
The bottom line is that the ATI R300 class systems will generally run a
typical random fragment program that you would write faster than early NV30
class systems, although it is very easy to run into the implementation
limits on the R300 when you are experimenting. Later NV30 class cards are
faster (I have not done head to head comparisons with non-Doom code), and
the NV40 runs everything really fast.
Feel free to post these comments.
John Carmack
But it's a chicken-and-egg problem: now you have to believe me that this really comes from John Carmack. I just hope that this will clear things up and stop people from implying that he is biased.
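For anyone curious, the precision hint Carmack mentions is presumably the OPTION ARB_precision_hint_fastest directive of the GL_ARB_fragment_program extension, which permits the driver to evaluate a fragment program at reduced (e.g. 16-bit) float precision instead of full 32-bit. A minimal sketch in C, assuming an OpenGL context where the extension's entry points are available (the program name and texture setup here are hypothetical, not from Doom's actual source):

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

/* A trivial ARB fragment program: sample a texture into the output
 * color. The OPTION line is the precision hint, telling the driver
 * it may run the whole program at reduced (fp16) precision. */
static const char fp_source[] =
    "!!ARBfp1.0\n"
    "OPTION ARB_precision_hint_fastest;\n"
    "TEX result.color, fragment.texcoord[0], texture[0], 2D;\n"
    "END\n";

void load_fragment_program(GLuint prog)
{
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB,
                       GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fp_source), fp_source);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);
}

Dropping the OPTION line forces full-precision evaluation, which is the kind of "innocuous change" that could plausibly swing NV30 performance the way the quote describes.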