<Insert standard disclaimer: personal views, not those of my employer, etc.>
I think it's more that the DXT1->16-bit behaviour was considered a design feature of the earlier NVIDIA chips rather than a flaw. The common theory is that NVIDIA decompressed the DXTn formats before storing them in the internal texture cache. Decompressing to only 16 bits per texel would make the cache appear to hold twice as many texels and hence, presumably, resulted in a performance improvement.
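Just to make the precision argument concrete, here's a minimal sketch (plain C, entirely my own illustration, not anyone's actual hardware or driver logic) of the difference between building an opaque DXT1 block's four-colour palette at 8 bits per channel and squeezing each palette entry back through RGB565, which is roughly what storing decompressed texels at 16 bits would imply:

```c
#include <stdint.h>
#include <stdio.h>

/* Expand a 16-bit RGB565 value to 8-bit-per-channel RGB. */
static void rgb565_to_rgb888(uint16_t c, uint8_t rgb[3])
{
    rgb[0] = (uint8_t)(((c >> 11) & 0x1F) * 255 / 31);
    rgb[1] = (uint8_t)(((c >> 5)  & 0x3F) * 255 / 63);
    rgb[2] = (uint8_t)(( c        & 0x1F) * 255 / 31);
}

/* Quantize 8-bit-per-channel RGB back down to RGB565 and re-expand,
 * simulating a cache that only keeps 16 bits per texel. */
static void quantize_to_565(uint8_t rgb[3])
{
    uint16_t c = (uint16_t)(((rgb[0] >> 3) << 11) |
                            ((rgb[1] >> 2) << 5)  |
                             (rgb[2] >> 3));
    rgb565_to_rgb888(c, rgb);
}

/* Build the 4-colour palette for an opaque DXT1 block (the c0 > c1
 * case).  If keep_16bit is set, every palette entry is forced through
 * RGB565 -- the behaviour attributed to the early NVIDIA parts. */
static void dxt1_palette(uint16_t c0, uint16_t c1, int keep_16bit,
                         uint8_t palette[4][3])
{
    rgb565_to_rgb888(c0, palette[0]);
    rgb565_to_rgb888(c1, palette[1]);
    for (int ch = 0; ch < 3; ch++) {
        /* The two interpolated colours carry more precision than the
         * RGB565 endpoints; this is where 16-bit storage hurts. */
        palette[2][ch] = (uint8_t)((2 * palette[0][ch] + palette[1][ch]) / 3);
        palette[3][ch] = (uint8_t)((palette[0][ch] + 2 * palette[1][ch]) / 3);
    }
    if (keep_16bit)
        for (int i = 0; i < 4; i++)
            quantize_to_565(palette[i]);
}

int main(void)
{
    /* Two nearby blues, the kind of endpoint pair a smooth sky
     * gradient tends to produce. */
    uint16_t c0 = 0x001F, c1 = 0x000F;
    uint8_t p32[4][3], p16[4][3];

    dxt1_palette(c0, c1, 0, p32);  /* interpolate at 8 bits/channel */
    dxt1_palette(c0, c1, 1, p16);  /* then clamp to 16 bits/texel   */

    for (int i = 0; i < 4; i++)
        printf("colour %d: 32-bit path (%3d,%3d,%3d)  16-bit path (%3d,%3d,%3d)\n",
               i, p32[i][0], p32[i][1], p32[i][2],
               p16[i][0], p16[i][1], p16[i][2]);
    return 0;
}
```

The two interpolated palette entries are exactly where the extra precision lives, which would explain why the loss shows up as banding in smooth gradients rather than everywhere at once.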
I get the feeling that the competitors did not take this approach.
I doubt that we'll ever see NV publicly admit that it's been fixed in NV30, simply because that would be equivalent to admitting that it was broken in the first place.