YeuEmMaiMai said:
1. FP is only part of the spec when the developer opts for partial precision, not when Nvidia decides to replace shaders that call for FP24 (well, FP32 for Nvidia) with hand-coded FP16 ones... do you see the difference?
Well, in the case of the Dawn demo, the developer clearly opted for partial precision, so the demo is still a DirectX 9 demo. I am not discussing nVidia's cheating and hand-replacement of shaders in games and demos.
YeuEmMaiMai said:
If the demo was GPU limited, it is only GPU limited on Nvidia's hardware, since we already determined that it runs faster on ATi's. Personally, I think it is CPU limited, as my 1.8 GHz AMD 2200+ and 9500 Pro get an average of 35 fps at 1600*1200*32.
Well, GPU limited means that a better CPU will have only a small impact on performance; CPU limited means that a better GPU will have only a small impact on performance. I think it is GPU limited no matter whether an ATi or nVidia card is used. But as soon as the download is done, I will try it.
YeuEmMaiMai said:
The gripe I have is that FP16 (contrary to what Nvidia claims) clearly is not sufficient, for if it was, MS would have adopted it and we would not be having this discussion.
MS has adopted it, but only when partial-precision hints are used. In my opinion, when the extra precision of FP24 or FP32 doesn't add anything, a developer should use FP16. Somehow, some people here argue that you shouldn't, while in normal software, where performance matters, people are used to picking the precision that is sufficient and gives better performance.
Again, I am not discussing cheating/shader replacement in drivers.
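To put some numbers on when FP16 is enough: here is a minimal NumPy sketch (my own illustration, not anything from the demo) using `float16`, which shares the 10-bit-mantissa layout of the FP16 shader format. Plain color math fits easily in FP16, but addressing a large texture does not.

```python
import numpy as np

# IEEE half precision (1 sign / 5 exponent / 10 mantissa bits) has the
# same layout as the FP16 shader format: roughly 3 decimal digits.
print(np.finfo(np.float16).eps)   # 2**-10 = 0.000977
print(np.finfo(np.float32).eps)   # ~1.19e-07

# An 8-bit-per-channel framebuffer quantizes colors in steps of
# 1/255 ~= 0.0039, coarser than FP16's spacing near 1.0, so plain
# color arithmetic rarely shows FP16 error on screen.
c = np.float16(0.7) * np.float16(0.5)
print(float(c))

# Texture addressing is where FP16 falls short: selecting the last
# texel along a 4096-texel axis needs a coordinate of 4095/4096, but
# FP16 rounds it all the way up to 1.0 (an off-by-one texel).
u = 4095.0 / 4096.0
print(float(np.float32(u)))   # stays below 1.0
print(float(np.float16(u)))   # rounds to exactly 1.0
```

So whether FP16 is "sufficient" depends entirely on what the shader computes with it, which is exactly what the partial-precision hint lets the developer decide per instruction.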
YeuEmMaiMai said:
lets take a look at Nvidia's history concerning IQ shall we?
NV1 failure
I have never seen it, so I can't judge its quality. However, wasn't this the board that supported quads, while Microsoft wasn't bold enough to include support for them in the first Direct3D version?
YeuEmMaiMai said:
Riva128: inferior IQ to every other card out there (load up Jedi Knight on it and notice it cannot render transparent textures); inferior IQ compared to the S3 Virge, ATi Rage 3D, Rendition Vérité, 3dfx, Matrox, etc., but it was fairly fast, so it sold.
2D was crap at anything above 1024*768.
I can't judge it because I have never seen it.
YeuEmMaiMai said:
TNT/TNT2: once again inferior IQ to every other card out there, including the Rage IIc, Savage 3D/4, Rendition Vérité V2x00, Banshee, and Matrox; very fast, so it sold.
Image quality in 32-bit 3D was far from inferior to any other card's. If you mean 2D quality (sharpness of the image), I can agree with you.
YeuEmMaiMai said:
Ditto for the GF1/2/3, and finally with the GF4 they got their 2D act together, but they still have trouble with aniso and FSAA quality.
What is bad about the image quality of the GF line? The only gripe one can have is that they didn't implement 32-bit interpolation for DXT1 texture compression, resulting in banding wherever textures contained smooth color gradients.
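As a hypothetical sketch of why the interpolation precision matters for DXT1 (the expansion and quantization helpers below are my own illustration, not the actual hardware path): the two block endpoints are RGB565 in any case, but if the in-between colors are also snapped back to 16-bit precision, the intermediate shades of a gradient collapse onto the endpoints.

```python
def expand5(v):
    # Expand a 5-bit color channel to 8 bits, replicating high bits,
    # as DXT1 decoders commonly do.
    return (v << 3) | (v >> 2)

def quant5(v8):
    # Snap an 8-bit channel back to the nearest 5-bit value, modeling
    # a 16-bit (RGB565) intermediate result.
    return min(31, (v8 * 31 + 127) // 255)

# Red channel of two DXT1 endpoints one 5-bit step apart.
r0, r1 = expand5(16), expand5(17)            # 132 and 140

# Higher-precision interpolation: the block's 1/3 and 2/3 colors
# stay distinct, so a gradient gets four levels per block.
hi = [round(r0 + (r1 - r0) * t / 3) for t in range(4)]

# 16-bit interpolation: each intermediate is requantized to 5 bits,
# collapsing the in-between shades onto the endpoints -> banding.
lo = [expand5(quant5(v)) for v in hi]
print(hi)   # four distinct levels
print(lo)   # only the two endpoint levels survive
```

The endpoints match either way; only the interpolated colors differ, which is why the artifact shows up specifically as banding across smooth gradients rather than as overall color error.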