DemoCoder said:
At best, the people arguing for single precision should at least argue that FP32 needs to be supported as the minimum. If ATI indeed did not "finalize" their HW until MS said "FP24 is minimum", and if ATI indeed could have supported more than FP24 "easily", why didn't they move to FP32 from the beginning and lobby Microsoft to define FP32 as the minimum? At the time, NV already had FP32-capable HW (although with pathetic performance), and ATI could have delivered FP32 "easily", so there could have been consensus on FP32. Therefore, if MS had endorsed FP32 as the minimum, we wouldn't have had to wait another generation for the standard to be bumped up, since both vendors could have had FP32 HW ready, and ATI still would have come out looking golden, because presumably their FP32 implementation would have "wiped the floor" with NVidia.
A) How would they know exactly what NV30 would look like?
B) Why increase die size and decrease yields for something that may not get used fully--or even well--for a generation?
C) Wouldn't we still be in the SAME place as we are right now, except with developers programming their long shaders to FP32? R3xx would still be running them straight, and NV3x would still be mix-moding and _pp'ing.
Plus, there are more players than ATi and nVidia who probably didn't want the added complexity and didn't see the need for it straight off. And who knows how many developers felt they needed that precision as a minimum when they were only just starting to roll out those kinds of advanced shaders? As well, we already saw card generations split into DX8 and DX8.1, and the Pixel Shader spec go through many sub-versions, so why would folks fret overly over similar stepping in DX9? (Especially since, by all accounts, we're going to ride on it for a while.)
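To make the precision point in C concrete, here's a rough, hypothetical sketch (my own constants and step count, not DX9 shader code or anything from either vendor's hardware): numpy has no FP24 type, so float16 vs float32 just stands in for the general idea that rounding error compounds over long dependent instruction chains, which is why the spec's minimum precision matters more the longer shaders get.

```python
# Illustrative only: numpy has no 24-bit float, so float16 (roughly what _pp
# drops to on NV3x) vs float32 stands in for the general precision argument.
import numpy as np

def iterate(x, steps, dtype):
    """Repeatedly apply a dependent multiply, rounding to `dtype` each step."""
    x = dtype(x)
    scale = dtype(1.001)          # arbitrary constant, chosen for illustration
    for _ in range(steps):
        x = dtype(x * scale)      # each step rounds to the chosen format
    return float(x)

ref = iterate(1.0, 500, np.float64)   # high-precision reference
f32 = iterate(1.0, 500, np.float32)   # full single precision
f16 = iterate(1.0, 500, np.float16)   # half precision

print(f"float64: {ref:.6f}")
print(f"float32: {f32:.6f}  (rel. error {abs(f32 - ref) / ref:.1e})")
print(f"float16: {f16:.6f}  (rel. error {abs(f16 - ref) / ref:.1e})")
```

Worth noting that FP24's 16-bit mantissa sits much closer to FP32's 23 bits than FP16's 10 bits does, so the real-world gap is smaller than this toy example suggests.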