Hi,
Does anyone know where I can find accurate information about the precision of ps 2.0 operations?
I've heard that ATI uses a 24-bit floating-point format in the Radeon 9700 series and that Nvidia optionally uses a 16- or 32-bit format in the GeForce FX, but that's not what I'm interested in. I need to know the relative error of individual instructions, pow for example. According to the DirectX 9.0 SDK it is "full precision", but what does that mean? Is it just a very good approximation, say a maximum relative error of 0.0001, or is the full mantissa IEEE-correct with rounding? Or is it simply implementation-dependent and changeable through the drivers? Full precision sounds a bit odd to me, since that would require a lot of extra silicon and clock cycles, and I can't imagine precision being more important than performance on graphics cards meant for gaming.
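To make concrete what I mean by relative error, here is a rough CPU-side sketch (my own assumption, not anything from the SDK) that compares a pow built from single-precision log2/exp2 -- which is, as far as I understand, roughly how the ps_2_0 pow macro gets expanded -- against a double-precision reference. The float intermediates only approximate GPU behaviour, since real hardware may use 16-, 24- or 32-bit internal formats:

/* Sketch: estimate the maximum relative error of pow(x, y)
 * approximated as exp2(y * log2(x)) in single precision,
 * compared against a double-precision reference. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double max_rel_err = 0.0;
    for (double x = 0.01; x <= 16.0; x += 0.01) {
        for (double y = 0.1; y <= 8.0; y += 0.1) {
            /* single-precision log2/exp2 decomposition */
            float approx = exp2f((float)y * log2f((float)x));
            double exact = pow(x, y);
            double rel   = fabs((double)approx - exact) / fabs(exact);
            if (rel > max_rel_err)
                max_rel_err = rel;
        }
    }
    printf("max relative error: %g\n", max_rel_err);
    return 0;
}

Something like a guaranteed bound on that number is what I'm looking for in the specs.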
Any information would be greatly appreciated.