Doomtrooper said:
> DemoCoder said:
>> Nope, multiprecision will always exist. First of all, it already exists with respect to input/output (look at all the texture formats and framebuffer formats), so storage is already multiprecision. Developers want the ability to deal with multi-precision data. We don't render everything at HDR resolution just because "it is easy and simple" to always use max precision on everything.
>
> What's wrong with doing everything in high precision if speed is not affected?

Speed IS affected. That is the whole point.
1) Memory bandwidth.
Using lower precision, where you can, reduces memory bandwidth usage, which results in better performance. How much performance you gain depends on several factors, such as whether the workload was already bandwidth limited.
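To make the bandwidth point concrete, here is a back-of-the-envelope sketch comparing the bytes moved per frame for an FP32 versus an FP16 RGBA render target. The resolution and formats are illustrative assumptions, not tied to any particular card:

```python
# Illustrative only: per-frame framebuffer traffic for a hypothetical
# 1920x1080 RGBA render target at two storage precisions.
WIDTH, HEIGHT = 1920, 1080
PIXELS = WIDTH * HEIGHT

def framebuffer_bytes(bytes_per_channel, channels=4):
    """Bytes needed to store one full frame at the given per-channel size."""
    return PIXELS * channels * bytes_per_channel

fp32_frame = framebuffer_bytes(4)  # 32-bit float per channel
fp16_frame = framebuffer_bytes(2)  # 16-bit half float per channel

print(f"FP32 RGBA frame: {fp32_frame / 2**20:.1f} MiB")  # ~31.6 MiB
print(f"FP16 RGBA frame: {fp16_frame / 2**20:.1f} MiB")  # ~15.8 MiB
# Halving the storage precision halves the bytes moved on every framebuffer
# read and write, which directly helps when the pass is bandwidth limited.
```

Every blend, read-modify-write, or post-processing pass touches that buffer again, so the savings multiply with the number of passes.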
2) Lower latency.
The lower the precision, the fewer cycles it takes to complete an operation. CPUs (and possibly GPU shaders) nowadays are pipelined, so a new operation can be started each cycle. However, when one operation requires the result of another, the full latency of that operation has to be waited out. The GPU/CPU might try to reorder instructions to hide that latency.
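A toy cycle-count model shows why latency matters for dependent operations. The latency numbers below are made-up assumptions purely for illustration, not measurements of any real hardware:

```python
# Toy pipeline model (illustrative assumption, not any real GPU):
# one op can issue per cycle, but its result is only ready LATENCY cycles later.
LATENCY_HI = 4  # hypothetical latency at higher precision
LATENCY_LO = 2  # hypothetical shorter latency at lower precision

def cycles_dependent_chain(n_ops, latency):
    """Each op consumes the previous op's result, so nothing overlaps:
    the full latency is exposed on every op."""
    return n_ops * latency

def cycles_independent(n_ops, latency):
    """No dependencies: one op issues per cycle and only the last op's
    latency is exposed at the end."""
    return (n_ops - 1) + latency

print(cycles_dependent_chain(8, LATENCY_HI))  # dependent chain, high precision
print(cycles_dependent_chain(8, LATENCY_LO))  # dependent chain, low precision
print(cycles_independent(8, LATENCY_HI))      # independent ops hide latency
```

In this model, a dependent chain runs twice as fast at the lower precision, while fully independent work barely notices the latency at all; real shaders sit somewhere in between, which is why reordering helps but cannot always save you.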
As you can see, I disagree with zeckensack that bandwidth is the only reason for going to lower precision. Latency is another important factor.