Reverend said:
Reading back a few pages, I saw some discussion about determining X or Y.
IMO, it would be INSANE for any compiler to try to do automated error analysis and reduce precision where it thinks the results aren't significant.
I already explained why it is a "hard" problem, but it is not a theoretically uncrackable one, given the right semantics added to the API/language and an interval-arithmetic compiler.
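To make the interval-arithmetic idea concrete, here is a minimal C++ sketch (the Interval type and the sample ranges are invented for illustration): each operation propagates a [lo, hi] range, so a compiler holding developer-supplied ranges could prove when a result fits a narrower format.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical sketch: an interval tracks the range a value can take,
// so a compiler could prove when a result fits a narrower format.
struct Interval {
    double lo, hi;
};

Interval mul(Interval a, Interval b) {
    // Product range: min/max over the four corner products.
    double p[4] = { a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi };
    return { *std::min_element(p, p + 4), *std::max_element(p, p + 4) };
}

Interval add(Interval a, Interval b) {
    return { a.lo + b.lo, a.hi + b.hi };
}

int main() {
    // Developer-supplied ranges, e.g. two texture channels in [0, 1].
    Interval tex0 = { 0.0, 1.0 }, tex1 = { 0.0, 1.0 };
    Interval lit  = add(mul(tex0, tex1), tex0);   // result provably in [0, 2]
    std::printf("result range: [%g, %g]\n", lit.lo, lit.hi);
    // A range this tight (plus a known error budget) is the kind of
    // information that could justify evaluating at reduced precision.
    return 0;
}
```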
Precision isn't something a compiler should mess with at all. As a programmer, I want everything precisely specified by me and me alone, as either 32-bit or 64-bit floating point (IEEE).
Well, that's you. Different programming languages exist because different people have different preferences about how much housekeeping they'd like to do. Some people don't even like the compiler choosing which instructions "to mess with" and write everything directly in assembly. Most people like SQL and regular expressions because they don't have to tell the DB imperatively how to satisfy queries. It depends on what you like to do.
Personally, I like to use scene graph libraries because I don't really care about managing those data structures.
Those dynamically typed languages are completely deterministic and predictable when it comes to evaluating basic arithmetic operations and user-defined functions constructed from them. The compiler doesn't go in and arbitrarily decide to evaluate some expression with a different precision or data type; though the precision isn't statically known, it is deterministically derived from the dynamic types attached to your data.
No, a type inferencing functional compiler has the same problems with the "range and statistical nature" (as you put it) of I/O-related input data. The only way to avoid this is to have the error ranges specified with the input data. Otherwise, a compiler must make the conservative choice, which is to choose the highest precision of a given type (integer or floating point). Some compilers insert code to do type inferencing at runtime, so that, for example, a factorial() function will use a native INT for any result < 2^32, but then switch over to an arbitrary-precision Integer object afterwards.
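Here is a sketch of that runtime switch (hypothetical code, not any particular compiler's output; it uses a 64-bit fast path rather than 32-bit): compute with a native integer until the next multiply would overflow, then promote to an arbitrary-precision representation.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical sketch of a compiler-inserted fast path: use a native
// integer while the result fits, then promote to arbitrary precision.
std::string factorial(unsigned n) {
    uint64_t fast = 1;
    unsigned i = 2;
    for (; i <= n; ++i) {
        if (fast > UINT64_MAX / i) break;   // next multiply would overflow
        fast *= i;
    }
    if (i > n)                              // never overflowed: native path
        return std::to_string(fast);

    // Slow path: base-1e9 "bignum" digits, least significant first.
    std::vector<uint32_t> big;
    for (; fast > 0; fast /= 1000000000)
        big.push_back(uint32_t(fast % 1000000000));
    for (; i <= n; ++i) {                   // multiply bignum by a small int
        uint64_t carry = 0;
        for (auto& d : big) {
            uint64_t cur = uint64_t(d) * i + carry;
            d = uint32_t(cur % 1000000000);
            carry = cur / 1000000000;
        }
        for (; carry > 0; carry /= 1000000000)
            big.push_back(uint32_t(carry % 1000000000));
    }
    std::string s = std::to_string(big.back());
    for (auto it = big.rbegin() + 1; it != big.rend(); ++it) {
        char buf[10];
        std::snprintf(buf, sizeof buf, "%09u", unsigned(*it));
        s += buf;
    }
    return s;
}

int main() {
    std::printf("20! = %s\n", factorial(20).c_str());  // fits in uint64_t
    std::printf("25! = %s\n", factorial(25).c_str());  // promoted to bignum
    return 0;
}
```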
Runtime inferencing like this is no different from a hypothetical shading language compiler (we're not talking DX9 here, but a future language) that has range information specified by the developer.
A DX9 driver could, at best, do conservative type promotions, e.g. 8-bit * 8-bit promoted to 16-bit, etc. For the vast majority of DX8 shaders this would be a win. For many DX9 shaders it would also be a win, since most people are still working with 8-bit artwork.
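A sketch of the conservative promotion rule (hypothetical helper names): for exact unsigned arithmetic, a multiply needs the sum of the operand widths and an add needs one bit more than the wider operand, which is why 8-bit * 8-bit can always be evaluated safely at 16 bits.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical sketch: minimum result widths for *exact* unsigned
// arithmetic. A multiply needs a + b bits, since
// (2^a - 1) * (2^b - 1) < 2^(a+b); an add needs max(a, b) + 1 bits.
unsigned mulResultBits(unsigned aBits, unsigned bBits) {
    return aBits + bBits;
}

unsigned addResultBits(unsigned aBits, unsigned bBits) {
    return std::max(aBits, bBits) + 1;
}

int main() {
    // 8-bit texture * 8-bit texture: 255 * 255 = 65025 < 65536,
    // so a 16-bit intermediate is exact.
    std::printf("8 x 8 multiply needs %u bits\n", mulResultBits(8, 8));
    std::printf("8 + 8 add needs %u bits\n", addResultBits(8, 8));
    return 0;
}
```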
DemoCoder said:
The core issue is multipass. The compiler can estimate error in a single pass, but how can it estimate ACCUMULATED ERROR without saving it in an additional buffer (E-Buffer? Error Buffer?) for every pixel? Perhaps it would use MRT or some kind of packing to store the error to pick it up in later passes.
Omigod! Argh!
If you're going to do deferred rendering by shoving the pipeline state into a fat buffer, why not shove a few range variables too?
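For illustration, a CPU-side sketch of that idea (the E-buffer layout and the per-pass error figures are invented): carry a worst-case error bound alongside each pixel so later passes can read it back.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical sketch of an "E-buffer": alongside each pixel's color,
// store an accumulated error bound that later passes read back.
struct Pixel {
    float color;   // stand-in for a full RGBA G-buffer entry
    float err;     // worst-case absolute error accumulated so far
};

// One shading pass: blend in a new term and grow the error bound.
// 'ulp' stands in for the rounding error of this pass's arithmetic.
void pass(std::vector<Pixel>& fb, float term, float ulp) {
    for (auto& p : fb) {
        p.color += term;
        p.err   += ulp;            // error bounds add in the worst case
    }
}

int main() {
    std::vector<Pixel> framebuffer(4, Pixel{ 0.0f, 0.0f });
    pass(framebuffer, 0.25f, 1.0f / 256.0f);   // e.g. an 8-bit-precision pass
    pass(framebuffer, 0.50f, 1.0f / 1024.0f);  // a higher-precision pass
    // A later pass could consult err to decide whether reduced (_PP-style)
    // precision is still within the error budget for this pixel.
    std::printf("pixel 0: color %.3f, error bound %.5f\n",
                framebuffer[0].color, framebuffer[0].err);
    return 0;
}
```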
Look, no one is suggesting this actually be done, since it would hugely burden compiler/driver developers and application developers alike. All this just points out why types (e.g. _PP) are necessary and why Dio's comment was not fully thought through. Shaders should not be typeless. The developer should maximize the amount of semantic information given to the driver, so that it has the most information to work with.
P.S. You might want to look at this paper for the technique applied to SIMD computations on Intel (MMX):
http://www.acm.org/sigs/sigmm/MM98/electronic_proceedings/pollard/