It seems to me that some people have overreacted a little to the current issues with pixel precision. I remember reading a developer saying that future versions of graphics APIs should be pixel precise. At the time I dismissed that as the mad ranting of a frustrated individual. Recently I heard something that brought the comment back to mind and made me chuckle: I was talking to someone using the DX9 reference rasterizer, and he was getting slightly different results between a Pentium 3 and a Pentium 4 system. That reminded me of some software rasterization I worked on a while back. I was using the SSE instruction set on the P4 and found I was getting different pixel results in different rendering modes. It turned out to be a case where (a+b)+c was producing results different enough from a+(b+c) that pixels came out differently. Floating-point addition simply isn't associative.
Basically, the moral of the story is that people should not ask an API for pixel-exact results. They should ask for tolerances, just as the OpenGL and D3D specs currently specify. If those specs aren't good enough for developers, get the tolerances tightened and the algorithms more precisely defined, but don't go overboard and demand exact pixel reproduction. It's also possible the specs are good enough and some hardware doesn't, or can't, follow them. That said, knowing something about the OpenGL spec, I know there are places where it leaves a lot of wiggle room, such as anisotropic filtering.