DemoCoder said:
But film *print* (what's used to play back your movie) must be able to store the entire range that any of the negatives could have been exposed at, because analog film projectors can't dynamically adjust their iris/light source (this information isn't recorded by analog cameras on 35mm film for the projector to use at playback). So Kodak's film-quality *print* stock has to be HDR to hold the entire range of possible negative exposures. In this case, the projector/human eye is the "tone mapper", and the adjustment of the camera's aperture/exposure settings is the "scaling" step. But note that it takes significant effort to get exposure right! It's difficult to "develop".
That last part is the key, DC. Significant effort? What are you planning to do with your FP16 buffer? You still have to decide somehow what scaling factor you'll use in the tone mapping; in the TV industry, that's the cameraman's job. With FP10 you have to use the previous frame's data to scale into the [-32,32] range, while with FP16 you can do all your rendering first and derive the scale from the current frame, but that's a moot point: you usually want a time delay anyway (e.g. rthdribl).
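To make the FP10 case concrete, here's a rough sketch of that previous-frame scheme (CPU-side C++ just for illustration; the function names are mine, and in practice you'd do the reduction on the GPU):

```cpp
#include <cmath>
#include <vector>

// Log-average luminance of the *previous* frame - the usual basis for picking
// an exposure/scale factor. Illustrative CPU version of a GPU reduction.
float LogAverageLuminance(const std::vector<float>& lum)
{
    const float eps = 1e-4f;                 // avoids log(0) on black pixels
    double sum = 0.0;
    for (float L : lum)
        sum += std::log(eps + L);
    return std::exp(static_cast<float>(sum / lum.size()));
}

// Scale chosen so that last frame's average lands on a photographic "key"
// value (0.18 = middle grey). Anything the scale pushes above ~32 simply
// clips in the FP10 target - that's the trade-off being discussed.
float ExposureScale(float prevFrameLogAvg, float key = 0.18f)
{
    return key / prevFrameLogAvg;
}
```

With FP16 the only real difference is that you can run the same reduction on the frame you just rendered instead of the one before it.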
DemoCoder said:
In the case of HDR buffers, the HDR exists not just to prevent saturation, but to preserve local contrast differences. That's why tone mapping algorithms can be so complex: the challenge is to model how the human eye perceives local contrast as well as global contrast. See
http://www.cs.virginia.edu/~gfx/pubs/tonemapGPU/
The eye is capable of distinguishing roughly a 10,000:1 contrast ratio at any given time because of this very phenomenon, and it perceives those levels logarithmically. Note that FP10 gives you a ratio of 32 / (1/256) = 8192, which is plenty for local contrast simulation. Anyway, reaching photo-quality real-time graphics would be an admirable achievement, and cameras don't have this local contrast ability. You're talking about better than photorealistic - slow down there! Thanks for the link to the paper, by the way.
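For reference, and not to be confused with the local operator in that paper, the simplest global tone mapping curve is something like Reinhard's; the local operators essentially make the compression depend on a pixel's neighbourhood, which is where the complexity (and the eye modelling) comes in:

```cpp
// Reinhard's global operator with a white point, as a reference point only -
// the linked paper's local operators vary the compression per neighbourhood.
// L is scene luminance already pre-scaled (e.g. by the ExposureScale sketch
// above); the result is display luminance, with anything at or above Lwhite
// burning out to 1 (clamp afterwards).
float ReinhardGlobal(float L, float Lwhite = 4.0f)
{
    return L * (1.0f + L / (Lwhite * Lwhite)) / (1.0f + L);
}
```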
DemoCoder said:
These tricks to fit the range into a non-true-HDR buffer are not likely to account for all of the aspects of real tone map range compression.
I never said they would. However, they will be very effective. Remember, my central point is that going from FX8 to FP10 will be a much bigger step in quality than going from FP10 to FP16 or even FP32. Yes, I think there will be some banding artifacts here and there, but for potentially twice the performance and high-speed MSAA to boot, it's a very good compromise.
I don't see why you think this will be such a headache for developers. Many applications will be able to get away with a simple format change for the framebuffer. Even the scaling issue will probably only need to be considered if you want to really simulate HDR accurately instead of just making it look convincing.
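As a sketch of how small that format change can be, assuming a D3D9-style renderer (FP10 itself is a Xenos/console format, so the half-float format stands in for the HDR case here):

```cpp
#include <d3d9.h>

// The only difference between the LDR and HDR paths at creation time is the
// format enum; the rest of the frame loop is untouched.
HRESULT CreateSceneTarget(IDirect3DDevice9* device, UINT width, UINT height,
                          bool hdr, IDirect3DTexture9** outTexture)
{
    D3DFORMAT format = hdr ? D3DFMT_A16B16G16R16F   // 64-bit FP16 target
                           : D3DFMT_A8R8G8B8;       // ordinary FX8 target
    return device->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                                 format, D3DPOOL_DEFAULT, outTexture, NULL);
}
```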
I do agree with you, however, that a shared-exponent format (RGBE-style) would probably have been a better option. The transistor overhead appears to be limited to just a few comparison and bit-shifting circuits, which is nothing compared to the blending logic.
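To back up the transistor claim, here's a rough software model of an RGBE-style shared-exponent encode (an illustrative 8-bit-mantissa layout, not any particular GPU format); in hardware this boils down to a max, a shift and a small add per channel:

```cpp
#include <cmath>
#include <cstdint>

struct RGBE { uint8_t r, g, b, e; };   // three 8-bit mantissas + shared exponent

RGBE EncodeSharedExponent(float r, float g, float b)
{
    float maxChan = std::fmax(r, std::fmax(g, b));
    if (maxChan < 1e-32f)
        return RGBE{0, 0, 0, 0};

    int e;
    float scale = std::frexp(maxChan, &e);   // maxChan = scale * 2^e, scale in [0.5, 1)
    scale = scale * 256.0f / maxChan;        // quantisation factor for 8-bit mantissas

    RGBE out;
    out.r = static_cast<uint8_t>(r * scale);
    out.g = static_cast<uint8_t>(g * scale);
    out.b = static_cast<uint8_t>(b * scale);
    out.e = static_cast<uint8_t>(e + 128);   // biased shared exponent
    return out;
}
```

Decoding is just mantissa * 2^(e - 128 - 8), i.e. a shift.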
EDIT: fixed some inaccuracies about eye levels