Edit: My attempts at sarcasm came off as me sounding like a prick. I've tried to adjust the tone of stuff some.
Alright. Sweet. Constant Internet connection once again. Been bouncing around the Eastern US seaboard and not had access. So... let's see what we got.
For starters, read the first half of this. I'll repeat most of it here, but I think it's a good summary all the same. Every reply in this thread seems to bake in at least a few assumptions about what HDR means. Some are fine to have in games. Some aren't. I'll try to dissect both.
Intro
For starters, it's best to clear up some vocab and conceptual problems. High Dynamic Range (HDR) is really a prefix. It doesn't mean anything on its own. Talking about "HDR" is like talking about "anti": it modifies what it precedes. I'm as guilty of this as everyone else, primarily due to laziness. All it means is that "the contrast range of the ______ (format, input device, output device) is capable of representing contrast ratios that significantly exceed what can be represented in 8 bits without quantization loss." 8 bits is a bit of a graphics assumption, but it applies in most areas because the byte is the smallest addressable memory unit and that's just convenient.
When we (games and computer graphics people) talk about HDR, we almost always mean High Dynamic Range Imaging (HDRI), which is to say we are talking about the capture, processing, display, and output of HDR images. While this mostly refers to light, it isn't necessarily so.
(By convention, we don't normally call 16 bit normal maps HDR normal maps, but they technically are. You can get more directions represented with lower quantization, so you don't get that blocky look on your surfaces. But anyway...)
Probably one of the biggest questions is "Why is this useful or desirable?" Many games go for immersion and want as much realism as they can get. Real light doesn't vary between 0 and 255 (the 0-255 thing is a bit of a confusion, but I'll get to that in a bit). The better you can model real light, the easier many effects become. Is it necessary? No. You could also hand-paint every possible frame in the game and just pick the right one based on user input, but would you? No. The best example I can think of prior to this is the halos around lights. A while back, around Half-Life 1, people had to manually add those to each light. Pain. Now we can specify that the object is a light and the engine does it for us. Same basic idea here. Things look more real, and it doesn't take all that much effort.
Overview
High dynamic range imaging isn't a technology.
Now read that again.
Got it? Good.
HDRI (or the even less descriptive term HDR) isn't something that nVidia or ATI can dedicate circuits to directly. At best, it's a nebulous web of a dozen other features which fit together to let us work. This means different things to different people. The most general way to say it is "in order to support HDRI, you need to support floating point everywhere you use pixels." In our case of games, this means textures, shaders, render targets, etc. The last generation of cards (NV30 / R300) had floating point textures and render targets, but you couldn't do blending operations, an important operation for games, which is why you never saw any HDR games at the time.
Now, let's discuss the bare minimums necessary to have a game that supports HDRI. We need some way of getting HDR data into our engine. We can either capture it (like Debevec light probes) or synthesize it (in a raytracer, or by simply setting our light to something like 1200.4 instead of 1). We need to process and store it somehow. This is needed for any input data, as well as for our framebuffer to store the rendered scene as we make our draw calls. For games, this generally means having floating point available in image formats and on the GPU. Then we need to output it. If you are lucky, you have an HDR display and can do it directly. Since I know the price of them, I can safely assume that no one here has one, so we will rely on the other method, tonemapping. That's a fancy name for what's simply "remapping light intensities from the range of 0-blah into the range of our output device, simply denoted as 0-1".
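To make that pipeline concrete, here's a minimal Python sketch of the three stages. The intensity values are made up, and the x / (1 + x) curve is just a deliberately simple stand-in for a real tonemapper, not a claim about what any engine does:

```python
import numpy as np

# Minimal sketch of the three stages: synthesize HDR data, store it in
# floating point, then tonemap it down to the 0-1 range of the display.
# The light value 1200.4 is just the example intensity from the text above.

# 1. Synthesize: a tiny "scene" with one very bright pixel and some dim ones.
scene = np.array([0.02, 0.5, 1.0, 1200.4], dtype=np.float32)

# 2. Store/process: keep it in float; no clamping to 0-1 at this stage.

# 3. Output: a trivial tonemap, x / (1 + x), squashes 0..inf into 0..1.
tonemapped = scene / (1.0 + scene)
display = np.round(tonemapped * 255).astype(np.uint8)  # quantize for an 8-bit display
print(display)  # the bright light lands near 255, dim values stay distinguishable
```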
Misconceptions
Before I cover those 3 areas in more detail, I'd like to address some related issues that seem to be routinely confused with HDRI.
LIGHT PROBES
All those pretty pictures you see on Paul Debevec's site, made from light probes, are done using HDR images, but they aren't directly related. The acquisition of light probes or synthesis of environment maps, and their use in rendering, is known as Image-Based Lighting (IBL). In these environment maps, each pixel is treated as a point light source. We take the reflection vector (for specular) or the surface normal (for diffuse) and look up into these textures to determine at least part of our lighting at that pixel. The results from either acquired or realistic synthetic light probes are very convincing when the environment maps and processing are done in floating point as opposed to 8 bit fixed point. So, take away that image-based lighting through environment maps requires floating point data to produce the compelling images that it does, but HDRI can be done without involving IBL.
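As a rough illustration (not Debevec's code, just a sketch with a made-up latitude-longitude map and my own function names), an IBL lookup boils down to turning a direction into texture coordinates and reading back a radiance value:

```python
import numpy as np

# A sketch of an image-based-lighting lookup: given a direction (surface
# normal for diffuse, reflection vector for specular), fetch the radiance
# stored in a latitude-longitude environment map. The map here is a made-up
# float32 array; a real one would come from a light probe or a renderer.

H, W = 64, 128
env_map = np.ones((H, W, 3), dtype=np.float32) * 0.1   # dim sky everywhere
env_map[5:10, 60:68] = 900.0                            # one very bright "sun" patch

def sample_env(direction):
    """Look up the environment map in the given direction."""
    d = direction / np.linalg.norm(direction)
    # Convert the direction to spherical coordinates, then to pixel coordinates.
    theta = np.arccos(np.clip(d[1], -1.0, 1.0))   # angle from +Y (up)
    phi = np.arctan2(d[2], d[0])                  # angle around the vertical axis
    v = int(theta / np.pi * (H - 1))
    u = int((phi + np.pi) / (2 * np.pi) * (W - 1))
    return env_map[v, u]

print(sample_env(np.array([0.0, 1.0, 0.0])))    # straight up: dim sky, 0.1
print(sample_env(np.array([0.34, 0.94, 0.01]))) # toward the bright patch: 900
```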
BLOOM
If you take one thing away from this whole rant, make it this. Just because the lights have blurry things around them, it does not mean that there is any HDRI involved in their production. My old TV does that a bit. The crappy lens on my cameraphone causes that as well. Neither of them has anything to do with HDRI. You can tack these onto any light source in a conventional engine. The point of these blooms is to simulate the optics in your eye. Compared to even a modest camera, your lenses are rather poor. They blur the light passing through them. When you see something bright, light from that object leaks into the darker areas around it. If the object is bright enough, you see the bloom on the dark side. Because this only happens when we see very bright objects, people have taken to adding blooms to things they want to be perceived as bright. Most methods are incredibly ad hoc, overdone, or most often just plain wrong. There is a class of tonemapping algorithms that employs these to display images with the proper impression on a standard display, and I will cover that in more depth in the Output section. Just take away that anything claiming to have HDR technology just because it has light blooms is (in my opinion) lying, or has too ambitious a marketing department.
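To underline how little HDR is actually required, here's a sketch of the usual fake: a bright-pass threshold, a blur, and an add. The threshold, blur radius, and the 1D "image" are all arbitrary choices of mine:

```python
import numpy as np

# A sketch of how a bloom pass is typically faked: threshold the image to keep
# only the "bright" pixels, blur that, and add it back. Nothing here requires
# HDR data; it works (and is often abused) on plain 0-1 images.

def box_blur_1d(img, radius=2):
    """Crude blur: average each pixel with its neighbors (1D for brevity)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(img, kernel, mode="same")

image = np.zeros(32, dtype=np.float32)
image[16] = 1.0                      # one fully bright pixel, everything else dark

bright_pass = np.where(image > 0.8, image, 0.0)   # keep only the bright bits
bloom = box_blur_1d(bright_pass)                  # smear them outward
result = np.clip(image + bloom, 0.0, 1.0)         # add the glow back in

print(result[13:20])  # the glow now leaks into the neighboring dark pixels
```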
OPENEXR
OpenEXR is many things, causing confusion not only with HDRI, but with itself. To start, OpenEXR is
* Often used to refer to a datatype normally known as half (see the quick check after this list). It's half an IEEE 32 bit float: 1 sign bit, 5 exponent bits, 10 mantissa bits. It was mostly designed as a storage format. If you store light values in photometric units (such as candela / meter^2), it covers the range that you would ever want to store with high accuracy. The Sun is roughly 30K cd/m^2 and OpenEXR has a max value of roughly 65K. Two things to note: this exactly matches the 16 bit floating point format on the GPU, and I said storage format, not processing (the quantization can be too high in some cases).
* An image format that uses the half datatype as its primary pixel format. It's a storage format used by ILM along with other studios and developers, and is suitable for storing images for production. It has a relatively low dynamic range (compared to other HDR image formats) but has lower quantization error across that range than anything short of 32 bit float. It also allows the attachment of arbitrary metadata in the header.
* The full library you get when you download stuff off the website. This includes code to read/write OpenEXR image files and perform a variety of math functions.
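As a quick sanity check on the half layout from the first bullet: this snippet is mine, not part of the OpenEXR library, but numpy's float16 happens to use the same 1/5/10 bit layout, so it shows the same range and the same processing caveat:

```python
import numpy as np

# numpy's float16 is the same IEEE format as half: 1 sign, 5 exponent,
# 10 mantissa bits.
print(np.finfo(np.float16).max)        # 65504.0 -- the ~65K ceiling mentioned
print(np.float16(30000.0))             # the ~30K cd/m^2 Sun figure from the text fits

# The catch for *processing*: the step between adjacent values grows with
# magnitude, so arithmetic near the top of the range quantizes coarsely.
print(np.float16(30000.0) + np.float16(7.0))   # 30000.0 -- the +7 is lost entirely
```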
It's not an HDR technology. You could do everything that it does with other datatypes like 32 bit float or floating point TIFFs. It's just a very convenient set of things, designed to do the kind of work we want to do.
PAUL DEBEVEC INVENTED HDR
I'm sorry to say that he didn't, but his work has done a great deal for its popularity and acceptance. To say his work on the acquisition of HDR images is impressive is an understatement. High dynamic range imaging has been around as long as floating point and pixels have. I don't think you can really attribute it to any one person. The closest I can think of would be Greg Ward. He's responsible for many of the early file formats supporting HDR images. More importantly, he developed Radiance, a photorealistic global illumination renderer that supported HDRI among other things in the early '90s, and it has supported it ever since.
Input
There isn't that much more to cover here that I haven't touched on earlier. Surfaces only reflect as much light as they receive. So, if we want HDR images as output, we'll need HDR lighting as input. For games, this means two major options: floating point values for our point lights, and floating point environment maps (both for backdrops and for image-based lighting).
I won't delve into the details too much here. That would cover far more than I've touched on already. Really, all there is to say is that as long as graphics hardware supports floating point inputs to shaders and floating point textures, games have all the inputs they need covered.
Storage / Processing
I said above that I would get back to the issue of 0-255 really being 0-1. If you load an 8 bit texture set entirely to the value 128 and have a shader read it in and multiply it by itself, you get 64. What's going on here? Shouldn't it be 16384? The answer is no. 8 bit images work as if they are on the range 0-1: 0.5 * 0.5 = 0.25, which is what we see happening with 128. The better way to speak of this is in terms of range and quantization. The range of 8 bit integers is 0-1. The quantization, or difference between values, is 1/256.
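If you want to see that outside of a shader, the same arithmetic in plain Python looks like this (the divide-by-255 normalization is the usual hardware convention for 8 bit textures):

```python
# The 0-255 values in an 8 bit texture behave as if they were on 0-1: the
# hardware normalizes on read and scales back up on write. So "128 * 128"
# comes out as roughly 64, not 16384, just as described above.
a = 128 / 255.0              # what the shader actually sees: ~0.502
product = a * a              # ~0.252
print(round(product * 255))  # 64 -- back in 8-bit terms
```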
Now you are probably thinking that we could just make the range bigger and 8 bit ints could be used for HDRI. That's probably not advisable. The simplest metric of quantization is the range divided by the number of values representable along it (this assumes an even distribution of values, which isn't always the case, but it works well enough for our purposes). As we increase the range, the quantization increases. This is how steps appear in your gradients and other smooth transitions, and it's highly undesirable.
Floating point formats have a larger range and smaller error across that range. They aren't the only form of storage for HDR images. Many formats use a non-linear compression of values that distributes them closer to the sensitivity of our eyes. This is a good approach for final storage, but it can be bad for intermediate stages: something with sufficiently low quantization in one part of the range may be brightened, and that quantization may be too large in another part of the range. The other drawback of non-linear compression is that the values don't behave under arithmetic operations the way linear values do. With some, you encounter even more problems than with gamma rendering, where addition doesn't work properly but multiplication does. Furthermore, these are all based on studies in human perception and beyond the scope here. We'll stick with floating point formats.
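Here's a tiny illustration of that last point about arithmetic, using a plain gamma 2.2 encode of my own choosing rather than any particular image format:

```python
# Adding two gamma-encoded (here gamma 2.2) light values gives a different
# answer than adding the linear values and then encoding the sum, which is
# why non-linearly encoded data is awkward as a working format.
gamma = 2.2
a_linear, b_linear = 0.2, 0.3

encode = lambda x: x ** (1.0 / gamma)

correct = encode(a_linear + b_linear)        # add in linear, then encode
wrong = encode(a_linear) + encode(b_linear)  # add the encoded values directly

print(correct)  # ~0.73
print(wrong)    # ~1.06 -- out of range, and clearly not the same amount of light
```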
For processing, we need three things: floating point in our shaders, which has been around for a while; floating point render targets, which we have also had for a while; and floating point blending, which is currently in the NV40 and will be in the next ATI card. The shaders and render targets should be obvious, as we need to process our HDR input and store the results in the interim. Floating point blending is crucial for games because so many of them rely on multipass techniques and semi-transparent particle systems for their effects.
As far as processing goes, I feel that most work will want to be done in 32 bit floating point for the least quantization. The 16 bit half format has too much quantization to be a working format, but I think it would be acceptable as the backbuffer format to blend into. I haven't done any tests to confirm this absolutely, but in all the work I have done it has seemed sufficient.
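As a rough (and admittedly artificial) demonstration of why I'd be wary of half as a working format, accumulating many small contributions in float16 stalls long before float32 does. The particular values below are arbitrary:

```python
import numpy as np

# Accumulating many small contributions (lots of blended particles, many
# passes) loses precision much faster in 16-bit half than in 32-bit float.
small = 0.001
n = 10000
small16 = np.float16(small)

acc16 = np.float16(0.0)
for _ in range(n):
    acc16 = acc16 + small16          # stays in float16 the whole way

acc32 = np.float32(small) * n

print(acc32)   # ~10.0
print(acc16)   # stalls around 4: once the sum grows, +0.001 falls below half a step
```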
Output
Up to this point, all of our data has been in HDR images. At some point we want to output this. Some of us are lucky enough to have HDR displays that can directly output a decent chunk of the intensity levels we can see. For everyone else's monitors, and whenever we want to print an image, we have to look at reducing the dynamic range to fit within that of the output medium.
Tonemapping is the most common name for reducing the dynamic range of an image for output. Linearly scaling the image (that is to say, dividing every pixel by the maximum value) does very poorly. There are dozens of different published techniques for this, relying on a variety of approaches, from simple to complex, to accomplish the goal. The end result is the same: reduce the global contrast while preserving local detail.
A comprehensive overview of tonemapping algorithms is beyond the scope here, so I will focus on one simple one that can be implemented in real time with reasonable controls on graphics hardware: the global part of the Reinhard photographic tonemapper. It computes the logarithmic average of all the pixels in the image. It then uses that value along with a key value (to adjust the brightness curve) and an optional white point, above which all pixel values map to white. It's been referenced in several places, including the DirectX SDK and the article linked above. I'm not familiar with any others that have been implemented practically for game usage.
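Here is a small sketch of that global operator in Python, with the naive linear scale alongside for comparison. The 0.18 key is the customary "middle grey" default; the function name, the white-point handling as an optional argument, and the sample values are illustrative choices of mine:

```python
import numpy as np

# A sketch of the global Reinhard photographic operator: compute the
# log-average luminance, scale by a "key", then compress, with an optional
# white point above which everything maps to white.

def reinhard_global(luminance, key=0.18, l_white=None):
    delta = 1e-6                                       # avoids log(0)
    log_avg = np.exp(np.mean(np.log(delta + luminance)))
    scaled = (key / log_avg) * luminance               # scene scaled to the key value
    if l_white is None:
        return scaled / (1.0 + scaled)
    return scaled * (1.0 + scaled / (l_white ** 2)) / (1.0 + scaled)

# An HDR "image": mostly dim, one pixel thousands of times brighter.
hdr = np.array([0.01, 0.05, 0.2, 1.0, 500.0], dtype=np.float32)

print(hdr / hdr.max())          # naive linear scaling: everything but the light is ~0
print(reinhard_global(hdr))     # dim values keep detail, the light compresses toward 1
```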
As mentioned above, light blooms can be considered a part of tonemapping. They preserve part of the impression of a bright scene that is lost when the image is viewed at intensities less than the original ones.
Responses
Alright, now that I'm off my pulpit, I'll try to reply to questions posted.
How much storage will it take?
In my opinion, it will take at most twice as much as current games. To be safe, you could store all your images in 16 bit floating point. In practical terms, you probably only need it on your skyboxes, projected textures, and other sources for image-based lighting. There will be quantization when these are modulated against 8 bit surface textures, but I'm of the opinion that it will probably not be noticeable once the game is tonemapped for display. On an HDR display this might not be sufficient, but I don't have a firm answer.
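For what it's worth, the back-of-the-envelope arithmetic behind that "twice as much" ceiling, using a hypothetical 1024x1024 RGBA texture, is just:

```python
# Going from 8-bit to 16-bit-float channels doubles the bytes per pixel, and
# that is the worst case, i.e. if *every* texture were converted.
width, height, channels = 1024, 1024, 4
rgba8 = width * height * channels * 1       # 1 byte per channel
rgba16f = width * height * channels * 2     # 2 bytes per channel (half float)
print(rgba8 // (1024 * 1024), "MB vs", rgba16f // (1024 * 1024), "MB")  # 4 MB vs 8 MB
```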
How common will it be in the future?
I'd say it's important enough that within 2 years every engine that even remotely caters to realistic looking games will have it as a feature, if it even takes that long. Simply put, it gives you a lot more effects for achieving realism without that much extra overhead. It's not an option; it's a necessity, as far as I'm concerned, for anyone who really wants a game to look good.
What about 16 bit int?
The quantization is too high on the low end. It's not really useful.
Conclusion
This is a quick and dirty (I know it may not seem like it) overview of high dynamic range imaging and how it applies to games. It's a huge web of related ideas, concepts, and methods, and this is only a very small slice of it. My opinion on HDRI is that it's much more exciting when viewed in terms of what it allows us to do, instead of merely as a set of file formats and GPU features. Now that we have more bits to represent our image data than we will ever need, what can we do with them? Photometrically calibrated rendering? Color-correct rendering? Perceptually based camera response? The possibilities are endless.
I hope this isn't too frightening. Sorry again that it took me so long. I'll be more than happy to answer any questions. I'll also probably develop this into a full set of articles to expand on the many topics I didn't have room for here.