where to find a comprehensive article on HDR

ultragpu

hi, I've been trying hard to find info on HDR lighting on the net but haven't found much. Does anyone know any good sites that give a thorough description of HDR? Thanks for any help.
 
Give me an idea what you are looking for.

Idiot's Guide To?
Emphasis on real-time?
Current hardware support?
Something else?

I've been meaning to start on a series of articles on this, but haven't kicked myself hard enough to get started. I'd say partly because I've worked with it for so long exclusively, I've lost sight of what the larger mental leaps are.

So... ultragpu, and everyone else, give me all the questions you can think of relating to HDR. I'll fire back quick answers here, and try to weave them into a decent intro article(s) somewhere in the (relatively) near future.

I'm an author on the HDR display paper mentioned, and lead developer for the company that makes them. Hopefully that'll give me some cred.
 
oh hi squarewithin, I'd just like to know what HDR lighting is in games, how it works, and how significant it will become in the future. Thanks for answering.

and thanks Reverend for the links. I seriously looked on Google for a long time, but I guess I must have typed a less matchable phrase. :?
 
What's an example of code making use of HDR? (versus regular LDR code)
Does it take more memory?
HDR alone (no tone-mapping or bloom etc): basically a reduction in banding?
 
Alstrong said:
HDR alone (no tone-mapping or bloom etc): basically a reduction in banding?
Pretty much, plus more contrast because there is less rounding. The major benefit is in reflections and similar things.
 
Sorry, I've been insanely busy. Moved apartments and going on travel this weekend so I have a lot of stuff to get out of the way first. I haven't forgotten. I just need some more time.
 
Well, personally I feel that HDR can be summed up just by thinking of this:

Imagine yourself in a movie theater with an exit directly to the outside, during the day. If you've ever been in a theater like this, you know the shock of brightness that you get when you move from inside to outside.

Now imagine a game that could replicate the sense of difference in brightness levels here, and you'll get a grasp of what game developers are attempting to do with HDR technology. It is now possible to have a reasonable simulation of the above with floating-point framebuffers (though the final output range still remains the same, so games using HDR have to resort to methods to attempt to fool you into thinking things are really bright or really dim).
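To make that concrete, here's a toy C++ sketch (the luminance numbers are entirely made up) of what an 8-bit framebuffer does to that theater-to-daylight contrast versus a floating-point one:

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical luminances, roughly in the spirit of the theater example above.
    float dimWall  = 0.2f;     // something visible inside the theater
    float daylight = 8000.0f;  // the view through the exit door

    // 8-bit LDR framebuffer: values are clamped to [0,1] and quantized into 256 steps,
    // so the daylight collapses onto the same "white" as anything mildly bright.
    auto toLDR = [](float v) { return static_cast<int>(std::min(v, 1.0f) * 255.0f + 0.5f); };
    std::printf("LDR: wall=%d, daylight=%d (ratio 5:1)\n", toLDR(dimWall), toLDR(daylight));

    // Floating-point (HDR) framebuffer: the real 40000:1 contrast survives, and a
    // tonemapper decides later how to present it on the monitor.
    std::printf("HDR: wall=%.1f, daylight=%.1f (ratio %.0f:1)\n",
                dimWall, daylight, daylight / dimWall);
    return 0;
}
```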
 
I always read different things on different types of forums about HDR. Game sites and sites like Beyond3D usually talk about HDR as in more precise colour information, bloom, etc. But when I read 3D modelling forums, they are usually talking about using HDR for lighting purposes rather than for higher precision and bloom.

When you look at HDRShop and how HDR pictures are made, you usually take pictures at different f-stops and combine them into one to store the lighting information. This way, when you use an HDR image to light a model, the lighting matches what was actually in the picture (and thus in the surroundings where it was taken). So in theory you don't need any additional scene lights to light up a model/scene.
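For what it's worth, the idea behind that exposure-combining step can be sketched roughly like this (this assumes a simple linear camera response; real tools like HDRShop also recover the camera's response curve, so treat it as an illustration only):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Rough sketch of merging bracketed exposures into one HDR radiance value per pixel.
// ldrExposures[i] holds an 8-bit pixel value, exposureTimes[i] the shutter time it was taken with.
float mergeExposures(const std::vector<uint8_t>& ldrExposures,
                     const std::vector<float>& exposureTimes)
{
    float weightedSum = 0.0f;
    float weightTotal = 0.0f;
    for (std::size_t i = 0; i < ldrExposures.size(); ++i) {
        float v = ldrExposures[i] / 255.0f;
        // Trust mid-range pixels most; near-black and near-white readings carry little information.
        float w = 1.0f - std::fabs(2.0f * v - 1.0f);
        if (w <= 0.0f) continue;
        // Dividing by exposure time turns each reading back into (relative) scene radiance.
        weightedSum += w * (v / exposureTimes[i]);
        weightTotal += w;
    }
    return weightTotal > 0.0f ? weightedSum / weightTotal : 0.0f;
}
```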

Am I wrong in thinking that people are talking about two different things? Or is the current hardware/game engines not suited for the second approach?
 
It's more two different aspects of the same thing. Here at B3D we are mostly talking about the technology that allows us to render HDR images in realtime. What you're talking about sounds more like a discussion about how those HDR scenes are designed by the artist.
 
What they are talking about there is most likely HDR environment maps that are then used in combination with PRT (or maybe not precomputed) or something similar to light an object.

Say like this http://www.metinseven.com/review_hdribase.htm

Now of course, if you didn't use high-precision, high-range computation and storage in at least some of it, you couldn't do that.

When we talk about HDR here at B3D, we mean using high-precision, high-range computation right through the pipeline. Using HDR we can do effects like that and many more, but without full HDR we might not be able to do them efficiently, or at all.
 
bloodbob said:
What they are talking about there is most likely HDR environment maps that are then used in combination with PRT (or maybe not precomputed) or something similar to light an object.

Say like this http://www.metinseven.com/review_hdribase.htm

Now of course, if you didn't use high-precision, high-range computation and storage in at least some of it, you couldn't do that.

When we talk about HDR here at B3D, we mean using high-precision, high-range computation right through the pipeline. Using HDR we can do effects like that and many more, but without full HDR we might not be able to do them efficiently, or at all.

But then you could just use some 64-bit picture format and not HDR. HDR stores more than just colour information, IIRC. So why use HDR? I mean, you have 64-bit TIFF/TGA/PNG or something like that (I'm not sure which format it is).

Edit: didn't read all the posts so it seems... So you don't actually use HDR pictures as the environment, but render to a HDR like end result?
 
hiostu said:
Edit: didn't read all the posts so it seems... So you don't actually use HDR pictures as the environment, but render to a HDR like end result?
Well, I think that's going to be what games are going to do at first. They'll just use HDR to allow for a larger range of light brightness values, in conjunction with post-processing to attempt to simulate very bright and very dark areas. It'll be a bit before game developers get used to the idea of HDR, and start making the same progress in games that has been made in other areas of 3D graphics.
 
Chalnoth said:
hiostu said:
Edit: didn't read all the posts so it seems... So you don't actually use HDR pictures as the environment, but render to a HDR like end result?
Well, I think that's going to be what games are going to do at first. They'll just use HDR to allow for a larger range of light brightness values, in conjunction with post-processing to attempt to simulate very bright and very dark areas. It'll be a bit before game developers get used to the idea of HDR, and start making the same progress in games that has been made in other areas of 3D graphics.

In physically based lighting there are actually very few uses for HDR textures. The lighting equation is made up of a series of albedo terms (0-1); these are combined to produce an HDR result. Except for light transport simulation textures (env maps etc.), you should be using textures between 0 and 1 and letting the lighting equation do its job.

It comes from the bad old idea of a "colour texture"; what we should actually be using are things like diffuse albedo textures. More precision can be handy, but very rarely more range (what does it actually mean to have an albedo of 1000%?).
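A toy example of that separation (made-up numbers, C++ just for illustration): the texture stays in 0-1 and the HDR range comes entirely from the light, via the lighting equation.

```cpp
#include <cstdio>

int main() {
    float albedo  = 0.35f;    // fraction of light the surface reflects; never needs to exceed 1
    float lightIn = 1200.4f;  // HDR light intensity arriving at the surface
    float nDotL   = 0.7f;     // cosine falloff from the surface orientation

    float radianceOut = albedo * lightIn * nDotL;  // HDR result from an LDR texture
    std::printf("outgoing radiance = %.1f\n", radianceOut);
    return 0;
}
```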

It's a question of who is doing the lighting: the renderer or the artist?

The CG world can often get away with artist-driven lighting (where HDR textures are handy) because the lighting can be adjusted per shot. However, in games, where the lighting and camera rig aren't under art control, taking control away from the renderer is dangerous.

Just thought it's worth noting that CG isn't always a good model to follow...
 
Well, personally, I'm not going to speculate on exactly the direction that PC gaming will take as far as lighting is concerned until we have the first iteration of HDR games, and can see firsthand what the most obvious drawbacks are.

That said, there definitely are going to be uses for HDR textures within games, such as projected lightmaps (i.e. lighting through semi-transparent objects), or you could use one (or more) HDR textures for the sky, instead of modelling the sun differently, for example.
 
hiostu said:
HDR stores more than just colour information IIRC.
If you're talking about HDR textures, they only store colour information; what else did you think they stored? HDR is just more precision over a larger range.

Edit: didn't read all the posts so it seems... So you don't actually use HDR pictures as the environment, but render to a HDR like end result?

Using HDR environment maps doesn't magically light the scene. HDR is just about the degree of accuracy you do the calculations with; it's the radiosity that is doing the lighting. You could go back and do it with LDR images/textures. See the two marbles pictures (though in that case the render is probably still done in HDR).

Now, most of the HDR games that are out are still using LDR images for a few reasons: A) take Far Cry, it didn't get HDR until the patch, and new textures would have been a big download; B) there is no good compression algorithm for them. But all the calculations are done in floating point, and the values are stored as floats right up until the very end.

But then you could just use some 64-bit picture format and not HDR.
16-bit integer channels don't really have a large enough range. Maybe if you had said a 128-bit or 256-bit picture format, then yes, you could probably use them.

I really don't see what problem people have with this. HDR just means doing the calculations, and storing the information, in floating point where traditionally it would have been done with 8-bit integers.
 
Edit: My attempts at sarcasm came off as me sounding like a prick. I've tried to adjust the tone of stuff some.

Alright. Sweet. Constant Internet connection once again. Been bouncing around the Eastern US seaboard and not had access. So... let's see what we got.

For starters, read the first half of this. I'll repeat most of it here, but I think it's a good summary all the same. Every reply in this thread seems to bake in at least a few assumptions about what it means. Some are fine to have in games. Some aren't. I'll try to dissect both.

Intro

For starters, it's best to clear up some vocabulary and conceptual problems. High Dynamic Range (HDR) is really a prefix. It doesn't mean anything on its own. Talking about "HDR" is like talking about "anti": it modifies what it precedes. I'm as guilty of this as everyone else, primarily due to laziness. All it really means is that "the contrast range of the ______ (format, input device, output device) is capable of representing contrast ratios that significantly exceed what can be represented in 8 bits without quantization loss." The 8-bit baseline is a bit of a graphics assumption, but it applies in most areas because a byte is the smallest memory unit and that's just convenient.

When we (games and computer graphics people) talk about HDR, we almost always mean High Dynamic Range Imaging (HDRI), which is to say we are talking about the capture, processing, display, and output of HDR images. While this mostly refers to light, it isn't necessarily so.

(By convention, we don't normally call 16-bit normal maps HDR normal maps, but technically they are. You can represent more directions with lower quantization, so you don't get that blocky look on your surfaces. But anyway...)

Probably one of the biggest questions is "Why is this useful or desirable?" Many games go for immersion and want as much realism as they can get. Real light doesn't vary between 0 and 255 (the 0-255 is a bit of a confusion, but I'll get to that in a bit). The better you can model real light, the easier many effects become. Is it necessary? No. You could also hand-paint every possible frame in the game and just pick the right one based on user input, but would you? No. The best example I can think of prior to this is the halos around lights. A while back, around Half-Life 1, people had to manually add those to each light. Pain. Now we can specify that the object is a light and the engine can do it for us. Same basic idea: things look more real, and you don't have to put in all that much effort.

Overview

High dynamic range imaging isn't a technology.

Now read that again.

Got it? Good.

HDRI (or the even less descriptive term HDR) isn't something that NVIDIA or ATI can dedicate circuits to directly. At best, it's a nebulous web of a dozen other features which fit together to let us work, and it means different things to different people. The most general way to say it is "in order to support HDRI, you need to support floating point everywhere you use pixels." In our case of games, this means textures, shaders, render targets, etc. The last generation of cards (NV30 / R300) had floating-point textures and render targets, but you couldn't do blending operations, an important operation for games, which is why you never saw any HDR games at the time.

Now, let's discuss the bare minimums necessary to have a game that supports HDRI. We need some way of getting HDR data into our engine: we can either capture it (like Debevec's light probes) or synthesize it (in a raytracer, or by simply setting our light to something like 1200.4 instead of 1). We need to process and store it somehow. This is needed for any input data, as well as for the framebuffer that holds the rendered scene as we make our draw calls. For games, this generally means having floating point available in image formats and on the GPU. Then we need to output it. If you are lucky, you have an HDR display and can do it directly. Since I know the price of them, I can safely assume that no one here has one, so we will rely on the other method: tonemapping. It's a fancy name for what is simply "remapping light intensities from the range 0-to-whatever into the range of our output device, denoted as 0-1."

Misconceptions

Before I cover those 3 areas in more detail, I'd like to address some related issues that seem to be routinely confused with HDRI.

LIGHT PROBES

All those pretty pictures you see on Paul Debevec's site made from light probes are done using HDR images, but they aren't directly related. The acquisition of light probes, or the synthesis of environment maps, and their use in rendering is known as Image-Based Lighting (IBL). In these environment maps, each pixel is treated as a point light source. We take the reflection vector (for specular) or the surface normal (for diffuse) and look up into these textures to determine at least part of our lighting at that pixel. The results from either acquired or realistic synthetic light probes are very convincing when the environment maps and processing are done in floating point, as opposed to 8-bit fixed point. So, take away that image-based lighting through environment maps requires floating-point data to produce the compelling images that it does, but HDRI can be done without involving IBL.
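As a rough sketch of that lookup (hypothetical values; the environment-map function below is just a stand-in for a floating-point cube-map fetch):

```cpp
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Stand-in environment map: a bright patch straight up, dim sky elsewhere. In a real
// engine this would be a floating-point cube-map fetch in a shader.
Vec3 sampleEnvironment(const Vec3& dir) {
    float up = std::max(dir.y, 0.0f);
    float radiance = 0.05f + 5000.0f * up * up * up * up;  // HDR values, far above 1.0
    return { radiance, radiance, radiance * 0.9f };
}

Vec3 reflect(const Vec3& v, const Vec3& n) {
    float d = 2.0f * (v.x * n.x + v.y * n.y + v.z * n.z);
    return { v.x - d * n.x, v.y - d * n.y, v.z - d * n.z };
}

// The specular half of the IBL idea above: follow the reflection vector into the
// environment map and treat that pixel as the light source for this surface point.
Vec3 specularIBL(const Vec3& viewDir, const Vec3& normal) {
    return sampleEnvironment(reflect(viewDir, normal));
}

int main() {
    Vec3 view   = { 0.0f, -0.7071f, 0.7071f };  // looking down at a 45 degree angle
    Vec3 normal = { 0.0f, 1.0f, 0.0f };         // flat, upward-facing surface
    Vec3 c = specularIBL(view, normal);
    std::printf("reflected radiance = %.2f %.2f %.2f\n", c.x, c.y, c.z);
    return 0;
}
```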

BLOOM

If you take one thing away from this whole rant, make it this: just because the lights have blurry things around them, it does not mean that any HDRI was involved in their production. My old TV does that a bit. The crappy lens on my camera phone causes it as well. Neither of them has anything to do with HDRI. You can tack these blooms onto any light source in a conventional engine. The point of them is to simulate the optics in your eye. Compared to even a modest camera, your lenses are rather poor: they blur the light passing through them. When you see something bright, light from that object leaks into the darker areas around it. If the object is bright enough, you see the bloom on the dark side. Because this only happens when we see very bright objects, people have taken to adding blooms to things they want to be perceived as bright. Most methods are incredibly ad hoc, overdone, or most often just plain wrong. There is a class of tonemapping algorithms that employs them to display images with the proper impression on a standard display, and I will cover that in more depth in the Output section. Just take away that something claiming to have HDR technology merely because it has light blooms is (in my opinion) lying, or has too ambitious a marketing department.
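Since bloom keeps getting conflated with HDR, here's roughly what a typical bright-pass bloom looks like (a simplified, single-axis blur sketch; real implementations blur in 2D, usually at reduced resolution). Note that nothing here requires an HDR framebuffer, which is the point:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of the usual bloom recipe: extract pixels above a threshold, blur them,
// add the blur back on top of the scene.
void addBloom(std::vector<float>& image, int width, int height,
              float threshold = 0.8f, int radius = 4, float strength = 0.5f)
{
    std::vector<float> bright(image.size());
    for (std::size_t i = 0; i < image.size(); ++i)
        bright[i] = std::max(image[i] - threshold, 0.0f);    // bright-pass

    std::vector<float> blurred(image.size(), 0.0f);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int dx = -radius; dx <= radius; ++dx) {      // crude horizontal box blur
                int sx = std::clamp(x + dx, 0, width - 1);
                sum += bright[y * width + sx];
                ++count;
            }
            blurred[y * width + x] = sum / count;
        }

    for (std::size_t i = 0; i < image.size(); ++i)
        image[i] += strength * blurred[i];                    // composite the glow over the scene
}
```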

OPENEXR

OpenEXR is many things, causing confusion not only with HDRI, but with itself. To start, OpenEXR is

* Often referred to as a datatype that is normally known as half. It's half of an IEEE 32-bit float: 1 sign bit, 5 exponent bits, 10 mantissa bits. It was mostly designed as a storage format. If you store light values in photometric units (such as candela/m^2), it covers the range you would ever want to store with high accuracy. The Sun is roughly 30K cd/m^2 and OpenEXR has a max value of roughly 65K. Two things to note: this exactly matches the 16-bit floating-point format on the GPU, and I said storage format, not processing (the quantization can be too high in some cases).
* An image format that uses the half datatype as its primary pixel format. It's a storage format used by ILM along with other studios and developers, and is suitable for storing images for production. It has a relatively low dynamic range (compared to other HDR image formats) but lower quantization error across that range than anything short of 32-bit float. It also allows the attachment of arbitrary metadata in the header.
* The full library you get when you download stuff off the website. This includes code to read/write OpenEXR image files and perform a variety of math functions.

It's not an HDR technology. You could do everything it does with other datatypes, like 32-bit float or floating-point TIFFs. It's just a very convenient set of things, designed for the kind of work we want to do.
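Some quick arithmetic on the half (1 sign, 5 exponent, 10 mantissa) layout described above, just to put numbers on it. These follow from the bit layout itself, not from the OpenEXR library:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Largest finite half: mantissa all ones (2 - 2^-10) times the biggest exponent (2^15).
    double maxHalf = (2.0 - std::pow(2.0, -10)) * std::pow(2.0, 15);
    std::printf("max half value      : %.0f\n", maxHalf);            // 65504

    // Relative step size: 10 mantissa bits give ~2^-10 (about 0.1%) between neighbours,
    // whether the value is 0.01 or 10000 -- that is the "floating" part.
    std::printf("relative step       : %.4f%%\n", 100.0 * std::pow(2.0, -10));

    // Contrast with 8-bit: a fixed step of 1/255 is a tiny slice of full range, but a
    // huge fraction of a dark value like 0.01.
    std::printf("8-bit step vs. 0.01 : %.0f%%\n", 100.0 * (1.0 / 255.0) / 0.01);
    return 0;
}
```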

PAUL DEBEVEC INVENTED HDR

I'm sorry to say that he didn't, but his work has done a huge amount for its popularity and acceptance. To say his work on the acquisition of HDR images is impressive is an understatement. High dynamic range imaging has been around as long as floating point and pixels have. I don't think you can really attribute it to any one person; the closest I can think of would be Greg Ward. He's responsible for many of the early file formats supporting HDR images. More importantly, he developed Radiance, a photorealistic global illumination renderer that has supported HDRI, among other things, since the early 90's.

Input

There isn't much more to cover here that I haven't touched on earlier. Surfaces only reflect as much light as they receive, so if we want HDR images as output, we'll need HDR lighting as input. For games, this means two major options: floating-point values for our point lights, and floating-point environment maps (both for backdrops and for image-based lighting).

I won't delve into the details too much here; that would cover much more than I have touched on already. Really, all there is to say is that as long as graphics hardware supports floating-point inputs to shaders and floating-point textures, games have all the inputs they need covered.

Storage / Processing

I said above that I would get back to the issue of 0-255 really being 0-1. If you load an 8-bit texture set to the value 128 and have a shader read it in and multiply it by itself, you get 64. What's going on here, shouldn't it be 16384? The answer is no. 8-bit images work as if they are on the range 0-1: 0.5 * 0.5 = 0.25, which is what happened with 128. The better way to speak of this is in terms of range and quantization. The range of 8-bit integers is 0-1. The quantization, or difference between adjacent values, is 1/256.
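A tiny sketch of that 128 * 128 = 64 behaviour, spelled out:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    uint8_t a = 128;                          // the stored byte
    float normalized = a / 255.0f;            // ~0.5 -- how the hardware actually treats it
    float product = normalized * normalized;  // ~0.25
    int resultByte = static_cast<int>(product * 255.0f + 0.5f);
    std::printf("result byte = %d\n", resultByte);  // ~64, not 16384
    return 0;
}
```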

Now you are probably thinking that we could just make the range bigger and 8-bit ints could be used for HDRI. That's not advisable. The simplest metric of quantization is the range divided by the number of values representable along it (this assumes an even distribution of values, which isn't always the case, but it works well enough for our purposes). As we increase the range, the quantization step increases. This is how steps appear in your gradients and other smooth transitions, and it is highly undesirable.
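Putting numbers on that metric (same range-divided-by-steps rule as above):

```cpp
#include <cstdio>

int main() {
    const int values = 256;  // 8-bit

    // Over the usual 0-1 range the step is small; stretch the same 256 steps over a
    // bigger range and the step grows with it, which is where banding comes from.
    std::printf("8-bit over range 0-1  : step = %f\n", 1.0 / values);
    std::printf("8-bit over range 0-100: step = %f\n", 100.0 / values);
    return 0;
}
```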

Floating-point formats have a larger range and smaller error across that range. They aren't the only form of storage for HDR images: many formats use a non-linear encoding of values that distributes them closer to the sensitivity of our eyes. This is a good measure for final storage, but it can be bad for intermediate stages, as something with sufficiently low quantization in one part of the range may be brightened so that the quantization becomes too large in another part of the range. The other drawback of non-linear encodings is that they don't behave under arithmetic operations like linear values do. With some, you encounter even more problems than with gamma rendering, where addition doesn't work properly but multiplication does. Furthermore, these are all based on studies in human perception and beyond the scope here. We'll stick with floating-point formats.
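A small illustration of why addition misbehaves under a non-linear (gamma-style) encoding, assuming a plain power-law curve with exponent 2.2:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double gamma = 2.2;
    auto encode = [&](double linear) { return std::pow(linear, 1.0 / gamma); };

    double a = 0.2, b = 0.3;                 // two linear light values

    double correct = encode(a + b);          // add in linear space, then encode
    double naive   = encode(a) + encode(b);  // add the encoded values directly

    // The two differ noticeably (the naive sum even exceeds 1.0); multiplication,
    // by contrast, distributes cleanly through a pure power function.
    std::printf("correct: %.3f   naive: %.3f\n", correct, naive);
    return 0;
}
```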

For processing, we need three things: floating point in our shaders, which has been around for a while; floating-point render targets, which we have also had for a while; and floating-point blending, which is currently in the NV40 and will be in the next ATI card. The shaders and render targets should be obvious, as we need to process our HDR input and store it in the interim. Floating-point blending is crucial for games because so many of them rely on multipass techniques and semi-transparent particle systems for their effects.

As far as processing goes, I feel that most things will want to be done in 32-bit floating point for the least quantization. The 16-bit half format has too much quantization to be a working format, but I think it is acceptable as the backbuffer format to blend into. I haven't done any tests to confirm this absolutely, but in all the work I have done it has seemed sufficient.

Output

Up to this point, all of our data has been in HDR images. At some point we want to output it. Some of us are lucky enough to have HDR displays that can directly output a decent portion of the intensity levels we can see. For everyone else's monitors, and whenever we want to print an image, we have to look at reducing the dynamic range to fit within that of the output medium.

Tonemapping is the most common name for reducing the dynamic range of an image for output. Linearly scaling the image (that is to say, dividing every pixel by the maximum value) does very poorly. There are dozens of published techniques for this, relying on a variety of approaches from the simple to the elaborate, but the end goal is the same: reduce the global contrast while preserving local detail.

A comprehensive overview of tonemapping algorithms is beyond the scope here, so I will focus on one simple operator that can be implemented in real time, with reasonable controls, on graphics hardware: the global part of the Reinhard photographic tonemapper. It computes the logarithmic average of all the pixels in the image, then uses that value along with a key value (to adjust the brightness curve) and an optional white-out value, above which all pixel values map to white. It's been covered in several places, including the DirectX SDK and the article linked above. I'm not familiar with any others that have been implemented practically for game usage.
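A minimal CPU sketch of that global Reinhard operator (in a game this would be a shader pass over the framebuffer; the key and white-point values below are just example settings, not recommendations):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

void reinhardTonemap(std::vector<float>& luminance, float key = 0.18f, float whitePoint = 4.0f)
{
    // 1. Logarithmic average of the scene luminance (a small epsilon avoids log(0)).
    double logSum = 0.0;
    for (float L : luminance) logSum += std::log(1e-4 + L);
    float logAvg = static_cast<float>(std::exp(logSum / luminance.size()));

    // 2. Scale the scene so its log-average maps to the chosen key ("exposure").
    // 3. Compress: L/(1+L) squeezes 0..infinity into 0..1, with values at or above
    //    the white point pushed all the way to white.
    float wp2 = whitePoint * whitePoint;
    for (float& L : luminance) {
        float Ls = key / logAvg * L;
        float Ld = Ls * (1.0f + Ls / wp2) / (1.0f + Ls);
        L = std::clamp(Ld, 0.0f, 1.0f);
    }
}
```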

As mentioned above, light blooms can be considered a part of tonemapping. They preserve part of the impression of a bright scene that is lost when the image is viewed at intensities less than the original ones.

Responses

Alright, now that I'm off my pulpit, I'll try to reply to questions posted.

How much storage will it take?

In my opinion, it will take at most twice as much as current games. To be safe, you could store all your images in 16-bit floating point. In practical terms, you probably only need it for your skyboxes, projected textures, and other sources of image-based lighting. There will be quantization when these are modulated against 8-bit surface textures, but I'm of the opinion that it will probably not be noticeable once the game is tonemapped for display. On an HDR display this might not be sufficient, but I don't have a firm answer.

How common will it be in the future?

I'd say it's important enough that within two years every engine that even remotely caters to realistic-looking games will have it as a feature, if it even takes that long. Simply put, it enables a lot more effects for achieving realism without much extra overhead. It's not an option; it's a necessity as far as I'm concerned for anyone who really wants a game to look good.

What about 16 bit int?

The quantization is too high on the low end. It's not really useful.
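Rough numbers behind that, assuming a plain linear 16-bit mapping stretched over a half-float-sized range:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    double range = 65504.0;            // match the top of the half-float range
    double intStep = range / 65536.0;  // ~1.0 everywhere, bright or dark

    double darkValue = 0.01;           // a plausibly dark scene luminance
    std::printf("16-bit int step near %.2f : %.2f (%.0fx the value itself)\n",
                darkValue, intStep, intStep / darkValue);

    // Half float scales its step with the value: roughly 0.1% of it, wherever you are.
    double halfStep = darkValue * std::pow(2.0, -10);
    std::printf("half float step near %.2f: %.6f\n", darkValue, halfStep);
    return 0;
}
```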

Conclusion

This is a quick and dirty (I know it may not seem like it) overview of high dynamic range imaging and how it applies to games. It's a huge web of related ideas, concepts, and methods, and this is only a very small slice of it. My opinion on HDRI is that it's much more exciting when viewed in terms of what it allows us to do, rather than merely as a set of file formats and GPU features. Now that we have more bits to represent our image data than we will ever need, what can we do with it? Photometrically calibrated rendering? Colour-correct rendering? Perceptually based camera response? The possibilities are endless.

I hope this isn't too frightening. Sorry again it took me so long. I'll be more than happy to answer any questions. I'll also probably develop this into a full set of articles to expand on many of the topics I didn't have room to here.
 
I don't know that you could call HDRI itself an enabling technology. It's the FP storage formats, blending, and texture filtering that do that.
 