128-bit HDR vs 64-bit HDR

Jedi2016

Veteran
To start off with, I'll point out that I know practically nothing at all about game development or programming or any of that. This is more information-hunting than anything else. Now, I do have a background in CGI, which, in terms of graphics and rendering, follows many of the same principles, just with totally different execution.

Basically, I'm trying to figure out the difference between a 16bpc graphics environment and a 32bpc graphics environment. Aside from the obvious stuff like "32bpc has more information".. this I know already. :)

Now.. many current PC games/engines, and all console games, support only an 8bpc environment. Now, what I'm about to say may be totally wrong, but it's what I've come up with on my own in terms of how it works.

The way I understand it, in a 32-bit (8bpc) environment, no value anywhere in the scene can go higher than 1 or less than 0 (or 100%, 255, however you want to look at it). This is why a lot of games seem to have very "flat" lighting and shading.. because if you illuminate the entire scene up to 50% intensity, then your key light can be no greater than 50%. Which doesn't yield the most dynamic lighting.
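(For what it's worth, here's a tiny Python sketch of that idea. It's purely illustrative, not how any particular engine works, and the numbers are made up.)

albedo = 0.5
light_intensity = 5.0               # a "500%" light
lit = albedo * light_intensity      # 2.5 in floating point

clamped = min(lit, 1.0)             # a 0-1 pipeline throws the extra energy away
floating = lit                      # a floating-point pipeline keeps it

# Halve both, as if seen through 50% smoked glass:
print(min(clamped * 0.5, 1.0))      # 0.5 -- reads as plain gray
print(min(floating * 0.5, 1.0))     # 1.0 -- still reads as "bright"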

Now, in CG-land, there are no such restrictions. I can take the illumination levels as high or as low as I want, and everything is still there. LightWave, like all other high-end CG apps, uses a 128-bit floating point renderer. Because of this, I have no concept of what sort of limitations exist in a 16bpc environment.. I have no experience with it, because nothing I have uses it. It's all or nothing. CG artists and production houses simply don't waste their time with 16bpc because it offers no advantage whatsoever over 32bpc, which has far more information.

Obviously, that isn't the case in a game environment, where memory is often a big factor in what you can add.

Now, I know the difference in image formats.. HDR and all that, since I use them regularly in my CG work. But, as before, I have practically no experience in FP16 images. In fact, I'm not even sure if I've ever even seen one.. I have no way of creating them except from scratch in LW (which kind of defeats the purpose, since I'm setting up all the lighting anyway).

Do FP16 images actually contain the full dynamic range of lighting that a 32bpc image contains? Or is it choked off at some point? Does anyone have an example of a 16bpc HDR photograph that I could fiddle with to see the exposure ranges?

My other question is in regards to scene lighting. I have my theory on how 8bpc engines are lit (above), and how full FP environments have absolutely no lighting restrictions at all. But what about 16bpc engines? Are they limited in the dynamic range that you can represent through lighting, shading, reflections, specular highlights, etc?

Feel free to get technical in terms of illumination, bit-depth, etc (All the stuff I'd know from a CGI perspective). But I don't have any programming knowledge, so if you go there, you'll lose me. :)

And yes, this is actually in reference to games rather than CGI.. Consoles, specifically. As most of you are probably aware, the Xbox360 supports a 16bpc (64-bit) graphics environment, while the PS3 supports a 32bpc (128-bit) environment. I'd like to know what the difference is, and if these engines are (hypothetically) used to their fullest extent, what sort of advantage might one have over the other.
 
Jedi2016 said:
Do FP16 images actually contain the full dynamic range of lighting that a 32bpc image contains? Or is it choked off at some point? Does anyone have an example of a 16bpc HDR photograph that I could fiddle with to see the exposure ranges?
Well, FP16 is an s10e5 format. The precision is plenty for any system that uses a relatively small number of framebuffer blends and outputs to an 8-bit display (until we get video cards with a higher-precision framebuffer format than 8-bit, such as 10-10-10-2, we won't get better actual output). So for color data, unless you're doing large numbers of blends, there would be no loss of data with FP16 on the precision front.

FP16 also has a lower range than FP32, but its range should be plenty for any situation that would arise in a graphics environment. The range is set by the 5-bit exponent: the largest representable value is roughly 65,504, and the smallest normal value is about 0.00006. As long as your color values are no more extreme than that, you can be sure of full precision when using FP16 as an intermediate format for 8-bit output. (Quick note: values smaller than that are representable through denorms, but you lose a bit of precision for each division by two, so the important thing isn't showing detail in the color range below this, but rather that the scene is set up so that any colors that small would read as nearly black anyway.)

I think that FP16's primary shortcomings would be in the areas of non-color data, which can easily have higher dynamic range or need higher precision (such as texture coordinates).
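To put rough numbers on that, here's a small Python/numpy sketch (illustrative only; numpy's float16 uses the same s10e5 layout described above):

import numpy as np

# numpy's float16 is the s10e5 "half" format discussed above.
print(np.finfo(np.float16).max)    # 65504.0  -- largest representable value
print(np.finfo(np.float16).tiny)   # ~6.1e-05 -- smallest normal value
print(np.finfo(np.float16).eps)    # ~0.00098 -- relative precision (1/2^10)

# Denormals reach smaller still, trading away a bit of precision per halving:
print(np.float16(6e-8))            # nonzero, but coarsely quantized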
 
*16-bit vs. 32-bit?
16 bit is faster because it is half as much data to shovel around.
32-bit has higher quality and flexibility because it has exponentially more range and precision.
They both kick ass over 8-bit. Neither one is really necessary to make a good looking game. We have been making good looking games with 8-bit for so long that most developers don't consciously realize how many hacks they use to get around its limitations. Going HDR lets you get good results without all the hacks. Even 32-bit will still require hacks sometimes, but not as often as 16-bit and certainly not constantly like 8-bit.

*The formats?
8-bit fixed point
scale: fixed at 1.0, can't represent negative numbers
precision: 1/(2^8) 2 decimal places

16-bit floating point
scale: roughly 65,504 down to about 0.00006 (smaller still with denorms)
precision: 1/(2^10) 3 decimal places

32-bit floating point
scale: roughly 10^38 down to 10^-38
precision: 1/(2^23) 6 decimal places

I noted the "decimal places" for an important reason: Although the floating point formats pack huge ranges into a small number of bits, they do so by only storing the most significant digits of the number. With a 32-bit float you can represent 1000000000 or 0.0000000001 but you can't represent 1000000000.0000000001 because it would require too many digits to store the whole number.

*Details of the difference?
16 bit should be about twice as fast/cheap as 32 bit because it is half as many bits to move and half as many to compute.

We don't need the full 10^(+/-38) range of 32 bit numbers just to represent colors. The difference between the darkest thing you can still see in a pitch black room and the brightest thing you can see without going blind is something like 10 or 100 million to 1. 16-bit is fine for that. Where we need the range is for representing other kinds of data, such as the location of a point in the world for deferred rendering and other crazy techniques that we can trick the GPUs into doing. 16-bit is limited there, but still quite capable with some extra effort. Hell... The original Shrek game on the Xbox was entirely deferred rendered using only 8-bit. I saw a presentation on it. It took a long time to explain all the hacks required to make it work.

The precision of the format determines how clever/restricted you have to be to avoid banding artifacts. Although I don't have the patience to explain it fully, the numbers I gave above are a little misleading: 16-bit floating point is actually a lot more than 4X as precise as 8-bit fixed point (hint: it's because the exponent rescales the precision to the value). With 32 bits you pretty much don't have to worry about banding at all; precision is only a concern for the funky, non-color calculations. With 16 bits it is pretty easy to get banding, but it is a hell of a lot easier to avoid than with 8 bits. For the funky, non-color calculations, having just 10 bits of precision is a significant concern, but it is manageable.
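One way to see why FP16 is much better than "4X" in practice (Python/numpy, illustrative): what matters for banding is the step size near the values you actually display, and a float's step size shrinks along with the value.

import numpy as np

step_8bit = 1.0 / 255.0                    # ~0.0039, the same everywhere
step_fp16 = np.spacing(np.float16(0.5))    # ~0.00024 near mid-gray
step_fp32 = np.spacing(np.float32(0.5))    # ~0.00000006 near mid-gray

print(step_8bit, float(step_fp16), float(step_fp32))
# Near mid-gray, FP16's step is roughly 16x finer than 8-bit fixed point,
# because the exponent rescales the 10-bit mantissa to the value's magnitude.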

*16-bits for offline rendering?
Although offline renderers use 32 bits for internal calculations, they traditionally have rendered 8-bit images. Only recently have they started outputting high dynamic range results. The flashy scenes in the Time Machine remake movie were a big deal at the time because they rendered to HDR images so that they could have the extra range available during post processing. If you have used HDR image formats then there is a good chance that they were actually 16 bits per channel. 16-bit TIFF and some of the OpenEXR formats are common examples. 8-bit uncompressed images can be kinda large. 16 and 32 bits are obviously twice and four times as large respectively. As I mentioned above, 16 bits is perfectly adequate to represent color. Most HDR systems deal only with color issues and choose not to pay the 2X storage hit of 32 bit because there is really only a marginal difference if all you need to represent is color.
 
corysama said:
They both kick ass over 8-bit. Neither one is really necessary to make a good looking game. We have been making good looking games with 8-bit for so long that most developers don't consciously realize how many hacks they use to get around its limitations. Going HDR lets you get good results without all the hacks. Even 32-bit will still require hacks sometimes, but not as often as 16-bit and certainly not constantly like 8-bit.
Well, I don't quite buy that first part. That is to say, there are just things you can't realistically do in a game with only 8-bit. And I don't think that once FP16 rendering becomes commonplace that we'll ever think that 8-bit looks good again.
 
Hopefully this will also lead to a push to increase the colour depth on actual displays so we can try to get rid of banding in all situations.
 
Thanks, corysama, big post. :)

As for my own rendering.. usually depends on what I'm doing with it. Most of the time, I end up just saving out as 8-bit images (usually a 32-bit TIFF file with alpha channel for compositing), but occasionally I save out 16bpc RLA files, if I need to do certain kinds of adjustments in post. I've rendered to full HDR before, but rarely.. those are either .HDR or .CIN files, full 32bpc. Since most large adjustments are done right in LW (in terms of lighting, surfacing, etc), I rarely need that degree of control in post.

How would I tell the difference between 16bpc and 32bpc EXR files? Both versions load up as "Trillions of colors" into After Effects. I'm pretty sure the handful of EXR images I have are 32bpc (they're ILM's "demo" images, the ones you sometimes see when people talk about HDR and EXR).

Anyway...

What about lighting? It seems that the days of limited light intensity are over.. can all game engines/consoles/etc support infinite light intensity? For example, if I'm lighting an interior scene, I'll sometimes have the sun shining into the window.. that light will be at least 250-400% intensity, in order to be bright enough to look like sunlight (which obviously takes it into the domain of FP). How often is this sort of thing used in games? Or is it used at all? Can we expect to see more of it in the future?

Another one is reflections. I often use .HDR backdrops in LightWave for reflective objects, because they look simply amazing. Like on this page: http://www.deathfall.com/article.php?sid=2836. Are 32bpc images required for "real" reflections like that, or can it be done with 16bpc HDR images? I know it can't be done with 8bpc images.. I was ecstatic when LW finally supported HDR backdrops for illumination and such.
 
Jedi2016 said:
Another one is reflections. I often use .HDR backdrops in LightWave for reflective objects, because they look simply amazing. Like on this page: http://www.deathfall.com/article.php?sid=2836. Are 32bpc images required for "real" reflections like that, or can it be done with 16bpc HDR images? I know it can't be done with 8bpc images.. I was ecstatic when LW finally supported HDR backdrops for illumination and such.
If we are using static images, we can do this without the full HDR render at the moment. Obviously, if we are doing it dynamically, we need HDR to create the environment maps in real time.

But really, one of the main benefits will be bloom: actually good-looking bloom, where the highlights only show up where they are supposed to (as opposed to, say, plain white text, which should have zero bloom).

Also, are you sure that when you do those renderings there isn't an HDR frame buffer? Maybe it's an HDR frame buffer that's just writing out to an LDR file format. I.e., if you illuminate an object with an infinitely bright light, put a pure white object lit only by ambient light next to it, and then place both behind a 50% transparent object, are they the same brightness? In a clamped rendering, both should come out with pixel values around 127 (or 0.5).
 
bloodbob said:
Also, are you sure that when you do those renderings there isn't an HDR frame buffer? Maybe it's an HDR frame buffer that's just writing out to an LDR file format. I.e., if you illuminate an object with an infinitely bright light, put a pure white object lit only by ambient light next to it, and then place both behind a 50% transparent object, are they the same brightness? In a clamped rendering, both should come out with pixel values around 127 (or 0.5).

Nope. I just tried it, in fact. A sphere set to 0% diffuse, 100% luminosity, pure white. And a box next to it, set to 100% diffuse, middle gray. The scene is lit with a white light at 500% intensity. When viewed with nothing in the way, both of them appear to be pure white. But when you examine it, you find that while the white luminous ball is at 100% (i.e. white), the box is actually between 130-150% (depending on which side of the box you're looking at.. the top is lit brighter than the side because of the diffuse angle).

Likewise, when you place the 50% transparent (black) object in front of them, the white sphere does indeed go down to 50%, but the box is now noticeably lighter, with values between 65-75% (which is right where it should be, since it's half of 130-150). That's using the floating-point render display. Using the "standard" render display, they both show as 255,255,255 pure white (when not blocked by the object), but they still show as different colors when viewed behind the transparent plane, even in the standard viewer. The rendering itself is happening in floating point.. how you look at it after the fact depends on which render viewer you use. I typically use the FP viewer, although the difference is entirely behind the scenes, since obviously my monitor is only 8bpc.

Oddly, I just discovered that the only way to actually export HDR images is to use the FP viewer (otherwise, while the image is still technically 16/32bpc, it doesn't actually contain any information you can't already see). Which I suppose makes sense, I've just never tested it before now.
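(The arithmetic of that test as a throwaway Python sketch. Illustrative only; this is obviously not LightWave's code, just the same numbers.)

luminous_sphere = 1.0    # 100% luminosity, 0% diffuse
lit_box         = 1.4    # mid-gray diffuse under a 500% light -> roughly 130-150%

def standard_viewer(v):  # the clamped "standard" display
    return min(v, 1.0)

# Unobstructed, both clamp to white:
print(standard_viewer(luminous_sphere), standard_viewer(lit_box))              # 1.0 1.0

# Behind the 50% transparent black plane, the stored FP values halve first:
print(standard_viewer(0.5 * luminous_sphere), standard_viewer(0.5 * lit_box))  # 0.5 0.7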
 
Yeah, as I thought: the render itself at least has overbrightness (something better than 0.0-1.0, even if not huge). It may still clamp the rendering at some low value, but it certainly isn't limited to 0.0-1.0.
 
Well, yeah.. that's in CGI, though, not a game environment. I'm just curious how much of this sort of lighting we can expect from upcoming games that support HDR, and whether it will make any difference if it's full 128-bit or the usual 64-bit.
 
A few minor corrections...

8bpc (integer) doesn't necessarily go from 0-1. It's a fixed-point format where you can choose the split between integer and fractional bits. Now, it's true that on PC it's traditionally 0.8 (0 bits for integer, 8 bits for fraction), but that's NOT true on consoles. PS1 and PS2 have always been 1.7, giving a range of 0-2 in 1/128 steps. Which is why on PS2 the traditional Gouraud multiplication could brighten, whereas on PC it can only make things dimmer.
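A quick sketch of the two conventions (Python, illustrative; exact scaling conventions vary by hardware):

def decode_0_8(byte):    # PC-style 0.8 fixed point: 0..255 -> 0.0..1.0
    return byte / 255.0

def decode_1_7(byte):    # PS2-style 1.7 fixed point: 0..255 -> 0.0..~2.0
    return byte / 128.0

print(decode_0_8(255))   # 1.0   -- a multiply can only darken
print(decode_1_7(128))   # 1.0   -- "no change" sits in the middle of the range
print(decode_1_7(255))   # ~1.99 -- values above 1.0 let a multiply brighten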

Doesn't make the assessment of LDR vs HDR wrong but just worth noting.

HDR is hard for lots of reasons (artist control in an interactive environment being the main one), and 128-bit HDR is just not necessary and would be slower (it's twice as big, if nothing else). So I expect every game that uses HDR (and lots won't; in many cases it's not necessary) will use either FP16 or FP10 (on X360).
 
Finally, another decent HDR thread. Yummy.

Let me correct some people and say that under adaptation, the human visual system (HVS) is capable of perceiving 14 log units of intensity. The underside of a rock on a moonless night away from city glow is about 1e-6 cd/m^2 (candelas per square meter, intensity per area), and given a long enough time in the dark you can differentiate it from its surroundings. The cones saturate at roughly 1e8 cd/m^2, which corresponds roughly to looking directly at the sun. 14 log units is a dynamic range of 100 trillion to 1. If you want to discuss directly representing the entire perceivable range of the HVS, this is the number to keep in mind.

The important thing to take away from that is that 14 log units is the dynamic range of the HVS under adaptation. At any given time, your eye can only adapt to a small portion of that range; roughly 5 log units. This distinction is the basis of the difference between fp16 and fp32. **

The point has been made already, but I just want to reiterate that whenever you talk about the dynamic range of any finite storage representation, you also have to talk about quantization. For example, we could use 8-bit unsigned ints to represent the same dynamic range as fp32 via a shift and bias that transforms the mins and maxes onto each other. Technically, this new representation would have the same dynamic range as fp32, but completely and utterly unusable quantization. The topic of quantization and percentage error of a representation gets nastier when we start discussing nonlinear encoding formats, but fortunately those aren't usable as framebuffers (because blending does not work in anything but linear space), so we'll just ignore that mess.
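To make that concrete (Python, illustrative):

# Stretch the 256 codes of an 8-bit channel linearly over fp32's span:
fp32_max = 3.4e38
step = fp32_max / 255.0
print(step)   # ~1.3e36 per code -- every real-world luminance falls inside code 0.
# Same dynamic range on paper, completely unusable quantization in practice.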

OpenEXR (and the half, fp16 format) was originally designed as an archive format for film sequences. That goes a long way toward explaining why it was created the way it was. Film has a dynamic range (D-max to photographers) of about 4 log units, though it is effectively somewhat less due to viewing conditions. When film is shot, the exposure setting maps a portion of the dynamic range of the scene onto the negative. The exposure serves the same purpose as the HVS's adaptation. The OpenEXR format is designed to represent a subset of all visible dynamic ranges, such that it exceeds the dynamic range of film by as much as possible while still providing low quantization across the range.

The format was never designed to be used as a working format. All of the computations done for film processing and compositing are done on fp32 values. Similarly, it was never intended to completely represent a sequence (encoded in actual, absolute cd/m^2) of moving from a dark cave into direct sunlight. The first case involves too much quantization; the second has too much dynamic range.

What does this mean for games? I'm not saying that fp16 is useless, by a long shot. Using a dynamic range of a full 14 log units has many other problems too. The less sophisticated tonemappers used in real-time graphics today would not be able to handle that range correctly. At best, you'd get desaturated output from the non-linear compression of a global operator like the first half of the Reinhard photographic tonemapper. At worst, you'd get something you couldn't even make out detail in half the time, because it was too dark or too light from inaccurately guessing the average scene intensity.
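For reference, the global Reinhard operator mentioned here is roughly the following (a minimal Python/numpy sketch; real implementations add key and white-point controls and usually operate on luminance rather than raw channel values):

import numpy as np

def reinhard_global(hdr, key=0.18):
    # hdr: linear scene luminances; key: where "middle gray" should land
    log_avg = np.exp(np.mean(np.log(hdr + 1e-6)))   # log-average scene luminance
    scaled = key * hdr / log_avg                    # expose the scene around the key
    return scaled / (1.0 + scaled)                  # compress 0..inf into 0..1

print(reinhard_global(np.array([0.01, 0.5, 2.0, 50.0, 5000.0])))
# Bright values compress smoothly instead of clipping, but saturation suffers.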

It does, though, present a number of difficulties when you move away from entertainment purposes (where it just has to look nice/cool) to more scientific ones (where a value must be physically correct). It is prohibitive if you want to design an engine that tries to do completely photometrically accurate rendering (absolute luminance). There isn't enough detail in fp16 for this, but as far as games go this is mostly a pointless exercise anyway.

You don't need to be completely physically correct to get more compelling results. In the case of bloom, as was mentioned, having HDR source data will significantly improve the quality. The source image is not the only thing affecting the output, though: the filter used to create the bloom has a large impact on the final image. The current method is to use a gaussian because it's 2D separable and easy to implement fast. The correct filter is much sharper and not 2D separable. HDR images without it will still fall short of accurate bloom.
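For anyone curious, the usual real-time pipeline being described is: bright-pass the HDR buffer, blur it with a separable gaussian, and add the result back. A minimal Python/numpy sketch (illustrative; as noted above, a physically correct glare filter is sharper and not separable):

import numpy as np

def gaussian_kernel(radius=8, sigma=3.0):
    x = np.arange(-radius, radius + 1, dtype=np.float32)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def bloom(hdr, threshold=1.0):
    bright = np.maximum(hdr - threshold, 0.0)   # only values above "white" bloom
    k = gaussian_kernel()
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, bright)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, blur)
    return hdr + blur                           # add the glow back onto the image

img = np.zeros((32, 32), dtype=np.float32)
img[16, 16] = 50.0   # a tiny "sun"
img[4, 4] = 1.0      # white "text": exactly 1.0, so it contributes no bloom
print(bloom(img)[16, 18], bloom(img)[4, 6])     # glow spills near the sun, none near the text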



** We can actually note this ratio decreases further, and point out that due to scatter in the optics of the eye, the local contrast of the eye (maximum partial derivative) is only about 150:1. This forms the basis of the rendering to the HDR display made by BrightSide (brightsidetech.com), to which I am somewhat biased, being an employee. ;)
 
Well, if you really wanted to simulate the HVS, you could still do it with FP16. You'd just need to make sure that the screen's average brightness was right in the middle of FP16's range (since FP16 has a dynamic range of about 10 log units, the dynamic range of the HVS at any given time can be completely represented, given your numbers). One could just use the previous frame's output for this.

Then the only real problem FP16 could have relative to FP32 as an intermediate format would be if lots of framebuffer computations were made. After all, it's pretty cheap on today's HDR hardware to do all internal calculations at full FP32, so it's only when one needs to modify framebuffer values that there's a possibility of information loss that could become visible.
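A sketch of that "use the previous frame to pick the exposure" idea (Python/numpy, illustrative; a real engine would compute the average on the GPU by downsampling luminance):

import numpy as np

def auto_expose(hdr_frame, prev_avg, key=0.18, adapt=0.1):
    # Track the scene's log-average luminance over time, like the eye adapting.
    frame_avg = np.exp(np.mean(np.log(hdr_frame + 1e-6)))
    avg = prev_avg + (frame_avg - prev_avg) * adapt
    exposed = key * hdr_frame / avg            # center the scene around middle gray
    return np.clip(exposed, 0.0, 1.0), avg     # clip/quantize only after exposure

frame = np.array([0.02, 0.5, 3.0, 200.0])
ldr, avg = auto_expose(frame, prev_avg=1.0)
print(ldr, avg)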
 
Chalnoth said:
Well, if you really wanted to simulate the HVS, you could still do it with FP16. You'd just need to make sure that the screen's average brightness was right in the middle of FP16's range (since FP16 has a dynamic range of about 10 log units, the dynamic range of the HVS at any given time can be completely represented, given your numbers). One could just use the previous frame's output for this.

Correct. Sorry, I meant to make that clearer. When I say simulate the HVS, I mean have pure source data to work with (in photometric units) and then perform everything afterwards. You can determine the average intensity of the scene and map that into the range of fp16. You can do it, but you would need to figure out the average exposure of the scene before writing values to the frame buffer and correct for it then. Or do some sort of pre-pass to get that correct without blowout.

Needless to say, you don't really need that much accuracy for entertainment purposes.
 
Well, the idea is to use the value from the previous frame, and since our eyes' response is delayed anyway, that should be plenty.
 
What is the brightest cd/m^2 that a good monitor can generate? Not as bright as looking at the sun, obviously. Surely there is little point in diverting resources into dynamic range that is not even displayable (or maybe it does make sense and it all just becomes saturated *shrug*).
 
ERK said:
What is the brightest cd/m^2 that a good monitor can generate? Not as bright as looking at the sun, obviously. Surely there is little point in diverting resources into dynamic range that is not even displayable (or maybe it does make sense and it all just becomes saturated *shrug*).
Well, as far as computer displays go, the dynamic range available is pretty tiny. But that doesn't mean that it doesn't help to represent larger dynamic range in software: you can still use plenty of tricks to simulate the look of extremely bright or extremely dim environments.

In fact, I'm not sure it really makes sense to try to simulate the actual brightness of the scene, as you'd always have to play your games in a pitch-black environment for things to look right.
 
Jedi2016 said:
How would I tell the difference between 16bpc and 32bpc EXR files?

Well... If nothing else, you could look at the file sizes. I don't know if the EXR formats use any sort of lossless compression internally, but if they don't, then the number of bytes in the file is going to be either 6 or 12 times the number of pixels in the image.

If it is significantly more than 6 bytes per pixel then it is probably a 32 bit per channel image.
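As a back-of-the-envelope check (Python, illustrative; real EXR files add headers and are usually losslessly compressed, so treat these as rough upper bounds):

def raw_pixel_bytes(width, height, channels=3, bits_per_channel=16):
    # Uncompressed pixel payload only, ignoring headers and compression.
    return width * height * channels * (bits_per_channel // 8)

print(raw_pixel_bytes(1920, 1080, bits_per_channel=16))   # ~12.4 MB of half-float pixels
print(raw_pixel_bytes(1920, 1080, bits_per_channel=32))   # ~24.9 MB of full-float pixels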
 
ERK said:
What is the brightest cd/m^2 that a good monitor can generate? Not as bright as looking at the sun, obviously. Surely there is little point in diverting resources into dynamic range that is not even displayable (or maybe it does make sense and it all just becomes saturated *shrug*).

Most monitors have a contrast ratio somewhere in the range of 400-1000 to 1.

When you render to an HDR framebuffer you have to compress the range of colors down to LDR so that the monitor can display them. The magic is in how you do the compression. If you just drop the low bits and quantize then you just waste your time because you achieve the same result as old-fashioned rendering. If you use an intelligent tone mapping (google it) process then you will be able to go back and forth between very bright and very dark scenes while always maintaining a beautiful high contrast, high saturation image.

In LDR you have to manually tweak the textures and lighting of each scene to fit within a narrow range in order to look good. That means if you change the lighting (flash a camera in the cave or eclipse the sun in the desert), everything washes out as it is pushed past the top or bottom of the range.

HDR backbuffers also help to make blending realistic. In 8-bit land, a golf ball and the sun are both white (255, 255, 255). When I blend some smoke over both of them, I expect the sun to shine through but not the golf ball. In LDR there is no difference between them, so I have to either let them both through or neither one through. But in HDR the math works how you would expect it to.
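The golf ball vs. sun case in numbers (Python, illustrative; the smoke color and opacity are made up):

sun       = 50.0   # HDR: far brighter than "white"
golf_ball = 1.0    # both clamp to 255 in an 8-bit buffer

def blend_smoke(background, smoke_color=0.5, opacity=0.8):
    return background * (1.0 - opacity) + smoke_color * opacity

def to_display(v):
    return min(v, 1.0)

# LDR: both start out as 1.0, so both end up identical behind the smoke.
print(to_display(blend_smoke(1.0)), to_display(blend_smoke(1.0)))        # 0.6 0.6

# HDR: the sun's stored value survives the blend and still clips to white.
print(to_display(blend_smoke(sun)), to_display(blend_smoke(golf_ball)))  # 1.0 0.6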

Bloom without HDR is done with hacks. Usually it is done by drawing a mask into the backbuffer's alpha channel to indicate which pixels are "bright". With HDR, the sun can bloom brightly simply because it has high values. Additionally, it can bloom less when it is behind some smoke. Even the reflection of the sun can bloom red when reflected off a red anodized aluminum pipe. The math works without manually putting in masks or glare sprites or any of the other hacks we have been doing for years.

Maybe someday we won't even have to tone map down to LDR at all!
http://www.brightsidetech.com/
I've seen these in action. I want. I want badly.
 