What is HDR 128bit?

Agisthos

Newcomer
I see this term being bandied about and would like some info on what it actually means. Is it something that benefits lighting effects like in those Getaway pics?

Do the current pc video cards and Xenos have this?
 
Agisthos said:
I see this term being bandied about and would like some info on what it actually means. Is it something that benefits lighting effects like in those Getaway pics?

Do the current pc video cards and Xenos have this?

HDR rendering means using more bits to represent each pixel than the typical 8-bits-per-component. Full HDR rendering involves carrying these extra bits all the way through the render chain to the render target. It allows for a greater difference (dynamic range) between the brightest pixels and the darkest in the intermediate steps of rendering. Without sufficient dynamic range, either dark areas clip at the black level, or bright areas clip at the white level, resulting in loss of data in these areas, and whites and blacks that look grey instead of white or black in the final output. HDR avoids this loss. The final step in HDR rendering is converting the HDR pixels to 8 to 10 bits per component for the display, since displays have only that much dynamic range. The programmer has control of the "exposure" in this final step, to bring out the detail in the bright areas or the dark areas. Over-bright areas can also be made to "bloom", to simulate flaws in real optics.
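
A minimal sketch of that final exposure and quantization step in C++ (the exponential exposure curve and the constants here are illustrative assumptions, not any particular engine's method):

Code:
#include <algorithm>
#include <cmath>
#include <cstdint>

// Map a linear HDR value (arbitrary physical scale) to an 8-bit display value.
// 'exposure' is the programmer-chosen scale; larger values bring out shadow
// detail, smaller values preserve highlight detail.
uint8_t toneMapChannel(float hdr, float exposure)
{
    // Simple exponential exposure curve: 0 stays 0, very bright values
    // saturate smoothly toward 1 instead of clipping hard.
    float ldr = 1.0f - std::exp(-hdr * exposure);

    // Quantize to the 8-bit range used by the display.
    return static_cast<uint8_t>(std::clamp(ldr, 0.0f, 1.0f) * 255.0f + 0.5f);
}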

I don't know much about the state of HDR in current PC cards. I know that it can be faked for rather pretty (although not accurate) results, like in the Xbox Wreckless.
 
On HDR, Xbit Labs in their 6800U review (and later FarCry review) have EXCELLENT coverage of this subject: http://www.xbitlabs.com/articles/video/display/nv40_7.html

Previous-generation graphics processors from NVIDIA didn't support output from the pixel shader to several buffers simultaneously (Multiple Render Targets) or rendering into a buffer in floating-point representation (FP Render Target). ATI's graphics chips have supported these features from the very beginning, which gave them an advantage over NVIDIA's solutions.

The NV40 has finally acquired full support for Multiple Render Targets and FP Render Targets, which allowed the company's marketing people to introduce a new term: NVIDIA HPDR. The abbreviation stands for High-Precision Dynamic-Range, i.e. the ability to build a scene with a high dynamic lighting range (HDRI, High Dynamic Range Images).

The main idea of HDRI is very simple: the lighting parameters (color and intensity) of the pixels forming the image should be described in real physical terms. To understand what this actually means, recall today's approach to describing images.

RGB Model and Our Eyes
The today’s universal image description model is an additive hardware dependent RGB (Red, Green, Blue) model, which was first developed for such display devices as CRT (Cathode Ray Tube), i.e. the regular computer monitor. According to this model, any color can be represented as a sum of three basic colors: Red, Green and Blue with properly selected intensities. The intensity of each basic color is split into 256 shades (intensity gradations).

The number 256 is a fairly arbitrary choice that emerged as a compromise between graphics subsystem performance, the requirements of photorealistic images and the binary nature of computer calculations. In particular, 16.7 million shades (256x256x256) turned out to be more than enough for images of photographic quality. Moreover, 256 is easily coded in binary as 2^8, i.e. 1 byte.

So, according to the RGB model, black color looks like (0, 0, 0), i.e. there is no intensity at all, while white color looks like (255, 255, 255), which is the maximum intensity possible for all three basic colors.

Of course, any color in the RGB model is described with a triad of integers. Floating-point numbers (such as 1.6 or 25.4, for instance) cannot be used within this model, and the numbers are in a sense "fake", i.e. they have nothing to do with real physical lighting parameters.

One more interesting feature of the 8-bit intensity representation is its discrete character. The maximum screen brightness of contemporary monitors is known to be around 100-120 cd/m^2. If we split this value into 256 shades, we get about 0.47 cd/m^2 as the brightness interval between two neighboring shades. In other words, the monitor's brightness is discrete, and this step size (which we can also call a threshold of sensitivity to brightness gradients) equals 0.47 cd/m^2 if the monitor brightness is set to maximum, and around 0.4 cd/m^2 if the brightness is set to 70-80%.
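
A quick check of that arithmetic (the 120 cd/m^2 peak is the article's own figure):

Code:
#include <cstdio>

int main()
{
    const double peakBrightness = 120.0; // cd/m^2, typical CRT peak per the article
    const int    shades         = 256;   // 8 bits per channel

    // Size of one brightness step at full and at reduced monitor brightness.
    printf("step at 100%% brightness: %.3f cd/m^2\n", peakBrightness / shades);        // ~0.47
    printf("step at  75%% brightness: %.3f cd/m^2\n", peakBrightness * 0.75 / shades); // ~0.35
    return 0;
}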

On the other hand, the dynamic range of the human eye spans from about 10^-6 to 10^8 cd/m^2, i.e. 100,000,000,000,000:1, or 14 orders of magnitude. The eye cannot see light across this entire range at the same time, however: the maximum contrast it can perceive at any one moment is around 10,000:1. And since human eyesight tracks light intensity and color separately, the entire gamut the eye can perceive amounts to 10,000 brightness shades x 10,000 color shades, i.e. about 10^8 colors.

Another important peculiarity of human vision is the threshold of sensitivity, i.e. the minimal change of lighting intensity perceivable by the eye (brightness resolution). The value of this threshold depends on the light intensity and grows as the intensity increases. From 0.01 to 100 cd/m^2 the ratio of the threshold to the intensity is constant (Weber's law) and equals about 0.02, i.e. 2%. In other words, the threshold of sensitivity at a light intensity of 1 cd/m^2 is 0.02 cd/m^2, at 10 cd/m^2 it is 0.2 cd/m^2, at 50 cd/m^2 it is 1 cd/m^2, and at 100 cd/m^2 it is 2 cd/m^2. Outside this range the rule no longer holds, and the dependence is described by a more complicated law.
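
A tiny sketch of that constant-ratio relationship, using the article's ~2% figure:

Code:
#include <cstdio>
#include <initializer_list>

// Weber's law over roughly 0.01-100 cd/m^2: the smallest perceivable change
// is a constant fraction of the current intensity.
double weberThreshold(double intensity /* cd/m^2 */)
{
    const double weberFraction = 0.02; // ~2%, per the article's examples
    return weberFraction * intensity;
}

int main()
{
    for (double i : {1.0, 10.0, 50.0, 100.0})
        printf("intensity %6.1f cd/m^2 -> threshold %5.2f cd/m^2\n", i, weberThreshold(i));
    return 0;
}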

Of course, the dynamic range of the monitor (and of the RGB description) is not enough to represent all real-world images, or even the part of them a human eye can perceive. The typical consequence is the "removal" (clipping) of all intensities at the upper and lower ends of the range. An example would be a room with an open window on a sunny summer day: the monitor can correctly display either the room interior or the part of the outdoor scene visible through the window, but not both.

HDR Comes to Replace RGB
Where is the way out then?

As far as the computer monitor is concerned, there is hardly anything you can do about it: you cannot increase the screen brightness to the level of the Sun.

But if there is nothing we can do about the monitor, then why not give up the RGB model, especially since it can be done absolutely painlessly? Let's describe images with real physical values of light intensity and color, and let the monitor display whatever it can, as it will hardly be worse anyway. :) This is exactly the idea behind HDRI: for the pixels of the image we store the intensity and color as real physical values, or as values linearly proportional to them. Of course, the lighting parameters are now described with real numbers rather than integers, so 8 bits per channel will no longer suffice. This approach immediately removes the limitations imposed by the RGB model: in theory the dynamic range of the image is not limited at all, the question of discreteness and the number of brightness gradations is no longer acute, and the problem of insufficient color coverage is also solved.
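
As a minimal illustration of the idea (the specific intensities below are just ballpark assumptions, not values from the article):

Code:
// An HDR pixel simply stores linear light values as floats, with no fixed
// "white" at 255 -- the numbers can span whatever range the scene needs.
struct HdrPixel
{
    float r, g, b; // linear intensity, e.g. proportional to cd/m^2
};

// Rough ballpark values: a dim interior wall vs. a patch of sunlit sky.
const HdrPixel interiorWall = {   40.0f,   38.0f,   35.0f };
const HdrPixel sunlitSky    = { 8000.0f, 9000.0f, 12000.0f };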

We could say that the introduction of HDRI for the first time separated the description of an image (its numeric representation within the HDRI model) from its presentation on any particular display device, such as a PC monitor, ink-jet or photo printer. Image description and image display became two independent processes, and the HDRI description became hardware-independent.

Displaying an HDR image on a monitor or printing it out requires transforming the HDRI dynamic and color range into the dynamic and color range of the output device: RGB for monitors, CMYK for printers, CIE Lab, Kodak YCC and the like. Since all these models are LDRI (Low Dynamic Range Images), this transformation cannot be performed without loss. The process is known as tone mapping, and it exploits the peculiarities of the human eye to reduce the losses. Since there is no mathematical model that describes human eyesight and its mechanisms fully and correctly, there is no general tone mapping algorithm that always guarantees a quality result.

Let's return to the numeric representation of the HDRI description. Infinite dynamic range is a fine thing, but a computer cannot process infinity, so in practice the dynamic range is limited at the top and the bottom. A good approximation is the range of the human eye, i.e. from 10^-6 to 10^8 cd/m^2. This creates a dilemma: on the one hand, the broader the dynamic range, the better; on the other hand, a bigger range requires more data to describe the image. To resolve this, several numeric HDR image formats were developed, which differ only in the range they cover and the storage they require.

NVIDIA NV40 Acquires HDR
NVIDIA uses a compromise variant: the 16-bit OpenEXR ("half") format developed by Industrial Light and Magic. Each 16-bit component devotes one bit to the sign, five bits to the exponent and ten bits to the mantissa. The representable range thus stretches across roughly 9 orders of magnitude: from about 6.1x10^-5 up to 6.5x10^4.
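
For reference, a hand-written decode of that standard s10e5 half-float layout (just a sketch to show where the range comes from; the hardware obviously does this in silicon):

Code:
#include <cmath>
#include <cstdint>

// Decode a 16-bit half float: 1 sign bit, 5 exponent bits (bias 15),
// 10 mantissa bits. Largest normal value is 65504 (~6.5e4), smallest
// normal is ~6.1e-5, hence the "roughly 9 orders of magnitude".
float halfToFloat(uint16_t h)
{
    int   sign     = (h >> 15) & 0x1;
    int   exponent = (h >> 10) & 0x1F;
    int   mantissa =  h        & 0x3FF;
    float result;

    if (exponent == 0)            // subnormal (or zero)
        result = std::ldexp(static_cast<float>(mantissa), -24);
    else if (exponent == 31)      // infinity / NaN
        result = mantissa ? NAN : INFINITY;
    else                          // normal number
        result = std::ldexp(1.0f + mantissa / 1024.0f, exponent - 15);

    return sign ? -result : result;
}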

The process of constructing and outputting an HDR image with the NV40 graphics processor is divided into three steps:

1. Light Transport: rendering the scene with a high lighting dynamic range and saving the information about the light characteristics of each pixel in a buffer that uses the OpenEXR floating-point data format. NVIDIA stresses the fact that the NV40 supports floating-point data representation at each step of creating an HDR scene, ensuring minimal quality loss:
- floating-point calculations in shaders;
- floating-point texture filtering;
- operations with buffers that use a floating-point data format.
2. Tone Mapping: translation of the high-dynamic-range image into an LDRI format (RGBA or sRGB).
3. Color and Gamma Correction: translation of the image into the color space of the display device (a CRT or LCD monitor, or anything else).
So, the NV40 with its HPDR technology makes high-dynamic-range images available to admirers of NVIDIA products, not only to owners of RADEONs. This is another step toward bringing photorealistic graphics to computer games.
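
The third of those steps is fairly mechanical; as a sketch, the standard linear-to-sRGB transfer function looks like this per channel (whether a particular driver or game uses exactly this curve is another matter):

Code:
#include <cmath>

// Standard linear-to-sRGB encoding, applied per channel after tone mapping
// has brought the value into the 0..1 range.
float linearToSrgb(float linear)
{
    if (linear <= 0.0031308f)
        return 12.92f * linear;
    return 1.055f * std::pow(linear, 1.0f / 2.4f) - 0.055f;
}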

I believe the 6800 series of cards from NV currently supports 16-bit floating-point frame buffer blending. The X800 series from ATI currently does not support floating-point blending.

Someone else can correct me (because I am not sure), but I believe FP16 = 64-bit pixel precision, and FP32 = 128-bit pixel precision. At least that is how I was understanding the NV E3 slide :? I COULD BE WRONG ON THIS.

The NV40 has 32-bit precision in the internal pipeline, and the R500 is rumored to at least do the same (check the R500 threads).
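
For what it's worth, the arithmetic works out if you count four components (RGBA) per pixel:

Code:
#include <cstdio>

int main()
{
    const int components = 4;  // R, G, B, A
    printf("FP16 pixel: %d bits\n", components * 16); // 64-bit
    printf("FP32 pixel: %d bits\n", components * 32); // 128-bit
    return 0;
}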


We currently do not know if the R500/Xenos does FP blending. It has been hinted at, but other devs have said not to make ANY assumptions.

So guys, when do we learn if R500 does FP blending and what type?

And in a more indirect way... When the sun pops through the clouds on an overcast Redmond-Seattle day, does it leave you devs temporarily blinded or no effect?
 
Thanks that makes perfect sense now.

What is this fp16/32bit number that people mention in relation to the RSX?
 
Acert93 said:
Someone else can correct me (because I am not sure), but I believe FP16 = 64-bit pixel precision, and FP32 = 128-bit pixel precision. At least that is how I was understanding the NV E3 slide :? I COULD BE WRONG ON THIS.

The NV40 has 32-bit precision in the internal pipeline, and the R500 is rumored to at least do the same (check the R500 threads).


We currently do not know if the R500/Xenos does FP blending. It has been hinted at, but other devs have said not to make ANY assumptions.

Oh I see cool
 
Yea there are some negatives. Like current implementations (NV40) can't use FSAA with it, and FP16 on that chip takes a huge hit in performance.


Hopefully both the RSX and R500 can do at least FP16 with a small hit and have FSAA enabled.
 
Acert93,

I'd be really surprised if the X360 GPU doesn't support FP blending for HDR rendering. I mean, HDR has been too popular and prominent a topic in realtime CG for a couple of years for ATI to overlook. Maybe it's not to the same degree as 128-bit HDR (I've yet to figure out what that means per component--FP40 for R, G, and B, and fixed 8 for alpha?), and wouldn't make for a competitive marketing number? But, much like 1080p vs 720p, even HDR with a few fewer bits is going to be largely indistinguishable from 128-bit HDR.

Phat
 
phat said:
Acert93,

I'd be really surprised if the X360 GPU doesn't support FP blending for HDR rendering. I mean, HDR has been too popular and prominent a topic in realtime CG for a couple of years for ATI to overlook. Maybe it's not to the same degree as 128-bit HDR (I've yet to figure out what that means per component--FP40 for R, G, and B, and fixed 8 for alpha?), and wouldn't make for a competitive marketing number? But, much like 1080p vs 720p, even HDR with a few fewer bits is going to be largely indistinguishable from 128-bit HDR.

Phat

I've been thinking more about this and I now think that HDR won't be easy at all to do on the X360, even if the GPU supports FPxx internally, because there's just going to be no room in the eDRAM for an FP-anything accumulation buffer at a decent resolution. For example, at 1280x720, FP16 colour, 32-bit Z, no alpha, we're looking at 9MB already, before even considering AA. And if ATI realized this and decided it wouldn't be worthwhile to support HDR, then the eDRAM "component math" logic won't support FP formats either, and that would be the last nail in the coffin for HDR on X360.
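
Back-of-the-envelope version of that 9MB figure, assuming tightly packed FP16 RGB with no alpha plus a 32-bit depth buffer (just an estimate; the actual eDRAM layout isn't public):

Code:
#include <cstdio>

int main()
{
    const int width  = 1280;
    const int height = 720;
    const int colourBytes = 3 * 2; // FP16 R, G, B; no alpha
    const int depthBytes  = 4;     // 32-bit Z

    double megabytes = double(width) * height * (colourBytes + depthBytes)
                     / (1024.0 * 1024.0);
    printf("1280x720 FP16 colour + 32-bit Z: %.1f MB\n", megabytes); // ~8.8 MB
    return 0;
}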

But if ATI is serious about tiling, then maybe HDR is still workable, with tiling penalties (how big these penalties will be is dependent on how serious ATI is about tiling).
 
Actually, it was shown working rather well on a dual-chip G5 system with an 850XT... So I don't think we know if it'll affect performance yet, since they don't even have the GPUs in the dev kits yet.
 
It is very hard to believe that one of the most state-of-the-art pieces of tech will not be able to do what older ones already do.

The same can be said for the BW of the PS3, which seems very low, but I don't believe that another piece of state-of-the-art tech will have such a big problem (whether it is solved directly or indirectly is another question).
 
The info on X360 HDR is that it supports a custom 10-bit per component FP format for the back buffer. 2 / 3e7 / 3e7 / 3e7 or something. It's meant to be a speed/quality compromise.

128bit HDR is 32 bit per component, by the way.
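
Just to put numbers on the size difference between the two (plain bit counts, nothing hardware-specific):

Code:
#include <cstdio>

int main()
{
    // "128-bit HDR" is simply four FP32 components:
    printf("FP32 RGBA:  %d bits per pixel\n", 4 * 32);           // 128

    // The custom back-buffer format mentioned above packs into one 32-bit word:
    printf("10-10-10-2: %d bits per pixel\n", 10 + 10 + 10 + 2); // 32
    return 0;
}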
 
Laa-Yosh said:
The info on X360 HDR is that it supports a custom 10-bit per component FP format for the back buffer. 2 / 3e7 / 3e7 / 3e7 or something. It's meant to be a speed/quality compromise.

128bit HDR is 32 bit per component, by the way.

So we are getting worse quality HDR in the XB360 than in the dev kits :oops: :? :?:
 
JF_Aidan_Pryde said:
Unreal 3 tech heavily using HDR (Gears of war) is shown working very well on XBOX 360. I don't think it'll be much of an issue.

Curious, on an R420 or variant ATI card, how did they get HDR in Gears of War? I saw the pics on Time where they specifically note the HDR--that looks great btw--but I wonder: ATI cards currently do not support HDR, and the dev kits have R420ish cards.

So how did they do it?
 
HDR capabilities on the X360 aren't known well enough yet, IMHO. It might also be a question of balance - IF it supports tiling, and IF it doesn't cause a huge performance hit, then the developer may decide whether the quality is more important than speed. The graphics system is certainly complex enough to provide a wide range of options and possibilities for optimization...
 
It does support tiling, and I'm told it does support FP16; however, the 10-10-10-2 format is likely to be the most frequently used - the Ruby demo was using this format, so it sounds like this is actually the default. Dev kits have X800/X850s, which don't have float blending.
 
Acert93 said:
ATI cards currently do not support HDR, and the dev kits have R420ish cards.

So how did they do it?

Current ATI cards DO support HDR, just not blending... ATI R3x0 and R4x0 support a nice 16-bit integer format (which also has filtering), which apart from blending works nicely for HDR.
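
A minimal sketch of how an integer format can be pressed into HDR service: pick a fixed maximum intensity and store each value as a fraction of it. This is a generic illustrative scheme (the 8.0 white point is an arbitrary assumption), not necessarily what any shipping engine actually did:

Code:
#include <algorithm>
#include <cstdint>

// Fixed maximum representable intensity; values above it clamp.
const float kMaxIntensity = 8.0f;

// Store a linear HDR value in a 16-bit integer channel.
uint16_t encodeHdr16(float linear)
{
    float normalized = std::clamp(linear / kMaxIntensity, 0.0f, 1.0f);
    return static_cast<uint16_t>(normalized * 65535.0f + 0.5f);
}

// Recover the linear value when sampling the texture.
float decodeHdr16(uint16_t stored)
{
    return (stored / 65535.0f) * kMaxIntensity;
}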
 
mckmas8808 said:
So DeanoC what advantages does the blending process add to the graphics?

Makes life considerably easier and allows HDR special fx (particles etc.)

I wouldn't want to go back to no blending, but as over half our HDR engine was written without it, it's disingenuous to say that ATI doesn't support HDR.
 