HDR Method used in Splinter Cell: Chaos Theory

bigz

Does anyone happen to know what method of HDR rendering is used in this title? Is it the same OpenEXR method that was used in FarCry, or something different?

Cheers.
 
OpenEXR is not an HDR rendering method; it is only a file format for storing HDR data.

But Splinter Cell: Chaos Theory uses FP16 textures and surfaces, like the FP16 HDR mode in FarCry. The post-filter processing differs between the two titles.
 
But Splinter Cell: Chaos Theory uses FP16 textures and surfaces, like the FP16 HDR mode in FarCry. The post-filter processing differs between the two titles.

That's just in the PC version, right? (It must be.)
Otherwise things are going to get crazy in the console forum. :oops:
 
overclocked said:
But Splinter Cell: Chaos Theory uses FP16 textures and surfaces, like the FP16 HDR mode in FarCry. The post-filter processing differs between the two titles.

That's just in the PC version, right? (It must be.)
Otherwise things are going to get crazy in the console forum. :oops:

Sure, I was talking about the PC version. In any case, I have no tools to check the rendering process on a console.
 
bigz said:
Thanks. Is the HDR format used in SC: CT known?

Do you mean the format of the primary render target? They use a four-channel FP16 format for this, the only FP format that supports alpha blending on an NV4X chip.
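
For anyone curious, here's a minimal sketch of how such a render target would be created under Direct3D 9; the function name and dimensions are just placeholders, not taken from the game:

```cpp
#include <d3d9.h>

// Sketch (not the game's actual code): creating a four-channel FP16 render
// target in Direct3D 9. D3DFMT_A16B16G16R16F is the RGBA half-float format
// referred to above.
IDirect3DTexture9* CreateHdrTarget(IDirect3DDevice9* device, UINT w, UINT h)
{
    IDirect3DTexture9* target = NULL;
    HRESULT hr = device->CreateTexture(
        w, h,
        1,                        // a single mip level
        D3DUSAGE_RENDERTARGET,    // bindable as a render target
        D3DFMT_A16B16G16R16F,     // 16-bit float per channel (RGBA)
        D3DPOOL_DEFAULT,          // render targets must live in the default pool
        &target, NULL);
    return SUCCEEDED(hr) ? target : NULL;
}
```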
 
I was referring to the file format used, but I am intrigued by the methods used to achieve HDR in SC: CT. It's certainly not the afterthought that it was in FarCry, to a certain extent. My reasoning here is that FarCry is quite often over-exposed.

However, this could be down to NVIDIA's drivers, as there is over-exposure in some instances even without HDR enabled.
 
bigz said:
I was referring to the file format used, but I am intrigued by the methods used to achieve HDR in SC: CT. It's certainly not the afterthought that it was in FarCry, to a certain extent. My reasoning here is that FarCry is quite often over-exposed.

However, this could be down to NVIDIA's drivers, as there is over-exposure in some instances even without HDR enabled.

IIRC, SC: CT and FarCry did not use any textures in an HDR format. If necessary I can recheck this. Maybe they use RGBE8, but I cannot recall having seen the conversion code for RGBE8 in any of the pixel shaders.
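
For reference, RGBE8 packs an HDR color into four 8-bit channels by storing a shared exponent in alpha. Here's a minimal decode sketch; the bias of 128 follows the common Radiance-style convention, and the exact constants could differ in any given engine:

```cpp
#include <cmath>

// Sketch of a common RGBE8 decode (Radiance-style shared-exponent packing).
// Each channel is an 8-bit value; E holds a biased exponent shared by R, G, B.
// The bias of 128 is the usual convention, not confirmed for any specific game.
void DecodeRGBE8(unsigned char r, unsigned char g, unsigned char b,
                 unsigned char e, float out[3])
{
    // Reconstruct the shared scale factor: 2^(E - 128).
    float scale = std::ldexp(1.0f, int(e) - 128);
    out[0] = (r / 255.0f) * scale;
    out[1] = (g / 255.0f) * scale;
    out[2] = (b / 255.0f) * scale;
}
```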
 
No available console can do any real HDR. That needs an FP16 backbuffer with blending, which none of them have.

As far as file formats go, I don't think FarCry used HDR textures for anything in the 1.3 patch. I remember seeing an interview with the guys; they just multiplied their light values with their low dynamic range (LDR) 8bpp environment map textures. The patch would have been like 300-400 MB or something if they had revamped all their lighting content.

In all likelihood, developers using HDR textures are going to use OpenEXR, or some simplified form thereof (the DirectX texture format supports it these days too, I think). That way, there is no pixel format conversion to get it from system memory to GPU memory; all it takes is a memcpy() (or several). Other formats, such as Radiance, LogLuv TIFF, and so forth, quantize things differently, so you need to convert them to a normal floating-point form.
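
As a sketch of that direct path (assuming the EXR data has already been read and decompressed into half-float RGBA scanlines; the function name is made up):

```cpp
#include <d3d9.h>
#include <cstring>

// Sketch: uploading pre-decoded half-float RGBA scanlines into a locked
// D3DFMT_A16B16G16R16F texture. Because the file's pixel format matches the
// GPU format, each row is a straight memcpy with no conversion.
void UploadHalfRGBA(IDirect3DTexture9* tex, const void* src,
                    UINT width, UINT height)
{
    D3DLOCKED_RECT rect;
    if (FAILED(tex->LockRect(0, &rect, NULL, 0)))
        return;

    const size_t rowBytes = size_t(width) * 4 * sizeof(unsigned short); // RGBA fp16
    const char*  from     = static_cast<const char*>(src);
    char*        to       = static_cast<char*>(rect.pBits);

    for (UINT y = 0; y < height; ++y)          // one memcpy per row, honoring
        std::memcpy(to + y * rect.Pitch,       // the driver's row pitch
                    from + y * rowBytes, rowBytes);

    tex->UnlockRect(0);
}
```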

The overexposure has nothing to do with the texture format or the NVIDIA drivers. Once you are working in floating point, you have to convert those values into a range you can display on your screen. This is called tone mapping. There are a couple of versions that work fast enough in graphics hardware for this. They are probably using a variant of the first part of the Reinhard photographic tonemapper described in this DirectX sample. It includes a blow-out constant above which all values are mapped to white. The Crytek guys probably set it a bit low, presumably to make sure they got lots of light blooms.
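
For reference, a minimal sketch of the Reinhard photographic curve with that blow-out constant, applied per pixel to scene luminance (the name whitePoint is my own label for it):

```cpp
#include <algorithm>

// Sketch of the Reinhard photographic tone-mapping curve with a blow-out
// ("white point") constant: any luminance at or above whitePoint maps to 1.0
// (pure white). Setting whitePoint low blows out more of the scene, as
// described above.
float ReinhardTonemap(float luminance, float whitePoint)
{
    float mapped = luminance * (1.0f + luminance / (whitePoint * whitePoint))
                 / (1.0f + luminance);
    return std::min(mapped, 1.0f);   // clamp to the displayable range
}
```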
 
Valve said they were using 16-bit integer buffers. What does this mean?

Also, Doug said:

"So what we’re doing in "Lost Coast" through the use of HDR is, depending on where you are in proximity to the light source and how long you’ve been looking at it, your eyes will adjust and the lighting in that world will adjust."

Is this new technology?
 
Subtlesnake said:
Valve said they were using 16-bit integer buffers. What does this mean?
That it's low-quality. 16-bit integer buffers have limited dynamic range, and can thus lead to banding or lack of proper luminance in high dynamic range scenes. By contrast, FP16 has effectively infinite dynamic range, for the purposes of color data.
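
To make the comparison concrete, here's a sketch of how an fp16 value is decoded, which shows where the dynamic range comes from (standard IEEE half-float layout: 1 sign bit, 5 exponent bits, 10 mantissa bits):

```cpp
#include <cstdint>
#include <cmath>

// Sketch of IEEE half-float (fp16) decoding. The 5-bit exponent is what gives
// fp16 its wide dynamic range (largest finite value: 65504), whereas a 16-bit
// integer buffer has uniform steps and a far smaller usable range.
float HalfToFloat(uint16_t h)
{
    int sign     = (h >> 15) & 0x1;
    int exponent = (h >> 10) & 0x1F;
    int mantissa =  h        & 0x3FF;

    float value;
    if (exponent == 0)            // subnormal: no implicit leading 1
        value = std::ldexp(float(mantissa), -24);
    else if (exponent == 31)      // all-ones exponent: infinity or NaN
        value = mantissa ? NAN : INFINITY;
    else                          // normalized: implicit leading 1, bias 15
        value = std::ldexp(float(mantissa | 0x400), exponent - 25);

    return sign ? -value : value;
}
```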
 
bigz said:
I was referring to the file format used, but I am intrigued by the methods used to achieve HDR in SC: CT. It's certainly not the afterthought that it was in FarCry, to a certain extent. My reasoning here is that FarCry is quite often over-exposed.
Essentially all of the guts of an HDR technique lie in the tone mapping pass. Everything else is just determining how bright things are supposed to be.

The tone mapping pass maps the high dynamic range data to the monitor's 0-255 output range. This is typically done in an attempt to simulate how our eyes react to changes in brightness.

Personally, I feel that FarCry's tone mapping pass is rather poor at doing this (in any HDR mode), as they appear to just make the final brightness of the scene a function of the average brightness. This quickly leads to situations where some of the scene is dim, but other parts are completely whited-out. One way to hack together a better variant of this technique would be to make the final brightness of the scene a function of the maximum brightness. This would have the effect of a bright point of light blinding you (which is fairly realistic), and would also reduce the white-out problem to essentially only appearing when your virtual eyes are adjusting.
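
To illustrate the difference between the two strategies, here's a small sketch of both; the log-average form is the standard one from Reinhard's paper, and the names and key value are my own placeholders:

```cpp
#include <cmath>
#include <algorithm>

// Sketch contrasting the two exposure strategies discussed above. `lum` is a
// downsampled buffer of scene luminances; `key` is the target mid-grey
// (Reinhard's "key value", commonly around 0.18).

// FarCry-style: scale the scene based on its (log-)average luminance.
float ExposureFromAverage(const float* lum, int n, float key)
{
    double sumLog = 0.0;
    for (int i = 0; i < n; ++i)
        sumLog += std::log(1e-4 + lum[i]);   // small delta avoids log(0)
    float logAvg = float(std::exp(sumLog / n));
    return key / logAvg;                     // scale factor for the scene
}

// The proposed alternative: scale by the maximum, so a single bright point
// dims the whole scene, as a blinding light would.
float ExposureFromMax(const float* lum, int n, float key)
{
    float maxLum = 0.0f;
    for (int i = 0; i < n; ++i)
        maxLum = std::max(maxLum, lum[i]);
    return key / std::max(maxLum, 1e-4f);
}
```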

However, this could be down to NVIDIA's drivers, as there is over-exposure in some instances even without HDR enabled.
No. The tone mapping pass is a shader entirely written by the software developer, and thus independent of IHV drivers (assuming that the driver is compiling the shader properly).
 
Subtlesnake said:
"So what we’re doing in "Lost Coast" through the use of HDR is, depending on where you are in proximity to the light source and how long you’ve been looking at it, your eyes will adjust and the lighting in that world will adjust."

Is this new technology?
No. That's just the tonemapper. The first realtime one you might have seen was http://www.daionet.gr.jp/~masa/rthdribl/ . It's also in a sample in the DirectX SDK.

By contrast, FP16 has effectively infinite dynamic range, for the purposes of color data.
Actually, far from it. The maximum value that fp16 can represent is only around 65,000, though this is bright enough for storing color information. The format was designed to be bright enough to cover the entire range of visible light intensities (the sun is roughly 50,000 cd/m²) while providing less quantization error than human vision can detect across that range. That said, it's not a movie-quality working format (the quantization from several successive operations on fp16 values may accumulate enough error to be perceivable), but it works great for storage, and for games, since they aren't nearly that precise yet.

One way to hack together a better variant of this technique would be to make the final brightness of the scene a function of the maximum brightness. This would have the effect of a bright point of light blinding you (which is fairly realistic), and would also reduce the white-out problem to essentially only appearing when your virtual eyes are adjusting.
This actually works quite poorly in practice, in my opinion. You need frame coherency on these things. Your eyes can't change very quickly, so you need to clamp the rate of change on these. For games, you need to do this without reading back the value, so keeping the rate of change sufficiently small requires a bit more work. Taking the maximum is highly subject to outliers (which is why the log average is generally used instead of the arithmetic average). Think about the sun flickering through trees and how much the exposure would change based on whether you could see it that frame or not. It would turn your monitor into a strobe. Like I said, they probably just tweaked the blow-out constant in the photographic tonemapper to pump up the effect some.
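
A minimal sketch of the rate clamping being described, stepping the adapted luminance toward the measured value each frame (the time constant tau is an arbitrary placeholder):

```cpp
#include <cmath>

// Sketch of frame-coherent exposure adaptation: rather than jumping to the
// measured scene luminance each frame, the adapted value drifts toward it
// exponentially. `tau` (in seconds) controls how fast the virtual eye adapts.
float AdaptLuminance(float adapted, float measured, float dt, float tau)
{
    // Fraction of the remaining gap to close this frame; derived from dt,
    // so the adaptation speed is frame-rate independent.
    float blend = 1.0f - std::exp(-dt / tau);
    return adapted + (measured - adapted) * blend;
}
```

Called once per frame with the measured log-average luminance, something like this keeps a flickering outlier (the sun through trees) from strobing the exposure.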
 
squarewithin said:
By contrast, FP16 has effectively infinite dynamic range, for the purposes of color data.
Actually, far from it. The maximum value that fp16 can represent is only around 65,000, though this is bright enough for storing color information.
for the purposes of color data
Yes, it's not as accurate as it could be, but FP16 has quite enough dynamic range for color data. And it's more than accurate enough for almost anything outputting to a final 8-bit format, particularly as a framebuffer format.

This actually works quite poorly in practice, in my opinion. You need frame coherency on these things. Your eyes can't change very quickly, so you need to clamp the rate of change on these. For games, you need to do this without reading back the value, so keeping the rate of change sufficiently small requires a bit more work.
Well, I thought that was obvious.

Taking the maximum is highly subject to outliers (which is why the log average is generally used instead of the arithmetic average). Think about the sun flickering through trees and how much the exposure would change based on whether you could see it that frame or not. It would turn your monitor into a strobe. Like I said, they probably just tweaked the blow-out constant in the photographic tonemapper to pump up the effect some.
Actually, I think taking the maximum would probably be more realistic. After all, just think about why the eyes limit the amount of light that enters: it's a defense mechanism to prevent damage. Damage can occur on a small part of the eye quite easily, so it makes sense to limit based upon a maximum. And since the eye has a different density of receptors in different areas, different areas can tolerate differing amounts of brightness, so it may be even more realistic to modulate the maximum by some function of distance from the center of the screen.

And in the case of the sun flickering through trees, the exposure wouldn't change much at all from frame to frame because you'd clearly want to average over the past few frames (in some parameter).
 
Chalnoth said:
Actually, I think taking the maximum would probably be more realistic. After all, just think about why the eyes limit the amount of light that enters: it's a defense mechanism to prevent damage. Damage can occur on a small part of the eye quite easily, so it makes sense to limit based upon a maximum. And since the eye has a different density of receptors in different areas, different areas can tolerate differing amounts of brightness, so it may be even more realistic to modulate the maximum by some function of distance from the center of the screen.
It is true that your retina responds to the brightest light, not the average, but it does so locally. To properly use the max value, you have to look at a weighted neighborhood around each particular pixel to determine what its exposure level is. The problem with this (besides the obvious fill-rate limitations, because the neighborhood is quite large) is that you get reverse gradients, where the area around a bright point gets progressively darker as you approach it. This can be fixed through bilateral filtering, but that requires a whole set of images computed with different blur weightings and blending between them based on the intensity in your neighborhood. The faster implementations use FFTW and still take several seconds per frame. I am not aware of any methods that produce acceptable results using purely local (current-pixel-only) information. The log scene average works quite well. It's not right, but at least it's consistent, to paraphrase Heckbert. With a few tweaks, you could easily fix the over-brightening problem.
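
To illustrate the reverse-gradient artifact being described, here's a sketch of naive local exposure, where each pixel is scaled by the blurred luminance of its neighborhood (the blur itself is elided; any wide Gaussian would do):

```cpp
// Sketch of naive local tone mapping as discussed above: each pixel's exposure
// comes from a blurred (neighborhood-average) luminance rather than one global
// value. Near a bright light, blurredLum is inflated, so the surrounding
// pixels get darkened: the "reverse gradient" artifact.
float LocalExposure(float pixelLum, float blurredLum, float key)
{
    return pixelLum * (key / (blurredLum + 1e-4f));
}
```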
 
squarewithin said:
It is true that your retina responds to the brightest light, not the average, but it does so locally.
I'm just not sure how that's possible. Light is limited by contraction of the iris. How is that local?
 
Chalnoth said:
I'm just not sure how that's possible. Light is limited by contraction of the iris. How is that local?
It can limit light somewhat, but it only performs a small part of the light adaptation that your visual system can handle. I think the pupil can change by a factor of 16 in area. Since each stop is a doubling, that's only log₂ 16 = 4 stops of the 34 or so stops that our visual system can resolve across the full range of adaptation.

The vast majority of it happens locally on the retina, as the rods and cones laterally inhibit their neighbors' activation. I think some also happens in the Lateral Geniculate Nucleus as visual signals travel along the underside of the brain en route to the brain stem and visual cortex.

But light adaptation is a highly local phenomenon.
 
ATI SM2.0 cards have always been able to do an "alternative" HDR, parallax mapping, and soft shadows.

A technical question (HDR discussion):

Does anyone happen to know what format of high dynamic range is used in this SM2 codepath? Is it the same OpenEXR format that was used in SM3, or something different?
 