HDR RGBA8 representation I believe is unknown

vember

I found out a way to encode HDR RGB data in an RGBA8 texture a while ago that I haven't seen anywhere else (among the many other methods in the ATI SDK, etc.), so I decided to write a mini-paper about it. I'm somewhat surprised that it isn't common knowledge, as it's both good and really simple. It's basically like RGBE, but instead of an exponent you divide at decode, which gives a nice distribution of precision and good lerping characteristics.

mini-paper: http://vemberaudio.se/graphics/RGBdiv8.pdf
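In code form, the scheme boils down to something like the sketch below. This is a minimal reading of the idea; the floor-based alpha choice and the 8-bit rounding details here are assumptions for illustration, not taken verbatim from the paper:

```python
import math

def quantize8(x):
    """Snap to the nearest of the 256 representable 8-bit levels."""
    return round(x * 255.0) / 255.0

def encode(r, g, b):
    # Pick a divisor so the scaled color fits in [0..1] and store its
    # reciprocal in alpha. Alpha is rounded *down* to a representable
    # level so no scaled component ever exceeds 1.0; values already
    # inside [0..1] pass through with alpha = 1.
    m = max(1.0, r, g, b)
    a = max(1, math.floor(255.0 / m)) / 255.0
    return tuple(quantize8(c * a) for c in (r, g, b)) + (a,)

def decode(r, g, b, a):
    # Decode is a single divide by alpha.
    return (r / a, g / a, b / a)

hdr = (10.0, 5.0, 2.5)
enc = encode(*hdr)    # alpha ends up as 25/255, color scaled to fit
dec = decode(*enc)    # close to the original, modulo 8-bit steps
```

With alpha limited to the 254 non-zero, non-one levels, the maximum representable component works out to roughly 255, which matches the "254 additional luminance levels" figure discussed later in the thread.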

Thoughts?

cheers,
- claes
 
Can you explain how it's working?

Let's say I have two texels blended together from an encoded texture that uses your algorithm.

One texel has a color component of 0.5 and an alpha value of 0.05, which gives a color value of 10.0 after decoding.

The other texel has a color component of 0.5 and an alpha value of 0.1, which gives a color value of 5.0 after decoding.

If I blend both texels together, I'd hope to get a value of 7.5.

But if I let the GPU blend both texels, the color component will be 0.5 and the alpha value 0.075, which gives a color value of about 6.67.

I could use the same example with MSAA, and that's without even getting into MSAA resolve in linear space yet.
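The numbers in that example can be checked directly; a tiny sketch (variable names are mine):

```python
# Two encoded texels, as in the example above: (color, alpha) pairs.
t0 = (0.5, 0.05)   # decodes to 0.5 / 0.05 = 10.0
t1 = (0.5, 0.10)   # decodes to 0.5 / 0.10 = 5.0

# What we'd want: average the *decoded* values.
ideal = (t0[0] / t0[1] + t1[0] / t1[1]) / 2.0           # 7.5

# What the GPU does: average color and alpha separately, then decode.
blended = ((t0[0] + t1[0]) / 2.0, (t0[1] + t1[1]) / 2.0)
actual = blended[0] / blended[1]                         # 0.5 / 0.075 ~ 6.67
```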
 
You seem to have got it right. When the alpha channel is less than 1 there will be a weighting error when doing linear interpolation, just as you described. But since the interpolated result always stays between the two endpoints, and this only happens when one of the color components is larger than 1.0, it will probably be quite an acceptable loss in most cases.
 
Interesting idea. It worked very nicely with RGBA16, but with RGBA8 I saw some banding at high exposure values. I was using the texture in my HDR demo though, which has a rather extreme range, so maybe it would perform better in more moderate situations. It's one instruction shorter to decode than RGBS, though.
 
now if I let the GPU blend both texels, the color component will be 0.5 and the alpha value 0.075, which gives a color value of about 6.67.

I could use the same example with MSAA, and that's without even getting into MSAA resolve in linear space yet.
Yeah, that's a problem that's bound to happen because it's a conditionally-driven encoding: there's a point where the curves lose continuity in their higher derivatives. If the curve for a were approximated by some hyperbolic, it might work out a little cleaner in practice. Either way, the ideal is to blend using r/a rather than r and a.

At least with MSAA, the differences between blended pixels, and thus the scale of the errors, will be smaller than with a large alpha-blended polygon, which poses a bigger problem.
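Blending r/a rather than r and a just means decoding each sample before the weighted sum. As a sketch, assuming you have both samples available in the shader rather than relying on fixed-function blending:

```python
def decode(c, a):
    return c / a

def blend_decoded(t0, t1, w=0.5):
    # Interpolate the *decoded* values: this is the mathematically
    # correct lerp, at the cost of doing the work in the shader.
    return (1.0 - w) * decode(*t0) + w * decode(*t1)

blend_decoded((0.5, 0.05), (0.5, 0.10))  # 7.5, the ideal result
```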
 
Yeah, my main intent for the format was to use it pretty much like a regular RGB8 framebuffer, with exposure applied prior to encoding to get values into the [0..1] range, but with the added benefit that larger values are available to post-processing effects, so I can get good-looking bloom and motion blur. That way the non-linear part of the dynamic range is only indirectly visible through the post-processing effects, so the precision and non-linearity won't matter as much.

Blending is traditionally done in gamma space anyway, which is itself an error, and the non-linearity here will add to it further. I haven't tested it yet, but I think a good compromise would be to use a linear-space framebuffer with a [0..2.5]-ish output range using this encoding, which would give close-to-linear blending in most of the visible range while still providing enough precision in the darker shades to avoid quantization artifacts.

This isn't really an efficient encoding in the absolute sense, though, as it only provides 254 additional luminance levels, but we have to work with the texture units we've got.
 
Oh, so we've got some new requirements on what linear interpolation has to be. Now it has to stay between the two values at all times, whereas Humus just said it had to have no jumps in value (i.e. a finite derivative). So I'm guessing some kind of sinusoidal wave that is damped to stay between the two points would be quite fine for you two?
 
bloodbob said:
Oh, so we've got some new requirements on what linear interpolation has to be. Now it has to stay between the two values at all times, whereas Humus just said it had to have no jumps in value (i.e. a finite derivative). So I'm guessing some kind of sinusoidal wave that is damped to stay between the two points would be quite fine for you two?

It's hardly like there are a lot of options on DX9-level hardware, so we pretty much have to rely on hacks. And as long as the final result is visually pleasing, I don't have any problem with it not being properly linearly interpolated. It's just color data. If you have stricter requirements on interpolation, don't use it.
 
Most of the time linearity isn't particularly important. There are exceptions, but usually what we want from the filter is a smooth gradient between the samples. Whether it's linear or something else typically doesn't matter as long as it looks fine. A sinusoidal wave wouldn't look fine, simply because it adds higher frequencies in between that would result in aliasing.
Also, linear isn't necessarily the ultimate solution either. The only reason it's used is that it's relatively cheap to implement in hardware and solves the smoothness problem in the majority of cases. But there are plenty of filters out there that would give better quality.
 
Humus said:
But there are plenty of filters out there that would result in better quality.
Care to mention any that can do better with only 2 samples? Okay, maybe something like a sigmoid function would work, because hey, at least it has some symmetry going (well, not quite: we're doing rotations instead of reflections), which apparently no one around here cares about apart from little old me.
 
Well, with just two samples (assuming we're talking one dimension here) you can't do much. You can interpolate with (3-2x)x^2 instead of x, which takes away some of the pointy star pattern of a bilinear filter. But without that restriction, something like a bicubic can give you better quality.
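That (3-2x)x^2 weight is the familiar smoothstep polynomial; a quick sketch of it against plain linear weighting:

```python
def smoothstep_weight(x):
    # (3 - 2x) * x^2: zero slope at both endpoints, so adjacent
    # texel spans meet without the pointy star pattern.
    return (3.0 - 2.0 * x) * x * x

def interp(a, b, x, weight=smoothstep_weight):
    # Same two-sample mix as the hardware, just with a remapped weight.
    w = weight(x)
    return (1.0 - w) * a + w * b

interp(0.0, 1.0, 0.5)   # 0.5: the midpoint is unchanged
interp(0.0, 1.0, 0.25)  # 0.15625, below the linear 0.25
```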
 
Or you could use a cosine to interpolate... it would be similar to (3-2x)x^2, though.

It could be implemented by changing the texture-sample position, using the bilinear sampler to fetch the (differently weighted) samples, so it would actually be very cheap to implement.

And yes, getting rid of that pointy star pattern would look much better.
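The sample-position trick works by remapping the fractional texel coordinate so that the hardware's linear weight ends up being the smooth one. A 1D sketch; texel centers at i + 0.5 is an assumption about the addressing convention:

```python
import math

def remap_coord(u):
    # Replace the fractional part of the texel coordinate with its
    # smoothstep, so a plain linear filter at the remapped position
    # produces smoothstep weighting.
    base = math.floor(u - 0.5)
    frac = (u - 0.5) - base
    smooth = (3.0 - 2.0 * frac) * frac * frac
    return base + 0.5 + smooth

def bilinear_1d(texels, u):
    # Hardware-style linear filter between the two nearest texel centers.
    base = math.floor(u - 0.5)
    frac = (u - 0.5) - base
    i0 = max(0, min(len(texels) - 1, int(base)))
    i1 = max(0, min(len(texels) - 1, int(base) + 1))
    return (1.0 - frac) * texels[i0] + frac * texels[i1]

tex = [0.0, 1.0]
bilinear_1d(tex, remap_coord(0.75))  # 0.15625, i.e. smoothstep(0.25)
```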
 
The problem with using such schemes for filtering 2 samples in 1D is that you screw up gradients. Imagine a ramp from white to black over the span of 5 texels: a graph of intensities would yield a wavy profile, and visually it would look ripply, almost quasi-banded.

I was fooling around with a way to improve interpolation for lookup table functions, and found that with some clever preprocessing you could get some fairly flexible quadratic interpolations between texels using one multiply in existing hardware.

For example, suppose your two original texels are A=0.1 and B=0.6. Depending on how you assign values to the texels in two different textures (denoted A1, B1 and A2, B2), you can get different curves between them after multiplying the two bilinearly filtered values:
- A1=B1=1.0, A2=0.1, B2=0.6 ==> ordinary linear interpolation (midpoint = 0.35)
- A1=1.0, B1=0.6, A2=0.1, B2=1.0 ==> concave down (midpoint = 0.44)
- A1=0.316, B1=0.775, A2=0.316, B2=0.775 ==> concave up (midpoint ≈ 0.297)
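Those midpoints can be checked numerically; a sketch of the two-texture multiply with quantization ignored:

```python
def lerp(a, b, x):
    return (1.0 - x) * a + x * b

def product_filter(a1, b1, a2, b2, x):
    # Bilinearly filter each texture, then multiply: the result is a
    # quadratic in x with endpoints a1*a2 and b1*b2.
    return lerp(a1, b1, x) * lerp(a2, b2, x)

product_filter(1.0, 1.0, 0.1, 0.6, 0.5)          # 0.35, plain linear
product_filter(1.0, 0.6, 0.1, 1.0, 0.5)          # 0.44, bows above the line
product_filter(0.316, 0.775, 0.316, 0.775, 0.5)  # ~0.298, bows below
```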

Things get even more interesting if you allow negative numbers, numbers larger than one, a third texture for a bias, etc. The preprocessing is the tricky part, though. Very tricky.



EDIT: Back to the main topic: vember, this divide-by-alpha technique is actually quite powerful, with fewer blending limitations than you might think. I thought of it in December, inspired by nAo's discussions of an alternate HDR format. I wrote a lengthy, math-laden reply for this thread a week or two ago, but a brownout restarted my computer and I was too pissed to rewrite it.

I'll write it up again soon. Basically, additive and multiplicative blending can be done mathematically correctly (though sometimes quite susceptible to precision artifacts after a couple of layers), and a lerp or an arbitrary linear combination can be done correctly in certain cases. There are a few other tricks too.
 