HDR overused?

PeterT said:
Not in either DirectX or OpenGL, and I am not aware of any gpus that support such a thing. IMHO, it wouldn't make much sense...

Not sure why it wouldn't make sense... FP16 textures are big, I'd like to be able to compress them.

Obviously we'd take a hit in precision, but it's the range that is important.
 
MrWibble said:
Not sure why it wouldn't make sense... FP16 textures are big, I'd like to be able to compress them.

Obviously we'd take a hit in precision, but it's the range that is important.
You wouldn't take a hit in precision if it was lossless. I find it ridiculous that we use lossy compression on low color depth assets (making them look even worse) that usually cover the majority of a scene for performance reasons, then sprinkle in a couple of huge, high color depth, uncompressed assets to slow everything down. I'd rather see the bandwidth spent on ridiculously high-res art than on increasing the color range to make high-light and low-light situations have a bit less banding.
 
MrWibble said:
Not sure why it wouldn't make sense... FP16 textures are big, I'd like to be able to compress them.
You are most likely right in a game context, it's just that I immediately associate FP16/32 "textures" with render targets, and then you can't compress them anyway. That's probably because of my GPGPU background though.

How/why would one actually use unchanging (i.e. non-RT) HDR textures in a game? I wouldn't think that's necessary for color maps. For normal maps or other less obvious forms of texturing?
 
PeterT said:
How/why would one actually use unchanging (i.e. non-RT) HDR textures in a game? I wouldn't think that's necessary for color maps. For normal maps or other less obvious forms of texturing?
Mostly I think it's important for colour + intensity, especially if you have reflections, which primarily means environment maps. Taking these two examples...

[Image: HDRShop "memorial" HDR examples (http://gl.ict.usc.edu/HDRShop/main-pages/images/memorial3.jpg)]

[Image: ATI Radeon 9700 demo]

You can see how HDR environment maps will add a quick and easy sense of realism. A cubemap gives your shiny armour convincing reflections, while inside a building an HDR window adds realism with exposure changes. If your image doesn't have a self-illumination component then normal colour will be fine, which covers every diffuse colour map.
 
Yes, cubemaps were actually one of the examples of "HDR in games" that I could think of, but wouldn't you need to render to those? The HDR window (with some post-blooming) is a more interesting example, as you could probably get away with just keeping it static.

(Btw, I still remember running that ATI sphere demo when I first got a card that could do it, it was very impressive then...)
 
PeterT said:
Yes, cubemaps were actually one of the examples of "HDR in games" that I could think of, but wouldn't you need to render to those? The HDR window (with some post-blooming) is a more interesting example, as you could probably get away with just keeping it static.

(Btw, I still remember running that ATI sphere demo when I first got a card that could do it, it was very impressive then...)

We might well want to have some background textures (skies and suchlike) which are static rather than dynamically rendered. Sure, it'd be nice to have everything rendered in realtime, but sometimes it's just not practical. However, a background cubemap tends to need a very high resolution, or it's pretty easy to see the pixels.

But yes, I'm sure a lot of content wouldn't need to be, and I wouldn't propose losing 24-bit textures :)
 
see colon said:
You wouldn't take a hit in precision if it was lossless. I find it ridiculous that we use lossy compression on low color depth assets (making them look even worse) that usually cover the majority of a scene for performance reasons, then sprinkle in a couple of huge, high color depth, uncompressed assets to slow everything down. I'd rather see the bandwidth spent on ridiculously high-res art than on increasing the color range to make high-light and low-light situations have a bit less banding.

If there were any way to get a fixed compression ratio that was lossless, there are probably a few people who would like to hear about it. And patent it :)

Non-fixed ratios would be kind of interesting to implement for a texture unit.
 
D3D10 introduces a shared-exponent RGBE format, and with bit-logic instructions you can interpret 32-bit textures any way you want.
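Out of curiosity, here's a rough CPU-side sketch of what that bit logic might look like, assuming a 9-9-9-5 layout with the red mantissa in the low bits and an exponent bias of 15 (the function name and layout details are mine, not from any SDK):

Code:
#include <cmath>
#include <cstdint>

// Rough sketch: decode one 9-9-9-5 shared-exponent texel with plain
// bit logic. Assumed layout: R in bits 0-8, G in 9-17, B in 18-26,
// shared 5-bit exponent in bits 27-31, bias 15, 9 mantissa bits.
void decodeRGB9E5(uint32_t texel, float rgb[3])
{
    uint32_t r =  texel        & 0x1FF;
    uint32_t g = (texel >> 9)  & 0x1FF;
    uint32_t b = (texel >> 18) & 0x1FF;
    int e = int(texel >> 27);

    // Each channel is mantissa * 2^(e - bias - mantissaBits).
    float scale = std::ldexp(1.0f, e - 15 - 9);
    rgb[0] = r * scale;
    rgb[1] = g * scale;
    rgb[2] = b * scale;
}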
 
Xmas said:
D3D10 introduces a shared-exponent RGBE format, and with bit-logic instructions you can interpret 32-bit textures any way you want.

Yeah, we can fart around in shaders but it's not the same as having stuff supported in the ROP or sampling/filtering units natively.
 
MrWibble said:
Or we could just use pixels with more range and precision and not worry about special case hacks... :)
I still think 32-bpp is enough. Mid/low-end PC hardware and console hardware will be on a 128-bit memory bus for the foreseeable future, so it will make a big difference in performance, especially when alpha blending. FP10 is not exactly LDR, and a shared-exponent format (say RGBE in a 9-9-9-5 layout, which nears FP16 in range) would be even better. I honestly think FP10 is more than enough to avoid the artifacts you're talking about.
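To show what I mean, here's a toy encoder for that kind of shared-exponent format: take the exponent from the brightest channel and quantise all three mantissas against it. The names and rounding are my own, and I'm assuming non-negative inputs; real hardware would do this in the ROPs:

Code:
#include <algorithm>
#include <cmath>
#include <cstdint>

// Toy 9-9-9-5 shared-exponent encoder (assumes non-negative inputs).
uint32_t encodeRGB9E5(float r, float g, float b)
{
    const int bias = 15, mantissaBits = 9;
    float maxc = std::max({r, g, b});

    int e = 0;
    if (maxc > 0.0f) {
        std::frexp(maxc, &e);                    // maxc = f * 2^e, f in [0.5, 1)
        e = std::min(std::max(e + bias, 0), 31); // clamp to the 5-bit field
    }
    // Quantise each channel as c * 2^(mantissaBits - (e - bias)).
    float scale = std::ldexp(1.0f, mantissaBits - (e - bias));
    uint32_t rm = std::min<uint32_t>(511, uint32_t(r * scale + 0.5f));
    uint32_t gm = std::min<uint32_t>(511, uint32_t(g * scale + 0.5f));
    uint32_t bm = std::min<uint32_t>(511, uint32_t(b * scale + 0.5f));
    return rm | (gm << 9) | (bm << 18) | (uint32_t(e) << 27);
}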

This method of storing values also allows you to very easily change between formats. Simply alter the target value (i.e. 0.25 in the example), and that's it. Everything else will take care of itself. For cross-platform titles you can use the formats available. PS3/XB360/DX9/DX10/ATI/NV/budget/high-end will all have different choices being optimal for quality/performance.

Yes, you're right - the problem then is that you could get quite a bit of oscillation happening, rather than just a gradual interpolation towards a stable value. That can of course be compensated for to some degree, but it would still worry me to have a scheme based on feedback from a "broken" result. Especially where this feedback is happening right in front of someone at 60fps...
Oscillation can be mathematically guaranteed not to happen for an overdamped system. The example I posted earlier adjusted to the light much faster than is necessary or even realistic (a rate of 2 orders of magnitude per frame), so the feedback constant will be smaller.

Also, the "broken" result makes the system more stable due to the nature of clamping. If the scale is adjusted by a factor f, the calculated average will change by either the same factor or a factor slightly closer to unity. For example, increase the scale factor by 10, and the average will increase by a factor less than or equal to 10.
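To make the damping concrete, here's a toy version of the feedback loop. The names and the 0.25 target are placeholders; k is the feedback constant, and any k in (0, 1] keeps the update overdamped:

Code:
#include <cmath>

// Each frame, nudge the exposure scale towards whatever makes the
// measured average luminance hit the target, instead of jumping
// there at once. Working in log space keeps the response symmetric
// for over- and under-exposure.
float adaptExposure(float scale, float measuredAvg,
                    float targetAvg = 0.25f, float k = 0.1f)
{
    if (measuredAvg <= 0.0f)
        return scale; // nothing meaningful to adapt to
    float error = std::log2(targetAvg / measuredAvg);
    return scale * std::exp2(k * error); // k = 1 would jump in one frame
}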
 
Mintmaster said:
I still think 32-bpp is enough. Mid/low-end PC hardware and console hardware will be on a 128-bit memory bus for the foreseeable future, so it will make a big difference in performance, especially when alpha blending. FP10 is not exactly LDR, and a shared-exponent format (say RGBE in a 9-9-9-5 layout, which nears FP16 in range) would be even better. I honestly think FP10 is more than enough to avoid the artifacts you're talking about.
9+9+9+5 = 32 bits for colour.
Alpha bits = 32 - colour bits = 0 bits.

Yeah, 32 bpp helps heaps when you've got alpha blending, but only if you've got a TBDR, and hey, you might not even need the bandwidth savings then. If you have an immediate-mode renderer you probably need a few more bits; maybe a 48-bit format would be a compromise between 32 and 64 bits.
 
DeanoC said:
That's a very interesting idea that I hadn't really thought about.
Essentially predictive compression of the framebuffer format based on the last N frames' average luminosity.
Why thank you. :D Good to hear it's not falling on deaf ears.

That's an interesting way of putting it. The way I see it, it's simply an iris. A CCD, film, or the retina has only a fraction of the dynamic range of the camera/eye as a whole. The behaviour of an iris is very similar to what I'm describing.
 
bloodbob said:
9+9+9+5 = 32 bits for colour.
Alpha bits = 32 - colour bits = 0 bits.
Note that 'RGBE' has no 'A' in it. ;)

Destination alpha is very rarely used, especially when you're doing real HDR without tagging objects for bloom. Even if you want to keep a bit for alpha, knock one off the exponent and the dynamic range is still 8 million (2^(8+2^4-1)). There's a whole bunch of possibilities. Get rid of the sign bit (not really needed for colour anyway), and 8-8-8-4-4 (RGBEA) gives you 4 bits of alpha and the same 8M dynamic range.
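As a quick sanity check of those figures, the largest-to-smallest-nonzero ratio for N mantissa bits and an E-bit shared exponent is roughly 2^(N + 2^E - 1):

Code:
#include <cmath>
#include <cstdio>

// Approximate dynamic range of a shared-exponent format.
double dynamicRange(int mantissaBits, int exponentBits)
{
    return std::exp2(mantissaBits + (1 << exponentBits) - 1);
}

int main()
{
    std::printf("9-9-9-5:   %.3g\n", dynamicRange(9, 5)); // ~1.1e12, near FP16
    std::printf("8-8-8-4-4: %.3g\n", dynamicRange(8, 4)); // ~8.4e6, the 8M above
}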

bloodbob said:
Yeah, 32 bpp helps heaps when you've got alpha blending, but only if you've got a TBDR, and hey, you might not even need the bandwidth savings then. If you have an immediate-mode renderer you probably need a few more bits; maybe a 48-bit format would be a compromise between 32 and 64 bits.
Uhh, what? 32-bpp saves you bandwidth no matter what, especially in an IMR. Look at how much faster graphics power is increasing relative to bandwidth.
 
Mintmaster said:
Note that 'RGBE' has no 'A' in it. ;)

Destination alpha is very rarely used, especially when you're doing real HDR without tagging objects for bloom.
Why else would you need alpha blending? If you're not having overlapping transparencies in screen space you could simply do one ping-pong and it's probably not going to cost you much. Or you aren't going to use alpha-blended HDR textures.

Uhh, what? 32-bpp saves you bandwidth no matter what, especially in an IMR. Look at how much faster graphics power is increasing relative to bandwidth.
Uhh, I never said it didn't save you bandwidth. I said you might not need it, seeing as you don't have to use all the bandwidth to keep pulling data from the framebuffer for multiple layers of transparency, as well as all the Z-writes.
 
bloodbob said:
Why else would you need alpha blending? If you're not having overlapping transparencies in screen space you could simply do one ping-pong and it's probably not going to cost you much. Or you aren't going to use alpha-blended HDR textures.
I don't get what you're saying. Destination alpha is not needed for alpha blending. How often do you need to store a value in the alpha channel of your framebuffer? Just because the storage format is 10-10-10-2 or 9-9-9-5 doesn't mean the blend units operate in this format, nor does it mean the pixel shader units output in this format. 99% of the time your blend factor is determined by a value in the pixel shader, i.e. source alpha, not by something in the alpha channel.
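To illustrate, the classic blend setup (OpenGL here, assuming a live context) only ever looks at the alpha the shader outputs:

Code:
#include <GL/gl.h>

// Classic alpha blending: the factor is the *source* alpha the pixel
// shader just produced, so the framebuffer's alpha channel is never
// read and never needs to exist.
// result.rgb = src.a * src.rgb + (1 - src.a) * dst.rgb
void enableSourceAlphaBlending()
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}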

Uhh, I never said it didn't save you bandwidth.
This is what you said: "Yeah, 32 bpp helps heaps when you've got alpha blending, but only if you've got a TBDR"

32-bpp as opposed to 64-bpp helps you with IMR also, and almost always. Less space, less bandwidth. ROP units are held back by BW for 32-bpp as it is, whether blending is enabled or not.
 
Current HDR implementations have a tendency towards overbrightening, rather than keeping the brightest portions of the image reasonably bright and letting shadows become very dark. Brothers in Arms 3 looks very good, IMHO, because they dared to have high-contrast lighting with almost black shadows.
 