Is all you need for HDR a floating point rendering buffer?

Ok, I have a question about HDR...

Is all you need in a game engine for proper HDR to change the framebuffer to 16 bits per component and then add a final rendering pass to convert it to 8 bits for display on our current displays? (This is assuming we're running an NV4x chip with floating point blending/filtering.)

Is HDR basically something that's not so much an implementation issue, but something that just needed proper hardware to exploit its capabilities correctly?

Finally, is there any way to use a floating point framebuffer on R3x0/NV3x cards?

Thanks!
 
You need floating point blending as well, since transparencies and other effects require you to do math with the framebuffer.

But essentially yes: you just render to a higher dynamic range and use that information to post-process the screen when you sample down to 8 bits.
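If it helps, here's a minimal CPU-side sketch of what that final pass boils down to, assuming a simple Reinhard-style operator (a real engine would run this as a fullscreen fragment program over the FP16 buffer, and the exact operator varies):

Code:
#include <algorithm>
#include <cmath>
#include <cstdint>

// Maps one HDR channel value (possibly > 1.0) down to an 8-bit display value.
std::uint8_t toneMapChannel(float hdr)
{
    float mapped = hdr / (1.0f + hdr);            // compress [0, inf) into [0, 1)
    mapped = std::pow(mapped, 1.0f / 2.2f);       // rough gamma for the display
    return static_cast<std::uint8_t>(std::min(mapped, 1.0f) * 255.0f + 0.5f);
}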
 
So you most definitely need floating point blending?

In a very well-known engine developer's .plan file, he mentions a workaround for not having any blending operation on R300/NV30 cards...

Developer who is very well known said:
The workaround is to copy the part of the
framebuffer you are going to reference to a texture, and have your fragment
program explicitly add that texture, instead of having the separate blend unit
do it.

So what would happen if you just changed the framebuffer in any game engine to 16 bits? Would that be considered HDR? Or does it actually need to perform the downsampling and the post-processing effects to be considered HDR rendering?

Finally, with HDR... could you change the framebuffer of a game engine to, say, 32 bits per component or even 64 bits per component if the hardware supported it? Just out of curiosity...
 
XxStratoMasterXx said:
So what would happen if you just changed the framebuffer in any game engine to 16 bits? Would that be considered HDR? Or does it actually need to perform the downsampling and the post-processing effects to be considered HDR rendering?

In theory yes - but chances are you won't see too much difference by just making the change.

HDR has two advantages over an 8-bit fixed function (FF) buffer - more precision in the lower range, and a higher range (above 1.0).

The extra precision in the lower range would likely improve dark places in the game.

The higher range only makes a difference when a written value is read back.
That happens with blending and with render-to-texture effects.

For example, if a shiny object or light source has a brightness higher than 1.0 and you see it through, say, a 50% transparent material, you'd see it capped at 0.5 with an FF buffer, but you see it more realistically when an FP buffer is used.
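Here's that example as a quick worked snippet (the 4.0 source brightness is just an assumed value to make the arithmetic visible):

Code:
#include <algorithm>
#include <cstdio>

int main()
{
    const float source    = 4.0f;                            // "shiny" value, well above 1.0
    const float fixedFunc = std::min(source, 1.0f) * 0.5f;   // FF buffer clamps first: 0.50
    const float floatBuf  = source * 0.5f;                   // FP buffer keeps it: 2.00, still overbright
    std::printf("FF buffer: %.2f   FP buffer: %.2f\n", fixedFunc, floatBuf);
    return 0;
}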

One particular example of a render-to-texture effect is the sun reflecting in the water.

Note that many of these effects can be more or less emulated with 8-bit targets through careful scaling, making the buffer hold, say, the [0..2] range.
But this costs precision - you lose quality in darker regions.
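A rough sketch of that scaling trick (just the encode/decode math, nothing engine-specific):

Code:
#include <algorithm>
#include <cstdint>

// Squeeze a [0..2] range into an 8-bit channel by halving on write and
// doubling on read. Every value now has half the precision it would have
// had, which is where the quality loss in dark regions comes from.
std::uint8_t encodeRange2(float value)            // value in [0, 2]
{
    const float scaled = std::min(std::max(value * 0.5f, 0.0f), 1.0f);
    return static_cast<std::uint8_t>(scaled * 255.0f + 0.5f);
}

float decodeRange2(std::uint8_t stored)
{
    return (stored / 255.0f) * 2.0f;              // back to the [0, 2] range
}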

Another thing that FP targets make feasible is calculating in linear color space - but that requires modifications to the shaders and/or sampling states and/or source art (there are various ways).
This would likely require some work from the map designers, for example to change the intensity of lights.
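As a rough sketch of the linear-space idea (gamma 2.2 is used here as an approximation of the display response; in practice this would live in the shaders or sampling states):

Code:
#include <cmath>

// Convert the stored colour to linear space, do the lighting math there,
// then convert back only for display.
float lightInLinearSpace(float storedColour, float lightIntensity)
{
    const float linear = std::pow(storedColour, 2.2f);   // gamma -> linear
    const float lit    = linear * lightIntensity;        // lighting belongs in linear space
    return std::pow(lit, 1.0f / 2.2f);                   // linear -> gamma for display
}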

Note that you won't automatically see the results above 1.0 - that's where a bloom effect can be handy. Rendering into an FP buffer makes it possible to reuse the buffer as the bloom source (the bloom needs the >1.0 values, of course). Again, this is possible to do with a separate rendering pass for the bloom, as is done in many games and in ATI's R9700 launch demo.
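A minimal sketch of such a bright-pass, assuming the bloom source keeps only the energy above 1.0:

Code:
#include <algorithm>

// Zero unless the pixel is overbright; the result is blurred and added back on top.
float brightPass(float hdrValue)
{
    return std::max(hdrValue - 1.0f, 0.0f);
}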
 
XxStratoMasterXx said:
So you most definitely need floating point blending?

In a very well-known engine developer's .plan file, he mentions a workaround for not having any blending operation on R300/NV30 cards...

This is a well-known workaround, which is painful to implement and can be extremely slow in many cases.
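In OpenGL terms the copy-then-add looks roughly like this (the function, texture object and rectangle below are illustrative, not taken from the .plan):

Code:
#include <GL/gl.h>

// Before drawing a translucent surface, grab the part of the FP framebuffer
// it covers into a texture the fragment program can read.
void copyBackdrop(GLuint backdropTex, int x, int y, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, backdropTex);
    // Copies the (x, y, w, h) framebuffer rectangle into the texture.
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, w, h);
    // The surface is then drawn with a fragment program that samples
    // backdropTex at the fragment's screen position and adds it to the
    // surface colour itself, replacing the missing blend unit.
}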
 
Hyp-X said:
This is a well-known workaround, which is painful to implement and can be extremely slow in many cases.

And it would make AA (as it's currently implemented in hardware) useless, right?
 
So basically, if I changed the framebuffer to 16 bits per component, would I have a "64-bit color" engine?

Also, if you have a 16-bit framebuffer, does that mean "HDR" and the stuff you make around it (shaders etc.) is considered HDR content? Basically, what defines an engine's ability to do "HDR rendering"?
 
I was reading through the .pdf file again, and I don't understand something.

It says you need "floating point arithmetic". Well, does that just mean you need floating point fragment program support to do the actual lighting calculations? (Implying that an "HDR" fragment program must be written, otherwise the higher precision render target is useless?)
 
Hyp-X said:
What AA?
There's no hardware AA implementation for FP targets... yet

I wasn't clear enough. I was asking if using this render-to-texture HDR workaround would make MSAA, as it's currently implemented, useless in these circumstances.
 
XxStratoMasterXx said:
I was reading through the .pdf file again, and I don't understand something.

It says you need "floating point arithmetic". Well, does that just mean you need floating point fragment program support to do the actual lighting calculations? (Implying that an "HDR" fragment program must be written, otherwise the higher precision render target is useless?)

Pretty much.
 
Hyp-X said:
XxStratoMasterXx said:
So you most definitely need floating point blending?

In a very well-known engine developer's .plan file, he mentions a workaround for not having any blending operation on R300/NV30 cards...

This is a well-known workaround, which is painful to implement and can be extremely slow in many cases.
Is this workaround MRT or something else?
 
pat777 said:
Hyp-X said:
XxStratoMasterXx said:
So you most definitely need floating point blending?

In a very well-known engine developer's .plan file, he mentions a workaround for not having any blending operation on R300/NV30 cards...

This is a well-known workaround, which is painful to implement and can be extremely slow in many cases.
Is this workaround MRT or something else?

"ping-ponging" floating point render targets.
 
So what would this "HDR" fragment program have to do? Also, do you need to write any special code to use floating point filtering and blending, or is that more of a hardware issue than a software issue?

Also, what type of "HDR" is in Half-Life 2?
 
pat777 said:
Is this workaround MRT or something else?

No, it isn't.
It doesn't use multiple targets at the same time, so no MRT is needed.

Don't forget that MRT can be easily emulated on hardware that doesn't support it.
Again - of course - that comes with a performance hit.
 
Mordenkainen said:
Hyp-X said:
What AA?
There's no hardware AA implementation for FP targets... yet

I wasn't clear enough. I was asking if using this render-to-texture HDR workaround would make MSAA, as it's currently implemented, useless in these circumstances.

It might - depending on what you want to do in those additive passes.
You can render with MSAA into a texture but that has to be down-filtered by the time you start to use the texture.

Yet, the question is quite theoretical.
That would only matter for cards that support MSAA for FP targets (so you'd have that to lose) but don't support FP blending (so you'd need the workaround). I don't think there'll ever be a card with such a combination.
 
XxStratoMasterXx said:
It says you need "floating point arithmetic". Well, does that just mean you need floating point fragment program support to do the actual lighting calculations? (Implying that an "HDR" fragment program must be written, otherwise the higher precision render target is useless?)

Yes. But that's a given on DX9 hardware using PS2.0 shaders.
R300+ and NV40+ actually use floating point for PS1.x shaders as well.

So the program should use PS2.0 shaders (which is not that common yet - which can be blamed partly on the FX series' PS2.0 performance, or rather the lack of it).

So what would this "HDR" fragment program have to do?

Also, do you need to write any special code to use floating point filtering and blending, or is that more of a hardware issue than a software issue?

Well, if it's supported in the hardware, it's a hardware issue; if it's not but it's still needed, then it's a software issue. ;)

FP16 filtering is not that important, as it might not be needed, and it can be emulated relatively easily.
FP16 blending is hard to avoid (unless you get rid of all transparencies and particles), and it's possible, but not too fast, to emulate.
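For what it's worth, emulating the filtering part yourself looks roughly like this (sampleNearest() is a hypothetical stand-in for a point-sampled FP16 texel fetch, done per channel):

Code:
#include <cmath>

float sampleNearest(int x, int y) { return 0.0f; }    // placeholder texel fetch

// Manual bilinear filter: four point samples around (u, v), weighted by hand.
float bilinearFP16(float u, float v)                  // u, v in texel units
{
    const int   x  = static_cast<int>(std::floor(u));
    const int   y  = static_cast<int>(std::floor(v));
    const float fx = u - x;                           // fractional offsets
    const float fy = v - y;
    const float top    = sampleNearest(x, y)     * (1 - fx) + sampleNearest(x + 1, y)     * fx;
    const float bottom = sampleNearest(x, y + 1) * (1 - fx) + sampleNearest(x + 1, y + 1) * fx;
    return top * (1 - fy) + bottom * fy;              // blend the two rows
}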
 