fp16 render target affects MSAA quality?

shuipi

Newcomer
Hi
I have an fp16 render target with 4x MSAA. After rendering the 3D scene I downsample it to an fp16 texture and do tone mapping. The result looks more jagged than if I had used an RGBA8 integer render target. Someone said it's because of the order of downsampling and tone mapping -- the later tone mapping made the jaggies reappear.

What are your opinions?
 
Yeah, theoretically you should tone map the pre-resolved samples (which is possible in DX10), although how much difference this makes will depend on the dynamic range of your scene. In particular, cases like a character backlit by the sun will look almost un-antialiased if you reverse the order of the MSAA resolve and tone mapping.

If you don't notice any difference at all with MSAA enabled though, perhaps something isn't working properly...
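
Just to illustrate the order dependence with some numbers -- this is a made-up CPU sketch, with placeholder sample values and a simple Reinhard-style curve standing in for a real tone mapper:

```cpp
#include <cstdio>

// Placeholder Reinhard-style curve, standing in for a real tone mapper.
float tonemap(float c) { return c / (1.0f + c); }

int main() {
    // Made-up 4x MSAA samples for an edge pixel: one sample hits the bright
    // backlit sky, three hit the dark character.
    const float samples[4] = { 50.0f, 0.02f, 0.02f, 0.02f };

    // Order A: resolve (average) the HDR samples first, then tone map.
    float resolved = 0.0f;
    for (float s : samples) resolved += s * 0.25f;
    float resolveThenTonemap = tonemap(resolved);      // ~0.93: nearly sky-bright

    // Order B: tone map each sample, then resolve.
    float tonemapThenResolve = 0.0f;
    for (float s : samples) tonemapThenResolve += tonemap(s) * 0.25f;
    // ~0.26: a sensible 1/4-sky, 3/4-character blend

    printf("resolve then tonemap: %.3f\n", resolveThenTonemap);
    printf("tonemap then resolve: %.3f\n", tonemapThenResolve);
    return 0;
}
```

In the resolve-first case the edge pixel ends up almost as bright as the sky behind the character, so the edge still looks aliased; tone mapping the samples first gives the expected gradient.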
 
So is there any way to fix this in DX9? Especially for ATi cards, since one of their features is HDR + MSAA together.
 
So is there any way to fix this in DX9? Especially for ATi cards, since one of their features is HDR + MSAA together.

Don't think so... the problem with DX9 is that you can't access the pre-resolved samples in order to tone map them, AFAIR. I don't recall seeing any hacks to work around this...
 
Don't think so... the problem with DX9 is that you can't access the pre-resolved samples in order to tone map them, AFAIR. I don't recall seeing any hacks to work around this...

Half-Life 2 (and other Source games?) does that in HDR mode. They handle shading and tone mapping in one pass, so the tone mapping happens before the AA resolve. That's the only game that does it, as far as I know.
 
I have an fp16 render target with 4x MSAA. After rendering the 3D scene I downsample it to an fp16 texture and do tone mapping. The result looks more jagged than if I had used an RGBA8 integer render target. Someone said it's because of the order of downsampling and tone mapping -- the later tone mapping made the jaggies reappear.

Actually, the problem you're seeing with edges and AA also exists with texture filtering. Prefiltering (mipmapping) and even the real-time filters (bilinear/trilinear/anisotropic) all happen before the tone mapping. With the wrong set of textures and the wrong lighting, this can reintroduce the flickering that texture filtering was supposed to take care of.
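
It's the same math as the MSAA case above -- a quick made-up example with the same placeholder curve, filtering halfway between a bright specular texel and a dark one:

```cpp
#include <cstdio>

float tonemap(float c) { return c / (1.0f + c); }  // same placeholder curve

int main() {
    float bright = 100.0f, dark = 0.01f;            // made-up HDR texel values

    // What the hardware does: filter the HDR texels, tone map afterwards.
    float filterThenTonemap = tonemap(0.5f * bright + 0.5f * dark);          // ~0.98, still "white"

    // What a flicker-free result would correspond to: filtering display-range values.
    float tonemapThenFilter = 0.5f * tonemap(bright) + 0.5f * tonemap(dark); // ~0.50

    printf("filter then tonemap: %.3f\n", filterThenTonemap);
    printf("tonemap then filter: %.3f\n", tonemapThenFilter);
    return 0;
}
```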
 
Half-Life 2 (and other Source games?) does that in HDR mode. They handle shading and tone mapping in one pass, so the tone mapping happens before the AA resolve. That's the only game that does it, as far as I know.
Do you know more about how they tone map the sub-pixel samples? Because, like others have said, only DX10 supports accessing sub-pixel samples.
 
Do you know more about how they tone map the sub-pixel samples? Because, like others have said, only DX10 supports accessing sub-pixel samples.

That's what I was wondering as well...
 
Do you know more about how they tone map the sub-pixel samples? Because, like others have said, only DX10 supports accessing sub-pixel samples.

If you do the tone mapping as the last step in your regular pixel shader, you get subpixel tone mapping automatically.
 
Half-Life 2 doesn't use an FP16 framebuffer. It renders everything in a single pass and does the tone mapping at the end of each shader.
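
Conceptually something like this (just a C++ sketch of the idea with a stand-in operator, not Valve's actual shader): the lit colour is tone mapped before it is written to the RGBA8 target, so the fixed-function resolve only ever averages display-range values.

```cpp
// Not Valve's code: just a C++ sketch of "tone map at the end of the shader".
struct float3 { float r, g, b; };

// Placeholder exposure-scaled Reinhard curve.
float3 tonemap(const float3& c, float exposure) {
    auto map = [&](float x) { x *= exposure; return x / (1.0f + x); };
    return { map(c.r), map(c.g), map(c.b) };
}

// Stand-in for the end of the pixel shader: the lit HDR colour is tone mapped
// before being written to the RGBA8 target. With MSAA, each triangle's output
// is written to the samples it covers, so the hardware resolve averages
// already tone-mapped (LDR) colours and edges stay antialiased in display space.
float3 shadePixel(const float3& hdrLighting, float exposure) {
    return tonemap(hdrLighting, exposure);
}
```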
 
If you do tone mapping in your pixel shader you lose the ability to compute exposure in a decent way (and I'm not that impressed by what HL2 is doing in that regard; often the exposure is just wrong, and it shows).
Any post-processing effect that works on the full dynamic range can't be applied either.
Given that we don't need to do post-processing at the subpixel level (for the time being at least), we should learn how to perform an AA resolve pass that is tone-mapping aware, so that it becomes possible to perform tone mapping after a multisampled floating-point render target has been resolved (bye-bye, plain linear interpolation of the subsamples...). See the sketch below.
Fixed-function AA resolve should be deprecated; we need to be more creative at that particular stage.
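
Something along these lines, maybe (a CPU sketch of a hypothetical programmable resolve, using the same placeholder curve as earlier in the thread -- nothing DX9 actually exposes): tone map each subsample, average, then map back to HDR so later post-processing and the final tone mapping still see a full-range value.

```cpp
#include <cstdio>
#include <vector>

// Same placeholder curve as above, plus its inverse (valid for c in [0, 1)).
float tonemap(float c)        { return c / (1.0f + c); }
float inverseTonemap(float c) { return c / (1.0f - c); }

// Hypothetical programmable resolve for one pixel: average in the tone-mapped
// domain, then map back to HDR so later passes still get a full-range value.
float resolvePixel(const std::vector<float>& samples) {
    float avg = 0.0f;
    for (float s : samples) avg += tonemap(s) / samples.size();
    return inverseTonemap(avg);
}

int main() {
    std::vector<float> samples = { 50.0f, 0.02f, 0.02f, 0.02f };
    float hdr = resolvePixel(samples);
    printf("resolved HDR value: %.3f\n", hdr);
    printf("tone mapped later:  %.3f\n", tonemap(hdr));  // ~0.26, matches per-sample tone mapping
    return 0;
}
```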
 
If you do tone mapping in your pixel shader you lose the ability to compute exposure in a decent way (and I'm not that impressed by what HL2 is doing in that regard; often the exposure is just wrong, and it shows).
Any post-processing effect that works on the full dynamic range can't be applied either.
Well, not necessarily. After rendering you can calculate the luminance as a post-processing effect and pass it on to the next frame to use in tone mapping. Of course, you'll always have a single-frame delay (and you'd be calculating the luminance from an LDR image), but I think it looks fine.

It gives reasonably good performance, anyway.
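
Roughly what that scheme looks like (a sketch with made-up names and adaptation constants): measure a log-average luminance as a post-process, then feed it into the exposure used one frame later.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Post-process step: log-average luminance of whatever buffer is available
// (an LDR image in this scheme). Names and constants are made up for the sketch.
float averageLuminance(const std::vector<float>& luminance) {
    double sum = 0.0;
    for (float l : luminance) sum += std::log(1e-4 + l);
    return static_cast<float>(std::exp(sum / luminance.size()));
}

struct ExposureState {
    float exposure = 1.0f;

    // Called once per frame with *last* frame's measured average luminance,
    // so the exposure used for tone mapping is always one frame late.
    void update(float lastFrameAvgLum, float dt) {
        float target = 0.18f / std::max(lastFrameAvgLum, 1e-4f);         // aim for middle grey
        exposure += (target - exposure) * (1.0f - std::exp(-2.0f * dt)); // smooth adaptation
    }
};
```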
 
The problem is not the latency but the fact that you're computing a luminance value from somewhat 'crippled' data. IMHO HL2 shows that: it's quite common to have underexposed or overexposed screen areas, and no matter how long you wait for the tone mapping to get 'there', nothing changes.
To be honest, if my memory serves me well, they also use a very simple tone mapping operator, one you wouldn't normally use but which I guess is kind of mandatory if you want to directly generate LDR pixels and determine the exposure from those pixels (e.g. they use a linear operator).
 
Tone mapping operators are not invertible (particularly since they often end up clamping out values based on the current exposure), so you can't reconstruct the proper scene luminance. Unfortunately the issue is particularly bad in the case where you need the luminance the most: when the exposure is changing quickly.
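
A tiny made-up example of that: with a linear operator plus clamp (roughly the HL2-style case mentioned above), very different scene luminances end up at the same LDR value, so a luminance measured afterwards can't distinguish them.

```cpp
#include <algorithm>
#include <cstdio>

// Linear operator plus clamp (roughly the HL2-style case mentioned above).
float tonemap(float c, float exposure) {
    return std::min(c * exposure, 1.0f);
}

int main() {
    float exposure = 0.5f;
    // A moderately bright wall and the sun both clamp to 1.0, so a luminance
    // measured from the LDR result can't tell them apart when deciding how
    // far (or how fast) the exposure should move.
    printf("%.2f %.2f\n", tonemap(2.0f, exposure), tonemap(200.0f, exposure));
    return 0;
}
```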

TBH it's a stretch to call what HL2 does "true HDR". It's more of a hybrid LDR/HDR implementation (even their assets are) that captures "some" of the effects of fully HDR rendering. I can understand why they chose to do what they did given their target hardware, but it's undesirable moving forward.

To reply to an earlier post, I'm not entirely convinced that texture filtering makes any assumptions about the dynamic range of the underlying data (it works entirely in frequency space). It *does* assume that the functions using the data (usually the BRDF, etc.) don't change the frequency content of the data, but reasonable tone mapping should not do that (it certainly shouldn't introduce higher frequencies!).

Thus I'm not totally convinced by the previous comment about texture filtering being improper with HDR rendering. With edge AA it's a bit clearer, since you're talking about super-sampling a non-band-limited (effectively infinite-frequency) signal, but even in that case it's not totally clear-cut how AA fits in with tone mapping, even in the offline rendering world.

I'd appreciate any references to material covering these things formally... my brief search hasn't turned up many relevant results.
 
Tone mapping operators are not invertible (particularly since they often end up clamping out values based on the current exposure), so you can't reconstruct the proper scene luminance.
But can't the luminance be accurately calculated and passed to the next frame instead of doing it in an additional pass?
ERK
 
But can't the luminance be accurately calculated and passed to the next frame instead of doing it in an additional pass?
ERK
If you want to use it in the following frames to do tone mapping within your color pass, then how do you (properly) compute the luminance in the first place?
 
If you want to use it in the following frames to do tone mapping within your color pass, then how do you (properly) compute the luminance in the first place?

You could cheat with the very first frame by creating a scene whose luminance you know almost exactly, and if there are errors you could try to correct them a bit more in each of the following frames... possible?
 
You could cheat with the very first frame by creating a scene whose luminance you know almost exactly, and if there are errors you could try to correct them a bit more in each of the following frames... possible?
Yeah, but over time the accumulated error will grow and that's it, end of the game :)
 
But can't the luminance be accurately calculated and passed to the next frame instead of doing it in an additional pass?
But you need the average luminance over the whole framebuffer (or at least over local regions for local tone mapping)... so you need to at least write it out to a separate render target and process that afterwards. If you have to do that anyway, why not just render the colour buffer in high precision to begin with and compute the average luminance directly from that?

There's no argument about old hardware and MSAA either, because old hardware can't do MRT+MSAA anyway...

(Not to mention nAo's valid points about accumulated error, depending on how you implement the "passing to the next frame".)
 