Custom resolve demo

Thanks for the demo, Humus! I just gave it a try on my HD2900 Pro/1G (800/1000) in Vista x64 with Catalyst 8.3.


1600x1200 (as that seems to be some common ground)

Results in fps, with & without custom resolve
No AA: 533/533 (obviously)
2x AA: 330/299
4x AA: 211/347
8x AA: 101/300
Now, that's strange.

One more go at 2560x1600
No AA: 265/265
2x AA: 161/146
4x AA: 99/181
8x AA: 47/155
The same relationship holds here. Bandwidth (at least from VRAM) should not be an issue at a rate of 128 GB/s.
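A quick back-of-the-envelope check of that (my assumption: an FP16 RGBA target at 2560x1600 with 8xAA):

Code:
2560 x 1600 x 8 samples x 8 bytes  ~= 262 MB read per resolve
262 MB x 47 fps                    ~= 12 GB/s, far below the card's peak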


edit:
Even more strangely: if I 'fly' out of the castle so that only some background texture is in view, I hit a wall at 50 fps with custom resolve. If I switch off CR, fps jump to 250ish. What exactly is the demo doing with CR? No edge-checking whatsoever, just every pixel gets custom resolved?
 
Results in fps, with & without custom resolve
No AA: 533/533 (obviously)
2x AA: 330/299
4x AA: 211/347
8x AA: 101/300
Now, that's strange.

One more go at 2560x1600
No AA: 265/265
2x AA: 161/146
4x AA: 99/181
8x AA: 47/155

Grrr, now I'm completely puzzled about how the sampling rate relates to those numbers here!? :???:

Here is my re-run, comparing 2xAA and 4xAA (1600x1200):

Code:
       CR on   CR off
2xAA:   330  |  277
4xAA:   220  |  372
It seems the frame rate rankings are practically inverted between the two modes... duh!
 
Radeon HD 3870

So, directly compared, the numbers would look as follows:
Code:
2xAA
            HD3870     HD2900 XT+
           (823/1200)   (800/1000)
CR on         330         330
CR off        277         299

-----
4xAA
            HD3870     HD2900 XT+
CR on         220         211   
CR off        372         347
yes?
 
8800GTS-640
Forceware 174.74 Vista 64-bit
600 core / 1600 shader / 900 memory
1680x1050 8xAA

Tested from a few different angles/distances. Custom resolve seems to be around 5-8% slower than the standard resolve on average. At 4xAA the difference is much smaller.
 
The fact remains, however, that if we start using increasingly non-linear tone mapping operators, other assumptions of linearity (such as texture filtering) are going to become a problem.

I'm not so sure it's a problem, actually. With HDR rendering we're rendering in linear space, so filtering linearly seems to be exactly what you want, more so than in traditional fixed-point rendering to non-sRGB-corrected buffers. In the end we're just remapping a bunch of light values. If the tonemap operator is reasonably sensible I can't see any problems happening, regardless of whether a particular light value came from a filtered texture or however else it arrived in the render target. Hope I'm explaining my line of thought well there. :)

I take it you can't comment on these "clever tricks"?

I don't think it's super-sensitive, but I'll stay on the safe side.
 
What exactly is the demo doing with CR? No edge-checking whatsoever, just every pixel gets custom resolved?

No edge-detection or anything. Just looping over the samples, tonemapping and averaging. Really simple code. Check resolve.shd for the shader.
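For the curious, here's a minimal sketch of what that looks like in D3D10 HLSL (hypothetical names and a stand-in tonemap operator, not the demo's actual code; see resolve.shd for that):

Code:
Texture2DMS<float4, 4> hdrBuffer;   // the 4x multisampled HDR render target

float3 Tonemap(float3 c)
{
    // stand-in operator; the demo's real one lives in resolve.shd
    return c / (c + 1.0);
}

float4 main(float4 pos : SV_Position) : SV_Target
{
    int2 coord = int2(pos.xy);
    float3 sum = 0;

    // Loop over the samples, tonemap each one, then average.
    [unroll]
    for (int i = 0; i < 4; i++)
        sum += Tonemap(hdrBuffer.Load(coord, i).rgb);

    return float4(sum / 4.0, 1.0);
}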
 
I'm not so sure it's a problem, actually. With HDR rendering we're rendering in linear space, so filtering linearly seems to be exactly what you want, more so than in traditional fixed-point rendering to non-sRGB-corrected buffers. In the end we're just remapping a bunch of light values. If the tonemap operator is reasonably sensible I can't see any problems happening, regardless of whether a particular light value came from a filtered texture or however else it arrived in the render target. Hope I'm explaining my line of thought well there. :)
My concern is that texture filtering assumes that whatever function you're going to apply to compute the final colour/illumination of a surface is reasonably linear with respect to the underlying texture data. This is generally true before tone mapping, but not necessarily after.

For instance, consider a "black and white" checkerboard texture with values 0 and 1. Now consider a really bright light that when modulated with these values transforms the exit luminance to - say - 0 and 1024. Now if you're using a linear tone mapping operator, these will get mapped back down to 0 and 1 (with ideal exposure), and averaged to 0.5. This is the same result that you'd get if you did texture pre-filtering, retrieved a 0.5 from the texture and tone mapped that.

However with a highly non-linear tone mapping operator I think you can see how that would break down. It also breaks down with extreme exposure values (consider simply a "clamp" tone mapping operator to [0,1]).
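To put numbers on the clamp case (my arithmetic, reusing the checkerboard values from above):

Code:
float T(float x) { return saturate(x); }         // the "clamp" tonemap operator

// Pre-filter the texture, then tonemap the lit result:
float preFiltered = T((0.0 + 1024.0) * 0.5);     // T(512) = 1.0

// Tonemap each sample, then average (the supersampled reference):
float reference = (T(0.0) + T(1024.0)) * 0.5;    // (0 + 1) / 2 = 0.5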

Now it's not totally clear to me that this will be a problem in practice - particularly if you choose a "reasonable" tone mapping operator, use relatively good exposure values and don't have any insanely bright lights. However, I do have a few mathematical concerns and was interested to see whether you could come up with any problems if you - for instance - tried to use a checkerboard texture with a really bright light. I'd be interested in seeing comparison shots of that with HDR and tone mapping against the LDR version, with as much texture filtering as the hardware will give you.
 
No edge-detection or anything. Just looping over the samples, tonemapping and averaging. Really simple code. Check resolve.shd for the shader.
Sorry, I am not very good at programming. :)

Does that mean that you send n samples for every pixel back to the shader to do the resolve regardless of geometry, whereas the default AA method does the usual thing, i.e. an increased amount of sampling only at geometry borders?
 
My concern is that texture filtering assumes that whatever function you're going to apply to compute the final colour/illumination of a surface is reasonably linear with respect to the underlying texture data. This is generally true before tone mapping, but not necessarily after.
Bear in mind that a "normal D3D texture" is normally filtered incorrectly. A texture is (in the vast majority of cases) in gamma 2.2 space. When a pixel shader/TMU filters that texture it assumes the texture is in linear space. Bang, this texture is filtered incorrectly.
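A quick worked example (my own numbers, assuming a pure 2.2 power curve rather than the exact sRGB transfer function):

Code:
// Two adjacent texels, black and white, stored in gamma 2.2 space: 0.0 and 1.0.
// The TMU averages the stored (gamma) values:
float filtered = (0.0 + 1.0) * 0.5;                      // 0.5 in gamma space
float shown    = pow(filtered, 2.2);                     // ~0.22 in linear light
// Correct filtering would decode to linear first, then average:
float correct  = (pow(0.0, 2.2) + pow(1.0, 2.2)) * 0.5;  // 0.5 in linear light
// The incorrectly filtered texel ends up less than half as bright as it should.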

http://www.mytechlibrary.com/pages/...ct_-_gamma_through_the_rendering_pipeline.asp

Jawed
 
Sorry, I am not very good at programming. :)

Does that mean that you send n samples for every pixel back to the shader to do the resolve regardless of geometry, whereas the default AA method does the usual thing, i.e. an increased amount of sampling only at geometry borders?

The default MSAA always increases sampling, regardless of geometry borders. It is only visible at the borders though (should be easy to understand why).
 
Bear in mind that a "normal D3D texture" is normally filtered incorrectly. A texture is (in the vast majority of cases) in gamma 2.2 space. When a pixel shader/TMU filters that texture it assumes the texture is in linear space. Bang, this texture is filtered incorrectly.
Yes, certainly, but in the general case it doesn't end up being *too* wrong, as the inverse "wrong" operation happens when writing to the framebuffer. Certainly for some effects like subsurface scattering, filtering in non-linear space is a big problem, and on modern hardware there's really no excuse not to do it "properly" anymore. (GPU Gems 3 has a great chapter on all of this, btw.)

However, the gamma curve isn't as aggressive as some tone mapping functions can be, and certainly you're rarely dealing with contrast ratios as extreme as with HDR. Definitely both are technically "wrong", but the gamma problem is both easier to solve and less severe than the problems that tone mapping can theoretically introduce.
 
The default MSAA always increases sampling, regardless of geometry borders. It is only visible at the borders though (should be easy to understand why).
Right, but those samples skip the pixel shader, which seems not to be the case with CR.
 
However, the gamma curve isn't as aggressive as some tone mapping functions can be, and certainly you're rarely dealing with contrast ratios as extreme as with HDR. Definitely both are technically "wrong", but the gamma problem is both easier to solve and less severe than the problems that tone mapping can theoretically introduce.
No, HDR in itself is not wrong - that's like saying photography is wrong.

Jawed
 
No, HDR in itself is not wrong - that's like saying photography is wrong.
I'm not saying HDR is "wrong" - I'm saying if you're applying a highly non-linear tone mapping operator, then texture pre-filtering (i.e. mipmapping, etc) is wrong. By "wrong", I mean not equivalent, or necessarily even close, to what you'd get by super-sampling the frame buffer and averaging the results once the texture is composed with lighting, etc. In particular, the signal-processing/anti-aliasing arguments applied to generating and selecting mipmap levels do not hold if the output is highly non-linear.

It's the same argument as why you want to tone map before MSAA resolve really, except that it's unreasonable to tone map before texture filtering...
 
My concern is that texture filtering assumes that whatever function you're going to apply to compute the final colour/illumination of a surface is reasonably linear with respect to the underlying texture data. This is generally true before tone mapping, but not necessarily after.

But I don't think that's a problem, or even wrong. In fact, I would say it's correct to filter linearly, regardless of the tonemapping operator. In the domain where the texture filter lives (linear lighting written to the render target), a linear filter is the right thing. The mapping from this buffer to final colors is then about as relevant as how you map a photographic HDR image to a fixed-point image. I mean, you can tonemap a perfectly fine HDR photo to garbage, but that doesn't make the source image wrong in any way.

For instance, consider a "black and white" checkerboard texture with values 0 and 1. Now consider a really bright light that when modulated with these values transforms the exit luminance to - say - 0 and 1024. Now if you're using a linear tone mapping operator, these will get mapped back down to 0 and 1 (with ideal exposure), and averaged to 0.5. This is the same result that you'd get if you did texture pre-filtering, retrieved a 0.5 from the texture and tone mapped that.

However with a highly non-linear tone mapping operator I think you can see how that would break down. It also breaks down with extreme exposure values (consider simply a "clamp" tone mapping operator to [0,1]).

I don't think it would "break down". It may return something different from 0.5, but you don't want 0.5 either.
 
Does that mean that you send n samples for every pixel back to the shader to do the resolve regardless of geometry, whereas the default AA method does the usual thing, i.e. an increased amount of sampling only at geometry borders?
Right, but those samples skip the pixel shader, which seems not to be the case with CR.

The rendering of the scene is the exact same between the two. It's still standard multisampled rendering, so the pixel shader is only executed once per pixel, rather than per sample. The only thing that differs is the resolve: the traditional way calls Resolve() and then passes the result through a tonemap shader, while the CR way samples the render target and tonemaps it sample by sample.
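In shader terms, a hedged sketch of the traditional path's fullscreen pass (hypothetical names, same stand-in tonemap operator as in the earlier sketch) for contrast with the per-sample loop above:

Code:
Texture2D<float4> resolvedTex;   // the output of the hardware Resolve()

float3 Tonemap(float3 c) { return c / (c + 1.0); }   // stand-in operator

float4 main(float4 pos : SV_Position) : SV_Target
{
    // One load per pixel: the samples were averaged *before* tonemapping.
    float3 hdr = resolvedTex.Load(int3(pos.xy, 0)).rgb;
    return float4(Tonemap(hdr), 1.0);
}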
 
I'm not saying HDR is "wrong" - I'm saying if you're applying a highly non-linear tone mapping operator, then texture pre-filtering (i.e. mipmapping, etc) is wrong.

I'll have to disagree and say Jawed has a point. Texture filtering and mipmapping are only concerned with producing a correct HDR image. What happens to that image later on is irrelevant to how you produce the correct HDR image. Having the mipmap filtering be linear is the right thing, regardless of how you decide to map it in the end, because you want things to be linear in your HDR buffer. You can of course then tonemap it to garbage if you like, but you can do that with any HDR photo too. How you arrived at the HDR image (whether rendered or a photo) is irrelevant to the tonemap operator, and what the tonemap operator does is irrelevant to how you render a correct HDR image.
 