HL2 and aliasing

Here's the situation that finally helped me see the problem.

Consider a (top-level) bump map that contains two surfaces angled at 90° to each other, roughly like:
Code:
    /\
  /    \
/        \
The eye sits directly over it. Reflections off the two sides go off to either side of the bump. Let us say that one of these corresponds to a green light, the other to a red light.

So when using a mipmap, you somehow still have to pick up both the green and the red and get a dull yellow result (50% of the red, 50% of the green), but you only have a single sample to play with...
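A quick numeric illustration of this (toy numbers; a simple dot-product term stands in for the actual reflections): averaging the shaded results of the two faces gives the dull yellow, while averaging the normals first - which is effectively what a mip-mapped bump map does - gives neither red nor green.
Code:
import numpy as np

# Two facets tilted +/-45 degrees, viewer straight above (2D: x, up).
n_left  = np.array([-1.0, 1.0]) / np.sqrt(2.0)
n_right = np.array([ 1.0, 1.0]) / np.sqrt(2.0)

def shade(n):
    # Toy lighting: a red light off to the left, a green light off to the right.
    red   = max(0.0, float(np.dot(n, [-1.0, 0.0])))
    green = max(0.0, float(np.dot(n, [ 1.0, 0.0])))
    return np.array([red, green, 0.0])            # (R, G, B)

# Average of the shaded results - what correct filtering should converge to:
print((shade(n_left) + shade(n_right)) / 2)       # ~[0.354, 0.354, 0] -> dull yellow

# Shade the averaged (i.e. flattened) normal - what the mipmap gives you:
n_flat = (n_left + n_right) / np.linalg.norm(n_left + n_right)
print(shade(n_flat))                              # [0, 0, 0] -> neither red nor green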

PSarge's example is the same thing writ large. For smooth, or nearly smooth, surfaces you can adjust the LOD of the environment map to sort out the problem, but for very bumpy surfaces this doesn't work terribly well.

The result is that at the moment high-resolution bump maps with sharp gradients don't work terribly well except when close to the camera - you have to sacrifice bump depth in the mipmaps.
 
That's why you have to use mip-mapped bump maps. Shader aliasing is a well-known problem (search for RenderMan shader aliasing). From a mathematical point of view, you have to make sure that your pixel shader function is sampled often enough to satisfy the Nyquist theorem. This is almost always impossible, especially on the current line of hardware. All you can do is minimize your aliasing issues. You can use filtered inputs in your pixel shader (use mip-maps for everything), minimize weird dependent texture lookups, etc. You can also use the derivative instructions (ddx, ddy) to do your own antialiasing scheme (by analytically computing an antialiased output, or by blending your function's result with a predefined mean value – most RenderMan shaders do similar things).
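For illustration, a minimal sketch of the "blend with the mean value" trick in Python, with a hypothetical hard-edged checker function standing in for the shading term and a filter-width argument standing in for what ddx/ddy would supply in a real shader:
Code:
import math

def checker(u):
    return float(math.floor(u) % 2)        # hard 0/1 stripes: aliases badly

CHECKER_MEAN = 0.5                         # known average value of the function

def aa_checker(u, filter_width):
    # Fade the raw result towards its mean as the pixel footprint (filter_width,
    # measured in stripe widths) grows; small footprints keep the detail, large
    # footprints converge to the alias-free average instead of a near-random 0 or 1.
    fade = min(max(filter_width, 0.0), 1.0)
    return (1.0 - fade) * checker(u) + fade * CHECKER_MEAN

print(aa_checker(3.2, 0.05))   # close up: ~0.975, essentially the raw stripe
print(aa_checker(3.2, 4.0))    # far away: 0.5, the stable mean
Close up the raw function comes through almost unchanged; as the footprint grows the result settles at the stable mean instead of flickering between 0 and 1.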
 
Hmm... well, would multiple sub-pixel shader processing, emphasized as an architecture feature, be feasible? Analyzing and marking redundant calculations, then looping back to finish the shader for the unique calculations, taking advantage of whatever redundancy/efficiency opportunities were found, would at least leave efficiency no worse than supersampling, right? Even if hardware were optimized for only two sub-pixel positions processed in this way, it seems it should show a marked improvement with regard to both supersampling and aliasing. I understand ddx/ddy is a way to implement this explicitly in shaders, but I was looking more for a transparent implementation methodology.

Good idea or bad idea? How much of this could be done effectively on the drivers/host using some mathematical principle analysis? How much would have to be done on the GPU by the scheduler?

Fetching texture lookups for each sub-pixel would be a cache and bandwidth challenge, but wouldn't the processing challenge for redundant operations be solved by more freely allocatable processing resources? I.e., wherever component utilization, or pipeline stage count versus per-operation clock-cycle demands, left idle resources that could be utilized? AFAICS, the challenge would be more straightforward than maximizing execution of a dependent instruction chain, since each sub-pixel would be discrete.

Hey, it is pretty easy to propose this stuff when you don't actually have to design the hardware, though it does seem like a direction that would benefit conditional branching in pixel shading to me. :p Heck, might as well throw in being able to allocate resources freely between vertex and pixel processing to increase the number of resources that can be brought to bear on the problem!

These thoughts remind me of Yet Another Discussion Involving SA, of course, as most AA discussions do, but I can't recall a keyword to home in on the specific thread I think I'm remembering, though I'm pretty sure I've found it before. Perhaps someone else will remember more clearly, as my search turned up too many discussions that touched on this for me to sift through right now.
 
I think that something like sub-pixel shader processing can be implemented on current hardware (the NV3x line). You can bias your pixel shader inputs using information from the ddx/ddy instructions and average the computed results. The problem is that this way you only reduce the artifacts. If you use 2 sub-samples you increase the sampling frequency by a factor of two, so you will just get artifacts elsewhere on the screen (where a lower mip-map level would be selected in classic texturing). I don't think that hardware vendors will invest in sub-pixel shader execution. They have much more important issues to overcome (e.g. crappy floating-point support: no blending, filtering, or supersampling).
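For illustration, a rough Python model of that derivative-biased averaging (the shading function and numbers are made up; in a real shader the offset would come from ddx/ddy):
Code:
import math

def shade(u):
    # Stand-in for some high-frequency shading term.
    return float(math.floor(u * 8.0) % 2)

def shade_2x(u, du_dx):
    # Two evaluations offset by +/- a quarter of the pixel footprint, averaged.
    # This doubles the effective sampling rate, so it pushes the aliasing out to
    # higher frequencies rather than eliminating it.
    return 0.5 * (shade(u - 0.25 * du_dx) + shade(u + 0.25 * du_dx))

u, du_dx = 0.37, 0.3          # pretend texture coordinate and its screen derivative
print(shade(u), shade_2x(u, du_dx))   # 0.0 vs 0.5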
 
As I mentioned when bringing up those instructions above, my understanding is that a ddx/ddy solution requires explicit shader construction, whereas what I was proposing was intended more as a general solution built into the pipelining implementation, which might also benefit discrete-value processing efficiency in general (conditional shaders).

As far as increased-sampling-frequency artifacts go, I'm not clear on when you're proposing things would be made worse than with one sample. It is basically intended as increased-efficiency supersampling of the shader output, with redundant-calculation analysis and sample-position selection left as fairly open questions (though part of the redundant-calculation solution is proposed as specific pipelining design capabilities).

I can see how repeated normal-map sample selection can create issues for a good image, but won't the calculations generate new lighting results for each pixel due to unique angle relationships, even when the same sample is re-used? Are the artifacts you have in mind related to something about this I'm not considering, or something that won't be addressed by the mip-map selection bias solutions generally applicable to supersampling?
 
Dio said:
you have to sacrifice bump depth in the mipmaps.

That shouldn't really be a 'huge' problem though. The farther you are away from a bump surface the smoother it would appear to be.

Now, if you are attempting to create large, deep bumps in a surface, then perhaps using normal maps isn't such a good idea. Using real geometry might be a better idea in such a case. Lighting with normal maps of course isn't perspective correct, so massive details are just going to look fake if you attempt it. If you do use geometry, you will also get the benefits of antialiasing when doing multisampling, which of course is exactly the problem we're trying to solve.

Now I must say, IMO, normal maps should only be used for small surface details. I tend to think of normal maps as just a better form of detail texturing.
 
Colourless said:
That shouldn't really be a 'huge' problem though. The farther you are away from a bump surface the smoother it would appear to be.
That's what I thought. But actually, it's wrong. My example above shows that
- if you viewed it at 'infinite distance' it should be a uniform yellow
- if you view it in any aliased manner, it will be a semirandom pattern of red and green
- flattening the bumps can never generate yellow.
 
Simon F said:
You can in a lot of cases: In a past job we had AA'd procedural textures in a raytracer using just one 'sample' and knowledge of the size of the area to be sampled.

Hi Simon! This only works for procedurals though.:)

-M
 
_arsil said:
That's why you have to use mip-mapped bump maps. Shader aliasing is a well-known problem (search for RenderMan shader aliasing). From a mathematical point of view, you have to make sure that your pixel shader function is sampled often enough to satisfy the Nyquist theorem. This is almost always impossible, especially on the current line of hardware. All you can do is minimize your aliasing issues. You can use filtered inputs in your pixel shader (use mip-maps for everything), minimize weird dependent texture lookups, etc. You can also use the derivative instructions (ddx, ddy) to do your own antialiasing scheme (by analytically computing an antialiased output, or by blending your function's result with a predefined mean value – most RenderMan shaders do similar things).

Yes, this is correct.

Supersampling the scene will get nowhere near what texture filtering can accomplish, so you are just burning bandwidth there.

However, if you filter the bump maps *too* much, then you will get too much specular highlight! It's almost a catch-22. If you increase the variation of the normals in the bump map to get a rougher surface, you'll get more aliasing (increased contrast).

The best solution would be to lower the specular component (which I don't think is what they want with HDR) and filter the maps.

-M
 
Mr. Blue said:
Simon F said:
You can in a lot of cases: In a past job we had AA'd procedural textures in a raytracer using just one 'sample' and knowledge of the size of the area to be sampled.

Hi Simon! This only works for procedurals though.:)

-M
But isn't that what a fragment program is these days? :)
 
Dio said:
Colourless said:
That shouldn't really be a 'huge' problem though. The farther you are away from a bump surface the smoother it would appear to be.
That's what I thought. But actually, it's wrong. My example above shows that
- if you viewed it at 'infinite distance' it should be a uniform yellow
- if you view it in any aliased manner, it will be a semirandom pattern of red and green
- flattening the bumps can never generate yellow.

Then you need a roughness/microfacet factor parameter in your texture as well as the normal. This way, as you average out extremely diverse normals, you can compensate for the "smoothing". The roughness can be used to alter the shading, be it specular lighting or environment map sampling.

The DC's bump map system had something along those lines.
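One way something like this could be implemented when building the mip chain (a sketch in the spirit of Toksvig's normal-map filtering trick; the formula and numbers are assumptions for illustration, not what the DC actually did): the averaged normal gets shorter the more the source normals disagree, and that shortfall is converted into a broader specular lobe instead of just a flatter bump.
Code:
import numpy as np

def downsample_with_roughness(normals, spec_power):
    # Average a footprint of unit normals; the result's length drops below 1
    # when they disagree, and that loss is converted into a reduced (broader)
    # specular exponent instead of being thrown away.
    n_avg = normals.mean(axis=0)
    length = np.linalg.norm(n_avg)
    ft = length / (length + spec_power * (1.0 - length))   # Toksvig-style factor
    return n_avg / length, ft * spec_power

block = np.array([[ 0.7, 0.0, 0.714],      # four diverse normals from a 2x2
                  [-0.7, 0.0, 0.714],      # footprint of a bumpy region
                  [ 0.0, 0.7, 0.714],
                  [ 0.0,-0.7, 0.714]])
n, power = downsample_with_roughness(block, spec_power=64.0)
print(n, power)                            # flat normal, exponent drops from 64 to ~2.4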
 
Let me try to clarify my opinion. Something like sub-pixel pixel shader execution (you can call it supersampling) is not a panacea! Imagine a function that generates perfect noise. Even if you take 1,000,000 sub-pixel samples you will still get aliasing. To eliminate aliasing you have to use noise that is frequency-limited.

You have to think about aliasing when you are coding a shader. Hardware can help, but it won't eliminate all issues.
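Restating the Nyquist point in symbols (my paraphrase of the argument above): with N sub-pixel samples per pixel, the shader output only stays alias-free if its highest frequency satisfies

f_max < (N / 2) * f_pixel

so extra samples only raise the limit linearly. Ideal white noise has an unbounded spectrum and breaks this condition for every finite N, while a band-limited noise can be constructed to stay under it.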
 
Simon F said:
you need a roughness/microfacet factor parameter in your texture as well as the normal.
That's a really useful idea; I had something similar in mind at some point, although it can start making the shader rather complicated. I didn't realise it had all been done before - but then again, everything has been done before somewhere...
 
Simon F said:
Dio said:
Colourless said:
That shouldn't really be a 'huge' problem though. The farther you are away from a bump surface the smoother it would appear to be.
That's what I thought. But actually, it's wrong. My example above shows that
- if you viewed it at 'infinite distance' it should be a uniform yellow
- if you view it in any aliased manner, it will be a semirandom pattern of red and green
- flattening the bumps can never generate yellow.

Then you need a roughness/microfacet factor parameter in your texture as well as the normal. This way, as you average out extremely diverse normals, you can compensate for the "smoothing". The roughness can be used to alter the shading, be it specular lighting or environment map sampling.

The DC's bump map system had something along those lines.

Wow. And I thought we only did this in the feature film business!;)

How would you guys implement such a microfacet lighting model, seeing as you'd need to implement the other factors (shadowing & masking, Fresnel, and a distribution function), unless you could blend your texture with some Lambertian/scattering texture?

The DC was definitely way ahead of its time.:)

-M
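For reference, one compact way such an evaluation can be written (the specific choices of a Blinn-Phong distribution, the Cook-Torrance shadowing/masking term, and Schlick's Fresnel approximation are illustrative assumptions):
Code:
import numpy as np

def cook_torrance(n, l, v, spec_power, f0):
    # D: normalized Blinn-Phong distribution, G: Cook-Torrance shadow/masking,
    # F: Schlick's Fresnel approximation. Returns the specular BRDF value.
    h = l + v
    h = h / np.linalg.norm(h)
    nl, nv, nh, vh = (max(float(np.dot(a, b)), 1e-4)
                      for a, b in ((n, l), (n, v), (n, h), (v, h)))
    D = (spec_power + 2.0) / (2.0 * np.pi) * nh ** spec_power
    G = min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)
    F = f0 + (1.0 - f0) * (1.0 - vh) ** 5
    return D * G * F / (4.0 * nl * nv)

n = np.array([0.0, 0.0, 1.0])                     # surface normal
l = np.array([0.0, 0.6, 0.8])                     # unit light direction
v = np.array([0.0, -0.6, 0.8])                    # unit view direction
print(cook_torrance(n, l, v, spec_power=32.0, f0=0.04))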
 
Mr. Blue said:
Wow. And I thought we only did this in the feature film business!;)

How would you guys implement such a microfacet lighting model, seeing as you'd need to implement the other factors (shadowing & masking, Fresnel, and a distribution function), unless you could blend your texture with some Lambertian/scattering texture?

The DC was definitely way ahead of its time.:)

-M
:oops: Oh heck! I didn't mean to imply all that was in the DC!!! :oops:

What I said about incorporating the roughness into the texture map as you down filter the normals was only sort-of related to what was in the DC.

The DC's bump map was just (intended) for dot-product calcs. If you were using it for lighting and wanted to approximate several lights in one pass you needed a way of reducing the dot product 'intensity' and so it had a "kludge" to average lights together.
 
_arsil said:
Let me try to clarify my opinion. Something like sub-pixel pixel shader execution (you can call it supersampling) is not a panacea! Imagine a function that generates perfect noise. Even if you take 1,000,000 sub-pixel samples you will still get aliasing. To eliminate aliasing you have to use noise that is frequency-limited.

unless you analytically integrate the noise function (which, of course, is impossible).:)

-M
 