Normal Mapping + Motion = Aliasing?

After playing Doom 3 in high quality mode for some time I've become accustomed to seeing shimmering on the edges of bump/normal mapped textures, even when AA is applied; the problem is aggravated further when the flashlight hits the aliased edge. What correlation, if any, between normal mapping and filtering causes this? I remember reading something about gradients and fragment programs being required for proper filtering without aliasing, but I can't seem to find the source.

Here is a pic of a similar case in the HL2 benchmark: http://www.beyond3d.com/misc/hl2/image.php?img=images/hdr3.jpg&comment=Half Life 2%.

Note: Thought this would be a refreshing topic to discuss now that developers are transitioning from fixed function to custom AA and filtering via fragment programs.
 
Right. Texture filtering doesn't work properly on normal maps, so you get aliasing unless something is done about it. You can either use one of a number of tricks to properly "filter" the normal map, or you can use supersampling to get rid of the aliasing, or you can use a blur filter.

Supersampling and the blur filter are okay, but supersampling can only reduce the aliasing so much and is very expensive in performance, while the blur filter has obvious drawbacks.

The best way, then, is to attempt to simulate supersampling by using a different algorithm for "filtering" the normal map. One way to do it might be to take a function of the gradients of the components of the normal vector from pixel to pixel, and apply more smoothing where those gradients are larger.
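To make that concrete, here's a rough CPU-side sketch in Python (the function names and the blend heuristic are made up for illustration, this is not real shader code): each normal is pushed toward its local average, more strongly where neighbouring normals differ.

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def gradient_smooth(normals, strength=1.0):
    """Blend each normal toward its neighbourhood average, more strongly
    where the normal changes quickly from texel to texel (large gradient)."""
    out = []
    n = len(normals)
    for i in range(n):
        left = normals[max(i - 1, 0)]
        right = normals[min(i + 1, n - 1)]
        # Gradient estimate: how different the two neighbours are.
        grad = math.sqrt(sum((r - l) ** 2 for l, r in zip(left, right)))
        # Local (tent-filtered) average normal.
        avg = normalize(tuple((l + 2 * c + r) / 4
                              for l, c, r in zip(left, normals[i], right)))
        t = min(1.0, strength * grad)  # blend factor grows with the gradient
        blended = tuple((1 - t) * c + t * a for c, a in zip(normals[i], avg))
        out.append(normalize(blended))
    return out

flat = normalize((0.0, 0.0, 1.0))
tilt = normalize((0.7, 0.0, 0.7))
row = [flat, flat, tilt, tilt]     # hard normal edge in the middle
smoothed = gradient_smooth(row)
```

In the flat region the gradient is zero and the normal passes through untouched; at the edge the normals are pulled toward each other, which is exactly the selective "smudge" described above.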
 
The problem is in the hardware, pretty much. AA is done with multisampling on most GPUs these days. These run the shader only once per pixel, not once per sample in the pixel. Only the z/stencil is supersampled. The only anti-aliasing that is performed is when multiple polygons partially cover a pixel. The result is then a weighted average of all samples, based on the coverage (which is taken from the supersampled z-buffer).
While this works fine on polygon edges, artificial edges created by normalmaps are not taken into account.

The best solution would perhaps be a modified multisampling AA algorithm that does multiple samples on normalmap edges. But I don't know if anyone has developed such an algorithm yet, and even if so, it would most probably require an updated GPU design to implement it.

Supersampling will of course work, but it will be more expensive (which is why supersampling was abandoned in favour of multisampling AA a few generations ago).

One way to get around it on today's hardware may be to use the dsx/dsy and texldd instructions. You can then calculate the gradient of the normalmap over a 2x2 block of pixels, and do a texture fetch with these gradients selecting the proper mipmap, where the smaller mipmaps contain downfiltered normals, which should have smooth edges.
I haven't tried this myself, so I don't know how well that would work.
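For what it's worth, here's a toy Python simulation of the idea on a 1D "normal map" (all helper names are mine; real hardware would do the LOD selection with dsx/dsy and texldd): build a mip chain of averaged normals, then pick the mip level from the texcoord gradient.

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / l for c in v)

def build_normal_mips(base):
    """Box-filter the normal map down, renormalising at each level,
    so coarser mips hold smoothly averaged ('downfiltered') normals."""
    mips = [base]
    while len(mips[-1]) > 1:
        prev = mips[-1]
        mips.append([normalize(tuple((a + b) / 2
                                     for a, b in zip(prev[i], prev[i + 1])))
                     for i in range(0, len(prev), 2)])
    return mips

def sample_with_gradient(mips, u, du):
    """Pick the mip whose texel size matches the footprint du (roughly
    what texldd does with the supplied gradients), then point-sample it."""
    lod = max(0, min(len(mips) - 1,
                     int(round(math.log2(max(du * len(mips[0]), 1.0))))))
    level = mips[lod]
    return level[min(int(u * len(level)), len(level) - 1)]

# 1D normal map: a hard bump edge between flat and tilted normals.
base = [normalize((0, 0, 1))] * 4 + [normalize((1, 0, 1))] * 4
mips = build_normal_mips(base)

near = sample_with_gradient(mips, 0.55, du=1 / 8)  # small footprint: sharp mip 0
far = sample_with_gradient(mips, 0.55, du=1.0)     # big footprint: averaged mip
```

Up close you get the raw tilted normal; far away you get a normal averaged across the edge, so the lighting transition is smooth instead of shimmering.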
 
Scali said:
The problem is in the hardware, pretty much. AA is done with multisampling on most GPUs these days. These run the shader only once per pixel, not once per sample in the pixel. Only the z/stencil is supersampled. The only anti-aliasing that is performed is when multiple polygons partially cover a pixel. The result is then a weighted average of all samples, based on the coverage (which is taken from the supersampled z-buffer).
While this works fine on polygon edges, artificial edges created by normalmaps are not taken into account.
MSAA also works for polygon intersections.
The best solution would perhaps be a modified multisampling AA algorithm that does multiple samples on normalmap edges. But I don't know if anyone has developed such an algorithm yet, and even if so, it would most probably require an updated GPU design to implement it.
Since normal mapping and MSAA are completely independent of each other, I don't see any chance of people going in the direction you suggest.
Supersampling will of course work, but it will be more expensive (which is why supersampling was abandoned in favour of multisampling AA a few generations ago).

One way to get around it on today's hardware may be to use the dsx/dsy and texldd instructions. You can then calculate the gradient of the normalmap over a 2x2 block of pixels, and do a texture fetch with these gradients selecting the proper mipmap, where the smaller mipmaps contain downfiltered normals, which should have smooth edges.
I haven't tried this myself, so I don't know how well that would work.
This is essentially supersampling in the shader and can be done even without MSAA being enabled. This is the ideal solution as you only supersample data that needs it. Plus, it's under application control so performance can be more easily controlled. One drawback is that not all platforms support the gradient functions.

-FUDie
 
Scali said:
The problem is in the hardware, pretty much. AA is done with multisampling on most GPUs these days. These run the shader only once per pixel, not once per sample in the pixel. Only the z/stencil is supersampled. The only anti-aliasing that is performed is when multiple polygons partially cover a pixel. The result is then a weighted average of all samples, based on the coverage (which is taken from the supersampled z-buffer).
While this works fine on polygon edges, artificial edges created by normalmaps are not taken into account.
I wouldn't say it's a hardware problem. The hardware offers all the necessary features, but it can't apply them automagically.

The best solution would perhaps be a modified multisampling AA algorithm that does multiple samples on normalmap edges. But I don't know if anyone has developed such an algorithm yet, and even if so, it would most probably require an updated GPU design to implement it.
Since you never know you're at a normalmap edge until you sample it, "adaptiveness" is hardly doable. However, you can take multiple samples for every pixel unconditionally, and it would still be faster than brute-force supersampling (since that includes all the calculations). Problem is, it requires tricky handling of several sets of texcoords, or gradient instructions, as you mentioned.

One way to get around it on today's hardware may be to use the dsx/dsy and texldd instructions. You can then calculate the gradient of the normalmap over a 2x2 block of pixels, and do a texture fetch with these gradients selecting the proper mipmap, where the smaller mipmaps contain downfiltered normals, which should have smooth edges.
I haven't tried this myself, so I don't know how well that would work.
You don't need texldd here, just take multiple samples at different positions in the texture (determined by fractions of the texcoord gradients).
texldd is mighty slow unfortunately :(
 
The problem is not so much the filter itself. Even if you have a 100% correct average normal for the area the sample covers it doesn't solve the problem. What you should be looking for is an average not of the normals but of the lighting results, which more or less means supersampling. But since that's too slow you should rather look for various tricks to work around the problem by adjusting specular intensity and power according to the amount of normal variation.
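One published trick along these lines is Toksvig's idea of using the length of the averaged (unnormalised) normal as a measure of variation, and shrinking the specular exponent and intensity accordingly. Here's a toy Python comparison (the adjustment formula is my reading of that technique, so take it as a sketch rather than a reference implementation):

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def specular(n, l, power):
    return max(0.0, sum(a * b for a, b in zip(n, l))) ** power

light = (0.0, 0.0, 1.0)
power = 32
# Two normals tilted +/-30 degrees: the footprint a coarse mip texel covers.
n1 = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
n2 = (-n1[0], 0.0, n1[2])

# Ground truth: average the *lighting results* (what supersampling computes).
truth = (specular(n1, light, power) + specular(n2, light, power)) / 2

# Naive mip: average the *normals*, renormalise, light once -> far too bright.
avg = tuple((a + b) / 2 for a, b in zip(n1, n2))
naive = specular(normalize(avg), light, power)

# Toksvig-style fix: the unnormalised average's length encodes the variation;
# use it to broaden the highlight (lower the exponent and scale the intensity).
na = math.sqrt(sum(c * c for c in avg))
ft = na / (na + power * (1.0 - na))
adjusted = ((1.0 + ft * power) / (1.0 + power)) * \
    specular(normalize(avg), light, ft * power)
```

Lighting the averaged normal produces a full-strength highlight where the true supersampled answer is nearly black; the variance-based adjustment pulls the result back toward the truth without taking any extra samples.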
 
MSAA also works for polygon intersections.

True, but they are still edges determined by polygon boundaries alone, not by any kind of map applied to them. So they fall under my category of 'polygon edges'.

Since normal mapping and MSAA are completely independent of each other, I don't see any chance of people going in the direction you suggest.

Well, if we look at MSAA as some kind of 'conditional supersampling', it would perhaps make sense to allow for extra conditions that would apply to problems such as the normalmap aliasing. That was the direction in which I was thinking anyway.

This is essentially supersampling in the shader and can be done even without MSAA being enabled.

Yes, but this solution would be for normalmaps only. MSAA would still be required for polygon edges.

This is the ideal solution as you only supersample data that needs it.

Yes, but it is a prefilter method, not a supersampling method. If you could put multiple samples in the multisample buffer and average those, with the modified MSAA idea I mentioned earlier, you'd get proper supersampling in areas that require it. This may look significantly better, but I am not sure. I suppose alternatively you could do adaptive supersampling with a dynamic branch (take 4 samples, calc gradients; if the gradients are below a certain threshold, calc light for only 1 normal, perhaps the average of the 4 samples, else calc light for all 4 samples and use the average).
Perhaps someone should code a few methods for antialiasing and compare quality and speed :)
(Sadly I don't have texldd-capable or dynamic-branch-capable hardware, so I can't test it myself).
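For anyone who does have such hardware, here's a CPU-side Python mock-up of the dynamic-branch variant (threshold and names invented for illustration; a real shader would branch on the same condition):

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def light(n, l, power=16):
    return max(0.0, sum(a * b for a, b in zip(n, l))) ** power

def shade_adaptive(samples, l, threshold=0.1):
    """Take 4 normal samples; if they barely differ, light their average
    once (cheap path), otherwise light each sample and average (slow path)."""
    avg = normalize(tuple(sum(c) / len(samples) for c in zip(*samples)))
    spread = max(math.sqrt(sum((a - b) ** 2 for a, b in zip(n, avg)))
                 for n in samples)
    if spread < threshold:                       # flat area: 1 lighting eval
        return light(avg, l)
    return sum(light(n, l) for n in samples) / len(samples)  # edge: 4 evals

L = (0.0, 0.0, 1.0)
flat = [(0.0, 0.0, 1.0)] * 4                     # uniform patch
edgy = [(0.0, 0.0, 1.0)] * 2 + [normalize((1.0, 0.0, 1.0))] * 2  # normal edge
```

Flat areas take the single-evaluation path and pay almost nothing; only the pixels straddling a normalmap edge pay for the four lighting evaluations.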
 
Normal maps with specular = aliasing, esp. with tighter highlights. Don't forget there's also in-surface aliasing based on texture calculations.
 
In his QuakeCon04 video, Carmack mentions the (current) plan to combat this type of aliasing: Locally (depending on something like gradients in the normal maps?) broadening the specular highlight.
He also mentions that the problem is not terribly evident in Doom 3, as the highlight is already quite big.
 
A good solution that I've seen is polynomial texture mapping (PTM). The math actually works out in a way that mip-map averaging is correct.

Unfortunately, you need 6 values to form the polynomial, and the math requires a few instructions. It won't be as fast as ordinary bumpmapping, but you get a rough form of self shadowing bumps, anisotropic lighting, and even interreflection for free, so I think it's worth it.

I don't think PTM will give you nice sharp specular highlights, though, and if you make the polynomial so that it does, you'll still get subdued highlights in the mipmaps. In the end you still have a quadratic, which is really not much better than a normal vector.

The ultimate solution is probably higher order spherical harmonic lighting, but that gets real expensive real quick. Either that or quartic PTM's, which may actually be quite doable.
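The mip-averaging claim is easy to sanity-check, by the way: the lit result is linear in the six PTM coefficients, so filtering the coefficients gives exactly the same answer as filtering the lit results. A quick Python check (random made-up texel coefficients; the six-term biquadratic basis in the projected light direction):

```python
import random

def ptm_eval(c, lu, lv):
    """Luminance as a biquadratic in the projected light direction:
    L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5."""
    a0, a1, a2, a3, a4, a5 = c
    return a0 * lu * lu + a1 * lv * lv + a2 * lu * lv + a3 * lu + a4 * lv + a5

random.seed(1)
# A 4-texel footprint with random (made-up) PTM coefficients per texel.
texels = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(4)]
lu, lv = 0.3, -0.5

# Mip filtering: average the six coefficients over the footprint...
avg_coeffs = [sum(t[i] for t in texels) / 4 for i in range(6)]
filtered_then_lit = ptm_eval(avg_coeffs, lu, lv)

# ...which equals averaging the lit results (what supersampling computes),
# because ptm_eval is linear in the coefficients.
lit_then_filtered = sum(ptm_eval(t, lu, lv) for t in texels) / 4
```

That linearity is exactly what an ordinary normal map lacks: the specular power function is applied after the normal, so averaging normals and averaging lit results disagree.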
 
While I haven't had a chance to play Doom 3 myself, I thought one of the reasons for using normal maps is that they reduce aliasing, as opposed to using highly tessellated geometry and doing the lighting at the vertex level.

It seems the dreaded aliasing can still be seen. Why don't GPU makers just concentrate on getting supersampling performance up and be done with it? Or is supersampling still not a good enough solution (ignoring the current level of performance)?
 
More info here.

AFAICS, PTMs are quite nice but need quite a bit more memory for the coefficients. Also, shader workload is greatly increased --- unless they are implemented in hardware.
So, IMO it is not a solution "for the time being"... ;)
 
V3 said:
It seems the dreaded aliasing, can still be seen. Why don't GPU makers just concentrate on getting super sampling performance up, and be done with it. Or is super sampling still not good enough solution (ignoring the current level of performance) ?
Well, the problem is that supersampling can't eliminate the aliasing. It can only reduce it, with a finite number of samples. It is quite possible, however, with alternative tricks, to almost completely eliminate aliasing (as MIP mapping does for color textures...it's not perfect, but it's pretty darned good).

An easy way to see how supersampling can break down is to look at alpha textures. If you have a video card that supports supersampling FSAA modes, you'll notice that no matter what level of FSAA you select, the wrong alpha-tested texture viewed in the wrong way will still end up looking like crap. I seem to remember this being the case, for example, on an older racing game that had chain link fences in places along the side of the road.
 