AA/AF enhancements

OpenGL guy said:
3dcgi said:
Also, the only way for an edge detection algorithm to catch all edges including intersection edges is to do full scene AA or to transform all the edges before starting rasterization. At least those are the only ways I know of.
Or just do MSAA :) MSAA will catch intersections. The reason is that each subsample has its own Z value. Apparently, Matrox's FAA uses the same Z value for all subsamples, so it will break down on intersections.
Right. When I said full scene AA I was referring to MSAA or SSAA.
 
While multi-sampled AA using sparse grid sampling, coupled with a hierarchical z buffer and efficient compression, can provide fairly efficient AA, I think coverage mask techniques will be the long-term solution for traditional renderers.

They can provide very high sample counts such as 16x, 32x, or even 64x AA at a memory bandwidth cost of just a little over one bit per sample, with only one z sample point (including z slopes) stored per pixel, yet the results can be equivalent to fully sampling the z at every sample point.

When coupled with sparse grid sampling patterns and the other benefits found in techniques such as Z3 and FAA, coverage mask AA becomes very compelling.

TBRs don't really need coverage mask AA techniques for external memory bandwidth reasons. However, such techniques might require less silicon than brute force sampling for high sample AA such as 16x or higher.
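To make the storage cost concrete, here is a rough sketch (illustrative C++ with made-up field names, not any shipping design) of what a Z3/FAA-style per-pixel fragment record could look like: one coverage bit per sample, a single z plus slopes, and one color.
Code:
#include <cstdint>

// Illustrative per-pixel fragment record for a coverage-mask AA scheme
// (Z3/FAA style, 16 samples). Field names are invented for this sketch.
struct CoverageFragment {
    uint16_t coverage;   // one bit per sample covered by this fragment
    float    z;          // depth at a single reference point in the pixel
    float    dzdx, dzdy; // depth slopes; stand in for 16 stored z values
    uint32_t color;      // RGBA8 color shared by all covered samples
};

// Reconstruct depth at a sample offset from the stored plane equation --
// this is what makes one z (plus slopes) equivalent to sampling z at
// every sample point.
inline float sampleDepth(const CoverageFragment& f, float dx, float dy) {
    return f.z + f.dzdx * dx + f.dzdy * dy;
}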
 
3dcgi said:
That argument doesn't make any sense to me. The observations make the argument for the actual AA technique to be more like multisampling as Xmas said. If the hardware was able to 16x supersample pixels the fall back supersampling mode would have used a spare sample pattern.
Did you mean sparse sampling pattern?

Why? nVidia GeForce cards have (with some drivers) supported up to 16x SSAA.
 
Perhaps I can chime in here.

I used/operated a Parhelia for about a year, and I'm now using a 9800 Pro...so I have first hand knowledge of FAA, the pros/cons, how it compares, etc.

If it were possible to work out all the kinks with FAA, it would be the most awesome implementation IMHO. I mean, when you take a good look at some game where FAA is fully functional, the results are truly amazing. As has been pointed out numerous times, the problem is that Matrox's method didn't work all/most of the time... If I were to hazard a percentage, I would say it was about 80% effective, perhaps a little less.

I'm highly impressed with the 9800's AA modes...but FAA sure did look good...damn good. If Matrox can iron this out with their next chip and possibly offer even more samples, they could have a serious winner on their hands.
 
OpenGL guy said:
Actually, no. Imagine drawing a quad as two triangles with a checkerboard texture applied. If you actually did supersampling on the edges, then you would see a seam where the two triangles meet. This doesn't happen with multisampling because the same texture sample is used regardless of which triangle the edge pixel falls in (the sample point is always in the center).
But can't that lead to problems with the texture being sampled "out of range"?
 
Simon F said:
But can't that lead to problems with the texture being sampled "out of range"?
That's something I've always wondered myself. It's easy to think of a scenario like this:
Code:
- - - 
o x - 
o o -
Where the o's are the samples covered by the current triangle being rendered, and the x is the texture sample position.

If this is a texture that's meant to be clamped to the edge, how are the texture coordinates chosen? After all, if the texture is wrapped instead of clamped, there could be problems if the opposite side of the texture is of a significantly different color (which would be the reason to clamp instead of wrap, I should think). Of course, if the texture is set to wrap anyway, there shouldn't be any problem, as it should be about the same color anyway.
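As a rough illustration of the problem (hypothetical C++, not any particular chip's method): the UV at the pixel center comes from evaluating the triangle's plane equation there, so it can land slightly outside [0,1], and the address mode then decides whether the fetch stays at the near edge (clamp) or jumps to the opposite edge (wrap).
Code:
#include <algorithm>
#include <cmath>

// Illustrative only: evaluate u from the triangle's plane equation at the
// pixel center (x, y). If the center lies outside the triangle, as for the
// x in the diagram above, u can come out slightly below 0 or above 1.
float interpolateU(float u0, float dudx, float dudy, float x, float y) {
    return u0 + dudx * x + dudy * y;
}

// The address mode then decides which texels are actually fetched.
float addressClamp(float u) { return std::min(std::max(u, 0.0f), 1.0f); }
float addressWrap(float u)  { return u - std::floor(u); } // -0.01 -> 0.99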
 
OpenGL guy said:
3dcgi said:
Also, the only way for an edge detection algorithm to catch all edges including intersection edges is to do full scene AA or to transform all the edges before starting rasterization. At least those are the only ways I know of.
Or just do MSAA :) MSAA will catch intersections. The reason is that each subsample has its own Z value. Apparently, Matrox's FAA uses the same Z value for all subsamples, so it will break down on intersections.

Interesting. So that'd mean if you used different Z values per sample (which *would* logically degrade performance because you'd have to store more Z values, but compression techniques would help a lot here), most (or all?) of Matrox's FAA IQ problems might be fixed?


Uttar
 
Simon F said:
But can't that lead to problems with the texture being sampled "out of range"?

If you use mipmapping, then using any miplevel other than 0 goes out of range anyway, because it contains out-of-range texels when filtered.

A model that would break down from MSAA would break down from mipmapping much sooner.
 
Uttar said:
Interesting. So that'd mean if you used different Z values per sample (which *would* logically degrade performance because you'd have to store more Z values, but compression techniques would help a lot here), most (or all?) of Matrox's FAA IQ problems might be fixed?

Yes.
It would also be equivalent to ATI's MSAA solution. :)
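A rough sketch of the distinction (illustrative C++ only, not a description of any actual chip): with per-sample z, each covered sample is depth-tested against its own stored value, so intersections resolve sample by sample; with a single shared z, the whole coverage mask passes or fails together, which is why intersection edges get no AA.
Code:
#include <cstdint>

constexpr int kSamples = 4;

// Per-sample depth test, MSAA style: each covered sample is compared
// against its own stored z, so two interpenetrating triangles resolve
// correctly along the intersection inside a pixel.
void depthTestPerSample(uint32_t coverage, const float fragZ[kSamples],
                        float depthBuf[kSamples], uint32_t& surviving) {
    surviving = 0;
    for (int s = 0; s < kSamples; ++s) {
        bool covered = ((coverage >> s) & 1u) != 0;
        if (covered && fragZ[s] < depthBuf[s]) {
            depthBuf[s] = fragZ[s];
            surviving |= 1u << s;
        }
    }
}

// Shared-z variant (as FAA is described above): one comparison decides the
// whole mask, so an intersection inside the pixel is kept or rejected as a
// block -- no per-sample resolution, hence no AA there.
void depthTestSharedZ(uint32_t coverage, float fragZ,
                      float& depthBufZ, uint32_t& surviving) {
    surviving = (fragZ < depthBufZ) ? coverage : 0u;
    if (surviving) depthBufZ = fragZ;
}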
 
Hyp-X said:
Yes.
It would also be equivalent to ATI's MSAA solution. :)
No, because it's still an ordered-grid method, and I don't think the Parhelia's method uses gamma-correct combining of the samples.
 
Uttar said:
Interesting. So that'd mean if you used different Z values per sample (which *would* logically degrade performance because you'd have to store more Z values, but compression techniques would help a lot here), most (or all?) of Matrox's FAA IQ problems might be fixed?
I think the main problem here is that it would defeat most of the purpose of FAA.

That is, the point of FAA is not just to anti-alias only triangle edges, but to anti-alias only those triangle edges that need it. Especially for dense geometry, storing a Z value per sample would bring the performance of FAA down quite far, and it would need a much larger memory footprint.
 
Hyp-X said:
If you use mipmapping, then using any miplevel other than 0 goes out of range anyway, because it contains out-of-range texels when filtered.
Why? The lower detail MIP map is still applied to the same area. It's just lower in detail.
 
Consider a case with 3 mipmap levels and bilinear interpolation, with the largest (base) level being 4x4. If you set the texture coordinates to {u=1/8, v=1/8}, you will, at the 4x4 level, sample precisely at the center of the texel at the upper-left corner of the mipmap. At the 2x2 level, {1/8, 1/8} will no longer sample precisely at the center of a texel, but do bilinear interpolation between 4 "texels", 3 of which are outside the texture map.

That said, the issue that sampling at the center of the pixel for multisampling can sample a location outside the polygon proper does mean that Gouraud colors may be computed with values outside the standard range [0,1], and thus need additional clamping circuits that were not needed otherwise.
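To spell out the arithmetic (a small illustrative C++ snippet, assuming the usual convention that texel i's center sits at (i + 0.5)/size), one axis is enough to see it; in 2D, three of the four texels end up outside.
Code:
#include <cmath>
#include <cstdio>

// Illustrative: with texel i's center at (i + 0.5) / size, print which
// texel indices a bilinear fetch at coordinate u touches along one axis.
void bilinearFootprint(float u, int size) {
    float t  = u * size - 0.5f;    // position in texel units
    int   i0 = (int)std::floor(t); // left texel of the bilinear pair
    float w  = t - i0;             // weight of the right texel
    std::printf("%dx%d level: texels %d and %d, weights %.2f / %.2f\n",
                size, size, i0, i0 + 1, 1.0f - w, w);
}

int main() {
    bilinearFootprint(0.125f, 4); // 4x4: texels 0 and 1, weights 1.00 / 0.00
    bilinearFootprint(0.125f, 2); // 2x2: texels -1 and 0, weights 0.25 / 0.75
}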
 
Chalnoth said:
3dcgi said:
That argument doesn't make any sense to me. The observations make the argument for the actual AA technique to be more like multisampling as Xmas said. If the hardware was able to 16x supersample pixels the fall back supersampling mode would have used a spare sample pattern.
Did you mean sparse sampling pattern?

Why? nVidia GeForce cards have (with some drivers) supported up to 16x SSAA.
That was a typo. I meant sparse sampling, not spare. What I meant is that if Matrox internally used a 4x4 grid for supersampling with FAA, why wouldn't they choose sparse samples from that grid for their 4x mode? Instead they use a 2x2 grid.
 
Hyp-X said:
Simon F said:
But can't that lead to problems with the texture being sampled "out of range"?

If you use mipmapping, then using any miplevel other than 0 goes out of range anyway, because it contains out-of-range texels when filtered.

A model that would break down from MSAA would break down from mipmapping much sooner.
It is true that filtering (especially isotropic) will be a nuisance, but I was thinking more along the lines of the highest resolution map. If there is correlation between the model and features in the texture (e.g. a big stripe running through the middle lining up, say, with the edge of the poly), then not taking the actual area of texture covered by the sub-pixels into account will be 'less correct'. Mind you, I'm probably just being a bit pedantic.

Hmmm... it just occurred to me that projective textures could go quite wrong if the sample is outside of the triangle.
 
Simon F said:
OpenGL guy said:
Actually, no. Imagine drawing a quad as two triangles with a checkerboard texture applied. If you actually did supersampling on the edges, then you would see a seam where the two triangles meet. This doesn't happen with multisampling because the same texture sample is used regardless of which triangle the edge pixel falls in (the sample point is always in the center).
But can't that lead to problems with the texture being sampled "out of range"?
Yes, this can be an issue and does appear from time to time. However, the grossest problems can be avoided by making your texture data "multitexture friendly".
 
OpenGL guy said:
Simon F said:
OpenGL guy said:
Actually, no. Imagine drawing a quad as two triangles with a checkerboard texture applied. If you actually did supersampling on the edges, then you would see a seam where the two triangles meet. This doesn't happen with multisampling because the same texture sample is used regardless of which triangle the edge pixel falls in (the sample point is always in the center).
But can't that lead to problems with the texture being sampled "out of range"?
Yes, this can be an issue and does appear from time to time. However, the grossest problems can be avoided by making your texture data "multitexture friendly".

I think you mean "multisample friendly" here, don't you? ;)
 
Chalnoth said:
Hyp-X said:
If you use mipmapping, then using any miplevel other than 0 goes out of range anyway, because it contains out-of-range texels when filtered.
Why? The lower detail MIP map is still applied to the same area. It's just lower in detail.

Yes the area is the same.

But take a texel in miplevel 1.
Presume a block filter.

texel_1[i][j] = (texel_0[2*i][2*j] + texel_0[2*i][2*j+1] + texel_0[2*i+1][2*j] + texel_0[2*i+1][2*j+1]) / 4

Now it is possible that "texel_1[i][j]" falls in that area, but some of the 4 texels it is derived from do not.
So now you have a texel affecting your output that originally did not.

Presume you have a textured triangle where all the texels inside the area are black and all other texels are white.
If you view the triangle so that only the largest miplevel is used, the triangle will be black.
If you start to move away, the edges of the triangle start to become gray.

PS: Actually, the previous explanation is precise for point sampling only, but the same effect is experienced with bilinear filtering.
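For reference, here's that downsampling step as a tiny illustrative C++ sketch (hypothetical 8x8 level-0 array); it is exactly where texels from outside the covered area get averaged into texels inside it.
Code:
// Illustrative 2x2 box filter used to build miplevel 1 from an 8x8
// miplevel 0. Even when texel_1[i][j] maps inside the triangle's
// footprint, up to three of the four level-0 texels it averages can lie
// outside it -- which is how the white texels bleed gray into the black
// triangle.
float boxFilter(const float texel_0[8][8], int i, int j) {
    return (texel_0[2*i][2*j]   + texel_0[2*i][2*j+1] +
            texel_0[2*i+1][2*j] + texel_0[2*i+1][2*j+1]) * 0.25f;
}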
 
Related to the subject of out-of-range texture samples:

Assuming ddx and ddy work on 2x2 pixel stamps simply by computing a difference between register values in adjacent stamp pixels, what happens when values in the stamp are missing (at the edges, due to z-rejects, etc.)? I.e.

Code:
0X
0X
or even
Code:
0X
XX

At first I thought that you would just run the pixel shader for the missing samples (allowing ddx/ddy to be computed as they normally are) and discard the actual results... But you might easily end up running a shader with invalid inputs. Any ideas on how to handle this?

Serge
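For what it's worth, here's a rough sketch (illustrative C++, invented names, no claim about real hardware) of the difference scheme described above, with the option floated in the post noted as a comment.
Code:
// The 2x2 stamp assumed above, with pixel indices:  0 1
//                                                   2 3
struct Stamp { float value[4]; };

// ddx/ddy as plain differences between adjacent stamp pixels (one coarse
// derivative per half of the stamp), as described in the post.
float ddx(const Stamp& s, int i) { return s.value[i | 1] - s.value[i & ~1]; }
float ddy(const Stamp& s, int i) { return s.value[i | 2] - s.value[i & ~2]; }

// One way to keep these differences defined when a stamp pixel is dead
// (edge, z-reject, ...) is the option floated above: still run the shader
// for it as a "helper" pixel with attributes extrapolated from the
// triangle's plane equations, and mask out its writes. The caveat from the
// post stands: extrapolated inputs can be invalid, so the helper's own
// results must never reach memory.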
 