Nvidia's Programmable AA

trinibwoy

Finally?

http://patft1.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=2&p=1&f=G&l=50&d=PTXT&S1=nvidia.ASNM.&OS=an/nvidia&RS=AN/nvidia

Using jittered sub-pixel sample positions reduces the likelihood that a small primitive or fragment will be lost. Furthermore, the human perception system is tuned to detect regular patterns, appearing as aliasing and other artifacts. Jittering removes the regularity from a sampling pattern such as the pattern shown in FIG. 2A resulting in an image with reduced aliasing. Sub-pixel offset values to produce an irregular jittered sub-pixel sample position pattern may be programmed for use by Rasterizer 150 during scan conversion and coverage mask generation. However when a single jittered subpixel pattern is used for every pixel, aliasing will appear due to a perceived pattern. Therefore, a jitter pattern should vary from pixel to pixel, i.e. have a greater period, to effectively reduce aliasing.
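As a rough illustration (not the patent's actual mechanism; the table size, offset values, and index function below are all invented), per-pixel jitter can be sketched as indexing a small programmable table of sub-pixel offsets by pixel coordinate:

```python
# Illustrative sketch of per-pixel jittered sample positions.
# The 4-entry table and its contents are made up for this example;
# the patent describes a programmable list of sub-pixel offsets.
JITTER_TABLE = [
    (0.125, 0.375),
    (0.625, 0.875),
    (0.375, 0.625),
    (0.875, 0.125),
]

def sample_position(px, py):
    """Return a jittered sample position for pixel (px, py).

    Deriving the table index from the pixel coordinates makes the
    pattern vary from pixel to pixel (longer effective period, less
    perceived regularity) while staying fixed for a given pixel
    across frames."""
    idx = (px + 2 * py) % len(JITTER_TABLE)
    ox, oy = JITTER_TABLE[idx]
    return (px + ox, py + oy)
```

The key property is that neighbouring pixels pull different entries from the table, so the sampling grid has no short repeat that the eye can lock onto.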
 
Using jittered sub-pixel sample positions reduces the likelihood that a small primitive or fragment will be lost.
Errrr... how? Surely a small primitive is just as likely/unlikely to be sampled in a pixel with jittering as without.

What jittering does is move the sampling error (i.e. aliasing) from being predominantly low-frequency to being more high-frequency in nature.
 
Simon F said:
Errrr... how? Surely a small primitive is just as likely/unlikely to be sampled in a pixel with jittering as without.

I think you misunderstood. Jittering makes it less likely that a small primitive is lost entirely (i.e. it does not cover any sub-sample across all pixels that it touches) - it does not increase the probability that it would be sampled for a given pixel.
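That distinction can be shown with a toy example (the sliver geometry and offset values here are made up, not from the patent): a thin primitive that a single repeated sample position misses in every pixel it touches can still be caught somewhere when the offset varies per pixel.

```python
# Toy demonstration: a thin vertical sliver spanning the same x-range
# in every pixel of a row. With one fixed sample offset repeated in
# every pixel, the sliver is lost entirely; with per-pixel jitter it
# is hit in at least one pixel. All values are invented.
SLIVER_X = (0.20, 0.30)   # sub-pixel x-range the sliver covers
PIXELS = range(4)          # four pixels in a row

def covered(offset_x):
    return SLIVER_X[0] <= offset_x <= SLIVER_X[1]

# Single fixed pattern: one sample at x offset 0.5 in every pixel.
fixed_hits = [covered(0.5) for _ in PIXELS]

# Jittered: a different x offset per pixel.
jitter = [0.5, 0.75, 0.25, 0.9]
jitter_hits = [covered(jitter[p]) for p in PIXELS]
```

For any given pixel the hit probability is the same either way; jitter just stops every pixel from failing in the same place at once.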
 
Hmm this looks like it could be quite groovy.

I wonder if this means that reviewers will start using videos to demonstrate AA quality (reduced crawl).

Jawed
 
trinibwoy said:
I think you misunderstood. Jittering makes it less likely that a small primitive is lost entirely (i.e. it does not cover any sub-sample across all pixels that it touches) - it does not increase the probability that it would be sampled for a given pixel.
When I read "small" I automatically thought (sub)pixel-sized polygons. Silly me.
 
Jawed said:
I wonder if this means that reviewers will again start using videos to demonstrate AA quality (reduced crawl).
Just wanted to point out the word you left out.

Simon F said:
(sub)pixel-sized polygons. Silly me.
Indeed you are, because there's no such thing! :)

radeonic2 said:
Nice to see they joined the rest of us in 2002...
I'm still browsing the patent. Images would have helped greatly but from the patent this doesn't seem at all similar to ATI's. Of course, this patent was filed in 2003.
 
Nom De Guerre said:
Indeed you are, because there's no such thing! :)

If you have AA enabled it is possible that a polygon only hits one single sample inside a pixel. Objects that are far away are good candidates for such cases.
 
Nom De Guerre said:
Indeed you are, because there's no such thing! :)
I'm hoping you are just being facetious because, I suppose, denying their existence certainly can make one feel better.

Unfortunately they do exist and are a pain in the proverbial because of their extremely high frequency content.:cry:
 
Having read the patent, it is extremely interesting to see that it proposes varying sample positions for different pixels. The approach it proposes is rather... primitive at best, though. It is unclear whether that is programmable, or whether it is forced to do it in such a hacky manner. It also hints at a varying number of subpixels per pixel, but it never states that explicitly, so I guess that's just to make the "invention" cover more legal ground.

Uttar
 
Uttar said:
The approach it proposes for it is rather... primitive at best, though.

I thought the exact same thing before I got to the part about ensuring that a given pixel uses the same sampling pattern across every frame to avoid artifacts. I'm not sure how "programmable" you can get and still ensure that. I think it's "programmable" in the sense that the n-sized list of (x,y) offset pairs can be manipulated by the driver.
 
Nom De Guerre said:
I'm still browsing the patent. Images would have helped greatly but from the patent this doesn't seem at all similar to ATI's. Of course, this patent was filed in 2003.

There are images - just click the big red "Images" button at the top of the screen ;) Might have to follow the help link on the following page to get the TIFF viewer.
 
trinibwoy said:
I thought the exact same thing before I got to the part about ensuring that a given pixel uses the same sampling pattern across every frame to avoid artifacts. I'm not sure how "programmable" you can get and still ensure that. I think it's "programmable" in the sense that the n-sized list of (x,y) offset pairs can be manipulated by the driver.
Well, in an ideal world, you've got the hardware storing... I don't know... 256 samples, and guaranteeing unique sample positions in a 4x4 to 16x16 grid. You don't just reuse old values because it's cheaper.

The patent isn't extremely clear as to whether that's possible (although I doubt up to 256), or whether varying sample positions for different pixels requires the usage of the value "shifting"/"rotation" mechanism.


Uttar
 
Simon F said:
I'm hoping you are just being facetious because, I suppose, denying their existence certainly can make one feel better.

Unfortunately they do exist and are a pain in the proverbial because of their extremely high frequency content.:cry:

Sub pixel polygons suck.
Probably the biggest reason we still see relatively low polygon count models in games.
Anyone who's ever tried very high-res models quickly realises just how bad they can look, and just what real aliasing is.
 
Uttar said:
The patent isn't extremely clear as to whether that's possible (although I doubt up to 256), or whether varying sample positions for different pixels requires the usage of the value "shifting"/"rotation" mechanism.

Well it does mention that the number of offsets can be set based on resolution but I'm thinking it won't take that many different patterns for this to be effective.

In terms of selecting distinct patterns for each pixel it mentions moving lock-step through the list of offsets but also mentions using a lookup based on pixel position to determine the index of the offset(s) to use. Not sure if those two approaches are complementary in any way.
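As I read it, the two selection schemes could be sketched like this (the function names, offset values, and position hash below are my own invention, not from the patent):

```python
# Sketch of the two offset-selection schemes discussed: walking the
# offset list in lock-step as pixels are processed, vs. a lookup
# keyed on pixel position. All concrete values are invented.
OFFSETS = [(0.1, 0.6), (0.6, 0.1), (0.35, 0.85), (0.85, 0.35)]

def lockstep_index(pixel_number):
    # Step through the list in order as successive pixels arrive.
    return pixel_number % len(OFFSETS)

def position_index(px, py):
    # Derive the index from screen position, so a given pixel
    # always gets the same pattern in every frame (no temporal
    # artifacts), independent of processing order.
    return (px * 3 + py * 7) % len(OFFSETS)
```

If the lock-step counter is itself reset deterministically per scanline or tile, the two approaches could coincide, which may be why the patent mentions both.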
 
Luminescent said:
What processor architecture does this patent apply to, G70?

Don't think so. It isn't possible to change the AA sample pattern on G70, far less make it variable across different pixels.
 
Simon F said:
I'm hoping you are just being facetious
Yes, I was. Sorry :)

because, I suppose, denying their existence certainly can make one feel better.

Unfortunately they do exist and are a pain in the proverbial because of their extremely high frequency content.:cry:
Yes, such particular aliasing can suck but I think this is a matter of subjective tolerances. It's not that great of a problem in films (although I know there's a big difference). Filtering and density would help greatly.
 
Sub pixel polygons suck.
Probably the biggest reason we still see relatively low polygon count models in games.
Anyone who's ever tried very highres models quickly realises just how bad they can look, and just what real aliasing is.
Well... if we stopped *rasterizing* polygons and moved towards *sampling* them... :) I know, total pipe dream, but if hardware just keeps getting bigger for the sake of getting bigger, it'll just become pointless after a while.
 