Stochastic sampling

Bumpyride

Newcomer
I have a question:

Would it be beneficial for 3D hardware to implement jittered pixel locations? I mean this separately from jittered AA sampling, where the pixel location is still on a fixed grid. The point of my bringing this up is that stochastic sampling doesn't produce aliasing; it produces noise instead. In most situations I'm aware of, we are much less sensitive to noise than to a regular, patterned error like aliasing - but again, I don't have much experience with graphics.

I actually remember John Carmack mentioning this in a speech or a blog update somewhere but I haven't been able to find it. It popped into my head and I'd like to discuss the costs and benefits of implementing something like this - i.e. why hasn't it been done?
 
Only if you have many samples, 16 or more. Below that, a sparse regular grid will probably look much better, at a lower implementation cost, than stochastic sampling.
 
Bumpyride said:
I have a question:

Would it be beneficial for 3D hardware to implement jittered pixel locations? I mean this separately from jittered AA sampling, where the pixel location is still on a fixed grid. The point of my bringing this up is that stochastic sampling doesn't produce aliasing; it produces noise instead. In most situations I'm aware of, we are much less sensitive to noise than to a regular, patterned error like aliasing - but again, I don't have much experience with graphics.

I actually remember John Carmack mentioning this in a speech or a blog update somewhere but I haven't been able to find it. It popped into my head and I'd like to discuss the costs and benefits of implementing something like this - i.e. why hasn't it been done?
I've actually played around with this a little bit. A lot of it depends on how many samples you take, and what kind of filtering you are doing to get intermediate values. Say for example that you take the exact same number of samples that you would if you just sampled each pixel, but now you've got a stochastic pattern. You'll need to figure out how much each sample point should contribute to each pixel element on your screen based on some heuristic. You'll end up with no regular pattern, but the total amount of information displayed will probably be significantly less, because you'll have some sample points very close to each other while others are farther apart. The coverage will be sub-optimal compared to the regular grid.
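Just to make the filtering step concrete, here's a rough sketch of the kind of thing I mean - one jittered sample per pixel cell, then each display pixel gathers nearby samples weighted by distance. The tent filter and the one-pixel radius are just assumptions I picked for illustration, not how you'd necessarily do it in hardware:

Code:
// Sketch: one jittered sample per pixel cell (same total count as a regular
// grid), then each display pixel is reconstructed from nearby samples weighted
// by a simple tent filter on distance.  Filter radius etc. are made up.
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <vector>

struct Sample { float x, y, value; };

float shade(float x, float y)            // stand-in for the actual renderer
{
    return (std::sin(x * 0.5f) * std::cos(y * 0.5f) > 0.0f) ? 1.0f : 0.0f;
}

int main()
{
    const int   W = 16, H = 16;
    const float radius = 1.0f;           // assumed filter radius in pixels

    // One stochastic sample per pixel cell.
    std::vector<Sample> samples;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float sx = x + (float)std::rand() / RAND_MAX;
            float sy = y + (float)std::rand() / RAND_MAX;
            samples.push_back({ sx, sy, shade(sx, sy) });
        }

    // Reconstruct: gather samples within 'radius' of the pixel centre and
    // weight by a tent (1 - d/radius).  Coverage is uneven, hence blur/noise.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float cx = x + 0.5f, cy = y + 0.5f, sum = 0.0f, wsum = 0.0f;
            for (const Sample& s : samples) {
                float d = std::hypot(s.x - cx, s.y - cy);
                if (d < radius) { float w = 1.0f - d / radius; sum += w * s.value; wsum += w; }
            }
            std::printf("%c", wsum > 0.0f ? (sum / wsum > 0.5f ? '#' : '.') : '?');
        }
        std::printf("\n");
    }
}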

For lower numbers of samples, a better algorithm would be to use an n-queen solution. You basically take the screen size (say 1280x1024), compute the number of pixels P, and from that number determine an AxB sized "board" such that the number of queens Q on the board = P. You then precompute a number of different boards, and for each frame stochastically pick one of the precomputed sample patterns.

You'd probably also want to pick n-queen solutions that have close to optimal board coverage; choosing among the *many* solutions you'd get at such a large board size could be an interesting problem in and of itself. Calculating the n-queen solutions would be *VERY* slow, but on the plus side you don't need all the solutions, and once you've figured them out you never need to do it again.
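Here's a toy version of that precompute step, shrunk down to an 8x8 board just so it finishes instantly (the real board would be screen-sized, and the solver correspondingly smarter):

Code:
// Toy offline precompute: solve N-queens on a small board (8x8 here, purely an
// assumption for illustration) and keep each solution as a candidate sample
// pattern; at runtime you'd pick one at random per frame.
#include <cstdio>
#include <cstdlib>
#include <vector>
#include <array>

using Pattern = std::array<int, 8>;            // col[row] = queen column

static void solve(int row, Pattern& cols, std::vector<Pattern>& out)
{
    if (row == 8) { out.push_back(cols); return; }
    for (int c = 0; c < 8; ++c) {
        bool ok = true;
        for (int r = 0; r < row; ++r)
            if (cols[r] == c || std::abs(cols[r] - c) == row - r) { ok = false; break; }
        if (ok) { cols[row] = c; solve(row + 1, cols, out); }
    }
}

int main()
{
    std::vector<Pattern> patterns;
    Pattern cols{};
    solve(0, cols, patterns);
    std::printf("%zu solutions on an 8x8 board\n", patterns.size());   // 92

    // Use the first solution as a sample pattern: (row+0.5, col+0.5)/8 gives
    // 8 sub-pixel offsets with one sample per row and one per column.
    for (int r = 0; r < 8; ++r)
        std::printf("sample %d: (%.3f, %.3f)\n", r,
                    (r + 0.5f) / 8.0f, (patterns[0][r] + 0.5f) / 8.0f);
}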

One advantage that stochastic sampling has is that it's much easier to do adaptive sampling, where you can increase your density of samples in areas of the scene that have lots of high-contrast changes. Doing something like this with n-queen is much harder, because you would have to solve n-queen in realtime for each frame and base the way the solution is created on your heuristic.
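Something like this is what I mean by basing it on a heuristic - the contrast thresholds and sample budgets below are completely made up, just to show the shape of it:

Code:
// Sketch of the adaptive idea: pick the per-pixel sample budget from a cheap
// local-contrast heuristic (thresholds and budgets are invented).
#include <cstdio>
#include <cmath>
#include <algorithm>
#include <initializer_list>

int samplesForPixel(float centre, float left, float right, float up, float down)
{
    // Contrast heuristic: biggest luminance difference against the 4 neighbours.
    float contrast = 0.0f;
    for (float n : { left, right, up, down })
        contrast = std::max(contrast, std::fabs(centre - n));

    if (contrast > 0.5f) return 16;   // hard edge: spend many jittered samples
    if (contrast > 0.1f) return 4;
    return 1;                         // flat region: one sample is enough
}

int main()
{
    std::printf("flat: %d samples, edge: %d samples\n",
                samplesForPixel(0.5f, 0.5f, 0.5f, 0.5f, 0.5f),
                samplesForPixel(0.1f, 0.9f, 0.1f, 0.1f, 0.1f));
}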

Ultimately, no matter what you do, when you filter the results to fill in your pixel grid on the display, you'll get blurriness. Whether or not that's an acceptable tradeoff compared to using a regular grid is up to you.

Nite_Hawk
 
Nite_Hawk said:
For lower numbers of samples, a better algorithm would be to use an n-queen solution. You basically take the screen size (say 1280x1024), compute the number of pixels P, and from that number determine an AxB sized "board" such that the number of queens Q on the board = P. You then precompute a number of different boards, and for each frame stochastically pick one of the precomputed sample patterns.
Well, I don't think it'd be good to do the N-queen solution on a full-screen scale. It'd require too much storage space to be efficient. Better to do it at much smaller granularity, like, say, on a 32x16 tile, choosing a different solution randomly for each tile on the screen.
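Roughly what I'm picturing is something like this - the tile size, pattern count and hash are only placeholders:

Code:
// Sketch of the per-tile idea: precompute a handful of per-tile patterns
// offline, then pick one per 32x16 tile with a stable hash of the tile
// coordinates, re-seeded per frame if you want the choice to vary over time.
#include <cstdint>
#include <cstdio>

constexpr int kTileW = 32, kTileH = 16, kNumPatterns = 8;

// Cheap integer hash so the per-tile choice looks random but is stable.
uint32_t hash2(uint32_t x, uint32_t y, uint32_t seed)
{
    uint32_t h = x * 0x9E3779B1u ^ y * 0x85EBCA6Bu ^ seed * 0xC2B2AE35u;
    h ^= h >> 16; h *= 0x7FEB352Du; h ^= h >> 15;
    return h;
}

int patternForPixel(int px, int py, uint32_t frameSeed)
{
    return hash2(px / kTileW, py / kTileH, frameSeed) % kNumPatterns;
}

int main()
{
    // Which precomputed pattern each of the first few tiles would use this frame.
    for (int ty = 0; ty < 4; ++ty) {
        for (int tx = 0; tx < 8; ++tx)
            std::printf("%d ", patternForPixel(tx * kTileW, ty * kTileH, /*frameSeed=*/1));
        std::printf("\n");
    }
}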

Anyway, the primary problem with any sort of sampling pattern that is varying across the screen is that the amount of aliasing will vary across a long edge, leading to "spurs" on edges. And then, if the sampling pattern is changed from frame to frame, these spurs will change their locations from frame to frame. You basically need to have enough samples to prevent these spurs from becoming noticeable (might not be so bad starting at 6-8 samples).
 
My reasoning was a little simpler than that, but that makes a lot of sense. I was just thinking that you could consider each pixel on the physical display as occupying a certain finite area, while a pixel is rendered at a point. You could reasonably move the rendered pixel's location around within the area of the display pixel it corresponds to, so the rendered pixel's location on average would have some distribution about the center of the pixel area.
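A little sketch of what I mean, with the sample drawn from a tent distribution about the pixel centre (that distribution is just one possible choice):

Code:
// Each display pixel still gets exactly one shading sample, but its position
// is drawn from a distribution centred on the pixel centre and contained in
// the pixel's footprint.
#include <cstdio>
#include <cstdlib>

static float rand01() { return (float)std::rand() / RAND_MAX; }

// Triangular ("tent") jitter in [-0.5, 0.5], denser near 0, so on average the
// sample sits at the pixel centre but wanders inside the pixel's area.
static float tentJitter()
{
    return 0.5f * (rand01() + rand01() - 1.0f);   // sum of two uniforms
}

int main()
{
    // One shading sample per display pixel, jittered about its centre.
    for (int py = 0; py < 2; ++py)
        for (int px = 0; px < 2; ++px)
            std::printf("pixel (%d,%d) shaded at (%.3f, %.3f)\n",
                        px, py, px + 0.5f + tentJitter(), py + 0.5f + tentJitter());
}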

This would definitely lead to blurriness. Instead of a polygon edge being a hard-edged, stair-stepped line it would be more of a rough, grainy edge, but I'm curious how it would actually look. It probably wouldn't work, but it seems like at higher resolutions it could be interesting.

Oh, and I probably dreamed up Carmack having mentioned this because I spent a little more time looking and couldn't find anything.
 
Chalnoth said:
Well, I don't think it'd be good to do the N-queen solution on a full-screen scale. It'd require too much storage space to be efficient. Better to do it at much smaller granularity, like, say, on a 32x16 tile, choosing a different solution randomly for each tile on the screen.
And I suggest tiling these random tiles using Wang tiles so we get a non-repetitive pattern.
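Roughly like this - a sketch using the complete 2-colour, 16-tile Wang set, where each tile id would index one of the precomputed sample patterns:

Code:
// Stochastic Wang tiling: scan the tile grid and, at each cell, pick at random
// among the tiles whose north/west edge colours match the neighbours already
// placed.  With the complete 16-tile set there are always several candidates.
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Tile { int n, e, s, w; };               // edge colours, 0 or 1

int main()
{
    // All 16 tiles with 2 possible colours per edge.
    std::vector<Tile> tiles;
    for (int id = 0; id < 16; ++id)
        tiles.push_back({ id & 1, (id >> 1) & 1, (id >> 2) & 1, (id >> 3) & 1 });

    const int W = 8, H = 4;
    std::vector<int> grid(W * H);

    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            std::vector<int> candidates;
            for (int id = 0; id < 16; ++id) {
                if (x > 0 && tiles[id].w != tiles[grid[y * W + x - 1]].e) continue;
                if (y > 0 && tiles[id].n != tiles[grid[(y - 1) * W + x]].s) continue;
                candidates.push_back(id);
            }
            grid[y * W + x] = candidates[std::rand() % candidates.size()];
        }

    // Print the non-repeating tile layout.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) std::printf("%2d ", grid[y * W + x]);
        std::printf("\n");
    }
}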
 
Some time ago I did some research of my own on pseudo-random sampling inside a pixel. Even with 16 subpixels the result was not convincing.

Using random subpixel positions means adding noise, and noise is per se unwanted. I consider a sparse 8x grid superior to a 16x stochastic sampling grid.
 
aths said:
Some time ago I did some research of my own on pseudo-random sampling inside a pixel. Even with 16 subpixels the result was not convincing.

Using random subpixel positions means adding noise, and noise is per se unwanted. I consider a sparse 8x grid superior to a 16x stochastic sampling grid.

Did you use a completely random scheme or a minimum-distance Poisson disc method?
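By the latter I mean simple dart throwing along these lines (the sample count and minimum distance are just assumptions):

Code:
// Dart throwing: keep proposing random points inside the pixel and accept one
// only if it is at least kMinDist from every point accepted so far.
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <vector>

int main()
{
    const int   kSamples = 16;      // assumed sample count per pixel
    const float kMinDist = 0.15f;   // assumed minimum spacing in pixel units

    std::vector<float> xs, ys;
    int attempts = 0;
    while ((int)xs.size() < kSamples) {
        if (++attempts > 100000) { xs.clear(); ys.clear(); attempts = 0; }  // rare: restart
        float x = (float)std::rand() / RAND_MAX;
        float y = (float)std::rand() / RAND_MAX;
        bool farEnough = true;
        for (size_t i = 0; i < xs.size(); ++i)
            if (std::hypot(x - xs[i], y - ys[i]) < kMinDist) { farEnough = false; break; }
        if (farEnough) { xs.push_back(x); ys.push_back(y); }
    }

    for (int i = 0; i < kSamples; ++i)
        std::printf("sample %2d: (%.3f, %.3f)\n", i, xs[i], ys[i]);
}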
 
If I remember correctly, I tried random patterns, but still with some "sparse" characteristics. I created an ultra-high-res BMP with no AA and calculated a 16x minification with random sampling inside each pixel box. I think I made sure that no effective subpixel was used more than once inside the pixel, but I didn't use a Poisson distribution or anything like that.

For 8x random patterns I later used an algorithm to test whether a random pattern is really sparse (meaning it has to deliver the full 8x8 edge-equivalent resolution), and then used only sparse patterns. Still, due to its randomness, the result was noisy.
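The sparseness test was essentially this kind of check (a sketch, not my original code):

Code:
// A pattern of 8 sub-pixel positions counts as "sparse" if, snapped to the
// 8x8 sub-grid, every column and every row is hit exactly once, so the pattern
// keeps the full 8x8 edge-equivalent resolution.
#include <cstdio>

bool isSparse8x(const float x[8], const float y[8])   // positions in [0,1)
{
    int colHits[8] = {0}, rowHits[8] = {0};
    for (int i = 0; i < 8; ++i) {
        colHits[(int)(x[i] * 8) & 7]++;
        rowHits[(int)(y[i] * 8) & 7]++;
    }
    for (int i = 0; i < 8; ++i)
        if (colHits[i] != 1 || rowHits[i] != 1) return false;
    return true;
}

int main()
{
    // A known sparse pattern: one sample per sub-column and per sub-row.
    const float x[8] = { 0.0625f, 0.1875f, 0.3125f, 0.4375f, 0.5625f, 0.6875f, 0.8125f, 0.9375f };
    const float y[8] = { 0.5625f, 0.1875f, 0.8125f, 0.4375f, 0.0625f, 0.6875f, 0.3125f, 0.9375f };
    std::printf("sparse: %s\n", isSparse8x(x, y) ? "yes" : "no");
}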

I also wrote a little article about 8x sparse AA, though I didn't find the optimal pattern.

The automatic translation is very bad; just look at the pictures :)

http://translate.google.com/transla...fe=off&ie=UTF-8&oe=UTF-8&prev=/language_tools
The decreasing effectiveness of rotated grids with n^2 subpixels

http://translate.google.com/transla...fe=off&ie=UTF-8&oe=UTF-8&prev=/language_tools
8x masks, ordered vs random sparse patterns and improved random sparse patterns

http://translate.google.com/transla...fe=off&ie=UTF-8&oe=UTF-8&prev=/language_tools
Some nested rotated grid patterns.


I wrote this article because I was already convinced that an antialiasing pattern for realtime use should not be random. Because of the large memory footprint, I think (conventional) architectures in the near future will not offer more than 8 samples per pixel. Also, I doubt that 16x random will offer better quality than 8x sparse – while I don't know the best 8x sparse grid yet.

Using 8x sparse instead of an 8x8 ordered grid is already a technique to decrease the number of subpixels while still keeping 8x8 edge-equivalent resolution for angles around 90° (where antialiasing is needed the most). Using a random pattern or a temporal pattern adds noise and slight edge flickering, which is unwanted. (One can discuss situations where noise is actually wanted, but I think AA should deliver really smooth edges by default.)
 