Using the SPUs to do AA

I don't think terms like "blur" and "anti-aliasing" can withstand the analytical weight you're all trying to put on them. "Filtering" is a much better term, I think, as it's more precise mathematically. The question is more about what kind of filter you use and when in the sequence you apply it. We have an analogous situation in numerical analysis. If you're trying to solve some PDE, f(u,v,t) = F(u,v,t), you have to approximate F with a discrete operator and represent 3D space as a grid rather than as a continuum. In the case of nonlinear PDEs, this introduces aliasing that throws off your time steps, which is analogous to getting shimmering and crawling on your TV. There are a few things you can do:

1. Jack up the resolution of the grid so high that you don't get aliasing for the scale of the problem (analogous to moving from 480p to 720p).
2. Filter the equations.
3. Filter the results (u,v) before applying the operator for your next time step.

IMO, traditional supersampling methods are basically an attempt to at least approximately do #2, while things like edge blurring and selective Gaussians are basically #3.
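To make that distinction concrete, here's a rough C++ sketch (the buffer layouts and function names are just made up for illustration): the first function resolves a 2x2-supersampled buffer with a box filter, which is the crude form of #2, and the second runs a small separable 1-2-1 blur over the already-rendered frame, which is the #3-style fixup. The point is that the second one can only hide aliasing that's already baked into the samples.

    #include <cstddef>
    #include <vector>

    // #2 in spirit: resolve a 2x2-supersampled buffer down to the display
    // resolution with a plain box filter (average of the 4 covering samples).
    // w, h are the output (display) dimensions; src is 2w x 2h.
    std::vector<float> downsampleBox2x2(const std::vector<float>& src,
                                        std::size_t w, std::size_t h)
    {
        std::vector<float> dst(w * h);
        for (std::size_t y = 0; y < h; ++y)
            for (std::size_t x = 0; x < w; ++x)
            {
                std::size_t sx = 2 * x, sy = 2 * y, sw = 2 * w;
                dst[y * w + x] = 0.25f * (src[sy * sw + sx]       + src[sy * sw + sx + 1] +
                                          src[(sy + 1) * sw + sx] + src[(sy + 1) * sw + sx + 1]);
            }
        return dst;
    }

    // #3 in spirit: a tiny separable 1-2-1 blur applied to the already-sampled
    // frame; it softens jaggies but can't recover information lost to aliasing.
    void postBlur121(std::vector<float>& img, std::size_t w, std::size_t h)
    {
        std::vector<float> tmp = img;
        for (std::size_t y = 0; y < h; ++y)            // horizontal pass into tmp
            for (std::size_t x = 1; x + 1 < w; ++x)
                tmp[y * w + x] = 0.25f * img[y * w + x - 1] + 0.5f * img[y * w + x] + 0.25f * img[y * w + x + 1];
        for (std::size_t y = 1; y + 1 < h; ++y)        // vertical pass back into img
            for (std::size_t x = 0; x < w; ++x)
                img[y * w + x] = 0.25f * tmp[(y - 1) * w + x] + 0.5f * tmp[y * w + x] + 0.25f * tmp[(y + 1) * w + x];
    }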
 
How about a motivation/explanation of what - in your opinion - is wrong, and how?

Making smartass remarks is very popular on the interwebs, I know, but this is a discussion forum and smartass remarks don't do much for the quality of discussion.

Besides, discussion fora are all about sharing knowledge as well, and how can anyone - me included - learn from a smartass remark like yours?

Come again?

Peace.
 
I suppose the "flickering details" issue & "Nyquist limit" line of reasoning here really calls for decent geometric LOD as the real solution to aliasing in 3D rendering...

i.e. geomorphing-out or fading-out all the details that resolve onscreen at less than one pixel (e.g. things like curb-stones, lamp-posts, a character's fingers..).

all those fences and grids want to be re-represented as alpha blends etc...

Screen blurring doesn't sound like it can solve the temporal aspect of aliasing.. the way those near-horizontal, slow-moving edges jolt from frame to frame.
Still, I did wonder if there was an edge-AA type solution that might make sense on the PS3, e.g. with SPUs processing geometry (inc. backface cull) to pick out edges... hmmm....
does seem like the sort of "thinking-out-of-the-box" type solution that may have some merit (on a machine with an overpowered CPU and underpowered GPU.)


On a related note, does anyone think it would ever be possible or desirable for hardware to change the sample pattern from pixel to pixel? I suppose that would result in dithering, which jumps out as looking primitive..
 
Isn't the majority of game media released nowadays exactly that? It's supersampled with ridiculous sample counts, usually using a non-ordered grid (Poisson distributions FTW), and of course with better-than-box-filter averaging.

It started out as print media only, but these days it's just common practice.
I don't know that I'd say the "majority" actually put that level of care into getting good AA into so-called "bullshots." Most of what I've seen is just obscene oversampling on a regular grid and, at best, a tent filter. To be fair, though, it may be different for the companies that flood the media with a million and one screenshots.

Moreover, I'd say that short of those who are really attentive to these sorts of details, I don't think there are even that many people within the industry who can catch the difference. You could apply a sinc filter to really heavily oversampled images, and after downsampling, only those who know anything about image processing and sampling theory (which is an extreme minority, FWIW) would catch the ringing. It usually takes something that dramatically and obviously outlines the failings of specific filter techniques to show how bad it is for most people -- and that often means animations, not still screens.
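Just to put a number on the ringing bit: the over/undershoot comes from the negative lobes of a windowed sinc. A throwaway sketch that prints Lanczos-3 weights (the usual practical stand-in for a sinc; everything here is purely illustrative) shows them directly -- a box or tent filter never has weights below zero, which is why it can't ring.

    #include <cmath>
    #include <cstdio>

    // Lanczos-3: a windowed sinc commonly used for downsampling.
    // Its negative lobes produce over/undershoot (ringing) next to hard edges.
    double lanczos3(double x)
    {
        const double kPi = 3.14159265358979323846;
        if (x == 0.0) return 1.0;
        if (std::fabs(x) >= 3.0) return 0.0;
        double px = kPi * x;
        return 3.0 * std::sin(px) * std::sin(px / 3.0) / (px * px);
    }

    int main()
    {
        for (double x = 0.0; x <= 3.0; x += 0.5)
            std::printf("w(%.1f) = % .4f\n", x, lanczos3(x));  // the weight at x = 1.5 is negative
        return 0;
    }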

If you really want to be pedantic, it doesn't. If you sampled at, say, 192 kHz an audio signal that contained a significant frequency component at (192 - 5) kHz, you'd end up with something that would sound like a 5 kHz signal.
Fair enough. Though as you say you can prefilter...
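To put actual numbers on the quoted example (assuming the 192 kHz sample rate it implies): at the sample instants, a 187 kHz sine and a 5 kHz sine land on exactly the same values up to sign, which is all aliasing is. A quick check:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double kPi  = 3.14159265358979323846;
        const double fs   = 192000.0;   // sample rate assumed from the example
        const double f_hi = 187000.0;   // the (192 - 5) kHz component
        const double f_lo = 5000.0;     // what it aliases to
        for (int n = 0; n < 8; ++n)
        {
            double t = n / fs;
            // sin(2*pi*187k*t) == -sin(2*pi*5k*t) at every sample instant,
            // so after sampling the two are indistinguishable (up to sign/phase).
            std::printf("n=%d  hi=% .5f  lo=% .5f\n", n,
                        std::sin(2.0 * kPi * f_hi * t),
                        std::sin(2.0 * kPi * f_lo * t));
        }
        return 0;
    }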

The problem with 3D graphics is that we can't easily put in such a pre-filter. It'd be incredibly difficult to do, so we just resort to sampling at a higher rate and hope that any higher-frequency components are insignificantly small. You then put in a post-filter to remove frequencies above what the target resolution can display.
Well, I can think of a few ways to accumulate samples that can sort of get that effect, but I don't see it as a possibility for hardware pretty much at any point in time. Not so much because the hardware manufacturers don't care, but because certain other constraints can easily supersede the problem of "correctness."

That too, various methods I can think of for accumulating weighted samples all get wrenches thrown into the mix when overdraw and blending become an issue -- that's part of why direct rasterization is evil. We should be sampling polygons, not drawing them, but again, no such hardware will ever exist on the mass market. We'll just have GPGPU people go through pointless academic exercises to that effect which run in "realtime" when they draw a scene of 3 objects at 8 FPS. And then more idiots will go around on forums thinking this will be the next big thing.

Screen blurring doesn't sound like it can solve the temporal aspect of aliasing.. the way those near-horizontal, slow-moving edges jolt from frame to frame.
Well, it could if the blur is so broad and unbiased that it hides any trace of having previously sampled on a regular grid. :p Actually, what you really need is some way of telling you how to adjust the blurring that is (or at least can be) not so dependent on the regularity of the pixel grid. Which is hard to do in image space, since you've already lost any information about the actual geometry.

Still, I did wonder if there was an edge-AA type solution that might make sense on the PS3, e.g. with SPUs processing geometry (inc. backface cull) to pick out edges... hmmm....
does seem like the sort of "thinking-out-of-the-box" type solution that may have some merit (on a machine with an overpowered CPU and underpowered GPU.)
There are tricks that use an additional geometry pass and look for edges by seeing that the rendered sample is not quite centered on the pixel, which drives a resampling from the previously rendered image. But the real problem is just... can you afford to send the geometry down again? If the answer were always yes, I think just about everybody and his brother would use these sorts of tricks.
 
There is never a good reason to do edge blurring, other than that you're running out of time and want to slap on an ugly hack to hide some of the edges. It should take its rightful place in the game-visuals hall of shame along with blooming, lens flare and sliding-feet animations.

I disagree here. Edge blurring is used very often in offline CGI as well, even though we can set AA sampling as high as we want to get rid of any aliasing.
The reason to use edge blurring is to better integrate various elements into the scene, like characters and moving objects. It looks better than sharp outlines, and it makes perfect sense to integrate it into realtime 3D engines as well. It should not be used to replace proper AA, but rather added on top of it, and of course with a small filter kernel.
 
I disagree here. Edge blurring is used very often in offline CGI as well, even though we can set AA sampling as high as we want to get rid of any aliasing.
The reason to use edge blurring is to better integrate various elements into the scene, like characters and moving objects. It looks better than sharp outlines, and it makes perfect sense to integrate it into realtime 3D engines as well. It should not be used to replace proper AA, but rather added on top of it, and of course with a small filter kernel.
A good example of that is Crysis, where edge blurring is used on foliage edges, AFAIK.
 
KZ2 probably also does this, using the depth and normal passes to determine where to blur, like at characters' outlines.
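I have no idea what they actually do, but the basic idea is easy to sketch. Something like this (everything here -- the G-buffer layout, the thresholds -- is hypothetical): flag pixels where depth or normal changes sharply, and only blur those.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Hypothetical G-buffer views: linear depth and normals per pixel.
    struct GBuffer {
        const float* depth;   // w*h linear depths
        const float* normal;  // w*h*3 normals, (x,y,z) interleaved
        std::size_t  w, h;
    };

    // Call a pixel an "edge" if depth or normal differs too much from the
    // pixel to its right or below it. Thresholds are arbitrary; tune per scene.
    bool isEdge(const GBuffer& g, std::size_t x, std::size_t y)
    {
        if (x + 1 >= g.w || y + 1 >= g.h) return false;
        std::size_t i = y * g.w + x, r = i + 1, d = i + g.w;
        if (std::fabs(g.depth[i] - g.depth[r]) > 0.05f) return true;
        if (std::fabs(g.depth[i] - g.depth[d]) > 0.05f) return true;
        float dot = g.normal[3*i]   * g.normal[3*r]
                  + g.normal[3*i+1] * g.normal[3*r+1]
                  + g.normal[3*i+2] * g.normal[3*r+2];
        return dot < 0.8f;   // normals diverge noticeably -> silhouette or crease
    }

    // Blur only the flagged pixels with a small 3x3 box; the rest of the frame
    // is untouched, so this sits on top of whatever AA is already there.
    void edgeBlur(float* color, const GBuffer& g)
    {
        std::vector<float> src(color, color + g.w * g.h);  // read from a copy
        for (std::size_t y = 1; y + 1 < g.h; ++y)
            for (std::size_t x = 1; x + 1 < g.w; ++x)
            {
                if (!isEdge(g, x, y)) continue;
                float sum = 0.0f;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                        sum += src[(y + dy) * g.w + (x + dx)];
                color[y * g.w + x] = sum / 9.0f;
            }
    }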

BTW nAo, have you noticed how lens flares for lights seem to shine through some of the level geometry? :)
 
Actually, what you really need is some way of telling you how to adjust the blurring that is (or at least can be) not so dependent on the regularity of the pixel grid.

Not exactly related, but...

I like the idea of rendering on a "virtual grid" whose resolution depends on the need (a deformed grid generated by an SPU, for instance). Edges would get extra resolution, interiors of triangles less, due to prefiltering...

Then that "virtual grid" would be blurred (only the over-precise bits?) and mapped onto the regular grid of the display device. Part of that would be up-sampled, part of that would be down-sampled (ie sub-sampled).

Maybe a virtual grid 1.3 times bigger than the device grid would give good results...

That's very likely a completely impractical solution, but IMO it raises questions about "the holy screen resolution": do we really need to match it exactly early in the graphics pipeline? Is that the best strategy for finding the optimal perf/quality balance?
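Completely hand-waving away the interesting part (generating the deformed grid on an SPU), the last step of the idea is at least cheap. A minimal sketch, assuming the virtual grid is just a regular buffer scale-times bigger than the device grid, and using plain bilinear filtering for the mapping:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Map a (scale*w) x (scale*h) "virtual grid" buffer onto the w x h device
    // grid with bilinear filtering. scale = 1.3 in the example above.
    std::vector<float> resampleToDevice(const std::vector<float>& virt,
                                        std::size_t w, std::size_t h, float scale)
    {
        std::size_t vw = static_cast<std::size_t>(std::ceil(w * scale));
        std::size_t vh = static_cast<std::size_t>(std::ceil(h * scale));
        std::vector<float> out(w * h);
        for (std::size_t y = 0; y < h; ++y)
            for (std::size_t x = 0; x < w; ++x)
            {
                // Centre of the device pixel, expressed in virtual-grid coordinates.
                float sx = (x + 0.5f) * scale - 0.5f;
                float sy = (y + 0.5f) * scale - 0.5f;
                std::size_t x0 = static_cast<std::size_t>(sx);
                std::size_t y0 = static_cast<std::size_t>(sy);
                std::size_t x1 = std::min(x0 + 1, vw - 1);
                std::size_t y1 = std::min(y0 + 1, vh - 1);
                float fx = sx - x0, fy = sy - y0;
                out[y * w + x] =
                    (1 - fx) * (1 - fy) * virt[y0 * vw + x0] + fx * (1 - fy) * virt[y0 * vw + x1] +
                    (1 - fx) * fy       * virt[y1 * vw + x0] + fx * fy       * virt[y1 * vw + x1];
            }
        return out;
    }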
 
Pixar (well..Lucasfilm..) invented that stuff more than 20 years ago (see REYES rendering algorithm).
 
Pixar (well..Lucasfilm..) invented that stuff more than 20 years ago (see REYES rendering algorithm).

I could not get my hands on the original paper, but this was descriptive enough:
http://graphics.stanford.edu/papers/reyes-vs-opengl/

Interesting stuff, I can see how that's kind of related to what I was saying (stochastic sampling, per-vertex texture fetch).

They also claim that "Each type of calculation is performed in a coordinate system that is natural for that type of calculation." I wonder what they mean here...


That too, various methods I can think of for accumulating weighted samples all get wrenches thrown into the mix when overdraw and blending become an issue -- that's part of why direct rasterization is evil. We should be sampling polygons, not drawing them

As I'm sure you already know, this is the approach used in REYES.
Can you expand a bit on the first sentence?
 
I could not get my hands on the original paper, but this was descriptive enough:
http://graphics.stanford.edu/papers/reyes-vs-opengl/

Interesting stuff, I can see how that's kind of related to what I was saying (stochastic sampling, per-vertex texture fetch).
Yep, micropolygons don't get shaded in screen space, but they get stochastically sampled in screen space.
This ppt presentation from the same paper has a very interesting slide (check last slide, funding agencies..;) )

They also claim that "Each type of calculation is performed in a coordinate system that is natural for that type of calculation." I wonder what they mean here...
A bit generic, but I think it means that shading happens at the micropolygon level, and micropolygons are generated so that the ratio between their area and a pixel's is roughly 1/2.
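That matches the usual description of REYES dicing. A back-of-the-envelope sketch of the decision (purely illustrative, nothing to do with Pixar's actual code; targetArea would be the ~1/2 pixel figure above):

    #include <algorithm>
    #include <cmath>

    // Given a patch's projected extent in pixels, pick dice counts so each
    // micropolygon ends up with roughly 'targetArea' pixels of area (~0.5 per
    // the post above). Shading then runs per micropolygon in the patch's own
    // (u,v) parameter space; only the shaded results get stochastically
    // sampled in screen space.
    void diceCounts(float screenW, float screenH, float targetArea,
                    int& nu, int& nv)
    {
        float side = std::sqrt(targetArea);   // target micropolygon edge length in pixels
        nu = std::max(1, static_cast<int>(std::ceil(screenW / side)));
        nv = std::max(1, static_cast<int>(std::ceil(screenH / side)));
    }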
 