Using the SPUs to do AA

The ratio is actually adjustable, as I remember, via the Shading Rate parameter. 0.5 is a good all-round value, but even 0.1 is not unheard of.
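For illustration, here's a minimal sketch of how a Reyes-style dicer might derive a micropolygon grid from the shading rate; the function name and the simple area heuristic are assumptions for illustration, not any particular renderer's API:

```cpp
#include <cmath>
#include <algorithm>

// Hypothetical sketch: pick a grid resolution so each micropolygon
// covers roughly `shadingRate` pixels of screen area.
// shadingRate 0.5 -> ~2 micropolygons per pixel; 0.1 -> ~10 per pixel.
int diceResolution(float screenAreaInPixels, float shadingRate)
{
    float microPolys = screenAreaInPixels / shadingRate; // target count
    int side = (int)std::ceil(std::sqrt(microPolys));    // square grid side
    return std::max(side, 1);
}
```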
 
I think I lost you here; what numerical simulation are you talking about?

I was talking in general about simulating a system that evolves in time, usually represented by a PDE. Your choices are pretty much to either analytically filter the original equations or numerically filter the results you get on a particular time-step (the intermediate or final results, depending on the algorithm). If you don't, you get aliasing, which in that world means getting behaviors in your numerical solution that aren't really representative of the analytical solution.

In fact, the similarities are interesting enough that I wonder how much theory from numerical analysis actually gets applied to the anti-aliasing problem. I suppose at the level of professional CGI, there's a good bit.
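As a first-principles illustration of what aliasing means in any discrete-sampling context, here's a minimal sketch; the frequencies are arbitrary, chosen only to make the effect obvious:

```cpp
#include <cstdio>
#include <cmath>

// Sample a 9 Hz cosine at 10 Hz, well below the 18 Hz Nyquist rate.
// The samples coincide exactly with those of a 1 Hz cosine: a
// low-frequency alias with no counterpart in the original signal.
int main()
{
    const double PI = 3.14159265358979323846;
    const double signalHz = 9.0, sampleHz = 10.0;
    for (int n = 0; n < 10; ++n) {
        double t = n / sampleHz;
        double hi = std::cos(2.0 * PI * signalHz * t); // true signal
        double lo = std::cos(2.0 * PI * 1.0 * t);      // 1 Hz alias
        std::printf("t=%.1f  9Hz=%+.3f  alias=%+.3f\n", t, hi, lo);
    }
    return 0;
}
```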

"Each type of calculation is performed in a coordinate system that is natural for that type of calculation."

I would think it means they're constantly reorienting their coordinate axes or possibly even using spherical/cylindrical coordinates.
 
As I'm sure you already know, this is the approach used in Reyes.
Can you expand a bit on the first sentence?
Well, among other similar ideas, the main example I was thinking of simply involved storing pixels as a summation of weighted samples (keeping track of the sum of weights as well, i.e. an "RGBW" type of color format), using whatever weighting scheme is applicable. It's not a generic thing for all types of color arithmetic, but it's a cheap trick that I like for several applications involving weighted sums of color samples. I was originally trying it in an MCPT renderer I mess around with in my spare time, and it works nicely for those sorts of things.
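A minimal sketch of that idea, with names of my own choosing rather than any real format:

```cpp
struct RGBW {
    float r = 0, g = 0, b = 0, w = 0; // weighted sums plus total weight
};

// Accumulate one sample with its filter weight.
void addSample(RGBW& px, float r, float g, float b, float weight)
{
    px.r += r * weight;
    px.g += g * weight;
    px.b += b * weight;
    px.w += weight;
}

// Resolve to a displayable color by normalizing with the weight sum.
void resolve(const RGBW& px, float& r, float& g, float& b)
{
    float inv = (px.w > 0.0f) ? 1.0f / px.w : 0.0f;
    r = px.r * inv;
    g = px.g * inv;
    b = px.b * inv;
}
```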

For accumulating subpixel samples, when you're rasterizing and you can't inherently assume that someone is going to feed you sorted polys or use a Z-prepass and early-Z culling and whatnot, you have some issues. When overdraw occurs and the same pixel gets new samples from new geometry, you have to know that the old ones might need to be thrown out if the new samples occlude the previous ones. Then again, it's possible that this pixel might be on the edge of the new geometry, in which case you might have to reconsider how you toss out samples, since you've kind of destroyed any information that there were multiple samples already.

Alpha blending is something else, because it kind of depends on proper sorting order and in general needs to be the last thing you draw anyway, but you effectively have to blend on top of all the samples (which is equivalent to resolving the weighted sum of the opaque samples and then blending). It is ultimately something that needs to be handled separately anyway, and then you have the question of the aliasing on alpha-test edges.
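Continuing the hypothetical RGBW sketch above (reusing its struct and resolve()), that "resolve the opaque sum, then blend" step might look like this:

```cpp
// Blend a transparent fragment over a pixel's resolved opaque samples.
// Equivalent to normalizing the weighted opaque sum first, then doing a
// standard "src-alpha, one-minus-src-alpha" blend on the result.
void blendOver(const RGBW& opaque, float sr, float sg, float sb, float sa,
               float& outR, float& outG, float& outB)
{
    float r, g, b;
    resolve(opaque, r, g, b);          // normalize the weighted sum
    outR = sr * sa + r * (1.0f - sa);  // classic alpha blend
    outG = sg * sa + g * (1.0f - sa);
    outB = sb * sa + b * (1.0f - sa);
}
```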

Now of course, if not for the fact that hardware only does direct rasterization of tris, there would be less of an issue. Bring on the RPU, as far as I'm concerned :cool:. Yeah, I know... it'll never happen, but that doesn't mean it isn't necessary.
 
Thank you all for the great replies and the battle which ensued with them.

Now I have a different question, and since it is kinda related to the first one, I thought there would be no point starting a new thread.

What can the SPUs do to aid the RSX? I've already read some stuff from GDC on the SPUs doing things such as skinning, triangle culling and shadow map generation, but I would like someone to shed some more light on what else they can do to aid the RSX. Thank you in advance.
 
What can the SPUs do to aid the RSX? I've already read some stuff from GDC on the SPUs doing things such as skinning, triangle culling and shadow map generation, but I would like someone to shed some more light on what else they can do to aid the RSX. Thank you in advance.
You can add occlusion culling and post-processing effects to that list.
 
I disagree, applying a low pass filter after having sampled a signal is not what I'd call anti-aliasing, but something more along the lines of: "OMG!!! we didn't pre-filter this stuff, we're also undersampling it, we're screwed! let's do something!!" ;)

Anti-aliasing has a very specific meaning in discrete-sampling systems, and you've certainly nailed the gist of it here. Anybody who disagrees should review first-principles.
 
For accumulating subpixel samples, when you're rasterizing and you can't inherently assume that someone is going to feed you sorted polys or use a Z-prepass and early-Z culling and whatnot, you have some issues. When overdraw occurs and the same pixel gets new samples from new geometry, you have to know that the old ones might need to be thrown out if the new samples occlude the previous ones. Then again, it's possible that this pixel might be on the edge of the new geometry, in which case you might have to reconsider how you toss out samples, since you've kind of destroyed any information that there were multiple samples already.

Still a bit unclear :oops: so let me try to rephrase.

You are saying that rasterizing is bad because detail (samples) are especially needed on the edges of triangles and that's exactly where rasterizing is a bit dumb due to fixed resolution. Am I correct?

How can that be resolved with "sampling" as opposed to rasterizing? Would you accumulate all the potentially visible (micro?) polygons within a single pixel and then do some kind of "sub-pixel ray-casting"?
 
Anti-aliasing has a very specific meaning in discrete-sampling systems, and you've certainly nailed the gist of it here. Anybody who disagrees should review first-principles.

I'd be very grateful if you could point to any references where I can have a look at those first-principles.
 
If you want to use the SPUs to do AA, you have to handle framebuffer tiling and compression yourself. That's the bottom line, and I don't think it's feasible in the least. One must rely on the RSX to decompress the depth buffer before accessing it, unless you want to handle the depth buffer yourself.
 
You are saying that rasterizing is bad because detail (samples) are especially needed on the edges of triangles and that's exactly where rasterizing is a bit dumb due to fixed resolution. Am I correct?

How can that be resolved with "sampling" as opposed to rasterizing? Would you accumulate all the potentially visible (micro?) polygons within a single pixel and then do some kind of "sub-pixel ray-casting"?
Well, what I was getting at is that when directly filling/rasterizing triangles, you don't necessarily know if a given sub-pixel sample will actually be in the final image. Z-sorting or Z-Prepasses and so on can deal with that, but I was saying that's not something you can assume everybody will do all the time (I'm kind of talking as if we're applying this to hardware, mind you).

So for instance, you have the problem where you've been accumulating samples into some RGBW sum for a given triangle, and along comes another triangle that covers it up. So now you throw out the old sum and start all over again. But then you have the issue of when the new polygon comes along and doesn't cover up the whole pixel, and instead, you're on the edge of a polygon. So what do you do? Obviously *some* of your previously accumulated samples are invalid now, but when all you're storing prior to resolving is an accumulated RGBW sum, you've lost the information about which ones you've accumulated, so you don't know what to take away.
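In terms of the same hypothetical RGBW sketch, the failure mode could be written like this; the coverage and occlusion flags are simplifying assumptions standing in for real depth tests:

```cpp
// Sketch of the overdraw problem with a single RGBW sum per pixel.
// If the incoming triangle fully covers the pixel and occludes it,
// resetting the sum is safe. With partial coverage there's no way to
// tell which of the old samples the new geometry occludes: only the
// collapsed sum survives, and the per-sample information is gone.
void accumulateFragment(RGBW& px, float r, float g, float b,
                        float weight, bool fullyCovers, bool occludes)
{
    if (occludes && fullyCovers) {
        px = RGBW{};  // discard the stale sum entirely
    }
    // With (occludes && !fullyCovers) we'd need per-sample depth and
    // position to know what to throw away, but that data no longer exists.
    addSample(px, r, g, b, weight);
}
```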

Now when you're raycasting (why I mentioned the RPU thing), the scene is built ahead of time, and you don't need to worry about these sorts of problems -- the first hit is the correct first hit, and it will be in the image. If you graze an edge, you know ahead of time that's how it should be.
 
If you want to use the SPUs to do AA, you have to handle framebuffer tiling and compression yourself. That's the bottom line, and I don't think it's feasible in the least. One must rely on the RSX to decompress the depth buffer before accessing it, unless you want to handle the depth buffer yourself.
Umh... it's easier than you think :)
Nonetheless, I don't see any particular advantage in using the SPUs just to resolve a multisample buffer; better to offload the whole post-processing pipeline to the SPUs instead.
The RSX would then have more time to do 'normal' rendering.
Keep in mind that a CPU with local memory, such as an SPU, can do a much better job than a GPU at many post-processing effects.
The current programming model on GPUs forces you to read the same data over and over again for neighbouring pixels; on an SPU a lot of data can be re-used across multiple pixels.
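To illustrate the data-reuse point, here's a minimal sketch of a horizontal box filter over a scanline held in local memory; a running sum means each input pixel is read once rather than once per neighbouring output pixel (plain C-style code, nothing SPU-specific):

```cpp
// Horizontal box filter of radius R over one scanline in local memory.
// A GPU-style shader would re-fetch 2R+1 texels per output pixel; here
// a running sum touches each input exactly twice (add once, remove once).
void boxFilterRow(const float* in, float* out, int width, int radius)
{
    float sum = 0.0f;
    int count = 0;
    // Prime the window for pixel 0 (clamped at the image edge).
    for (int x = 0; x <= radius && x < width; ++x) { sum += in[x]; ++count; }
    for (int x = 0; x < width; ++x) {
        out[x] = sum / count;
        int add = x + radius + 1;          // slide the window right
        int remove = x - radius;
        if (add < width) { sum += in[add];    ++count; }
        if (remove >= 0) { sum -= in[remove]; --count; }
    }
}
```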
 
Any link with you? Maybe you want to tell us more about your secret life? ;)
LOL, I've just shared a cab with the author, and since I paid for the ride he must have decided that my financial support needed to be acknowledged ;)
 
They don't clearly define anti-aliasing
I'll stick my neck out and say that, because in 3D graphics we can't (currently|feasibly**) pre-filter the model data prior to sampling, I am 100% happy with the following broad definition from the graphics bible.

Computer Graphics: Principles and Practice, page 132, said:
The application of techniques that reduce or eliminate aliasing is referred to as antialiasing, and primitives or images produced using these techniques are said to be antialiased.

I'd take this to include any super-sampling or stochastic approach (which moves the aliasing into high frequency, less detectable, noise).
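As a sketch of the stochastic idea, jittering stratified sub-pixel sample positions replaces structured aliasing with noise; the sample count and the shade() callback are assumptions for illustration:

```cpp
#include <cstdlib>

// Stratified ("jittered") supersampling of one pixel. A regular grid
// turns unresolved detail into structured aliasing; jittering each
// stratum turns it into less objectionable high-frequency noise.
float jitteredSample(int px, int py, int samplesPerAxis,
                     float (*shade)(float x, float y))
{
    float sum = 0.0f;
    for (int j = 0; j < samplesPerAxis; ++j)
        for (int i = 0; i < samplesPerAxis; ++i) {
            float jx = (i + std::rand() / (float)RAND_MAX) / samplesPerAxis;
            float jy = (j + std::rand() / (float)RAND_MAX) / samplesPerAxis;
            sum += shade(px + jx, py + jy); // sample inside the sub-cell
        }
    return sum / (samplesPerAxis * samplesPerAxis);
}
```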



** I say feasibly for the following reason. Let's imagine we could afford to take a rendering model consisting of polygons, which we then clipped against each other to produce a set of completely visible fragments, and were able to analytically convolve each fragment (with correctly frequency-limited textures and shading) against a per-pixel filter (e.g. a windowed sinc). This may be doable, but it assumes that the polygons themselves are correct. If they, themselves, are an approximation of another surface, then who is to say that it has been sampled correctly <shrug>
 
I am 100% happy with the following broad definition from the graphics bible.
Computer Graphics: Principles and Practice, page 132, said:
The application of techniques that reduce or eliminate aliasing is referred to as antialiasing, and primitives or images produced using these techniques are said to be antialiased.
I'd take this to include any super-sampling or stochastic approach (which moves the aliasing into high frequency, less detectable, noise).

Well, it's a broad definition, indeed, since according to that, the following is true:

[attached image: AA.png]


Now, while I agree with Marco and others concerning the fact that calling blur techniques anti-aliasing is a misnomer, I'd also agree with the fact that it's hard to categorise and define anti-aliasing precisely enough to exclude blurring from being considered AA.

The thing is, one could argue that anything that alleviates aliasing artifacts is anti-aliasing. A definition of anti-aliasing that properly excludes all sorts of blurring should mention that proper detail is created during the process: proper detail, as opposed to the detail that results from linearly filtering an up-sampled target (which is what blurring techniques do).
 
I'll stick my neck out and say that because, in 3D graphics we can't (currently|feasibly**) pre-filter the model data prior to sampling,

No, but supersampling approximates prefiltering the model data by prefiltering the supersampled image before the final sampling.

i.e. supersample (with less aliasing), then filter, then sample again.
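A minimal sketch of those last two steps, assuming a plain 2x2 box filter as the "filter, then sample again" stage:

```cpp
// Downsample a 2x-supersampled greyscale image: the box average is the
// "filter" step, and writing one output pixel per 2x2 block is the
// final "sample again" step.
void resolve2x(const float* super, float* out, int outW, int outH)
{
    int superW = outW * 2;
    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x) {
            const float* p = super + (2 * y) * superW + 2 * x;
            out[y * outW + x] =
                0.25f * (p[0] + p[1] + p[superW] + p[superW + 1]);
        }
}
```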

I am 100% happy with the following broad definition from
How does that particular book define aliasing?
 