Using the SPUs to do AA

Discussion in 'Console Technology' started by Betanumerical, Sep 6, 2007.

  1. Laa-Yosh

    Laa-Yosh I can has custom title?
    Legend Subscriber

    Joined:
    Feb 12, 2002
    Messages:
    9,568
    Likes Received:
    1,455
    Location:
    Budapest, Hungary
    The ratio is actually adjustable as I remember, with the Shading Rate parameter. 0.5 is a good all-round parameter, but even 0.1 is not unheard of.
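     For context: in a REYES-style renderer (the setting where ShadingRate lives), the parameter is the maximum micropolygon area in pixels, so the shaded-sample density is roughly its reciprocal. A one-line sketch (function name is mine, not from the thread):

```python
def shaded_samples_per_pixel(shading_rate):
    """Approximate shaded micropolygons per pixel for a REYES-style
    ShadingRate (maximum micropolygon area, in pixels)."""
    return 1.0 / shading_rate

# shading_rate 1.0 -> ~1 sample/pixel; 0.5 -> ~2; 0.1 -> ~10
```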
     
  2. fearsomepirate

    fearsomepirate Dinosaur Hunter
    Veteran

    Joined:
    Sep 1, 2005
    Messages:
    2,743
    Likes Received:
    65
    Location:
    Kentucky
    I was talking in general about simulating a system that evolves in time, usually represented by a PDE. Your choices are pretty much to either analytically filter the original equations or numerically filter the results you get on a particular time-step (the intermediate or final results, depending on the algorithm), because if you don't, you get aliasing, which in that world means getting behaviors in your numerical solution that aren't really representative of the analytical solution.

    In fact, the similarities are interesting enough that I wonder how much theory from numerical analysis actually gets applied to the anti-aliasing problem. I suppose at the level of professional CGI, there's a good bit.
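     That folding of unresolved frequencies into spurious ones is easy to demonstrate; a toy sketch (the helper is mine, not from any poster):

```python
def aliased_frequency(signal_hz, sample_hz):
    """Frequency that actually appears in the sampled data when a pure
    sinusoid of signal_hz is sampled at sample_hz: frequencies above
    the Nyquist limit fold back into the representable band."""
    folded = signal_hz % sample_hz
    return folded if folded <= sample_hz / 2.0 else sample_hz - folded

# A 9 Hz wave sampled at 10 Hz masquerades as a 1 Hz wave -- a spurious
# slow behaviour, exactly the kind of artifact a numerical solution
# picks up when it is not filtered.
# aliased_frequency(9, 10) -> 1
```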

    I would think it means they're constantly reorienting their coordinate axes or possibly even using spherical/cylindrical coordinates.
     
  3. ShootMyMonkey

    Veteran

    Joined:
    Mar 21, 2005
    Messages:
    1,177
    Likes Received:
    72
    Well, among other similar ideas, the main example I was thinking of simply involved storing pixels as a summation of weighted samples -- (keeping track of the sum of weights as well, i.e. an "RGBW" type of color format) -- using whatever weighting scheme is applicable. It's not a generic thing for all types of color arithmetic, but it's a cheap trick that I like for several applications involving weighted sums of color samples. I was originally trying it in an MCPT renderer I mess around with in my spare time, and it works nicely for those types of things.
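     A minimal sketch of the "RGBW" accumulation trick as I read it (helper names and the sample weights are mine):

```python
def make_pixel():
    # Running weighted colour sum plus the weight total: the "RGBW"
    # format described above.
    return [0.0, 0.0, 0.0, 0.0]  # r*w, g*w, b*w, sum of w

def accumulate(pixel, rgb, weight):
    for i in range(3):
        pixel[i] += rgb[i] * weight
    pixel[3] += weight

def resolve(pixel):
    """Divide through by the accumulated weight to get the final colour."""
    w = pixel[3]
    return (pixel[0] / w, pixel[1] / w, pixel[2] / w) if w > 0 else (0.0, 0.0, 0.0)

px = make_pixel()
accumulate(px, (1.0, 0.0, 0.0), 0.25)  # red sample at quarter weight
accumulate(px, (0.0, 0.0, 1.0), 0.75)  # blue sample at three-quarter weight
# resolve(px) -> (0.25, 0.0, 0.75)
```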

    For accumulating subpixel samples, when you're rasterizing and you can't inherently assume that someone is going to feed you sorted polys or use a Z-prepass with early-Z culling and whatnot, you have some issues. When overdraw occurs and the same pixel gets new samples from new geometry, you have to know that the old ones might need to be thrown out if the new samples occlude the previous ones. Then again, it's possible that this pixel is on the edge of the new geometry, in which case you have to reconsider how you toss out samples, since you've effectively destroyed any information about the multiple samples already accumulated.

    Alpha blending is something else because it kind of depends on proper sorting order, and in general needs to be the last thing you draw anyway, but you effectively have to blend on top of all the samples (which is equivalent to resolving the weighted sum of opaque and then blending). But it is ultimately something that needs to be handled separately anyway, and then you have the question of the aliasing on alpha-test edges.

    Now of course, if not for the fact that hardware only does direct rasterization of tris, there would be less of an issue. Bring on the RPU, as far as I'm concerned :cool:. Yeah, I know... it'll never happen, but that doesn't mean it isn't necessary.
     
  4. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,763
    Likes Received:
    280
    Location:
    In the land of the drop bears
    Thank you all for the great replies and the battle which ensued with them.

    Now I have a different question, and since it's kinda related to the first one I thought there would be no point starting a new thread.

    What can the SPUs do to aid the RSX? I've already read some material from GDC on the SPUs doing things such as skinning, triangle culling and shadow map generation, but I would like someone to shed some more light on what else they can do to aid the RSX. Thank you in advance.
     
  5. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    You can add occlusion culling and post-processing effects to the list.
     
  6. phat

    Regular

    Joined:
    Feb 13, 2002
    Messages:
    496
    Likes Received:
    3
    Location:
    Waterloo, ON Canada
    Anti-aliasing has a very specific meaning in discrete-sampling systems, and you've certainly nailed the gist of it here. Anybody who disagrees should review first principles.
     
  7. Betanumerical

    Veteran

    Joined:
    Aug 20, 2007
    Messages:
    1,763
    Likes Received:
    280
    Location:
    In the land of the drop bears
    Would you perchance have any info on EDGE geometry? (I believe GG are using it for KZ2.)
     
  8. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    Sony's GDC presentations are full of details about EDGE afaik.
     
  9. purpledog

    Newcomer

    Joined:
    Nov 6, 2005
    Messages:
    108
    Likes Received:
    3
    Still a bit unclear :oops: so let me try to rephrase.

    You are saying that rasterizing is bad because detail (samples) is especially needed on the edges of triangles, and that's exactly where rasterizing is a bit dumb due to its fixed resolution. Am I correct?

    How can that be resolved with "sampling" as opposed to rasterizing? Would you accumulate all the potentially visible (micro?) polygons within a single pixel and then do some kind of "sub-pixel ray-casting"?
     
  10. purpledog

    Newcomer

    Joined:
    Nov 6, 2005
    Messages:
    108
    Likes Received:
    3
    I'd be very grateful if you could point to any references where I can have a look at those first-principles.
     
  11. Cal

    Cal
    Newcomer

    Joined:
    Oct 7, 2006
    Messages:
    58
    Likes Received:
    12
    Location:
    Shanghai
    If you want to use the SPUs to do AA, you have to handle framebuffer tiling and compression yourself. That's the bottom line, and I don't think it's feasible in the least. One must rely on the RSX to decompress the depth buffer before accessing it, unless you want to manage the depth buffer yourself.
     
  12. ShootMyMonkey

    Veteran

    Joined:
    Mar 21, 2005
    Messages:
    1,177
    Likes Received:
    72
    Well, what I was getting at is that when directly filling/rasterizing triangles, you don't necessarily know if a given sub-pixel sample will actually be in the final image. Z-sorting or Z-Prepasses and so on can deal with that, but I was saying that's not something you can assume everybody will do all the time (I'm kind of talking as if we're applying this to hardware, mind you).

    So for instance, you have the problem where you've been accumulating samples into some RGBW sum for a given triangle, and along comes another triangle that covers it up. So now you throw out the old sum and start all over again. But then you have the issue of when the new polygon comes along and doesn't cover up the whole pixel, and instead, you're on the edge of a polygon. So what do you do? Obviously *some* of your previously accumulated samples are invalid now, but when all you're storing prior to resolving is an accumulated RGBW sum, you've lost the information about which ones you've accumulated, so you don't know what to take away.

    Now when you're raycasting (why I mentioned the RPU thing), the scene is built ahead of time, and you don't need to worry about these sorts of problems -- the first hit is the correct first hit, and it will be in the image. If you graze an edge, you know ahead of time that's how it should be.
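     A toy sketch of that discard-and-restart behaviour and its blind spot (the heuristic and all names are mine, just illustrating the failure mode described above):

```python
def make_pixel():
    # Collapsed per-pixel state: one RGBW sum plus a single
    # representative depth. Per-sample history is gone -- that is
    # exactly the problem on edge pixels.
    return {"rgbw": [0.0, 0.0, 0.0, 0.0], "z": float("inf")}

def add_sample(pixel, rgb, weight, z, covers_pixel):
    """Accumulate one sub-pixel sample.

    covers_pixel: True when the incoming triangle covers the whole pixel.
    """
    if covers_pixel and z < pixel["z"]:
        # New geometry fully occludes the pixel: throw the old sum
        # away and start over.
        pixel["rgbw"] = [0.0, 0.0, 0.0, 0.0]
    # On a *partially* covering nearer triangle we can only accumulate
    # blindly -- we no longer know which old samples it occludes.
    pixel["z"] = min(pixel["z"], z)
    for i in range(3):
        pixel["rgbw"][i] += rgb[i] * weight
    pixel["rgbw"][3] += weight

def resolve(pixel):
    r, g, b, w = pixel["rgbw"]
    return (r / w, g / w, b / w) if w > 0 else (0.0, 0.0, 0.0)

px = make_pixel()
add_sample(px, (1.0, 0.0, 0.0), 1.0, z=10.0, covers_pixel=True)  # far red tri
add_sample(px, (0.0, 0.0, 1.0), 1.0, z=5.0, covers_pixel=True)   # near blue tri
# resolve(px) -> (0.0, 0.0, 1.0): the occluded red sum was discarded
```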
     
  13. purpledog

    Newcomer

    Joined:
    Nov 6, 2005
    Messages:
    108
    Likes Received:
    3
    got it, thanks
     
  14. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
     Umh.. it's easier than you think :)
     Nonetheless I don't see any particular advantage in using SPUs to just resolve a multisampled buffer; better to offload the whole post-processing pipeline to the SPUs instead.
     RSX would then have more time to do 'normal' rendering.
     Keep in mind that a CPU with local memory, such as an SPU, can do a much better job than a GPU at many post-processing effects.
     The current programming model on GPUs forces you to read the same data over and over again for neighbouring pixels; on an SPU a lot of data can be re-used across multiple pixels.
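     A toy illustration of that data reuse, using a 1D box blur (names and structure are mine; "GPU-style" vs "local-store-style" refers only to the access pattern, not to real RSX/SPU code):

```python
def box_blur_gpu_style(row, radius):
    """GPU-style access pattern: every output pixel re-reads all
    2*radius+1 input taps, even though neighbours share most of them."""
    n = len(row)
    out = []
    for x in range(n):
        taps = [row[min(max(x + d, 0), n - 1)] for d in range(-radius, radius + 1)]
        out.append(sum(taps) / len(taps))
    return out

def box_blur_local_store_style(row, radius):
    """Local-store access pattern: the row sits in fast local memory,
    so a running window sum can be carried across pixels -- two adds
    per pixel instead of 2*radius+1 reads. Same result."""
    n = len(row)
    clamp = lambda i: min(max(i, 0), n - 1)
    window = sum(row[clamp(d)] for d in range(-radius, radius + 1))
    out = [window / (2 * radius + 1)]
    for x in range(1, n):
        window += row[clamp(x + radius)] - row[clamp(x - radius - 1)]
        out.append(window / (2 * radius + 1))
    return out
```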
     
  15. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,400
    Likes Received:
    440
    Location:
    San Francisco
    LOL, I've just shared a cab with the author, and since I paid the fare he must have decided that my financial support needed to be acknowledged ;)
     
  16. assen

    Veteran

    Joined:
    May 21, 2003
    Messages:
    1,377
    Likes Received:
    19
    Location:
    Skirts of Vitosha
  17. purpledog

    Newcomer

    Joined:
    Nov 6, 2005
    Messages:
    108
    Likes Received:
    3
  18. Simon F

    Simon F Tea maker
    Moderator Veteran

    Joined:
    Feb 8, 2002
    Messages:
    4,563
    Likes Received:
    171
    Location:
    In the Island of Sodor, where the steam trains lie
    I'll stick my neck out and say that, because in 3D graphics we can't (currently|feasibly**) pre-filter the model data prior to sampling, I am 100% happy with the following broad definition from the graphics bible.

    I'd take this to include any super-sampling or stochastic approach (which moves the aliasing into high frequency, less detectable, noise).



    ** I say feasibly for the following reason. Let's imagine we could afford to take a rendering model consisting of polygons, which we then clipped against each other to produce a set of completely visible fragments, and were able to analytically convolve each fragment (with correctly frequency-limited textures and shading) against a per-pixel filter (e.g. a windowed sinc). This may be doable, but it assumes that the polygons themselves are correct. If they are themselves an approximation of another surface, then who is to say that it has been sampled correctly? <shrug>
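     The stochastic point can be shown with a toy: regularly sampling an above-Nyquist signal produces a structured (and here flatly wrong) result, while jittering the same sample positions turns that error into high-frequency noise. A sketch (the construction is mine, not from any poster):

```python
import math
import random

def sample_pattern(jitter, n=32, freq=64.0, seed=1):
    """Point-sample sin^2 of a frequency well above the sample rate.

    Regular sampling hits the same phase every time, so the alias is
    *structured*: here, a constant 0 where the true mean is 0.5.
    Jittered sampling scatters the phases, so the same error shows up
    as noise instead -- far less detectable to the eye.
    """
    rng = random.Random(seed)
    out = []
    for i in range(n):
        offset = rng.random() if jitter else 0.5
        u = (i + offset) / n
        out.append(math.sin(math.pi * freq * u) ** 2)
    return out

# max(sample_pattern(False)) is ~0: the regular grid "sees" nothing.
# sample_pattern(True) is noisy, but averages near the true 0.5.
```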
     
  19. Farid

    Farid Artist formely known as Vysez
    Veteran Subscriber

    Joined:
    Mar 22, 2004
    Messages:
    3,844
    Likes Received:
    108
    Location:
    Paris, France
    Well, it's a broad definition, indeed, since according to that, the following is true:

    [IMG]

    Now, while I agree with Marco and others concerning the fact that calling blur techniques anti-aliasing is a misnomer, I'd also agree with the fact that it's hard to categorise and define anti-aliasing precisely enough to exclude blurring from being considered AA.

    The thing is, one could argue that anything that alleviates aliasing artifacts is anti-aliasing. A definition of anti-aliasing that properly excludes all sorts of blurring should mention that proper detail is created during the process. Proper detail, as opposed to detail that results from the linear filtering of an up-sampled target (which is what blurring techniques do).
     
  20. betan

    Veteran

    Joined:
    Jan 26, 2007
    Messages:
    2,315
    Likes Received:
    0
    No, but supersampling approximates prefiltering the model data by prefiltering the supersampled image before the final sampling.

    i.e. supersample (with less aliasing), then filter, then sample again.
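     A minimal sketch of that chain in 1D (function names and the test edge are mine):

```python
def supersample_aa(shade, width, factor):
    """Render a 1D 'image' at factor x the target resolution, box-filter
    each group of `factor` samples, and emit one output pixel each:
    the supersample -> filter -> resample chain described above."""
    hi = [shade((x + 0.5) / (width * factor)) for x in range(width * factor)]
    return [sum(hi[x * factor:(x + 1) * factor]) / factor for x in range(width)]

# A hard black/white edge at u = 0.4, rendered 4 pixels wide with 4x
# supersampling: the pixel straddling the edge resolves to grey.
edge = lambda u: 1.0 if u >= 0.4 else 0.0
# supersample_aa(edge, 4, 4) -> [0.0, 0.5, 1.0, 1.0]
```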

    How does that particular book define aliasing?
     