"Pure and Correct AA"

Discussion in 'Architecture and Products' started by Reverend, Mar 26, 2007.

  1. Humus

    Humus Crazy coder
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,217
    Likes Received:
    77
    Location:
    Stockholm, Sweden
    Because we want things to be linear in "screen light emittance" space. So any operation that changes the curve from a linear ramp, including the sRGB response of the monitor, has to be done before you average the samples.
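
    As a rough sketch of that ordering, assuming a plain 2.2 power curve instead of the exact sRGB formula (the names are mine):

    // Gamma-correct 4x resolve: decode to linear light, average,
    // re-encode. Assumes a 2.2 power curve for simplicity.
    float4 ResolveGammaCorrect(float4 s[4])
    {
        float3 sum = 0;
        for (int i = 0; i < 4; i++)
            sum += pow(s[i].rgb, 2.2);            // encoded -> linear light
        float3 avg = sum / 4.0;                   // average where light is linear
        return float4(pow(avg, 1.0 / 2.2), 1.0);  // back to encoded space
    }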

    Agreed. Graphics is all subjective anyway. That's why I reject the notion that it's all "science and math" and "no opinions". Ultimately it comes down to opinions.

    Come to think of it, I realize my analog camera analogy sucked too. While the surface of the film reacts pretty much like I said (except that it's non-linear, which further breaks my analogy), it won't be a box filter there either, but probably something closer to a Gaussian filter, since the aperture brings in light from the surrounding area as well. With a pinhole camera it would work, though. :) (If we ignore the fact that a zero-size aperture would need infinite exposure time.) Thinking more about it, I suspect the "most correct representation" must be something that mimics the eye's behavior, which follows basically the same principles as a camera. Perhaps that means Gaussian (or something close to it) would be the answer after all.

    Again, highly subjective. This will be an interesting field to explore in D3D10.

    Yeah, that's what I figure. I think we're really agreeing. :)

    Well, you can achieve much better quality with a non-linear mapping. It all comes down to what goals you have of course. If you try to mimic a camera the tonemapping operator should be something like this:
    1 - 2^(-exposure * value)
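
    In shader terms, that's just (a sketch; 'exposure' being a scalar constant):

    // Exponential tonemapping operator: maps linear-light values in
    // [0, inf) smoothly into [0, 1), roughly mimicking film response.
    float3 Tonemap(float3 value, float exposure)
    {
        return 1.0 - exp2(-exposure * value);
    }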

    Well, in that case it doesn't really matter what you do. :) However, if we're to take linear light into account as you talked about, then clearly we must take the monitor response into account if we want to be 100% correct, otherwise our photon linearity breaks. Of course, in practice we can just assume it's 2.2 and be happy with that. It'll look good enough in the vast majority of cases.
     
  2. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,326
    Likes Received:
    107
    Location:
    San Francisco
    It's the first time I've thought about this and I'm probably wrong.. but I'm not entirely sure we want things to be linear in that way. Let's assume we are using alpha to coverage: this would mean that we are doing alpha blending/AA resolve after gamma correction, and this would give wrong results.
    At the same time, I can't see why AA samples are so special that we want to filter them after gamma correction while all the rest is resolved/blended/filtered before that 'pass'.
    I'm not saying that you're wrong, but I'm wondering which hw is applying gamma correction before AA resolve.
     
  3. stepz

    Newcomer

    Joined:
    Dec 11, 2003
    Messages:
    66
    Likes Received:
    3
    Mapping to non-linear sRGB has to be done after the averaging. If you average two non-linear sRGB values you'll end up with an sRGB value that maps to brightness (amount of photons) lower than the average of the two brightnesses specified by the averaged values. The monitor response curve actually does the inverse mapping from sRGB to linear brightness values.
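
    As a quick worked example (using the common 2.2 approximation): averaging black (0.0) and white (1.0) directly in sRGB gives 0.5, which the monitor displays as 0.5^2.2 ≈ 0.22 of full brightness. The average of the actual brightnesses is 0.5, which should encode to 0.5^(1/2.2) ≈ 0.73. So averaging the encoded values leaves the edge noticeably too dark.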
     
  4. Fred

    Newcomer

    Joined:
    Feb 18, 2002
    Messages:
    210
    Likes Received:
    15
    I like the point that several people made about monitors.

    Mathematically the problem is not as simple as using a sinc function for the graphics filtering process and using infinite supersampling. That would only be true if you lived inside your computer.

    There are actually 3 transforms that are relevant.

    1) graphics card level
    2) Monitor level
    3) Eye response level

    Each one will introduce aliasing (of different types) b/c each is not a perfect reconstruction of whatever signal it receives. The transform between each is likely highly nonlinear, which is why it is indeed a bit of an art, since everyone's monitor is different and everyone's eyes are different.
     
    Acert93 likes this.
  5. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,380
    Each level will have its own kind of distortion, but I don't really see how each level can introduce *aliasing* (meaning: introduce faux patterns due to the presence of frequency components above 1/2 of the sample frequency)?

    Well, this is assuming that the monitor has a resolution that exceeds the sample resolution.

    Hmm. That raises the question: is it fair to assume that the eye is a continuous device, or does it also have an inherent sample resolution? (You know, with that whole high school biology class business of little rods and cones in your eyeball that are connected to nerves and such.) :wink:
     
  6. Reverend

    Banned

    Joined:
    Jan 31, 2002
    Messages:
    3,266
    Likes Received:
    24
    A question: How jaggy is "jaggy" to Tom and is Tom's perception of "jagginess" the same as Dick's or Harry's?

    Sorry for the interruption. :)
     
  7. Simon F

    Simon F Tea maker
    Moderator Veteran

    Joined:
    Feb 8, 2002
    Messages:
    4,560
    Likes Received:
    157
    Location:
    In the Island of Sodor, where the steam trains lie
    Not if you animate it.

    I did a test with 10K samples per pixel - a box filter still looks like rubbish. Now if we could only explain that to ......
     
  8. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,305
    Likes Received:
    138
    Location:
    On the path to wisdom
    Yes, you have to take monitor response into account. But only when you output the final image. Not while you're still rendering/processing it.

    Rendering an image is counting photons, and there is no nonlinearity in counting. When you've finished rendering, the result is a description that tells you how many photons (of the represented color channels/wavelengths) each individual pixel should emit. This description can be encoded (as sRGB for example) to make more efficient use of framebuffer bits. Up to that point, you need no knowledge whatsoever of the display hardware that is used to display the image.

    When you want to display the image, you have to make sure that, if the description says "emit N photons for pixel A" and "emit 2*N photons for pixel B", the monitor will actually display pixel B twice as bright (physically, not perceptually) as pixel A. That is what the gamma/color correction LUT in the pixel output pipeline is for, which has to be calibrated for every monitor.
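
    As a sketch, that final encoding step, using the piecewise curve from the sRGB spec (the function name is mine):

    // Linear light -> sRGB encoding, applied only when writing out the
    // final image (piecewise definition from the sRGB spec).
    float3 LinearToSRGB(float3 c)
    {
        float3 lo = 12.92 * c;
        float3 hi = 1.055 * pow(c, 1.0 / 2.4) - 0.055;
        return (c <= 0.0031308) ? lo : hi;  // componentwise select
    }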
     
  9. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,926
    Likes Received:
    504
    Dunno how much the cornea of normal people blurs, but mine has roughened a bit from a nasty virus infection (misdiagnosed as a bacterial one for about a week). So I have a built-in continuous pre-filter.
     
  10. Bolloxoid

    Newcomer

    Joined:
    May 15, 2003
    Messages:
    191
    Likes Received:
    0
    Since the retina consists of individual receptor cells, it does sample, at around 120 cycles per degree foveally (at the center of the visual field).
     
    #110 Bolloxoid, Mar 30, 2007
    Last edited by a moderator: Mar 30, 2007
  11. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,380
    One thing I overlooked: the receptors are unlikely to be in a nicely organized rectangular grid. Stochastic sampling! :wink:
     
  12. bloodbob

    bloodbob Trollipop
    Veteran

    Joined:
    May 23, 2003
    Messages:
    1,630
    Likes Received:
    27
    Location:
    Australia
    Now if we get a stochastic display everything will look sweet. However I think the frame buffer might get a little more complex.
     
  13. Arun

    Arun Unknown.
    Moderator Legend Veteran

    Joined:
    Aug 28, 2002
    Messages:
    5,023
    Likes Received:
    302
    Location:
    UK
    Why would you want to make your display stochastic? Your eye's densest receptor locations might at a point in time be aligned with the display's least dense locations, and vice versa. Your eye moves very quickly of course, so very small variations might not be noticeable; but it remains the case that your eye will more naturally focus on what looks bad than on what looks good. And I don't think it's really 'noise' that a stochastic display would be adding; it's just randomly affecting the 'precision', rather than the data. I could be wrong of course, but I still really don't see how this would help.

    [EDIT]: I just realized that my answer wrt whether the eye is 'sampling' was rather incomplete; I'll reread some sections of this quite nice book and try posting something more accurate later today. Looking around on the web a bit, there also seem to be some excellent websites, so complete that it nearly terrifies me.
     
  14. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    There is actual experimental evidence of aliasing in human vision, using experimental lenses constructed from wavefront analysis that correct for all of the imperfections in the eye, including the cornea, lens, vitreous humour, shape of the retina, etc. There have also been experiments using adaptive optics (a la the Keck telescope). These sorts of lenses can correct human vision down to 20/6. I remember reading a paper years ago in which a professional baseball player with unaided vision of 20/12 was analyzed and given custom-engineered contact lenses that took his vision to 20/6. At this level, the subject reported aliasing effects in his vision (stairstepped edges, etc).
     
  15. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    I've never really completely understood the motivation (or justification) for gamma correcting subpixels before downsampling. Sure, A+B != (A^2.2 + B^2.2)^(1/2.2), that is, addition is not preserved under the gamma mapping. But there are lots of linear-space additions going on throughout the shader pipeline as well as in the framebuffer, and I don't see what makes framebuffer additive blends, or additive downsamples, any more "special" in needing gamma-correct treatment than, say, a shader sampling multiple times from a map with a custom filter to produce a single color. It seems like anyone writing shaders trying to oversample and blend to avoid shader aliasing would want gamma-correct ADDs as well. And wouldn't the AF hardware have to use gamma-correct blends as well? Can anyone explain why it's justified for edge-aliased downsampling *only*?
     
  16. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    I think it's just a matter of not having any other option with downsampling. With shaders you can do any correction yourself if you want. A lot of the correction is done on the artwork side also, so sometimes there's no need for it in the shader. The other thing is that straight lines have periodic incorrectness without gamma correction, so not having it is a bigger problem for edge AA than for filtering most textures.

    Gamma-correct AA definitely looks better, so I have no complaint with the status quo.
     
  17. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,305
    Likes Received:
    138
    Location:
    On the path to wisdom
    It's not. It's just more important for AA downsampling as the artifacts are more obvious. The only reason it hasn't been done throughout the whole pipeline (before texture filtering, during blending) before is that it's expensive. But D3D10 requires support for sRGB formats and proper conversion (R8G8B8A8, BC1/2/3).

    I wouldn't call it gamma correction though; it's conversion between sRGB and lRGB.
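
    For reference, the decode direction that the hardware applies before filtering/blending when an sRGB format is bound (a sketch; the name is mine):

    // sRGB -> linear light (inverse of the encode curve).
    float3 SRGBToLinear(float3 c)
    {
        float3 lo = c / 12.92;
        float3 hi = pow((c + 0.055) / 1.055, 2.4);
        return (c <= 0.04045) ? lo : hi;  // componentwise select
    }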
     
  18. NocturnDragon

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    393
    Likes Received:
    17
    Don't know if anyone is interested, but in the new AMD papers I finally saw for the first time the syntax that lets you access AA samples in the PS.
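
    From memory, it's along these lines (a sketch, not verbatim from the paper):

    // D3D10 HLSL: the sample count is part of the resource type, and
    // individual samples are fetched with Load (hypothetical resolve shader).
    Texture2DMS<float4, 4> gMSAABuffer;

    float4 main(float4 pos : SV_Position) : SV_Target
    {
        int2 coord = int2(pos.xy);      // pixel address
        float4 sum = 0;
        for (int i = 0; i < 4; i++)
            sum += gMSAABuffer.Load(coord, i);  // fetch sample i
        return sum / 4.0;               // simple box resolve
    }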

     
  19. nAo

    nAo Nutella Nutellae
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,326
    Likes Received:
    107
    Location:
    San Francisco
    Cool, hope the samples are dynamically indexable..
     
  20. Andrew Lauritzen

    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,526
    Likes Received:
    454
    Location:
    British Columbia, Canada
    Yeah, I saw that too, but what I didn't see is whether there's a way to query how many MSAA samples there are at a single pixel... or whether you just have to resolve them all, which seems rather wasteful (considering that it's supposed to be MSAA, not SSAA).
     