AccuView Quincunx Image Quality

Discussion in 'General 3D Technology' started by aths, Feb 16, 2002.

  1. pcchen

    pcchen Moderator
    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,018
    Likes Received:
    582
    Location:
    Taiwan
    That's why I said the Gaussian filter should not be applied inside a primitive. If supersampling is used (every subsample has its own texture sample), a Gaussian filter should be fine; it is no different from downsampling an image. However, since NVIDIA is using multisampling (the same texture sample is reused for every subsample in a pixel), a Gaussian filter applied inside a primitive can make textures blurry.

    The reason is that although a Gaussian filter removes high-frequency components, it also attenuates lower-frequency components to some extent (a box filter is similar in this regard). With supersampling, every subsample has its own texture sample, which means the subsamples are filtered with a cut-off frequency at the subsample width (given decent texture filtering). So the image won't become too blurry when filtered again. With multisampling, however, all subsamples inside a pixel share the same texture sample, which means they are filtered with a cut-off frequency at the pixel width. Filtering them again therefore makes things blurry, since some lower-frequency components are attenuated too much.
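    The multisampling case can be sketched numerically. This is a toy 1D model of my own, not any actual hardware pipeline: the 1/2-1/4-1/4 kernel stands in for the Gaussian filter, and the texture values are made up.

```python
# Toy 1D model of why a cross-pixel filter blurs multisampled textures.
# The 1/2-1/4-1/4 kernel is an illustrative stand-in for the Gaussian
# filter, not NVIDIA's actual Quincunx kernel.
import math

def tent_filter(tex):
    """Blend each pixel with its neighbours (weights 1/2, 1/4, 1/4)."""
    out = []
    for i in range(1, len(tex) - 1):
        out.append(0.5 * tex[i] + 0.25 * (tex[i - 1] + tex[i + 1]))
    return out

# With multisampling, every subsample of a pixel carries the same texture
# value, so the filter effectively runs on the pixel-rate texture signal.
tex = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # a sharp texture edge inside a primitive
print(tent_filter(tex))                 # the edge gets smeared across pixels

# The same kernel also attenuates low frequencies: its gain at angular
# frequency w is 0.5 + 0.5*cos(w), which is below 1 for any w > 0.
def gain(w):
    return 0.5 + 0.5 * math.cos(w)

print(round(gain(math.pi / 4), 3))      # even a fairly low frequency loses ~15%
```

    The point of the second half is the one made above: the filter does not only cut frequencies above the pixel rate, it also eats into detail the texture filtering had already band-limited correctly.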

    I don't know whether such a feature would be very hard to implement. A straightforward method is to maintain a bitfield, using one bit per pixel to mark whether the pixel is covered by only one primitive or by many. A triangle strip or a fan can be regarded as one primitive.
     
  2. Reverend

    Banned

    Joined:
    Jan 31, 2002
    Messages:
    3,266
    Likes Received:
    24
    ... and can you think of any way such a "detection" can be achieved?
     
  3. pcchen

    pcchen Moderator
    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,018
    Likes Received:
    582
    Location:
    Taiwan
    A simple method would be like this:

    Assume 4X FSAA is used. First, maintain a bitfield with one bit for each pixel. This field is initialized to all 'set' when the color buffer is cleared. Furthermore, arrange the pixel pipeline to render all subsamples inside a pixel sequentially. This should be easy on a 4-pixel-pipeline architecture: just arrange the pipelines in 2x2 squares so that they all render into one pixel at the same time.

    When rendering a primitive, since all subsamples inside a pixel are rendered sequentially (or simultaneously), it is possible to check whether all subsamples inside the pixel were rendered (i.e. inside the primitive and passing the alpha, Z, and stencil tests). If so, mark the pixel's bit in the bitfield as 'set'; otherwise, mark it as 'clear'.

    When rendering is complete, apply the Gaussian filter only to the pixels whose bit is marked 'clear'; the other pixels are simply copied. This applies the Gaussian filter only at the edges of primitives, and intersections are handled correctly as well.

    To optimize further for bandwidth, the renderer could write only one subsample when all subsamples in a pixel are the same. This saves some memory bandwidth.
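    In software, the scheme above would look roughly like this. It is only a toy 2D model: the coverage mask is handed in directly instead of being produced by rasterization, and the 5-tap 1/2 + 4x(1/8) kernel is an assumed stand-in for the Gaussian filter.

```python
# Toy model of the proposed scheme: filter only pixels whose coverage bit
# is 'clear' (not fully covered by a single primitive); copy the rest.
# The kernel weights here are assumptions for illustration.

def selective_filter(img, covered):
    """img: 2D list of floats; covered: 2D list of bools (the bitfield)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]             # start with a plain copy
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if covered[y][x]:                 # interior pixel: copy as-is
                continue
            out[y][x] = (0.5 * img[y][x] +    # edge pixel: blend with its
                         0.125 * (img[y-1][x] + img[y+1][x] +
                                  img[y][x-1] + img[y][x+1]))
    return out

# A bright primitive edge at column 2; only column 2's bits are 'clear'.
img     = [[0.0, 0.0, 1.0, 1.0]] * 4
covered = [[True, True, False, True]] * 4
out = selective_filter(img, covered)
print(out[1])   # interior texels untouched, only the edge pixel is blended
```

    The interior texels pass through untouched, which is exactly why the texture-blur complaint against Quincunx would not apply to this variant.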
     
  4. aths

    Newcomer

    Joined:
    Feb 8, 2002
    Messages:
    128
    Likes Received:
    3
    Location:
    Germany (at the Baltic Sea)
    pcchen,

    I have had similar thoughts for a few weeks now. (Multisampling with a kind of "difference buffer", to save write cycles.) In my view, there must be something that makes it very complicated - otherwise, why doesn't nVidia use this bandwidth-saving method?
     
  5. Freon

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    38
    Likes Received:
    0
    What if we define the center of a given pixel as the weighted average location of all subsamples and subpixels? Edge antialiasing and texture blur would be approximately the same between our pre- and post- quarter pixel geometry offset, would they not?

    But for text (console text in Half-Life is a good example) that relies on the specific known center, the new geometry offset would possibly give two "correct" samples and three "incorrect" ones, versus one "correct" sample and possibly four "incorrect" ones. That's the worst-case scenario. Note that non-Quincunx RGMS never blurred the text in the first place.

    Other than aligned text, I can't see how the new sampling pattern is much different from, say, moving the camera up and to the left a minuscule amount. Away from edges it's still multisampling with a certain amount of blur. And at edges it's still 5 samples with one of them weighted most heavily, correct? Or are the two "inside the pixel" samples weighted equally now (maybe that's what I'm missing here)? If they are, it sounds like all they did was change the weightings, not offset the geometry. And this doesn't really change normal non-Quincunx 2x RGMS at all. I consider the center of the pixel to be the weighted average sample position anyway, and for 2x RGMS the weighting is 50/50 at edges and 0/100 in the interior. Moving the geometry a quarter pixel doesn't change that.

    Shifting the geometry or image a quarter pixel isn't going to do anything but, well, move the image a bit in said direction. I dunno. Sounds like a bunch of BS to me. I never really cared for Quincunx in the first place. 2x RGMS works well, and I'd still take it over Quincunx even without a speed hit.
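    The "weighted average position" argument is easy to check with a small sketch. The sample offsets and weights below are commonly quoted values and my own assumptions, not confirmed hardware patterns.

```python
# Weighted-average sample position (centroid) of an AA sample pattern.
# Offsets are in pixel units relative to the pixel center; the patterns
# and weights are assumed/commonly quoted, not confirmed hardware specs.

def centroid(samples):
    """samples: list of (weight, x, y) tuples; weights should sum to 1."""
    cx = sum(w * x for w, x, y in samples)
    cy = sum(w * y for w, x, y in samples)
    return (cx, cy)

# Assumed 2x RGMS pattern: two samples on a diagonal, equal weight.
rgms2x = [(0.5, 0.25, 0.25), (0.5, -0.25, -0.25)]

# Commonly quoted Quincunx weighting: center sample 1/2, four corner
# samples 1/8 each.
quincunx = [(0.5, 0.0, 0.0),
            (0.125, -0.5, -0.5), (0.125, 0.5, -0.5),
            (0.125, -0.5, 0.5), (0.125, 0.5, 0.5)]

print(centroid(rgms2x))    # centroid sits at the pixel center
print(centroid(quincunx))  # ditto, by symmetry

# Shifting every sample (or the geometry) by a quarter pixel just moves
# the centroid by the same quarter pixel: the image shifts, nothing else.
shifted = [(w, x + 0.25, y + 0.25) for w, x, y in quincunx]
print(centroid(shifted))
```

    Both patterns have their centroid at the pixel center, and a uniform quarter-pixel offset moves that centroid by exactly a quarter pixel, which is the point being made above.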
     
