Game development presentations - a useful reference

Discussion in 'Rendering Technology and APIs' started by liolio, Sep 16, 2009.

  1. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,596
    Likes Received:
    840
    Location:
    Finland
You know the pixel size, and from the coordinates you know the distance to the edge at the sample position, so you can use Wu-line-like distance-to-edge AA.
    http://www.diva-portal.se/smash/get/diva2:843104/FULLTEXT02.pdf
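A rough sketch of the idea, assuming the shader already knows the pixel footprint size and the signed distance from the sample position to the nearest triangle edge (the function name and the linear ramp are illustrative, not from the paper):

```python
def edge_coverage(signed_distance: float, pixel_size: float = 1.0) -> float:
    """Approximate coverage of a pixel by a half-plane edge.

    signed_distance > 0 means the sample centre is inside the triangle;
    coverage ramps linearly over one pixel width, like Wu-style line AA.
    """
    t = signed_distance / pixel_size + 0.5  # remap [-0.5, 0.5] -> [0, 1]
    return max(0.0, min(1.0, t))

print(edge_coverage(0.0))   # sample exactly on the edge -> 0.5 coverage
print(edge_coverage(0.25))  # a quarter pixel inside -> 0.75 coverage
```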
     
  2. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,135
    Likes Received:
    1,290
But distance to the closest edge does not help so much if triangles become smaller and smaller. We would need to clip the whole triangle to the pixel to get the area, which is then still incomplete because of the other, missing triangles.

    To me, TAA is the best solution, and I do not notice artifacts when playing.
    But I've learned that's not true for everyone. Some even hate TAA to death. It would be interesting to know how many people here are sensitive and affected. And whether ML could help them more than traditional TAA does, in the long run...
     
    jlippo likes this.
  3. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,286
    Likes Received:
    1,551
    Location:
    London
    TAA quality varies hugely by game.
     
    techuse and CeeGee like this.
  4. cheapchips

    Veteran Newcomer

    Joined:
    Feb 23, 2013
    Messages:
    1,795
    Likes Received:
    1,929
The author works at Bethesda, and idTech's TSSAA is rather splendid, I think. I was surprised by the whole article.
     
  5. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    305
    Likes Received:
    344
Distance-to-edge AA is completely different from analytic AA ...

    In our standard rasterization pipeline, we have sample points that all contribute equal weight to a pixel's final colour output. Consider this pathological example: suppose we have 4 samples per pixel (4x MSAA) enabled. A pixel lies fully within a red triangle at a greater depth, but within that same pixel there are also 4 blue subpixel triangles which altogether cover no more than a couple percent of the pixel's total area, yet cover all of the sample points in the pixel at a lower depth. Regardless of which primitive gets tested against the samples, our 4 blue subpixel triangles are always going to win, because they cover those very same sample points at a lower depth. Now the main problem is that when we add these sample points together, our final pixel colour output is going to be completely blue, despite the pixel itself being mostly red in area! Can that really be the correct solution?

    https://publik.tuwien.ac.at/files/PubDat_223424.pdf

    In the analytic rasterization pipeline described above, the concept of sample points does not exist as it does in our standard pipeline. The pixel's final colour output is computed by assigning each primitive a weight proportional to the area of its intersection with the pixel. Once we've figured out exactly how much of the pixel's area is covered by each primitive lying within its boundaries, we can revisit our prior example. Since we no longer have sample points to test against our primitives, we can now consider the contribution of our red triangle, which would normally be rejected by the standard pipeline. As we add up each primitive's colour contribution based on its area-proportional weight, the pixel's final colour output comes out mostly red with a slight purple tint, which is consistent with our original input signal!

    In conclusion, with standard rasterization we have the problem that primitives can over- or under-represent their contribution to the pixel's final colour, since the area of a primitive need not be proportional to the weight of the samples it covers. An analytic rasterization pipeline solves this problem by computing the exact visibility, i.e. the area where each primitive intersects the pixel boundaries ...
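The pathological example above can be reduced to a toy resolve step. This is a sketch with illustrative numbers (the ~2% blue coverage is assumed for the demonstration, not taken from the paper):

```python
# A pixel fully inside a red triangle, with tiny blue subpixel triangles
# covering all 4 MSAA sample points at a lower depth.
RED, BLUE = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)

def msaa_resolve(sample_colours):
    """Standard pipeline: every covered sample has equal weight."""
    n = len(sample_colours)
    return tuple(sum(c[i] for c in sample_colours) / n for i in range(3))

def analytic_resolve(weighted_colours):
    """Analytic pipeline: weight = visible area fraction within the pixel."""
    return tuple(sum(w * c[i] for w, c in weighted_colours) for i in range(3))

# All 4 samples hit the nearer blue slivers -> the pixel resolves to pure blue.
print(msaa_resolve([BLUE, BLUE, BLUE, BLUE]))         # (0.0, 0.0, 1.0)

# Area weighting: blue slivers cover ~2% of the pixel, red the remaining 98%
# -> mostly red with a slight blue (purple) tint.
print(analytic_resolve([(0.98, RED), (0.02, BLUE)]))
```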
     
    #645 Lurkmass, Jan 28, 2021
    Last edited: Jan 28, 2021
    DavidGraham, jlippo and chris1515 like this.
  6. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,596
    Likes Received:
    840
    Location:
    Finland
Yup, perhaps I shouldn't have used the word there.
    Habit of thinking of it from the good old days of Wu-line AA.

    Fun paper on NSAA, I might have missed it back in the day.
     
  7. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,679
    Likes Received:
    3,731
My knowledge of the topic was never that detailed, and I admit I understood the two terms to be interchangeable until your distinction just now.
    I thought analytical AA was more a description of a philosophy (for lack of a better word right now - I'm drunk) than an actual well-defined algorithm.

    Knowing that, distance to edge is still a hell of an improvement over what we have now. Also, even analytical AA, from your description, does not sound like a silver bullet, since, as I understood it, it evaluates the coverage of each primitive individually but does not precisely consider how different primitives occlude each other. I'm assuming it tries its best to make an educated guess there, but it's still not 100% mathematically correct, even though I suppose the error is negligible for IQ, especially at modern resolutions.
     
  8. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    305
    Likes Received:
    344
What do you mean by primitives occluding each other? Transparency?

    The paper admittedly does brush aside the problem of transparency and intersecting geometry, but in one of the slides they mention the possibility of extending to those cases with the depth peeling technique from their prior work ...

    In terms of quality, an analytic rasterization pipeline is practically equivalent to ground truth results, so the only limit is the numeric precision used. Even 256 samples per pixel (256x MSAA) with the standard pipeline will struggle to produce clean results compared to analytic rasterization ...
     
  9. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,135
    Likes Received:
    1,290
I guess he means there are still failure cases in the process of gathering all triangles that affect a pixel.
    How do they do it in the paper you linked? (I tried to read it, but I'm blocked by the thought 'nah... sounds too expensive to be worth it', and after that it's hard to focus... :D )

    EDIT: Got it, indeed they bin all triangles to each pixel. Completely impractical. And even if we replace this with some MSAA trick to get, say, at most 16 triangles per pixel, then clipping those triangles against each other to find the exact area is nonsense.
    When I got my first PC and Watcom C, the first thing I did was make a rotating cube, and then I eliminated overdraw by clipping each triangle against every other, producing many more triangles but no overdraw. Back then I felt clever, but not anymore :) Doing this per pixel really is crazy, and they still have moire.
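The per-pixel clipping being discussed can be sketched with standard Sutherland-Hodgman clipping against the four pixel edges, followed by the shoelace formula for the remaining area. This is an illustrative sketch (function names and structure are mine, not from the paper):

```python
def clip_axis(poly, axis, bound, keep_less):
    """Clip a polygon against the line x|y = bound, keeping one side."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        pin = (p[axis] <= bound) if keep_less else (p[axis] >= bound)
        qin = (q[axis] <= bound) if keep_less else (q[axis] >= bound)
        if pin:
            out.append(p)
        if pin != qin:  # edge crosses the clip line -> emit intersection
            t = (bound - p[axis]) / (q[axis] - p[axis])
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def coverage(tri, x0, y0):
    """Fraction of the unit pixel [x0,x0+1]x[y0,y0+1] covered by triangle tri."""
    poly = list(tri)
    for axis, bound, keep_less in ((0, x0, False), (0, x0 + 1.0, True),
                                   (1, y0, False), (1, y0 + 1.0, True)):
        poly = clip_axis(poly, axis, bound, keep_less)
        if not poly:
            return 0.0
    area = 0.0  # shoelace formula over the clipped polygon
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        area += p[0] * q[1] - q[0] * p[1]
    return abs(area) / 2.0

# Triangle covering the whole pixel -> 1.0; half the pixel -> 0.5.
print(coverage(((-1.0, -1.0), (3.0, -1.0), (-1.0, 3.0)), 0.0, 0.0))
print(coverage(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)), 0.0, 0.0))
```

This resolves coverage of one triangle exactly, but as noted above it says nothing about occlusion between the triangles sharing the pixel, which is where the per-pixel cost explodes.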

I wonder how much it would help if we could jitter the sampling point individually per (sub)pixel in hardware.
     
    #649 JoeJ, Jan 31, 2021
    Last edited: Jan 31, 2021
  10. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    3,679
    Likes Received:
    3,731
Never mind. I had not taken the time to look at the paper; I was going off your simplified description and my lousy interpretation of it on top of that. Now with JoeJ's post I see how it handles occlusion. It clips every triangle against every other. Huh... Interesting. Obviously too expensive, but still an interesting concept, if only as a thought experiment or theoretical benchmark.
     
  11. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,135
    Likes Received:
    1,290
...not sure if I got this right. It's interesting to relate the 'hopeless task' of proper AA to the visibility problem. It all looks much less nonsensical if we try to solve both problems at once: hidden surface removal for a speedup, and AA for better IQ.
    If we do this triangle clipping at once for all triangles (not per pixel just because that's GPU friendly), we get AA pretty cheaply because occlusion is already resolved. If they did so in the paper, then I apologize.
    Though this has been tried in software rasterizers, and even for low-poly Quake it did not work out, and an additional Z-buffer for dynamic objects was used.

    Still, maybe the idea remains interesting even today - either in HW or SW. But there are counter-arguments, and all of them grow stronger over time:
    * What does it help to have awesome AA for triangles, while our larger problem is to represent stuff with those flat edgy bastards in the first place?
    * Because of that, we make triangles as small as we can, and so we will end up at point splatting being just faster, making former efforts on triangles obsolete. (Or we go ray / sphere tracing and say goodbye to rasterization completely.)

    TAA really seems future proof, because it does not care about those things and works regardless. That's a big plus.
    The other argument is that it utilizes temporal accumulation, which is good, even if those who dislike its artifacts also dislike the concept in general.
    But we really want it. The big waste in rendering animation is that we compute the same pixel every frame, even if it has not changed much from the (projected) previous state. So maybe 90% of any game rendering is redundant?
    I really like how TAA exploits this waste to increase IQ. Makes total sense, and I hope it will be further improved until everybody is happy.

    I can also imagine AA won't be necessary at all if we move to something prefiltered, like having thin volumetric shells on the triangles. I've been thinking about this for a long time, and recently it seems more attractive again because it could eventually help with compression and storage issues...

The only example I know is this:



It still seems to have artifacts on mip level switches, but there is no hard edge crawling.
     
  12. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    680
    Likes Received:
    363
"Perfect" AA should be an impossible task beyond sampling one platonic solid (a shape that doesn't self-intersect from any view) per pixel, so that all contiguous samples can be considered coplanar (and even then you have to take the exact shape of the solid into account and correct for it). Which is a complicated way of saying: beyond mipmapping, you either start introducing more and more error, and/or start exploding memory costs for prefiltering, with each dimension you add.

Consider two triangles, one red, one blue, in the same pixel but with different normals and different depths. To correctly shade the pixel you need to sample the red triangle, shade it according to its normal and depth (and BRDF), then sample the blue triangle and shade its lighting separately, as the lighting for each triangle could be completely different, and then combine the two signals into one "pixel". Even analytic AA isn't "correct", as you can't combine the signals *before* shading unless you start paying that prefiltering cost (less error, more memory cost).
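A toy version of this point, assuming simple Lambert shading and an illustrative 50/50 coverage split (all numbers and names are mine, for demonstration only):

```python
def lambert(normal, light, albedo):
    """Minimal Lambert term: albedo * max(0, N.L)."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light)))
    return tuple(a * ndotl for a in albedo)

LIGHT = (0.0, 0.0, 1.0)
red  = dict(normal=(0.0, 0.0, 1.0), albedo=(1.0, 0.0, 0.0))  # facing the light
blue = dict(normal=(1.0, 0.0, 0.0), albedo=(0.0, 0.0, 1.0))  # edge-on, unlit

# Correct: shade each triangle, THEN blend by coverage (50/50 here).
correct = tuple(0.5 * a + 0.5 * b for a, b in
                zip(lambert(red['normal'], LIGHT, red['albedo']),
                    lambert(blue['normal'], LIGHT, blue['albedo'])))

# Wrong: average the attributes first (note the averaged normal is not even
# renormalized here), then shade once.
avg_n = tuple(0.5 * a + 0.5 * b for a, b in zip(red['normal'], blue['normal']))
avg_a = tuple(0.5 * a + 0.5 * b for a, b in zip(red['albedo'], blue['albedo']))
wrong = lambert(avg_n, LIGHT, avg_a)

print(correct)  # (0.5, 0.0, 0.0): the blue face receives no light
print(wrong)    # (0.25, 0.0, 0.25): the averaged normal lights both colours
```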

Ultimately, to match "reference" rendering you need 256 samples per pixel. To match what a camera sees, well, the highest-end ones put out a linear 16-bit signal, so 64K samples per pixel for perfect "photorealism", as each of those photons potentially came from a completely different pathway. Which is why I appreciate TAA and don't think it's going away anytime soon. We're not getting anywhere close to reference rendering in the near future, but at least with TAA we can get a few more samples that might be "close enough" to correct.
     
    #652 Frenetic Pony, Feb 2, 2021
    Last edited: Feb 2, 2021
  13. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,286
    Likes Received:
    1,551
    Location:
    London
    If only cameras (or film, then scanned) were even remotely close to this ideal!

    Mr Mars' Camera Resolution Diatribe (warrenmars.com)

    We have this old thread:

    "Pure and Correct AA" | Beyond3D Forum

    which, sadly, has lost lots of images.

This is one of my all-time favourite posts on B3D:

     
    milk likes this.
  14. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    305
    Likes Received:
    344
Analytic shading can be done via exact evaluation of convolution integrals, as in one of the author's previous papers, which goes on to serve as a basis for non-sampled anti-aliasing. The author also argues that their results are comparable to the reference results, which usually involve making the standard rasterization pipeline evaluate a number of samples approaching infinity ...

    Even 256 samples per pixel isn't good enough when moire pattern artifacts (spatial aliasing) still visibly show up, as detailed in the analytic visibility work. 256 samples might be enough to eliminate any visible geometric aliasing, but it's not effective against other sources of spatial aliasing. Doing just a couple hundred samples won't give you the 'exact' visibility; analytic visibility, on the other hand, will net you the 'exact' visibility, which involves computing the boundaries of the visible regions of the primitives ...

    I sincerely hope that TAA, or any other temporal technique, doesn't ultimately end up being the inevitable future, because art pipelines would be simpler without them, which translates to a productivity advantage. I just hope that the industry's exploration of that area, which was mostly a result of finding workarounds or hacks for the limitations of older, inferior, and weaker systems, will stop altogether. I truly believe that TAA is a regression in the productivity of art pipelines ...
     
  15. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,135
    Likes Received:
    1,290
    Why?

    Even exact analytical AA could not fix this completely, as long as we compute and display stuff using regular grids. Adding fuzz to deal with it can be just as good but cheaper.
     
    Frenetic Pony likes this.
  16. Lurkmass

    Regular Newcomer

    Joined:
    Mar 3, 2020
    Messages:
    305
    Likes Received:
    344
Because now we're stuck in a never-ending task of fine-tuning and maintaining a solution that won't work in a potentially infinite number of cases, and constantly refitting your solution to handle different cases isn't a productive use of time. I really hope that TAA will ultimately end up on the wrong side of history and that we'll eventually forget about it in the coming generation. Art pipelines shouldn't be handicapped with such hacks in an ideal future, and should evolve towards ease of authoring more content ...
     
  17. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,135
    Likes Received:
    1,290
You mean like placing probes, fill lights, even portals manually, while keeping draw distance limited, watching triangle counts, number of shaders, RAM usage, etc., etc., etc...
    I guess limiting TAA artifacts is mostly the work of a few programmers and some technical artists, and it's pretty much nothing in comparison to the above? What exactly puts a limitation on content authoring?
    AA may change, but temporal accumulation is not wrong, while ignoring available data is. So TAA will mostly be remembered as progress in the future, not as a failure. I doubt the concept will vanish completely from image processing.
     
    milk, Frenetic Pony and BRiT like this.
  18. xz321zx

    Newcomer

    Joined:
    Apr 20, 2016
    Messages:
    139
    Likes Received:
    35
Apparently TAA's (uniform?) hysteresis is additive with probe GI hysteresis, which makes heavy use of heuristics.
    https://arxiv.org/pdf/2009.10796.pdf
    Really? Do we know such (uniform?) hysteresis is wanted scene-wide?
     
  19. JoeJ

    Veteran Newcomer

    Joined:
    Apr 1, 2018
    Messages:
    1,135
    Likes Received:
    1,290
Not sure if that's an argument, because real life is a gradual process of smooth change too. Would you call the sunset hysteresis, just because it happens over some period of time?
    Would you also call any physics simulation we do wrong, because it works by taking a previous state and integrating the change over a timestep? Eventually improving cached contact forces over multiple frames? Surely not.

    So why is TAA different here - why is it bad or wrong, although it works the same way?
    The only answer can be subjective perception of error. But the success of TAA implies only a minority is affected. Still, that's a problem, so what would you propose as an alternative?
     
    Dictator likes this.
  20. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    680
    Likes Received:
    363
Heck, why not just concentrate on making TAA better? Epic's goal for their new TAA upsampling is to have the default values work for all content. I imagine AMD's Super Resolution will be much the same.

    And it's not like progress isn't being made on quashing artifacts. Resident Evil Village looks pretty damned amazing, and they're temporally upsampling to 4K on next gen. TAA just solves too many kinds of aliasing too well not to like it.
     
    milk likes this.