*spin-off* Screen-space/AO/Path tracing techniques & future... stuff

Discussion in 'Rendering Technology and APIs' started by Laa-Yosh, Feb 16, 2016.

  1. Laa-Yosh

    Laa-Yosh I can has custom title?
    Legend Subscriber

    Joined:
    Feb 12, 2002
    Messages:
    9,568
    Likes Received:
    1,452
    Location:
    Budapest, Hungary
    Also, I can't wait for the day when ambient occlusion is finally dead. It's a monster, a zombie, Frankenstein's work, an insult to pixels, I hate it, go away please.
Oh damn, path tracers are still too intensive for realtime rendering, ehh.

    *Mod: spun off from console thread*
     
    #1 Laa-Yosh, Feb 16, 2016
    Last edited by a moderator: Feb 17, 2016
  2. Clukos

    Clukos Bloodborne 2 when?
    Veteran Newcomer

    Joined:
    Jun 25, 2014
    Messages:
    4,452
    Likes Received:
    3,781
    Unfortunately, the alternative ain't that pretty.
     
    eloyc and milk like this.
  3. Laa-Yosh

    Laa-Yosh I can has custom title?
    Legend Subscriber

    Joined:
    Feb 12, 2002
    Messages:
    9,568
    Likes Received:
    1,452
    Location:
    Budapest, Hungary
    AO is a relatively cheap and easy hack, but it is a hack and pretty often it's just plain wrong. It does add some nice subtle visual detail that's good on the eyes, but it'd be a good day when it's finally replaced with something more correct.
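To make "cheap and easy hack" concrete, here's a toy 1-D sketch of what depth-buffer SSAO boils down to (all names and numbers are illustrative, not any shipping implementation): occlusion is just the fraction of nearby depth-buffer taps that happen to be closer to the camera, with no actual light transport anywhere.

```python
def ssao_1d(depth, x, radius=4):
    """Toy 1-D analogue of depth-buffer SSAO.

    `depth` is a list of view-space depths, one per 'pixel'.  For pixel
    x, take every tap within `radius` and count how many stored depths
    are closer to the camera than our own depth -- the classic SSAO
    heuristic.  Returns occlusion in [0, 1].
    """
    occluded = samples = 0
    for tap in range(x - radius, x + radius + 1):
        if tap == x:
            continue
        tap = max(0, min(len(depth) - 1, tap))   # clamp to the screen
        samples += 1
        if depth[tap] < depth[x] - 1e-3:         # neighbour closer -> call it an occluder
            occluded += 1
    return occluded / samples

# Flat floor at depth 10 with a box (depth 5) on pixels 20..24.
scene = [10.0] * 20 + [5.0] * 5 + [10.0] * 20
open_ao   = ssao_1d(scene, 5)    # out in the open: nothing nearby is closer
corner_ao = ssao_1d(scene, 19)   # at the foot of the box: half the taps hit it
```

The count fires whether or not the neighbour actually blocks any light reaching the pixel, which is exactly the "inherently wrong" part being complained about.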
     
  4. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,851
    Likes Received:
    2,379
Not only is the time when games can achieve good-enough results with a path tracer, with the kinds of assets they use, far off - by the time they get there, there will still be better things to spend their rendering time on. There's still a lot of room for improvement with SS techniques, and voxel/point-cloud/distance-field tech is gonna get more and more use.
My prediction is that high-end engines years from now (possibly next gen) will use voxel cone tracing or something like that for large-scale GI, and they'll stick with screen space to refine that with high-frequency detail. Not unlike QB's approach. With some 2 extra layers of depth information (say nearest back-facing surface, and second-nearest front-facing) most artifacts would be gone, and the remaining ones would be perceptually hard to notice. With screen-space reflections getting popular, more games will jump into rendering a low-res, lower-LOD dynamic cubemap around the camera (like many racing and open-world games do) to further avoid frustum-related artifacts - the Frostbite team talked about that at SIGGRAPH. I think that'd be the best use of limited resources, and it would look pretty darn good. Of course, completely new approaches might get developed in the meanwhile.
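The extra-depth-layers idea can be sketched as an interval test (toy code; the layer names follow the post, everything else is illustrative). With only the front depth, a screen-space march has to guess a thickness; also storing the nearest back-facing depth turns the guess into a real inside/outside test.

```python
def occluded_single_layer(front, sample_depth, assumed_thickness=0.5):
    """One depth layer: a ray-march sample counts as inside geometry if
    it's behind the front surface by less than a guessed thickness."""
    return front < sample_depth < front + assumed_thickness

def occluded_two_layer(front, back, sample_depth):
    """With the nearest back-facing depth stored too (one of the two
    extra layers proposed above), the guess becomes an interval test."""
    return front < sample_depth < back

# A thin railing 0.2 units deep, with open space behind it.
front, back = 5.0, 5.2
deep_sample = 5.4   # actually behind the railing, in open space
single = occluded_single_layer(front, deep_sample)   # wrongly inside the guessed slab
double = occluded_two_layer(front, back, deep_sample)
```

The single-layer test misclassifies the point behind the thin object; the interval test does not, which is why thin geometry is one of the artifact classes these layers would remove.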
     
    chris1515, London-boy, Clukos and 3 others like this.
  5. Laa-Yosh

    Laa-Yosh I can has custom title?
    Legend Subscriber

    Joined:
    Feb 12, 2002
    Messages:
    9,568
    Likes Received:
    1,452
    Location:
    Budapest, Hungary
    Yeah, results will definitely improve, even just by further advancing the existing tech. Still, AO is inherently wrong, so I'd welcome anything to replace it :)
Also, the same goes for shadow maps; they're also more of a hack, but still closer to the "right" approach.

And yeah, path tracing is pretty damn expensive. I've just recently seen some stats about the current ratio of final-quality renders to all work-in-progress renders in big studios, and it's something like 1.2 - so basically only 20% of all rendering time is spent across the whole production before committing to the final frames. That's quite extreme.
     
  6. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,851
    Likes Received:
    2,379
I know what you mean about AO being inherently wrong, and I agree with it, but I imagine an engine doing voxel cone tracing, and testing against SS within each voxel step. It wouldn't be AO per se, just screen-space GI occlusion, but the algo would still be very much an evolution of what SSAO is doing today, without a lot of the inherent wrongness you hate so much.
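A minimal sketch of the front-to-back accumulation at the heart of voxel cone tracing (illustrative code; `sample_opacity` stands in for sampling a pre-filtered voxel mip whose footprint matches the cone radius, which is where a per-step screen-space test could also be folded in).

```python
def cone_trace_occlusion(sample_opacity, origin, direction, aperture, max_dist):
    """Front-to-back opacity accumulation along a widening cone.
    `sample_opacity(pos, radius)` returns opacity in [0, 1] for a
    footprint of the given radius.  Returns occlusion in [0, 1]."""
    occlusion, t = 0.0, 0.1
    while t < max_dist and occlusion < 0.99:
        radius = t * aperture                  # cone widens with distance
        pos = tuple(o + t * d for o, d in zip(origin, direction))
        a = sample_opacity(pos, radius)
        occlusion += (1.0 - occlusion) * a     # standard front-to-back blend
        t += max(radius, 0.05)                 # step scales with footprint
    return min(occlusion, 1.0)

# A solid slab at z >= 3 as a stand-in for the voxelised scene.
def slab(pos, radius):
    return 1.0 if pos[2] + radius >= 3.0 else 0.0

toward = cone_trace_occlusion(slab, (0, 0, 0), (0, 0, 1), aperture=0.3, max_dist=10)
away   = cone_trace_occlusion(slab, (0, 0, 0), (0, 0, -1), aperture=0.3, max_dist=10)
```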
     
    chris1515 and Laa-Yosh like this.
  7. Clukos

    Clukos Bloodborne 2 when?
    Veteran Newcomer

    Joined:
    Jun 25, 2014
    Messages:
    4,452
    Likes Received:
    3,781
The tech showcased by DICE at SIGGRAPH, Stochastic Screen Space Reflections, still has a long way to go imo. Gifs from that video (errors = red):

This approach is especially problematic when you have characters or movable assets interacting in screen space. And that's why I asked about SSR usage in Uncharted 4 in this post, but maybe I'm missing something.

    Just look at 2:20-2:23
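The failures shown in red are inherent to any screen-space march, stochastic or not. A toy 1-D version (illustrative; real SSR jitters and filters the rays, as the DICE talk describes) shows the two outcomes: either the marched ray dips behind the depth buffer and reports a hit, or it walks off the screen and there is simply no data to reflect.

```python
def ssr_march(depth, x0, dx, z0, dz, steps=64):
    """Toy screen-space reflection march over a 1-D depth buffer.
    Steps the reflected ray in screen space; reports the hit pixel when
    the ray's depth passes behind the stored depth, or None when the
    ray leaves the screen -- the classic SSR failure mode."""
    x, z = float(x0), z0
    for _ in range(steps):
        x += dx
        z += dz
        xi = int(round(x))
        if not (0 <= xi < len(depth)):
            return None               # ray left the screen: nothing to reflect
        if z > depth[xi]:             # ray passed behind the depth buffer
            return xi
    return None

depth = [5.0] * 30 + [2.0] * 10 + [5.0] * 24   # an object at pixels 30..39
hit  = ssr_march(depth, x0=10, dx=1.0, z0=4.0, dz=-0.05)   # marches into the object
miss = ssr_march(depth, x0=10, dx=-1.0, z0=4.0, dz=-0.05)  # marches off screen
```

Characters moving in front of the reflected geometry cause the same kind of miss: the pixels the ray needed are occluded in the depth buffer, so the reflection has holes exactly where the interaction happens.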
     
    #7 Clukos, Feb 17, 2016
    Last edited: Feb 17, 2016
  8. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,298
    Likes Received:
    386
    Location:
    Finland
    Yes, they use SSR and they do have the problems you described. (At least in gameplay.)

Been wondering if it would be worthwhile to separate the background and moving objects, or at least moving characters, into a different layer or technique. (Should fix the more visible problems.)

The edge problem is another one that needs fixing; the easy fix is to render a wider FoV, but that's expensive.
Perhaps one could use a distance field to get a half-decent low-res representation of what is beyond the edge of the screen, or at least occlusion information so the sky-cube wouldn't be visible. (Get color from a local cubemap?)
     
    #8 jlippo, Feb 17, 2016
    Last edited: Feb 17, 2016
  9. HTupolev

    Regular

    Joined:
    Dec 8, 2012
    Messages:
    936
    Likes Received:
    564
    Hmm. You'd wind up with potential for massive variations in shading costs, depending on the layer sizes. Then, you'd have to run SSR once for each layer, where the second pass would have some intrinsic sketchiness on account of not knowing how to occlude the rays from earlier passes with the new data (seems like it would be just about impossible to avoid some weird background bleeding on rough surfaces). And at the end of it all there are still basic SSR occlusion issues that it wouldn't really fix (at least not without astronomical layer counts), like getting the backs of objects to be interpreted correctly.

    That's what they're already frequently doing. Area cubemaps are only accurate at their sampling point, and depending on the game's needs there are difficult questions about what they should contain and how they should be "lit."

    One tactic that Bungie tried with Halo 3 was to use a fairly heavy light data format which supplies directional info, so that at any point you can directionally attenuate or recolor the area cubemap according to the incident light at that point. This allows the cubemap to provide area-accurate flavor detail in the reflections, while not creating such blatant light-from-nowhere issues. It's still not perfect, which becomes sort of obvious when looking at a reflective flat surface with lots of different colors of incident light, as the area cubemap details are still obviously continuous across it.
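The attenuation step can be sketched in one line (a simplification of what's described above; the real "heavy light data format" is directional, and here it's collapsed to a single RGB factor, so treat every name as illustrative).

```python
def attenuated_reflection(cubemap_sample, incident_light):
    """Modulate a shared area-cubemap sample by the incident light
    stored at the shaded point, so the reflection dims and recolours
    with the local lighting instead of glowing from nowhere.
    `incident_light` is an RGB attenuation factor per point."""
    return tuple(c * l for c, l in zip(cubemap_sample, incident_light))

sky = (0.9, 0.9, 1.0)                               # shared area-cubemap sample
lit_point    = attenuated_reflection(sky, (1.0, 1.0, 1.0))
shadow_point = attenuated_reflection(sky, (0.1, 0.1, 0.1))
```

This captures why the trick helps (no bright sky reflected inside a dark room) and also why it stays imperfect: the cubemap *detail* is identical at both points, only its intensity changes.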
     
  10. milk

    Veteran Regular

    Joined:
    Jun 6, 2012
    Messages:
    2,851
    Likes Received:
    2,379
    Clukos and AlBran like this.
  11. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    There might be mileage in a hybrid algorithm: use a nasty cheap screen space algorithm to mark pixels (geometry, in effect) for occlusion and then path trace those pixels (geometry) to render the actual occlusion.

    So the screen space algorithm is solely to produce candidate pixels/geometry for a path trace (or variant) pass. You discard the screen space effect entirely.

The problem with this is converting geometry into something that suits the path tracer (or variant) algorithm, with the constraint that you aren't interested in high-detail geometry that's distant from the occluded (candidate) pixels output by the screen-space pass.
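The two-pass structure can be sketched like this (purely illustrative names; the hard, unsolved part - building a traceable scene representation for the candidates - is hidden inside `expensive_trace`).

```python
def hybrid_occlusion(pixels, cheap_test, expensive_trace):
    """A cheap screen-space test marks candidate pixels; only those
    candidates pay for the accurate trace, whose result fully replaces
    the screen-space guess.  The cheap pass must be conservative (no
    false negatives) or real occlusion is missed; false positives only
    cost time."""
    candidates = [p for p in pixels if cheap_test(p)]
    results = {p: 0.0 for p in pixels}           # non-candidates: unoccluded
    for p in candidates:
        results[p] = expensive_trace(p)          # accurate value, guess discarded
    return results, len(candidates)

# Hypothetical scene: pixels 0..9, real occluders near pixels 3..5.
cheap = lambda p: 2 <= p <= 6                     # conservative screen-space mask
accurate = lambda p: 0.8 if 3 <= p <= 5 else 0.0  # stand-in for a path-traced result
occ, traced = hybrid_occlusion(range(10), cheap, accurate)
```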
     
  12. Rodéric

    Rodéric a.k.a. Ingenu
    Moderator Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,976
    Likes Received:
    837
    Location:
    Planet Earth.
    PowerVR Wizard !
Mixing both worlds should give quite an interesting trade-off, be it in simplicity of implementation, performance, versatility, or quality.

I'm a little tired of the huge can of worms rasterization opens just to fake everything, with pretty mediocre results, but we have to do what the hardware can...
     
  13. dah145

    Newcomer

    Joined:
    Nov 3, 2008
    Messages:
    33
    Likes Received:
    1
Irradiance volumes or whatever - GI yada yada - light probing for GI simulation has been done ever since the beginning of this gen. Just off the top of my mind: Far Cry 4 uses it dynamically, and Driveclub does it splendidly - definitely the best real-time GI simulation this gen. Hell, it was Killzone: Shadow Fall that set the benchmark at the start of this gen: a rushed launch game, yet it had light probes for GI simulation, raymarched light shafts, proper volumetrics, localized cubemap reflections plus ray-traced SSR with occlusion and bounces, bokeh DOF, proper per-object motion blur - lots and lots of tech there. I find it amazing that only now are other games catching up...
     
  14. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,582
    Likes Received:
    5,680
    Location:
    ಠ_ಠ
    You could go farther back actually.
     
  15. VFX_Veteran

    Regular Newcomer

    Joined:
    Mar 9, 2002
    Messages:
    656
    Likes Received:
    176
    How many of you guys feel that the hardware architecture needs to change before we actually start to see RT path-tracing? Disney made a pretty nifty path-tracer that uses ray-bundling and sorting. I'm wondering if leading hardware companies (i.e. ATI/Nvidia) will eventually abandon the old tried and true triangle rasterization techniques and come up with a totally different paradigm. Thoughts?
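The bundling/sorting idea, reduced to a toy (illustrative only; production sorters key on far finer origin/direction bins than a sign octant, and reorder rays between traversal and shading stages):

```python
def sort_rays_by_octant(rays):
    """Batch rays so that rays travelling in similar directions are
    traced together, improving cache and SIMD coherence.  Each ray is
    an (origin, direction) pair; the sort key is the sign octant of
    the direction."""
    def octant(d):
        return (d[0] >= 0) << 2 | (d[1] >= 0) << 1 | (d[2] >= 0)
    return sorted(rays, key=lambda r: octant(r[1]))

rays = [((0, 0, 0), ( 1,  1,  1)),
        ((0, 0, 0), (-1, -1, -1)),
        ((1, 2, 3), ( 1,  1,  1))]
batched = sort_rays_by_octant(rays)   # the two same-octant rays end up adjacent
```

The point of the exercise is that incoherent ray workloads are exactly what current rasterization-oriented GPUs are worst at, which is why the question of new hardware paradigms comes up at all.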
     
  16. Infinisearch

    Veteran Regular

    Joined:
    Jul 22, 2004
    Messages:
    739
    Likes Received:
    139
    Location:
    USA
  17. VFX_Veteran

    Regular Newcomer

    Joined:
    Mar 9, 2002
    Messages:
    656
    Likes Received:
    176
That looks extremely promising, but I wonder what the limitations are.
     
  18. cheapchips

    Regular Newcomer

    Joined:
    Feb 23, 2013
    Messages:
    675
    Likes Received:
    404
I thought OTOY are into server-based rendering + streaming, so maybe that?
     
  19. Clukos

    Clukos Bloodborne 2 when?
    Veteran Newcomer

    Joined:
    Jun 25, 2014
    Messages:
    4,452
    Likes Received:
    3,781
Dreams from Media Molecule is doing something completely new with SDF-based rendering on GCN 1.1; if you want to read more: http://advances.realtimerendering.com/s2015/mmalex_siggraph2015_hires_final.pdf

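The core primitive behind SDF renderers is sphere tracing: step along the ray by the distance the field guarantees is empty. A minimal sketch follows (the Dreams pipeline in the linked slides is far more elaborate - they splat point clouds generated from the SDFs - so this is only the textbook operation, with illustrative parameters):

```python
import math

def sphere_trace(sdf, origin, direction, max_dist=100.0, eps=1e-4, max_steps=128):
    """March a ray through a signed distance field.  At each step the
    SDF value is a radius known to contain no surface, so we can jump
    that far safely.  Returns the hit distance, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:          # close enough to the surface: hit
            return t
        t += d               # safe jump
        if t > max_dist:
            return None
    return None

# Unit sphere at the origin as the distance field.
unit_sphere = lambda p: math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0
hit  = sphere_trace(unit_sphere, (0.0, 0.0, -3.0), (0.0, 0.0, 1.0))   # ~2.0
miss = sphere_trace(unit_sphere, (0.0, 0.0, -3.0), (0.0, 1.0, 0.0))
```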
     
    chris1515 likes this.
  20. AlBran

    AlBran Ferro-Fibrous
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    20,582
    Likes Received:
    5,680
    Location:
    ಠ_ಠ
    Sony Defense Force :runaway:
    Spooky. I like.
     
    Lightman likes this.

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.