*spin-off* Screen-space/AO/Path tracing techniques & future... stuff

Laa-Yosh

Also, I can't wait for the day when ambient occlusion is finally dead. It's a monster, a zombie, Frankenstein's work, an insult to pixels, I hate it, go away please.
Oh damn, path tracers are still too intensive for realtime rendering, ehh.

*Mod: spun off from console thread*
 
AO is a relatively cheap and easy hack, but it is a hack and pretty often it's just plain wrong. It does add some nice subtle visual detail that's good on the eyes, but it'd be a good day when it's finally replaced with something more correct.
 
AO is a relatively cheap and easy hack, but it is a hack and pretty often it's just plain wrong. It does add some nice subtle visual detail that's good on the eyes, but it'd be a good day when it's finally replaced with something more correct.

Not only is the time when games can achieve good enough results with a path tracer (with the kinds of assets they use) far off, but by the time they get there, there will still be better things to spend their rendering time on. There's still a lot of room for improvement with SS techniques, and voxel/point-cloud/distance-field tech is gonna get more and more use.
My prediction is that high end engines from years in the future (possibly next gen) will use voxel cone tracing or something like that for large scale GI, and they'll stick with screen-space to refine that with high frequency detail. Not unlike QB's approach. With two extra layers of depth information (say nearest back-facing surface, and second-nearest front-facing) most artifacts would be gone, and the remaining ones would be pretty hard to notice perceptually. With screen space reflections getting popular, more games will jump into rendering a low-res, lower-LOD dynamic cubemap around the camera (like many racing and open world games do) to further avoid frustum-related artifacts - the Frostbite team talked about that at SIGGRAPH. I think that'd be the best use of limited resources, and would look pretty darn good. Of course, completely new approaches might get developed in the meanwhile.
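Roughly the kind of thing I mean for the extra depth layer plus cubemap/voxel fallback, as a totally made-up C++ sketch (the names and the tiny fake depth buffer are mine, just to show the test, not anyone's actual implementation):

```cpp
#include <cstdio>

// Made-up sketch of the "extra depth layer" idea: each pixel stores the
// nearest front-facing depth and the matching back-facing depth, so a
// marched sample only counts as occluded when it lies inside that interval
// instead of assuming infinite thickness behind the front surface.
struct DepthLayers {
    float front; // nearest front-facing surface depth
    float back;  // depth of the back side of that surface
};

bool insideSurface(const DepthLayers& d, float sampleDepth) {
    return sampleDepth >= d.front && sampleDepth <= d.back;
}

// March a ray along one screen row; return the hit pixel, or -1 on a miss,
// in which case the caller would fall back to the low-res cubemap / voxel GI
// instead of leaving a hole or streak.
int marchRay(const DepthLayers* row, int width, int startX,
             float startDepth, float depthStepPerPixel) {
    float depth = startDepth;
    for (int x = startX; x < width; ++x, depth += depthStepPerPixel) {
        if (insideSurface(row[x], depth))
            return x;
    }
    return -1;
}

int main() {
    // Tiny fake depth row: empty space except a thin wall at pixel 5.
    DepthLayers row[10];
    for (int i = 0; i < 10; ++i) row[i] = {100.0f, 100.0f};
    row[5] = {2.0f, 2.3f};

    int hit = marchRay(row, 10, 0, 1.0f, 0.25f);
    std::printf("hit pixel: %d\n", hit); // expect 5; -1 would mean cubemap fallback
}
```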
 
Yeah, results will definitely improve, even just by further advancing the existing tech. Still, AO is inherently wrong, so I'd welcome anything to replace it :)
Also, same goes with shadow maps, they're also more of a hack, but still closer to the "right" approach.

And yeah, path tracing is pretty damn expensive. I've recently seen some stats on the ratio of all renders (work in progress included) to final-quality renders at big studios, and it's something like 1.2 - so basically only about 20% of all rendering time goes into work-in-progress renders throughout production, before committing to the final frames. That's quite extreme.
 
Yeah, results will definitely improve, even just by further advancing the existing tech. Still, AO is inherently wrong, so I'd welcome anything to replace it :)
Also, same goes with shadow maps, they're also more of a hack, but still closer to the "right" approach.

I know what you mean about AO being inherently wrong, and I agree with it, but I imagine an engine doing voxel cone tracing and testing against SS within each voxel step. It wouldn't be AO per se, just screen-space GI occlusion, but the algo would still very much be an evolution of what SSAO is doing today, without a lot of the inherent wrongness you hate so much.
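Something like this made-up 1D sketch, combining a coarse voxel opacity lookup with a screen-space blocker check at every cone step (all names and the toy scene data are mine, just to illustrate the combination):

```cpp
#include <cstdio>
#include <algorithm>

// Made-up sketch: march a cone through a coarse voxel opacity field, and at
// every step also check the screen-space depth buffer for a blocker.
// Occlusion comes from whichever source knows about geometry at that point,
// so fine on-screen detail refines the coarse voxel result.

float voxelOpacity(const float* voxels, int count, int step) {
    return (step >= 0 && step < count) ? voxels[step] : 0.0f;
}

// depth[] holds per-pixel scene depth; the cone here marches "into" the
// screen, so a blocker is any stored depth closer than the sample depth.
bool screenSpaceBlocked(const float* depth, int count, int pixel, float sampleDepth) {
    return pixel >= 0 && pixel < count && depth[pixel] < sampleDepth;
}

float traceOcclusion(const float* voxels, const float* depth, int count,
                     int startPixel, float startDepth) {
    float visibility = 1.0f;
    float sampleDepth = startDepth;
    for (int step = 0; step < count; ++step, sampleDepth += 1.0f) {
        float occ = voxelOpacity(voxels, count, step);
        if (screenSpaceBlocked(depth, count, startPixel + step, sampleDepth))
            occ = 1.0f; // screen-space detail says "solid here"
        visibility *= (1.0f - std::min(occ, 1.0f));
    }
    return 1.0f - visibility; // accumulated occlusion along the cone
}

int main() {
    float voxels[8] = {0, 0, 0.2f, 0, 0, 0, 0, 0};  // coarse blocker at step 2
    float depth[8]  = {9, 9, 9, 3.0f, 9, 9, 9, 9};  // thin on-screen blocker at pixel 3
    std::printf("occlusion: %.2f\n", traceOcclusion(voxels, depth, 8, 0, 1.0f));
}
```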
 
Not only is the time when games can achieve good enough results with a path tracer (with the kinds of assets they use) far off, but by the time they get there, there will still be better things to spend their rendering time on. There's still a lot of room for improvement with SS techniques, and voxel/point-cloud/distance-field tech is gonna get more and more use.
My prediction is that high end engines from years in the future (possibly next gen) will use voxel cone tracing or something like that for large scale GI, and they'll stick with screen-space to refine that with high frequency detail. Not unlike QB's approach. With two extra layers of depth information (say nearest back-facing surface, and second-nearest front-facing) most artifacts would be gone, and the remaining ones would be pretty hard to notice perceptually. With screen space reflections getting popular, more games will jump into rendering a low-res, lower-LOD dynamic cubemap around the camera (like many racing and open world games do) to further avoid frustum-related artifacts - the Frostbite team talked about that at SIGGRAPH. I think that'd be the best use of limited resources, and would look pretty darn good. Of course, completely new approaches might get developed in the meanwhile.

The tech showcased by DICE at SIGGRAPH, Stochastic Screen Space Reflections, still has a long way to go imo. GIFs from that video (errors = red):
[gif: 1bordt.gif]

[gif: 289o2q.gif]


This approach is especially problematic when you have characters or movable assets interacting in screen space. And that's why I asked about SSR usage in Uncharted 4 in this post, but maybe I'm missing something.

Just look at 2:20-2:23
 
This approach is especially problematic when you have characters or movable assets interacting in screen space. And that's why I asked about SSR usage in Uncharted 4 in this post, but maybe I'm missing something.

Just look at 2:20-2:23
Yes, they use SSR and they do have the problems you described. (At least in gameplay.)

Been wondering if it would be worthwhile to separate the background and moving objects, or at least moving characters, into a different layer or technique. (Should fix the more visible problems.)

The edge problem is another one that needs a fix; the easy solution is to render a wider FoV, but that's expensive.
Perhaps one could use a distance field to get a half-decent low-res representation of what is beyond the edge of the screen, or at least occlusion information, so the sky cube wouldn't be visible. (Get color from local cubemap?)
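Something along these lines, as a very rough C++ sketch (the single-sphere SDF and all names are just mine for illustration): once the SSR ray walks off screen, keep marching it through a coarse distance field; a hit means occlude the sky cube (and maybe take local cubemap color), a miss means the sky really is visible.

```cpp
#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };

float length3(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Coarse scene SDF: here just a sphere of radius 2 centered at (5, 0, 0).
float sceneSDF(Vec3 p) {
    Vec3 d = {p.x - 5.0f, p.y, p.z};
    return length3(d) - 2.0f;
}

// Sphere-trace the SDF; true = the off-screen ray hits geometry.
bool hitsOffscreenGeometry(Vec3 origin, Vec3 dir, float maxDist) {
    float t = 0.0f;
    while (t < maxDist) {
        Vec3 p = {origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        float d = sceneSDF(p);
        if (d < 0.01f) return true; // occluded: use local cubemap color, not sky
        t += d;
    }
    return false; // nothing out there: sky cube is fine
}

int main() {
    Vec3 origin = {0, 0, 0};
    Vec3 towardSphere = {1, 0, 0};
    Vec3 missingIt    = {0, 1, 0};
    std::printf("toward sphere: %d, away: %d\n",
                hitsOffscreenGeometry(origin, towardSphere, 20.0f),
                hitsOffscreenGeometry(origin, missingIt, 20.0f));
}
```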
 
Yes, they use SSR and they do have the problems you described.

Been wondering if it would be worthwhile to separate the background and moving objects, or at least moving characters, into a different layer or technique. (Should fix the more visible problems.)
Hmm. You'd wind up with potential for massive variations in shading costs, depending on the layer sizes. Then, you'd have to run SSR once for each layer, where the second pass would have some intrinsic sketchiness on account of not knowing how to occlude the rays from earlier passes with the new data (seems like it would be just about impossible to avoid some weird background bleeding on rough surfaces). And at the end of it all there are still basic SSR occlusion issues that it wouldn't really fix (at least not without astronomical layer counts), like getting the backs of objects to be interpreted correctly.

Get color from local cubemap?
That's what they're already frequently doing. Area cubemaps are only accurate at their sampling point, and depending on the game's needs there are difficult questions about what they should contain and how they should be "lit."

One tactic that Bungie tried with Halo 3 was to use a fairly heavy light data format which supplies directional info, so that at any point you can directionally attenuate or recolor the area cubemap according to the incident light at that point. This allows the cubemap to provide area-accurate flavor detail in the reflections, while not creating such blatant light-from-nowhere issues. It's still not perfect, which becomes sort of obvious when looking at a reflective flat surface with lots of different colors of incident light, as the area cubemap details are still obviously continuous across it.
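I don't know the exact data format Bungie used, but the gist could look something like this sketch, assuming an ambient-cube style directional term (one color per +/-X, +/-Y, +/-Z axis, which is my assumption here, not necessarily their actual representation):

```cpp
#include <cstdio>

struct Color { float r, g, b; };
struct Vec3  { float x, y, z; };

// Per-point directional light info: one color per axis direction.
struct AmbientCube { Color face[6]; }; // +X, -X, +Y, -Y, +Z, -Z

// Evaluate incident light from the ambient cube in a given direction,
// weighting the three relevant faces by the squared direction components.
Color incidentLight(const AmbientCube& ac, Vec3 d) {
    const Color& cx = ac.face[d.x >= 0 ? 0 : 1];
    const Color& cy = ac.face[d.y >= 0 ? 2 : 3];
    const Color& cz = ac.face[d.z >= 0 ? 4 : 5];
    float wx = d.x * d.x, wy = d.y * d.y, wz = d.z * d.z; // assumes d is normalized
    return { cx.r * wx + cy.r * wy + cz.r * wz,
             cx.g * wx + cy.g * wy + cz.g * wz,
             cx.b * wx + cy.b * wy + cz.b * wz };
}

// Tint the (area-accurate but point-inaccurate) cubemap sample by the local
// incident light in the reflection direction, so the reflection roughly
// matches the light actually arriving at that point.
Color attenuateReflection(Color cubemapSample, const AmbientCube& ac, Vec3 reflDir) {
    Color l = incidentLight(ac, reflDir);
    return { cubemapSample.r * l.r, cubemapSample.g * l.g, cubemapSample.b * l.b };
}

int main() {
    AmbientCube ac = {{ {1.0f, 0.9f, 0.8f}, {0.1f, 0.1f, 0.1f},   // warm from +X, dark from -X
                        {0.3f, 0.3f, 0.4f}, {0.2f, 0.2f, 0.2f},
                        {0.5f, 0.5f, 0.6f}, {0.2f, 0.2f, 0.2f} }};
    Color sample = {0.8f, 0.8f, 0.8f};               // raw area-cubemap texel
    Color out = attenuateReflection(sample, ac, {1, 0, 0});
    std::printf("tinted: %.2f %.2f %.2f\n", out.r, out.g, out.b);
}
```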
 
There might be mileage in a hybrid algorithm: use a nasty cheap screen space algorithm to mark pixels (geometry, in effect) for occlusion and then path trace those pixels (geometry) to render the actual occlusion.

So the screen space algorithm is solely to produce candidate pixels/geometry for a path trace (or variant) pass. You discard the screen space effect entirely.

The problem with this is converting the geometry into something that suits the path tracer (or variant) algorithm, with the constraint that you aren't interested in high-detail geometry that's distant from the occluded (candidate) pixels output by the screen-space pass.
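The control flow I have in mind, as a made-up C++ sketch with both passes stubbed out (the heuristic and the tracer are placeholders, not real implementations):

```cpp
#include <cstdio>
#include <vector>

// Pass 1: nasty-cheap screen-space heuristic (stub) - true = candidate pixel.
bool cheapScreenSpaceTest(int x, int y) {
    return ((x + y) % 4) == 0; // placeholder heuristic
}

// Pass 2: trace occlusion for one pixel against (simplified) geometry (stub).
float traceOcclusion(int /*x*/, int /*y*/) {
    return 0.5f; // placeholder result from the tracer
}

int main() {
    const int width = 8, height = 4;
    std::vector<float> occlusion(width * height, 0.0f); // default: unoccluded

    // Pass 1: collect candidates; the screen-space result itself is discarded.
    std::vector<int> candidates;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (cheapScreenSpaceTest(x, y))
                candidates.push_back(y * width + x);

    // Pass 2: trace only the candidates; everything else keeps the default.
    for (int idx : candidates)
        occlusion[idx] = traceOcclusion(idx % width, idx / width);

    std::printf("traced %zu of %d pixels\n", candidates.size(), width * height);
}
```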
 
PowerVR Wizard!
Mixing both worlds should give quite an interesting tradeoff, be it in simplicity of implementation, performance and versatility, or simply better quality.

I'm a little tired of the huge can of worms rasterization opens just to fake everything with pretty mediocre results, but we have to do what the hardware can...
 
Irradiance volumes or whatever, GI yada yada - light probing for GI simulation has been done ever since the beginning of this gen. Just off the top of my mind: Far Cry 4 uses it dynamically, and Driveclub does it splendidly, definitely the best real-time GI simulation this gen. Hell, it was Killzone: Shadow Fall that set the benchmark at the start of this gen. A rushed launch game, and it had light probes for GI simulation, raymarched light shafts, proper volumetrics, localized cubemap reflections plus ray-traced SSR with occlusion and bounces, bokeh DOF, proper per-object motion blur... lots and lots of tech there. I find it amazing that it's only now that other games are catching up...
 
How many of you guys feel that the hardware architecture needs to change before we actually start to see real-time path tracing? Disney made a pretty nifty path tracer that uses ray bundling and sorting. I'm wondering if leading hardware companies (e.g. ATI/Nvidia) will eventually abandon the old tried-and-true triangle rasterization techniques and come up with a totally different paradigm. Thoughts?
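For reference, the bundling/sorting part boils down to something like this sketch (in the spirit of what Disney described, not their actual code; the key layout and names are mine): build a key from each queued ray's quantized direction, sort, then trace in that order so rays touching similar geometry run back-to-back and memory accesses stay coherent.

```cpp
#include <cstdio>
#include <cstdint>
#include <vector>
#include <algorithm>

struct Ray { float ox, oy, oz; float dx, dy, dz; };

// Crude sort key: quantize each direction component to 4 bits and pack them.
// A real system would likely also bin by origin cell, wavelength, etc.
uint32_t directionKey(const Ray& r) {
    auto q = [](float v) { return static_cast<uint32_t>((v * 0.5f + 0.5f) * 15.0f); };
    return (q(r.dx) << 8) | (q(r.dy) << 4) | q(r.dz);
}

int main() {
    std::vector<Ray> rays = {
        {0, 0, 0,  1, 0, 0},
        {1, 0, 0, -1, 0, 0},
        {2, 0, 0,  1, 0, 0},
        {3, 0, 0,  0, 1, 0},
    };

    // Sort into direction-coherent bundles, then "trace" in that order.
    std::sort(rays.begin(), rays.end(),
              [](const Ray& a, const Ray& b) { return directionKey(a) < directionKey(b); });

    for (const Ray& r : rays)
        std::printf("trace ray dir (%+.0f %+.0f %+.0f)\n", r.dx, r.dy, r.dz);
}
```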
 