Assembly 2012: Advanced Visual Effects with DirectX 11

Either the volume's nearly nonexistent or there's no audio track whatsoever. I can't hear a bloody thing.
 
Me too. Very interesting ... Makes me feel a tiny bit smart. As someone who only reads and thinks about this stuff and never really has an opportunity to actually do something with it, I've always thought that knowing, for every pixel, what poly it hit, at what distance, and what its properties (starting with RGB and alpha) are could be a huge boon for a number of things, including complex, high-quality transparency. But it looks like you can use it for far more than I imagined.

It's something you need a lot of power and memory for, but the rewards seem tremendous.

I think you could do a lot with this for physics as well.
 
That ambient occlusion did seem neat, until I actually started thinking about it a few hours later. The problem is that the worst-case scenario is unsolvable. As in, any and every time your temporal coherence breaks down, your game simply crashes performance-wise, because you are suddenly doing 256 rays (or whatever your max is) per pixel, and there goes everything.

Examples:
That building collapses? Bugger!
A fast-moving vehicle (or anything) eclipses the scene? Bugger!
You turn around really fast? Bugger!
A bunch of anything on the scene moves at all? Bugger!

I.e. anything that moves relative to the camera is a "hole" and crashes performance. Heck, anything moving at all creates holes everywhere, since your previous solution is now potentially invalid. As presented, the technique is totally useless for games. But it could be neat for modelling scenes and getting results back quickly, and it is of course a neat experiment.
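
To make that cost cliff concrete, here's a rough CPU-side sketch of the failure mode. All the names, ray counts, and the depth threshold here are mine for illustration, not from the talk:

```cpp
// Sketch: when reprojection of last frame's AO fails (a "hole"), the pixel
// falls back to the full ray budget, so disocclusion makes cost explode.
#include <cmath>
#include <cstdio>

constexpr int kMaxRays = 256;   // worst-case rays per pixel
constexpr int kReusedRays = 4;  // cost when the cached result is still valid

// History is reusable only if the surface the pixel saw last frame is
// still the surface it sees now (depth match after reprojection).
bool historyValid(float prevDepth, float reprojectedDepth) {
    return std::fabs(prevDepth - reprojectedDepth) < 0.01f;
}

int raysForPixel(float prevDepth, float reprojectedDepth) {
    return historyValid(prevDepth, reprojectedDepth) ? kReusedRays : kMaxRays;
}

int main() {
    // Static scene: history valid everywhere -> cheap frame.
    // Camera whips around: every pixel disoccluded -> 64x the ray cost.
    printf("static pixel: %d rays\n", raysForPixel(1.0f, 1.0f));
    printf("disoccluded pixel: %d rays\n", raysForPixel(1.0f, 5.0f));
    return 0;
}
```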
 
That should not necessarily be such a big issue, as it would be possible to use a significantly lower number of rays in fast-moving areas.

Sure, it would then take a few frames for the solution to converge, but in the case of fast motion combined with motion blur, it might be a good idea.
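
A minimal sketch of that idea, assuming a per-pixel screen-space velocity is available; the constants and names are purely illustrative:

```cpp
// Sketch: scale the per-pixel ray budget down with screen-space velocity
// instead of always paying the full count in fast-moving regions.
#include <algorithm>
#include <cstdio>

constexpr int kMaxRays = 256;
constexpr int kMinRays = 8; // floor; the result converges over several frames

int raysForMotion(float pixelVelocity /* pixels per frame */) {
    // Fast-moving regions get fewer rays; motion blur hides the extra noise.
    float t = std::min(pixelVelocity / 32.0f, 1.0f); // 32 px/frame = "fast"
    return static_cast<int>(kMaxRays + t * (kMinRays - kMaxRays));
}

int main() {
    printf("still: %d rays\n", raysForMotion(0.0f));     // full quality
    printf("moderate: %d rays\n", raysForMotion(8.0f));  // reduced
    printf("fast pan: %d rays\n", raysForMotion(64.0f)); // minimum
    return 0;
}
```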
 

I think there could be a way to impose a limit of rays per frame and then resort to good old SSAO for the missing pixels after going over budget in the worst-case scenarios. That would then need some post-smoothing to blend the different AO types together...
All that seems too hacky to be practical, though.
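
For what it's worth, the budgeting part is simple enough to sketch. Everything below is hypothetical scaffolding, not a real renderer API:

```cpp
// Sketch: spend traced rays until a global per-frame budget runs out, then
// mark the remaining pixels for a cheap SSAO pass, leaving a blend weight
// behind so a post-smoothing pass can hide the seam between AO types.
#include <cstdio>
#include <vector>

enum class AoSource { Traced, Ssao };

struct PixelAo {
    AoSource source;
    float blendWeight; // feed to a post-smoothing pass at the seams
};

std::vector<PixelAo> distributeBudget(const std::vector<int>& raysRequested,
                                      int frameRayBudget) {
    std::vector<PixelAo> out;
    out.reserve(raysRequested.size());
    int spent = 0;
    for (int rays : raysRequested) {
        if (spent + rays <= frameRayBudget) {
            spent += rays;
            out.push_back({AoSource::Traced, 1.0f});
        } else {
            // Over budget: this pixel falls back to screen-space AO.
            out.push_back({AoSource::Ssao, 0.0f});
        }
    }
    return out;
}

int main() {
    std::vector<int> requested = {4, 4, 256, 256, 4}; // two disoccluded pixels
    auto result = distributeBudget(requested, 300);
    for (size_t i = 0; i < result.size(); ++i)
        printf("pixel %zu: %s\n", i,
               result[i].source == AoSource::Traced ? "traced" : "ssao");
    return 0;
}
```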
 
Ahh, no.
The demo he talks about between 4:50 and 5:05 of the video is the one where he says "add them all together and you come up with something good enough to come 2nd at Assembly, but not first".
In the part of the video you mention, he says it's from the new demo, that the competition hasn't started yet, and that "this is a sneak preview".
 
As in, any and every time your temporal coherence breaks down, your game simply crashes performance-wise, because you are suddenly doing 256 rays (or whatever your max is) per pixel, and there goes everything.
Yeah, temporal reprojection has been around for a long time now, but it doesn't get used much in practice because, frankly, it does not improve the worst-case performance of a game... which is what matters! So you can choose to either have performance crash in bad cases (unacceptable) or let artifacts creep in. And as we all know from game shadowing implementations, people normally choose "bad and stable" over "decent but can go bad".

Temporal reprojection can sometimes work OK for anti-aliasing, since in the cases where it falls apart you normally have motion blur to cover the aliasing. But even then, throw any temporal algorithm into a forest with high-frequency foliage everywhere and watch it completely break down.

I'm a little sceptical of algorithms that include temporal reprojection as part of their "advertising". I can do an arbitrary amount of work if I'm willing to let the screen sit there for seconds at a time. Hell, even real-time path tracing can get pretty decent results in such cases ;) Show me the *real* performance of the technique, from start to end, to reach a given quality level. If I still want to use it and feel that performance is not enough, I can temporally cache/reproject any arbitrary term in my shaders that I feel like. I don't really need each algorithm to re-describe how to do that... and then conflate its results based on it. :)
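
Caching an arbitrary term like that boils down to an exponential moving average gated by a reprojection validity test. A minimal sketch, with made-up names and blend factor:

```cpp
// Sketch: temporally cache any shading term yourself via an exponential
// moving average, dropping the history whenever reprojection fails.
#include <cstdio>

float temporalBlend(float current, float history, bool historyValid,
                    float alpha = 0.1f) {
    // Disocclusion: cache is invalid, restart from the current sample.
    if (!historyValid) return current;
    // Otherwise blend a little of the new sample into the cached value.
    return history + alpha * (current - history);
}

int main() {
    float cached = 0.8f;
    // Stable surface: cached term converges slowly toward new samples.
    cached = temporalBlend(0.6f, cached, true);
    printf("blended: %.3f\n", cached);   // 0.780
    // Reprojection failed: throw the history away.
    cached = temporalBlend(0.6f, cached, false);
    printf("restarted: %.3f\n", cached); // 0.600
    return 0;
}
```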

Haven't watched the video yet, though, so perhaps there are good ideas in there. I was just commenting on the general concept of using temporal reprojection to "hide" the real performance of an algorithm.
 