Game development presentations - a useful reference

ReSTIR may not be as robust or easily extensible as I thought ...

It's only useful in conjunction with shadow rays; there's high noise near disocclusions, lighting discontinuities, high geometric complexity, fast-moving light sources, etc.; and the algorithm reuses primary rays, so it can't be used for global illumination beyond the first hit from the camera ...
 
Have you tried it out?
I only skimmed the paper, and my impression was that it's very promising. I assume we can't really avoid organizing the many lights of a large world in some spatial data structure, which we then have to traverse for each ray hit.
But I would hope we could throw a large set of lights into one node and use reservoir sampling to reduce the set to something practical?
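Something like this minimal sketch is what I have in mind - plain weighted reservoir sampling over one node's light list. The Light type and the intensity-based target weight are placeholders, not anything from the paper; a real target would be the unshadowed contribution at the shading point:

```cpp
#include <random>
#include <vector>

// Hypothetical light record - fields are placeholders, not from the paper.
struct Light { float intensity; /* position, radius, ... */ };

// Streaming weighted reservoir: keeps one candidate out of N,
// selected with probability proportional to its weight.
struct Reservoir {
    int   lightIndex = -1;   // currently selected light
    float wSum       = 0.f;  // running sum of candidate weights
    int   M          = 0;    // number of candidates seen so far

    void update(int index, float weight, float u01) {
        wSum += weight;
        ++M;
        if (wSum > 0.f && u01 < weight / wSum)  // keep new candidate with prob w/wSum
            lightIndex = index;
    }
};

// Reduce a node holding many lights to a single representative sample.
// A real target weight would be the unshadowed contribution at the shading
// point (BRDF * emission * geometry term); intensity is a stand-in here.
Reservoir sampleNodeLights(const std::vector<Light>& nodeLights, std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.f, 1.f);
    Reservoir r;
    for (int i = 0; i < (int)nodeLights.size(); ++i)
        r.update(i, nodeLights[i].intensity, uni(rng));
    return r;
}
```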
 

I haven't tried it out myself, but if you look at Algorithm 5 in the paper, they basically state that their prior reservoirs come from the previous frame's screen-space image. It's conceptually similar to those other massive screen-space and temporal rendering hacks. A good portion of the speedup comes from the fact that they don't have to rely on any data structure more complex than the previous frame's screen-space image to do temporal reuse ...

Their RTXDI video itself demonstrates quite a few examples of robustness issues. They make it abundantly clear that if your sources produce poor samples, ReSTIR has a low capacity to reduce that noise, but if you have relatively good-quality samples, it can significantly help with noise reduction ...

You can get around the robustness problems by doing less temporal reuse, but then ReSTIR becomes less effective at reducing the noise. Conversely, you can increase the sample reuse so that ReSTIR reduces the noise more effectively, but temporal reuse brings its own host of problems, like reducing overall image detail. Alternatively, we could try extending ReSTIR with more complex data structures, which could potentially open us up to higher-quality temporal reuse or allow us to apply the algorithm beyond just shadow rays, but the performance gains wouldn't be anywhere near as convincing, since the paper explicitly mentions the higher cost of maintaining this data structure. ReSTIR is far from the elegant solution people are hyping it up to be ...
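To make that knob concrete, here's a rough sketch of temporal reuse with an M clamp. It reuses the Reservoir struct from the sketch above and drops the target-pdf reweighting the paper actually does, so treat it as illustrative only: a small clamp factor limits ghosting and robustness issues but also limits how much noise the reuse can remove, and a large factor does the opposite.

```cpp
#include <algorithm>  // std::max (Reservoir and <random> come from the sketch above)

// Merge the reprojected previous-frame reservoir into the current one,
// clamping its M first. The paper clamps the history M to some multiple of
// the current frame's M, if I remember right.
Reservoir combineTemporal(const Reservoir& current,
                          Reservoir prev,          // fetched via the motion vector
                          float mClampFactor,
                          std::mt19937& rng)
{
    std::uniform_real_distribution<float> uni(0.f, 1.f);

    int maxPrevM = (int)(mClampFactor * std::max(current.M, 1));
    if (prev.M > maxPrevM) {
        prev.wSum *= (float)maxPrevM / (float)prev.M;  // scale weight with the clamp
        prev.M = maxPrevM;
    }

    // Treat each reservoir as one weighted candidate. (The paper additionally
    // re-evaluates the target pdf at the current pixel - omitted here.)
    Reservoir merged;
    merged.update(current.lightIndex, current.wSum, uni(rng));
    merged.update(prev.lightIndex,    prev.wSum,    uni(rng));
    merged.M = current.M + prev.M;  // total candidate count represented
    return merged;
}
```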
 
"the paper explicitly mentions the higher cost of maintaining this data structure."
Of course - the paper is about an alternative to this, so they present results where this alternative is better. But there is no way around it, so we need to solve it. (Or we wait for extended RT cores which also support point and range queries on the BVH, not just rays.)
I tend to be 'too optimistic' about acceleration structures in general, but:
If we can use many lights per node, a simple multi-level grid might be good enough even for large city night scenes. If not, one or two levels of sparse grid should help, eventually also with scrolling the grid around the moving camera. Most lights won't move quickly, which can reduce this cost a lot.
To reduce traversals while shading, we can make a bounding box per screen-space tile (or per world-space cluster) and traverse only once for that to get a list of many lights.
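Roughly what I imagine, as a sketch - a hypothetical uniform world-space grid of light index lists, gathered once per tile. Cell size, the origin at zero, and all names are placeholders:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical coarse world-space grid: each cell stores the indices of the
// lights overlapping it. Uniform cells with the origin at (0,0,0) to keep
// the sketch short; a real version would be multi-level / sparse / scrolling.
struct LightGrid {
    float cellSize;
    int   dimX, dimY, dimZ;
    std::vector<std::vector<uint32_t>> cells;  // dimX * dimY * dimZ lists

    const std::vector<uint32_t>& cellAt(int x, int y, int z) const {
        return cells[(z * dimY + y) * dimX + x];
    }
};

struct AABB { float min[3], max[3]; };

// Gather candidate lights once per screen-space tile (or world-space cluster):
// visit only the cells overlapped by the tile's world-space bounds and
// concatenate their lists. Shading then samples from this short list
// (e.g. with the reservoir sketch above) instead of traversing per pixel.
std::vector<uint32_t> gatherTileLights(const LightGrid& grid, const AABB& tile) {
    auto cell = [&](float v, int dim) {
        return std::clamp((int)(v / grid.cellSize), 0, dim - 1);
    };
    std::vector<uint32_t> lights;
    for (int z = cell(tile.min[2], grid.dimZ); z <= cell(tile.max[2], grid.dimZ); ++z)
    for (int y = cell(tile.min[1], grid.dimY); y <= cell(tile.max[1], grid.dimY); ++y)
    for (int x = cell(tile.min[0], grid.dimX); x <= cell(tile.max[0], grid.dimX); ++x) {
        const auto& c = grid.cellAt(x, y, z);
        lights.insert(lights.end(), c.begin(), c.end());  // duplicates possible
    }
    return lights;
}
```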

Though, those are ideas for the future - not sure if this makes sense yet. Missed the video, will watch...
 
He mentioned Overwatch and Destiny 2 as having good IQ. I find both to have rather poor IQ due to the abundance of temporal aliasing and shimmer.

He wrote this blog post for a specific reason. He has nothing against TAA; it's about accessibility: some people get nausea and motion sickness from the ghosting artifacts of temporal solutions. He thinks it would be good to have an option in the menu to deactivate TAA and use another type of anti-aliasing.


 


I had never heard of TAA causing motion sickness. Interesting. MSAA does seem like a dead end in terms of achieving high levels of IQ though, at least outside of VR.
 
I wonder if analytic AA is feasible now that barycentric coordinates are available on most GPUs and exposed through APIs.
If it's possible to get close to artifact-free results, it might be a decent alternative to MSAA.
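Roughly what I have in mind, written as plain C++ for illustration (in a real renderer this would be a pixel shader reading hardware barycentrics and their derivatives; all names are made up). It only approximates coverage from the pixel-space distance to the nearest triangle edge - not exact analytic area:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// 'bary' are the pixel's barycentrics, 'dBdx'/'dBdy' their screen-space
// derivatives (what ddx/ddy would give you in a shader). Each barycentric
// is 0 on one edge, so dividing it by the length of its screen-space
// gradient estimates the distance (in pixels) to that edge.
float approxTriangleCoverage(Vec3 bary, Vec3 dBdx, Vec3 dBdy) {
    auto edgeDist = [](float b, float dx, float dy) {
        float grad = std::sqrt(dx * dx + dy * dy);
        return grad > 0.f ? b / grad : 1e6f;
    };
    float d = std::min({ edgeDist(bary.x, dBdx.x, dBdy.x),
                         edgeDist(bary.y, dBdx.y, dBdy.y),
                         edgeDist(bary.z, dBdx.z, dBdy.z) });
    // d == 0 on the nearest edge, >= 0.5 when the pixel center is well inside;
    // map that to a soft 0..1 coverage value.
    return std::clamp(d + 0.5f, 0.f, 1.f);
}
```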
 

How could barycentric coordinates be useful for analytic anti-aliasing?

I'm pretty sure analytic anti-aliasing is about determining the correct area coverage of each primitive lying inside the pixel boundaries, so how are barycentric coordinates going to help us find the intersection area between a primitive and the pixel?
 

Edit: Quite interesting...

Curious: for BVH nodes they store both a bounding box with min/max coords and also a center with a radius - presumably a bounding sphere. Maybe the center is offset, otherwise one could compute it from the box, and maybe the bounding sphere bounds the triangles of internal nodes? Though it sounded more like they have triangles only in the leaves - not sure.
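Purely as a guess, the layout I understood would be something like this (field names made up):

```cpp
#include <cstdint>

// Speculative node layout: an AABB plus a separate center/radius. If the
// sphere were always derived from the box, center and radius would be
// redundant - hence the guess that the center is offset, e.g. bounding the
// node's triangles rather than the box itself.
struct BvhNode {
    float    boxMin[3];
    float    boxMax[3];
    float    center[3];            // possibly not the box center
    float    radius;               // bounding sphere radius
    uint32_t firstChildOrTriangle; // child index, or triangle range start in leaves
    uint32_t count;                // 0 for internal nodes, triangle count in leaves
};
```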

Also interesting is the interlacing for RT. So interlacing is not good enough for the frame buffer, where they use checkerboarding, but it is sufficient for reflections.
 


Some interviews about animation and VFX in video games and movies. The first one is with a League of Legends animator, but he also worked for WETA Digital on animation for the Avatar movie.
 