Is SHaRC spatial only, or direction aware? Asking because the demo case appeared to have painfully avoided geometry where different lighting conditions would have applied to the front and back facing sides of geometry thinner than the voxel size.
I simply haven't bothered to take a look at the actual hash function yet, so has someone already checked that?
I do suspect it doesn't, so you'd always get unintended bleed-through across thin occluders for the parts of the GI that hadn't been sampled in screen space.
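To make the worry concrete, here's a toy C++ sketch (nothing to do with SHaRC's actual code; kVoxelSize is made up). With a purely spatial key, both sides of a thin wall quantize to the same cell and share one radiance entry:

    #include <cstdint>
    #include <cstdio>
    #include <cmath>

    // Hypothetical fixed voxel size; a real cache would scale it with distance.
    constexpr float kVoxelSize = 0.5f;

    // Spatial-only key: quantize the position to integer cell coordinates
    // and pack three 21-bit cell indices into 64 bits.
    uint64_t SpatialOnlyKey(float x, float y, float z) {
        auto q = [](float v) { return (uint64_t)(int64_t)std::floor(v / kVoxelSize) & 0x1FFFFF; };
        return q(x) | (q(y) << 21) | (q(z) << 42);
    }

    int main() {
        // Front and back face of a 5 cm wall with 0.5 m voxels:
        uint64_t front = SpatialOnlyKey(1.00f, 0.0f, 0.0f);
        uint64_t back  = SpatialOnlyKey(1.05f, 0.0f, 0.0f);
        // Prints "yes": both sides read/write the same entry -> bleed-through.
        printf("same cache entry: %s\n", front == back ? "yes" : "no");
    }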
I'd be surprised if they aren't using surfels for SHaRC; those handle two sides just fine. The neural net will find it difficult to learn surface orientation from just the sparse samples, AFAICS.
I think they will have to compromise on the giant neural model approach and do something hybrid.
PS. Oh, it's really voxels. This seems a lot less advanced than DICE's surfel GI.
A bit odd. Cached radiance transfer ain't exactly new, and a major point of these new GI techniques is that you don't have to bake anything: look, instant feedback and dynamic gameplay! Getting rid of all that so you can add "neural" as a bullet point seems silly.
The tensor cores are there anyway, though trying to capture the radiance cache in one model is a bit silly, if that's still what they are doing. It feels a bit like DLSS 1: a very sloppy way to satisfy a mandate to leverage the tensor cores.
MLPs/SVMs can serve as a way to compress and interpolate between non-uniform samples, but it makes more sense to do that localized in the scene, for instance for a surfel hemisphere or for prefiltered view-dependent voxels.
Spatially Hashed Radiance Cache (SHaRC) Library: https://github.com/NVIDIAGameWorks/SHARC
No surfels. Just a subdivision of each voxel into 8 axis-aligned primary normal orientations, roughly as sketched below. And it simply records voxels in world coordinates, with no extra metadata for depth, radius or anything else, nor even the option to dynamically increase resolution where potentially needed.
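In simplified C++ sketch form (the bit widths and kVoxelSize here are mine; the repo does the real packing):

    #include <cstdint>
    #include <cmath>

    constexpr float kVoxelSize = 0.5f; // hypothetical; the real cell size will differ

    // Position quantized to 3 x 20-bit cell indices, plus 3 sign bits of the
    // surface normal picking one of 8 axis-aligned orientation bins. No depth,
    // radius or per-cell resolution metadata anywhere in the key.
    uint64_t HashGridKey(float px, float py, float pz,
                         float nx, float ny, float nz) {
        auto cell = [](float v) { return (uint64_t)(int64_t)std::floor(v / kVoxelSize) & 0xFFFFF; };
        uint64_t octant = (nx < 0 ? 1u : 0u) | (ny < 0 ? 2u : 0u) | (nz < 0 ? 4u : 0u);
        return cell(px) | (cell(py) << 20) | (cell(pz) << 40) | (octant << 60);
    }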
So it does suffer from the well-known issue raised above: light bleeding through occluders thinner than a voxel.
It also doesn't properly account for hash collisions, so lookups will occasionally resolve to completely wrong hash entries. That's a partially self-inflicted issue: it uses the same bits of entropy for both the bucket index and the key inside the bucket, when spending independent bits on each could have significantly reduced the chance of collisions at the same space requirements (see the sketch below).
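In sketch form (kBucketBits is hypothetical, not the library's actual layout):

    #include <cstdint>

    constexpr uint32_t kBucketBits = 22; // hypothetical table of 2^22 buckets
    constexpr uint64_t kBucketMask = (1ull << kBucketBits) - 1;

    // What the post describes: bucket index and in-bucket tag are derived from
    // overlapping bits of the same hash. Every entry in a bucket already agrees
    // on those low bits, so the tag adds little extra discrimination and a
    // false match silently returns radiance from an unrelated cell.
    void LookupShared(uint64_t h, uint32_t& bucket, uint32_t& tag) {
        bucket = (uint32_t)(h & kBucketMask);
        tag    = (uint32_t)h; // overlaps the bucket bits
    }

    // Same storage, fewer false matches: spend the bits the bucket index
    // did NOT consume on the tag, so the two checks are independent.
    void LookupDisjoint(uint64_t h, uint32_t& bucket, uint32_t& tag) {
        bucket = (uint32_t)(h & kBucketMask);
        tag    = (uint32_t)(h >> kBucketBits);
    }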
And don't get me started on the complete lack of any locality: it's 32-bit random reads all over the place.
I've seen a paper make the argument that the neural network will clean up the collisions with a little dither, so you don't always hit the same bucket in a region. Though that would mean SHaRC needs ray reconstruction.
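Something like this, presumably (toy C++ sketch; the paper's actual scheme may differ):

    #include <cstdint>
    #include <cmath>

    // Position-only quantize-and-pack, as in the earlier sketches.
    uint64_t CellKey(float x, float y, float z, float cellSize) {
        auto q = [&](float v) { return (uint64_t)(int64_t)std::floor(v / cellSize) & 0x1FFFFF; };
        return q(x) | (q(y) << 21) | (q(z) << 42);
    }

    // Jitter the query point by up to half a cell per axis before hashing.
    // A colliding bucket then gets hit for only a fraction of nearby pixels,
    // so a wrong entry shows up as high-frequency noise instead of a solid
    // patch, and a denoiser (ray reconstruction) can average it away.
    uint64_t DitheredKey(float x, float y, float z, float cellSize,
                         float r0, float r1, float r2) { // per-pixel randoms in [0,1)
        return CellKey(x + (r0 - 0.5f) * cellSize,
                       y + (r1 - 0.5f) * cellSize,
                       z + (r2 - 0.5f) * cellSize,
                       cellSize);
    }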