Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Is SHaRC spatial only, or direction aware? Asking because the demo case appeared to have painfully avoided geometry where different lighting conditions would apply to the front and back faces of geometry thinner than the voxel size.

I simply haven't gotten around to looking at the actual hash function yet, so has someone already checked that?

I do suspect it doesn't, so you'd always get unintended bleed through thin occluders for the parts of the GI that had not been sampled in screen space.
 
Alex was wondering if DDGI is still there, it is.


It has three techniques right now: NRC, which uses the Tensor cores, while SHaRC and DDGI use compute.

 
Do they have a reference path traced mode to see how it should look as ground truth? It would be nice if they did.
Yes. Here are more detailed images I took with sample accumulation on and the denoiser off:
comparison.jpg
Also, AFAIK NRC should do specular bounces as well. Do you see good evidence of that?
I think it does.
bathroom.jpg

Is SHaRC spatial only, or direction aware? Asking because the demo case appeared to have painfully avoided geometry where different lighting conditions would apply to the front and back faces of geometry thinner than the voxel size.
Perhaps 'voxel' in SHaRC doesn't carry its common meaning of 'volume pixel'.
 
I'd be surprised if they aren't using surfels for SHaRC; that would handle two sides just fine. The neural net will find it difficult to learn surface orientation from just the sparse samples, AFAICS.

I think they will have to compromise on the giant neural model approach and do something hybrid.

PS: oh, it really is voxels. This seems a lot less advanced than DICE's surfel GI.
 
A bit odd. Cached radiance transfer ain't exactly new, and a major point of these new GI techniques is that you don't have to bake anything. Look, instant feedback and dynamic gameplay! Getting rid of all that so you can add "neural" as a bullet point seems silly.
The tensor cores are there anyway, though trying to capture the radiance cache in one model is a bit silly, if that's still what they're doing. It feels a bit like DLSS 1: a very sloppy way to satisfy a mandate to leverage the tensor cores.

MLPs/SVMs can serve as a way to compress and interpolate between non-uniform samples, but it makes more sense to do that localized within a scene, for instance for a surfel hemisphere or for prefiltered view-dependent voxels.
 
I'd be surprised if they aren't using surfels for SHaRC; that would handle two sides just fine.

It doesn't use surfels. Just a subdivision of each voxel into 8 axis-aligned primary normal orientations, with voxels simply recorded in world coordinates. There is no extra metadata for depth, radius or anything else, nor even the option to dynamically increase resolution where potentially needed.
So it does suffer from a well-known issue.
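To make the scheme concrete, here's a minimal sketch of what such a cache key could look like (my reconstruction for illustration, not SHaRC's actual code; the cell size, mixing function and names are all my own assumptions):

```cpp
#include <cstdint>
#include <cmath>
#include <initializer_list>

// Quantize a world-space position to an integer voxel coordinate
// (cellSize is a hypothetical fixed grid resolution).
static int32_t quantize(float x, float cellSize) {
    return static_cast<int32_t>(std::floor(x / cellSize));
}

// Pack the sign bits of the surface normal into one of 8 octants.
// This is the only directional information the cache retains per voxel.
static uint32_t normalOctant(float nx, float ny, float nz) {
    return (nx < 0.f ? 1u : 0u) | (ny < 0.f ? 2u : 0u) | (nz < 0.f ? 4u : 0u);
}

// Combine voxel coords and octant into a 64-bit key (FNV-1a style mix here;
// the actual hash function in the release may differ).
static uint64_t cacheKey(int32_t vx, int32_t vy, int32_t vz, uint32_t octant) {
    uint64_t h = 0xcbf29ce484222325ull;
    for (uint64_t v : { (uint64_t)(uint32_t)vx, (uint64_t)(uint32_t)vy,
                        (uint64_t)(uint32_t)vz, (uint64_t)octant }) {
        h ^= v;
        h *= 0x100000001b3ull;  // FNV prime
    }
    return h;
}
```

Note that opposite faces of a thin wall do land in different octants (their normals flip sign), but any two surfaces inside the same cell whose normals share an octant share one cache entry, which is where the bleed comes from.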

It also doesn't properly account for hash collisions! So this will occasionally resolve to completely wrong hash entries. A partially self-inflicted issue: it uses the same bits of entropy for both the bucket index and the key inside the bucket, when using independent bits could have significantly reduced the chance of collisions at the same space requirements.
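To illustrate the entropy complaint (a hypothetical reconstruction, not the actual SHaRC code; table size and struct names are my own), compare reusing the bucket bits in the stored key against taking the key from bits the bucket index didn't consume:

```cpp
#include <cstdint>

constexpr uint32_t kBucketBits = 22;                  // hypothetical: 2^22 buckets
constexpr uint32_t kBucketMask = (1u << kBucketBits) - 1;

// Wasteful scheme: every element in bucket b satisfies (key & kBucketMask) == b,
// so only the top 32 - 22 = 10 bits of the stored 32-bit key discriminate.
struct EntrySameBits {
    static uint32_t bucket(uint64_t h) { return (uint32_t)h & kBucketMask; }
    static uint32_t key(uint64_t h)    { return (uint32_t)h; }
};

// Same storage cost, fewer collisions: draw the key from hash bits the bucket
// index did not consume, giving a full 32 independent discriminating bits.
struct EntryIndependentBits {
    static uint32_t bucket(uint64_t h) { return (uint32_t)h & kBucketMask; }
    static uint32_t key(uint64_t h)    { return (uint32_t)(h >> kBucketBits); }
};
```

With these numbers, two distinct entries that hash to the same bucket falsely match with probability around 2^-10 per pair in the first scheme, versus about 2^-32 in the second.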

And don't get me started on the complete lack of locality. It's 32-bit random reads all over the place.
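One standard way to recover some of that locality (not something the release does, just for illustration) would be to derive the bucket index from a Morton code of the voxel coordinates, so spatially adjacent voxels tend to land in nearby memory:

```cpp
#include <cstdint>

// Spread the low 10 bits of v so there are two zero bits between each
// (one component of a 30-bit 3D Morton encode; standard magic constants).
static uint32_t part1By2(uint32_t v) {
    v &= 0x3ff;
    v = (v | (v << 16)) & 0x030000ff;
    v = (v | (v << 8))  & 0x0300f00f;
    v = (v | (v << 4))  & 0x030c30c3;
    v = (v | (v << 2))  & 0x09249249;
    return v;
}

// Interleave x, y, z bits: neighbours in 3D space map to nearby codes, so
// a Morton-derived bucket index turns many random reads into clustered ones.
static uint32_t morton3(uint32_t x, uint32_t y, uint32_t z) {
    return part1By2(x) | (part1By2(y) << 1) | (part1By2(z) << 2);
}
```

Using this code (or the top bits of it) as the bucket index trades a perfectly uniform spread for cache-line reuse when neighbouring pixels query neighbouring voxels.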
 