GART: Games and Applications using RayTracing

I was wondering when a little puzzle game like this would make use of RT, since it's exactly the genre where one can go nuts with it: the scenes are simple and there's really no need to run at more than 10 fps.

It'll be nice to see RT benchmarking pivot to something so lacking in machismo.
 
The results for spatial probes vs surface caching from that Unity project show why I find spatial probes and cone tracing interesting. Lumen takes up a ton of performance and its emissive contribution is spotty at best (lol), while spatial probes, trading smaller-scale detail for less noise, give you the results below at a lower runtime cost. Or you could go for cone tracing with a surface cache/final gather and get the detail without the noise (though probably with the runtime cost).

[attached screenshots: spatial probe GI comparison]
 
Let's say we want a uniform resolution of one probe every 10 cm.
With spatial probes (a uniform volume grid), the vast majority of probes end up in empty or solid space, contributing to no surface.
If we have probes only on the surfaces, we are surely much faster, even if finding the probes affecting a given pixel is not trivial.
Thus, a volume grid only makes sense at low resolution, e.g. 50-100 cm cells as proposed in the DDGI paper, IIRC.
As a compromise we could tag 'useless' volume probes and reject them from the update. Then the volume grid has only about twice the number of probes to update vs. surface probes. Still a big number, so we eventually start to consider putting surface probes in a regular grid anyway, for easy spatial lookup.
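
To make the numbers concrete, here is a tiny C++ sketch of that culling idea: walk the uniform grid, sample a distance field, and keep only probes within one cell of a surface. The analytic sampleSDF() just models an empty box room and stands in for whatever distance representation the engine actually has - an assumption for illustration, not a real API.

Code:
#include <algorithm>
#include <cmath>
#include <cstdio>

// Stand-in distance field: distance to the walls of an empty 10 x 3 x 10 m
// box room (positive inside). A real engine would sample its own SDF here.
float sampleSDF(float x, float y, float z) {
    float dx = std::min(x, 10.0f - x);
    float dy = std::min(y, 3.0f - y);
    float dz = std::min(z, 10.0f - z);
    return std::min(dx, std::min(dy, dz));
}

int main() {
    const float cell = 0.1f;                 // one probe every 10 cm
    const int nx = 100, ny = 30, nz = 100;   // 300,000 volume probes total
    int kept = 0;
    for (int z = 0; z < nz; ++z)
      for (int y = 0; y < ny; ++y)
        for (int x = 0; x < nx; ++x) {
            float d = sampleSDF((x + 0.5f) * cell, (y + 0.5f) * cell, (z + 0.5f) * cell);
            // Keep only probes within one cell of a surface; probes deep in
            // empty (or solid) space are rejected from the update.
            if (std::fabs(d) < cell) ++kept;
        }
    std::printf("%d of %d probes survive the cull\n", kept, nx * ny * nz);
}

For this box the cull drops the count from 300,000 probes to the roughly 31,000 that actually touch the walls, floor and ceiling - about the order-of-magnitude gap between volume and surface probes described above.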

However, we want 'cone tracing' in any case, because the angular resolution of probes is always low, no matter which method we choose.
The only efficient way to approximate cone tracing is LOD with proper prefiltering. SDF mips can do this for geometry, but we also want it for material and its shading. For material we would need a mipmapped voxel volume, and for shading we need to look up probes again. So we also want LOD for the probes, e.g. mips of the probe volume.
(I realize that's probably what the Lumen UV atlas is good for.)
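
As a sketch of what that prefiltered lookup buys: a single cone march where the sampling mip is derived from the cone footprint, in the spirit of voxel cone tracing. sampleVoxel() is a placeholder for a trilinear fetch from a mipmapped radiance/occlusion volume (stubbed out here with a uniform medium), so this shows the shape of the loop, not a real engine API.

Code:
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };   // premultiplied radiance + occlusion

// Stub: a real implementation would do a trilinear lookup into level 'mip'
// of the prefiltered voxel volume. Here: uniform grey participating medium.
Vec4 sampleVoxel(Vec3 /*pos*/, float /*mip*/) { return Vec4{0.1f, 0.1f, 0.1f, 0.05f}; }

Vec4 coneTrace(Vec3 origin, Vec3 dir, float tanHalfAngle, float voxelSize, float maxDist) {
    Vec4 acc{0, 0, 0, 0};
    float t = voxelSize;                    // start one voxel out to avoid self-hits
    while (t < maxDist && acc.a < 0.99f) {
        float diameter = 2.0f * tanHalfAngle * t;                      // cone footprint
        float mip = std::log2(std::max(diameter / voxelSize, 1.0f));   // matching prefilter level
        Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        Vec4 s = sampleVoxel(p, mip);
        float w = 1.0f - acc.a;             // front-to-back compositing
        acc.r += w * s.r; acc.g += w * s.g; acc.b += w * s.b; acc.a += w * s.a;
        t += 0.5f * diameter;               // step size grows with the footprint
    }
    return acc;
}

int main() {
    Vec4 r = coneTrace({0, 1, 0}, {0, 0, 1}, 0.577f, 0.1f, 20.0f);   // ~60 degree cone
    std::printf("accumulated: %f %f %f occlusion %f\n", r.r, r.g, r.b, r.a);
}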

My conclusion here was: volumetric approaches are attractive because of their simplicity, but horribly inefficient and memory hungry if we target high resolution. If we go there, we likely end up using additional techniques to compensate for the practical low-res limits of the volume. And that's what Lumen and Exodus end up doing. And there you have it again: growing complexity, even though the initial motivation was to keep things simple.
 
We can turn this into an advantage: how can we benefit from probes in empty space? Instead of tracing visibility, we can use a light diffusion approach, which eliminates the visibility term - the most expensive part of the rendering equation. Makes sense.
What we get is Crytek's LPV, or Lexie Dostal's YT videos extending it from partial screen-space voxels to a world-space volume.
Problems: it's hard to represent an accurate angular signal - SH2 washes out colors coming from opposite directions, more coefficients quickly become too expensive, and the speed of light depends on our diffusion iterations, so it becomes very slow.
Still, I think that's attractive for volumetric lighting in combination with a surface-probes approach.
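
A radically simplified sketch of where that slowness comes from (band-0 only, one scalar per cell - a real LPV propagates SH2 lobes and occludes against geometry): each iteration moves light by exactly one cell, so the 'speed of light' is cells-per-frame, and every pass touches the whole volume.

Code:
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    const int N = 64;                                  // 64^3 propagation volume
    std::vector<float> src(N * N * N, 0.0f), dst(src);
    auto at = [N](int x, int y, int z) { return (z * N + y) * N + x; };
    src[at(N / 2, N / 2, N / 2)] = 100.0f;             // inject a point light

    for (int iter = 0; iter < 32; ++iter) {           // after 32 passes light has moved 32 cells
        for (int z = 1; z < N - 1; ++z)
          for (int y = 1; y < N - 1; ++y)
            for (int x = 1; x < N - 1; ++x)
                // Gather from the six face neighbours. A real LPV would
                // propagate directional SH2 flux here, not a scalar average.
                dst[at(x, y, z)] = (src[at(x - 1, y, z)] + src[at(x + 1, y, z)]
                                  + src[at(x, y - 1, z)] + src[at(x, y + 1, z)]
                                  + src[at(x, y, z - 1)] + src[at(x, y, z + 1)]) / 6.0f;
        std::swap(src, dst);
    }
    std::printf("light one cell from centre: %f\n", src[at(N / 2 + 1, N / 2, N / 2)]);
}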
 

While it feels like one is slowly re-inventing signed distance fields, just with direction as well as distance, sparse... visibility probes? They do seem like an interesting idea. Ambient dice is great, so no de-ringing and whatnot there. Maybe combine it with the surface cache? Store material properties and irradiance on the surface, same as Lumen and Metro EE, then store an approximation of the visibility function in the world-space probes, referring back to the surface cache.

Now, could the surface cache itself use the probe grid for its own visibility queries? You get light leaking out of anything not covered by the probe grid, but, well, that'll happen anyway.
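
A hypothetical data layout for that split - material and irradiance in a surface cache, with world-space probes storing only a coarse ambient-dice visibility function plus references into the cache. All names here are illustrative guesses, not how Lumen or Metro EE actually lay things out.

Code:
#include <array>
#include <cstdint>
#include <vector>

struct SurfaceCacheTexel {
    float albedo[3];
    float irradiance[3];    // updated by the final gather
};

struct VisibilityProbe {
    // One entry per ambient dice basis direction (12 icosahedron vertices):
    // how far the probe can see, and which surface cache texel it lands on.
    std::array<float, 12>    hitDistance;
    std::array<uint32_t, 12> surfaceTexel;   // index into ProbeGrid::cache
};

struct ProbeGrid {
    std::vector<VisibilityProbe>   probes;   // world-space grid, dense or sparse
    std::vector<SurfaceCacheTexel> cache;
};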

Anyway, the coolest thing you could do with a lightprobe grid is include participating media. Include fog visibility... somehow. But once you have that, you can start recursively bouncing light around. You get correct multiscattering from and to heavy fog, potentially from all light sources! All the effects are there - indirect shadowing and lighting of fog all show up. That feels useful only for very specific cases, but it'd still be cool. For multibounce you still need to cache irradiance somehow, and that's exactly what a lightprobe grid is good at: all the irradiance is right there. And versus storing it in a surface cache, the memory difference shouldn't be that great - depending on resolution, the lightprobe grid might even be more compact. You also skip a step: you don't need to access whatever a visibility probe points to, you've got all the lighting info you need right there, locally, from the probe itself - or, well, from some interpolation of probes, but still.
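
A hedged sketch of that 'bounce light through the grid itself' idea, assuming isotropic scattering and a single scalar irradiance per probe: every pass, each probe gathers from its neighbours attenuated by the fog in between (Beer-Lambert), so dense fog both receives light and re-emits it - each pass is one more bounce of the multiscattering described above.

Code:
#include <cmath>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    const int N = 32; const float cellSize = 0.5f;     // 32^3 probes, 50 cm apart
    std::vector<float> irr(N * N * N, 0.0f), next(irr);
    std::vector<float> sigmaT(N * N * N, 0.02f);       // fog extinction per probe
    std::vector<float> emission(N * N * N, 0.0f);
    auto at = [N](int x, int y, int z) { return (z * N + y) * N + x; };
    emission[at(2, 2, 2)] = 50.0f;                     // a light source in one corner
    const float scatterAlbedo = 0.9f;                  // fraction of light re-scattered

    for (int bounce = 0; bounce < 8; ++bounce) {       // each pass adds one bounce
        for (int z = 1; z < N - 1; ++z)
          for (int y = 1; y < N - 1; ++y)
            for (int x = 1; x < N - 1; ++x) {
                const int nb[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
                float gathered = 0.0f;
                for (auto& d : nb) {
                    int j = at(x + d[0], y + d[1], z + d[2]);
                    // Beer-Lambert attenuation across one cell of fog
                    gathered += irr[j] * std::exp(-sigmaT[j] * cellSize);
                }
                next[at(x, y, z)] = emission[at(x, y, z)] + scatterAlbedo * gathered / 6.0f;
            }
        std::swap(irr, next);
    }
    std::printf("fog irradiance next to the source: %f\n", irr[at(3, 2, 2)]);
}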
 
I have tried such things a bit using SH2/SH3 and the ambient cube. The ambient cube has the advantage that each direction is independent, so it can model multiple colors going in different directions better. The downside is that the directions are quantized.
It would be interesting to see what ambient dice / spherical Gaussians / a low-res cube map could do here now.
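
For reference, the ambient cube evaluation being described: six independent RGB values, weighted by the squared normal components, so opposite directions never mix (the advantage) but the angular resolution is six hard-quantized lobes (the downside). A minimal sketch:

Code:
#include <cstdio>

struct Vec3 { float x, y, z; };

struct AmbientCube {
    Vec3 face[6];   // +X, -X, +Y, -Y, +Z, -Z
};

Vec3 evaluate(const AmbientCube& c, Vec3 n) {
    Vec3 n2{n.x * n.x, n.y * n.y, n.z * n.z};   // weights sum to 1 for a unit normal
    const Vec3& px = c.face[n.x >= 0 ? 0 : 1];  // pick the facing half of each axis,
    const Vec3& py = c.face[n.y >= 0 ? 2 : 3];  // so opposite directions stay independent
    const Vec3& pz = c.face[n.z >= 0 ? 4 : 5];
    return Vec3{
        n2.x * px.x + n2.y * py.x + n2.z * pz.x,
        n2.x * px.y + n2.y * py.y + n2.z * pz.y,
        n2.x * px.z + n2.y * py.z + n2.z * pz.z,
    };
}

int main() {
    AmbientCube c{};
    c.face[2] = {0.2f, 0.4f, 0.8f};              // put sky blue in the +Y face
    Vec3 up = evaluate(c, {0.0f, 1.0f, 0.0f});   // normal straight up
    std::printf("%f %f %f\n", up.x, up.y, up.z); // -> exactly the +Y face colour
}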

Not sure what you mean by visibility probes. DDGI has a depth channel - something like that?
Maybe it would be nice to diffuse such visibility, to avoid costly tracing, and to have a decoupled hierarchical gathering of irradiance probe grid mips. Then only the visibility would be laggy, not the whole lighting. Interesting maybe, but sadly a volume gather is an insane brute-force load.
Though, if we do such volume gathering, we could also approximate visibility at the same time with anti-radiosity, using Bunnell's method. If it's only for volumetric lighting, the error would be more acceptable. But it's still limiting: bright fog from a neighboring room would leak complementary color into our dark prison cell. Not good - it seems there's no way around having some proper visibility.
Another idea to fight the lag of diffusion would be a hierarchical approach - use lower mips to move faster through empty space. I've never tried something like this.
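
If it helps pin down terms: in the DDGI paper, each probe stores the mean and squared-mean ray hit distance per texel, and shading applies a variance-shadow-map style Chebyshev test to down-weight probes that sit behind geometry. A sketch of that test (names illustrative, after the paper):

Code:
#include <algorithm>
#include <cstdio>

// Per-direction filtered depth statistics stored in the probe's depth channel.
struct DepthMoments { float mean; float meanSq; };

// ~1 when the shaded point is likely visible from the probe, falling towards 0
// when the stored depth distribution says geometry lies in between.
float chebyshevVisibility(DepthMoments m, float distToProbe) {
    if (distToProbe <= m.mean) return 1.0f;                  // in front of the occluders
    float variance = std::max(m.meanSq - m.mean * m.mean, 1e-4f);
    float d = distToProbe - m.mean;
    return variance / (variance + d * d);                    // Chebyshev upper bound
}

int main() {
    DepthMoments m{2.0f, 4.5f};                              // mean 2 m, some variance
    std::printf("%f\n", chebyshevVisibility(m, 3.0f));       // point 1 m behind mean depth
}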

Well, volumetric lighting still feels a bit out of reach to me. But I'm mentally stuck in the prev. gen :)
The most impressive method I've ever seen is this: https://research.nvidia.com/publication/imperfect-voxelized-shadow-volumes
Very accurate. It uses a clever projection and scan algorithm for shadowing. I can't find the video anymore, but Chris Wyman has a demo and source on his webpage.
 
And RT shadows are confirmed in the IGN article about the game.

https://www.ign.com/articles/avatar-ubisoft-massive-open-world-details

Yessss.

"We have a completely new lighting system that is based on ray tracing, and I think it is a dramatic step up in quality that makes you feel like it's a real place. One tiny example is that it can actually handle the translucency of the leaves [...] so it can figure out how much of the light is reflected through the leaves, how tinted it is with the colors and everything else. You get lovely reflections and sights for the water, even down to the volumetric clouds up in the sky – they actually receive the correct lighting as well."
 

MSFS will be getting RT support soon:

Martial Bossard, an executive producer on Microsoft Flight Simulator at Asobo, confirmed the upcoming move to DirectX 12 on PC will also allow the team to enable ray tracing in the game. Effects should include better water, improved shadows, and the usual reflections we expect to see in ray-traced games.
 