GART: Games and Applications using RayTracing

Diffuse illumination (area lights / sky irradiance) seems to be per-pixel (Metro-like), otherwise it would have failed in the cases below, where rasterization probes and fake lights were not placed:
https://imgsli.com/MzQ0Njc
https://imgsli.com/MzQ0NjY
https://imgsli.com/MzQ0Njg
https://imgsli.com/MzI1NTc
https://imgsli.com/MzQ0NzE
https://imgsli.com/MzQ0NzA
https://imgsli.com/MzQ0NzI
https://imgsli.com/MzQ0NzM
Also, the diffuse AO shadows (part of the diffuse illumination system, I suppose) capture even the smallest geometry details (trash cans, garbage, etc.), so it's definitely not just probes.
The ultimate real-time GI system would be these two methods combined: Metro / Quake II RTX style tracing for the first bounce to capture small objects and "high frequency" indirect lighting, followed by RT probes updated in real time to capture the second and subsequent bounces, where you don't need all the small indirect shadows since the second-bounce lighting will be very diffuse anyway.
Though Minecraft RTX already does something similar; instead of probes it uses a per-vertex irradiance cache - the same thing as probes, just even lower frequency.
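A minimal sketch of how such a combined gather could look (hypothetical Python pseudocode, not any engine's actual API; trace_ray, sample_probe_irradiance and direct_lighting are assumed callbacks supplied by the renderer):

```python
# Hypothetical sketch of the hybrid idea above, not any shipping engine's code:
# first indirect bounce traced per pixel against full geometry, later bounces
# pulled from a coarse probe (or voxel / per-vertex) irradiance cache.

def shade_indirect(pixel_pos, pixel_normal, bounce_dir,
                   trace_ray, sample_probe_irradiance, direct_lighting):
    """trace_ray, sample_probe_irradiance and direct_lighting are assumed
    renderer callbacks, used here only as placeholders."""
    hit = trace_ray(pixel_pos, bounce_dir)        # same geometry as the main pass
    if hit is None:
        # Ray escaped the scene: sky / distant light from the low-frequency cache.
        return sample_probe_irradiance(pixel_pos, pixel_normal)

    # High-frequency part: small occluders (railings, props) correctly shadow
    # the hit point because real geometry was traced.
    radiance = direct_lighting(hit)

    # Low-frequency part: from the second bounce on the signal is very diffuse,
    # so a sparse probe grid / vertex cache is good enough.
    second_bounce = sample_probe_irradiance(hit.position, hit.normal)
    return [r + a * s for r, a, s in zip(radiance, hit.albedo, second_bounce)]
```

The point of the split is that only the segment up to the first hit needs full-resolution geometry; everything behind it can come from whatever cheap cache the engine already has.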

For very local effects like AO, per-pixel is definitely the way to go, but I'm not seeing anything in those screenshots that couldn't be done with lower frequency light propagation volume textures. Can you clarify which specific cases would fail and why?
 
I highlighted a couple of areas in my video of local light bounce detail that would probably fall outside the granularity or grid detail possible with probes or other schemes.
 
I'm not seeing anything in those screenshots that couldn't be done with lower frequency light propagation volume textures
Pretty sure the directional diffuse shadows from the metal railings and from the character itself here can't be done with good quality even with relatively fine-grained voxel AO/GI (the 256^3 voxel grid in Rise of the Tomb Raider's VXAO can't capture such objects with the same fidelity, and it isn't cheap either).
For example, VXAO in Rise of the Tomb Raider doesn't capture small or complex geometry due to insufficient grid resolution, hence it's combined with HBAO to capture grass and other small-scale objects.
The penumbra here is quite visible too. A voxel grid is usually too coarse to capture thin geometry, such as the door in the screenshot pair, so with voxel GI there will likely be light leaks and other artifacts. Look at DF's video on Control if you need an example of such leaks (Control also uses a prebaked static voxel grid for its voxel-based GI).
Here is another, more dynamic take on the problem in Enlisted: they inject screen-space lighting into a dynamic voxel grid, so the voxel grid serves as a cache (close to what RTXGI does, but instead of tracing world-space rays they do it in screen space), yet the devs came to the same conclusions:
"Honest raytracing = no light leaking? Voxel Cone tracing can (and will) lead to light leaking on thin walls. Even for specular light it can be noticeable, so we still adjust maximum brightness. Diffuse GI doesn’t suffer from light leaking, because we honestly trace thousands of rays, not simplified “cones”. Almost enough..."
Honestly, one doesn't have to be an expert to realise that with coarse approximations of the original polygonal geometry - voxels, SDFs, or any other lossy representation - there will always be artifacts like light leaking, missing details, and wrong self-shadowing, since the approximation doesn't match the original geometry. That's unavoidable.
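To put a rough number on the thin-geometry leak (toy figures, invented purely for illustration): voxel cone tracing stores occlusion as per-voxel coverage, so a wall much thinner than a voxel never becomes fully opaque.

```python
# Toy illustration (invented numbers) of why cone tracing leaks through thin
# geometry: occlusion is stored as a per-voxel coverage fraction, so a thin
# wall never reaches full opacity no matter how the cone is marched.

voxel_size = 0.25       # e.g. 25 cm voxels from a coarse grid over a large scene
wall_thickness = 0.05   # a 5 cm door / wall

# Fraction of the voxel actually covered by the wall along the march direction.
coverage = min(1.0, wall_thickness / voxel_size)   # 0.2

# Front-to-back compositing of the cone through that single voxel:
transmittance = 1.0 - coverage                     # 0.8

print(f"coverage = {coverage:.2f}, light leaking through = {transmittance:.0%}")
# A per-pixel ray against the actual wall polygon would return 0% here.
```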
The obvious solution would be moving to pixel-precision solutions with exactly the same geometry as in the main pass, at least for the first bounce.
Luckily, this doesn't even seem to be very expensive, since multi-bounce GI can be done at different levels of detail for the first and following bounces; you can even reuse probes, voxels, or SDFs for the secondary and following bounces, which don't require the same level of precision.
 

Thanks for the detailed reply. I may be conflating a few different things that contribute to the quality of sparse GI methods.

First there's the quality of the assets that are sampled to determine light visibility. Let's assume that we're using the same high-resolution geometry used in the rasterization passes. Then there's the precision of the sampling, and here I'm assuming that with RTXGI there's pretty good precision since you're shooting rays. Next is the precision of the storage format, and here we know there's a loss of accuracy in sparse data structures. Finally there's the sampling of these structures, which we can assume to be done per screen pixel.

For very small emissive objects I get why there could be gaps. But if your emitter is large (e.g. Cyberpunk's sun or moon) I guess I'm not understanding where the process breaks down for fine shadow-casting geometry.
 
World of Warcraft Shadowlands: How one of the oldest games in production uses the newest lighting tech | TechRadar
December 31, 2020
WoW’s engineers have updated the game’s tech with every expansion, and the new Shadowlands pack this fall brought in a subtle, but important, new addition to the scenery: ray tracing. We say subtle because Warcraft’s lighting is already an enormous part of the game; a single map can have thousands of light sources, and that doesn’t include weather effects (which include ambient and natural lighting) and other sources of illumination. So if you’re hoping ray tracing will give you a BAM, now-you-see-it-now-you-don’t impact, you may be disappointed.

We enlisted Ryan Anderson, WoW’s lead engine programmer, to shine a light on Warcraft’s newest shadows.
..
Lighting in WoW is created by layering effects, ranging from broad ambient daylight or dusk to specific spotlights cast by individual objects – a torch, say, or a light fixture, or even an NPC.
“It can vary depending on time of day, weather, world location, interior or exterior locations, map phases, local lights, and so forth,” Anderson said. “Light can be both baked into vertices and added at run time from dynamic lights.”

“Given the breadth of content that’s been created for WoW over the years, and the fact that we support that older content as well as new ray-traced lights, you can imagine there’s quite a lot of variation in our lighting model. There can be thousands of lights in a map, and each can have an impact on performance,” Anderson said, and developers cheat a little to try and keep the processor load down.

“Lights marked as ray-traced shadow casters have the additional impact of calculating whether a pixel is shadowed from that light. We use features like draw distance, buffer resolution, and rays per pixel to scale the impact for various levels of hardware.”
...
“We’re always looking for ways to improve the visual quality of World of Warcraft while maintaining the game’s iconic art style,” Anderson said. “Ray tracing felt like a great way to help improve the graphics and at the same time make our worlds more immersive.”

Plus, he said, Blizzard could see the (illuminated) writing on the wall: “Ray tracing is a technology we see as becoming more mainstream over time, meaning we expect most players to eventually be playing on systems that support ray tracing. This is a great way for us to support enthusiasts now while paving the way for more widespread availability of ray tracing in the future.”
 
For very small emissive objects I get why there could be gaps. But if your emitter is large (e.g. Cyberpunk's sun or moon) I guess I'm not understanding where the process breaks down for fine shadow-casting geometry.
Even if the probes are very accurate thanks to RT, they remain sparse in space, so we need to interpolate them. If the distance between probes is 50 cm, there is no way to get proper shadows from smaller objects. At distances where probe resolution halves due to LOD, the problems increase accordingly.
The second, less obvious problem of volume probe grids comes when our assumption that 'indirect lighting is a smooth, low-frequency signal' breaks. The assumption holds true only in empty space (or participating media), but at surfaces the signal is discontinuous and as high frequency as the geometry itself. And unfortunately, we are interested in exactly the part where the error is largest.
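A toy 1D example of the spacing argument (numbers invented): with probes 50 cm apart, an occluder much smaller than the spacing barely changes what either neighbouring probe records, so the interpolated result right behind it stays essentially unshadowed.

```python
# Toy 1D illustration (invented numbers): a sparse probe grid cannot produce
# shadows from geometry smaller than the probe spacing, no matter how
# accurately each individual probe was ray traced.

probe_spacing = 0.5     # probes every 50 cm

# Irradiance each probe integrated over its sphere. A ~10 cm prop sitting
# between them occludes only a tiny solid angle of either probe, so both
# record nearly the full, unshadowed value.
irradiance_a = 0.98
irradiance_b = 0.97

def interpolated_irradiance(x):
    """Linear interpolation between the two probes for 0 <= x <= probe_spacing."""
    t = x / probe_spacing
    return (1.0 - t) * irradiance_a + t * irradiance_b

# A pixel right behind the small prop still reads ~0.975:
print(interpolated_irradiance(0.25))
# A per-pixel trace from that pixel would hit the prop and return a much
# darker, properly shadowed value.
```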

So I see it like this: volume probes are easy to implement, and they give indirect lighting so stuff isn't black. That's fine, but it won't look real - we cannot make the grid resolution high enough to compensate for the error.
Even if we do all direct lighting with RT, the error will remain visible, and it keeps looking gamey. The more bounces we compute with an accurate solution like path tracing before falling back to probes, the better.

Personally, working with probes on the surface instead, I can say the quality is much better despite using fewer probes. But lacking a simple regular grid to look up probes, finding the proper probes that affect a given pixel becomes one problem. The second is the need to precompute good probe positions on the surface of all geometry, which replaces a light-baking process with another waiting time in production. The third problem is the lack of volume probes for volumetric lighting, so some work on volume stuff remains necessary anyway.

The third option, using no probes at all but only RT and denoising for everything, has quality problems too: blurring, temporal lag, screen-space issues like disocclusion, etc. To solve this we could move the denoising from screen space to world space, but then we also need a form of global parametrization like with surface probes, and because we'd use them to cache irradiance as well, both approaches boil down to the same thing in the end.
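As a rough sketch of what moving the denoising to world space could mean in practice (hypothetical, not any specific engine's implementation, all constants invented): accumulate the noisy per-frame samples into a hashed world-space grid instead of a screen-space history, which is effectively already an irradiance cache.

```python
# Hypothetical sketch of a world-space irradiance cache: noisy per-frame RT
# samples are blended into a hashed grid keyed by world position, instead of
# being filtered in screen space. All constants here are invented.

cell_size = 0.5    # cache cell size in metres
blend = 0.1        # temporal blend factor (exponential moving average)
cache = {}         # cell key -> running RGB irradiance estimate

def cell_key(position):
    return tuple(int(c // cell_size) for c in position)

def accumulate(position, noisy_sample):
    """Blend this frame's noisy sample into the cell covering `position`."""
    key = cell_key(position)
    old = cache.get(key, noisy_sample)
    cache[key] = [(1.0 - blend) * o + blend * s for o, s in zip(old, noisy_sample)]
    return cache[key]

# Usage: feed each frame's ray-traced sample for a shaded point, read back a
# temporally stable value that doesn't depend on screen-space history.
print(accumulate((1.2, 0.0, 3.7), (0.9, 0.8, 0.7)))
```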

At the moment, volume probe grids seem very practical and OK-ish. I guess UE5 uses them too, and the results look pretty good (as opposed to NV's 'programmer art' DDGI demos).
 

Read the paper about Signed Distance Field Diffuse Global Illumination; it handles detail better than RTXGI with what they call contact GI, and has no light leaking at all. The problem is that a manual process is needed to create the SDF primitive clusters, but there is no manual placement of probes. Maybe some of its advantages can be implemented in other solutions.

https://forum.beyond3d.com/posts/2186171/
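For context on how SDF-based visibility works, here is a minimal sphere-tracing occlusion query (the scene is a single hypothetical analytic sphere, not the paper's actual primitive clusters):

```python
# Minimal sphere-tracing occlusion query against an SDF, as a sketch of how
# SDF-based GI tests visibility. The scene here is one hypothetical sphere;
# real implementations march against clustered primitives or a volume.
import math

def scene_sdf(p):
    # Signed distance to a sphere of radius 0.5 centred at (0, 1, 0).
    cx, cy, cz = 0.0, 1.0, 0.0
    return math.sqrt((p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2) - 0.5

def occluded(origin, direction, max_dist, eps=1e-3, max_steps=64):
    """March along the ray; if the SDF ever drops below eps we hit geometry."""
    t = 0.01                       # small offset to avoid self-intersection
    for _ in range(max_steps):
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = scene_sdf(p)
        if d < eps:
            return True            # blocked: this direction is in shadow
        t += d                     # safe step: cannot overshoot the surface
        if t >= max_dist:
            break
    return False                   # unoccluded: light gets through

print(occluded([0, 0, 0], [0, 1, 0], 5.0))   # True: sphere is straight above
print(occluded([0, 0, 0], [1, 0, 0], 5.0))   # False: nothing along +x
```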
 
Thanks, I did already (a bit). Though it's not very interesting to me personally; I still tend to shy away from volumes. Even if SDF tracing is pretty fast, it's too much brute force IMO, resulting in the one-second lag of lighting we currently see everywhere. Too long to claim 'real time'.
I think SDF shells on surfaces would be interesting for adding details. Wonder what Sebbie comes up with...
 
Even if the probes are very accurate thanks to RT, they remain sparse in space, so we need to interpolate them. If the distance between probes is 50 cm, there is no way to get proper shadows from smaller objects. At distances where probe resolution halves due to LOD, the problems increase accordingly.

Why does the size of the shadow caster matter given sampling is done per-pixel? It's the resolution of the irradiance map that should determine accuracy.

The second, lesser obvious problem of volume probe grids comes when our assumption 'indirect lighting is a smooth low frequency signal' breaks. The assumption holds true in empty space (or participating matter) only, but at the surface the signal is discontinuous and as high frequency as the geometry itself. And unfortunately we are interested in just that part where the error is largest.

Good point, most of the material I've seen on the topic doesn't address high-frequency diffuse. There's a hint of it in the author's blog on the technique. Notice the lack of self-shadowing on the dragon.

[Image: x3-noise-free.jpg]


The more bounces we would compute with an accurate solution like path tracing before falling back to probes, the better.

Agree. Unfortunately we don't have anywhere near the ray budget for it.
 
It's the resolution of the irradiance map that should determine accuracy.
That's what I meant. 'Size of shadow caster' is probably a misconception.

Good point, most of the material I've seen on the topic doesn't address high-frequency diffuse.
In practice the difference might end up larger than in the images. It looks like they used a very high voxel resolution for this dragon scene.
I see two options to address the problem (even working with surface samples, I have it too):
Refinement from screen space (looking at UE5 or Cyberpunk, this has evolved very well. I really hate SS artifacts from missing information, but those results look nice and acceptable to me.)
We could bake short-range AO, somewhat directional - at least a bent normal or SH2. I expect this would give really nice details, with some missing ones at dynamic/static intersections, where SS might fill them in. Though that's a lot of storage. I hope SS alone is enough.
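A rough sketch of what that 'somewhat directional' baked AO could look like at shading time (just the common bent-normal formulation with made-up inputs, not a specific engine's code): the indirect lookup is attenuated both by the baked visibility and by how well the bent normal lines up with the incoming indirect light direction.

```python
# Rough sketch of directional (bent-normal) baked AO, as mentioned above.
# Inputs are per-texel / per-vertex bake outputs; all names are hypothetical.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_with_bent_normal(ao, bent_normal, indirect_dir, indirect_irradiance):
    """ao: baked visibility (0 = fully occluded, 1 = open).
    bent_normal: average unoccluded direction from the bake.
    indirect_dir: dominant direction of the incoming indirect light.
    indirect_irradiance: value fetched from probes / screen-space refinement."""
    # Light arriving from directions far outside the unoccluded cone is damped.
    directional = max(0.0, dot(normalize(bent_normal), normalize(indirect_dir)))
    return ao * directional * indirect_irradiance

# Example: a crevice mostly open upwards, lit from the side -> heavily darkened.
print(shade_with_bent_normal(0.6, [0.0, 1.0, 0.0], [1.0, 0.2, 0.0], 1.0))
```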
 
So the Gears 5 update that adds software ray-traced GI is now active on PC; it also upgrades AO and shadows to the Insane quality levels. Variable Rate Shading is added to the mix too, with three levels.

Performance on my bog-standard, non-overclocked 2080 Ti is 30+ fps at 4K max settings (including everything and 32 rays per pixel for the new screen-space GI).
 
That is very interesting. Maybe DF could do a quality/performance comparison between the Series X/S and PC as well as taking a look at VRS.
 
I was bored so I did some testing on a 3080 with the built-in benchmark:

[Image: screenshot2021-01-091crjih.png - benchmark results]


* The "AO Insane" setting doesn't seem to actually work: the option resets back to Ultra after you save it, close the options page and then open the options again.

I also did some IQ comparison screenshots (see below), and judging from them I'm not a fan of what SSGI does, as I think it's actually much worse at dealing with small-scale contact shadowing than the game's original SSAO (is it HBAO+?) and doesn't improve large-scale GI that much either. Basically, for a ~20% performance cost it seems like a waste to have it enabled.

VRS has three presets - Quality, Balanced and Performance. It results in fairly minor performance gains even at "Performance", while there is noticeable quality degradation starting with "Balanced". "Quality" has some blurring too, but it's so minor that you likely won't notice it in motion, only when comparing static screenshots. Then again, with it providing about 5% more performance at best, it seems like something which can easily be turned off entirely.

VRS is also basically unusable right now because it produces artifacting in character close-ups during cutscenes at any preset but "Off" (if you can call that a preset).
This, coupled with the rather minor gains and the noticeable IQ hit, points to the VRS here likely being somewhat of a half-assed first attempt, so to speak. Youngblood did it better IMO back in 2019.
Anyone with a 6800 card willing to check whether that's a renderer bug or an NV driver one?

IQ comparisons:
(Don't pay much attention to the fps numbers in the overlay; the game tends to jump wildly between various numbers even in a completely static scene for whatever reason, and I had it on mostly to make it easier to figure out later which shot had what settings.)
 
I did some testing with it since I found the same artifacts you did... but those artifacts go away if async compute is turned off. It's only when both are on that you get the artifacts. So I'm guessing it's an Nvidia driver issue. Turning off async compute on my 2080 Ti did nothing to performance when I tested... but it may be different for you on a 3080.
 
So it's a combination of both then. Likely has something to do with async compute based post processing (DoF?) in cutscenes.
Yeah, async compute doesn't help at all on 3080 either, as can be seen from my benchmark.
 
Yep. I'm guessing Codemasters likely just disabled async compute for Nvidia GPUs in Dirt 5 to get rid of the issue for the latest patch, as a similar artifact was happening in that game too. It all points to an Nvidia issue, but I can't really confirm because nobody with an AMD GPU had tested out the previous version and chimed in.

I reported the issue to Nvidia anyway.
 
The movie, currently in production, features a mix of cinematic visual effects with live-action elements. The film crew had planned to make the movie primarily using real-life miniature figures. But they switched gears once they experienced the power of real-time NVIDIA RTX graphics and Unreal Engine.
...
Hyoguchi and team produced rich, photorealistic worlds in 4K, creating intergalactic scenes using a combination of NVIDIA Quadro RTX 6000 GPU-powered Lenovo ThinkStation P920 workstations, ASUS ProArt Display PA32UCX-P monitors, Blackmagic Design cameras and DaVinci Resolve, and the Wacom Cintiq Pro 24.
Gods of Mars Movie Come Alive with NVIDIA RTX Real-Time Rendering (guru3d.com)
 