Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Every time I play Battlefield V, I discover a new reflection effect I didn't know about. Even on maps with almost no reflective surfaces or puddles, there are things to be reflected. Every metallic surface or bit of shiny paint reflects muzzle flashes, explosions and fire. The mirrors of trucks and jeeps reflect the surroundings dynamically whether moving or stationary, and sunlight, muzzle flashes and explosions reflect off enemy weapons dynamically as well!

I once spotted the reflection of a flying pigeon in a puddle. I didn't see the pigeon itself, but its reflection gave it away. I didn't even know there were pigeons flying about before this.
 
https://github.com/cschied/q2vkpt

Of special interest:
https://github.com/cschied/q2vkpt/blob/master/src/refresh/vkpt/shader/asvgf.glsl
https://github.com/cschied/q2vkpt/blob/master/src/refresh/vkpt/shader/asvgf_atrous.comp

From what I understand, that's more or less the filtering that is currently applied to get from the (undersampled) 1 SPP frames you get with the DXR API / Vulkan extension to something presentable.

At the core is an 11x11 Gaussian blur kernel, additionally weighted by similarity in the depth and normal buffers with exponential falloff, split into 5 rounds of a 3x3 kernel. Some additional math deals not only with constant depth but also with depth gradients. On top of that, TAA.
So that's where the significant constant cost comes from in the currently known real-life applications.
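
To make that concrete, the per-tap weighting could look roughly like this - a hedged C++ sketch in the spirit of SVGF-style filtering, not the actual asvgf shader code; the sigma constants and the exact gradient handling are my assumptions:

```cpp
#include <cmath>
#include <algorithm>

// Hypothetical sketch of the edge-stopping weight for one filter tap:
// depth similarity with exponential falloff (scaled by the local depth
// gradient) times a normal-similarity term. Names and constants are
// illustrative, not taken from the q2vkpt shaders.
float edge_stopping_weight(float z_center, float z_tap,
                           float z_gradient_along_tap,   // depth change predicted by the local gradient
                           const float n_center[3], const float n_tap[3],
                           float sigma_z = 1.0f, float sigma_n = 128.0f)
{
    // Depth term: penalize taps whose depth deviates from what the local
    // gradient predicts, so slanted surfaces are not mistaken for edges.
    float w_z = std::exp(-std::fabs(z_tap - z_center) /
                         (sigma_z * std::fabs(z_gradient_along_tap) + 1e-4f));

    // Normal term: cosine similarity raised to a high power.
    float n_dot = std::max(0.0f, n_center[0] * n_tap[0] +
                                 n_center[1] * n_tap[1] +
                                 n_center[2] * n_tap[2]);
    float w_n = std::pow(n_dot, sigma_n);

    return w_z * w_n; // multiplied into the tap's Gaussian kernel weight
}
```

Each of the 5 rounds then runs a 3x3 pass with this kind of weight, which is how the 11x11 footprint gets approximated at a roughly constant cost per frame.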

Oh, and it also means this doesn't do any good for translucent materials, as the weighting is only evaluated correctly for whatever object has been recorded in the G-buffer.
Now, do you record the glass, the volumetric fog, or the scene behind it?...

It also really shouldn't be applied to primary rays like that. Works fine in that demo since the textures are all low resolution, but practically, it's also blurring the textures sampled from the primary hit.
 
Awesome work!
This is exactly what my compute stuff looks like. Just my reflections are not that laggy but more 'bandy', because they come from low-res env maps. I have no constant denoising cost, and i can do it on an old 5870 GCN in about 4 ms. Temporal lag is similar.

Now, do you record the glass
Those are the cases where i am personally interested in RTX because i can't deliver sharp reflections.

However, Schied's filters can still work in such cases if you put the reflected position and normal into the G-buffer, i guess? (Would maybe require running the filters twice :| EDIT: No, just reproject them from the previous frame.)
Semi-opaque materials will remain an open problem... transparency still sucks. But it's just minor details, i would say.

Works fine in that demo since the textures are all low resolution, but practically, it's also blurring the textures sampled from the primary hit.

Texture resolution or contrast should not have any impact here: he works with irradiance, which is always pretty smooth, not radiance, which would include high-frequency material properties.

The main limitation here seems to be the need for low-frequency geometry to provide good input for the filter. With high normal and position variance the filter lacks information and so cannot give more than a coarse approximation.
I assume this limits PBS fidelity a lot in practice - everything would look diffuse at high frequencies.
 
Now, do you record the glass
I assume this limits PBS fidelity a lot in practice - everything would look diffuse at high frequencies.

... worth mentioning again how texture space shading would address both issues. It could handle multiple layers of transparency, and it would have a better spatial neighborhood for filtering.
But at the cost of more memory, compromises and complexity. So i don't think we need to go there yet.

This is the first time i want to get myself an RTX, just to run this demo :) I'm so much more impressed by this global solution than by isolated effects like 'just reflections and maybe some shadows too'.
 
The video for the Quake 2 demo that uses full path tracing for a complete traced scene: shadows, lighting and reflections.
The demo uses RTX extensions in Vulkan to achieve near-1440p60 performance on a 2080 Ti.


More details here: http://brechpunkt.de/q2vkpt/

I just tried it on my 2080, it was doing anywhere between 35fps to 50fps on 1440p, using 1% (yes 1%) CPU utilization, and 98% GPU utilization.
 
The jump this guy took from where he was at before is astounding. I'm curious how much of it came from temporal reprojection and smart filtering, which he was not doing in previous demos I remember seeing, and how much is just the damn brute force the RTX 2080 must be affording him now.

Also, a scene as simplistic as this - low-poly, low-res textures, and BSP-friendly by design (I don't know how much that last one actually helps raytracing through Nvidia's scheme, if at all) - is a great benchmark for what RTX can do when bottlenecked by raytracing alone, as I assume shading must cost farts to a modern GPU, and probably all the damn level data fits the GPU caches with space to spare.

EDIT: I think this might not be from the same guy I had seen other Path Traced Quake 2 videos before. At least it's not the same YT channel...
 
Curiously though, 220 million pixels per second to draw and yet he's sampling <1 sample per pixel and denoising. I'd love to know how the peak rays per second figure for RTX relates to real workloads. In this case we aren't looking at shader resolving overheads, so why is sampling (rays per light in path tracer) only managing some 1% pixels drawn versus peak gigarays per second? How many surface rays is it tracing per light source?

I'd love to see the same situation but rendering separate rasterised albedo geometry and traced illumination. Quality should be better and denoising simpler.
 
Impressive tech-wise, but it's still Quake 2 :p. We need AAA games built from the ground up for RT. Don't see it happening until AMD/Intel offer RT hardware though, but I'm sure that will happen in about two years (Arcturus?).
 
Curiously though, 220 million pixels per second to draw and yet he's sampling <1 sample per pixel and denoising. I'd love to know how the peak rays per second figure for RTX relates to real workloads. In this case we aren't looking at shader resolving overheads, so why is sampling (rays per light in path tracer) only managing some 1% pixels drawn versus peak gigarays per second? How many surface rays is it tracing per light source?

I'd love to see the same situation but rendering separate rasterised albedo geometry and traced illumination. Quality should be better and denoising simpler.

This post is very confusing. But there is some info on the page about rays per pixel:

"How many rays does Q2VKPT cast per pixel?
The number of rays that are cast are dependent on the first visible surface. For opaque surfaces Q2VKPT uses one ray each to find the direct and indirectly visible surface. Additionally, for both surfaces Q2VKPT casts one ray each towards randomly chosen light sources. Therefore Q2VKPT will cast at least 4 rays for each pixel."


From this we can conclude the path tracer uses only one indirect bounce (they also said this in earlier papers and videos). It's also interesting how they mention the problem of selecting lights, which is usually the point where 'elegant and simple' path tracing shows its devil in the details.
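
Per pixel, the control flow is then roughly the following - a structural sketch based only on the FAQ answer quoted above; every type and function here is a hypothetical stub, not q2vkpt's actual code:

```cpp
// >= 4 rays per pixel: 1 primary ray, 1 bounce ray, and 1 shadow ray towards
// a randomly chosen light from each of the two hit points.
struct Vec3  { float x = 0, y = 0, z = 0; };
struct Ray   { Vec3 origin, dir; };
struct Hit   { bool valid = false; Vec3 pos, normal; };
struct Light { Vec3 pos, emission; };

// Stubs standing in for the real tracing / shading / light-selection code.
Hit   trace(const Ray&)                                     { return {}; }
bool  occluded(const Vec3&, const Light&)                   { return false; } // shadow ray
Light pick_random_light()                                   { return {}; }    // the tricky part
Ray   sample_bounce(const Hit&)                             { return {}; }
Vec3  shade_direct(const Hit&, const Light&)                { return {}; }
Vec3  shade_indirect(const Hit&, const Hit&, const Light&)  { return {}; }
Vec3  add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

Vec3 one_path(const Ray& camera_ray)
{
    Vec3 result;

    Hit first = trace(camera_ray);                 // ray 1: primary visibility
    if (!first.valid) return result;

    Light l0 = pick_random_light();
    if (!occluded(first.pos, l0))                  // ray 2: direct light at the first hit
        result = add(result, shade_direct(first, l0));

    Hit second = trace(sample_bounce(first));      // ray 3: the single indirect bounce
    if (second.valid) {
        Light l1 = pick_random_light();
        if (!occluded(second.pos, l1))             // ray 4: direct light at the bounce hit
            result = add(result, shade_indirect(first, second, l1));
    }
    return result; // noisy 1-spp estimate, cleaned up afterwards by the temporal + spatial filters
}
```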

But what do you mean with '1%'?
And how would rasterizing the first hit instead of RT have any effect on denoising or quality? (Both would give identical results - only performance would improve.)
 
The video for the Quake 2 demo that uses full path tracing for a complete traced scene: shadows, lighting and reflections.
The demo uses RTX extensions in Vulkan to achieve near-1440p60 performance on a 2080 Ti.


More details here: http://brechpunkt.de/q2vkpt/

I just tried it on my 2080, it was doing anywhere between 35fps to 50fps on 1440p, using 1% (yes 1%) CPU utilization, and 98% GPU utilization.
I just tried on a 2080 Ti and it was like 60 down to the 40s or so; 40s with explosion lighting and also in the areas with lots of water reflections. This thing is so neat - for being cleaned up and denoised so aggressively, the ghosting is really minor and you can see all the neat tricks and stuff. Also, the occasional grain from the non-denoised bits is oddly endearing. Love this!
 
Impressive tech-wise, but it's still Quake 2 :p. We need AAA games built from the ground up for RT. Don't see it happening until AMD/Intel offer RT hardware though, but I'm sure that will happen in about two years (Arcturus?).

I think the main point (for me) is that just by adding RT lighting, an absolutely ancient game like Q2 can look pretty damn good for what it is.
 
I think the main point (for me) is that just by adding RT lighting, an absolutely ancient game like Q2 can look pretty damn good for what it is.

Yes, agree it sure looks nice, especially bearing in mind it's a game from the mid/late 90s. I'm sure more games designed towards Turing (and the like) will follow, both by indie and AAA devs. After playing BFV I was very impressed by what I saw/experienced. Don't have an RTX myself yet, too expensive for me, but I'll get one sometime, and by then it all has perhaps matured more too.
 
But what do you mean with '1%'?
And how would rasterizing the first hit instead of RT have any effect on denoising or quality? (Both would give identical results - only performance would improve.)

Well, for one, by rasterizing primary visibility you free up a bunch of extra rays for lighting. But also, if you keep your geometry/material data in a separate buffer from lighting and combine later in a deferred manner, I imagine you can do much better denoising of the path-traced lighting data, since it's free of arbitrary detail introduced by the albedo. Accumulating diffuse and specular lighting data in separate buffers is also a good idea, since the best strategies for filtering and reprojecting each of those types are very different.
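
Conceptually something like this - a hedged sketch of that deferred-style recombination, with made-up buffer names:

```cpp
// The lighting inputs hold the denoised, albedo-free path-traced signals
// (each filtered and reprojected with its own strategy); the albedo comes
// straight from the rasterized G-buffer and keeps all its texture detail.
struct Vec3 { float r = 0, g = 0, b = 0; };

Vec3 mul(Vec3 a, Vec3 b)  { return {a.r * b.r, a.g * b.g, a.b * b.b}; }
Vec3 addv(Vec3 a, Vec3 b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }

// Per-pixel composite: high-frequency material detail never goes through the
// denoiser, only the smooth lighting signals do.
Vec3 composite(Vec3 albedo, Vec3 denoised_diffuse_irradiance, Vec3 denoised_specular)
{
    return addv(mul(albedo, denoised_diffuse_irradiance), denoised_specular);
}
```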
 
if you keep your geometry/material data in a separate buffer from lighting and combine later in a deferred manner, I imagine you can do much better denoising of the path-traced lighting data, since it's free of arbitrary detail introduced by the albedo.
I'm sure they do this anyway. Likely they use primary rays only because it was easier for them to get going (they did not want to write a raster pipeline, which is some work with low-level APIs).
Maybe they'll add this as an optimization later, and add one more bounce eventually. (Just for fun - they focus on denoising, not path tracing, AFAIK.)
Also this way they can check how DOF would break their filters, and work on this...
 
"How many rays does Q2VKPT cast per pixel?
The number of rays that are cast are dependent on the first visible surface. For opaque surfaces Q2VKPT uses one ray each to find the direct and indirectly visible surface. Additionally, for both surfaces Q2VKPT casts one ray each towards randomly chosen light sources. Therefore Q2VKPT will cast at least 4 rays for each pixel."
So 220x4 ~ 900 million, or some 10%

But what do you mean with '1%'?
Well, it's a weird and pointless figure. ;) What struck me is that at two rays per pixel, you'd have solid lit geometry instead of the very noisy results shown on that web page. You then need a denoiser to fill in all the texture detail. But despite 10 gigarays per second, or enough to cast 50 rays per pixel at 1440p, only a fraction is actually being used.

Now of course, quality requires multiple rays per pixel, so you'd expect fewer pixels than rays. But the number was striking enough that I mentioned it.
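
Spelling the arithmetic out (using the thread's own numbers: 1440p60, at least 4 rays per pixel per the FAQ, against the advertised 10 gigarays per second):

```cpp
#include <cstdio>

int main()
{
    const double pixels_per_second = 2560.0 * 1440.0 * 60.0; // ~221 million, the "220 million" above
    const double rays_per_second   = pixels_per_second * 4.0; // >= ~885 million at 4 rays per pixel
    const double advertised        = 10e9;                    // the "10 gigarays per second" figure

    std::printf("rays cast: ~%.0f Mrays/s, i.e. ~%.0f%% of the advertised rate\n",
                rays_per_second / 1e6, 100.0 * rays_per_second / advertised);
    std::printf("budget:    ~%.0f rays per pixel at 1440p60\n",
                advertised / pixels_per_second);
    return 0;
}
```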

And how would rasterizing the first hit instead of RT have any effect on denoising or quality? (Both would give identical results - only performance would improve.)
You don't rasterise the first hit. You'd still trace first-ray lighting. The difference is you can rasterise textures and have them at absolute quality instead of having to reconstruct them from noisy data. Lighting data would be simple areas of illumination (or lack thereof) to composite, just like deferred rendering. Without any fine details to worry about, denoising becomes pretty straightforward.

E.g. let's say you have a surface covered in dirt, which is a very noisy texture. When you trace noisy lighting, you then have to denoise the lighting without screwing with the noisy texture. If you render the albedo, you can then just trace the lighting and denoise (selective blurring) without worrying about the dirt texture, which is preserved in the composite.

I don't know if the denoiser struggles in places, or if it's just YT compression.

Also, I'm confused by the inconsistencies in the lighting. There are frequently missing shadows which is the one area raytracing shouldn't fail on unless lights are being traced as non-shadow casting as an optimisation?
 
So 220x4 ~ 900 million, or some 10%
Ah, so you ask 'Why only 10% of the advertised 10 giga rays per second?'
Well, as i've learned here, this number comes from casting only primary rays against a single but detailed model, with a simple shader that only displays the surface normal of the hit point. And half of the screen is empty. Something like that.

But those guys do not really want to use more rays. They want to show how far they can get in a worst case scenario of using only one path per pixel. This is their work, and they likely just keep this philosophy when releasing this as a playable demo. It is still mainly research.

On the other hand: As mentioned above, water already hurts performance, so one more path (or just reflection ray?) per pixel is already a performance problem it seems. Not sure though, and as said they could just rasterize primary visibility for a win.

you can rasterise textures and have them at absolute quality instead of having to reconstruct them from noisy data

No, they certainly never need to reconstruct textures, and simple Quake textures have no impact on their algorithm anyway. They store and denoise incoming light, which is totally independent from the material. So the end result of denoising can be assumed to be a smooth signal, which is the primary idea here.
This is common for most GI tech, and i remember i've read this in their paper as well.
After this process is done, they just shade the textured buffer with the now-known incoming light per pixel. So albedo has no impact on denoising, but roughness does. High-frequency variance in roughness makes denoising harder. (They have a newer paper about specular reflections, but roughness is still constant over large areas of surface.)

There are frequently missing shadows which is the one area raytracing shouldn't fail

If you go to their page and use the slider to alternate between noisy input and denoised output, you see that denoising removes some things like contact shadows under a wall, or AO-like darkening in corners. That's quite a loss, but for one path per pixel the results are excellent. (Gamedevs would add ugly AO most likely :) )
 
But those guys do not really want to use more rays. They want to show how far they can get in a worst case scenario of using only one path per pixel. This is their work, and they likely just keep this philosophy when releasing this as a playable demo. It is still mainly research.
Okay. As ever, context is everything!

On the other hand: As mentioned above, water already hurts performance, so one more path (or just reflection ray?) per pixel is already a performance problem it seems. Not sure though, and as said they could just rasterize primary visibility for a win.

No, they certainly never need to reconstruct textures, and simple Quake textures have no impact on their algorithm anyway. They store and denoise incoming light, which is totally independent from the material. So the end result of denoising can be assumed to be a smooth signal, which is the primary idea here.
This is common for most GI tech, and i remember i've read this in their paper as well.
After this process is done, they just shade the textured buffer with the now-known incoming light per pixel. So albedo has no impact on denoising, but roughness does. High-frequency variance in roughness makes denoising harder. (They have a newer paper about specular reflections, but roughness is still constant over large areas of surface.)
Okay, pretty much the suggestion.

If you go to their page and use the slider to alternate between noisy input and denoised output, you see that denoising removes some things like contact shadows under a wall, or AO-like darkening in corners. That's quite a loss, but for one path per pixel the results are excellent. (Gamedevs would add ugly AO most likely :) )
Hmmm... there are shadows on some NPCs but missing on others. In some situations the shadows should be quite prominent, so I'm not sure how denoising is responsible.

[screenshot: Image1.jpg]

For the guy on the ramp, the shadow is present when he falls over, so I'm guessing there's not enough occlusion in some cases. Maybe needs more sampling rays for large lights?
 
If they are in fast motion there's not enough duration to accumulate the shadows, and if they are distant they may be too small, so the shadows become classified as 'noise to remove', i guess.
Shoot the guy down and come close to make the shadow appear :D

Edit:
Issues like this are a reason i still want to support shadow maps for some direct lights, although i would not really need them.
 