Next gen lighting technologies - voxelised, traced, and everything else *spawn*

But HFTS makes little sense when there's full ray tracing h/w available.

As I said, as far as hard shadows are concerned, ray tracing will provide the same result as irregular z-buffering. So the construction of frustums and the gradual transition to PCSS can be done with the results from ray tracing too.

With ray tracing your design space expands. So you can subsample for soft shadows and pray spatio-temporal filtering gets you out of the woods without too many aliasing artefacts and without too much smudging (blurring is not the accurate description here: spatio-temporal filters don't just blur shadows in the areas where they should be ... they can spread shadow intensity to where it shouldn't be too). That doesn't necessarily give you superior results though; the hybrid frustum technique might very well still be better for ray tracing.
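To make that concrete, here is a minimal sketch of the subsample-and-filter approach, assuming a disc area light. traceShadowRay() is a hypothetical stand-in for the actual BVH traversal, and the exponential blend is a toy substitute for a real spatio-temporal filter:

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Hypothetical stand-in: a real implementation would traverse the BVH.
bool traceShadowRay(const Vec3& from, const Vec3& to) { return true; }

// Uniformly sample a point on a disc light of radius r centred at c,
// assumed here (for brevity) to face along +Y.
Vec3 sampleDiscLight(const Vec3& c, float r, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    float theta = 6.2831853f * u(rng);
    float rad   = r * std::sqrt(u(rng));  // sqrt gives uniform area density
    return { c.x + rad * std::cos(theta), c.y, c.z + rad * std::sin(theta) };
}

// One shadow ray per pixel per frame; 'history' is last frame's estimate.
// A real filter also blurs spatially with edge-aware weights, which is
// exactly where shadow intensity can leak into regions it shouldn't.
float softShadowEstimate(const Vec3& shadingPoint, const Vec3& lightCentre,
                         float lightRadius, float history, std::mt19937& rng) {
    Vec3 s = sampleDiscLight(lightCentre, lightRadius, rng);
    float visibility = traceShadowRay(shadingPoint, s) ? 1.0f : 0.0f;
    const float alpha = 0.1f;  // temporal blend factor
    return history + alpha * (visibility - history);
}
```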
 
for comparatively limited visual benefit compared to pure Ray Tracing.

I doubt the frustum/PCSS part is a significant amount of the time budget, and I doubt you can reduce the number of shadow rays per pixel below 1. If so, it will come down to which offers the least intrusive artefacts: massive subsampling of soft shadowing with spatio-temporal filtering, or approximating penumbrae purely from light-point-of-view silhouettes. I suspect the latter.
 
Problem with HFTS is you need to build one acceleration structure per light, while RT can share one AS for all lights.
Stochastic sampling allows RT to process e.g. only one light per pixel, while with SM tech there is no win to expect from this because all the cost is in the SM generation already.
So, ignoring how good denoised results end up, RT offers better scaling and more options (large area lights). In theory it should win.

If it doesn't, and SMs still remain more practicable for next gen even after, say, 2 years, then RT HW came one generation too early IMO.
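A minimal sketch of the stochastic light selection argued for above: pick one light per pixel with probability proportional to a cheap unshadowed estimate, then divide by that probability so the estimator stays unbiased. Names are illustrative, not from any shipping engine:

```cpp
#include <random>
#include <vector>

struct Light {
    float estimate;  // cheap unshadowed contribution guess, e.g. intensity / distance^2
};

// Returns the index of the chosen light and its selection probability.
// The traced result for that light is then divided by 'pdf'; with shadow
// maps there is no equivalent win, since every light's SM must be
// rendered regardless of how often it is sampled.
int pickLight(const std::vector<Light>& lights, std::mt19937& rng, float& pdf) {
    float total = 0.0f;
    for (const Light& l : lights) total += l.estimate;
    std::uniform_real_distribution<float> u(0.0f, total);
    float x = u(rng), acc = 0.0f;
    for (size_t i = 0; i + 1 < lights.size(); ++i) {
        acc += lights[i].estimate;
        if (x <= acc) { pdf = lights[i].estimate / total; return (int)i; }
    }
    pdf = lights.back().estimate / total;
    return (int)lights.size() - 1;
}
```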
 
Problem with HFTS is you need to build one acceleration structure per light
It's not a big deal, it's only 2D. It's not remotely on the level of computing the BVH, which you'll use anyway to trace the shadow rays.
So, ignoring how good denoised results end up
I'm not willing to ignore that. Picking out a few major light sources for special treatment is as valid with ray tracing as it was with rasterization; scaling is not an issue.

Until you can go full Monte Carlo supersampling to solve everything it's all hacks.
 
It's not a big deal, it's only 2D. It's not remotely on the level of computing the BVH
It is pretty high cost. IIRC NV builds a linked list per SM texel using atomics. Can't remember if they compact the lists after that for faster lookups.
They stated it's practical for one light; for next gen, even 4 lights would probably already be very demanding.
I guess all games that used it restricted it to the sun? Wonder if they did it for the full open world, or only the closest cascade?
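For reference, a sketch of what such a per-texel linked list could look like, written with std::atomic standing in for GPU buffer atomics (NVIDIA's actual HFTS implementation may differ in the details):

```cpp
#include <atomic>
#include <vector>

struct ListNode { float depth; int viewSampleId; int next; };

struct IrregularZBuffer {
    std::vector<std::atomic<int>> heads;  // head node index per SM texel, -1 = empty
    std::vector<ListNode> nodes;          // preallocated node pool
    std::atomic<int> nodeCount{0};

    IrregularZBuffer(size_t texels, size_t maxNodes)
        : heads(texels), nodes(maxNodes) {
        for (auto& h : heads) h.store(-1);
    }

    // Each view sample projected into light space is pushed onto its
    // texel's list; on the GPU this is an InterlockedExchange on the head.
    void insert(int texel, float depth, int viewSampleId) {
        int n = nodeCount.fetch_add(1);
        nodes[n].depth = depth;
        nodes[n].viewSampleId = viewSampleId;
        nodes[n].next = heads[texel].exchange(n);
    }
};
```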

Picking out a few major light sources for special treatment is as valid with ray tracing as it was with rasterization; scaling is not an issue.
Disagree. You could use HFTS for selected lights, but you still have to generate usual SMs for all the others. So you have a high constant cost per light and frame for any SM technique.
With RT the BVH cost is independent of the number of lights. That's my argument for why scaling gets better at some point. In theory... not sure if we'll reach this point.

Until you can go full Monte Carlo supersampling to solve everything it's all hacks.
Agree. But all this fake area light SM tech is much worse hackery and did not really find its way from papers into games. The light sources still need to be very small (no skylight), and the win from a bit of penumbra is not worth the costs for most.

For RT, we can explore a whole new field of options. We need something like MIS / bidirectional pathtracing techniques suitable for realtime games.
For example, it could make sense to combine the denoising pass with gathering a weighting of each light's contribution, so that per screen tile we could build a list of the most important lights to sample more often.
With the games we've seen so far, the focus was probably mostly on denoising, and less so on optimal light sampling.
I think there is much more progress to expect here than from SM approaches.
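A sketch of that per-tile idea, with illustrative names and sizes (nothing here is from a shipped title): accumulate each light's actual contribution per screen tile while shading/denoising, then keep the top few as next frame's preferred sampling set.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

constexpr int kMaxLights = 256;  // illustrative
constexpr int kListSize  = 8;    // lights kept per tile

struct TileLightList {
    std::array<float, kMaxLights> weight{};       // accumulated contribution
    std::array<uint16_t, kListSize> topLights{};  // result for next frame

    void accumulate(int lightId, float contribution) {
        weight[lightId] += contribution;  // GPU: atomic add in tile memory
    }

    void finalize() {
        std::array<uint16_t, kMaxLights> ids;
        for (int i = 0; i < kMaxLights; ++i) ids[i] = (uint16_t)i;
        std::partial_sort(ids.begin(), ids.begin() + kListSize, ids.end(),
            [&](uint16_t a, uint16_t b) { return weight[a] > weight[b]; });
        std::copy(ids.begin(), ids.begin() + kListSize, topLights.begin());
        // Next frame, sample these lights more often, keeping some residual
        // probability for the rest so newly important lights are discovered.
    }
};
```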
 
I guess all games that used it restricted it to the sun? Wonder if they did it for the full open world, or only the closest cascade?
Yes, only the sun.

And they did it for several open worlds: The Division, Agents of Mayhem, Watch Dogs 2, Final Fantasy XV. It only worked during daytime when the sun was up; during the night it was not active.

I doubt the frustum/PCSS part is a significant amount of the time budget
Yes it was: in Watch Dogs 2 you get fps cut almost in half, and the same goes for Agents of Mayhem. RT shadows in Shadow of the Tomb Raider, Control and Call of Duty: Modern Warfare are cheaper in comparison and achieve a better visual representation of shadows.
 
With RT the BVH cost is independent of the number of lights.
I think you keep missing the point that I'm talking about using hybrid frustum tracing WITH the light-point-of-view depth values for view samples determined by ray tracing (a.k.a. a shadow map). For any light you're not using frustums/penumbrae on, you can do whatever the hell you want: stochastic ray trace, hard shadow ray trace, radius limited if you want, or even (AO-style) proximity light approximations with deferred/forward+.

It's not like using silhouette extraction and penumbra volumes with ray tracing is some incredible concept. It's just one proven in real time for rasterization, with techniques which generalize to ray tracing.
Yes it was: in Watch Dogs 2 you get fps cut almost in half
That doesn't follow without profiling the cost of rendering into the irregular Z-buffer, which would be irrelevant in this case since it's handled by ray tracing, and you'd probably be casting at least 1 shadow ray per pixel even with a stochastic approach.
 
I think you keep missing the point that I'm talking about using hybrid frustum tracing WITH the light-point-of-view depth values for view samples determined by ray tracing (a.k.a. a shadow map).
Yeah... probably I'm missing something. But it is hard to make sense of the sentence; you'd need to elaborate 'for dummies' to avoid confusion :)
It's not clear: do you propose to create irregular Z-buffers with RT instead of raster?
Or are you thinking about creating a small view of the area light from the sampling point, like in the linked paper?
... probably I'm on the wrong track with both.
 
It's not clear: do you propose to create irregular Z-buffers with RT instead of raster?
Yes. Tracing shadow rays almost automatically gets you the irregularly sampled shadow map; you just need to trace from the light to the eye ray hit rather than the other way around.

The paper just goes to show that using penumbra approximations is hardly antithetical to ray tracing. But using a single silhouette from the centre of the area light's point of view, while less precise, is obviously far more efficient.
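A minimal sketch of that inverse trace, assuming the primary hits are already known. The projection and traversal functions are hypothetical stand-ins, and the atomic min on quantized depth is just one way a real implementation might resolve collisions per texel:

```cpp
#include <atomic>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical stand-in: project a world position into the light's SM grid.
int lightSpaceTexel(const Vec3& worldPos) { return 0; }
// Hypothetical stand-in: closest-hit distance along the ray (BVH traversal).
float traceClosestHitDistance(const Vec3& from, const Vec3& to) { return 1.0f; }

// For every primary (eye ray) hit, trace from the light towards the hit
// point and record the closest hit depth: this *is* a shadow map sample,
// just at an irregular, view-driven position instead of a raster grid.
// depthSM is assumed initialized to 0xFFFFFFFFu (no occluder).
void buildIrregularSM(const std::vector<Vec3>& primaryHits, const Vec3& lightPos,
                      std::vector<std::atomic<uint32_t>>& depthSM) {
    for (const Vec3& hit : primaryHits) {
        int texel = lightSpaceTexel(hit);
        float d = traceClosestHitDistance(lightPos, hit);
        uint32_t q = (uint32_t)(d * 65536.0f);  // quantize for atomic min
        // Keep the closest occluder per texel (GPU: InterlockedMin).
        uint32_t cur = depthSM[texel].load();
        while (q < cur && !depthSM[texel].compare_exchange_weak(cur, q)) {}
    }
}
```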
 
Yes. Tracing shadow rays almost automatically gets you the irregularly sampled shadow map; you just need to trace from the light to the eye ray hit rather than the other way around.
But what's the motivation? Is it to generate a partial / progressive SM to reuse in the next frame to save rays?
Or do you want to trace a complete irregular SM, which would require any-hit rays to find multiple triangles per texel, and that would be slower than raster?
 
But what's the motivation?
Having unconditionally stable solutions instead of stochastic ones.

The "inverse" shadow ray from the light towards the eye ray intersection does not need any hit. Just the closest. A shadow map is not generally layered, it makes the penumbras an approximation ... but good enough.
 
So it's about approximating area lights, and taking more samples from the SM instead of screen space.

So your idea is this:
During the RT shadowing pass, store shadow ray hits in the SM.
When generating the final image, gather multiple SM samples to approximate the area light, but keep the exact results as well to solve Peter Panning.
Correct?

Certainly interesting. There might be cases where too few nearby samples are available. Not sure if it can beat stochastic methods overall, but interesting... :)
 
Well, there's tons of things you can do with a shadow map. You can extract a silhouette and use it with stencil rendering tricks to draw the penumbras. You can create one or more mipmaps with min/max/whatever functions and use those to get cheaper soft shadows at large distances to the occluder (explored in the past for regular shadow maps, but not yet for irregularly sampled shadow maps, might work better).

Regardless, just defaulting to stochastic approaches because it feels elegant seems premature to me.
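As one concrete example of the mipmap idea, here is a sketch of one level of a min-filtered chain over the shadow map (whether it pays off for irregularly sampled maps is, as said, unexplored):

```cpp
#include <algorithm>
#include <vector>

// Each output texel stores the minimum (closest-to-light) depth of its
// 2x2 footprint, so a single coarse lookup conservatively answers "could
// anything occlude over this region", which is what makes cheap wide
// penumbrae possible at large distances from the occluder.
std::vector<float> buildMinMipLevel(const std::vector<float>& src, int w, int h) {
    std::vector<float> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x) {
            float a = src[(2 * y) * w + (2 * x)];
            float b = src[(2 * y) * w + (2 * x + 1)];
            float c = src[(2 * y + 1) * w + (2 * x)];
            float d = src[(2 * y + 1) * w + (2 * x + 1)];
            dst[y * (w / 2) + x] = std::min({a, b, c, d});
        }
    return dst;
}
```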
 
Well, there's tons of things you can do with a shadow map. You can extract a silhouette and use it with stencil rendering tricks to draw the penumbras. You can create one or more mipmaps with min/max/whatever functions and use those to get cheaper soft shadows at large distances to the occluder (explored in the past for regular shadow maps, but not yet for irregularly sampled shadow maps, might work better).

Regardless, just defaulting to stochastic approaches because it feels elegant seems premature to me.
The thing is, I think even with stochastic rays they end up being biased by the denoiser or by any number of other 'tricks'. So they will not be exactly random then :)
 
I think even with stochastic rays they end up being biased by the denoiser or by any number of other 'tricks'
I registered here primarily to ask what the limitations of denoising are. It seemed all too good to be true.
I speculated one limitation would be surfaces with high-frequency normal variance, because there would not be enough pixels around with similar normals.
While this is true, this video about Quake RTX finally helped me to understand the real issue:
It shows samples from two light sources in the level, plus the skylight.
When they added the skylight for the improved Q2RTX, they got a problem: the skylight samples are much brighter. They are also sparser, because the skylight only shines through a small window or hole, so most rays shot towards the sky hemisphere end up shadowed.
This forced them to increase the filter width a lot; otherwise it would not be possible to reconstruct a smooth signal. The result is an image that is much too blurry (it also blurs the other lights that would have worked with a narrow filter), and the wider filter is more expensive too.
They ended up handling the skylight in its own pass.

This is really a great example to illustrate how denoising works. It's no trick or magic - it's just smoothing of a noisy and sparse input. And it only works if the input is good enough. So, like many RT techniques, it depends on the scene and requires tweaking.
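A toy 1D illustration of the filter-width trade-off described above (real denoisers are edge-aware and spatio-temporal; this only shows why a sparse signal forces a wide kernel that then over-blurs everything filtered with it):

```cpp
#include <cstddef>
#include <vector>

// Plain box filter of the given radius. Reconstructing a smooth signal
// from sparse, high-energy samples (the skylight case) needs a radius on
// the order of the sample spacing; apply that same radius to a dense,
// low-energy signal (a small lamp) and its shadow edges get smeared.
// Hence the separate skylight pass.
std::vector<float> boxFilter(const std::vector<float>& in, int radius) {
    std::vector<float> out(in.size(), 0.0f);
    for (std::size_t i = 0; i < in.size(); ++i) {
        float sum = 0.0f;
        int count = 0;
        for (int d = -radius; d <= radius; ++d) {
            long j = (long)i + d;
            if (j >= 0 && j < (long)in.size()) { sum += in[(std::size_t)j]; ++count; }
        }
        out[i] = sum / (float)count;
    }
    return out;
}
```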


Regardless, just defaulting to stochastic approaches because it feels elegant seems premature to me.
If you missed it, you might like Eric Heitz's work on area lights. He managed to combine a stochastic shadowing term with an analytic light contribution, so one half of the noise problem goes away. There is a paper somewhere.
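If I remember the paper right (Heitz et al. 2018, "Combining Analytic Direct Illumination and Stochastic Shadows"), the core of it is a ratio estimator along these lines:

```cpp
// Shade the area light analytically (noise-free, e.g. with linearly
// transformed cosines) and modulate by a stochastic shadowed/unshadowed
// ratio: where the light is fully visible the ratio is exactly 1 and the
// analytic result passes through untouched, so the noise (and the
// denoiser's work) is confined to penumbrae.
float shadeAreaLight(float analyticUnshadowed,    // closed-form lighting term
                     float stochasticShadowed,    // ray-traced, noisy
                     float stochasticUnshadowed)  // same samples, occlusion ignored
{
    if (stochasticUnshadowed <= 0.0f) return 0.0f;
    return analyticUnshadowed * (stochasticShadowed / stochasticUnshadowed);
}
```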
 