It's not a big deal, it's only 2D. It's not remotely on the level of computing the BVH.
It is pretty high cost. IIRC NV builds a linked list per SM texel using atomics. Can't remember if they compact the lists after that for faster lookups.
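To make the "linked list per SM texel" pattern concrete, here is a minimal CPU-side sketch of it. This is my reconstruction from memory of how such a build works in general, not NVIDIA's actual HFTS code; on the GPU the `counter` bump and head swap would be atomics (`atomicAdd` / `atomicExchange`), which I just model serially in Python.

```python
# Sketch of a per-texel linked list build, as described above.
# On the GPU the node allocation and head update are atomic operations;
# here they are plain Python, which is equivalent in a serial setting.

W = H = 4                      # tiny shadow-map resolution for illustration
head = [-1] * (W * H)          # per-texel head pointer, -1 means empty list
nodes = []                     # shared node buffer: (depth, next_index)

def insert(texel, depth):
    """Prepend a fragment's depth to the texel's list (atomicExchange on GPU)."""
    node_index = len(nodes)             # atomicAdd(counter, 1) on GPU
    nodes.append((depth, head[texel]))  # new node links to the old head
    head[texel] = node_index            # atomicExchange(head[texel], node_index)

def walk(texel):
    """Traverse a texel's list; this is the lookup that compaction would speed up."""
    depths, i = [], head[texel]
    while i != -1:
        d, i = nodes[i]
        depths.append(d)
    return depths

insert(5, 0.25)
insert(5, 0.75)
insert(3, 0.5)
print(walk(5))  # [0.75, 0.25] -- most recently inserted fragment first
```

Because each insert just swaps the head pointer, lists come out in reverse insertion order and scattered through the node buffer, which is exactly why a compaction/sort pass afterwards helps lookup speed.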
They stated it's practical for one light; even for next gen, 4 lights would probably already be very demanding.
I guess all games that used it restricted it to the sun? I wonder if they applied it to the full open world, or only to the closest cascade?
Picking out a few major light sources for special treatment is as valid with ray tracing as it was with rasterization; scaling is not an issue.
Disagree. You could use HFTS for selected lights, but you still have to generate regular SMs for all the others. So any SM technique carries a high constant cost per light per frame.
With RT the BVH cost is independent of the number of lights. That's my argument for why scaling eventually favors RT. In theory... not sure if we'll actually reach that point.
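The scaling argument can be put into a toy cost model. All the numbers below are hypothetical placeholders (arbitrary ms-like units), just to show that a fixed per-frame BVH cost plus a small per-light ray cost eventually undercuts a constant per-light shadow-map cost.

```python
# Toy cost model for the scaling argument above. The constants are made up;
# only the shape of the two curves matters.

def shadow_map_cost(n_lights, per_light=1.0):
    # Every shadow-mapped light pays a roughly constant render cost per frame.
    return n_lights * per_light

def ray_traced_cost(n_lights, bvh_build=4.0, per_light_rays=0.3):
    # The BVH is built once per frame regardless of light count;
    # only the ray budget grows with the number of lights.
    return bvh_build + n_lights * per_light_rays

for n in (1, 4, 8, 16):
    print(n, shadow_map_cost(n), ray_traced_cost(n))
```

With these particular constants, shadow maps win at 1 light and ray tracing wins by 8 lights; where the real crossover sits (and whether games ever reach it) is exactly the open question.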
Until you can go full Monte Carlo supersampling to solve everything it's all hacks.
Agree. But all this fake area light SM tech is much worse hackery, and it never really found its way from papers into games. The light sources still need to be very small (no skylight), and for most the win from a bit of penumbra is not worth the cost.
For RT, we can explore a whole new field of options. We need something like MIS / bidirectional pathtracing techniques suitable for realtime games.
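For reference, the core of MIS is just a weighting scheme; here is a minimal sketch of Veach's balance heuristic, the standard form. When a sample direction could have been produced by several strategies (say, light sampling and BSDF sampling), its contribution is weighted by its pdf relative to the sum of all pdfs, so no energy is double counted.

```python
# Minimal sketch of the MIS balance heuristic: w_i = p_i / sum_j p_j.

def balance_heuristic(pdf_used, all_pdfs):
    """Weight for a sample drawn by one strategy among several."""
    return pdf_used / sum(all_pdfs)

# Example: a direction with light-sampling pdf 0.8 and BSDF-sampling pdf 0.2.
w_light = balance_heuristic(0.8, [0.8, 0.2])
w_bsdf  = balance_heuristic(0.2, [0.8, 0.2])
print(w_light, w_bsdf)  # 0.8 0.2 -- the weights always sum to 1
```

The realtime question is how to get this kind of machinery (and bidirectional-style connections) cheap enough for a game frame budget.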
For example, it could make sense to combine the denoising pass with gathering per-light contribution weights, so that for each screen tile we could build a list of the most important lights and sample them more often.
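The per-tile idea above could be sketched roughly like this. Everything here is hypothetical (the function names and the idea of feeding estimates from an analysis/denoising pass are my assumptions, not an existing engine API): normalize per-light contribution estimates for a tile into a CDF, then invert it with a uniform random number to pick which light to shadow-ray next.

```python
import random

# Hypothetical sketch: per-tile light importance sampling. The contribution
# estimates would come from a prior analysis/denoising pass; here they are
# just given numbers.

def build_tile_distribution(contributions):
    """Normalize per-light contribution estimates into a CDF for sampling."""
    total = sum(contributions)
    cdf, acc = [], 0.0
    for c in contributions:
        acc += c / total
        cdf.append(acc)
    return cdf

def sample_light(cdf, u):
    """Pick a light index by inverting the CDF with a uniform u in [0, 1)."""
    for i, threshold in enumerate(cdf):
        if u <= threshold:
            return i
    return len(cdf) - 1

# Tile with 4 lights; light 2 dominates, so it should be sampled most often.
cdf = build_tile_distribution([0.1, 0.2, 5.0, 0.05])
counts = [0, 0, 0, 0]
rng = random.Random(0)
for _ in range(10000):
    counts[sample_light(cdf, rng.random())] += 1
print(counts)
```

Dim lights still get occasional samples (so nothing is ever fully dropped), but the ray budget concentrates where it reduces noise most; dividing each sample by its pick probability would keep the estimate unbiased.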
With the games we've seen so far, the focus was probably mostly on denoising, and less on optimal light sampling.
I think there is much more progress to expect here than from SM approaches.