Anyway,
https://devblogs.nvidia.com/rtx-best-practices/ has a lot of indirect information about how RTX works under the hood.
Assuming reordering is only worth it for incoherent rays, there would likely be a pipeline flag that lets the user turn it on or off.
But there is no such flag, the current focus may be on coherent shadow / sharp reflection rays, and there is ImgTec's claim, so... OK, I assume there is no reordering yet.
If laggy light is part of real-time raytracing, I'm starting to take umbrage with calling it 'real-time'.
Lag is necessary because processing power is finite. I don't think we can ever get rid of it completely, and it would be news to me if NN-based approaches needed no form of history.
For GI, the information from a single frame can't be enough to produce a realistic approximation.
It's not the noise that causes the lag, it's the enormous amount of computation we would need to solve all those reflections. (My approach produces no noise, but I have lag too.)
But I don't think it's a big problem; personally I use these features to limit visible lag:
Caching directional lighting in world space, so there is no ghosting, and view-dependent specular stays angularly correct under motion even with outdated lighting information.
More updates in regions where more change happens.
Temporarily reducing detail when overall change is high, so more surface area can be updated in the same time, e.g. if all lights turn off (see the rough sketch below).
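To make that concrete, here is a rough C++ sketch of how such a prioritized, budgeted cache update could look. All the names, the SH layout and the numbers (frame budget, ray counts, threshold) are made up for illustration, not my actual implementation:

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// 2nd-order SH: 9 coefficients per colour channel of cached directional radiance.
struct SH9 { float c[9][3] = {}; };

struct Probe {
    SH9   radiance;             // cached directional lighting, stored in world space
    float changeEstimate = 0.f; // how much the surroundings changed since the last refresh
};

constexpr float kHighChangeThreshold = 100.f;  // made-up tuning value

// Stub: the real thing would trace rays and rebuild p.radiance; here it only
// reports a made-up cost in milliseconds so the sketch stands alone.
float RefreshProbe(Probe& p, int raysPerProbe) { (void)p; return raysPerProbe * 0.001f; }

void UpdateLightingCache(std::vector<Probe>& probes, float frameBudgetMs)
{
    // Refresh the probes whose surroundings changed the most first.
    std::vector<std::size_t> order(probes.size());
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::sort(order.begin(), order.end(), [&probes](std::size_t a, std::size_t b) {
        return probes[a].changeEstimate > probes[b].changeEstimate;
    });

    // If the whole scene changed a lot (e.g. all lights switched off), trade
    // per-probe quality for coverage: fewer rays each, more probes per frame.
    float totalChange = 0.f;
    for (const Probe& p : probes) totalChange += p.changeEstimate;
    const int raysPerProbe = (totalChange > kHighChangeThreshold) ? 16 : 64;

    // Spend a fixed per-frame time budget, most-changed probes first.
    float spentMs = 0.f;
    for (std::size_t idx : order) {
        if (spentMs >= frameBudgetMs) break;
        spentMs += RefreshProbe(probes[idx], raysPerProbe);
        probes[idx].changeEstimate = 0.f;
    }
}
```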
This already helps a lot, but I expect much more from calculating direct lighting with lag-free traditional methods, which is a feature I don't have yet.
Basically only indirect lighting (or area lights, where traditional stuff sucks) would cause lag, and if that's visible at all, it's just the price for realistic lighting.
So with some engineering effort this problem can surely be solved well enough. It also helps that real-world moving lights can usually use traditional point / spot lights (cars, flashlights).
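For illustration, shading with that split could look roughly like this: the direct term from punctual lights is analytic and lag-free every frame, while the indirect term comes from the world-space cache and may lag a few frames. One simple way to get the view-dependent specular behaviour mentioned above is to evaluate the cached radiance along the current reflection direction, so the angular response tracks the camera even when the cached lighting is stale. Every type and function here is just a placeholder, not any particular engine's API:

```cpp
struct Vec3 { float x = 0, y = 0, z = 0; };

static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
// 'i' is the incident direction (eye toward surface).
static Vec3  Reflect(Vec3 i, Vec3 n) { return i + n * (-2.f * Dot(i, n)); }

// Placeholder cache entry and lookups; a real engine supplies these.
struct CacheEntry { /* e.g. SH coefficients of cached directional radiance */ };
static CacheEntry LookupCache(Vec3 /*worldPos*/) { return {}; }
static Vec3 EvaluateCachedRadiance(const CacheEntry&, Vec3 /*dir*/) { return {}; }
static Vec3 EvaluatePunctualLights(Vec3 /*p*/, Vec3 /*n*/, Vec3 /*v*/) { return {}; }

Vec3 ShadePoint(Vec3 pos, Vec3 n, Vec3 viewDir)
{
    // Lag-free direct term: moving cars / flashlights handled analytically every frame.
    Vec3 direct = EvaluatePunctualLights(pos, n, viewDir);

    CacheEntry cached = LookupCache(pos);

    // Indirect diffuse from the cache: a few frames of staleness is rarely noticeable.
    Vec3 indirectDiffuse = EvaluateCachedRadiance(cached, n);

    // Indirect specular: look up the cached radiance along the *current*
    // reflection direction, so it stays angularly correct under camera motion
    // even when the cached lighting itself is outdated.
    Vec3 reflDir = Reflect(viewDir, n);
    Vec3 indirectSpecular = EvaluateCachedRadiance(cached, reflDir);

    return direct + indirectDiffuse + indirectSpecular;
}
```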
I'm more worried about specular artifacts and limitations than about lag.