> I don't see how anyone can make a qualified statement on AMD RT hardware right now. It was literally just released a few months ago. It's a different implementation than RTX, which has been around 2+ years. Given that RDNA2's RT doesn't accelerate everything that RTX does, it will force devs to come up with better solutions to deal with that issue, and that's going to take time to implement.

I see this quite differently:

1. Current games and performance results are a qualified measure and statement.
2. Devs on PC cannot come up with better solutions to optimize for a specific HW architecture. Currently we can do only general things, like optimizing sampling strategy and denoising, but those are orthogonal to the HW architecture.
(Well, we can try things like inline tracing and see whether a certain architecture runs faster or slower from that, but that's not something we can expect much from.)
If AMD would expose their intersection instructions to compute shaders, I could reuse my own BVH for RT. Because this BVH is streamed from disk, no building at runtime is necessary. My BVH also links to lower-LOD geometry in its internal nodes, so traversal finishes earlier.

> I imagine that RDNA2 RT will never be as performant as most of RTX's current offerings, but it does not mean we will not see more capable games that will better reflect what the current AMD cards can offer.
I cannot imagine RTX would be faster than RDNA2 for me, because RTX only works with a BVH built by the driver, and LOD only works by replacing parts of geometry (which increases the necessary but redundant BVH building work even further).
I expect a very similar situation for UE5, and also for other engines as they increase detail and utilize acceleration structures. AMD's potential flexibility might become a big advantage, reversing the picture we see now.
At the moment, hardware traversal is a performance win, but in the long run the resulting fixed acceleration structure might turn out to be a restriction and a net loss.