So up front, the usual caveat here - I'll give my 2c, but HWRT is not my primary area of the engine, so take everything with a grain of salt.
@Andrew Lauritzen - How easy would it be for a dev team to go from software Lumen to hardware?
This seems to get thrown around a lot as something that's "just an easy option to expose", and while in some cases that can be true functionally, most of the real work comes down to maintaining a BVH for raytracing. As I've noted ad nauseam at this point, tracing rays is not really the difficult part; it's maintaining a raytracing acceleration structure at sufficient quality and accuracy in dynamic scenes.
To that end, there are generally a few considerations:
1) Memory. RT BVHs can get big quickly, especially if they need to span a large part of the scene at relatively high quality. This can sometimes be partially offset by removing distance fields, but many games use distance fields for other things (VFX being a big one), so they can't be dropped without alternatives. HWRT is often not a good substitute for those DF-based VFX uses, which want smooth, continuous fields rather than visibility queries (see the first sketch after this list).
2) TLAS and BVH update cost. For performance reasons on PC, you are generally limited to a few hundred thousand instances max, as there's no good way to stream or partially update the TLAS. Many modern Nanite scenes use far more than that, so various compromises have to be made, generally by dropping small instances, aggressively clamping RT scene distances and/or leaning heavily on HLOD to combine and simplify the static scene (the second sketch below illustrates the kind of culling heuristic involved). The latter brings one a little bit back towards a baking-style workflow, which is of course undesirable. BVH update cost can also be an issue in scenes with lots of deformable vertex animation (characters, foliage, etc.)... basically anything that is Nanite unfriendly is *even more* unfriendly to raytracing.
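To make point 1 a bit more concrete, here's a minimal, self-contained C++ sketch (not UE code; the "scene" is just an analytic sphere and all names are mine) of the kind of query DF-based VFX rely on: a cheap distance sample plus a smooth gradient at any point in space. A BVH only answers "does this ray hit something, and where", so emulating the same query means several rays per particle and a discontinuous result.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float Length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Toy signed distance field: distance to a unit sphere at the origin. In the
// engine this would instead be a sample of a baked/streamed distance field volume.
float SampleDistance(Vec3 p) { return Length(p) - 1.0f; }

// Central-difference gradient: a smooth "surface direction" defined everywhere
// in space, not just at ray hit points.
Vec3 SampleGradient(Vec3 p)
{
    const float e = 0.01f;
    return {
        SampleDistance({p.x + e, p.y, p.z}) - SampleDistance({p.x - e, p.y, p.z}),
        SampleDistance({p.x, p.y + e, p.z}) - SampleDistance({p.x, p.y - e, p.z}),
        SampleDistance({p.x, p.y, p.z + e}) - SampleDistance({p.x, p.y, p.z - e}),
    };
}

// Typical DF-style VFX query: one distance sample and one gradient per particle
// per step. Doing the equivalent against a BVH means firing several rays per
// particle and still getting a non-smooth answer.
void CollideParticle(Vec3& pos, Vec3& vel, float radius, float dt)
{
    pos = pos + vel * dt;
    float d = SampleDistance(pos);
    if (d < radius)
    {
        Vec3 n = SampleGradient(pos);
        n = n * (1.0f / Length(n));     // normalize the field gradient
        pos = pos + n * (radius - d);   // push the particle back out along the field
        vel = vel * 0.5f;               // crude damping instead of a full bounce
    }
}
```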
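And a hedged sketch of the kind of instance-culling compromise mentioned in point 2. This is not the engine's actual code path; RTInstance, SelectInstancesForTLAS and the thresholds are illustrative stand-ins for the idea of dropping small or distant instances before the TLAS gets (re)built.

```cpp
#include <algorithm>
#include <vector>

struct RTInstance
{
    float BoundsRadius;     // bounding-sphere radius in world units
    float DistanceToCamera; // world-space distance from the view origin
    // ... transform, BLAS reference, flags, etc. would live here in practice
};

// Pick which instances are actually submitted to the TLAS build. Beyond a clamped
// RT scene distance the instance is skipped entirely; below a rough angular-size
// cutoff it's treated as too small to matter for GI/reflections even though it
// would still cost TLAS memory and build time.
std::vector<RTInstance> SelectInstancesForTLAS(
    const std::vector<RTInstance>& allInstances,
    float maxRTSceneDistance,
    float minAngularSize)
{
    std::vector<RTInstance> selected;
    selected.reserve(allInstances.size());

    for (const RTInstance& inst : allInstances)
    {
        if (inst.DistanceToCamera > maxRTSceneDistance)
            continue; // beyond the clamped RT range: fall back to non-RT lighting

        // Approximate on-screen size as radius over distance (angular size).
        float angularSize = inst.BoundsRadius / std::max(inst.DistanceToCamera, 1.0f);
        if (angularSize < minAngularSize)
            continue; // too small to meaningfully affect the traced result

        selected.push_back(inst);
    }

    // Keeping 'selected' in the low hundreds of thousands is the practical
    // ceiling described above for rebuilding the TLAS every frame.
    return selected;
}
```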
Given the above, the real question is: for a given set of content, is it reasonable to maintain a BVH at sufficient quality that it at least provides improved visuals without completely destroying performance? Despite the consumer sentiment of "just expose everything!", I think AAA game devs are right to try and maintain a reasonable level of QA on the things they ship. Several recent Digital Foundry videos have touched on cases where I think it's clear that certain combinations of options should simply not have been exposed at all.
I'm thinking of PS5 Pro in this case as I think that could be a quick upgrade devs could offer for potentially little work on their end?
Consoles are an additional consideration since there you generally ship the acceleration structures themselves and stream them. There are a lot of upsides to that, but the downside is that it means recooking the game and potentially large patches. Overall, though, the considerations will depend a lot on the content of the given game, as described above.
What about the games that don't use Nanite, would that be an easy switch?
No simple answer to that one. On one hand, not using Nanite generally implies simpler geometry and smaller instance counts, which does make RT somewhat easier. On the other hand, though, Nanite can accelerate a bunch of stuff with Lumen, so there are likely to be some caveats in practice depending on the original content target.
In general, while the engine does make it a lot easier to do HWRT than implementing it from scratch (and this is continually improving), the details matter. We're not yet at the point where you can just turn it on and assume that engine magic will take any content and make it work well with raytracing. Hopefully we'll get to that point eventually, but we do need some graphics API, engine and hardware evolution to get there.