As I understand it, DXR deliberately obfuscates the data format of the BVH. The intention seems to be that each IHV can optimise the format to match the way their hardware works.
So, for example, perhaps inline ray tracing (DXR 1.1) is preferred on AMD and the BVH data format is optimised for that.
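For context, this is roughly what that opaque path looks like on the application side. A minimal sketch, assuming the caller already has a DXR-capable device, a command list, and pre-allocated scratch/result buffers; resource barriers and error handling are omitted. The point is that the app describes its triangles in a format it controls, asks the driver how many bytes the driver's internal format needs, and gets back a blob it is never allowed to parse.

```cpp
#include <d3d12.h>

// Minimal sketch of building a bottom-level acceleration structure (BLAS) with DXR.
// The application never sees the BVH layout: the driver reports a size, writes its
// own format into 'blasBuffer', and everything in between is opaque.
void BuildOpaqueBlas(ID3D12Device5* device,
                     ID3D12GraphicsCommandList4* cmdList,
                     D3D12_GPU_VIRTUAL_ADDRESS vertexBuffer, UINT vertexCount,
                     D3D12_GPU_VIRTUAL_ADDRESS indexBuffer, UINT indexCount,
                     D3D12_GPU_VIRTUAL_ADDRESS scratchBuffer,   // caller-allocated
                     D3D12_GPU_VIRTUAL_ADDRESS blasBuffer)      // caller-allocated
{
    // Describe the input geometry in a format we *do* control.
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geom.Triangles.VertexBuffer.StartAddress  = vertexBuffer;
    geom.Triangles.VertexBuffer.StrideInBytes = 3 * sizeof(float);
    geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geom.Triangles.VertexCount  = vertexCount;
    geom.Triangles.IndexBuffer  = indexBuffer;
    geom.Triangles.IndexFormat  = DXGI_FORMAT_R32_UINT;
    geom.Triangles.IndexCount   = indexCount;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    inputs.Flags = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PREFER_FAST_TRACE;
    inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs = 1;
    inputs.pGeometryDescs = &geom;

    // The only thing the driver tells us about its BVH format is how many bytes it
    // needs; a real app would size 'scratchBuffer' and 'blasBuffer' from these values
    // before recording the build.
    D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO prebuild = {};
    device->GetRaytracingAccelerationStructurePrebuildInfo(&inputs, &prebuild);

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs = inputs;
    build.ScratchAccelerationStructureData = scratchBuffer;
    build.DestAccelerationStructureData    = blasBuffer;

    // The driver converts our triangles into its private BVH layout. We can copy or
    // compact the result later, but never interpret it ourselves.
    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}
```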
Sure, but the question is how much of this obfuscation is actually necessary. Let's walk through a speculative example: 'Add RT support to UE5 geometry.'
And let's assume AMD's RT support only means additional instructions to intersect boxes and triangles (they never mention 'traversal' in their marketing).
If so, Epic could reuse their existing data structures, which probably already have hierarchy and LOD, and which could work for RT as well. With HW support on both consoles, plus AMD GPUs on PC, that's a nice option to have. And no RT API would be necessary at all.
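To make that concrete, here is a purely speculative sketch (plain C++ on the CPU, with a made-up node layout) of what such an engine-driven traversal could look like if the hardware only accelerated the box and triangle tests. The two intersection routines stand in for the hypothetical hardware instructions; the traversal loop, the node format, and engine-specific data like LOD links all stay under the engine's control.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <utility>
#include <vector>

// Speculative sketch: if HW RT were "just" box/triangle intersection instructions,
// the engine could keep its own node layout (streaming-friendly, with LOD data,
// whatever it needs) and write the traversal itself. Everything below -- the node
// format and both intersect routines -- is made up for illustration; on such
// hardware the two tests would map to single instructions.

struct Ray  { float org[3]; float dir[3]; float tMax; };
struct Aabb { float lo[3];  float hi[3]; };

struct Node {
    Aabb     bounds[2];   // bounds of the two children
    uint32_t child[2];    // child node index, or triangle index if that child is a leaf
    bool     isLeaf[2];
    uint32_t lodLevel;    // engine-specific data an opaque DXR BVH could not carry
};

// Stand-in for a hardware box test: a plain slab test.
static bool IntersectBox(const Ray& r, const Aabb& b)
{
    float tmin = 0.0f, tmax = r.tMax;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / r.dir[i];
        float t0 = (b.lo[i] - r.org[i]) * inv;
        float t1 = (b.hi[i] - r.org[i]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Stand-in for a hardware triangle test: Moeller-Trumbore.
static bool IntersectTriangle(const Ray& r, const float* a, const float* b,
                              const float* c, float& t)
{
    auto cross = [](const float* u, const float* v, float* out) {
        out[0] = u[1] * v[2] - u[2] * v[1];
        out[1] = u[2] * v[0] - u[0] * v[2];
        out[2] = u[0] * v[1] - u[1] * v[0];
    };
    auto dot = [](const float* u, const float* v) {
        return u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
    };
    float e1[3], e2[3], s[3], p[3], q[3];
    for (int i = 0; i < 3; ++i) { e1[i] = b[i] - a[i]; e2[i] = c[i] - a[i]; s[i] = r.org[i] - a[i]; }
    cross(r.dir, e2, p);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;
    float invDet = 1.0f / det;
    float u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    cross(s, e1, q);
    float v = dot(r.dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * invDet;
    return t > 0.0f;
}

// Engine-controlled traversal over an engine-controlled tree: find the closest hit.
float TraceClosestHit(const Ray& ray,
                      const std::vector<Node>& nodes,
                      const std::vector<float>& triVerts)   // 9 floats per triangle
{
    float closest = ray.tMax;
    uint32_t stack[64];
    int top = 0;
    stack[top++] = 0;                                   // start at the root node
    while (top > 0) {
        const Node& n = nodes[stack[--top]];
        for (int c = 0; c < 2; ++c) {
            if (!IntersectBox(ray, n.bounds[c]))        // would be the HW box instruction
                continue;
            if (n.isLeaf[c]) {
                const float* v = &triVerts[n.child[c] * 9];
                float t;
                if (IntersectTriangle(ray, v, v + 3, v + 6, t) && t < closest)  // HW tri instruction
                    closest = t;
            } else {
                stack[top++] = n.child[c];
            }
        }
    }
    return closest;                                     // == ray.tMax means no hit
}
```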
To support RTX, a lot of problems come up:
Streaming the BVH is not possible, so they have to build the BVH on the CPU each time they load a model, with little control over the performance cost.
LOD is not possible either. They cannot just load the levels of the hierarchy that are actually needed. They have to use discrete LODs and build a BVH for each level, which requires a lot of extra memory, with little control over how much.
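A small sketch of what that means in practice, assuming each BLAS input description is filled in the same way as in the earlier snippet: one complete BLAS per discrete LOD of every streamed mesh, with the byte count only known once the driver reports it at load time.

```cpp
#include <d3d12.h>
#include <cstdint>
#include <vector>

// Sketch: with opaque BVHs there is no partial or streamed hierarchy, so every
// discrete LOD needs its own full BLAS. 'lodInputs' holds one input description
// per LOD (filled in as in the build snippet above). The only cost information
// available to the app is the size the driver reports.
uint64_t TotalBlasBytesForDiscreteLods(
    ID3D12Device5* device,
    const std::vector<D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS>& lodInputs)
{
    uint64_t total = 0;
    for (const auto& inputs : lodInputs) {
        D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO info = {};
        device->GetRaytracingAccelerationStructurePrebuildInfo(&inputs, &info);
        total += info.ResultDataMaxSizeInBytes;   // one complete BVH per LOD level
    }
    return total;
}
```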
In practice this means that, on PC, you need more memory and more CPU cores to have RT.
We really want to know precisely what the situation is.