Once it's out, like with an ISA, you have to support it forever, on all past, present and future HW.
That's not practical.
If future HW has new data structures, the software has to be updated and patched. That way the cost is shared between IHVs and devs.
I really look at this as a temporary and experimental solution / hack. In the long run we would need a standard format, converted to the HW format by the driver (streaming), or compute functions to add / remove branches of BVH and geometry to the tree nodes. But then we also need a full BVH API.
Can the ray tracing units read the compressed geometry? I don't think so, and the decompressed geometry would first have to be stored in memory in a ray-tracing-unit-friendly format.
Regarding LOD, we will converge to the following conclusion, which for Nanite already holds in part:
RT acceleration structure and compressed geometry are the same thing.
Both use a tree (likely a BVH), either to find triangles spatially or to add levels of detail. Ideally we want to use the same data structure for both, but likely we can only achieve efficient conversion from 'custom compression format' to 'RT BVH'.
(Nanite uses a BVH, but it's not built with efficient RT in mind. That could be changed and improved. E.g. for my geometry format the BVH is built with RT as the primary application, so the quality of the converted result may be better than the RT BVHs currently generated by drivers.)
There is no better way to achieve dynamic and efficient detail which can actually be ray traced. DMM is very nice and useful, but it does not solve the harder and more important goal we have with LOD: being able to reduce geometric complexity dynamically and gradually to what we actually need. DMM only addresses detail amplification. But at some distance, even the low-poly base meshes become too detailed and waste memory and performance.
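To make the 'one tree for both' idea concrete, here is a minimal sketch of what such a shared node could look like: each node carries an AABB for ray traversal and a geometric error for LOD selection, and the renderer picks a 'cut' through the tree based on an error tolerance. All names and the layout are illustrative assumptions, not any real HW or API format.

```cpp
#include <cassert>
#include <vector>

// Hypothetical shared node: usable as an RT BVH node (AABB for spatial
// queries) and as a LOD hierarchy node (error metric for cut selection).
struct Node {
    float boundsMin[3], boundsMax[3]; // AABB, used by ray traversal
    float lodError;                   // max geometric error if we stop here
    int firstChild;                   // index of first child, -1 for leaf
    int childCount;
    int clusterId;                    // leaf payload: triangle cluster id
};

// Select a cut through the tree: stop at any node whose error is below the
// tolerance, otherwise descend. Large tolerance (far away) yields few,
// coarse nodes; small tolerance (close up) yields many, fine nodes. This
// is the gradual complexity reduction DMM alone cannot provide.
void selectCut(const std::vector<Node>& nodes, int idx,
               float tolerance, std::vector<int>& out) {
    const Node& n = nodes[idx];
    if (n.firstChild < 0 || n.lodError <= tolerance) {
        out.push_back(idx); // render / trace this node's clusters
        return;
    }
    for (int c = 0; c < n.childCount; ++c)
        selectCut(nodes, n.firstChild + c, tolerance, out);
}
```

The same indices could then, in principle, feed a converter that emits the HW BVH format, which is exactly why blackboxed formats block this path.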
Now I see two options:
An industry-wide LOD standard, which works for both RT and rasterization, and whose data ofc. isn't blackboxed. Sounds ideal, but due to the mapping from geometry to texture space (which breaks as holes open and close), such a solution is not possible without agreement on tolerable faults and limitations. And I don't think it makes sense to develop and specify such a faulty and thus temporary standard. It's better to let devs work on custom solutions and see if at some point they converge to practices similar and robust enough to consider making them a standard. Notice the topic is also related to difficult problems such as seamless UV maps to support displacement mapping, for example. That's hard enough that the games industry has not even tried to utilize it, besides some theoretical talks at GDC. It's really too early for off-the-shelf standards. We need flexibility first to explore the options.
The other option is to expose IHV BVH data structures as said, so those willing to accept the price can actually work on the LOD problem, with an eventual future BVH API in mind.
I'm aware not many people want to do this. But if Nanite turns out successful, and not everybody wants to use UE, people will just have to work on LOD to remain competitive.
And if we get to this point, suddenly the whole industry sees the problem of RT APIs not being compatible with any gradual LOD solution.
Thus, RT is indeed a 'Nanite decelerator' as you said, just in a different way: it prevents the whole industry from finally tackling the LOD problem, at a time when discrete LOD is suddenly no longer good enough.
I do understand there were good reasons to keep the BVH blackboxed. But sadly, time has already shown the decision was wrong. And now it has to be fixed by those who made it: IHVs and API designers.
Ignoring the problem and postponing the fix will only make it harder the longer we wait, as future HW might rule out solutions which would be possible now.
So being modest, and accepting the hack of releasing BVH specs as good enough, is all I can do from my side.
The third option I've proposed above remains interesting as well:
Add the option to replace a cluster of geometry together with its bounding box. The driver can then update the existing BVH locally, without needing to rebuild it for the whole mesh.
This keeps the blackbox intact, but then we still build a BVH instead of converting it from our custom, already existing data.
It could work regardless, I guess, but eliminating build costs is very tempting to make RT more affordable.
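A rough sketch of what that local update could mean under the hood: swap one leaf's cluster bounds in place, then refit only the ancestor AABBs bottom-up, which costs O(depth) per swap instead of an O(n) rebuild. This is a guess at the mechanism behind such an API, with purely illustrative names and layout.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative BVH layout with parent links so we can walk upward.
struct Aabb { float mn[3], mx[3]; };
struct Node { Aabb box; int parent, left, right; }; // -1 = none

// Union of two AABBs.
Aabb merge(const Aabb& a, const Aabb& b) {
    Aabb r;
    for (int i = 0; i < 3; ++i) {
        r.mn[i] = std::min(a.mn[i], b.mn[i]);
        r.mx[i] = std::max(a.mx[i], b.mx[i]);
    }
    return r;
}

// Hypothetical driver-side path for "replace a cluster with its bounding
// box": update the leaf's bounds, then refit each ancestor. No rebuild
// of the rest of the tree is needed.
void replaceCluster(std::vector<Node>& nodes, int leaf, const Aabb& newBox) {
    nodes[leaf].box = newBox;
    for (int p = nodes[leaf].parent; p >= 0; p = nodes[p].parent)
        nodes[p].box = merge(nodes[nodes[p].left].box,
                             nodes[nodes[p].right].box);
}
```

A caveat: pure refitting degrades BVH quality if clusters move or resize a lot, so a real driver would likely mix this with occasional local rebuilds.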
Nice sum-up, I think. Spread the word... ;)