Yes. The RTX units are invaluable for offline ray tracing, where Nvidia has been working on GPU acceleration, and they are well worth including in GPUs designed for professionals.
The inclusion of RTX encourages devs to adopt an accelerated ray-tracing solution built on Nvidia's BVH structure rather than explore alternatives like cone tracing.
From Nvidia's description of RTX, the driver handles building and refitting of the BVH, and the RT cores autonomously handle traversal. The BVH implementation used seems to be pretty black-box.
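To make concrete what "traversal" means here, below is a minimal sketch of the kind of work the RT cores handle autonomously: walking a tree of bounding boxes, descending only into boxes the ray enters, and handing off primitives at the leaves for intersection testing. The node layout, names, and slab test are illustrative assumptions for a toy binary BVH; Nvidia's actual in-memory format behind RTX is undisclosed.

```python
class Node:
    """Toy BVH node: an AABB plus either child nodes or leaf primitives."""
    def __init__(self, lo, hi, children=(), prims=()):
        self.lo, self.hi = lo, hi        # AABB corners (3-tuples)
        self.children = children         # inner node: child Nodes
        self.prims = prims               # leaf: primitive indices

def hit_aabb(origin, inv_dir, lo, hi):
    """Standard slab test; inv_dir = 1/dir per axis (assumed nonzero)."""
    tmin, tmax = 0.0, float('inf')
    for a in range(3):
        t0 = (lo[a] - origin[a]) * inv_dir[a]
        t1 = (hi[a] - origin[a]) * inv_dir[a]
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin <= tmax

def traverse(root, origin, direction):
    """Return primitive indices whose leaf AABBs the ray enters."""
    inv_dir = tuple(1.0 / d for d in direction)
    hits, stack = [], [root]
    while stack:
        node = stack.pop()
        if not hit_aabb(origin, inv_dir, node.lo, node.hi):
            continue                     # prune this whole subtree
        if node.prims:
            hits.extend(node.prims)      # leaf: hand off to intersection tests
        stack.extend(node.children)
    return hits
```

The pruning step is the whole point of the structure: subtrees whose boxes the ray misses are never touched, so the cost per ray is roughly logarithmic in scene size rather than linear.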
RTX has some of Nvidia's particular spin on the concept, but the overarching idea behind the ray-tracing API is that the low-level acceleration structures are encapsulated, so other implementers can use different structures while still plugging into the same API.
What specific elements of the BVH are developers exposed to?
Tensor cores are math accelerators. They don't constrain which ML algorithms can be used; they were included to address the performance limitations of ML in general, not to solve one specific problem. The BVH units in RTX, by contrast, are designed to solve a particular problem (traversing a particular memory structure) rather than being versatile accelerators.
Tensor cores accelerate ML in the form of weights and connections, worked through dense matrix and vector multiplication on digital ALUs and crossbars that either map very well onto existing hardware hierarchies or extend them in a reasonable way.
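For reference, the basic tensor-core primitive is a fused multiply-accumulate over small matrix tiles, D = A x B + C, with (on current hardware) FP16 inputs and FP32 accumulation. The pure-Python sketch below only illustrates that dataflow, not the hardware; the function name and tile size are illustrative.

```python
def tile_mma(a, b, c, n=4):
    """D = A @ B + C for n x n tiles given as lists of lists.
    Mirrors the tensor-core matrix-multiply-accumulate dataflow."""
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = c[i][j]                    # accumulator input (FP32 in hardware)
            for k in range(n):
                acc += a[i][k] * b[k][j]     # dense multiply-accumulate
            d[i][j] = acc
    return d

identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
ones = [[1.0] * 4 for _ in range(4)]
# With A = I, B = ones, C = ones, every element of D is 2.0
result = tile_mma(identity, ones, ones)
```

Because this is just dense multiply-accumulate, it maps cleanly onto conventional SIMD datapaths, which is the contrast being drawn with a pointer-chasing BVH traversal unit.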
By the logic being applied to RTX and its BVH, tensor cores likewise discriminate against various neuromorphic and analog methods, and steer devs away from optical and quantum approaches as well.
RT cores do accelerate BVH traversal, and also the intersection tests (although the latter can be replaced). The BVH is this first implementation's solution to a more general problem.
Without an acceleration structure, no alternative method has come close to practicality. BVH is the choice Nvidia settled on as the structure it thought it could best map onto the existing architecture. It's not the only option, but it's the one Nvidia seems to have been able to map most effectively onto its existing SIMD hardware for construction.
Traversal of the acceleration structure is a challenge for a lot of alternative methods, however. I thought cone tracing still needs an acceleration structure, and its intersection evaluation would be more complex than a ray's. The latter point would seem to favor a different sort of hardware optimization, since Nvidia's offer is to accelerate intersection testing for the simpler case.
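As a rough illustration of why a cone query is heavier per step than a ray query: a cone's footprint widens with distance, so (as in voxel cone tracing) each step samples a coarser level of some prefiltered structure instead of doing one cheap ray/box test. This is a hedged sketch of that marching loop under assumed names (`sample_mip` is a hypothetical callback standing in for a prefiltered occupancy lookup, not any real API):

```python
import math

def cone_march(origin, direction, half_angle, voxel_size, max_dist, sample_mip):
    """Accumulate occlusion along a cone, front to back.
    sample_mip(origin, direction, dist, mip) -> occupancy in [0, 1]."""
    occlusion = 0.0
    dist = voxel_size                 # start one voxel out to avoid self-sampling
    while dist < max_dist and occlusion < 1.0:
        # Footprint diameter grows linearly with distance along the cone.
        diameter = max(voxel_size, 2.0 * dist * math.tan(half_angle))
        mip = math.log2(diameter / voxel_size)        # coarser data as cone widens
        a = sample_mip(origin, direction, dist, mip)  # prefiltered occupancy
        occlusion += (1.0 - occlusion) * a            # front-to-back compositing
        dist += 0.5 * diameter                        # step scales with footprint
    return min(occlusion, 1.0)
```

Note the per-step work is a filtered lookup plus compositing math, which leans on texture-sampling hardware rather than the box/triangle intersection units RTX provides.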
They are akin to video decode blocks, which were included only after the need for video decoding had been established as pretty vital to any computer and the codecs had been defined, after years and years of software decoding on the CPU gravitated toward an 'ideal' solution worth baking into hardware.
It wasn't always the case that these were taken for granted. AMD got nailed for lying by omission about R600's lack of a UVD block, so today's settled question had a period of pathfinding work and initial efforts.
It wasn't settled whether there would be T&L hardware on the graphics chip, texture compression, AA, or AF until someone put in the hardware to do it--and there were any number of now-gone implementations before a rough consensus was reached.
If Nvidia's specific version of RT hardware doesn't catch on, it's no different from other items that didn't pan out in the long run (TruForm, TrueAudio, quadric surfaces, etc.), or from many of the features we have now where someone had to commit hardware before there would be adoption.
My personal opinion about mobile RT is the exact opposite: first, compute is no alternative and only fixed-function hardware can do it at all; and second, I do not understand the need for RT on mobile, while on PC/console I do.
I think there's a desire to have the same games or similar games with similar features on mobile platforms as there are in the PC and console. Among other things, it helps mobile devices steal more time from the other platforms, and can help the same product expand to multiple markets more readily.
Btw, most doubts that efficient RT can be done in compute at all likely come from two arguments: building the acceleration structure takes too much time (solution: refit instead of full rebuild),
Depending on the level of change in a scene, a refit can take a significant fraction of the cost of a rebuild. There's no theoretical ceiling to this, and if the cost of a rebuild is no longer the dominant one, scene complexity can rise until the refit becomes a similar limit.
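For context, a refit keeps the tree topology fixed and only recomputes bounding boxes bottom-up from the moved geometry, which is cheap per node but lets boxes grow stale and overlapping as geometry deforms; a rebuild recreates the topology itself. A minimal sketch, assuming a toy node layout (`BVHNode` and `prim_bounds` are illustrative names, not any real API):

```python
class BVHNode:
    """Toy BVH node: fixed topology, AABB recomputed on refit."""
    def __init__(self, children=(), prims=()):
        self.children, self.prims = children, prims
        self.lo = self.hi = None

def refit(node, prim_bounds):
    """Recompute this subtree's AABB from current primitive bounds.
    prim_bounds maps primitive index -> (lo, hi) 3-tuples."""
    if node.prims:                               # leaf: bound its primitives
        boxes = [prim_bounds[p] for p in node.prims]
    else:                                        # inner node: refit children first
        boxes = [refit(child, prim_bounds) for child in node.children]
    node.lo = tuple(min(b[0][a] for b in boxes) for a in range(3))
    node.hi = tuple(max(b[1][a] for b in boxes) for a in range(3))
    return (node.lo, node.hi)
```

The cost is one pass over the tree, but nothing in a refit stops sibling boxes from growing to overlap heavily, which is why traversal quality (and hence the argument above) degrades the more the scene changes.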
and traversal per thread is too slow (solution: don't do it this way). Both of those arguments are outdated.
Traversal of some structure to find arbitrarily related geometry in an unknown place in DRAM or cache is a fundamental challenge. "Don't do that" is both true and unhelpful.
Mobile hardware often favors fixed-function more because it is cost- and power-constrained to a degree PCs and consoles are not. General compute resources have a higher baseline of area and power consumption, and the mm2 and milliwatts they take are more costly.
Recently I read a blog post from a developer, and he made this interesting speculation: NV simply cannot talk about how their RT works, because they would have to expect legal issues from ImgTec patents.
This tends to be true in a wide range of cases, and if IMG really wanted to poke that hornet's nest it would likely have the resources to figure out if Nvidia was using patented tech.
Cross-licensing is common, and companies constantly develop in silence without regard to whether a competitor's technique is being invented in parallel. There are good odds that, if there isn't pre-existing licensing, there's a case of mutually assured destruction: IMG likely infringes somewhere else that Nvidia hasn't yet taken it to task over.
Apple and Intel, for example, did get caught out infringing on memory disambiguation hardware whose patents were enforced by an organization related to the University of Wisconsin, which couldn't be sued back the way another IHV could.
One other way non-disclosure can help is if Nvidia finds a better method, they can change things while minimizing how much bleeds out from under the abstraction.