Probably because they wanted it to ultimately stay in the hands of the IHV to determine what would be best for them. And what may be better today may not necessarily be better for that same IHV tomorrow. Though I'm not sure, honestly.
Another way to look at it is that the BVH is hardware dependent. If the BVH is built in the driver, that's no problem for the IHV: they can add support for new hardware in a new driver. If the BVH were implemented in the game engine, each game would have to be patched with a new BVH implementation whenever new hardware (a new architecture or chip) became available, and each game would also have to be specifically optimized for older hardware like the Turing variants. That creates a ton of work.

Considering this, and how first-generation RT games are typically supported, it is very unlikely old games would get patched so that new hardware could function or perform optimally. In that world it would have been tremendously difficult for AMD to bring RT support to games after the fact: AMD would have needed to go back to each developer and ask them to write an AMD-specific BVH implementation, or none of the "DXR" games would run on AMD at all. That would mean each game being patched and re-released, with the onus on the game developer to verify AMD hardware + RT, versus today where the onus is on AMD to provide a performant driver.
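To make the boundary concrete, here is a minimal sketch of a bottom-level acceleration structure build in DXR (D3D12). It's not production code: error handling, device/command-list setup and synchronization are omitted, and AllocateUAVBuffer is a hypothetical helper, not a D3D12 call. The point is that the application only supplies raw triangles, asks the driver how much opaque memory it wants, and issues the build; the internal node layout never appears in application code, which is what lets an IHV change it in a driver update.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical helper (assumed, not part of D3D12): allocates a default-heap
// GPU buffer of the requested size in the given initial resource state.
ComPtr<ID3D12Resource> AllocateUAVBuffer(ID3D12Device* device, UINT64 size,
                                         D3D12_RESOURCE_STATES initialState);

void BuildOpaqueBlas(ID3D12Device5* device,
                     ID3D12GraphicsCommandList4* cmdList,
                     D3D12_GPU_VIRTUAL_ADDRESS vertexBuffer,
                     UINT vertexCount, UINT vertexStride)
{
    // 1. Describe the geometry in a vendor-neutral way: just triangles.
    D3D12_RAYTRACING_GEOMETRY_DESC geom = {};
    geom.Type = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geom.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geom.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geom.Triangles.VertexCount = vertexCount;
    geom.Triangles.VertexBuffer.StartAddress = vertexBuffer;
    geom.Triangles.VertexBuffer.StrideInBytes = vertexStride;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_INPUTS inputs = {};
    inputs.Type = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    inputs.Flags = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_BUILD_FLAG_PREFER_FAST_TRACE;
    inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
    inputs.NumDescs = 1;
    inputs.pGeometryDescs = &geom;

    // 2. Ask the driver how big *its* BVH and scratch memory will be.
    //    The sizes (and the layout behind them) are IHV-specific.
    D3D12_RAYTRACING_ACCELERATION_STRUCTURE_PREBUILD_INFO prebuild = {};
    device->GetRaytracingAccelerationStructurePrebuildInfo(&inputs, &prebuild);

    ComPtr<ID3D12Resource> scratch = AllocateUAVBuffer(
        device, prebuild.ScratchDataSizeInBytes,
        D3D12_RESOURCE_STATE_UNORDERED_ACCESS);
    ComPtr<ID3D12Resource> result = AllocateUAVBuffer(
        device, prebuild.ResultDataMaxSizeInBytes,
        D3D12_RESOURCE_STATE_RAYTRACING_ACCELERATION_STRUCTURE);

    // 3. Record the build. The application never sees the node format inside
    //    'result'; it only ever passes the buffer back to the API by GPU
    //    address. (In real code the buffers must outlive GPU execution.)
    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs = inputs;
    build.ScratchAccelerationStructureData = scratch->GetGPUVirtualAddress();
    build.DestAccelerationStructureData = result->GetGPUVirtualAddress();
    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}
```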
Even simple things like the data format, and possibly the compression, used to describe and store the BVH can be (and are) hardware specific: compression, structuring the data to be hardware friendly, and so on. For now it's much better to let the driver handle this than to force game developers to deal with each chip and architecture separately in their code. It gets more complicated once the developer also has to implement hardware-friendly grouping of rays to traverse the BVH efficiently; naively using the hardware will likely be very cache unfriendly and perform poorly. The black box called the BVH is not at all trivial to own at scale. It's fine to own the BVH if you're doing a demo for one specific GPU or a console-exclusive game.
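To illustrate the format point, here is a purely hypothetical sketch of two ways a driver might encode the same BVH node; neither struct corresponds to any real IHV's layout. One is a full-precision binary node, the other a wider node with quantized child bounds that trades precision for cache footprint. An engine-side BVH would have to bake in one such choice, and every later architecture would be stuck with it.

```cpp
// Purely illustrative: two hypothetical internal BVH node encodings a driver
// might choose. Neither matches any shipping hardware; the point is only that
// the size, branching factor and precision trade-offs differ per architecture.
#include <cstdint>

// Variant A: binary BVH node with full-precision child bounds.
// 64 bytes per node, 2 children, simple to build and refit.
struct BinaryNode {
    float    childMin[2][3];  // per-child AABB minimum corner
    float    childMax[2][3];  // per-child AABB maximum corner
    uint32_t childIndex[2];   // index of each child node (or leaf)
    uint32_t leafMask;        // bit i set if child i is a leaf
    uint32_t pad;
};
static_assert(sizeof(BinaryNode) == 64, "one typical cache line");

// Variant B: 4-wide node with bounds quantized to 8 bits relative to a
// shared local frame. More children per cache line and fewer memory round
// trips during traversal, but the build/refit paths look quite different.
struct QuantizedWideNode {
    float    origin[3];       // shared frame for the quantized child bounds
    float    scale[3];        // dequantization scale per axis
    uint8_t  qMin[4][3];      // quantized per-child minimum corner
    uint8_t  qMax[4][3];      // quantized per-child maximum corner
    uint32_t childIndex[4];   // children (internal nodes or leaf ranges)
};
static_assert(sizeof(QuantizedWideNode) == 64, "also one cache line");
```

A driver can move between layouts like these per architecture, or even per build flag, without any game being re-shipped, which is exactly the argument above for keeping the BVH behind the driver for now.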