This is one aspect. Another is, as has been pointed out, that lighting with RT is still difficult.
And a third is that, for the foreseeable future, any multiplatform game has to offer both approaches, which can't help but increase the total development work. RTRT lighting may be The Future(tm), but I maintain that for that to be the case it has to offer tangible benefits at a net lower resource cost. When, if ever, a game designer/publisher will be able to assume that all targeted platforms support RT efficiently enough that no other approach needs to be considered is very much up in the air.
One question about dropping the more conventional rendering path is how much further graphical power, power efficiency, and the state of the art have to advance to overcome how very good current architectures are at handling their primary use case.
How much of what users see on screen is dominated by, or draws heavily from, calculations and data flow that are accelerated by rasterization hardware, or from the coherent primary rays that even RT hardware handles quite well?
Many of Nvidia's initial efforts make a point of leaving the vast majority of the screen to the raster path, reserving ray tracing for the margins of the technique: the types of incoherent ray behavior that are by nature more difficult for hardware of any kind to handle well.
If 80-90% of what is seen depends heavily or in part on the standard paths that handle those calculations up to an order of magnitude more efficiently, how eager will developers or hardware designers be to discard the optimizations that made room for ray-tracing hardware, or the spare compute resources and power budget for a new sort of compute kernel, in the first place?
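To make that division of labor concrete, here's a minimal sketch of the hybrid frame structure described above, with the raster path producing the bulk of the image and ray tracing dispatched only for the effects it handles poorly. All types and function names are illustrative, not any real engine's API:

```cpp
#include <vector>

// Illustrative G-buffer pixel; real engines pack far more data.
struct GBufferPixel {
    float depth;
    float roughness;
    bool  needsRay;  // flagged where screen-space data is insufficient
};

// Primary visibility: the work raster hardware does up to an order of
// magnitude more efficiently than tracing primary rays.
static void rasterizeScene(std::vector<GBufferPixel>& gbuf) {
    for (auto& px : gbuf) {
        px.depth = 1.0f;
        px.roughness = 0.5f;
        px.needsRay = false;  // most pixels never leave the raster path
    }
}

// The RT pass touches only the flagged minority: mirror reflections,
// off-screen hits, area-light shadows, and other incoherent work.
static void traceSecondaryRays(std::vector<GBufferPixel>& gbuf) {
    for (auto& px : gbuf) {
        if (px.needsRay) {
            // build a ray from the G-buffer, traverse the BVH, shade the hit
        }
    }
}

int main() {
    std::vector<GBufferPixel> gbuffer(1920 * 1080);
    rasterizeScene(gbuffer);      // ~80-90% of the final image
    traceSecondaryRays(gbuffer);  // the difficult remainder
    return 0;
}
```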
It's around 9 minutes in where they mention this. I wonder, if the low-level mode is so much faster, why there is even a conventional SSD mode.
Assuming the Sony patent is what the PS5 tries to do, and assuming that something similar is being done in the purported low-level mode:
There would be functions the low-level mode cannot do, like writing data. One scenario I was curious about for the Sony method is what might happen with an open-world game that has a sprawling item database with small writes throughout, and whether the write-back is better left to the standard file system.
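A minimal sketch of that split, assuming a hypothetical read-only low-level interface (the llread_* names are invented for illustration, not a real PS5 API) alongside the standard file system for the small, frequent database writes:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Hypothetical read-only low-level interface; stubs standing in for
// whatever the real mode exposes.
struct LLHandle { int id; };
static LLHandle llread_open(const char* asset) { (void)asset; return {1}; }
static size_t   llread(LLHandle h, void* dst, size_t off, size_t len) {
    (void)h; (void)dst; (void)off;
    return len;  // stub: pretend the whole read completed
}

int main() {
    // Fast path: bulk streaming of world data, strictly read-only.
    LLHandle world = llread_open("world_chunk_042");
    uint8_t buf[4096];
    llread(world, buf, 0, sizeof buf);

    // Slow path: small item-database writes go through the standard
    // file system, which keeps its arbitration, journaling, and
    // protection layers in the loop.
    if (FILE* db = std::fopen("item_db.bin", "r+b")) {
        uint32_t itemState = 7;
        std::fwrite(&itemState, sizeof itemState, 1, db);
        std::fclose(db);
    }
    return 0;
}
```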
Other standard functions like arbitration of file access, garbage collection, file and file system protection, system integrity, and quality of service are not handled to a significant degree by the low-level mode.
Depending on the specifics of the hardware mode, there may be limits on how complex the access pattern can get before hardware constraints (controller on-die buffer space, queue depths, synchronization points, etc.) prompt special handling or fallback behavior. While a console game is treated as a single-user system, there may be security implications for game or OS functions that interface with the network or other elements, where those missing arbitration and protection layers need to be brought back in.
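The fallback half of that could be as simple as a dispatch check. The limits below are invented numbers, purely to illustrate the shape of the logic:

```cpp
#include <cstddef>
#include <vector>

struct ReadRequest { std::size_t offset, length; };

// Assumed constraints for the sketch; the real controller limits,
// if any, are unknown.
constexpr std::size_t kMaxHwQueueDepth = 64;
constexpr std::size_t kMaxRequestBytes = 1 << 20;

enum class Path { LowLevel, StandardFs };

// Route a batch to the low-level mode only when it fits the assumed
// hardware limits; anything more complex falls back to the file system.
static Path choosePath(const std::vector<ReadRequest>& batch) {
    if (batch.size() > kMaxHwQueueDepth) return Path::StandardFs;
    for (const auto& r : batch)
        if (r.length > kMaxRequestBytes) return Path::StandardFs;
    return Path::LowLevel;
}

int main() {
    std::vector<ReadRequest> batch = {{0, 65536}, {1 << 20, 65536}};
    return choosePath(batch) == Path::LowLevel ? 0 : 1;
}
```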
Also, depending on the nature of the low-level access and how well protected it is from bad or malicious payloads, there is the question of what errors or flaws in the low-level handling could affect. Could overflows or timing issues on a controller with fewer insulating layers trash something like a remapping table, which could corrupt or brick the drive? As an example, what if the low-level management firmware had the same bug rate as Sony's suspend mode, where a game like Stardew Valley needed a system firmware update to stop it from locking up the console and sometimes prompting a disk database rebuild?
A read-only performance mode may also avoid worrying about what happens on power loss, which many SSDs and operating systems already have problems with. Something like a whole-game remapping table subject to edits and shifting compression/encryption coverage could raise the stakes further.
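For a sense of the care a mutable table demands, here is the classic write-new-then-atomically-swap pattern in POSIX terms; skipping any one step is exactly how a mid-update power cut leaves a half-written table. File names are illustrative:

```cpp
#include <cstddef>
#include <cstdio>
#include <unistd.h>  // fsync (POSIX)

// Write the updated table to a temp file, force it to stable media,
// then atomically rename it over the old copy. After a power cut the
// file system shows either the old table or the new one, never a mix.
static bool updateRemapTable(const void* table, std::size_t len) {
    FILE* tmp = std::fopen("remap.tbl.new", "wb");
    if (!tmp) return false;
    if (std::fwrite(table, 1, len, tmp) != len) {
        std::fclose(tmp);
        return false;
    }
    std::fflush(tmp);    // drain stdio buffers...
    fsync(fileno(tmp));  // ...then push the data to the device
    std::fclose(tmp);
    return std::rename("remap.tbl.new", "remap.tbl") == 0;
}

int main() {
    unsigned char table[256] = {0};
    return updateRemapTable(table, sizeof table) ? 0 : 1;
}
```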
There could also be a risk-management angle to the platforms having separate modes: they can hold the low-level mode back until they're much more certain about the quality and robustness of their methods before letting games get at it.