If we believe that DMC5 was eventually run on PS5 hardware before release, that's close to the time frame needed for a compression pass over the package. This also assumes there isn't a baseline zlib or Kraken implementation akin to marking a drive or folder as "compressed," where the storage subsystem compresses anything written into that region of the disk. If the PS5 has a default compression setting for the SSD and its use is transparent, it seems possible that it would require little attention or effort to happen, unless the PS5's functionality in that regard simply isn't ready.
Optimization stages or preprocessing for things like Oodle Texture could take more time, but some level of packing or compression would likely happen regardless for the disc and download packages: zlib is omnipresent, and Kraken was already in use prior to the PS5. Again, this assumes the PS5 doesn't have a platform-wide compression flag for data going onto the SSD.
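For a sense of how little effort a baseline pass takes, here's a minimal sketch of build-time package compression with zlib. The in-memory buffer and single compress2 call are illustrative assumptions; a real pipeline would stream per-asset chunks and might use Kraken instead.

```cpp
// Minimal sketch: a build-time compression pass over a package with zlib.
// The in-memory buffer is an assumption; a real pipeline would stream
// per-asset chunks and might use Kraken rather than zlib.
#include <zlib.h>
#include <cstdio>
#include <vector>

int main() {
    // Placeholder input standing in for package data read from disk.
    std::vector<Bytef> package(64 * 1024 * 1024, 0x42);

    uLongf outSize = compressBound(package.size());
    std::vector<Bytef> out(outSize);

    // Z_BEST_SPEED keeps the pass quick; higher levels trade time for ratio.
    int rc = compress2(out.data(), &outSize,
                       package.data(), package.size(), Z_BEST_SPEED);
    if (rc != Z_OK) {
        std::fprintf(stderr, "compress2 failed: %d\n", rc);
        return 1;
    }
    std::printf("in: %zu bytes, out: %lu bytes\n", package.size(),
                static_cast<unsigned long>(outSize));
    return 0;
}
```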
Loading a level or a save involves more than moving data off the disk. Setting up objects and the render pipeline, and reconstructing a scene from the save data, is non-trivial work. With an HDD that work may have been dwarfed by the IO time, so a massive improvement in drive performance may reveal what was going on concurrently.
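Here's a minimal sketch of that overlap, with loadFromDisk and buildScene as hypothetical stand-ins for whatever the engine actually does: as long as the IO dominates, the CPU-side setup is hidden; shrink the IO and the setup cost becomes the visible floor.

```cpp
// Minimal sketch: overlap disk IO with CPU-side scene setup.
// loadFromDisk() and buildScene() are hypothetical stand-ins; the
// sleeps model work so the timing relationship is visible.
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>
#include <vector>

std::vector<char> loadFromDisk() {
    std::this_thread::sleep_for(std::chrono::seconds(8)); // HDD-era IO time
    return std::vector<char>(1024);
}

void buildScene(const std::vector<char>&) {
    std::this_thread::sleep_for(std::chrono::seconds(3)); // data-dependent setup
}

int main() {
    auto t0 = std::chrono::steady_clock::now();

    // Kick off the read, then do setup that doesn't need the data yet.
    auto io = std::async(std::launch::async, loadFromDisk);
    std::this_thread::sleep_for(std::chrono::seconds(2));  // IO-independent setup
    buildScene(io.get());

    auto secs = std::chrono::duration_cast<std::chrono::seconds>(
                    std::chrono::steady_clock::now() - t0).count();
    // At 8 s of IO, the 2 s of concurrent setup is free and the total is
    // ~11 s; cut the IO to ~1 s and buildScene() becomes the visible cost.
    std::printf("load took %llds\n", static_cast<long long>(secs));
    return 0;
}
```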
It's possible that the drive isn't reaching peak performance if more general layout optimizations were missed. However, if the baseline layout was optimized for an HDD's non-random access, it seems like the drive would handle things well, since data arranged for sequential sweeps is the easy case for an SSD.
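To make the layout point concrete, here's a sketch that times the same volume of reads issued in layout order versus at scattered offsets. The probe file path is a placeholder, and a real measurement would bypass the OS page cache between runs.

```cpp
// Minimal sketch: time the same volume of reads in layout order versus
// at scattered offsets. The probe file path is a placeholder, and a real
// measurement would bypass the OS page cache between runs.
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <vector>

double timeReads(std::FILE* f, const std::vector<long>& offsets, size_t chunk) {
    std::vector<char> buf(chunk);
    auto t0 = std::chrono::steady_clock::now();
    for (long off : offsets) {
        std::fseek(f, off, SEEK_SET);
        std::fread(buf.data(), 1, chunk, f);
    }
    return std::chrono::duration<double>(
               std::chrono::steady_clock::now() - t0).count();
}

int main() {
    std::FILE* f = std::fopen("/tmp/layout_probe.bin", "rb"); // hypothetical
    if (!f) { std::perror("fopen"); return 1; }
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);

    const size_t chunk = 256 * 1024;
    const int reads = 256;
    if (size < static_cast<long>(reads * chunk)) {
        std::fputs("probe file too small\n", stderr);
        return 1;
    }

    std::vector<long> seq, rnd;
    for (int i = 0; i < reads; ++i) {
        seq.push_back(static_cast<long>(i * chunk));        // layout order
        rnd.push_back(std::rand() % (size - static_cast<long>(chunk)));
    }

    std::printf("sequential: %.3fs, scattered: %.3fs\n",
                timeReads(f, seq, chunk), timeReads(f, rnd, chunk));
    std::fclose(f);
    return 0;
}
```

On an HDD the scattered pattern is dramatically slower; on an SSD the gap is far smaller, which is why an HDD-friendly layout shouldn't hold the drive back much.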
However, if the game's base design assumed there would be a big IO event at launch or when loading a save, it's potentially safer to wipe a lot of state and reinitialize than to worry about dirty state carrying over. If there are going to be many seconds of downtime anyway, a small set of broad invalidations and restarts to a known state can be developed and debugged more readily.
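Here's a sketch of that pattern, with hypothetical subsystem flags standing in for real engine state: every load path funnels through the same teardown-then-initialize sequence, so there is only one startup order to debug.

```cpp
// Minimal sketch of "wipe and reinitialize" on load. The subsystem flags
// are hypothetical stand-ins for real engine state; the point is that
// every load path funnels through one teardown-then-init sequence.
#include <cstdio>

struct GameState {
    bool renderer_ready = false;
    bool audio_ready = false;
    bool world_ready = false;
};

// Broad invalidation: forget everything instead of tracking what's dirty.
void teardown(GameState& s) {
    s = GameState{}; // back to a known-empty state
}

// Rebuild from scratch: one initialization order to develop and debug.
void initialize(GameState& s) {
    s.renderer_ready = true;
    s.audio_ready = true;
    s.world_ready = true;
}

void loadSave(GameState& s) {
    teardown(s);   // no dirty state can carry over
    initialize(s); // deterministic startup path
    std::puts("state rebuilt from save");
}

int main() {
    GameState state;
    loadSave(state);
    return 0;
}
```

Tracking exactly which state a given save invalidates would be faster, but the extra bookkeeping only pays off once IO stops dominating the load.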
A list of approved drives in firmware may be possible, although heavy-handed relative to prior generations. A performance check at installation time may be too trivial to be meaningful. Perhaps raw bandwidth could be checked in a small amount of time, but would a more rigorous benchmark be reasonable to expect a console to perform on its own, and could it be done without making the upgrade process more onerous for the consumer? Should we also look at that concept in light of Sony's level of platform and software engineering, and the time it took to implement other system features?
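A quick raw-bandwidth check is cheap to sketch, assuming a hypothetical probe file and ignoring the caching a real check would have to defeat; the harder question is whether a few seconds of sequential reads says anything about sustained or random-access performance.

```cpp
// Minimal sketch of a quick raw-bandwidth check at drive-installation
// time. The probe path and size are assumptions; a real check would hit
// the raw device and defeat caching, and this says nothing about
// sustained thermals or random-access behavior.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    std::FILE* f = std::fopen("/tmp/bandwidth_probe.bin", "rb"); // hypothetical
    if (!f) { std::perror("fopen"); return 1; }

    const size_t chunk = 4 * 1024 * 1024; // 4 MiB sequential reads
    std::vector<char> buf(chunk);

    auto t0 = std::chrono::steady_clock::now();
    size_t total = 0, n = 0;
    while ((n = std::fread(buf.data(), 1, chunk, f)) > 0)
        total += n;
    double dt = std::chrono::duration<double>(
                    std::chrono::steady_clock::now() - t0).count();
    std::fclose(f);

    std::printf("%.0f MB/s over %zu bytes\n", total / dt / 1e6, total);
    return 0;
}
```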