I'm not sure I follow what you're saying, but I thought Quadbitnomial explained it eloquently. The change to a custom ASIC was simply to enable the drive to work with a PCIe 4.0 x2 interface rather than a PCIe 3.0 x4 interface. There's no change in bandwidth there; it's most likely just a future-proofing exercise, because the console, and more importantly its external peripherals, will still need to be manufactured more than half a decade from now, and PCIe 3.0 is already an old standard.
The max speed of the drive is determined by the number of NAND channels supported by the controller, along with the speed of the NAND used. The Phison E19T supports 4 channels and NAND speeds up to 1,200 MT/s, for a maximum rated throughput of 3.75 GB/s. However, that doesn't mean the SSD has to use such high-speed (expensive) memory.
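Quick back-of-envelope on that, assuming an 8-bit bus per NAND channel so 1 MT/s works out to roughly 1 MB/s per channel (my assumption for the sketch, not something off an official spec sheet):

```python
def raw_nand_bandwidth(channels, mt_per_s, bus_bits=8):
    """Aggregate raw NAND interface bandwidth in MB/s,
    assuming an 8-bit bus per channel (1 MT/s ~ 1 MB/s)."""
    return channels * mt_per_s * (bus_bits / 8)

# E19T-style config: 4 channels of 1,200 MT/s NAND
raw = raw_nand_bandwidth(4, 1200)   # 4800 MB/s raw interface ceiling

# The controller itself is rated lower (~3,750 MB/s), so the
# effective peak is whichever ceiling you hit first:
effective = min(raw, 3750)
print(effective)  # 3750.0
```

So the NAND interface ceiling sits above the controller's own rating, which is why a drive maker can pair the controller with slower, cheaper NAND and land well under 3.75 GB/s.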
As mentioned above, we know the XBSX uses a Western Digital SN530, but the only customisation to that drive that has been mentioned is the PCIe interface, which results in no change to overall potential bandwidth. So given that the standard version of that drive is rated at 2.4 GB/s, that we've been told of no changes that would impact that peak throughput, and that Microsoft advertise the drive as being capable of 2.4 GB/s, I think Occam's razor applies.
Also, the sustained-speed thing is likely a red herring. Any drive can sustain its max throughput under ideal circumstances as long as it doesn't throttle. Presumably Microsoft are simply very confident in their cooling solution - something that would obviously be helped greatly by using a drive with much slower memory than the maximum the controller supports.
I guess there's some truth to this, though I don't know if calling their mention of sustained speeds a red herring is necessary. To me (as one interpretation, admittedly) it'd suggest the speeds aren't sustainable, but you've already made a case for why they very much would be. I don't do a ton of looking into SSD performance metrics on PC, but what I have noticed is that certain drives tend to hit their peaks sporadically, then have noticeable drops - basically bobbing and weaving their bandwidth rates up and down dramatically over the course of a transfer. It can be especially noticeable with certain tasks like 4K video file writes, though again I haven't kept up too much with benchmarks on very recent drives in the PC space.
So if I'm understanding you here, the flash memory controller's been changed from 3.0 to 4.0 more so to keep the interconnect standard relevant, but functionally the bandwidth could very well still be 3.75 GB/s. However, I also still agree with Brit that they may've made some changes to the controller, even if minor. PCIe 4.0 has some small adjustments to the PHY layer and link-level management compared to 3.0, so the flash controller would need to be modified to accommodate at least those changes to be 4.0-compliant, I think.
But that doesn't eliminate the possibility of other customizations. The drive is software-driven, and Microsoft has a lot of technical papers about improvements in that area.
Yeah, at the very least they'd need to change the controller's PHY and link-level management to be in line with 4.0. But there's also the Flashmap papers, and they've clearly done a lot of R&D into flash memory optimization systems for bandwidth, latency, redundancy, etc. They've also designed their own custom SSDs, like the 100 GB/s ASIC drive they made - I forget its name though.
My guess is that even if the flash memory controller is more or less stock, minus some changes to accommodate the 4.0 spec, the decompression block will hold most of the customization in MS's I/O solution. The flash memory controller is only one part of the whole I/O subsystem (true for both Microsoft's and Sony's systems).
I think for 10th-gen systems they need to focus on shipping with 2 TB SSDs and an expandable slot similar to what's in the PS5. That would alleviate some of the issues you've mentioned. With a target of about 200-300 GB for game sizes, this wouldn't be hard; they could simply ship two Blu-ray discs. Again, it's not clear if companies like MSFT will even ship another console after this, tbh.
If developers industry-wide can get better at compressing their data assets, maybe 2 TB will be doable for 10th-gen, but we have to assume the worst, so I think a 4 TB minimum might be more likely. Though, I have thought about the possibility of them making an even *more* genuine move back to cartridges and dropping Blu-ray altogether for cheap USB 4.0 Gen 2-based flash carts. 128 GB - 192 GB capacities should be readily doable by 10th-gen, with 2.4 GB/s bandwidths and costing no more than $5 - $8 at those capacities, provided NAND prices continue to trend downward over time (which they should).
That way your cold storage medium is fast enough to send potentially double digits' worth of compressed data to a 2 TB SSD, saving on costs for SSD capacity, with game installs taking literal seconds. I think Microsoft might have something like that planned, looking at the specs for the expansion cards as well as their form factor. They're perfectly sized; their capacity just needs to come way down, but NAND prices also need to fall more, and they're a few years out on that front. But I won't be too surprised if, say around 2024, some Series games start shipping on small 64 GB expansion cards rebranded as flash cartridges, and they release a Series system with no internal storage but the same decompression hardware as the S and X. They could maybe even try this with a Series S refresh around 2023/2024; it just depends on how NAND pricing works out, among a few other things.
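To sanity-check the install-time claim, here's a rough transfer calc assuming the cart sustains its full 2.4 GB/s sequential read the whole way (ignoring write-side and filesystem overhead, so a best-case figure):

```python
def install_time_s(game_gb, read_gb_s=2.4):
    """Best-case seconds to copy a full game image at the
    cart's sustained sequential read speed."""
    return game_gb / read_gb_s

print(round(install_time_s(64), 1))   # 26.7 -> a 64 GB cart in under half a minute
print(round(install_time_s(128), 1))  # 53.3 -> a 128 GB cart in under a minute
```

So "literal seconds" holds up for the 64 GB cart size, and even a full 128 GB image stays under a minute at those speeds.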
I think SSDs solve a lot of the issues you've mentioned. The Series X and PS5 will be capable of fully utilizing the small amount of RAM they have thanks to their SSDs: you can simply have the game install on the SSD act as part of virtual RAM. So roughly speaking, if they doubled the RAM on 10th-gen to 32 GB, a game with a 200 GB install size could effectively see 232 GB of virtual RAM. All they'd need is an SSD with 12 GB/s throughput and a decompression ratio of about 2.5:1, and the game could near-instantly page in anything it needs from the 200 GB install. That's how Series X and PS5 games are going to work as well - much better memory paging.

We're definitely not at the point of diminishing returns, if there's anything to learn from Apple's IC gains with the M1. 10th-gen will definitely have GPUs much more powerful and efficient than an RTX 3090 - that's 6-7 years from 2020. So much better hardware acceleration for RT, better geometry processing engines, higher-bandwidth RAM, etc. Finally achieving photorealistic games is possible on 10th-gen.
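The arithmetic there, spelled out (the 12 GB/s and 2.5:1 figures are the post's hypotheticals, not real hardware specs):

```python
def effective_stream_gb_s(raw_gb_s, compression_ratio):
    """Decompressed data rate the game actually sees:
    raw SSD throughput multiplied by the compression ratio."""
    return raw_gb_s * compression_ratio

def virtual_ram_gb(physical_gb, install_gb):
    """Total addressable pool if the entire install on the SSD
    is treated as pageable backing store behind physical RAM."""
    return physical_gb + install_gb

print(effective_stream_gb_s(12, 2.5))  # 30.0 GB/s effective streaming rate
print(virtual_ram_gb(32, 200))         # 232 GB total 'virtual RAM' pool
```

At 30 GB/s effective, refilling the entire 32 GB physical pool from the install would take about a second, which is the sense in which paging becomes near-instant.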
This sounds like it'd work very well for game loading, but I'm thinking more along the lines of rapid asset streaming, which is why I still think for 10th-gen they may want drives with faster raw bandwidths and larger decompression ratios. To make up for that, maybe they do indeed go with 2 TB standard instead of 4 TB or something like that - you've got to save costs somewhere. Plus, if they can do something like the aforementioned "flash cartridges" to replace Blu-ray while not costing too much more ($5 - $8, depending on capacity, and hopefully NAND pricing being cheap enough by then), they could do a lot more with 2 TB of storage than what the Series X can do with 1 TB or the PS5 with 825 GB.
And since storage space is already one of the biggest things people are taking issue with regarding 9th-gen out of the gate, that sounds like a part of the systems MS and Sony will want to resolve for 10th-gen. "Flash cartridges" in lieu of Blu-ray discs would save costs by not needing a Blu-ray drive and also offer MUCH better transfer speeds to the internal SSD. Plus, game-specific updates and save data (at least some of it) could optionally be written back to the flash cartridge if the user wants, freeing up more SSD space for dynamic data (dynamic in the sense of both virtual RAM and frequent reads/writes of dynamic game data - though this kind of requires NAND with better P/E cycles at the cheaper tiers, or as a last resort, some NVRAM reserved just for massively dynamic game data).