Agreed, that's why my guesstimate is 6-8x the current TFLOPs. But they'll definitely be better than the RTX 3090; at least double the performance.
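Quick back-of-envelope on that, using the rough public FP32 figures (PS5 ~10.3 TF, Series X ~12.2 TF, RTX 3090 ~35.6 TF; treat these as approximations):

```python
# Rough sanity check on "6-8x current TFLOPs" vs. an RTX 3090.
ps5_tf, xsx_tf, rtx3090_tf = 10.3, 12.2, 35.6   # approximate public FP32 specs
for base_tf, name in [(ps5_tf, "PS5"), (xsx_tf, "Series X")]:
    for mult in (6, 8):
        tf = base_tf * mult
        print(f"{name} x{mult}: {tf:.1f} TF -> {tf / rtx3090_tf:.1f}x an RTX 3090")
```

So even the low end of that range lands at roughly 1.7x a 3090, and the high end closer to 2.7x.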
You'll be increasing latency by adding an extra layer. And you'd have to create a new virtual address space and train developers to use that new cache. The extra cache added here and there to boost the SSD can be abstracted away from the developer, but you don't get that option when adding a layer of "NVRAM" between the SSD and RAM (at least not enough to justify adding it in the first place). And the biggest bottleneck for next gen is basically no longer going to be I/O; it's probably going to be memory bandwidth, so most of the investment will go into that, as well as the processors. A 12-16 GB/s sequential read/write SSD would be sufficient for 10th gen; in 7 years, maybe even more than that.
Not too sure that 32 GB of NVRAM between the SSD and RAM would be beneficial in that case. It would need to be substantially larger than the RAM and have substantially higher throughput than the SSD; it would take that combination, in a system with only 32 GB of RAM, to justify the extra cost. Unless it's dirt cheap in 7 years.
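For reference, refill times for a 32 GB pool at the speeds we've been throwing around (none of these are real specs, just the numbers from this thread):

```python
# How long to refill a hypothetical 32 GB RAM pool from each tier, raw bandwidth only.
ram_gb = 32
for gbps, label in [(12, "SSD @ 12 GB/s"), (16, "SSD @ 16 GB/s"),
                    (32, "NVRAM @ 32 GB/s"), (64, "NVRAM @ 64 GB/s")]:
    print(f"{label}: {ram_gb / gbps:.2f} s to refill {ram_gb} GB")
```

The SSD alone gets you there in ~2-2.7 s, so the NVRAM tier only really pays off if it's holding data the SSD would otherwise have to re-stream (and decompress) over and over.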
That certainly seems to open up the spectrum of where designs can go for 10th-gen, then. x3 I still don't know if adding NVRAM would complicate things that much; the way I was trying to present the NVRAM in that other example was just as a memory-direct cache. The OS would handle the transfer of the data, so the NVRAM gets treated basically like a last-level cache, exactly the same way Optane works on PCs today for applications running against it in Memory Mode.
In that setting it saves the developers a lot of code rewriting. In terms of bandwidth, I have thought about where future NVRAM could go. You can already get around 40 GB/s read speeds from Optane DC Persistent Memory on server systems, but you have to populate six DIMM slots for that. Which, obviously, sounds worse than going with actual DRAM, where you'd only need maybe two slots of DDR4 to reach that kind of bandwidth, if not more, but you're trading some bandwidth for a big increase in capacity of byte-addressable memory. That's a different market segment, sure, but it's an illustration of where NVRAM bandwidth already is today, so it can improve even more in the future.
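Quick math on that comparison, assuming ~40 GB/s aggregate read across the six PMem DIMMs and the standard 25.6 GB/s per DDR4-3200 channel (both figures approximate):

```python
# Per-module Optane read bandwidth vs. two DDR4-3200 channels, rough numbers only.
optane_total_gbps, optane_dimms = 40.0, 6
ddr4_3200_per_channel = 25.6   # GB/s per 64-bit channel
print(f"Per Optane DIMM: ~{optane_total_gbps / optane_dimms:.1f} GB/s read")
print(f"Two DDR4-3200 channels: ~{2 * ddr4_3200_per_channel:.1f} GB/s")
```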
But if it's being used as a sort of memory-direct cache for moving data between storage and RAM, then going by some of your previous posts you would only need 12-16 GB/s SSDs to fill a 32 GB pool of RAM with lossless decompression. I'm just trying to envision a scenario where the NVRAM naturally provides that bandwidth without decompression being required, while you still have the decompressor present for data coming from the SSD into the NVRAM. Maybe they could in fact make the NVRAM capacity larger and the bandwidth more modest; that's why I threw out the 32 GB/s figure. Though I guess for cases where one would want the CPU or audio to address that NVRAM cache directly (which, you're right, would require specific code, though maybe not that much), they could double the bandwidth to 64 GB/s. By 10th-gen I think that should be easily doable for something like Optane, though whether there's a transition to interfacing through PCIe 6.0 with CXL instead of DIMM slots, I don't know if or when that would happen.
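Just to make that idea concrete, here's a toy sketch (hypothetical names, naive eviction, nothing like real firmware) of the NVRAM acting as a read-through cache where the decompressor only runs on SSD → NVRAM fills:

```python
# Toy model: NVRAM sits in front of the SSD as a read-through cache.
# Hits are served from NVRAM with no decompression; misses stream compressed
# data from the SSD and decompress it on the way into NVRAM.
class NvramCache:
    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.resident = {}   # asset name -> size in GB
        self.used = 0.0

    def read(self, asset, size_gb, decompress):
        if asset in self.resident:
            return "NVRAM hit (no decompression needed)"
        # Miss: make room, then fill from the SSD through the decompressor.
        while self.used + size_gb > self.capacity and self.resident:
            _, freed = self.resident.popitem()   # naive eviction, not real LRU
            self.used -= freed
        self.resident[asset] = size_gb
        self.used += size_gb
        decompress(asset)
        return "SSD miss (decompressed on the way into NVRAM)"

cache = NvramCache(capacity_gb=32)
print(cache.read("level_01_textures", 6.0, decompress=lambda name: None))
print(cache.read("level_01_textures", 6.0, decompress=lambda name: None))
```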
And it's another option for providing more memory bandwidth, too, while cutting down on some bus contention. That would essentially offer more effective bandwidth for the GPU, since if the CPU or audio need to access some memory they could address the NVRAM in parallel; you'd just need some kind of cache-coherency management (rough numbers on that below).

In any case, just speculating on these possibilities is really exciting, and these back-and-forths are always very helpful. Having bounced these ideas around, I could see Sony taking something more tried-and-true that might not leverage NVRAM but goes for a decent-sized pool of very fast unified memory and fast (but friendlier) SSD storage, with a narrower processor that offsets raw TF computation with a bigger focus on dedicated hardware accelerators. And I honestly think they will try pushing VR as a default standard included in every PS6; by then the tech and prices should be advanced and manageable enough to do this, at least for a basic VR option as standard.
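Going back to the bus-contention point, a toy example with made-up numbers (512 GB/s unified pool, 48 GB/s of combined CPU + audio traffic, ignoring whatever the coherence traffic eats back):

```python
# If CPU/audio traffic moves to a separate NVRAM pool, the GPU keeps the unified bus.
unified_bw_gbps = 512.0    # assumed unified RAM bandwidth
cpu_audio_gbps = 48.0      # assumed combined CPU + audio traffic
print(f"Shared bus, GPU is left with: ~{unified_bw_gbps - cpu_audio_gbps:.0f} GB/s")
print(f"CPU/audio served from NVRAM:  ~{unified_bw_gbps:.0f} GB/s for the GPU")
```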
Microsoft probably goes more open in terms of VR/AR; they partner with another company (Samsung or HTC, most likely) and continue whatever VR initiative they start with the Series X and S (if or when that happens). They go further in on a PC-style design, incorporating a lot of new tech into a package OEMs would not be able to match on price-to-performance, finally unifying Xbox and PC in that regard. More raw TF computational power, but less of a focus on dedicated hardware accelerators. They wouldn't have a hUMA design by default, but would basically leverage newer versions of SAM/Resizable BAR with further cache coherence to make up for that, and they'd offer a console and a PC version of such a design with a staggered release between the two. NVRAM isn't a massive focus, but they provide support for it through some sort of JEDEC-approved, industry-standard card form factor and interface over PCIe 5.0/6.0 with CXL integrated, to facilitate byte-addressable, DRAM-class NVRAM without needing DIMM slots.
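For scale, the raw per-direction bandwidth of an x16 link at those generations, before encoding/FLIT and CXL protocol overhead (so real-world numbers would be somewhat lower):

```python
# Raw x16 link bandwidth per direction: GT/s per lane -> GB/s, times 16 lanes.
for gen, gt_per_s in [("PCIe 5.0", 32), ("PCIe 6.0", 64)]:
    raw_gbs = gt_per_s / 8 * 16
    print(f"{gen} x16: ~{raw_gbs:.0f} GB/s per direction (raw, before overhead)")
```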
I think that's the general direction Sony and Microsoft go for 10th-gen, so we might see systems that differ in even more ways than PS5 and Series X|S do, fitting the business strategies of the respective companies.