So I'm watching this DF vid about the PS5 specs, and I'm really curious about latency to the SSD when they say things like "it seems like it can read from the disk almost as if it were just RAM". If you look at all of the work that goes into optimizing game engines, keeping the GPU and CPU fed by avoiding cache misses is paramount. The reason is that with each step in the cache hierarchy away from the registers and toward the HDD/SSD, the processor waits longer for data and sits idle. Ballpark numbers on the CPU side: registers are 0-1 clock cycles, L1 is around 5 cycles, L2 is around 10 cycles, and RAM is 200+ cycles.

I know the SSD is light years better than an HDD, and looks massively better than any SSD on the market, but they say seek time is "instantaneous"... no. Game devs work in nanoseconds as a unit of measure, and it's not 0 nanoseconds. Unless its access time is very close to RAM's, it won't be usable like RAM. You really have to look at the latency in terms of clock cycles, not throughput in seconds. Maybe they do have access times in line with RAM.
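To put those cycle counts next to an SSD access, here's a minimal C++ sketch. The ~3.5 GHz clock is my Zen 2 ballpark and the ~50 µs NVMe random-read latency is a generic assumption, not a published PS5 figure:

    #include <cstdio>

    // Rough latency comparison. The ~3.5 GHz clock is a Zen 2 ballpark and
    // the ~50 us NVMe random-read latency is a generic assumption, not a
    // published PS5 figure.
    int main() {
        const double clock_ghz = 3.5;                 // assumed CPU clock
        const double ns_per_cycle = 1.0 / clock_ghz;  // ~0.29 ns at 3.5 GHz

        struct Level { const char* name; double cycles; };
        const Level levels[] = {
            {"register",  1.0},
            {"L1 cache",  5.0},
            {"L2 cache", 10.0},
            {"RAM",     200.0},
            {"NVMe SSD (~50 us, assumed)", 50000.0 / ns_per_cycle},
        };

        for (const Level& l : levels)
            std::printf("%-28s %12.0f cycles %12.1f ns\n",
                        l.name, l.cycles, l.cycles * ns_per_cycle);
    }

Even at an optimistic 50 µs, one SSD access costs on the order of 175,000 cycles versus ~200 for RAM. That's the gap the "use it like RAM" claim has to close.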
The throughput looks very good.
So, PS5 SSD absolute best case:
22 GB/s = 22 MB/ms = 352 MB per 16 ms frame
PS5 SSD average case:
9 GB/s = 9 MB/ms = 144 MB per 16 ms frame
PS4 HDD (best case, unrealistic):
100 MB/s = 100 KB/ms = 1.6 MB per 16 ms frame
PS5 RAM:
448 GB/s = 448 MB/ms ≈ 7 GB per 16 ms frame
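Here's the same per-frame arithmetic as a quick C++ sketch, using the figures quoted above and the 16 ms (60 fps) frame as the budget window:

    #include <cstdio>

    // Per-frame transfer budgets at a 16 ms (60 fps) frame, using the
    // throughput numbers quoted above. Pure arithmetic, no hardware access.
    int main() {
        const double frame_ms = 16.0;

        struct Source { const char* name; double gb_per_s; };
        const Source sources[] = {
            {"PS5 SSD, best case", 22.0},
            {"PS5 SSD, average",    9.0},
            {"PS4 HDD, best case",  0.1},
            {"PS5 RAM",           448.0},
        };

        for (const Source& s : sources) {
            const double mb_per_ms = s.gb_per_s;  // 1 GB/s == 1 MB/ms
            std::printf("%-20s %7.1f MB/ms %9.1f MB per frame\n",
                        s.name, mb_per_ms, mb_per_ms * frame_ms);
        }
    }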
It's been a long time since I've read up on virtual texturing, so I don't remember how much data you'd need to read on the fly per frame at a 4K framebuffer to avoid texture pop-in like Rage had.
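As a very rough upper bound, here's a sketch of the visible-texel math. Every number here is my assumption, not anything from an engine: uncompressed RGBA8 texels, 128x128 virtual texture pages, one unique texel per screen pixel, no mips, page borders, or overdraw:

    #include <cstdio>

    // Back-of-envelope virtual texturing bound: texture data visible at 4K
    // if every screen pixel sampled a unique texel. All assumptions are mine:
    // uncompressed RGBA8 texels, 128x128 pages, no mips, borders, or overdraw.
    int main() {
        const double pixels = 3840.0 * 2160.0;     // ~8.3M pixels at 4K
        const double bytes_per_texel = 4.0;        // RGBA8
        const double page_texels = 128.0 * 128.0;  // assumed page size

        const double mb_visible = pixels * bytes_per_texel / (1024.0 * 1024.0);
        const double pages = pixels / page_texels;

        std::printf("~%.1f MB of unique texels, ~%.0f pages of 128x128\n",
                    mb_visible, pages);
    }

Under those assumptions, even a worst-case full-screen refresh of unique texels (~32 MB uncompressed) fits comfortably inside the 144 MB average-case per-frame budget above, which suggests the hard part isn't throughput but getting the right pages resident in time.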