RT definitely throws a spanner in the works. Of course, the flip side is that people are coming off the back of conventional memory management. If, prior to RTRT, we had been using SSDs far slower than now with effective virtual assets, the whole thought process towards an RT solution would be different. With things as they are now, the momentum is towards faster storage for all the reasons you state.

I know what virtual texturing and virtual geometry (Nanite) are. One day Unreal Engine will use only Nanite for all geometry, plus virtual texturing for textures. Since my first post I've been saying there is more to load than geometry and textures. First, virtual geometry is cool to use, but if the game uses ray tracing you also need a BVH and some off-screen geometry. The game probably streams the BVH for static geometry. For the moment it uses proxy geometry, not the Nanite triangle data.
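To make the "two representations" point concrete, here is a minimal, hypothetical C++ sketch (invented types, not Unreal source) of a streamed static mesh that carries both virtualized geometry pages for rasterization and a separate proxy/BVH payload for ray tracing:

```cpp
// Hypothetical layout only: a streamed static mesh with two representations of
// the same geometry -- virtualized cluster pages for rasterization, plus a
// separate payload used for the ray-tracing BVH.
#include <cstdint>
#include <vector>

// A page of virtualized geometry (Nanite-style clusters), streamed on demand
// based on what the rasterizer actually needs this frame.
struct GeometryPage {
    std::uint32_t pageId = 0;
    std::vector<std::uint8_t> packedClusters;  // compressed cluster/triangle data
};

// Simplified proxy mesh used for ray tracing instead of the full-detail
// virtualized triangles; a prebuilt BVH blob could be streamed alongside it.
struct RayTracingProxy {
    std::vector<float>         positions;    // xyz triplets
    std::vector<std::uint32_t> indices;      // triangle indices
    std::vector<std::uint8_t>  prebuiltBvh;  // optional: BVH nodes baked offline and streamed
};

struct StreamedStaticMesh {
    std::vector<GeometryPage> residentPages;  // grows/shrinks with view-dependent demand
    RayTracingProxy           rtProxy;        // must cover off-screen geometry too
};
```

The key point is that the RT payload has to cover off-screen geometry as well, because rays can hit what the camera never sees, so it can't simply piggyback on the view-driven page streaming.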
If the game uses this type of rendering, where they mix SDF tracing and triangle-based ray tracing, you can stream two data structures for the static geometry. In the paper the SDF is generated at runtime.
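For concreteness, here is a minimal sketch of the SDF half of such a hybrid (illustrative only, not the paper's implementation; the analytic sphere below stands in for a distance field that would be generated at runtime or streamed). Precision-critical rays would go to the triangle BVH instead:

```cpp
// Illustrative sphere tracing against a signed-distance field: cheap,
// approximate ray queries for effects that can tolerate the SDF's resolution,
// while precise rays use the triangle BVH (BVH path omitted here).
#include <cmath>
#include <cstdio>
#include <optional>

struct Vec3 { float x, y, z; };

// Stand-in for a streamed/runtime-generated distance field: a unit sphere at
// the origin. Returns the signed distance from p to the nearest surface.
float sampleSdf(const Vec3& p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Sphere tracing: step along the ray by the sampled distance until we are
// close to a surface or exceed the budget. Only as accurate as the field.
std::optional<float> traceSdf(Vec3 o, Vec3 d, float tMax) {
    float t = 0.0f;
    for (int i = 0; i < 64 && t < tMax; ++i) {
        Vec3 p{ o.x + d.x * t, o.y + d.y * t, o.z + d.z * t };
        float dist = sampleSdf(p);
        if (dist < 1e-3f) return t;   // close enough: treat as a hit
        t += dist;                    // safe step: nothing is closer than dist
    }
    return std::nullopt;
}

int main() {
    // Ray from z = -5 straight toward the sphere; expect a hit near t = 4.
    if (auto t = traceSdf({0, 0, -5}, {0, 0, 1}, 100.0f))
        std::printf("SDF hit at t = %.3f\n", *t);
}
```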
After that, like I said, animation, sound, Alembic cache animation/destruction, or any other baked data can be streamed too. And if I remember the 2019 GDC Spider-Man postmortem correctly, animation takes up a ton of space on disc.
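A rough, purely illustrative calculation (assumed skeleton size, sampling rate, and clip counts, not figures from the talk) shows why sampled animation adds up:

```cpp
// Back-of-envelope only: uncompressed sampled clips, one rotation + translation
// per bone per frame, with every number below an assumption for illustration.
#include <cstdio>

int main() {
    const double bones        = 100;      // assumed skeleton size
    const double framesPerSec = 30;       // assumed sampling rate
    const double bytesPerBone = 8 + 12;   // quantized quaternion + float3 translation (assumed)
    const double clipSeconds  = 4;        // assumed average clip length
    const double clipCount    = 10000;    // assumed clip count for a big open-world game

    const double bytesPerClip = bones * framesPerSec * bytesPerBone * clipSeconds;
    const double totalGiB     = bytesPerClip * clipCount / (1024.0 * 1024.0 * 1024.0);

    std::printf("~%.0f KiB per clip, ~%.1f GiB total before compression\n",
                bytesPerClip / 1024.0, totalGiB);
}
```

And that is before cinematics, facial rigs, or destruction caches, which only push the budget further.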
Another key point is the end target. Potentially, for the Dr. Strange example you give, we need as many GB/s as possible. But realistically, how much do real games need? Is it worth designing hardware around a tiny percentile versus the 95th percentile, or even the 99th? On the one hand it's always nice to give devs as much freedom as possible. On the other, limits have to be drawn, and I posit the alternative solution would be better overall, resulting in more efficient hardware that's either cheaper with lower power draw or has a better processing-to-data ratio. But then maybe that'd be constraining and the future is inevitably Big Data? And then how do you get past the very real limits imposed by mammoth datasets, storage size, and the cost to produce them?
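As a purely illustrative back-of-envelope (every number below is an assumption, not a measurement), the visibility-driven worst case for texture streaming alone looks surprisingly modest:

```cpp
// Toy estimate: if virtual texturing streams only what is visible, the worst
// case is every pixel needing fresh texel data at once (e.g. a hard camera cut).
#include <cstdio>

int main() {
    const double pixels        = 3840.0 * 2160.0;  // 4K frame
    const double bytesPerTexel = 6.0;              // a few block-compressed material layers (assumed)
    const double cutsPerSecond = 2.0;              // fully new view twice a second (pessimistic assumption)

    const double worstCaseBytesPerSec = pixels * bytesPerTexel * cutsPerSecond;
    std::printf("~%.2f GB/s worst case under these assumptions\n",
                worstCaseBytesPerSec / 1e9);
}
```

Under those toy assumptions, textures alone want well under a GB/s; the multi-GB/s question comes from stacking geometry, BVHs, animation and audio on top, and from how often the worst-case spikes actually occur, which is exactly the percentile question above.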