I don't think there are any 'major' leaps possible with current SSDs that would be game-changing relative to what exists now. We see constant linear improvements because the technology itself doesn't present opportunities for anything else. Though something like ReRAM could be really interesting.
Yeah, that's my point. I think when it comes to storage, there won't be major leaps beyond what the natural evolution of the tech affords in the future. But it would be clever to aim for chips that can hit higher burst speeds than they can sustain consistently, so there's the option to use those short bursts for sporadic app switching and full level loads.
As for completely changing the geometry hardware because of virtualized geometry (Nanite): it's an interesting thought, but Nanite itself is already relatively cheap, and the geometry engines in current GPUs don't take up a ton of die space, so I don't think there needs to be any huge overhaul. The paradigm change on the software side alone should do most of the work. Plus, you don't want to badly hurt performance in older games.
I think the geometry hardware has already been largely generalized and opened up on AMD's side through "next-gen geometry", now adopted by Nvidia and the APIs through mesh shaders. When I said rasterization has to be re-thought, I am strictly thinking of how triangles are drawn after the geometry is already processed: how many fragments and pixels are generated, how they are grouped, in what order and format they are sent to shader units, what fill conventions are followed, how MSAA is handled, subpixel placement, etc. Epic has shown empirically, with their SW micro-polygon rasterizer, that there is room to rethink the performance trade-offs in that area.
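To make the rasterization point concrete, here's a minimal sketch of the core trick behind a Nanite-style SW micro-polygon rasterizer (written in CUDA for readability; all names, the flat per-triangle depth, and the hardcoded resolution are my own simplifications, not Epic's actual code). Each thread takes one tiny triangle, loops over the handful of pixels in its bounding box, and fuses the depth test and the visibility write into a single 64-bit atomic. Notice how many fixed-function conventions (fill rules, quad grouping, MSAA plumbing) simply stop existing:

```cuda
#include <cstdint>

constexpr int W = 1920, H = 1080;

struct Tri { float2 v0, v1, v2; float depth; uint32_t id; };

// Visibility buffer: depth in the high 32 bits, triangle ID in the low 32,
// so a single atomicMax performs "keep the nearest triangle" in one step.
__device__ unsigned long long g_visBuffer[W * H];

__device__ float edgeFn(float2 a, float2 b, float2 p)
{
    return (p.x - a.x) * (b.y - a.y) - (p.y - a.y) * (b.x - a.x);
}

__global__ void microRaster(const Tri* tris, int numTris)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= numTris) return;
    Tri tri = tris[t];

    // Micro-polys cover only a few pixels, so a scalar loop over a tiny
    // screen-space bounding box beats the HW raster path at this density.
    int x0 = max(0, (int)floorf(fminf(fminf(tri.v0.x, tri.v1.x), tri.v2.x)));
    int y0 = max(0, (int)floorf(fminf(fminf(tri.v0.y, tri.v1.y), tri.v2.y)));
    int x1 = min(W - 1, (int)ceilf(fmaxf(fmaxf(tri.v0.x, tri.v1.x), tri.v2.x)));
    int y1 = min(H - 1, (int)ceilf(fmaxf(fmaxf(tri.v0.y, tri.v1.y), tri.v2.y)));

    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
        {
            float2 p = make_float2(x + 0.5f, y + 0.5f);
            // Inside test via edge functions (CCW winding assumed; the
            // fill convention is now ours to define, not the HW's).
            if (edgeFn(tri.v0, tri.v1, p) < 0.f ||
                edgeFn(tri.v1, tri.v2, p) < 0.f ||
                edgeFn(tri.v2, tri.v0, p) < 0.f) continue;

            // Flat per-triangle depth keeps the sketch short. Reversed-Z
            // float bits compare correctly as unsigned ints, so atomicMax
            // keeps the nearest triangle's ID without a separate Z test.
            unsigned long long packed =
                ((unsigned long long)__float_as_uint(tri.depth) << 32) | tri.id;
            atomicMax(&g_visBuffer[y * W + x], packed);
        }
}
```

That is roughly why Epic's SW path wins on pixel-sized triangles: the fixed-function pipeline's per-triangle setup is pure overhead at that density, while this loop does almost nothing per triangle.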
And again, rasterization is only one field I mentioned. I suspect there are many other conventional parts of the render pipe, handled in fixed-function form, that might benefit from more generalization and programmability. Even if traditional algos end up a little slower, the whole point is that most devs are NOT using traditional algos anyway.
Texture units were built to sample texels, filter them, and feed the result to a fragment shader that would do some blending with the triangle's Gouraud-shaded colours and output that to a frame buffer. Modern devs are researching ways of SW-rasterizing an ID buffer that is later re-evaluated in a compute shader, which samples the texture values directly, filters them in SW, and writes a new G-buffer to be shaded in a later pass by another fully-SW compute shader. High-end devs and engines do their own tiling of the screen on this pass and pull off all sorts of SW optimizations no GPU architect ever thought of considering. Mipmapping has to be biased to account for TAA, invalidating old-school assumptions about how that worked as well. SSAA is not used; instead, hacks like checkerboard rendering achieve similar goals in slightly different ways. Variable-rate shading was created to try and catch up with those trends, and yet devs such as Infinity Ward have already developed pure compute-based software solutions that are faster and achieve better results for their purposes.
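To illustrate that direction (my own toy sketch, not code from any shipping engine): once a compute pass owns the G-buffer write, the texel fetch and filter can live in software too, and the TAA mip bias gets folded in wherever the developer wants it, rather than where the TMU's conventions dictate.

```cuda
#include <cstdint>

// Per-component lerp; CUDA's float4 has no built-in arithmetic operators.
__device__ float4 lerp4(float4 a, float4 b, float t)
{
    return make_float4(a.x + t * (b.x - a.x), a.y + t * (b.y - a.y),
                       a.z + t * (b.z - a.z), a.w + t * (b.w - a.w));
}

// Software bilinear filter over a plain texel array: the work a texture
// unit does in fixed function, redone so the engine controls every detail
// (here with simple clamp addressing and texel centers at integer + 0.5).
__device__ float4 bilinearSample(const float4* texels, int tw, int th,
                                 float u, float v)
{
    float x = fmaxf(0.f, u * tw - 0.5f);
    float y = fmaxf(0.f, v * th - 0.5f);
    int x0 = min(tw - 1, (int)x), y0 = min(th - 1, (int)y);
    int x1 = min(tw - 1, x0 + 1), y1 = min(th - 1, y0 + 1);
    float fx = x - (float)x0, fy = y - (float)y0;
    float4 top = lerp4(texels[y0 * tw + x0], texels[y0 * tw + x1], fx);
    float4 bot = lerp4(texels[y1 * tw + x0], texels[y1 * tw + x1], fx);
    return lerp4(top, bot, fy);
}

// The TAA mip bias mentioned above: sampling sharpness should match the
// *output* resolution, not the internal one. Rendering 1080p internally
// and reconstructing 4K gives log2(1920/3840) = -1, i.e. one mip sharper
// than the old-school derivative-based convention would pick.
__device__ float taaMipBias(float renderWidth, float outputWidth)
{
    return log2f(renderWidth / outputWidth);
}
```

The point isn't that this beats a TMU at plain bilinear (it won't); it's that once filtering lives in a compute shader, the filter kernel, the mip logic, and the screen tiling are all the developer's to choose.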
Eventually, GPUs will indeed just be parallel compute chips. I don't think performance will be sufficient for that by PS6/XBStupidName times, but all the fixed-function stuff could benefit from at least becoming as generalized, reprogrammable, and circumventable as possible. A lot of modern techniques spend a lot of effort trying to walk around fixed-function aspects; the things that were meant to make devs' lives easier are often making them harder. Just open that shit up.