Well, if megatexturing wasn't possible due to storage requirements, that'll be just as much the case with super-fast SSDs, surely? As mentioned earlier in this conversation, just how much data can a game realistically use given production limitations? I guess you could have a smaller download bake textures via procedural generation. Not necessarily JIT, but in a background thread writing files to storage ahead of them being accessed. You could have 10 GB of current textures, and a buffer of 10 GB of next-area textures generated over 15 minutes of play in the current area. A complicated solution, though! The other obvious solution would be streaming over the internet, but that's a bit overkill and inefficient.
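The background-baking idea above can be sketched very simply: a worker thread pulls texture names for the next area off a queue, "bakes" them, and writes the result to storage before the player ever gets there. This is a minimal illustration with made-up names; `bake_texture` stands in for real procedural generation.

```python
import os
import queue
import tempfile
import threading

def bake_texture(name: str) -> bytes:
    # Placeholder for real procedural generation (noise, compositing, etc.)
    return f"texels for {name}".encode()

def baker_worker(jobs: "queue.Queue", out_dir: str) -> None:
    while True:
        name = jobs.get()
        if name is None:  # sentinel: no more textures to pre-bake
            break
        with open(os.path.join(out_dir, name + ".tex"), "wb") as f:
            f.write(bake_texture(name))  # on disk before the area is entered

out_dir = tempfile.mkdtemp()
jobs = queue.Queue()
worker = threading.Thread(target=baker_worker, args=(jobs, out_dir), daemon=True)
worker.start()

# Queue up textures for the *next* area while the current one is being played.
for tex in ["cliff_wall", "canyon_floor"]:
    jobs.put(tex)
jobs.put(None)  # tell the worker we're done
worker.join()
```

In a real engine the queue would be fed by level-progression logic and the baking throttled so it doesn't starve the render thread, but the shape of the solution is the same.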
Storage requirements don't make it impossible; they just limit how unique and detailed the textures can be. As id Software did with Rage, you can have higher-quality textures in some areas at the expense of lower (in the case of Rage, far lower) quality textures in other areas. Keep in mind Rage also had to ship on DVDs.
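The trade-off is easier to see with rough numbers: a megatexture's total texel count is fixed, so extra detail in one spot comes out of the budget everywhere else. A back-of-envelope sketch, taking the 128k x 128k size often cited for Rage's outdoor megatextures and assuming roughly DXT1-class compression at 0.5 bytes/texel (an assumption, not id's actual pipeline):

```python
# One 128k x 128k megatexture, with an assumed 0.5 bytes/texel after
# compression (roughly DXT1-class). The point: the texel budget is fixed,
# so detail is redistributed, not added, within a single megatexture.
texels = 128 * 1024 * 128 * 1024   # ~17.2 billion texels
bytes_on_disk = texels * 0.5       # assumed compressed size
gib = bytes_on_disk / 2**30
print(gib)                         # 8.0 -> ~8 GiB per megatexture
```

At several such megatextures per game, it's easy to see why a DVD-era release had to spend that budget very unevenly.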
You'd have the same potential issue even with the traditional method of using many small textures if you wanted to attempt to have unique textures everywhere.
Megatextures do solve some problems that arise with traditional texturing in games; however, they don't come without their own set of issues that the developer has to deal with.
Larger amounts of memory likely helped make megatextures less desirable in the past console generation, as you could simply hold more textures in memory. With significant increases in console video memory likely a thing of the past, we may see renewed interest in megatextures.
Being able to access and use just a fraction of any given texture will potentially be a huge benefit in games going forward. And if you're going to have to do that anyway, then megatextures might be seen as preferable to many smaller textures. Technology like SFS (Sampler Feedback Streaming) could eventually make megatextures much more in demand than they currently are.
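The "use just a fraction of a texture" idea boils down to tiling a huge virtual texture and paging in only the tiles that get sampled. The sketch below simulates that in plain Python (the feedback that SFS provides in hardware is approximated by loading on first sample); the class and method names are illustrative, not a real engine API.

```python
# Page in only the touched tiles of a large virtual texture.
TILE = 128  # texels per tile edge

class VirtualTexture:
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.resident = {}  # (tx, ty) -> tile data; only touched tiles live here

    def _load_tile(self, tx: int, ty: int) -> str:
        return f"tile({tx},{ty})"  # stand-in for reading one tile from storage

    def sample(self, u: float, v: float) -> str:
        tx = int(u * self.width) // TILE
        ty = int(v * self.height) // TILE
        if (tx, ty) not in self.resident:  # page in only what's actually used
            self.resident[(tx, ty)] = self._load_tile(tx, ty)
        return self.resident[(tx, ty)]

vt = VirtualTexture(8192, 8192)  # 64 x 64 = 4096 tiles total
vt.sample(0.1, 0.1)
vt.sample(0.9, 0.9)
print(len(vt.resident))          # 2 tiles resident out of 4096
```

Two samples leave only 2 of 4096 tiles in memory, which is exactly why fast storage matters: the cost of a miss is a small, latency-sensitive read rather than loading the whole asset.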
As it stands, we're sort of in a transition period WRT console games. Industry inertia leans towards using existing techniques and changing only when a developer feels they absolutely have to. Hence we see relatively limited attempts to leverage the fast storage available, and, similarly, most developers still texturing their games in a traditional manner. Going forward, developers will have to leverage both fast storage and more efficient texture-streaming techniques if they wish to advance the state and quality of textures in games.
However, developers won't overhaul their engines until they feel forced to; otherwise they'll (in general) continue doing things as they've done them in the past. And they won't necessarily feel that pressure until another developer greatly advances the state of what's possible, such that they feel they need to change in order to "keep up." While Insomniac did some nice things with fast storage and streaming, it wasn't so far beyond what developers are already doing that anyone has felt real pressure to significantly overhaul their approach.
Compounding this is that all the media buzz right now is RT, RT, RT. So developers are mostly focused on that at the expense of leveraging faster storage and more efficient texture streaming. Two things that, IMO, could bring similar or greater graphical improvements to games than the relatively limited RT that hardware (especially consoles) can currently leverage.
It's sort of like audio. It could make an incredibly huge difference in how games are experienced if used well; however, for most developers it gets relegated to "we'll do it if we have the time" status (and then they never have the time), due to a lack of media attention, which leads to a lack of consumer demand.
Regards,
SB