These days, should there only be one texture setting, 'ultra awesome'?
No. We need those settings so the game can scale down to hardware with less memory. There is no easier way to save memory, and reducing texture detail has relatively little visual impact, so this option won't go away. The same applies to geometric detail, and ideally we want to couple both settings: high-res textures on low-poly models look shitty, and in that case I even prefer low-res textures too.
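To put numbers on the memory argument, here is a back-of-envelope sketch, assuming a BC7-class format at roughly 1 byte per texel (the function and figures are mine, not from any engine):

```cpp
#include <cstddef>
#include <cstdio>

// Bytes for a square texture's full mip chain after dropping the top
// 'droppedMips' levels, assuming ~1 byte per texel (BC7-class compression).
size_t MipChainBytes(size_t topDim, int droppedMips)
{
    size_t bytes = 0;
    for (size_t dim = topDim >> droppedMips; dim >= 1; dim >>= 1)
        bytes += dim * dim;
    return bytes;
}

int main()
{
    // Each texture-quality step that drops one mip cuts memory to ~1/4:
    printf("ultra : %zu KB\n", MipChainBytes(4096, 0) / 1024); // ~21845 KB
    printf("high  : %zu KB\n", MipChainBytes(4096, 1) / 1024); // ~5461 KB
    printf("medium: %zu KB\n", MipChainBytes(4096, 2) / 1024); // ~1365 KB
}
```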
The second argument is storage. Personally I associate virtual texturing with unique detail everywhere, like Rage did (but no other game since).
If we want this, we're obviously constrained by storage space. Or, if we consider a streaming platform to avoid the client storage problem, we're still constrained by network bandwidth.
Looking at it this way, it also becomes clear that 'virtual texturing' by itself does not solve any problem. People use the term for any mechanism that loads only the memory pages we currently need. But that's obvious, and it can only be a low-level implementation detail of solving some real problem at a higher level.
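To make that low-level mechanism concrete, here's a minimal sketch with names I made up (not any engine's actual API): the renderer feeds back which pages it touched this frame, and a residency manager loads exactly those and drops the rest:

```cpp
#include <cstdint>
#include <cstdio>
#include <iterator>
#include <set>
#include <tuple>

// One fixed-size tile (e.g. 128x128 texels) of one mip level.
struct PageId
{
    uint32_t mip, x, y;
    bool operator<(const PageId& o) const
    {
        return std::tie(mip, x, y) < std::tie(o.mip, o.x, o.y);
    }
};

// Load pages the frame touched, evict the ones it didn't.
// (A real system would use an LRU budget instead of instant eviction.)
void UpdateResidency(std::set<PageId>& resident, const std::set<PageId>& requested)
{
    for (const PageId& p : requested)
        if (resident.insert(p).second)
            printf("load mip %u page (%u,%u)\n", p.mip, p.x, p.y);
    for (auto it = resident.begin(); it != resident.end();)
        it = requested.count(*it) ? std::next(it) : resident.erase(it);
}

int main()
{
    std::set<PageId> resident;
    UpdateResidency(resident, {{0, 5, 7}, {1, 2, 3}});  // frame 1: camera close
    UpdateResidency(resident, {{1, 2, 3}, {3, 0, 0}});  // frame 2: camera moved away
}
```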
So to discuss this, you first need to make clear what promise you see in virtual texturing, and what problem you hope to solve with it.
There's also the question of tool maturity: UE's virtual texturing is only four years old, for instance.
I don't know much about UE's VT, only what Karis said in his talks about Nanite.
IIRC he made UE's VT system, but initially he had trouble convincing the company of the potential benefits. Before that, they only had streaming at the granularity of whole mip levels, not texture pages, and they were fine with it.
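Rough numbers for why page granularity matters (my own example, not Epic's): say the camera sees only a door-sized patch of a large surface at full resolution.

```cpp
#include <cstddef>
#include <cstdio>

int main()
{
    // 4096x4096 texture, ~1 byte/texel (BC7-class), 128x128 pages.
    const size_t dim = 4096, pageDim = 128;
    const size_t visiblePages = 4;  // small patch actually sampled at mip 0

    // Mip-granular streaming must load the whole top level anyway:
    printf("mip-granular : %zu KB\n", dim * dim / 1024);                        // 16384 KB
    // Page-granular streaming loads only the pages in view:
    printf("page-granular: %zu KB\n", visiblePages * pageDim * pageDim / 1024); // 64 KB
}
```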
The benefit then only showed up in combination with Nanite, which I assume builds on the VT system to store geometry data in the pages as well.
The 'limitations' of UE show when we compare it to Rage: UE does not try to achieve unique detail everywhere. Instead it builds on the idea of instancing.
And that's a good example of why I think it's important to have context.
Think of a Nanite model of a rock, with 100 instances of this same rock scattered across the scene at all distances.
Because Nanite relies on instancing, that's a likely case, and it applies to most other models we use as well: a column, a wall, all kinds of modular building blocks, e.g. each skyscraper window in the Matrix demo being an instance of one such model.
What happens with our VT system in this case? Because we have instances at all distances, we end up loading all of the model's data: every level of detail for both texture and geometry.
There really is nothing wrong or suboptimal here. But we could say that in this case our virtual memory management is not really utilized, and thus not really needed.
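A toy model of that instancing case (my own simplified LOD rule, one mip step per doubling of distance): the union of mips requested by instances from near to far covers the whole chain.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <set>

// Simplified LOD rule: one mip step per doubling of distance.
int MipForDistance(float dist)
{
    return std::max(0, (int)std::floor(std::log2(dist)));
}

int main()
{
    std::set<int> requestedMips;
    for (float dist = 1.0f; dist <= 512.0f; dist *= 2.0f)  // instances near to far
        requestedMips.insert(MipForDistance(dist));

    // Prints 0..9: the full mip chain of the rock ends up resident anyway.
    printf("resident mips for this one model:");
    for (int m : requestedMips)
        printf(" %d", m);
    printf("\n");
}
```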
That doesn't hold in every case, e.g. when complex models are used only once or a few times, which is also why VT is still worth it for them.
But it's enough of an argument to illustrate why I personally associate VT much more with Rage, which does not use instancing but unique detail everywhere.
To me Rage is the perfect example of what virtual texturing enables. But maybe you have other applications in mind.