@Pjotr actually covered a lot of good info and background so I'll maybe just add my 2c in a few places.
Worth remembering that this is not a black-and-white issue. As Pjotr mentioned, you can indeed do a lot of things up front, and games do tend to do this. In fact you can think of HLOD and similar systems as precisely this - baking down a pile of complicated editor things into a simpler representation that can be resident most of the time without redoing all of that work. But there are a number of considerations worth mentioning:
1) A lot of these streaming systems are not *just* about saving memory footprint, but rather about various knock-on effects of having "more things". Ex. Nanite and virtual textures can generally handle streaming the rendering of most objects independently at a fine grain, so why do we even need HLOD anymore? A big one is just cutting down on total object/instance counts, which can affect Nanite but can have an even greater effect on lots of other parts of the engine. Ex. having a whole pile of physics objects loaded is not feasible, as even with acceleration structures there are practical limits on how many active simulated objects there can be at a time. There are many more examples of places where systems that are optimized for thousands of objects just fall over entirely when presented with millions. Some of this can be, and is being, improved on the tech side of course, but there will always be a need for streaming and LOD for open world games because...
2) The asymptotic scaling of this stuff in a 3D world is awful. Even in a relatively 2D/flat-plane kind of world, doubling a streaming radius is roughly 4x the cost (footprint, instances, everything), since area scales with the square of the radius. If there's significantly more "3D" content/verticality, it's even worse (up to 8x, since volume scales with the cube). Even with hierarchical data structures and cleverness you will always need LOD, both in graphics and in "gameplay".
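To make that scaling concrete, here's a back-of-envelope sketch with toy numbers of my own (not from any actual engine): the streamed cost grows with the area (flat-ish world) or volume (heavy verticality) covered by the streaming radius.

```python
def streamed_cost(radius, density, dims=2):
    """Approximate instance count inside a streaming radius.

    radius:  streaming radius in world units
    density: assets per unit area (dims=2) or unit volume (dims=3)
    dims:    2 for a mostly flat world, 3 for heavy verticality
    """
    return density * radius ** dims

# Doubling the radius in a flat world quadruples the cost...
flat_ratio = streamed_cost(200, 1.0, dims=2) / streamed_cost(100, 1.0, dims=2)
print(flat_ratio)  # 4.0

# ...and with full 3D verticality it's 8x.
vertical_ratio = streamed_cost(200, 1.0, dims=3) / streamed_cost(100, 1.0, dims=3)
print(vertical_ratio)  # 8.0
```

The exponent is the whole story here: any linear gain in hardware buys you only a small bump in radius.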
3) Given the above, having more RAM is not necessarily even a huge advantage. Sure, you can potentially push the radius of "high fidelity loaded stuff" out a bit, but the scaling is poor and, more importantly, it doesn't actually change how many things you need to swap in/out as you move through the world, which is really just a function of movement speed and asset density. i.e. increasing the streaming radius doesn't make the streaming problem any easier unless you can load the entire world outright, which is basically impossible for a game of any reasonable size due to the scaling function.
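A toy churn model (again, my own simplification, not engine code) shows why the swap rate is driven by speed and density rather than helped by a bigger radius: as the player moves, the streaming "bubble" sweeps a strip of new world that must be loaded.

```python
def assets_loaded_per_second(radius, speed, density):
    """Assets newly entering a circular streaming bubble per second.

    The leading edge of the bubble sweeps a strip roughly 2*radius
    wide at `speed`, over terrain with `density` assets per unit area.
    """
    return 2 * radius * speed * density

print(assets_loaded_per_second(100, 10, 0.5))  # 1000.0

# Note the churn never *decreases* as the radius grows - a bigger
# bubble sweeps at least as much new territory per second, so more
# RAM doesn't buy you out of the streaming work itself.
print(assets_loaded_per_second(200, 10, 0.5))  # 2000.0
```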
First, it's not an excuse, it's an explanation. But more importantly, there *has* been major progress on this front over the years; modern engines and consoles are significantly more efficient at streaming and handling higher-complexity scenes than they used to be. But along with the improvement comes the insatiable desire to push the content even further, which unfortunately has often entirely eclipsed the improvements. It's the same issue with much of computing... as the capabilities expand, so do the desires and expectations.
I will re-echo @Pjotr's point that part of the problem is that a lot of this code is on the game side, not the engine side, and it is often written by folks who are not primarily performance/optimization experts. Sometimes it'll be blueprint, but even if it's in C++ (in Unreal) it is often written in a very pragmatic way. The systems to make this stuff async and parallel have existed for a long while, but they make the code more complex and harder to maintain, particularly for non-experts. Stuff like MASS and PCG is trying to help address some of these cases in the short term in Unreal, but requires rethinking how some of these systems are architected on the game side.
Ultimately, though, this is one of the main goals of Verse - to provide a programming language that is better designed for modern, parallel/async-by-default needs while still letting people express the logic in a way that is procedural and intuitive. I will not claim this is an obvious or easy thing to do... there are entire graveyards of languages that have attempted similar things and fallen short. Thus I think a "believe it when I see it" attitude is very warranted here, but conversely it's unreasonable to claim that this problem is being ignored; indeed, they are trying to invent a new programming language with ambitious design goals pretty much precisely to address the issue of non-experts writing inefficient serial code that is impossible to optimize at an engine level.
This is all a good discussion, just don't expect a silver bullet here. This stuff is very adjacent to the "how do we parallelize general purpose code" questions that have been at the core of computer science research for decades.