The volume that the camera position can occupy is a mere sliver of a level's total volume, and even within it the camera can only get close to, or zoom in on, a very few surfaces of the programmer's choosing. Pre-determining the MIPMAP and LOD ranges will therefore still give you a huge reduction in your data set. By huge I mean a factor of 8, 16, or more, depending on your maximum-resolution MIPMAP and LOD.
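To show where factors like 8x or 16x come from, here is a minimal sketch (mine, not part of the original exchange): for a 2D texture, every MIP level the camera provably never needs cuts the chain's texel count by roughly 4x, so clamping the finest resident level at 1 or 2 gives roughly a 4x or 16x reduction for that texture.

```cpp
#include <cstdio>
#include <cstdint>

// Texel count of a 2D mip chain for a width x height texture, keeping only
// levels >= minLevel (level 0 is the full-resolution mip).
static uint64_t MipChainTexels(uint32_t width, uint32_t height, uint32_t minLevel)
{
    uint64_t total = 0;
    for (uint32_t level = 0; ; ++level)
    {
        if (level >= minLevel)
            total += (uint64_t)width * height;
        if (width == 1 && height == 1)
            break;
        width  = width  > 1 ? width  / 2 : 1;
        height = height > 1 ? height / 2 : 1;
    }
    return total;
}

int main()
{
    const uint64_t full = MipChainTexels(1024, 1024, 0);
    for (uint32_t minLevel = 0; minLevel <= 2; ++minLevel)
    {
        const uint64_t kept = MipChainTexels(1024, 1024, minLevel);
        printf("finest level %u: %llu texels, %.1fx smaller than the full chain\n",
               minLevel, (unsigned long long)kept, (double)full / (double)kept);
    }
    return 0;
}
```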
I'll repeat one more time: the camera can pan, tilt, and zoom depending on what the player does. For example, if the player performs an instant kill, the combat designers often have the camera pull in really close for the kill effect. This happens near-instantaneously, and certainly too fast to stream in any textures. Since you can lure an enemy to just about any location and kill it there, the camera can also zoom in on just about any texture. The surfaces you cannot get close to are not easy (read: not practical) to deduce. In other words, everything you just said does not hold true for our game!
You are still making assumptions without really paying attention to what goes on in a God of War game with respect to the camera. The camera -- and the game logic driving it -- does much more than you think it does. My point still stands: anyone who thinks the camera is "fixed", or that you can easily "optimize" things because of how our camera has been set up, is naive about the system.
How do you scan your camera through its path volume?
I mentioned before that we (Phil) gave a thorough description of the camera system at GDC in 2008. You should do the research and read up on what the camera system does instead of making incorrect guesses. As a bonus, you'd know the answer to this question!
You can manually hint your level sweep tool, based on the camera dolly rigging. You can exhaustively sweep through the control parameters of the dolly. You can randomly sample those parameters. Or you can gather camera position data as a player does an exploratory sweep of a level.
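To make the "randomly sample those parameters" option concrete, here is a minimal sketch under assumptions of mine: the dolly is reduced to a toy rig (a straight rail plus a pull-in amount toward a target), EvaluateDolly is a hypothetical stand-in for whatever the real rig evaluation would be, and the output is just the reachable camera bounding volume plus the closest approach to a single surface sample point, which is what would bound that surface's finest required MIP level.

```cpp
#include <cstdio>
#include <cmath>
#include <random>
#include <algorithm>

struct Vec3 { float x, y, z; };

// Hypothetical dolly rig: the camera rides a straight rail between two points
// and can pull in toward a look-at target by a zoom amount in [0, 1]. A real
// rig (splined track, constraints, scripted pull-ins) would replace this.
static Vec3 EvaluateDolly(float railT, float zoom)
{
    const Vec3 railStart = {  0.0f, 3.0f, -10.0f };
    const Vec3 railEnd   = { 40.0f, 3.0f, -10.0f };
    const Vec3 target    = { 20.0f, 1.0f,   0.0f };

    const Vec3 onRail = {
        railStart.x + (railEnd.x - railStart.x) * railT,
        railStart.y + (railEnd.y - railStart.y) * railT,
        railStart.z + (railEnd.z - railStart.z) * railT
    };
    // Pulling in moves the camera from the rail toward the target.
    return {
        onRail.x + (target.x - onRail.x) * zoom,
        onRail.y + (target.y - onRail.y) * zoom,
        onRail.z + (target.z - onRail.z) * zoom
    };
}

int main()
{
    std::mt19937 rng(12345);
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);

    // One surface sample point; a real tool would track one per
    // texel-density bin on every surface in the level.
    const Vec3 surfacePoint = { 20.0f, 0.0f, 0.0f };

    float closest = INFINITY;
    Vec3 lo = {  INFINITY,  INFINITY,  INFINITY };
    Vec3 hi = { -INFINITY, -INFINITY, -INFINITY };

    for (int i = 0; i < 100000; ++i)
    {
        // Random sample of the dolly's control parameters.
        const Vec3 cam = EvaluateDolly(uniform(rng), uniform(rng));

        // Track the reachable camera volume...
        lo.x = std::min(lo.x, cam.x); hi.x = std::max(hi.x, cam.x);
        lo.y = std::min(lo.y, cam.y); hi.y = std::max(hi.y, cam.y);
        lo.z = std::min(lo.z, cam.z); hi.z = std::max(hi.z, cam.z);

        // ...and the closest approach to the surface, which bounds the
        // finest MIP level that surface could ever need.
        const float dx = cam.x - surfacePoint.x;
        const float dy = cam.y - surfacePoint.y;
        const float dz = cam.z - surfacePoint.z;
        closest = std::min(closest, std::sqrt(dx * dx + dy * dy + dz * dz));
    }

    printf("camera volume: (%.1f, %.1f, %.1f) to (%.1f, %.1f, %.1f)\n",
           lo.x, lo.y, lo.z, hi.x, hi.y, hi.z);
    printf("closest approach to surface point: %.2f units\n", closest);
    return 0;
}
```

The exhaustive-sweep and play-capture options only change how the (railT, zoom) samples are generated; the bookkeeping over camera positions and surface distances stays the same.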
I'm afraid this is crazy talk. We "could" also, for example, prerender every possible screenshot of the game based on every possible input, and just pick the appropriate image to display based on the input history. The game would have to ship on several million Blu-rays, but it "can" be done.
There are a lot of things that fall under what you "can" do but just aren't worth it in a real-world game development environment. Trying to deduce what can or cannot be rendered based on the setup of our camera system falls into that category.