Yes, because you need some general pass that answers the question "what LODs do I want resident for my scene right now?" The Intel paper has the same thing, with very similar heuristics. I'll quote section 4.1:
This also sounds extremely similar, and sensible as a base implementation, right? I.e. for onscreen geometry you project it and compute rough triangle sizes in camera space, while for offscreen geometry you use some sort of distance-scaled representation. As for occlusion culling, it's entirely up to the implementation whether that feeds into the heuristic or not. For practical reasons I would imagine you do want to scale down the quality of occluded geometry, just like offscreen geometry, but by how much is entirely up to the heuristic. Obviously if you want to just stream all your geometry at high LODs based purely on solid angles or something similar, you are welcome to, but you will almost certainly run into VRAM problems with that approach.
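To make that base heuristic concrete, here is a minimal sketch of what such a selection pass could look like. Everything in it is a hypothetical tuning choice of mine (the `Instance` struct, `SelectLod`, the `fullDetailPx` constant, the offscreen/occluded scale factors); nothing here is prescribed by the paper:

```cpp
// Minimal LOD-residency heuristic sketch. All names and constants are
// hypothetical tuning choices, not anything from the Intel paper.
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Instance {
    float boundingRadius;   // world-space bounding-sphere radius
    float distanceToCamera; // world-space distance from the camera
    bool  inFrustum;        // passed the frustum test this frame
    bool  occluded;         // failed a conservative occlusion test
};

// Pick an LOD index (0 = finest) from a projected-size estimate.
uint32_t SelectLod(const Instance& inst, float tanHalfFovY,
                   float screenHeightPx, uint32_t lodCount)
{
    // Rough screen-space size: pixels covered by the bounding-sphere
    // diameter at this distance (small-angle approximation).
    float projectedPx =
        (inst.boundingRadius / (inst.distanceToCamera * tanHalfFovY)) * screenHeightPx;

    // Offscreen/occluded geometry still gets *something* resident (it can
    // matter for shadows, reflections, GI), just at scaled-down quality.
    if (!inst.inFrustum)    projectedPx *= 0.25f; // tuning knob, not a given
    else if (inst.occluded) projectedPx *= 0.5f;  // likewise

    // Assume each coarser LOD roughly halves detail, so the LOD index is a
    // log2 of the ratio between "full detail" size and actual coverage.
    const float fullDetailPx = 1024.0f; // size at which LOD 0 is warranted
    float lod = std::log2(std::max(fullDetailPx / std::max(projectedPx, 1.0f), 1.0f));
    return std::min(static_cast<uint32_t>(lod), lodCount - 1);
}
```

The same structure works if you project rough triangle sizes in camera space instead of a bounding sphere; only the projected quantity changes, not the shape of the pass.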
You could of course imagine driving or augmenting some of this selection in the future with feedback from the secondary rays/differentials themselves; that would be the only real way to capture things like refraction in sufficient detail, but it also brings its own can of worms and probably doesn't make sense for the near term.
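Purely as a sketch of what that feedback could look like, assuming a hypothetical `LodFeedback` buffer of my own invention: hits record the finest LOD their ray-differential footprint would have wanted via an atomic min, and the streaming system folds that into the next frame's residency decisions. On a GPU this would realistically be a buffer with atomic min writes; the `std::atomic` version below is just for illustration:

```cpp
// Hypothetical feedback path, not from the paper: secondary rays record the
// LOD their ray-differential footprint would have wanted at the hit point.
#include <atomic>
#include <cstdint>
#include <vector>

struct LodFeedback {
    // One slot per mesh; rays atomically record the finest LOD requested.
    std::vector<std::atomic<uint32_t>> finestRequested;

    explicit LodFeedback(size_t meshCount)
        : finestRequested(meshCount)
    {
        for (auto& slot : finestRequested)
            slot.store(~0u, std::memory_order_relaxed); // "nothing requested"
    }

    // Called from hit shading: a smaller ray-differential footprint maps to
    // a smaller (finer) wantedLod. Atomic-min so concurrent rays are safe.
    void Record(size_t meshIndex, uint32_t wantedLod)
    {
        uint32_t prev = finestRequested[meshIndex].load(std::memory_order_relaxed);
        while (wantedLod < prev &&
               !finestRequested[meshIndex].compare_exchange_weak(
                   prev, wantedLod, std::memory_order_relaxed)) {}
    }
};
```

The streaming side would then read these slots back each frame, merge them with the camera-driven heuristic above, and reset them; that merge (and how much to trust one frame of sparse ray feedback) is exactly where the can of worms lives.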