What's the possibility of a hybrid approach to reflections? When building and terrain geometry is static, is it feasible to apply a high-quality cube map to a window, for example, then overlay ray traced dynamic objects like animated models on top of that? I could imagine it would be hard to get it to look right; you would have to have the ability for rays to ignore certain pixels based on geometry, and I have no idea if the increase in performance would be significant enough to be worth it.
In addition to Voxilla's response: planar reflections require rendering the scene once for each plane normal, so if you have two windows side by side, but both at a slightly different angle, you'd need to render once for each window.
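To make that concrete, here is a minimal sketch of why each plane costs a render: a planar reflection is drawn from a camera mirrored across the window's plane, and two windows with different normals produce two different mirrored cameras. The two-window setup and all names here are illustrative, not from any particular engine.

```python
# Mirror a camera position across a window's plane. Each distinct plane
# (unit normal n, point p on the plane) yields its own mirrored camera,
# hence one extra scene render per unique plane.

def reflect_point(pos, n, p):
    """Reflect a 3D point across the plane with unit normal n through p."""
    d = sum((pos[i] - p[i]) * n[i] for i in range(3))  # signed distance to plane
    return tuple(pos[i] - 2.0 * d * n[i] for i in range(3))

camera = (0.0, 2.0, 5.0)
# Two windows at slightly different angles -> two different mirrored cameras.
window_a = ((0.0, 0.0, 1.0), (0.0, 0.0, 0.0))   # (normal, point on plane)
window_b = ((0.6, 0.0, 0.8), (3.0, 0.0, 0.0))   # tilted window

mirrored = {reflect_point(camera, n, p) for (n, p) in (window_a, window_b)}
print(len(mirrored))  # 2 distinct mirrored viewpoints -> 2 extra renders
```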
See the recent Hitman game; it implements this method for planar reflections (windows that share the same plane). Half-Life 2 used it for water (just one plane), and I guess it has been used even before that.
So, for perfect reflections on a sphere, you would need to rasterize the whole scene for each pixel of the sphere, because the normal differs everywhere.
Of course you can just render a cube map instead, so 'only' 6 renders of the scene, but then, although you can fetch along the correct reflection direction, the fetch origin is not at the surface. That's hard to notice on a convex object like a sphere, but on a torus it would fail pretty badly (in the best case it would just miss the torus's self-reflections).
Actual games 'solve' this with projection tricks, often requiring manual artist work, like placing a box together with the reflection probe; this box is then used to 'fix' the projection a bit.
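A rough sketch of that box trick, commonly called box-projected (parallax-corrected) cube maps: instead of sampling along the raw reflection vector, intersect it with the artist-placed proxy box and re-aim the lookup from the probe's capture position toward the hit point. All names here (box_min, box_max, probe_pos) are illustrative.

```python
# Parallax-correct a cube map lookup direction using an axis-aligned
# proxy box. Assumes the surface point lies inside the box and the
# reflection direction is nonzero.

def box_projected_dir(surface_pos, refl_dir, box_min, box_max, probe_pos):
    # Ray/AABB intersection: nearest exit distance across the three slabs.
    t = float("inf")
    for i in range(3):
        if refl_dir[i] > 0:
            t = min(t, (box_max[i] - surface_pos[i]) / refl_dir[i])
        elif refl_dir[i] < 0:
            t = min(t, (box_min[i] - surface_pos[i]) / refl_dir[i])
    hit = tuple(surface_pos[i] + t * refl_dir[i] for i in range(3))
    # Corrected lookup direction: from the probe's center toward the hit.
    return tuple(hit[i] - probe_pos[i] for i in range(3))

# Surface off-center in a unit box, probe at the box's center:
corrected = box_projected_dir((0.5, 0.0, 0.0), (0.0, 0.0, 1.0),
                              (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0),
                              (0.0, 0.0, 0.0))
print(corrected)  # (0.5, 0.0, 1.0) instead of the uncorrected (0.0, 0.0, 1.0)
```

The difference between the corrected and uncorrected direction is exactly the origin error described above; the box only approximates the real scene, which is why this merely 'fixes' the projection a bit.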
So, cases where planar reflections can work at all are rare, and ray tracing quickly becomes faster as geometry gets more complex.
Your idea with the cube map is not bad, however. It is possible to store low-resolution cube maps (or another data structure covering just the visible hemisphere) at dense locations on the surface. Usually this is precomputed, and the low resolution limits the use case to rough materials. See the PS4 game The Order. (Personally, I work on doing this in real time.)
This approach is good enough for most real-life materials! The main exceptions are water and man-made materials, which show sharp reflections.
For sharp reflections, you would need to calculate one 4K cube map for every pixel on screen, just to fetch a single texel from it.
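A back-of-the-envelope number for why that is hopeless, assuming 4096x4096 cube faces and a 1080p target (both assumed figures):

```python
# Naive cost of one 4K cube map per screen pixel.
texels_per_cubemap = 6 * 4096 * 4096   # ~100 million texels per cube map
screen_pixels = 1920 * 1080            # ~2 million pixels at 1080p
total = texels_per_cubemap * screen_pixels
print(f"{total:.2e} texels rendered per frame")  # ~2e14
```

Around 2 * 10^14 rendered texels per frame, of which all but one per pixel are thrown away.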
(This data would also allow you to calculate full GI, of course. But we do not need such high resolution for that.)
At this point it becomes clear that, whether we like it or not, ray tracing is absolutely necessary if we want progress. I've already criticized the way NV handles the problem with a black-box approach, but that's another story: we need to trace rays because it is the most efficient way to solve those problems correctly.
But RT is good for high-frequency (sharp) reflections and slow for low-frequency diffuse reflections, because the latter require many rays and/or temporal filtering. So I agree about a hybrid approach being the way to go, but it's less about static vs. dynamic geometry and more about sharp vs. diffuse reflections.
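A hybrid along those lines could be as simple as branching per shading point on material roughness: trace a ray only when the reflection is sharp enough to need it, and fall back to prefiltered probe data otherwise. This is a sketch of the idea only; the cutoff value is an assumed tuning constant, not from any shipping engine.

```python
# Sharp-vs-diffuse split for a hybrid reflection pipeline.
ROUGHNESS_CUTOFF = 0.3  # assumed threshold, would be tuned per title

def pick_reflection_path(roughness):
    """Choose a reflection technique from material roughness in [0, 1]."""
    return "trace_ray" if roughness < ROUGHNESS_CUTOFF else "sample_probe"

print(pick_reflection_path(0.05))  # polished window  -> trace_ray
print(pick_reflection_path(0.8))   # rough concrete   -> sample_probe
```

In practice you would blend near the threshold rather than hard-switch, to avoid a visible seam where roughness crosses the cutoff.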
BFV really is about sharp reflections only, so exactly what RTX is good for, and I hoped for better performance here. (It's not that bad either, but... will there be an RTX 2060 at all? Seems to make no sense.)
I'm pretty sure we can gain a lot with texture-space lighting, i.e. reusing results over multiple frames, but it will take years until we see this in game engines.
For now my hope goes towards reducing material complexity or using a lower LOD for ray tracing, but I do not know if that is a bottleneck at all.