Multiple bounces have nothing to do with sharp reflections. Multiple bounces are about reflections within reflections.
We are talking about arbitrary reflection angles and positions, and the whole point of RT is that it's infinitely more efficient than what you are suggesting, and games like Watch Dogs and Spider-Man on new-gen consoles already show it is possible.
You have severely misinterpreted how rendering tech works just about every time you have tried to say something about it. I would recommend that you research more before asserting things that are above your head.
With all due respect, for sharp reflections you need enough samples and a fairly large trace depth. Kindly point me to those reflections, especially in console games, that have a decent sample count to produce sharp reflections and that either mirror more than the few nearby objects you get in Watch Dogs, or do not severely reduce detail on objects further away the way Spider-Man does. From there, multiple bounces are even more of a stretch if a typical game can't even get a single bounced reflection with enough depth and resolution. I am basing this, among other things, on all the DF videos on the subject, which I have seen, including their explanation of raytracing. And of course I know how raytracing works, simply from occasionally playing around with raytracing software since the first programs were released on PC, when it took ages just to render one dumb picture with a few balls.
It is completely obvious to me that raytracing is the superior solution, and that there are situations where only raytracing is realistically feasible. But it is also almost painfully clear that the rendering power currently available is limited to the point where many people turn off raytraced reflections because they cost far more in framerate than they deliver, so I wonder what kind of optimizations and alternatives exist that can help out, and when.
So when I look at Spider-Man, for instance, and see the sorry excuse for a tree in Central Park that remains in the reflection of a building, I am just wondering whether the result wouldn't be seriously improved if you used a few rays' worth of data only to determine where the tree should appear in the reflective surface, and then drew the actual tree geometry there. I am perfectly happy to hear why that is not feasible, but I suspect it is mostly a problem that grows quickly in complexity, also in terms of graphics engine design. But I bet that for most actual flat-mirror-on-the-wall situations on a console, just drawing the real geometry with an extra camera transform could be cheaper for now and still look convincing. More diffuse surfaces need much less precision, and for those I imagine raytracing becomes the cheaper option quickly.
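To make concrete what I mean by "an extra camera transform", here is a rough sketch of the classic planar-mirror trick, assuming a GLM-style math library. The names mainCamera, mirrorPoint, mirrorNormal and renderSceneToTexture are just illustrative stand-ins for whatever the engine provides, not anything real; the idea is simply to reflect the camera across the mirror plane and draw the scene a second time into the texture the mirror samples.

    #include <glm/glm.hpp>

    // Build a matrix that reflects points across the plane through point 'p'
    // with unit normal 'n' (plane equation: dot(n, x) + d = 0, d = -dot(n, p)).
    glm::mat4 reflectionMatrix(const glm::vec3& p, const glm::vec3& n)
    {
        float d = -glm::dot(n, p);
        glm::mat4 R(1.0f);  // GLM is column-major: R[col][row]
        R[0][0] = 1.0f - 2.0f * n.x * n.x;  R[1][0] = -2.0f * n.x * n.y;        R[2][0] = -2.0f * n.x * n.z;        R[3][0] = -2.0f * n.x * d;
        R[0][1] = -2.0f * n.y * n.x;        R[1][1] = 1.0f - 2.0f * n.y * n.y;  R[2][1] = -2.0f * n.y * n.z;        R[3][1] = -2.0f * n.y * d;
        R[0][2] = -2.0f * n.z * n.x;        R[1][2] = -2.0f * n.z * n.y;        R[2][2] = 1.0f - 2.0f * n.z * n.z;  R[3][2] = -2.0f * n.z * d;
        return R;
    }

    // Usage sketch (mainCamera, renderSceneToTexture etc. are placeholders):
    //   glm::mat4 mirroredView = mainCamera.view * reflectionMatrix(mirrorPoint, mirrorNormal);
    //   renderSceneToTexture(mirroredView, mainCamera.proj, mirrorReflectionTexture);
    // Note: the reflection flips handedness, so triangle winding/culling must be
    // reversed, and geometry behind the mirror plane should be clipped (e.g. with
    // an oblique near plane).

You pay one extra scene pass per mirror, but for a genuinely flat surface the result is pixel-sharp at whatever resolution you render it, with no ray budget spent at all; it obviously falls apart on curved or bumpy surfaces.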
As for Dreams, as far as I am aware that is a combination of SDFs and more traditional rendering, depending on the situation. But I am also pretty sure that in traditional rendering this technique is much cheaper, just as it was when it first started being used in VFX rendering ages ago.
So thank you to JoeJ, who took the time to actually explain the limitations for curved surfaces and such; it makes sense that the transformations get too complex or expensive there.
What I am sure of is that eventually raytracing on pure geometry (perhaps even without textures) will be the best solution for practically everything: just bounce the light off everything often enough, on materials that carry the proper information on color, diffuseness and so on, and you solve lighting (and automatically shadows as well), reflections, etc. Same for sound, even.
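To spell out what I mean by "bounce the light often enough and you solve everything": in a toy path-tracing style loop, direct light, shadows, reflections and indirect light all fall out of the same recursion, because lights are just emissive geometry and shadows are just paths that never reach an emitter. This is an illustrative sketch with made-up placeholder types (Scene, Material, Hit) and the intersection and scatter functions left as declarations, nowhere near a production renderer.

    #include <optional>

    struct Vec3 { float x = 0, y = 0, z = 0; };
    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator*(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }

    struct Ray { Vec3 origin, dir; };
    struct Hit;

    struct Material {
        Vec3 emission;   // lights are nothing special: just surfaces that emit
        Vec3 albedo;     // surface colour / reflectance
        Ray scatter(const Ray& in, const Hit& hit) const;  // mirror-perfect or diffuse bounce
    };

    struct Hit   { Vec3 position, normal; Material material; };
    struct Scene { std::optional<Hit> intersect(const Ray& ray) const; };  // BVH etc. lives here

    // One recursive trace covers direct light, shadows, reflections and GI at once.
    Vec3 trace(const Scene& scene, const Ray& ray, int depth)
    {
        if (depth <= 0) return {};          // out of bounces

        auto hit = scene.intersect(ray);
        if (!hit) return {};                // missed everything (or sample a sky here)

        // Emission is the only light source term; shadows fall out automatically,
        // because surfaces that cannot "see" a light never pick up its emission
        // through their bounces.
        Vec3 colour = hit->material.emission;

        // Bounce according to the material: mirror-perfect for sharp reflections,
        // spread over the hemisphere for diffuse, until the depth runs out.
        Ray bounce = hit->material.scatter(ray, *hit);
        colour = colour + hit->material.albedo * trace(scene, bounce, depth - 1);

        return colour;
    }

The catch, of course, is that you need a lot of rays and a lot of bounces before that loop stops being a noisy mess, which is exactly the budget problem above.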
But the current generation of consoles is way off that point, so I'm willing to bet that for large flat surfaces like the buildings in Watch Dogs and Spider-Man, a vector transformation like the one here could be worth it, especially if you then spend the raytracing budget on better lighting and shadows instead.