I don't think this is right -- I believe you're mixing some ideas together. Maybe somebody who knows more RTX details could correct me, but my impression of RTX raytracing is that it's fairly normal raytracing.
General (Monte Carlo) raytracing refresher: rays are projected from each pixel on the "camera" and bounce randomly within a hemisphere oriented around the surface normal at the point they hit. The degree of randomness (the size of that hemispherical lobe) depends on how reflective the surface is: completely random for diffuse surfaces, and not random at all (a perfect mirror reflection about the normal) for mirror surfaces. With those random distributions, many raytracers (I assume all the ones we see running in realtime) will bias that randomness towards the light. Thanks to this biasing, it takes far fewer samples to converge on a realistic lighting result, at the cost of somewhat less precise GI. RTX raytracing is broken up into several kinda different calculations and passes, and I'm not sure what they all are or how they fit together.
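To make the "size of the lobe" part above concrete, here's a rough C++ sketch of how a toy path tracer might pick a bounce direction. Everything in it (the Vec3 helpers, randomHemisphereDir, the simple lerp-by-roughness in sampleBounceDir) is my own simplification for illustration -- a real renderer would importance-sample an actual BRDF, and I'm not claiming this is what RTX does:

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { float x, y, z; };

Vec3  operator+(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float dot(Vec3 a, Vec3 b)        { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  normalize(Vec3 v)          { return v * (1.0f / std::sqrt(dot(v, v))); }

float random01() { return std::rand() / (float)RAND_MAX; }

// Uniform random direction in the hemisphere around normal n
// (rejection-sample the unit sphere, then flip into the hemisphere).
Vec3 randomHemisphereDir(Vec3 n) {
    for (;;) {
        Vec3  p    = { random01() * 2 - 1, random01() * 2 - 1, random01() * 2 - 1 };
        float len2 = dot(p, p);
        if (len2 > 1.0f || len2 < 1e-8f) continue;   // outside the sphere or degenerate, retry
        p = p * (1.0f / std::sqrt(len2));
        return dot(p, n) < 0.0f ? p * -1.0f : p;     // flip into the hemisphere around n
    }
}

// Pick the next bounce direction for an incoming ray direction `incident`
// hitting a surface with normal n.  roughness = 0 -> pure mirror reflection,
// roughness = 1 -> fully random (diffuse); in between, a crude blend that
// fakes a narrower or wider lobe.  (The biasing towards the light isn't
// shown here; the shadow-ray sketch further down is the related idea.)
Vec3 sampleBounceDir(Vec3 incident, Vec3 n, float roughness) {
    Vec3 mirror = incident + n * (-2.0f * dot(incident, n));   // perfect reflection
    Vec3 random = randomHemisphereDir(n);                      // fully diffuse direction
    return normalize(mirror * (1.0f - roughness) + random * roughness);
}
```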
One common step in raytracers is casting shadow rays, which behave like the rays in your picture -- they are cast directly towards the light. By doing this, the renderer can check whether the ray actually "hits" the light, and therefore whether the surface it was cast from is in shadow or not, and that result is then added to the shading.
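Roughly like this in code -- a hedged sketch, where traceNearestHit stands in for whatever scene-intersection routine the renderer already has, and Vec3/Ray are just made-up minimal types:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  operator+(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  operator-(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
float length(Vec3 v)             { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

struct Ray { Vec3 origin, dir; };

// Assumed to exist elsewhere in the renderer: distance to the nearest
// geometry hit along the ray, or a huge value if nothing is hit.
float traceNearestHit(const Ray& ray);

// Shadow ray: from the shaded point, shoot a ray straight at the light and
// check whether any geometry sits in between.
bool pointIsInShadow(Vec3 hitPoint, Vec3 normal, Vec3 lightPos) {
    Vec3  toLight     = lightPos - hitPoint;
    float distToLight = length(toLight);

    Ray shadowRay;
    // Nudge the origin off the surface so the ray doesn't immediately
    // re-hit the triangle it started on ("shadow acne").
    shadowRay.origin = hitPoint + normal * 1e-4f;
    shadowRay.dir    = toLight * (1.0f / distToLight);   // normalized

    // Anything closer than the light blocks it -> the point is in shadow.
    return traceNearestHit(shadowRay) < distToLight;
}
```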
However, under no circumstances should reflective surfaces bounce rays towards the light. That would give you a wildly inaccurate render with no performance gain. Mirror reflections should be straight reflections: I - 2 * dot(I, N) * N -- there are some illustrations of this concept here:
https://www.scratchapixel.com/lesso...tion-to-shading/reflection-refraction-fresnel
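For concreteness, here's that exact formula as a tiny self-contained C++ function, with a quick 45-degree sanity check (the Vec3 type and the example numbers are just mine):

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// The mirror-reflection formula from above: reflect incident direction I
// about surface normal N (both assumed normalized).
Vec3 reflect(Vec3 I, Vec3 N) {
    float d = dot(I, N);
    return { I.x - 2 * d * N.x, I.y - 2 * d * N.y, I.z - 2 * d * N.z };
}

int main() {
    // A ray coming in at 45 degrees onto a floor facing straight up...
    Vec3 I = { 0.7071f, -0.7071f, 0.0f };
    Vec3 N = { 0.0f, 1.0f, 0.0f };
    Vec3 R = reflect(I, N);
    // ...leaves at 45 degrees on the other side: (0.7071, 0.7071, 0).
    std::printf("%f %f %f\n", R.x, R.y, R.z);
    return 0;
}
```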