Concerning RT:
1) It would require a LARGE amount of memory (think several gigabytes) to store the entire scene for intersection tests, whether primary or secondary rays. And the rendering engine can't use delayed-load geometry, since any bounce can hit any part of the scene. We aren't dealing with bucket rendering (i.e. offline).
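For scale, here's a back-of-envelope sketch (all numbers are my own assumptions, not measurements from any real renderer) of what keeping a full scene resident for intersection testing costs:

```python
# Hypothetical memory estimate for a fully-resident triangle scene.
def scene_memory_gb(triangles, bytes_per_tri=36, bvh_overhead=0.5):
    """36 bytes = 3 vertices x 3 floats x 4 bytes; assume the acceleration
    structure (BVH) adds roughly 50% on top. Ignores normals, UVs, and
    per-vertex shading data, all of which only make the total worse."""
    raw = triangles * bytes_per_tri
    return raw * (1 + bvh_overhead) / 1e9

# A film-scale asset can easily reach hundreds of millions of triangles:
print(f"{scene_memory_gb(300_000_000):.1f} GB")  # 16.2 GB
```

Even with these optimistic per-triangle numbers, a few hundred million triangles lands you well into multi-gigabyte territory before shading data is counted.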
2) How are you going to deal with aliasing? You'd have to cast several rays per pixel, and every one of those intersections would have to evaluate the shaders all over again.
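The cost multiplier is easy to see in a sketch. This is a generic jittered-supersampling loop of my own construction, where `trace(u, v)` stands in for a full camera-ray cast plus shader evaluation:

```python
import random

def render_pixel(trace, x, y, samples=16):
    """Jittered supersampling for anti-aliasing: average several rays
    distributed inside the pixel footprint. Each sample pays the full
    intersection + shading cost, so 16x AA means 16x the shader work."""
    total = 0.0
    for _ in range(samples):
        u = x + random.random()  # random offset within the pixel
        v = y + random.random()
        total += trace(u, v)
    return total / samples
```

With 16 samples per pixel at 1080p you're already at ~33 million primary shader evaluations per frame, before a single secondary ray is fired.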
3) What would your depth limit per ray type be before quitting? If you can't bounce at least 2-3 indirect rays, you won't get very good results. What if you had two refractive objects, one behind the other? Objects would suddenly have to have "thickness" (which means even more geometry).
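The two-refractive-objects case makes the depth budget concrete. A toy model (my own construction, assuming closed objects so each one costs an entry hit and an exit hit):

```python
def can_see_through(refractive_objects, max_depth):
    """Each closed refractive object needs two surface events (enter + exit),
    so a ray must survive 2 * N refraction events to reach the background
    behind N stacked glass objects. Returns whether the depth budget allows it."""
    events_needed = 2 * refractive_objects
    return events_needed <= max_depth

print(can_see_through(2, 3))  # False: depth 3 can't resolve two glass objects
print(can_see_through(2, 4))  # True: need at least 4 bounces
```

So a depth cap of 3, which already sounds expensive for real time, silently turns the second glass object black.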
4) RT direct lighting would only be beneficial if you had area lights, but those require even more samples. Doing specular lighting would almost certainly require importance sampling of both the lights and the materials (firing rays according to a PDF for each, with dedicated sampling algorithms for both the light and the BSDF). That's the only practical way to get rid of the noise.
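Combining light sampling and BSDF sampling is usually done with multiple importance sampling; the balance heuristic below is one standard weighting (Veach's), shown here as a minimal sketch rather than anything a shipping renderer would use verbatim:

```python
def balance_heuristic(pdf_a, pdf_b):
    """Veach's balance heuristic: weight for a sample drawn from strategy A
    at a point that strategy B could also have produced. The contribution
    becomes w * f(x) / pdf_a, and the weights across both strategies sum
    to 1, which is what suppresses the fireflies from low-PDF samples."""
    return pdf_a / (pdf_a + pdf_b)

# Same point, evaluated from each strategy's perspective:
w_light = balance_heuristic(0.8, 0.1)  # light sampling dominates here
w_bsdf  = balance_heuristic(0.1, 0.8)  # BSDF sampling gets the remainder
print(w_light + w_bsdf)  # sums to ~1.0
```

Without that PDF bookkeeping on both sides, area-light speculars stay noisy no matter how many rays you throw at them.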
5) Notice how the Kepler demo Nvidia showed only had 3 objects in the scene. LOL! Not even close.
In short, if they came out with a viable hardware device for RT, the film industry would get it first.
And I don't see that happening for at least 2-3 more generations.