The ability to do raytracing does not automatically mean that it is worth doing.
Today's hardware rendering architectures are built on a few principles, such as streaming the scene through the rendering pipeline and skipping unseen parts of the scene.
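To make that concrete, here is a rough C++ sketch (the types are hypothetical stand-ins, not any real API) of the access pattern a rasterizing pipeline is built around: each object is visited once, culled if it cannot be seen, and only its own data is touched.

```cpp
#include <vector>

// Hypothetical stand-in types, just to illustrate the data flow.
struct Object  { /* vertices, material, texture handle ... */ };
struct Frustum {
    bool contains(const Object&) const { return true; }  // stub: real code tests bounding volumes
};

void renderFrame(const std::vector<Object>& scene, const Frustum& view)
{
    for (const Object& obj : scene) {   // the scene is streamed in a fixed, predictable order
        if (!view.contains(obj))        // unseen geometry is skipped entirely
            continue;
        // rasterize + shade obj here: only this object's vertices, texture and
        // shader are needed, so memory accesses stay local and prefetchable
    }
}
```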
But raytracing means that when rendering a single pixel, your reflected or refracted ray may pass into any part of the scene, even parts outside the camera's view. Bumpy surfaces scatter rays all over the scene, which means you suddenly need to access a different texture, evaluate a completely different shader, tessellate another object that hasn't been processed yet, and so on. Memory accesses become essentially random instead of following a predictable dataflow, and rendering times can change dramatically between neighbouring pixels and between successive frames. And all this for visual improvements that 99% of the audience probably won't see, or care about...
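Compare that with a minimal sketch (again with hypothetical types and stubbed-out functions) of tracing a single pixel: the secondary ray leaving a bumpy surface can land on any object in the scene, and whichever object it hits dictates which geometry, texture and shader have to be fetched next.

```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Hypothetical stand-ins; a real tracer would walk a BVH, sample bump maps,
// etc. The point here is only the access pattern.
struct Ray    { /* origin, direction */ };
struct Object { /* geometry, texture, shader ... */ };
struct Hit    { std::size_t objectIndex; /* position, normal ... */ };

std::optional<Hit> intersectScene(const Ray&, const std::vector<Object>& scene)
{
    // Stub: may end up testing *any* object, visible or not.
    return scene.empty() ? std::nullopt : std::optional<Hit>{Hit{0}};
}
Ray   scatter(const Ray& r, const Hit&) { return r; }    // stub: direction depends on the bump map
float shade(const Object&, const Hit&)  { return 1.0f; } // stub: needs *that* object's texture/shader

float tracePixel(const Ray& ray, const std::vector<Object>& scene, int depth = 0)
{
    auto hit = intersectScene(ray, scene);
    if (!hit || depth > 4)
        return 0.0f;                                  // miss, or recursion cap reached
    const Object& obj = scene[hit->objectIndex];      // effectively a random index per bounce
    float direct  = shade(obj, *hit);                 // different shader/texture on every hit
    Ray   bounced = scatter(ray, *hit);               // bumpy surface throws the ray anywhere
    return direct + 0.5f * tracePixel(bounced, scene, depth + 1);
}
```

Nothing about that recursion lines up with a streamed pipeline: the next memory access is only known once the previous intersection has been resolved.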