Why is improved pencil ray tracing the wrong path? Support for more flexible BVH formats or LOD would go a long way. It’s an inherently scalable paradigm that you can throw transistors at.
There are only so many individual rays you can trace, and the gains from adding more diminish rapidly: 5 rays is much better than 1 ray, but 10 rays isn't a great deal better than 5 despite costing twice as much. Cone tracing covers more information for less effort.
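To make the diminishing-returns point concrete, here's a toy Monte Carlo sketch (nothing engine-specific, just my own illustrative example: uniform hemisphere sampling under a flat sky where the exact irradiance is pi). The noise only halves when you quadruple the ray count:

```python
import math
import random

def estimate_irradiance(num_rays, rng):
    # Toy estimate of irradiance under a uniform sky of radiance 1.
    # Uniform hemisphere sampling: cos(theta) is uniform in [0, 1] and the
    # pdf over solid angle is 1 / (2*pi). The exact answer is pi.
    total = 0.0
    for _ in range(num_rays):
        cos_theta = rng.random()             # cos(theta) of a uniform hemisphere direction
        total += cos_theta * 2.0 * math.pi   # radiance * cos(theta) / pdf
    return total / num_rays

def rms_error(num_rays, trials=5000):
    # Root-mean-square error of the estimator over many independent trials.
    rng = random.Random(1)
    sq_err = sum((estimate_irradiance(num_rays, rng) - math.pi) ** 2
                 for _ in range(trials))
    return math.sqrt(sq_err / trials)

for n in (1, 5, 10, 20, 100):
    print(f"{n:4d} rays -> RMS error ~ {rms_error(n):.3f}")

# Error falls as roughly 1/sqrt(N): halving the noise costs 4x the rays,
# which is why 10 rays isn't dramatically better than 5.
```

That 1/sqrt(N) convergence is the reason a primitive like a cone, which gathers a whole bundle's worth of information in one traversal, looks attractive compared to piling on more individual rays.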
I think one of the mindsets behind HWRT was that, when looking to create better lighting, the only proven solution was offline path tracing. Accelerating path tracing has been a 'holy grail', and HWRT took that a step further. It's an algorithm that solves every lighting problem and can produce photorealism, if only it can be run fast enough. HWRT has greatly accelerated offline rendering, for one, and its development has helped nVidia's position in professional productivity, reinforcing the value of HWRT to them.
However, gaming doesn't need that exact quality, and other algorithms could provide a better solution; algorithms that were in their infancy at the beginning of this gen. The hardware went down the one route it could, while the software is stuck in the middle, torn between using that hardware and trying something else. If there isn't enough software to give hardware a reason to accelerate that 'something else', it won't gain proper hardware support.
It's really a rematch of the origins of compute. GPUs were all about fixed-function graphics hardware. Software had to hoodwink it into doing general-purpose work. But as software started to use the hardware differently, there was value in changing how the hardware worked to support the new workloads. Hence GPUs moved from graphics to compute. HWRT is exactly the same kind of hardware solution to a specific problem as hardware vertex and pixel shaders were. The question then is, "is the problem being tackled by HWRT the right problem to be tackling?"

But the decision-making for console hardware has to pick between the devil you know and the devil you don't. Do you gamble on unified shaders or go with established discrete shaders? Do you ditch graphics hardware entirely and go with a software renderer and a novel CPU? Do you go all-in on streaming when designing your hardware and hope the devs will use it, or do you double down on preloading because that's what everyone's doing now and you can't be sure they'll be willing and able to adapt to a different data system? Do you go with unified RAM and as wide a bus as possible, or give the devs a scratchpad of insanely fast but frustratingly tiny EDRAM?
Right now, HWRT is a known solution to the current and near-term future workloads of games. It's used offline in productivity, and IHVs wanting to be competitive there want fast RT solutions. But maybe software can provide better solutions if it isn't tied to the HWRT mindset? Maybe AMD will implement a great new arch, and maybe devs will use it, or maybe it won't catch on and HWRT will win out for another generation?
The big problem here is that, AFAIK, no one has shown these alternative solutions in anything other than tech demos, each with their pros and cons. And we've had various GI solutions for a decade without anything competing with what HWRT path tracing is currently managing.