Maxing out RT just to bring it to its knees is not efficient either, I guess (we'll see if/how they'll improve).
I don't know what you find inefficient about RT. RT inefficiency is a myth - it's way more efficient than rasterization in plenty of cases.
As for stochastic sampling, there have been great improvements in this area - importance sampling, shading caches, coherency sorting - all of it has improved a lot in just three years.
Looking at how fast path tracing (and multi-bounce tracing) already is in scenes with hundreds of millions of polygons in UE5, Omniverse, etc., it feels as if it's already here - just mix in all the improvements mentioned above, add denoising, and push it to prod.
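To make the importance sampling point a bit more concrete, here's a minimal C++ sketch of cosine-weighted sampling of a diffuse BRDF - names like incomingRadiance are just made up for illustration, not anyone's actual renderer code:

```cpp
// Minimal sketch: cosine-weighted importance sampling of a diffuse BRDF.
// Compared to uniform hemisphere sampling, variance drops because samples
// are concentrated where the cosine term of the rendering equation is large.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>
#include <random>

constexpr float kPi = 3.14159265358979f;

struct Vec3 { float x, y, z; };

// Cosine-weighted direction in the local frame (z = surface normal).
// pdf(dir) = cos(theta) / pi, which cancels the cosine term for a
// Lambertian surface and kills most of the estimator's variance.
Vec3 sampleCosineHemisphere(float u1, float u2) {
    float r = std::sqrt(u1);
    float phi = 2.0f * kPi * u2;
    return { r * std::cos(phi), r * std::sin(phi),
             std::sqrt(std::max(0.0f, 1.0f - u1)) };
}

// One-bounce Monte Carlo estimate of outgoing radiance for an albedo/pi BRDF.
// 'incomingRadiance' stands in for whatever the renderer would actually trace.
float estimateDiffuse(float albedo,
                      const std::function<float(const Vec3&)>& incomingRadiance,
                      int numSamples, std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float sum = 0.0f;
    for (int i = 0; i < numSamples; ++i) {
        Vec3 dir = sampleCosineHemisphere(uni(rng), uni(rng));
        // f * L_i * cos / pdf = (albedo/pi) * L_i * cos / (cos/pi) = albedo * L_i
        sum += albedo * incomingRadiance(dir);
    }
    return sum / float(numSamples);
}

int main() {
    std::mt19937 rng(42);
    // Toy environment: brighter towards the local "zenith".
    auto sky = [](const Vec3& d) { return 0.5f + 0.5f * d.z; };
    std::printf("estimate: %f\n", estimateDiffuse(0.8f, sky, 4096, rng));
}
```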
So we need to add something new not present in the console game we aim to port.
Honestly, the problem is that your average console developer treats PC as a third-tier platform, and unless there's an IHV involved to help with the workloads you've mentioned, the developer won't do anything extra for PC.
Which could be (summing up my previous proposals): volumetric stuff (fog simulation, lighting), a layered framebuffer to address the shortcomings of SS hacks, fancy SM-based area shadow techniques. And of course GI, if compute can do it better than RT. What else?
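For the shadow part, I mean something along the lines of PCSS - a rough CPU-side C++ sketch of the blocker-search / penumbra-estimation idea (the ShadowMap struct and the sample pattern are made up just to show the concept, not real engine code):

```cpp
// Rough sketch of the PCSS idea (percentage-closer soft shadows):
// 1) search the shadow map for the average blocker depth near the receiver,
// 2) derive a penumbra width from similar triangles and the area light size,
// 3) PCF-filter the shadow map with that width (step 3 omitted here).
#include <algorithm>
#include <cmath>
#include <vector>

struct ShadowMap {                 // hypothetical light-space depth map
    int size = 1024;
    std::vector<float> depth;      // size * size entries, values in [0,1]
    float sample(float u, float v) const {
        int x = std::clamp(int(u * size), 0, size - 1);
        int y = std::clamp(int(v * size), 0, size - 1);
        return depth[size_t(y) * size + x];
    }
};

// Estimate the penumbra width for a receiver at (u, v, receiverDepth).
float penumbraWidth(const ShadowMap& sm, float u, float v,
                    float receiverDepth, float lightSize) {
    const int kSearchSamples = 8;                 // coarse blocker search
    float searchRadius = lightSize * receiverDepth;
    float blockerSum = 0.0f;
    int blockerCount = 0;
    for (int i = 0; i < kSearchSamples; ++i) {
        float angle = 2.0f * 3.14159265f * i / kSearchSamples;
        float d = sm.sample(u + searchRadius * std::cos(angle),
                            v + searchRadius * std::sin(angle));
        if (d < receiverDepth) { blockerSum += d; ++blockerCount; }
    }
    if (blockerCount == 0) return 0.0f;           // fully lit, no penumbra
    float avgBlocker = blockerSum / blockerCount;
    // Similar triangles: penumbra grows with the receiver-blocker separation.
    return (receiverDepth - avgBlocker) / avgBlocker * lightSize;
}
```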
I don't think that's the way to go. Those SS hacks, fancy SM-based area shadow techniques, compute GI, etc. are just more hacks with tons of drawbacks, layered on top of energy-inefficient computation on general-purpose multiprocessors, at a time when Dennard scaling is long dead - hence the 500W monsters on the horizon.
Why bother making more of those fragile and unmaintainable systems when, in most cases, people can't even use the existing ones properly because there are millions of tweakable parameters (apparently many devs can't even set up DLSS properly with just a few parameters)?
I'd much rather see devs and IHVs putting more time and HW effort into something far more general/unified and maintainable, which requires minimal tweaking and has already proven to beat all the hacks in the CG industry.
On top of that, a general/unified algorithm can live on and scale with far more efficient specialized hardware, which is the way to go - at least if you want to avoid 1000W monster GPUs in the near future, lol.