Path tracing in AAA games is not going to become some critical thing anytime soon. It'll stay an optional setting for those with ultra high-end rigs. It's largely unusable, or at least certainly not worth the performance cost, for the large majority of RTX users as well.
Hypothetically "pathtracing" in the Nvidia sense here, just some light bounces, could be done by the end of this generation on consoles. Heavily spreading samples out over space/time is really useable for diffuse, Capcom has an updated RT model that heavily relies on this and denoising, but can run a diffuse RT single bounce at 60fps on PS5/Series X.
Reflections, specifically mirror or near-mirror reflections, are a bit harder. Movement of objects and the camera can quickly invalidate most samples, and you need a lot of samples. Obviously it can be done; Spider-Man and Ratchet both run on PS5 with RT reflections, but it's still harder. Heavy reliance on hybrid RT might be a good optimization here: the idea is to use sparse SDFs (you can get decent enough resolution, better than UE5's software tracing right now) and make rays faster by building a close-fit BVH around the SDF. SDF tracing is much faster than triangle testing, even on Nvidia's latest cards, but ray/box testing in a relatively sparse BVH is still faster than running an SDF trace the whole way, so you use the boxes to skip empty space (see the sketch below).
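Here's a rough sketch of that hybrid trace, assuming a sparse brick-style SDF. The "BVH" is flattened to a brick list to keep it short, and all the types and names are illustrative rather than any engine's real API:

```cpp
// Sketch of the hybrid trace: cheap ray/box tests against a sparse set of
// bricks (the BVH leaves) skip empty space, and the pricier SDF sphere
// trace only runs inside occupied bricks. Illustrative, not a real API.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 lo, hi; };

// Stand-in SDF: one sphere at the origin. A real engine would sample a
// sparse brick atlas here instead.
static float sampleSdf(const Vec3& p) {
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Standard slab test; writes the ray's entry/exit t for the box.
static bool rayBox(const Vec3& o, const Vec3& invDir, const Aabb& b,
                   float& tEnter, float& tExit) {
    float tx0 = (b.lo.x - o.x) * invDir.x, tx1 = (b.hi.x - o.x) * invDir.x;
    float ty0 = (b.lo.y - o.y) * invDir.y, ty1 = (b.hi.y - o.y) * invDir.y;
    float tz0 = (b.lo.z - o.z) * invDir.z, tz1 = (b.hi.z - o.z) * invDir.z;
    tEnter = std::max({std::min(tx0, tx1), std::min(ty0, ty1),
                       std::min(tz0, tz1), 0.0f});
    tExit  = std::min({std::max(tx0, tx1), std::max(ty0, ty1),
                       std::max(tz0, tz1)});
    return tEnter <= tExit;
}

// Sphere trace, restricted to the [tEnter, tExit] span of one brick.
static bool sphereTraceSegment(const Vec3& o, const Vec3& d,
                               float tEnter, float tExit, float& tHit) {
    float t = tEnter;
    for (int i = 0; i < 64 && t < tExit; ++i) {
        Vec3 p = { o.x + d.x * t, o.y + d.y * t, o.z + d.z * t };
        float dist = sampleSdf(p);
        if (dist < 1e-3f) { tHit = t; return true; }
        t += dist;  // safe step: no surface can be closer than 'dist'
    }
    return false;
}

// A real tracer walks the BVH; a front-to-back brick list shows the same
// skip-empty-space idea without the traversal bookkeeping.
bool traceHybrid(const Vec3& o, const Vec3& d,
                 const std::vector<Aabb>& bricks, float& tHit) {
    const Vec3 invDir = { 1.0f / d.x, 1.0f / d.y, 1.0f / d.z };
    for (const Aabb& b : bricks) {           // assumed sorted by entry t
        float t0, t1;
        if (!rayBox(o, invDir, b, t0, t1))   // box test: a few min/max ops
            continue;                        // empty space costs ~nothing
        if (sphereTraceSegment(o, d, t0, t1, tHit))
            return true;                     // SDF march only inside bricks
    }
    return false;
}
```

The design point is the same one as above: the slab test is a handful of min/max ops, so rays pay almost nothing in empty space and only fall back to the pricier per-step sphere trace inside bricks that actually contain geometry.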
Now that AMD has finally gotten rid of the old RTG head after his repeated mediocre results, we might see better execution from whoever replaces him. If I were AMD I'd go see who I could hire from a pair of papers at SIGGRAPH Asia. One describes a "neural net does it for you" approach to variable rate shading, with a better performance-to-visual-quality tradeoff than upscaling:
https://drive.google.com/file/d/1wSPdfpwOkOIznQUqUZdBMmdQ3WAWlhms/view
The second, by some of the same researchers, is neural net based next-frame prediction: basically FSR/DLSS 3 frame generation, but without even waiting for the next frame's motion vectors; the net just predicts everything for you and off you go. The paper for that one isn't up yet though.