I mean, if we're going to dig to the bottom of that particular pedantry hole, we could just say "futureproofing doesn't exist," because there's literally no way for us to truly know what the next interesting technology will depend upon. The next thing might be using all those int4 operations in NPUs to accelerate, I dunno, game AI seems the most apt, right? So if you didn't buy an NPU, then your fancy-schmancy new CPU is basically half speed now, right?<snip> Particularly because an added complication is that future games may lean on the different pros/cons of various architectures.
However, to get back to the real world, we can make inferences about what's likely to keep happening in the near future. And that future is going to continue bringing faster GPUs, in both raster and raytracing, which will need to be fed by faster CPUs. Which is actually a great segue to:
"Does the 5800X3D increasingly pull ahead of the 5800X in 2024 games with an RTX 4090 at 1440p/4K, as reflected by the 720p/1080p RTX 3090 tests from 2022?"

It does! Check out the GN review of the 9800X3D, where they include prior X3D parts and their non-X3D brethren.
I linked specifically to the Stellaris portion of the review, because simulation time actually matters in that game and is independent of GPU power. And in Stellaris, the 5800X3D is 13% faster than the 5800X, despite the X3D's 400MHz base / 200MHz boost clock deficit.
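To put a rough number on what that 13% means per clock, here's a back-of-the-envelope sketch (all it assumes is that both chips sustain their rated boost clocks, which real silicon won't do exactly, so treat it as a ballpark, not a measurement):

```python
# Rough estimate of the per-clock advantage implied by the extra cache.
# Assumes both CPUs sit at their rated boost clocks during the test.
boost_5800x = 4.7    # GHz, rated boost of the 5800X
boost_5800x3d = 4.5  # GHz, rated boost of the 5800X3D

measured_speedup = 1.13  # 5800X3D vs 5800X in the Stellaris sim-time test

# The X3D does 13% more work while running ~4% slower, so its
# per-clock throughput advantage is the speedup corrected for clocks.
per_clock_gain = measured_speedup * (boost_5800x / boost_5800x3d)
print(f"~{(per_clock_gain - 1) * 100:.0f}% more work per clock")  # ~18%
```

In other words, the extra L3 is buying on the order of ~18% more work per clock in that workload, which is why it overcomes the clock deficit with room to spare.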
Even at 1440p or 4K, there will be scenes in modern games where even the fastest GPUs are still waiting on the CPU to send data. And in those scenarios, a faster CPU, such as the X3D line, will matter. It probably isn't going to increase the maximum framerate; it will instead drag up the lows, and with them the average.
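You can see why with a tiny toy simulation (all the numbers below are invented purely for illustration): each frame takes as long as the slower of the CPU and the GPU, so a faster CPU can't push past the GPU's ceiling, but it does rescue the worst frames.

```python
import random

random.seed(42)

def frame_times(cpu_ms_range, gpu_ms, n=10_000):
    """Frame time is gated by whichever side finishes last."""
    return [max(random.uniform(*cpu_ms_range), gpu_ms) for _ in range(n)]

def stats(times):
    times = sorted(times, reverse=True)  # worst frames first
    avg_fps = 1000 * len(times) / sum(times)
    worst = times[: len(times) // 100]   # the worst 1% of frames
    low_1pct = 1000 / (sum(worst) / len(worst))
    return avg_fps, low_1pct

# Hypothetical numbers: the GPU renders a frame in 7 ms (~143 fps
# ceiling); CPU cost swings between light and heavy scenes.
slow_cpu = frame_times(cpu_ms_range=(4, 12), gpu_ms=7)
fast_cpu = frame_times(cpu_ms_range=(3, 9), gpu_ms=7)

for name, t in [("slower CPU", slow_cpu), ("faster CPU", fast_cpu)]:
    avg, low = stats(t)
    print(f"{name}: avg {avg:.0f} fps, 1% low {low:.0f} fps")
```

Run it and both setups top out at the GPU's ~143fps ceiling, but the faster CPU lifts the 1% lows from roughly the 80s to the 110s, and the average comes up with them. That's exactly the shape of the X3D results in GPU-heavy tests.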