I find it hard to believe that Sony, AMD, or Microsoft won't invest in raytracing acceleration between now and 2028-2029.
The architecture that gets bankrolled by the console makers (RDNA 6?) will be competitive with Nvidia at the time of release (maybe a generation behind), just like RDNA 2 was with Ampere/Turing.
This gen will feel like a beta compared to the next, just like PS3 before PS4.
I think it's very unlikely the PS6 won't have enough grunt to run Cyberpunk's PT mode. Next-gen levels of detail with PT would presumably be out of the question, but running currently available titles should be absolutely doable.
That's actually where I'm at. The problem is that there's too much grunt required. And I'm going to be very reductive here, so bear with me on this.
There will always be a sweet spot between clock speed, energy required, cooling required, and silicon area required.
And I don't really care which IHV it is, but if we're having a serious discussion about getting 4090 levels of power into something with a PS5-sized form factor, then we're talking about cramming all that compute into a die of roughly 292-350mm².
So let's assume that running Cyberpunk's PT mode needs some amount of computational power, call it X, and that right now X is roughly 4090-level compute. You're looking at about 300-450W of power consumption on a 4090 to play that game with PT on.
Now combine that with a CPU (~80mm²) and shrink a 4090 (609mm²) down so the combination fits in around 350mm². Think about all that power being pushed into a very tiny area; cooling, to me, becomes increasingly hard to do.
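To put rough numbers on that (all figures are ballpark assumptions from the paragraphs above, not measured specs), here's the back-of-envelope math:

```python
# Back-of-envelope density math. All numbers are illustrative assumptions.

gpu_die_mm2 = 609        # AD102 (RTX 4090) die area
cpu_die_mm2 = 80         # rough console-class CPU area
apu_budget_mm2 = 350     # assumed console APU die budget

combined_mm2 = gpu_die_mm2 + cpu_die_mm2          # 689 mm^2
shrink_needed = combined_mm2 / apu_budget_mm2     # ~1.97x logic density

gpu_power_w = 450        # 4090-class board power while path tracing
gpu_density = gpu_power_w / gpu_die_mm2           # ~0.74 W/mm^2
apu_density = gpu_power_w / apu_budget_mm2        # ~1.29 W/mm^2

print(f"Density gain needed: {shrink_needed:.2f}x")
print(f"Power density: {gpu_density:.2f} -> {apu_density:.2f} W/mm^2")
```

So even before worrying about memory and IO, you'd need roughly 2x the density, and if the power budget didn't come down you'd nearly double the W/mm² the cooler has to deal with.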
And for consoles to exist, they have to be at a very particular price point.
So consider the combination of heat, cooling, energy, and silicon size: the smaller the die gets and the more computation we demand of it, the power density in watts/mm² eventually goes so high that we have no materials to cool it, at least nothing that keeps us at console-level pricing. The obvious answer is to go wider with slower clocks, which reduces the power requirement and increases the die size (and therefore the cooling area), but now we're paying significantly more per die due to silicon costs.
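The "wider and slower" tradeoff falls out of the standard dynamic power relation, P ∝ C·V²·f, since lowering clocks usually lets you lower voltage too. Here's a toy sketch; the specific clock and voltage figures are invented for illustration, not real silicon data:

```python
# Toy model of "go wider, clock lower": dynamic power scales as C * V^2 * f.
# The width/clock/voltage numbers below are made up purely for illustration.

def rel_power(width, clock, v_scale):
    # width ~ switched capacitance C (more units), clock ~ f, v_scale ~ V
    return width * (v_scale ** 2) * clock

baseline = rel_power(width=1.0, clock=1.0, v_scale=1.0)

# Double the units, run them at 60% clock and ~85% voltage:
wide_slow = rel_power(width=2.0, clock=0.6, v_scale=0.85)

print(f"Throughput: {2.0 * 0.6:.1f}x for {wide_slow / baseline:.2f}x power")
# -> ~1.2x throughput at ~0.87x power, but ~2x the silicon area (and cost)
```

That's the whole bind: the wide-and-slow design wins on heat and perf/W but loses on die cost, which is exactly the budget consoles can't blow.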
Thus, regardless of AMD or Nvidia, that's the issue as I see it: there's a clear physics barrier that can only be overcome by significant cost increases. And PC continues to flourish because we have more applications for this level of power (whereas consoles are dedicated gaming machines), and because we're moving back to mainframe days, where computation shifts to the cloud so it's cheaper for everyone to access.
I just don't see how, at the rate node shrinks are coming, we'll be able to fit that level of computation into 350mm² of silicon by the PS6.
We could develop entirely new hardware accelerators, or find a way to do the same computation with an order of magnitude less silicon, but outside of that, I don't think node shrinks will be far enough along by 2026/27 to make this happen. And even if they were, I don't think the cooling solution would keep us at our current price point.
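As a rough sanity check on the shrink side (the scaling factors here are assumptions; real logic and SRAM scaling varies a lot by node):

```python
# Rough sanity check on node scaling. Factors below are assumptions:
# logic has recently scaled ~1.6x per full node, SRAM far worse (~1.2x),
# and a big GPU die is very roughly half logic, half SRAM/IO/analog.

logic_scale = 1.6
sram_scale = 1.2
logic_frac = 0.5  # assumed logic fraction of the die

def die_after_shrink(area_mm2, nodes):
    logic = area_mm2 * logic_frac / (logic_scale ** nodes)
    rest = area_mm2 * (1 - logic_frac) / (sram_scale ** nodes)
    return logic + rest

combined = 609 + 80  # 4090-class GPU + CPU, from the earlier sketch
for n in (1, 2):
    print(f"{n} node shrink(s): ~{die_after_shrink(combined, n):.0f} mm^2")
# -> ~502 mm^2 after one shrink, ~374 mm^2 after two; and getting two
#    full nodes before a PS6 tape-out is itself an optimistic assumption.
```

Under those assumptions you still don't quite land inside 350mm² even after two full shrinks, and that's before the power density problem above is dealt with.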