It literally is!
Using the PS5 (very similar architecture) as a comparison point, you can see that in Watch Dogs it's basically the same amount of maths being done and more or less the same amount of bandwidth being used. This means there is more CU time going unused, and more bandwidth going unused, on XSX.
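As a rough sanity check (using the public paper specs only, not anything measured in-game): XSX has 52 CUs at 1.825 GHz, so 52 × 64 × 2 × 1.825 GHz ≈ 12.15 TFLOPS; PS5 has 36 CUs at up to 2.23 GHz, so 36 × 64 × 2 × 2.23 GHz ≈ 10.28 TFLOPS. On bandwidth, XSX has 560 GB/s to its 10 GB fast pool (336 GB/s to the remaining 6 GB) versus 448 GB/s on PS5. So if both machines are doing roughly the same work per frame, XSX is leaving something like 18% of its theoretical compute, plus a chunk of its bandwidth, on the table, at least on paper.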
These games are significantly underutilising XSX execution units and available bandwidth. Why is up for discussion, but that this is happening is not.
My first thought would be that the untapped/underutilized CU time and bandwidth could go towards a higher native resolution (rather than the current dynamic resolution solution) and other higher IQ settings, but of course that isn't happening. This underutilization of rendering units looks very familiar from PC GPUs running into memory bandwidth limitations (usually not enough VRAM), and/or bottlenecks from system resources, a small cache (too many GPU rendering units hammering the cache), I/O limitations, and/or CPU resource/bandwidth limitations.
But of course, I'm only thinking of these potential issues from a PC perspective, not from a console development environment perspective. As such, none of these issues may actually explain why the XBSX GPU's CUs are being underutilized in situations where it could/should have an advantage over the PS5. I suspect I know the reasons why, but I'll wait for more next-generation games before making such calls.