> I really don't understand the "tflops are arbitrary" meme. Floating point operations are most of what gpus do. It's an extremely relevant metric -- so is bandwidth (xbox wins), bus size (xbox wins), architecture (they're mostly the same). Microsoft's "most powerful console" claim is pretty safe
The rated theoretical maximum TFLOPs figure isn't the only indicator of a GPU's actual TFLOPs throughput in a real gaming scenario. Likewise, the total theoretical bandwidth to off-chip memory says little about how often and for how long the GPU stalls waiting on that memory, and bus width says even less.
Effective memory bandwidth is what matters here.
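To make that concrete, here's a rough roofline-style sketch (the standard model, nothing console-specific): whether a workload hits the compute roof or the bandwidth roof depends on its arithmetic intensity, so the same peak numbers matter very differently from kernel to kernel. The peak figures below are the commonly quoted specs; the intensities are just illustrative:

```python
# Standard roofline model: attainable throughput is the lower of the
# compute roof and the bandwidth roof at a given arithmetic intensity.
# Peak figures are the commonly quoted specs; intensities are illustrative.

def attainable_tflops(peak_tflops, peak_bw_gbs, flops_per_byte):
    bandwidth_roof_tflops = peak_bw_gbs * flops_per_byte / 1000.0
    return min(peak_tflops, bandwidth_roof_tflops)

for name, tf, bw in [("Series X", 12.15, 560), ("PS5", 10.28, 448)]:
    for ai in (4, 16, 64):  # FLOPs per byte of off-chip traffic
        print(f"{name} @ {ai:>2} FLOP/B: "
              f"{attainable_tflops(tf, bw, ai):.2f} TFLOPs attainable")
```

At low arithmetic intensity both consoles are nowhere near their compute peaks, which is exactly why the headline TFLOPs number alone tells you so little.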
But the last mistake here (which I confess I also made at some point) is claiming the PS5's and SeriesX's architectures are "mostly the same" just because they seemingly share the RDNA2 instruction set and Dual-CU arrangement, despite being quite different everywhere else. The PS5 has a lower number of ALUs per shader engine, and on top of that they run at 22% higher clocks, so a higher ALU occupancy on the PS5 should be expected.
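For reference, the peak numbers themselves fall out of simple arithmetic on the commonly cited configurations (36 CUs at up to 2.23 GHz vs 52 CUs at 1.825 GHz, with 64 FP32 ALUs per CU and 2 ops per FMA):

```python
# Peak FP32 TFLOPs = CUs * 64 ALUs/CU * 2 ops (FMA) * clock (GHz) / 1000.
# Commonly cited figures: PS5 36 CUs @ up to 2.23 GHz, Series X 52 CUs @ 1.825 GHz.

def peak_tflops(cus, ghz, alus_per_cu=64, ops_per_alu=2):
    return cus * alus_per_cu * ops_per_alu * ghz / 1000.0

print(f"PS5:      {peak_tflops(36, 2.23):.2f} TFLOPs")   # ~10.28
print(f"Series X: {peak_tflops(52, 1.825):.2f} TFLOPs")  # ~12.15
# Fewer ALUs at a higher clock tend to be easier to keep busy, which is
# why the raw ~18% peak gap overstates the real-world difference.
```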
The cache scrubbers should increase the proportion of cache hits within the L2 (otherwise why bother), and we don't know exactly how that affects the console's effective bandwidth. (We do know why AMD didn't want the cache scrubbers for their PC RDNA2 GPUs though, and it's because they're already spending a ton of die area on LLC in there.)
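As a toy illustration of why hit rate can matter more than the DRAM number, here's a crude weighted-average model. The ~1500 GB/s on-chip L2 figure is made up for the example; only the 448/560 GB/s DRAM numbers come from the spec sheets:

```python
# Toy model: effective bandwidth as a weighted average of L2 hits and
# DRAM misses. The 1500 GB/s L2 figure is hypothetical, for illustration.

def effective_bw(l2_hit_rate, l2_bw_gbs, dram_bw_gbs):
    return l2_hit_rate * l2_bw_gbs + (1 - l2_hit_rate) * dram_bw_gbs

for hit in (0.50, 0.60, 0.70):
    print(f"hit rate {hit:.0%}: PS5 ~{effective_bw(hit, 1500, 448):.0f} GB/s, "
          f"Series X ~{effective_bw(hit, 1500, 560):.0f} GB/s")
# A few extra points of L2 hit rate can outweigh a 25% DRAM bandwidth gap.
```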
The PS5's considerably faster I/O could also mean it doesn't need to cache as much data in system RAM, with assets coming in "on-the-fly" just a handful of frames before they're needed, and again we don't know how that will impact effective bandwidth.
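Some quick napkin math on what "a handful of frames" buys you, using the quoted 5.5 GB/s raw SSD figure (compressed throughput is higher, and steady frame pacing is assumed):

```python
# Back-of-the-envelope: how much data the PS5's SSD can deliver per frame,
# i.e. how little needs to sit pre-cached in RAM. 5.5 GB/s is the quoted
# raw figure; real compressed throughput is higher.

RAW_SSD_GBS = 5.5
for fps in (30, 60):
    per_frame_mb = RAW_SSD_GBS * 1024 / fps
    print(f"at {fps} fps: ~{per_frame_mb:.0f} MB of fresh assets per frame, "
          f"~{per_frame_mb * 4:.0f} MB with a 4-frame lead")
```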
And this is all before we start considering the differences between the SeriesX's VRS and Sony's own foveated rendering implementation, the impact of the custom geometry processor, the PS5's higher pixel fillrate, etc.
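On the fillrate point: assuming the commonly reported 64 ROPs on both consoles (take that count with a grain of salt), the clock difference carries straight through:

```python
# Pixel fillrate = ROPs * clock (GHz) -> Gpixels/s. The 64-ROP count for
# both consoles is an assumption based on commonly reported figures.

def fillrate_gpix(rops, ghz):
    return rops * ghz

print(f"PS5:      {fillrate_gpix(64, 2.23):.1f} Gpixels/s")   # ~142.7
print(f"Series X: {fillrate_gpix(64, 1.825):.1f} Gpixels/s")  # ~116.8
```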
So despite the SeriesX having the larger headline numbers (max theoretical TFLOPs throughput and max theoretical bandwidth), the consoles' GPUs differ substantially in so many other resources that we can't really reason that "10 vs 12 means 12 is 20% faster" or "448 vs 560 means 560 is 25% faster".
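Just to make the naive arithmetic explicit:

```python
# Where the quoted percentages come from, and what the unrounded peaks give:
print(f"rounded TFLOPs: {12 / 10 - 1:.0%}")        # 20%
print(f"actual TFLOPs:  {12.15 / 10.28 - 1:.0%}")  # ~18%
print(f"bandwidth:      {560 / 448 - 1:.0%}")      # 25%
# Correct arithmetic on peak numbers, but it says nothing by itself
# about delivered frame rates.
```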
The Vega 64 averages above 11 TFLOPs and ~480GB/s of bandwidth, about the same headline numbers as the GTX 1080 Ti. It never reached GTX 1080 Ti performance, AFAIK.