...
Can you expand on where the Series X falls behind on GPU bandwidth? Is that DRAM bandwidth, or bandwidth elsewhere?
The PS5's clock isn't going to make it win in total bandwidth for any per-CU caches.
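A quick back-of-the-envelope sketch of why: aggregate bandwidth of the per-CU caches scales with CU count times clock, and since the bytes-per-clock figure per CU is the same RDNA design on both parts, it cancels out of the comparison. The CU counts and peak clocks below are the publicly stated specs.

```python
# Aggregate per-CU cache bandwidth is proportional to CUs * clock;
# the per-CU bytes/clock constant is common to both and cancels.

ps5_cus, ps5_clock_ghz = 36, 2.23        # PS5: fewer CUs, higher clock
xsx_cus, xsx_clock_ghz = 52, 1.825       # Series X: more CUs, lower clock

ps5_total = ps5_cus * ps5_clock_ghz      # ~80.3 CU-GHz
xsx_total = xsx_cus * xsx_clock_ghz      # ~94.9 CU-GHz

print(f"Series X / PS5 aggregate per-CU cache bandwidth: {xsx_total / ps5_total:.2f}x")
# -> ~1.18x in the Series X's favor, despite its lower clock
```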
Perhaps the L1, assuming the Series X GPU didn't adjust its size/bandwidth. One reason it might need to depends on whether the L2's slice count increased to mirror the wider memory bus. In RDNA, the L1 is subdivided to match the number of L2 groups (there are 4 slices per 64-bit controller), and the number of L1 subdivisions determines how many requests it can respond to per clock.
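As a rough sketch of that bookkeeping, assuming the Navi 10 pattern of 4 L2 slices per 64-bit controller, slices grouped in fours, and one L1 subdivision per group. The PS5's 256-bit bus and the Series X's 320-bit bus are the known specs; the grouping rule is my reading of Navi 10, not confirmed for either console.

```python
# Assumed RDNA rule: 4 L2 slices per 64-bit memory controller,
# slices grouped in fours, one L1 subdivision per group.
SLICES_PER_64BIT_CONTROLLER = 4
SLICES_PER_GROUP = 4  # assumption, based on Navi 10 (256-bit -> 4 L1 subdivisions)

def l2_layout(bus_width_bits):
    controllers = bus_width_bits // 64
    slices = controllers * SLICES_PER_64BIT_CONTROLLER
    groups = slices // SLICES_PER_GROUP
    return controllers, slices, groups

for name, bus_bits in (("PS5 (256-bit)", 256), ("Series X (320-bit)", 320)):
    c, s, g = l2_layout(bus_bits)
    print(f"{name}: {c} controllers, {s} L2 slices, {g} groups -> {g} L1 requests/clock")
```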
The Series X may have 5 L2 groups, in which case the L1 might increase to 5 sections, and thus 5 requests per clock, which would keep its per-L1 throughput above the PS5's despite the lower clock.
However, if the Series X doesn't create a 5th L2 group, the L1 and L2 may be no wider per-clock than the probable PS5 arrangement, and then clock speed could tip the balance toward the PS5.
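Putting numbers to those two scenarios, with L1 sections times clock as a stand-in for per-L1 request throughput. The 4-vs-5 section counts are the speculation above, not confirmed specs; the one-request-per-subdivision-per-clock rule is the RDNA pattern described earlier.

```python
def l1_requests_per_ns(sections, clock_ghz):
    # One request serviced per L1 subdivision per clock (assumed RDNA pattern),
    # so requests per nanosecond = sections * clock in GHz.
    return sections * clock_ghz

print(f"PS5, 4 sections @ 2.23 GHz:       {l1_requests_per_ns(4, 2.23):.2f} req/ns")   # ~8.9
print(f"Series X, 5 sections @ 1.825 GHz: {l1_requests_per_ns(5, 1.825):.2f} req/ns")  # ~9.1, stays ahead
print(f"Series X, 4 sections @ 1.825 GHz: {l1_requests_per_ns(4, 1.825):.2f} req/ns")  # ~7.3, clock now favors PS5
```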
One possible complication with adding another cache division like that is that the ROP caches are aligned to the cache slices in a specific manner, and some of the no-flush benefits Vega touted when it made the ROPs L2 clients didn't hold if there was some kind of misalignment (maybe for an APU?).
...
The 6GB has an effective bandwidth of 336 GB/s. Just as a hypothetical, if the GPU accesses this memory more than Microsoft expected, I guess there is potential to be bandwidth limited. Ideally you want accesses spread across all memory channels for full bandwidth, but if you hit that particular range too heavily you'll lower your effective bandwidth. This comes back to how the memory interleaving is set up, and I'm by no means an expert.
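For what it's worth, the split falls straight out of the channel math: the Series X has ten 32-bit GDDR6 chips at 14 Gbps, but the upper 6GB lives only on the six 2GB chips, so accesses to that range can interleave across at most 6 channels.

```python
GBPS_PER_PIN = 14      # GDDR6 data rate on Series X
BITS_PER_CHIP = 32     # one 32-bit channel per chip

def bandwidth_gb_per_s(chips):
    # Peak bandwidth = channels * bus width per channel * data rate / 8 bits per byte
    return chips * BITS_PER_CHIP * GBPS_PER_PIN / 8

print(f"All 10 channels:    {bandwidth_gb_per_s(10):.0f} GB/s")  # 560, the GPU-optimal 10GB
print(f"Six 2GB chips only: {bandwidth_gb_per_s(6):.0f} GB/s")   # 336, the remaining 6GB
```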