Isn't texture filtering done on the CUs?
Only the Texture Mapping Units (TMUs) are responsible for texture fill rate and filtering.
The TMUs are contained within the CUs, and since the XSX has more CUs, it actually has higher texturing performance, by around 18%.
That's true for the texture fill rate (texture mapping) part, but texture filtering requires a different methodology, as it's quite complex and not as straightforward.
Texture filtering (at FP16 precision) on AMD GPUs often runs at half the texture fill rate, and filtering also requires far more memory bandwidth and cache. The Series X has 44% more CUs/TMUs than the PS5 (208 TMUs vs 144 TMUs), yet only 25% more L2 cache (5MB vs 4MB). Memory bandwidth is also split on the Series X, with a 560GB/s portion and a 336GB/s portion, while the PS5 enjoys a fixed 448GB/s at all times.
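A quick sanity check on those ratios (a minimal sketch using the counts quoted above):

```python
# Ratio check: the Series X's TMU advantage vs its L2 cache advantage.
xsx_tmus, ps5_tmus = 208, 144
xsx_l2_mb, ps5_l2_mb = 5, 4

print(f"TMUs: +{(xsx_tmus / ps5_tmus - 1) * 100:.0f}%")   # TMUs: +44%
print(f"L2:   +{(xsx_l2_mb / ps5_l2_mb - 1) * 100:.0f}%")  # L2:   +25%
# The texturing hardware grows much faster than the cache feeding it.
```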
So in practice, while the texture fill rate on the Series X is a huge 380GT/s vs the PS5's 320GT/s (a difference of 60GT/s), the texture filtering rate stands at 190GT/s vs 160GT/s (a meager 30GT/s difference), which is easily compensated for by the PS5's higher-clocked caches and more consistent memory bandwidth.
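For anyone who wants to see where the 380/320 and 190/160 figures come from, here's a minimal sketch. The 4-TMUs-per-CU count and the clocks (1.825GHz fixed on Series X, 2.23GHz peak on PS5) are my assumptions from the public specs, not from this thread:

```python
# Derive texel rates from CU count and clock. Assumptions: 4 TMUs per
# CU (standard on AMD RDNA GPUs) and the public clocks -- 1.825 GHz
# fixed on Series X, 2.23 GHz peak on PS5.

TMUS_PER_CU = 4

def texel_rates(cus: int, clock_ghz: float) -> tuple[float, float]:
    """Return (fill_rate, fp16_filter_rate) in gigatexels per second.

    Fill rate = TMUs * clock; FP16 filtering runs at half that rate
    on AMD GPUs, per the post above.
    """
    fill = cus * TMUS_PER_CU * clock_ghz
    return fill, fill / 2

xsx_fill, xsx_filter = texel_rates(cus=52, clock_ghz=1.825)
ps5_fill, ps5_filter = texel_rates(cus=36, clock_ghz=2.23)

print(f"XSX: {xsx_fill:.0f} GT/s fill, {xsx_filter:.0f} GT/s filter")
# XSX: 380 GT/s fill, 190 GT/s filter
print(f"PS5: {ps5_fill:.0f} GT/s fill, {ps5_filter:.0f} GT/s filter")
# PS5: 321 GT/s fill, 161 GT/s filter
```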
Keep in mind that even the 160GT/s filter rate is a crazy high number for a console, one that is never reached in practice. Even if we assume a console game running at native 4K120 (which has never happened), we only need about 1 billion texels/s for filtering; multiply that by 8 (for 8x anisotropic filtering, which is rarely used on consoles) and you get about 8 billion texels/s, still far from the 160GT/s rate. You would become bottlenecked by memory bandwidth, ROPs, or compute long before the filtering rate.
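To make that arithmetic concrete (the 4K120 workload is hypothetical, as noted):

```python
# Texel demand for a hypothetical native 4K120 game: one filtered
# sample per pixel per frame, then scaled by 8 for 8x anisotropic
# filtering.

texels_per_sec = 3840 * 2160 * 120  # native 4K at 120 fps
print(f"base:  {texels_per_sec / 1e9:.2f} GT/s")      # base:  1.00 GT/s
print(f"AF 8x: {texels_per_sec * 8 / 1e9:.2f} GT/s")  # AF 8x: 7.96 GT/s
# Both are nowhere near the PS5's ~160 GT/s FP16 filter rate.
```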
Edit: of course, other advanced texturing techniques need more texture fill rate (such as render-to-texture and cubemaps), but even then, the available fill rate in modern GPUs is often way more than enough.
That's why I believe the PS5 can filter more textures than the Series X (even if by a small amount), or at the very least, is not at a deficit compared to the Series X.