Why so? The XSX has more unified shader units and texture units. Doesn't that make it at least on par?
But Series X has more L2 and will have a higher L2 hit rate, and it has more memory BW. Texture filtering rate should be a wash.
Texture filtering (at FP16 precision) on AMD GPUs often runs at half the texture fill rate, and filtering also requires considerably more memory bandwidth and more cache usage. The Series X has 44% more CUs/TMUs than the PS5 (208 TMUs vs 144 TMUs), but only 25% more L2 cache (5MB vs 4MB), its L1 cache is smaller, and its memory bandwidth is split into a 560GB/s portion and a 336GB/s portion, while the PS5 enjoys a fixed 448GB/s at all times.
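Just to put quick numbers on that cache-per-TMU point, here's a rough sketch (it only uses the TMU counts and L2 sizes quoted above, nothing else is assumed):

```python
# Quick arithmetic on the point above: the Series X adds TMUs faster
# than it adds L2, so each TMU has less L2 behind it than on the PS5.
xsx_tmus, ps5_tmus = 208, 144
xsx_l2_kb, ps5_l2_kb = 5 * 1024, 4 * 1024     # 5 MB vs 4 MB of L2

print(f"Series X: {xsx_tmus / ps5_tmus - 1:.0%} more TMUs, "
      f"{xsx_l2_kb / ps5_l2_kb - 1:.0%} more L2")          # 44% vs 25%
print(f"L2 per TMU: Series X {xsx_l2_kb / xsx_tmus:.1f} KB, "
      f"PS5 {ps5_l2_kb / ps5_tmus:.1f} KB")                 # ~24.6 KB vs ~28.4 KB
```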
So in practice, while the texture fill rate on Series X is a huge 380GT/s vs the PS5's 320GT/s (a difference of 60GT/s), the texture filtering rate stands at 190GT/s vs 160GT/s (a meager 30GT/s difference), easily compensated for by the PS5's higher-clocked caches, larger L1, and more consistent memory bandwidth.
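For anyone wondering where those fill and filter numbers come from, here's a rough sketch. It assumes the commonly quoted GPU clocks of 1.825GHz (Series X, fixed) and up to 2.23GHz (PS5, variable), plus the half-rate FP16 filtering behavior mentioned above:

```python
# Sketch of where the fill-rate and filter-rate figures come from.
# Assumed clocks: 1.825 GHz (Series X) and up to 2.23 GHz (PS5, variable).
# TMU counts are from the post above.

def texel_rates(tmus, clock_ghz):
    fill = tmus * clock_ghz          # bilinear texel fill rate, GTexels/s
    fp16_filter = fill / 2           # FP16 filtering often runs at half rate on AMD
    return fill, fp16_filter

xsx_fill, xsx_filter = texel_rates(208, 1.825)   # ~379.6 / ~189.8 GT/s
ps5_fill, ps5_filter = texel_rates(144, 2.23)    # ~321.1 / ~160.6 GT/s

print(f"Series X: {xsx_fill:.0f} GT/s fill, {xsx_filter:.0f} GT/s FP16 filter")
print(f"PS5:      {ps5_fill:.0f} GT/s fill, {ps5_filter:.0f} GT/s FP16 filter")
```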
Keep in mind that even the 160GT/s filter rate is a crazy high number for a console, one that is never reached in practice. Even if we assume a console game running at native 4K120 (which has never happened), we only need about 1 billion texels/s for filtering; multiply that by 8 (using 8x AF, which is rarely used on consoles) and you get about 8 billion texels/s, still far from the 160GT/s rate. You would become bottlenecked by memory bandwidth, ROPs, or compute long before the filtering rate.
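Here's that 4K120 estimate written out, assuming roughly one filtered texture sample per pixel per frame (a deliberate simplification) before applying the 8x AF multiplier:

```python
# Sketch of the 4K120 texel-demand estimate from the paragraph above.
# Assumes ~1 filtered texture sample per pixel per frame, then 8x AF.

width, height, fps = 3840, 2160, 120
af_taps = 8                                   # 8x AF, rarely used on consoles

texels_per_second = width * height * fps      # ~0.995 billion samples/s
with_af = texels_per_second * af_taps         # ~8 billion samples/s

print(f"Base demand: {texels_per_second / 1e9:.2f} GTexels/s")
print(f"With 8x AF:  {with_af / 1e9:.2f} GTexels/s")
print("PS5 FP16 filter rate: ~160 GTexels/s")  # still ~20x the estimated demand
```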
Of course, other advanced texturing techniques (such as render-to-texture and cubemaps) need more texture fill rate, but even so, the fill rate available in modern GPUs is often far more than enough. That's why I believe the PS5 can filter more texels than the Series X (even if by a small amount), or at the very least, like you said, is not at a deficit compared to it.