I thought the general consensus was that the PS5 would be running at or near its max GPU clock almost all of the time. And yes, the 2080 will boost higher than its rated clock, but not to a crazy degree. Based on
this random review I googled, you might expect on average something in the region of 1825MHz, which would give you around 11TF, or about 7% more compute/texturing than the PS5, but still only 84% of the fill rate and equal memory bandwidth.
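For anyone who wants to sanity-check that back-of-envelope maths, here's a rough sketch of where those figures come from. The clocks are assumptions (2080 at a sustained ~1825MHz boost, PS5 at its 2.23GHz cap); unit counts are the public specs.

```python
# Rough paper-spec comparison. Clocks are assumptions: RTX 2080 at an assumed
# ~1825MHz sustained boost, PS5 at its 2.23GHz clock cap.
def tflops(shaders, clock_ghz):
    # 2 FLOPs per shader per clock (FMA), result in TFLOPS
    return shaders * 2 * clock_ghz / 1000

def gpixels(rops, clock_ghz):
    # pixel fill rate in GPix/s
    return rops * clock_ghz

ps5     = dict(shaders=2304, rops=64, clock=2.23)
rtx2080 = dict(shaders=2944, rops=64, clock=1.825)

print(tflops(rtx2080["shaders"], rtx2080["clock"]))   # ~10.7 TF, i.e. "around 11TF"
print(tflops(ps5["shaders"], ps5["clock"]))           # ~10.3 TF
print(gpixels(rtx2080["rops"], rtx2080["clock"]) /
      gpixels(ps5["rops"], ps5["clock"]))             # ~0.82, close to the 84% figure
```

The exact percentages shift a little depending on which sustained clocks you assume, but the overall picture doesn't change.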
While the bandwidth contention is definitely a thing there, I don't think there's any evidence of Turing using its bandwidth more effectively than RDNA1. The 5700XT, for example, is able to outperform the 2070 with the same 448GB/s, and can often compete with the 2070S, which has the same bandwidth as well. RDNA 2 uses far less main VRAM bandwidth for similar or better performance, but obviously makes up for that with IF, so comparisons there aren't much help.
The summary of all this, though, is that even taking all the above into account, the PS5 is pretty much a 2080-level GPU on paper (winning some/losing some, but only by small margins). Now compare that to the 2070, which is often held up as the PS5 equivalent, and you can see how it's far more accurate to compare it to the 2080 on paper. The 2070 has only 73% of the PS5's fill rate, compute and texturing throughput, while having 9% more geometry throughput and the same memory bandwidth. Even the 2070S has only 79% of the fill rate and 88% of the compute/texture throughput.
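Same sketch extended to the 2070/2070S. Assuming their official boost clocks (1620MHz and 1770MHz) and the PS5 again at its 2.23GHz cap, you land roughly on the percentages above:

```python
# Paper-spec ratios vs PS5, assuming official boost clocks for the 2070/2070S
# (1620MHz / 1770MHz) and the PS5 at its 2.23GHz cap. Unit counts are public specs.
cards = {
    "PS5":   dict(shaders=2304, rops=64, tmus=144, clock=2.23),
    "2070":  dict(shaders=2304, rops=64, tmus=144, clock=1.62),
    "2070S": dict(shaders=2560, rops=64, tmus=160, clock=1.77),
}

ps5 = cards["PS5"]
for name in ("2070", "2070S"):
    c = cards[name]
    compute = (c["shaders"] * c["clock"]) / (ps5["shaders"] * ps5["clock"])
    fill    = (c["rops"]    * c["clock"]) / (ps5["rops"]    * ps5["clock"])
    texture = (c["tmus"]    * c["clock"]) / (ps5["tmus"]    * ps5["clock"])
    print(f"{name}: {compute:.0%} compute, {fill:.0%} fill, {texture:.0%} texture vs PS5")
# 2070:  ~73% compute, ~73% fill, ~73% texture
# 2070S: ~88% compute, ~79% fill, ~88% texture
```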
So being surprised that the PS5 can trade blows with the 2080 doesn't make much sense to me. It's more surprising that it doesn't do so very often. The 2080Ti, on the other hand, is obviously in another league.
That's kind of the opposite of my point. On paper they're very comparable, but in reality, we don't often see the PS5 get up to 2080 levels of performance. But when it does, everyone acts surprised.
All of the above assumes raster only btw. No RT in play.