Well, the RTX 2080 does just fine with the same RAM configuration as RX 5700 XT. If the memory bandwidth is indeed the limiting factor for Navi at higher clocks, it has something to do with the RDNA architecture itself.
Before getting into a discussion, let's make sure that we are on the same page. There is no "limiting factor" brick wall when rendering a game. Various parts of the process are limited by different factors, making total performance a statistical phenomenon. Increasing ALU performance will always bring benefits, but they will diminish the further up you go, as other limitations increasingly dominate. The same goes for the other factors. If you doubled the bandwidth, you wouldn't double the performance, because your change would only affect the bandwidth-limited parts of the total process.
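To make that concrete, here's a minimal sketch of that reasoning as a toy two-bucket model (the clean split into exactly two parts and the function name are my own simplification, not how a real frame behaves):

```python
def predicted_speedup(alu_speedup, bw_fraction):
    """Toy Amdahl-style model: frame time splits into an ALU-limited
    part and a bandwidth-limited part. Raising ALU throughput only
    shrinks the ALU part; the bandwidth part stays put."""
    alu_fraction = 1.0 - bw_fraction
    return 1.0 / (alu_fraction / alu_speedup + bw_fraction)

# Even a 2x ALU increase never doubles performance unless the
# bandwidth-limited fraction is exactly zero:
print(predicted_speedup(2.0, 0.3))  # ~1.54x, not 2.0x
```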
OK, since this is the case (it is), let's start looking at the problem at hand. If we consider the RX5700XT, it has the same bandwidth available to it as the RX5700, but 18-20% higher nominal (note the word "nominal", more on that below) ALU capabilities. It provides 13% or so higher performance depending on game and settings. So if you push the ALU capabilities up another 20%, how much of an improvement would you typically expect? Well, the answer HAS to be "less than 13%", because the proportion of time spent on the bandwidth-limited parts has to increase. Just how much less will again depend on the specific settings/game.
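Plugging the Navi numbers into the sketch above (taking the 19% midpoint for the ALU uplift and the 13% measured gain, both from the figures quoted here):

```python
def fit_bw_fraction(alu_speedup, observed_speedup):
    """Invert the toy model above: solve for the bandwidth-limited
    time fraction f that reproduces an observed speedup, using
    1/observed = (1 - f)/alu_speedup + f."""
    inv_alu = 1.0 / alu_speedup
    return (1.0 / observed_speedup - inv_alu) / (1.0 - inv_alu)

f = fit_bw_fraction(1.19, 1.13)            # ~0.28 of RX5700 frame time
total = predicted_speedup(1.19 * 1.20, f)  # another +20% ALU on top
print(total / 1.13)                        # ~1.129 -> just under 13%
```

So even this crude model lands just under 13% for the next 20% of ALU, exactly the diminishing-returns shape described above.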
Two factors that make a detailed analysis tricky are that actual frequencies and frequency differences depend on game, cooler, and temperature; they are no longer fixed. Also, drivers.
Those two factors come into play when trying to figure out if, and if so to what degree, Nvidia cards are different (your statement that the RTX2080 is "just fine"). The same formal reasoning still applies: the more you increase ALU capabilities, the greater the proportion of time spent elsewhere. Diminishing returns.
The RTX2060 Super and RTX2080 have the same bandwidth, and the RTX2080's nominal (again) ALU capabilities are 47.5% higher (Founders Edition). The same caveats regarding titles and settings apply. The average performance difference is approximately 25%, using sites that run a large number of benchmarks and average them.
Using the RTX2070 Super vs the RTX2080FE, we see a difference in nominal ALU of 17% and a performance difference of 8%, again using averages of several dozen tests.
Conclusion: I just don't see much of a difference here. If anything, the data suggests that the RTX2080 is more bandwidth limited than the RX5700XT, which makes sense given its higher ALU capabilities, but the data is vague enough that drawing conclusions from a few percent either way would be folly.
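For what it's worth, running the Turing numbers through the same toy model points the same way (the fitted fractions are model artifacts, not measurements, and the two fits use different baseline cards):

```python
# 2060 Super -> 2080: +47.5% ALU, ~25% perf
f1 = fit_bw_fraction(1.475, 1.25)  # ~0.38 of 2060 Super frame time
# 2070 Super -> 2080: +17% ALU, ~8% perf
f2 = fit_bw_fraction(1.17, 1.08)   # ~0.49 of 2070 Super frame time
# Compare ~0.28 for the RX5700 above: directionally consistent with
# the RTX2080 being the more bandwidth-limited card, but the spread
# between the two fits shows how noisy these numbers are.
```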
RDNA is inherently more vulnerable to stalls caused by off-chip memory transactions compared to GCN. In GCN, one side benefit of the instruction rate of 4 cycles per wavefront is hiding RAM latency (as opposed to the native 1 instruction per wavefront per cycle mode in RDNA). AMD beefed up the whole cache architecture in Navi to account for that, but it may still be less efficient in that regard than Nvidia's Turing. Still, I wouldn't dismiss the possibility of significant driver optimization in the future. In particular, the driver gets to choose whether shaders are compiled for 32-wide or 64-wide wavefronts, the latter being theoretically less demanding on memory bandwidth. I also observed pretty poor 99th-percentile frame times in some games on the overclocked RX 5700 XT beyond 2100 MHz, which to me screams driver problems.