You may be right, but if I were a betting man I'd say you're underestimating by quite a bit. 30% every 2 years, every generation, is extremely slow performance growth and would be unprecedented in GPU history.
Last generation, for example: according to TPU, in the 6.5 years between Kepler's and Turing's launches, performance grew just over 4x (GTX 680 -> RTX 2080 Ti) at 1080p alone. That works out to well over 30%, even over 50%, every 2 years, and our most recent data point, 2080 Ti -> 3090, sits at 54% in 2 years at the more appropriate 4K. If the 680 -> 2080 Ti measurement were taken at 4K, or even the middle-ground 1440p, the growth would be far higher than 4x.
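To make the compounding explicit, here's a quick sketch of the math (the 4x and 54% figures are the TPU-based numbers quoted above; the helper function name is just for illustration):

```python
def biennial_growth(total_factor, years):
    """Equivalent growth per 2-year period for a total speedup over `years`."""
    return total_factor ** (2 / years) - 1

# GTX 680 -> RTX 2080 Ti: ~4x at 1080p over ~6.5 years
print(f"{biennial_growth(4.0, 6.5):.0%}")  # ~53% every 2 years

# By contrast, a flat 30% every 2 years compounds over 6.5 years to only:
print(f"{1.30 ** (6.5 / 2):.2f}x")  # ~2.35x total, nowhere near 4x
```

So even the conservative 1080p measurement implies roughly 50%+ every 2 years sustained across that whole span.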
It's also likely all those data points were taken at each GPU's launch, which means you're also significantly underselling the performance increase: you're measuring at the point when games haven't yet started to take advantage of the new GPU's architectural features, versus an old GPU that has had time to mature and for games to exploit its full potential. For example, Ampere's performance advantage over Turing should grow with time as games make more use of RT and of its ability to run RT, Tensor, and CUDA work asynchronously. Similar to how Maxwell extended its lead over Kepler once compute became more heavily used.