Kudos on the effort you put in there. I feel like I should match it now. So here are the results from what you've posted above in terms of Turing vs Pascal performance:
Forza - the 2080 is between 1% and 12% faster than the 1080Ti depending on resolution, which is perfectly in line with TPU's 8% faster rating in their GPU spec database (presumably taken at the point of launch)
BFV - the 2080 is 33% faster than the 1080, which is in line with the TPU database
RDR2 - the 2080 is 15% - 26% faster than the 1080Ti here, so this one is definitely above where we would expect Turing to be in relation to Pascal
Star Wars: Squadrons - the 2080S is 14% faster than the 1080Ti - perfectly in line with the TPU database
Godfall - the 2080 is 15% faster than the 1080Ti, so a little more than the TPU DB suggests
Dirt 5 - the 2080 is 10% faster than the 1080Ti, which is in line with the TPU DB
Doom Eternal - the 2080 is 26% faster than the 1080Ti, so well above where we would expect Turing to be in relation to Pascal
WWZ - the 2080 is 4% - 9% faster than the 1080Ti, which is in line with the TPU DB
Division 2 - the 2060 is 5% - 7% faster than the 1080, which is a little above the TPU DB pegging them as even
So of the 9 games looked at, only 2 show a significant variation from what we'd expect based on launch performance. And there will of course always be outliers, especially where games take advantage of Turing's newer feature set (something I already called out in the post you were responding to).
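For reference, here's a rough sketch of the arithmetic behind that summary (Python, covering only the 2080 vs 1080Ti games since the other pairings would each need their own launch baseline; the 10% threshold for "significant" is my own arbitrary cut-off, and the ranges are just taken at their midpoints):

```python
# Observed 2080-over-1080Ti gains quoted above, compared against TPU's
# launch-day gap of ~8%. Ranges are reduced to their midpoint.
observed = {
    "Forza": (1 + 12) / 2,
    "RDR2": (15 + 26) / 2,
    "Godfall": 15,
    "Dirt 5": 10,
    "Doom Eternal": 26,
    "WWZ": (4 + 9) / 2,
}

TPU_LAUNCH_GAP = 8    # TPU pegs the 2080 ~8% ahead of the 1080Ti at launch
SIGNIFICANT = 10      # arbitrary cut-off for "significant variation"

for game, gain in observed.items():
    delta = gain - TPU_LAUNCH_GAP
    verdict = "outlier" if delta > SIGNIFICANT else "in line"
    print(f"{game:>12}: +{gain:.0f}% observed, {delta:+.0f}% vs launch gap -> {verdict}")
```

Run that and only RDR2 and Doom Eternal come out flagged, which is the "2 of 9" above (the 1080 and 2080S comparisons are judged against their own TPU baselines in the list).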
I get that you're trying to show AMD's growing performance over time vs Nvidia, but if you cherry-pick AMD-friendly games then of course you'll be able to show that. I'm sure the reverse could be shown by cherry-picking Nvidia-friendly games. That said, I won't deny that AMD consistently seems to gain ground over time, but that's different to Nvidia performance falling off a cliff like it did with Kepler (where the equivalent today would be something like the RX580 performing in line with a 2080Ti). Those gains can probably be attributed to AMD picking up more console-level optimisations than Nvidia, as opposed to the Kepler situation of its architecture simply being unsuited to modern games.
Let me remind you of what I said earlier in this post:
"It seems to me that developers and Nvidia offer good support for at least n-1 architectures which would give a typical architecture 4 years of well supported life. Pascal for example is still more than capable in any new game now a little over 4 years from it's launch. But I do expect it to start falling behind now that Ampere has launched and it's likely receiving less support from Nvidia"
It seems Cyberpunk fits that description perfectly. Pascal is now two generations behind and more than 4 years old, so we should expect some performance loss, especially in a game known to be using DX12U features, which we know Pascal lacks.
Remember, this discussion started with you claiming that Nvidia performance "falls off a cliff" after 18 months, which is what I'm disputing. I still see no evidence of that.