Well, not really. Someone in another forum worked out the die size, clock frequency and average benchmark performance of the 2080 Ti. Yes, the card is faster (nothing new there), but if you calculate performance per mm², it dropped by almost 30%.
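Just to illustrate the kind of back-of-the-envelope math being referenced (the die sizes are the published figures for GP102 and TU102; the relative performance factor is an assumed placeholder, not a benchmark result, so the outcome shifts with whatever uplift you plug in):

```python
# Rough perf-per-area comparison (illustrative sketch only).
# Die sizes are published figures; RELATIVE_PERF is an ASSUMED uplift, not measured data.

DIE_AREA_MM2 = {"1080 Ti (GP102)": 471, "2080 Ti (TU102)": 754}
RELATIVE_PERF = {"1080 Ti (GP102)": 1.00, "2080 Ti (TU102)": 1.35}  # assumed ~35% faster

def perf_per_mm2(card: str) -> float:
    """Relative performance divided by die area in mm²."""
    return RELATIVE_PERF[card] / DIE_AREA_MM2[card]

baseline = perf_per_mm2("1080 Ti (GP102)")
for card in DIE_AREA_MM2:
    ratio = perf_per_mm2(card) / baseline
    print(f"{card}: {ratio:.2f}x perf/mm² vs 1080 Ti")
# The exact percentage depends entirely on the assumed performance uplift;
# a smaller uplift pushes the perf/mm² deficit toward the ~30% figure quoted above.
```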
Turing != Pascal with RT and Tensor Cores.
GP100 was worse than GP102 for gaming, too. The whole architecture is much more future-proof.
Also, DLSS has many flaws (as does TAA, by the way). There are quite good upscaling techniques on consoles with only minor flaws; I really don't know why Nvidia didn't invest in those. DLSS is really a waste of resources: it isn't that easy to implement after all (only one title so far) and the results are really mixed (ranging from "it doesn't do anything at all" to flickering to compression-like artifacts), with a heavy performance hit (comparable to TAA at 1800p vs. 1440p DLSS upscaled to 4K).
Nvidia is just trying to invent something new, something that isn't optimized for the use case, just to be first.
Innovation comes from trying. DLSS is brand new. It is the first time somebody is reconstructing images in real time without explicitly programming it.
And most upscaling techniques on consoles are still worse than DLSS, with more blur and more artifacts...