Before the announcement, I am going to go out on a limb here and claim that if Ada is truly a 75-billion-transistor chip compared to Ampere's 28 billion, then Ada is going to be significantly faster than 2X, maybe 2.5X or more... or Ada has another focus entirely.
Let's trace it back a little. The G80 (8800 GTX) had 680 million transistors, more than double that of the 7900 GTX (278 million), and it successfully achieved almost double the performance. Tesla (GTX 285) doubled the transistor count again to 1.4 billion, but it had a compute focus, so the transistor budget went there and the performance uplift was limited to around 50%. Fermi (GTX 580) doubled that once more to 3 billion (again with a heavy compute focus) and got around 60% more performance.
The pattern got corrected with Kepler (GTX 780 Ti), which more than doubled the budget to 7 billion while successfully almost doubling performance. Maxwell (Titan X) continued down that path and achieved 30% more performance with a mere 14% increase in budget, to 8 billion. Pascal (Titan Xp) saw a 50% budget increase to 12 billion while also achieving almost double the performance. Turing (Titan RTX) boosted the budget to 18 billion (another 50%) and achieved 40% more performance, but it had a different focus (Ray Tracing and Machine Learning), and by those metrics it often achieved triple the performance. Ampere (3090 Ti) continued down that path, increasing the budget by 50% to 28 billion to achieve 50% more performance.
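To make the pattern easier to eyeball, here's a quick back-of-the-envelope script. It uses only the rounded transistor counts and uplift figures quoted above, so treat it as a sketch of the argument, not a benchmark:

```python
# Transistor counts (in billions) and rough perf uplifts vs the
# previous flagship, using the rounded figures from this post.
gens = [
    # (name, transistors_B, perf_uplift_vs_predecessor)
    ("G80 (8800 GTX)",      0.68, 2.0),  # vs 7900 GTX at 0.278B
    ("Tesla (GTX 285)",     1.4,  1.5),  # compute focus ate the budget
    ("Fermi (GTX 580)",     3.0,  1.6),  # heavy compute focus again
    ("Kepler (GTX 780 Ti)", 7.0,  2.0),
    ("Maxwell (Titan X)",   8.0,  1.3),
    ("Pascal (Titan Xp)",   12.0, 2.0),
    ("Turing (Titan RTX)",  18.0, 1.4),  # plus the RT/ML hardware
    ("Ampere (3090 Ti)",    28.0, 1.5),
]

prev = 0.278  # 7900 GTX baseline, in billions
for name, count, perf_x in gens:
    budget_x = count / prev
    # "conversion" = how much of the budget ratio showed up as performance
    print(f"{name:22s} budget x{budget_x:.2f}  perf x{perf_x:.1f}  "
          f"conversion {perf_x / budget_x:.0%}")
    prev = count
```

Running it shows the split clearly: the compute-focused generations (Tesla, Fermi) converted only about 73-75% of their budget ratio into performance, while the gaming-focused ones landed between roughly 82% and 133% (Maxwell and Pascal actually exceeded their budget ratio, thanks to architecture and clocks).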
Data Center GPUs have forked away from consumer GPUs with the V100/A100/H100 lineup, so we won't see the budget wasted on compute again, and Ray Tracing and Machine Learning are already paid for. So if Ada is truly going from 28 billion to 75 billion, a 2.7X increase in budget (which would explain the huge uplift in power), it will either net us more than 2X the performance, or the focus this time is on something else entirely, something mysterious.
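Running the same arithmetic on the Ada rumor (75 billion is still rumor-mill territory, not a confirmed spec):

```python
# Naive extrapolation from the Ampere -> Ada budget jump, using the
# historical "conversion" rates computed above. Pure speculation.
budget_x = 75.0 / 28.0
print(f"Ampere -> Ada budget jump: x{budget_x:.2f}")  # ~2.7x
# Compute-focused gens converted ~75% of budget into perf;
# gaming-focused gens ~85-130%.
for conv in (0.75, 0.85, 1.0):
    print(f"  at {conv:.0%} conversion: ~x{budget_x * conv:.1f} perf")
```

At the conversion rates of the gaming-focused generations, that lands somewhere between roughly 2.3X and 2.7X, which is exactly the "significantly faster than 2X" territory, unless a big slice of that budget went somewhere else.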