You have to assume that Nvidia picked the stock clock for a reason, and that's most likely a sweet spot for efficiency. While +30% still appears plausible, it's certainly not going to happen at the same efficiency level.
I'm not sure that the chosen clocks are in the sweet spot for efficiency.
In terms of TDP (W) per billion transistors:
980: 165/5.2 = 32
1080: 180/7.2 = 25
980 Ti: 250/8.0 = 31
GP100 is compute-only, uses HBM, carries significant IO, and is not fully enabled, but is included for reference:
GP100: 300/15.3 = 20
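The ratios above are just TDP divided by transistor count; a quick sketch of that arithmetic (the wattages and transistor counts are the ones quoted in this post):

```python
# TDP (W) and transistor count (billions) for each card, as cited above.
cards = {
    "980":    (165, 5.2),
    "1080":   (180, 7.2),
    "980 Ti": (250, 8.0),
    "GP100":  (300, 15.3),  # compute-only, HBM; reference point
}

for name, (tdp_w, transistors_b) in cards.items():
    # Watts per billion transistors, rounded as in the post
    print(f"{name}: {round(tdp_w / transistors_b)} W per billion transistors")
```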
It's true that even though the process promises 2x better power efficiency at the same level of circuit performance, confounding factors like memory and other non-ASIC power consumption can muddy the comparison.
I tried to adjust for the memory subsystem. Using some GDDR5 numbers from Fury's memory comparison, and roughly in line with earlier estimates that memory takes 20-30% of the power budget (I split the difference), I used the 980 Ti's wider GDDR5 bus to derive a cost of about 63 W for a 384-bit interface.
With GDDR5-ish interfaces (adjusted for bus width), the first three come out to 24, 19, and 23. This assumes the 1080's GDDR5X bus is closer in power consumption to the 980's 256-bit GDDR5 bus than to the 980 Ti's 384-bit bus. Using the 384-bit value instead puts the 1080 at a better ~16, although that would not be flattering for GDDR5X.
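The adjustment above can be sketched as follows. This assumes the 63 W figure for a 384-bit GDDR5 interface scales linearly with bus width, which is a simplification on top of an already rough estimate:

```python
# Estimated power of a 384-bit GDDR5 interface, derived above (assumption).
BUS_384_W = 63.0

def mem_power(bus_bits):
    # Assume interface power scales linearly with bus width.
    return BUS_384_W * bus_bits / 384

def adjusted(tdp_w, transistors_b, bus_bits):
    # TDP minus estimated memory-interface power, per billion transistors.
    return (tdp_w - mem_power(bus_bits)) / transistors_b

print(round(adjusted(165, 5.2, 256)))  # 980
print(round(adjusted(180, 7.2, 256)))  # 1080, treating GDDR5X like 256-bit GDDR5
print(round(adjusted(250, 8.0, 384)))  # 980 Ti
print(round(adjusted(180, 7.2, 384)))  # 1080, pessimistic GDDR5X assumption
```

Plugging in the post's numbers reproduces the 24 / 19 / 23 figures, and the pessimistic GDDR5X assumption gives the ~16 mentioned for the 1080.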