There are 2560×1600 numbers on this page as well. I got about 35% at 1080p and about 45% at 2560×1600, which is why I called it ~40%.
But look at it this way: the best GK110-based Tesla, the K20X, has 2688 shaders (14 of the chip's 15 SMXes enabled) at 732MHz, for a TDP of 235W. Let's assume this TDP would hold in games, not just in compute workloads that don't make much use of the dedicated graphics hardware.
NVIDIA could enable the remaining SMX on a GeForce card. That may seem unlikely, since low-volume, high-margin products like Teslas are exactly where you'd expect them to enable everything they can; but perhaps they're disabling one SMX more for power than for yields, or for other financial/stock-management reasons.
Now, let's say NVIDIA manages to enable this remaining SMX, and let's say it was disabled for yields, not power, so that enabling it does not cause a super-linear power increase. Enabling that additional SMX alone takes us to about 252W with linear scaling (15/14 × 235). But a GPU is more than SMXes, so the real increase should be somewhat less than linear, leaving a little headroom to raise the clock too, say to 750MHz. Both NVIDIA and AMD seem to agree that 250W is the highest acceptable TDP for a single-GPU card, so I'll stop there.
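For the curious, here's the same back-of-envelope power math in Python. It just encodes the linear-scaling assumption above; none of these numbers are measurements.

Code:
# Back-of-envelope TDP estimate, assuming board power scales
# linearly with enabled SMX count (an assumption, not a measurement).
K20X_TDP_W = 235   # Tesla K20X board TDP
SMX_ENABLED = 14   # SMXes enabled on the K20X
SMX_TOTAL = 15     # SMXes on a full GK110 die

full_die_tdp = K20X_TDP_W * SMX_TOTAL / SMX_ENABLED
print(f"Linear-scaling TDP with all 15 SMXes: {full_die_tdp:.0f} W")  # ~252 W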
Comparing this to the GTX 680, we get approximately:
Code:
(shader ratio) × (clock ratio)
(2880/1536) × (750/1070*) ≈ 1.31, or a ~31% improvement.
Maybe I'm being too conservative with clocks; maybe NVIDIA will manage to go a bit higher, perhaps with Turbo. In that case the theoretical improvement is more like 35%, or even 40%. And that's an upper bound, assuming ideal scaling.
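To show how sensitive the estimate is to the assumed clock, here's the same shaders × clocks arithmetic over a few GK110 clocks. Only 732MHz is an actual spec (the K20X); the higher values are hypothetical Turbo targets, not leaks.

Code:
# Theoretical speedup over the GTX 680 as a function of the assumed GK110 clock.
GTX680_SHADERS = 1536
GTX680_CLOCK = 1070.0   # MHz, typical sustained Turbo (see footnote)
GK110_SHADERS = 2880    # 15 SMXes × 192 shaders

for clock in (732, 750, 780, 800):   # 732 = K20X spec; the rest are guesses
    speedup = (GK110_SHADERS / GTX680_SHADERS) * (clock / GTX680_CLOCK)
    print(f"{clock} MHz: {speedup:.2f}x, or {(speedup - 1) * 100:.0f}% faster")

# Prints 28%, 31%, 37% and 40% faster, respectively.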
I don't think it's realistic to expect it to be anywhere near 50% faster.
*680s tend to run at rather high Turbo clocks most of the time, hence 1070MHz rather than the 1006MHz base clock.