Frenetic Pony
Veteran
You don't think that power (and ground) is supplied through the mounting screw points? Apple does that in the Mac Pro. The mezzanine connector doesn't seem heavy-duty enough to supply 300W, especially if fewer than half the pins are available for power delivery...
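For a rough sense of the currents involved (all numbers below are assumptions for illustration, not actual SXM/mezzanine connector specs), the back-of-envelope math looks something like this:

```python
# Rough sanity check on the "300W through a mezzanine connector" question.
# The voltage and pin count are made-up placeholders, not real specs.
board_power_w = 300.0      # assumed module power draw
supply_voltage_v = 12.0    # assumed supply rail voltage
power_pins = 40            # hypothetical number of pins dedicated to power

total_current_a = board_power_w / supply_voltage_v   # 25 A total at 12 V
current_per_pin_a = total_current_a / power_pins     # amps each power pin must carry

print(f"Total current: {total_current_a:.1f} A")
print(f"Current per power pin: {current_per_pin_a:.2f} A")
```

The fewer pins that are actually on the power rail, the higher the per-pin current climbs, which is the crux of the "heavy-duty enough" question.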
You could simply drop clocks and volts a little and get a correspondingly larger drop in power than the additional draw from the extra enabled functional units (along the lines of the Fury Nano).
The Fury Nano is a special bin of the Fiji GPU: low-performance/low-power silicon. This silicon can't hit the clock speeds of a normal Fury/Fury X, but it can hit much lower ones at a lot less power. Considering how small it is, AMD saw an opportunity to take what might otherwise be a useless and limited slice of their product and bin it as the Nano, a tiny high-efficiency chip, and that seems to have worked. That they charge so much for it is in part because this sort of bin is a relatively low percentage of produced and still viable chips, so even if Nvidia wanted to do the same with GP100 the volume available would be tiny.
And you can't normally expect that kind of efficiency gain just from dropping frequency on most silicon, and this goes doubly for FinFET. The exponential voltage/frequency curve works both ways: the savings are biggest right near fmax, and the further you drop frequency the smaller each additional power saving becomes, because voltage can't keep falling with it.
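A minimal sketch of that diminishing-returns argument, assuming dynamic power scales roughly as C·V²·f and using a purely hypothetical linear V-f curve with a voltage floor (none of these numbers are real Fiji or GP100 figures):

```python
# Illustration of why each extra frequency drop saves less power than the last.
# Assumes dynamic power P ~ C * V^2 * f and a made-up linear V(f) curve.
def voltage(freq_mhz, v_floor=0.8, v_max=1.2, f_min=500, f_max=1000):
    """Hypothetical V-f curve: linear between (f_min, v_floor) and (f_max, v_max)."""
    if freq_mhz <= f_min:
        return v_floor          # voltage can't drop below the floor
    slope = (v_max - v_floor) / (f_max - f_min)
    return v_floor + slope * (freq_mhz - f_min)

def rel_power(freq_mhz, f_max=1000):
    """Dynamic power relative to full speed (the capacitance term cancels out)."""
    return (voltage(freq_mhz) ** 2 * freq_mhz) / (voltage(f_max) ** 2 * f_max)

for f in (1000, 900, 800, 700, 600, 500):
    print(f"{f} MHz -> {rel_power(f):.0%} of peak power")
```

With these made-up numbers the first 100 MHz off the top cuts power by about 22 points, while the step from 600 to 500 MHz only cuts about 10 more: the big wins come from backing off the top of the curve, not from clocking ever lower.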
And of course the GP100 Tesla stuff is going to be sold out; with such a huge chip on a new node you'd expect yields to be low (which is why they branded it as a Tesla for the first run: low volume, high margins). The competition from both Nvidia itself and AMD is going to be an interesting thing to watch over the next fiscal year. Their new GPUs make their old ones outdated and far less appealing, since data centers spend a lot on cooling and power, so discounts on the GPUs themselves are of limited value. At the same time, if they can't get the volume of the new chips up, Nvidia could end up with a lot less profit overall, at least for a quarter or three.