I see your point: AMD targeted perf/W on HPC while Nvidia targeted perf/W on gaming (and related compute workloads). Maxwell doesn't target HPC.
I don't know if AMD specifically cared about HPC perf/W. (Though I doubt it.) I'm just acknowledging that they seem to have an edge in that respect compared to Kepler. We don't know how it would compare against an FP64 enabled Maxwell because such a thing doesn't exist.
The HPC numbers are interesting by themselves if you care about perf/W in HPC, but I have no clue what to do with those numbers when talking about desktop gaming or even desktop compute. There is only a 10% difference in perf/W between the AMD-based and Nvidia-based supercomputers.
Is that because of silicon perf/W? Judging by the perf/W numbers at hardware.fr, it shouldn't be. Is it because they have a different power delivery architecture? We're talking megawatts here. Is it because the cooling requires more power? Is it because the AMD system is specifically tuned for perf/W (e.g. lower clocks) while the Nvidia one is tuned for maximum performance? Is AMD simply better at FP64 than Nvidia?
A 10% difference is too small to draw conclusions from when there are this many variables that we don't know and have no expertise in.
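To make that concrete, here's a toy back-of-the-envelope model (every number in it is invented for illustration, not a measurement of either machine) showing how facility-level overheads alone can flip which system looks better at the wall, even when one chip has a genuine 10% silicon perf/W edge:

```python
# Toy model: system-level overheads vs. a ~10% silicon perf/W gap.
# All numbers are hypothetical illustrations, not real measurements.

def system_perf_per_watt(perf, gpu_power, overhead_power, cooling_fraction):
    """Perf/W measured at the wall: GPU power plus fixed overhead
    (CPUs, RAM, network, storage), scaled up by cooling power
    expressed as a fraction of the IT load."""
    wall_power = (gpu_power + overhead_power) * (1 + cooling_fraction)
    return perf / wall_power

# Hypothetical machine A: 10% better silicon perf/W, heavier cooling.
a = system_perf_per_watt(perf=1.10, gpu_power=1.0,
                         overhead_power=0.5, cooling_fraction=0.30)

# Hypothetical machine B: worse silicon, but a leaner facility.
b = system_perf_per_watt(perf=1.00, gpu_power=1.0,
                         overhead_power=0.5, cooling_fraction=0.15)

print(f"A: {a:.3f} perf/W at the wall")  # worse despite the better chip
print(f"B: {b:.3f} perf/W at the wall")
```

With these made-up numbers, machine B ends up ahead at the wall even though machine A's silicon is 10% more efficient, which is exactly why a 10% gap between whole supercomputers tells you almost nothing about the chips.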
Imagine Anandtech reviewed Kepler and GCN, but the setups had different PSUs, different cooling methods (say, water vs. convection only), different CPUs, different amounts of RAM, and different amounts and types of storage (say, SSD vs. mechanical HDD). One GPU is overclocked and the other isn't. One is in SLI and the other isn't. They run exactly one benchmark, and they only measure wall power of the whole system.
And then they would conclude that one GPU architecture has 10% better perf/W.
An interesting data point on its own if you care about comparing complete machines you can buy from Alienware vs Northwest, but useless when discussing the minutiae of the GPUs themselves.