Actually, yes. I expect it is purely for power; after all, that's 400W / ~10A more than a regular PCIe slot could safely handle. There is no other power connector on that PCB, and strictly splitting power delivery from data is a sensible choice. My guess: the wide one is the 12V rail, while GND is split between the original PCIe connector and the narrow one.
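For reference, the quick math (the 12V rail is my guess from above; the 5.5A rating of a standard slot's 12V pins is per the PCIe CEM spec):

```python
# Back-of-the-envelope: current implied if the extra 400W of power
# delivery rides on a 12V rail (my assumption from above).

SLOT_12V_AMPS = 5.5      # what a standard x16 slot's 12V pins are rated for (66W)
EXTRA_POWER_W = 400.0    # additional power the second connector would carry

amps = EXTRA_POWER_W / 12.0
print(f"{EXTRA_POWER_W:.0f} W @ 12 V -> {amps:.1f} A "
      f"(vs. {SLOT_12V_AMPS} A through the regular slot)")
# -> 33.3 A; far too much for the data connector, so dedicated
#    power pins are the obvious choice.
```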
EDIT: Nope, looks like you are right. There are individual lanes on the second connector too. That doesn't add up in terms of PCIe lanes though: even with the 64-lane CPU, using all of the lanes just for the GPUs (in the 4x GPU configuration) would leave a major bottleneck for storage.
EDIT2:
In terms of advertised specs, that system appears to be designed for a throughput of <=180 fps @ 8K (the frame rate wasn't specified, so it may actually just be 3x24 fps, or worst case 3x60 fps). That's just about a single PCIe 3.0 x16 port's worth of bandwidth, uncompressed or with lossless compression. That means I actually doubt there are more than 16 lanes per MPX module, simply because there is hardly any use case for streaming more than that off-GPU in uncompressed form. Likewise, when abusing the 3 additional GPUs solely as decoder cards alongside a dedicated rendering card: even if you were to route all traffic via the CPU instead of the IF link, there would still be sufficient throughput.
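The math, roughly (bit depth and chroma subsampling are assumptions on my part; 10-bit 4:2:2 is a plausible capture format):

```python
# Rough check: 3 x 8K streams at 60 fps (= 180 fps total) vs. a
# PCIe 3.0 x16 link. Assumes 10-bit 4:2:2, i.e. 20 bits per pixel.

width, height = 7680, 4320              # 8K UHD
fps_total = 3 * 60                      # worst case above: 3 streams @ 60 fps
bytes_per_pixel = 2.5                   # 10-bit 4:2:2

video_gbs = width * height * bytes_per_pixel * fps_total / 1e9
link_gbs = 16 * 8e9 * (128 / 130) / 8 / 1e9  # 16 lanes, 8 GT/s, 128b/130b

print(f"uncompressed video: {video_gbs:.1f} GB/s")  # ~14.9 GB/s
print(f"PCIe 3.0 x16 link:  {link_gbs:.1f} GB/s")   # ~15.8 GB/s
```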
In terms of probable routing on the mainboard, the slots blocked by the MPX modules are probably x8 electrical, for a full x8/x8 setup if no MPX modules are being used. That also speaks for the MPX modules only using a dedicated 16 lanes each.
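Tallying the lane budget under that assumption (the 16-lanes-per-module figure is my guess, not a confirmed spec):

```python
# Lane budget on a 64-lane CPU for the two configurations discussed.

CPU_LANES = 64

# EDIT scenario: 4 GPUs at x16 each eats the whole budget.
print("4 x16 GPUs:", CPU_LANES - 4 * 16, "lanes left for storage/IO")      # 0

# EDIT2 scenario: 2 MPX bays with a dedicated x16 each; the blocked
# slots are wired x8 electrical and become usable (x8/x8) without MPX.
mpx_lanes = 2 * 16
print("2 x16 MPX: ", CPU_LANES - mpx_lanes, "lanes left for other slots")  # 32
```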