Sticking to the rumour, it looks like four on-package GMI links with a total of 100 GB/s of bandwidth. That would be 25 GB/s per link, and assuming each link is a bi-directional 16-bit SerDes, the data rate would be 6.25 GT/s.
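Spelling that arithmetic out, since the 6.25 GT/s figure only works if the 25 GB/s is both directions combined (a quick Python sketch; the 100 GB/s total, four links, and 16-bit width come from the rumour, the bidirectional split is my assumption):

```python
# Back-of-the-envelope check of the rumoured GMI numbers.
TOTAL_BW_GBS = 100        # rumoured aggregate bandwidth across all links, GB/s
NUM_LINKS = 4             # rumoured number of on-package GMI links
LINK_WIDTH_BITS = 16      # assumed SerDes width per direction

per_link = TOTAL_BW_GBS / NUM_LINKS          # 25 GB/s per link
per_direction = per_link / 2                 # 12.5 GB/s each way, if 25 GB/s is bidirectional
bytes_per_transfer = LINK_WIDTH_BITS / 8     # 2 bytes move per transfer
data_rate_gts = per_direction / bytes_per_transfer

print(f"{per_link:.0f} GB/s per link -> {data_rate_gts} GT/s at {LINK_WIDTH_BITS} bits wide")
# 25 GB/s per link -> 6.25 GT/s at 16 bits wide
```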
That would be in the range AMD products have offered before, going by the link bandwidth of AMD's Opterons (HyperTransport 3.1 runs a 16-bit link at 6.4 GT/s, roughly 25.6 GB/s bidirectional). It would mean no significant increase, or rather a slight decrease in GT/s, since 2011, however.
I have been trying to puzzle through whether the HPC APU is the same thing as this MCM package. An MCM already stretches the definition of an APU, and it would do so without the interposer that AMD's "dis-integration" future would have predicted.
Combining the various features gives a hefty aggregate IO count: 4 DDR4 channels, 64 PCIe lanes, and then the 4x 25 GB/s GMI links, which would be IO on both the CPU and GPU chips.
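To put a rough number on "hefty", here is a tally of the data-level signals those features imply. The 72-bit DDR4 channel width (64-bit data plus ECC) and the 16-bit GMI width are my assumptions, and this ignores address/command, clocks, and power:

```python
# Rough tally of the data-level IO implied by the rumoured feature set.
ddr4_channels = 4
ddr4_data_bits = 72        # assumed: 64-bit data + 8-bit ECC per channel
pcie_lanes = 64            # one differential pair per direction per lane
gmi_links = 4
gmi_width_bits = 16        # assumed per-direction SerDes width

ddr4_signals = ddr4_channels * ddr4_data_bits   # 288 single-ended data pins
pcie_pairs = pcie_lanes * 2                     # 128 differential pairs (TX + RX)
gmi_pairs = gmi_links * gmi_width_bits * 2      # 128 differential pairs, if SerDes-style

print(f"DDR4 data pins: {ddr4_signals}, PCIe pairs: {pcie_pairs}, GMI pairs: {gmi_pairs}")
```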
Keeping the PHYs matched, which the slide already hints at by saying one PCIe section can be repurposed for other IO standards that have aligned with PCIe, could keep that much perimeter real estate from becoming single-purpose. Perhaps some of the advances made over the years could also cut down the physical investment by raising the speed and reducing the link width.
The server CPU could get away with that much IO, since it could reuse the links for a multi-socket solution.
What a discrete Greenland would do with enough IO to exhaust one side of a Fiji die is uncertain.
Where the 64 lanes of PCIe go in addition to all that is not clear to me.
Could just be more lanes. PCIe is very well designed and has no obvious flaws, there is plenty of good existing controller IP, and the licensing allows it to be easily modified into a custom bus. Because of this, a lot of modern custom buses are just PCIe under a different name. Unlike in years past, there is very little to gain from the work required to design something new.
Some years ago, AMD briefly roadmapped coherence over PCIe, and then silently dropped it. Perhaps this is a return of sorts? Even without adopting PCIe, a lot of the same physical design decisions get made for the same reasons.
NVLink seems to share some of those decisions, and it bumps the speed up by not having to be as physically accommodating as PCIe must be.
Perhaps GMI's interface has different capabilities depending on what it has to traverse. An MCM package could give some speed benefits over a connection that has to reach out to an expansion bus.