You mean Nvidia has given numbers for a next-gen on-package interconnect that exists in a paper, or perhaps in a lab, that is much more power-efficient than a solution AMD has had in a final product sitting on shelves for over half a year, one that uses a two-year-old manufacturing process and is meant for CPUs rather than GPUs?
Early disclosures and patents go back to 2013, with at least a physical demonstration of the concept at 28 nm.
http://research.nvidia.com/publicat...ngle-ended-short-reach-serial-link-28-nm-cmos
I've seen brief references to it a few times, although the most recent is Nvidia's MCM GPU paper.
I am not actually sure what Nvidia has proposed is sufficient for an EPYC-style, seamlessly operating MCM GPU. What Nvidia proposed is focused on compute and often needs significant adjustments to the software being run, despite how much of the hardware was changed to minimize the impact. Graphics would be less consistent and more difficult to distribute, and Nvidia focused on compute workloads.
I've said earlier in the thread that AMD's aspirations may be more consistent with something post-Navi or maybe even post-Next Gen.
Within the scope of promises, Nvidia's 2017 paper cited more tangible figures and design points, such as an upcoming hardware node and an interconnect with some history of physical demonstration. It was compared against near-term architectures, or reasonable extrapolations one or two generations out, and came in before 2020.
It has features that match EPYC more closely than what AMD has shown plans for, with AMD counting on interposers and more complicated variations of the tech, whereas Nvidia's method works with organic substrates and with narrower widths than EPYC's MCM links.
In other instances, tech demonstrations for chip interconnects with multi-year lead times have given ball-park figures for what was eventually realized.
Inter-socket interconnects like HyperTransport had technology demonstrators five years ago that reached 11 pJ/bit, which EPYC's xGMI managed to get down to 9 pJ/bit by 2016/2017.
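To put those pJ/bit figures in watts: link power is just energy-per-bit times bits-per-second. A minimal sketch, where the 11 and 9 pJ/bit numbers come from the post above but the 42.6 GB/s per-link bandwidth is an illustrative assumption, not a quoted xGMI spec:

```python
# Link power at a given efficiency (pJ/bit) and bandwidth (GB/s).
# 11 and 9 pJ/bit are the figures cited above; 42.6 GB/s is an
# assumed per-link bandwidth for illustration only.

def link_power_watts(pj_per_bit: float, gbytes_per_sec: float) -> float:
    """Power = energy-per-bit * bits-per-second."""
    bits_per_sec = gbytes_per_sec * 8e9   # GB/s -> bit/s
    return pj_per_bit * 1e-12 * bits_per_sec

demo_2012 = link_power_watts(11, 42.6)   # tech-demonstrator figure
epyc_xgmi = link_power_watts(9, 42.6)    # EPYC xGMI figure
print(f"{demo_2012:.2f} W vs {epyc_xgmi:.2f} W per link")
# prints: 3.75 W vs 3.07 W per link
```

A couple of watts per link, times many links per package, is why the roughly 2 pJ/bit gap between the demonstrator and the shipping product matters at the power budgets being discussed.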
The physical and cost constraints for interconnects seem to enforce a more gradual rollout of technologies.
Perhaps it's been a feint by AMD, but the promised integration and next-generation communication methods have been focused on interposer tech or unspecified link technologies in an Exascale context (post-2020). There are some papers on interposer-based signalling that could significantly undercut Nvidia's method, but the companies/laboratories with papers on that don't seem to be aligned with AMD and EPYC doesn't use interposers.
There are some upcoming 2.5D integration schemes from partners like Amkor, although those conflict with AMD's plans since they are working to avoid interposers rather than creating active ones, and may be too late for Navi.
So the logic here is AMD will keep Infinity Fabric's MCM interconnects stagnant regarding power-per-transferred bit?
They'll just copy/paste what they used in an MCM CPU solution that was developed sometime in 2014-2015 for a new arch launching in 2017, and then apply it directly to a 2018 GPU... because they haven't released a paper saying otherwise?
Presentation slides for EPYC were cited as a path for Navi. AMD's CPUs are probably targeting PCIe 4.0 in 2020, although that only gives a 2x improvement. AMD has so far preferred to keep its memory, package, and inter-socket bandwidths consistent, and its next sockets are coming out in that time frame.
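For reference, the 2x figure falls straight out of the public per-lane rates: PCIe 3.0 runs 8 GT/s and PCIe 4.0 runs 16 GT/s, both with 128b/130b encoding. A quick sketch of raw x16 link bandwidth per generation:

```python
# Raw bandwidth of a PCIe x16 link per generation.
# Gen3 signals at 8 GT/s, Gen4 at 16 GT/s, both using 128b/130b encoding.

def x16_gbytes_per_sec(gt_per_sec: float) -> float:
    lanes = 16
    encoding = 128 / 130                       # 128b/130b line-code overhead
    return gt_per_sec * encoding * lanes / 8   # GT/s -> GB/s across x16

gen3 = x16_gbytes_per_sec(8)    # ~15.75 GB/s
gen4 = x16_gbytes_per_sec(16)   # ~31.5 GB/s
```

So a Gen4 x16 link roughly doubles Gen3's ~15.75 GB/s to ~31.5 GB/s each way, a step improvement rather than the order-of-magnitude efficiency gains being debated above.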
Navi, per AMD's revised roadmap, doesn't have until then.
Vega 20 has slides indicating it gets PCIe 4.0, and if they're accurate the slides show xGMI comes into play. Those steps aren't close to bridging the gap between today and Navi's delayed launch in the year following Vega 20.
Nvidia's latest Titan V GPU doesn't even have SLI support, meaning every GPU IHV is going that way.
If EPYC's overheads are being cited, then it should be noted that EPYC doesn't go that way.
Raja took a huge upgrade in his position from head of RTG at AMD (market cap $10B, results just starting to go from red to black) to Chief Architect at Intel (market cap >$200B, tens of billions of revenue exceeding their records YoY), but somehow you're trying to make it sound like a demotion?
Before the "sabbatical" that presaged his leaving AMD, Raja's statement was that he would come back in a different role, with a more focused set of responsibilities.
The people most familiar with Koduri's work and Navi did not plan on allowing him to have as much autonomy going forward, and eventually his employment ended. There are multiple ways to interpret this. Perhaps Koduri had lost the confidence of those internal to AMD, or perhaps Koduri felt what AMD had to offer him and his plans was insufficient.
There is evidence for both:
Koduri was effectively being demoted, and there are rumors of significant clashes between him and Su.
There are statements to the effect that AMD de-emphasized graphics and rumblings to the effect that GPU development had been gutted.
The excuses for Vega's significant teething pains play into both readings: one or both parties could not deliver a fully-baked result, or they were actively trying to abandon each other.
Not the best way to use one side's tech on the other.
And gone entirely?
He was hired to start developing a high-end discrete GPU at Intel. How exactly is this gone entirely?
Unless Intel buys RTG, he's gone from Navi's development pipeline. Intel didn't hire Navi.
My impression is that both AMD and Koduri's positions and visions appear to be more modest than an EPYC-style Navi.