AMD: Navi Speculation, Rumours and Discussion [2019]

Discussion in 'Architecture and Products' started by Kaotik, Jan 2, 2019.

  1. BRiT

    Since this is an AMD thread, how far behind Intel and Nvidia are on implementing PCI Express 4.0 isn't directly germane. The fact of the matter is that AMD has PCI Express 4.0 available to the public around two years after the specification was finalized. I suspect the same or a quicker turnaround on PCI Express 5.0 from AMD.
     
  2. Shaklee3

  3. Shaklee3

    It's entirely germane given that this is a GPU thread, and most people with gaming machines are on Intel platforms. So in the majority of cases, people buying an AMD GPU will not be able to use PCIe 4.0.

    Besides, my original question was about NVLink/NVSwitch competitors. PCIe 4.0 is irrelevant here because it's not even what AMD uses for inter-GPU communication.
     
  4. manux

    You might want to read about CXL: https://www.anandtech.com/show/1406...w-industry-high-speed-interconnect-from-intel

    https://www.anandtech.com/show/14213/compute-express-link-cxl-from-nine-members-to-thirty-three

    AMD is also now involved: https://www.anandtech.com/show/1466...ium-a-coherent-inteldeveloped-pcie-50based-io
     
  5. pharma

    AMD will likely get their high-speed interconnect from the Compute Express Link (CXL) consortium, which they joined last month.
    https://www.guru3d.com/news-story/amd-joins-consortium-for-cxl-interconnect-based-on-pci-e-5.html
     
  6. Shaklee3

    Thanks for the replies. It sounds like with CXL they'd get to 64 GB/s bidirectional, which is still less than half of Nvidia's. I really want AMD to succeed, but the interconnect is really important for my applications. Given the timelines of the announced roadmaps, that appears to be at least three years behind.
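    As a rough back-of-the-envelope check of that comparison (a sketch only, assuming CXL rides on a PCIe 5.0 x16 link at 32 GT/s per lane and the V100's six NVLink 2.0 links at 25 GB/s per direction each; real throughput will be lower than these peaks):

```python
# Back-of-the-envelope peak-bandwidth comparison (theoretical figures, not measured).
# Assumes CXL rides on a PCIe 5.0 x16 link (32 GT/s per lane) and that the V100 has
# six NVLink 2.0 links at 25 GB/s per direction each.

def pcie_gb_s_per_direction(gt_per_s_per_lane: float, lanes: int) -> float:
    """Peak GB/s in one direction, ignoring encoding/protocol overhead."""
    return gt_per_s_per_lane * lanes / 8  # 8 bits per byte

cxl_per_dir = pcie_gb_s_per_direction(32, 16)   # ~64 GB/s each way
nvlink2_per_dir = 25 * 6                        # 150 GB/s each way on V100

print(f"CXL over PCIe 5.0 x16: {cxl_per_dir:.0f} GB/s per direction")
print(f"NVLink 2.0 (V100):     {nvlink2_per_dir} GB/s per direction")
print(f"ratio: {cxl_per_dir / nvlink2_per_dir:.2f}")   # ~0.43, i.e. 'less than half'
```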
     
  7. Bondrewd

    CXL is dumb and doesn't support P2P either way.
    IFIS v2 in Vega 20 is already 100 GB/s per link and works in whatever topology you want, assuming you have enough ports.
     
  8. Kaotik

    Except that AMD already has its high-speed interconnect, as @Bondrewd pointed out. Also, AMD was already part of all the other "next-gen interconnect consortiums" before (CCIX, Gen-Z, OpenCAPI), so AMD joining CXL doesn't actually mean much at this point.
     
  9. Shaklee3

    The AnandTech article on the MI60 said it was 50 GB/s in each direction and a ring topology. Can you point me to somewhere that says it's what you're describing?
     
  10. Bondrewd

    Oh you bet it does.
    Like jeez, have you seen Rome?
    Which is 100 GB/s bidirectional, and IF works in whatever topology you want: rings, meshes, whatever.
     
  11. Shaklee3

    From the AnandTech article: "Notably, since there are only 2 links per GPU, AMD’s topology options will be limited to variations on rings. So GPUs in 4-way configurations won’t all be able to directly address each other". So it's clearly not rings/meshes/whatever, unless they're wrong. And if we're talking about bandwidth between cards, that's still 1/3 of the V100's 300 GB/s total. This is the point I keep coming back to: the V100 is approaching two years old now, and there will likely be an update next year with Turing professional cards. So AMD needs a huge leap in both interconnect bandwidth and topology to even compete on that front.

    Maybe they've made the trade-off that compute is more valuable than interconnect.
     
  12. Bondrewd

    That's the specific IFIS implementation, not IF in general.
    2/3 akshually, being 100 GB/s × 2 vs 50 GB/s × 6.
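    For reference, the per-card arithmetic behind that 2/3 figure, using the peak per-link numbers quoted in this thread (two Infinity Fabric links at 100 GB/s bidirectional on the MI60 vs six NVLink 2.0 links at 50 GB/s bidirectional on the V100), works out like this:

```python
# Aggregate peer-to-peer bandwidth per card, from the peak per-link figures above.
mi60_links, mi60_gb_s_per_link = 2, 100   # Infinity Fabric links, GB/s bidirectional each
v100_links, v100_gb_s_per_link = 6, 50    # NVLink 2.0 links, GB/s bidirectional each

mi60_total = mi60_links * mi60_gb_s_per_link   # 200 GB/s
v100_total = v100_links * v100_gb_s_per_link   # 300 GB/s

print(f"MI60: {mi60_total} GB/s, V100: {v100_total} GB/s, "
      f"ratio: {mi60_total / v100_total:.2f}")   # 0.67 -> the 2/3 above
```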
     
  13. pharma

  14. BRiT

    Said Footnote:

    2. As of Oct 22, 2018. Radeon Instinct™ MI50 and MI60 “Vega 7nm” technology based accelerators are PCIe® Gen 4.0* capable providing up to 64 GB/s peak theoretical transport data bandwidth from CPU to GPU per card with PCIe Gen 4.0 x16 certified servers. Previous Gen Radeon Instinct compute GPU cards are based on PCIe Gen 3.0 providing up to 32 GB/s peak theoretical transport rate bandwidth performance. Peak theoretical transport rate performance is calculated by Baud Rate * width in bytes * # directions = GB/s per card

    PCIe Gen3: 8 * 2 * 2 = 32 GB/s

    PCIe Gen4: 16 * 2 * 2 = 64 GB/s

    Radeon Instinct™ MI50 and MI60 “Vega 7nm” technology based accelerators include dual Infinity Fabric™ Links providing up to 200 GB/s peak theoretical GPU to GPU or Peer-to-Peer (P2P) transport rate bandwidth performance per GPU card. Combined with PCIe Gen 4 compatibility providing an aggregate GPU card I/O peak bandwidth of up to 264 GB/s. Performance guidelines are estimated only and may vary. Previous Gen Radeon Instinct compute GPU cards provide up to 32 GB/s peak PCIe Gen 3.0 bandwidth performance. Infinity Fabric™ Link technology peak theoretical transport rate performance is calculated by Baud Rate * width in bytes * # directions * # links = GB/s per card

    Infinity Fabric Link: 25 * 2 * 2 = 100 GB/s

    MI50 and MI60 each have two links:

    100 GB/s * 2 links per GPU = 200 GB/s
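    The footnote's formula (baud rate × width in bytes × # directions × # links) is easy to sanity-check; here's a minimal sketch that reproduces the numbers above (the helper name is just for illustration):

```python
# Peak theoretical transport rate, per the footnote:
#   baud rate (GT/s) * width in bytes * # directions * # links = GB/s per card
def peak_gb_s(baud_gt_s: float, width_bytes: int = 2, directions: int = 2, links: int = 1) -> float:
    return baud_gt_s * width_bytes * directions * links

print(peak_gb_s(8))              # PCIe Gen3 x16: 32 GB/s
print(peak_gb_s(16))             # PCIe Gen4 x16: 64 GB/s
print(peak_gb_s(25))             # one Infinity Fabric link: 100 GB/s
print(peak_gb_s(25, links=2))    # MI50/MI60, two links: 200 GB/s
print(peak_gb_s(16) + peak_gb_s(25, links=2))   # aggregate card I/O: 264 GB/s
```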
     
  15. Shaklee3

    Thanks. Is it also the case that it's NOT full mesh? It seems hard to find info on that.
     
  16. ToTTenTranz

    Is PCIe 4.0 in any of Intel's roadmaps?
    I had the idea they'd skip PCIe 4.0 and go straight to PCIe 5.0, considering their roadmap up to early 2021 seems grim as hell.
     
  17. Bondrewd

    ICX-SP (Ice Lake-SP) is definitely PCIe 4.0; dunno about Cooper Lake.
    Everything Tiger Lake should be PCIe 4.0 too.
     
  18. Pressure

    IBM POWER9 had PCIe 4 support at release.
     
  19. Frenetic Pony

    Yeah, there's a bunch of new interconnect standards. CXL seems the least interesting and most backwards-looking of all of them, which is why, when it was announced, Intel was the only actual hardware company in the "consortium".

    Regardless, I'd think the question is how many workloads even saturate PCIe 3.0 now, let alone 4.0.
     
  20. Kaotik

    Storage workloads can easily do that.
     