NVidia Hopper Speculation, Rumours and Discussion

Discussion in 'Architecture and Products' started by xpea, Sep 21, 2021.

  1. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    >reticle limit.
    Is it still monolithic when using stitching?
     
  2. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
    There's supposedly a two die product there though.
     
  3. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,210
    Hopefully it works better than the lame attempt that is MI200.
     
    PSman1700 likes this.
  4. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,400
    Likes Received:
    1,845
    Location:
    France
What's the problem with MI200?
     
  5. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
    There's also Grace which AFAIU can be coupled with GH as an MCM. Which kinda also makes it a multi-die solution if not a GPU one.
     
    DavidGraham likes this.
  6. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,210
It featured two dies, but each die works as a separate GPU: essentially two CrossFire GPUs on a single PCB, with all of the problems associated with such a configuration. It's not a true MCM design where all the chiplets work together as one coherent big GPU.
     
  7. Granath

    Newcomer

    Joined:
    Jul 26, 2021
    Messages:
    80
    Likes Received:
    81
Yeah, but that would be true for a gaming GPU.
HPC consists of thousands of such interconnected nodes, so software can and must split the workload between them. It must; without that there wouldn't be any supercomputer at all. It's a built-in feature of the field. Two distinct GPUs don't look as bad as you're trying to picture it.
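The kind of split described above can be sketched in a few lines of Python. This is purely illustrative: the `decompose` function and its even-split policy are an assumption for the example, not any vendor's actual scheduler.

```python
# Illustrative only: a 1-D decomposition that hands each device a
# near-equal slice of n_items, the way HPC codes already split work
# across nodes (and would across the two dies of a dual-die package).
def decompose(n_items, n_devices):
    """Return one (start, end) half-open range per device."""
    base, extra = divmod(n_items, n_devices)
    ranges, start = [], 0
    for d in range(n_devices):
        size = base + (1 if d < extra else 0)  # spread the remainder
        ranges.append((start, start + size))
        start += size
    return ranges

# 10 work items over 4 devices:
print(decompose(10, 4))  # [(0, 3), (3, 6), (6, 8), (8, 10)]
```

The point is that this partitioning step already exists in HPC codes, so a package that shows up as two devices just means n_devices goes up by one.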
     
  8. Qesa

    Newcomer

    Joined:
    Feb 23, 2020
    Messages:
    57
    Likes Received:
    107
It's not bad, no, but it's not worthy of AMD calling it the first MCM GPU.
     
    pharma, DavidGraham and Rootax like this.
  9. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    Exact same shit (but more kilowatts per node!) so ugh, NUMA-NUMA yay.
    Bingo.
    Oh but it is one.
    The next thingy is chiplet tho.
     
  10. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
    Nope.
     
  11. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    I've seen a suggestion that there will be a consumer version of Hopper.

    I do wonder whether Lovelace is actually a consumer GPU. There was a suggestion at one point that Lovelace is for a new Nintendo.

    NVidia's "Ampere next" and "Ampere next next" games are certainly fun...
     
  12. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
I doubt that it would make much sense as a consumer GPU, but then again, who knows what they'll do against some $5000 competitor product. A 10% win over a $1000 product at $5000 is considered a win these days, right?
     
  13. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    Yea.
    Was Gracehoppium slideware not clean 'nuff?
    You get more NVLink and you're gonna like it.
    That's a whole lineup of cookers, not a single part like DC stuff.
     
  14. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,109
    Location:
    New York
    Useful from a rack density perspective but still 2 GPUs on a stick like MI200. Wake me up when software actually treats these things as a single GPU.
     
  15. Bondrewd

    Veteran

    Joined:
    Sep 16, 2017
    Messages:
    1,682
    Likes Received:
    846
    Never, at least in DC.
    It'll be faster/sleeker/lower wattage per bit but...
...NUMA is the way of the future.
     
  16. neckthrough

    Newcomer

    Joined:
    Mar 28, 2019
    Messages:
    138
    Likes Received:
    388
    The single GPU abstraction would need to be created at some level in the software stack, because the hardware doesn't look like that any more. While providing that abstraction universally (e.g., in the driver) may be useful to get scaling for some software (e.g., legacy code), actually exposing the non-uniformity of the underlying hardware allows more sophisticated software to squeeze out efficiency. The way that silicon scaling is going, we should expect this trend to continue. It doesn't work for all workloads, but is tractable for some.

But I'll play devil's advocate for a second. We've seen this scenario (kinda) play out in the VLIW-vs-OOO/superscalar CPU space. Both architectures expose a single-threaded programming model to the high-level programmer. VLIW's approach is that hardware provides the parallel substrate and relies on an amazing (and sometimes non-existent) compiler to discover the ILP, while an OOO/superscalar processor does that in silicon. OOO/superscalars won that battle handily and dominated the general-purpose compute space, while VLIWs stayed in their niches (e.g., image-processing processors).

    So why do I expect things to be different this time? The simple answer is necessity. We desperately need that efficiency, and the foundries are running out of tricks to play with Mother Physics.
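The efficiency argument above can be made concrete with a toy cost model. The bandwidth figures below are invented for illustration (not real MI200 or Hopper numbers), and the whole function is a sketch of the idea, not a description of any real runtime.

```python
# Toy cost model (made-up bandwidth numbers, not real hardware figures):
# time to move a buffer when some fraction of accesses stay on the local
# die and the rest cross a slower die-to-die link. Software that knows
# the topology can raise local_fraction by placing data deliberately;
# a flat single-GPU abstraction cannot.
def move_time(nbytes, local_fraction, local_bw=1.0e12, cross_bw=2.0e11):
    """Seconds to move nbytes given the fraction served from local memory."""
    local = nbytes * local_fraction / local_bw
    remote = nbytes * (1.0 - local_fraction) / cross_bw
    return local + remote

gib = 1 << 30
naive = move_time(gib, 0.5)   # abstraction hides the link: half the traffic crosses it
aware = move_time(gib, 0.9)   # NUMA-aware placement keeps 90% local
print(aware < naive)  # True
```

However crude, this is the shape of the trade-off: the more the link bandwidth lags local bandwidth, the more it pays to expose the topology instead of hiding it.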
     
  17. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,393
A single-GPU abstraction doesn't make much sense to pursue for HPC applications, as these are made to scale to 100s and 1000s of GPU dies anyway.
In fact, it can be counterproductive, as the thing (s/w or h/w) providing this abstraction can get in the way of code execution and reduce the transparency of what the system is actually doing.
     
  18. neckthrough

    Newcomer

    Joined:
    Mar 28, 2019
    Messages:
    138
    Likes Received:
    388
    Certainly. But I was arguing that the asymmetries of hardware are going to be revealed to more "mainstream" datacenter applications as well, not just HPC.
     
  19. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,109
    Location:
    New York
    I agree. There’s little benefit for HPC. The real win will be for games where the programming model is not multi-GPU friendly. Maybe we don’t need it that soon and we can keep maxing out reticle limits on the next process node for a few more years.
     
  20. troyan

    Regular

    Joined:
    Sep 1, 2015
    Messages:
    603
    Likes Received:
    1,122
    Man from Atlantis and pharma like this.