NVidia Hopper Speculation, Rumours and Discussion

Discussion in 'Architecture and Products' started by xpea, Sep 21, 2021.

  1. xpea

    xpea Regular

    Real picture of the beast (H100 CNX)

    [Image: Nvidia Hopper real pic.jpeg]
     
    Lightman, sonen, Jawed and 1 other person like this.
  2. kalelovil

    kalelovil Regular

  3. nnunn

    nnunn Newcomer

    Re: shared memory, Hopper adds an optional "cluster" level to the thread hierarchy. By grouping thread blocks into clusters, the thread blocks in a cluster can read and write each other's shared memory.

    From section: [Distributed shared memory]

    "Figure 13 shows the performance advantage of using clusters on different algorithms. Clusters improve the performance by enabling you to directly control a larger portion of the GPU than just a single SM. Clusters enable cooperative execution with a larger number of threads, with access to a larger pool of shared memory than is possible with just a single thread block."​
     
    pharma likes this.
  4. trinibwoy

    trinibwoy Meh Legend

    Also…

    “The clusters in H100 run concurrently across SMs within a GPC. A GPC is a group of SMs in the hardware hierarchy that are always physically close together. Clusters have hardware-accelerated barriers and new memory access collaboration capabilities discussed in the following sections. A dedicated SM-to-SM network for SMs in a GPC provides fast data sharing between threads in a cluster.”

    On a separate note, what’s the point of grouping 2 SMs into a TPC on Hopper? TPCs historically housed the triangle-setup hardware, but Hopper doesn’t have that in every GPC, right?
     
  5. xpea

    xpea Regular

    Super interesting read: an interview with Ian Buck of Nvidia at The Next Platform:
    https://www.nextplatform.com/2022/03/24/the-buck-still-stops-here-for-gpu-compute/

    Much much more at the link
     
  6. pharma

    pharma Veteran

    Discussion of SK Hynix HBM3 memory, which will be used in Hopper.
     
    Last edited: Mar 31, 2022
  7. xpea

    xpea Regular

    Deep dive into Hopper architecture by The Next Platform:
    https://www.nextplatform.com/2022/03/31/deep-dive-into-nvidias-hopper-gpu-architecture/
    Much more at the source
     
  8. Jawed

    Jawed Legend

    "But we are really good at making big dies" - translation, "we're really good at getting people to pay for our big dies" :)

    Cerebras might have something to say about big dies: 46,225 mm², 2.6 trillion transistors.
     
  9. nutball

    nutball Veteran Subscriber

    Not really the same now, is it?

    Wafer scale is all very clever, but it's really not the same. Are they going to scale it down to reach the mass market? No, they're not. So their customer base is three digits, maybe four if there are some TLAs I've not heard of.
     
  10. pharma

    pharma Veteran

    NVIDIA will manufacture H100 GPUs on TSMC's 4 nm process. (guru3d.com)
     
    Man from Atlantis and PSman1700 like this.
  11. Kaotik

    Kaotik Drunk Member Legend

    Krteq likes this.
  12. pharma

    pharma Veteran

    It's apparently news in Korea (as of April 4) that TSMC will handle all Hopper (4 nm) and Lovelace (5 nm) production. The article highlights Samsung's woes, as Qualcomm will likely move to TSMC as well.
     
    Last edited: Apr 5, 2022
    Putas and PSman1700 like this.
  13. pharma

    pharma Veteran

    Nvidia Hopper H100 80GB Price Revealed | Tom's Hardware (tomshardware.com)
    ...
     
  14. xpea

    xpea Regular

    Man from Atlantis and pharma like this.