NVidia Hopper Speculation, Rumours and Discussion

Discussion in 'Architecture and Products' started by xpea, Sep 21, 2021.

  1. xpea

    Regular

    Joined:
    Jun 4, 2013
    Messages:
    551
    Likes Received:
    783
    Location:
    EU-China
    I didn't know where to put this, so I guess it's time to start a new topic...

    It's a new Nvidia patent about Face-to-Face dies with enhanced power delivery using extended TSVs:
    https://www.freepatentsonline.com/20210233893.pdf

    NV_FACE-TO_FACE_DIES.jpg

    For reference, previous 2017 Nvidia Multi-Chip-Module GPU whitepaper:
    https://research.nvidia.com/publication/2017-06_MCM-GPU:-Multi-Chip-Module-GPUs

    Hopper is Nvidia's new multi-die datacenter GPU, which will be sold as a module with the companion Grace ARM CPU.

    It will arrive a few quarters after AMD's MI200 and before the MI300...
     
    Lightman, pharma, Jawed and 4 others like this.
  2. How is the cooling handled? Is there a copper film between the chips with holes for the TSVs?
     
  3. Nebuchadnezzar

    Legend

    Joined:
    Feb 10, 2002
    Messages:
    1,061
    Likes Received:
    328
    Location:
    Luxembourg
    Cooling works the normal way. You're not increasing the power density; you're just moving the power rails from the logic die to the overlying power delivery die. Dissipation here should actually be better than in a normal design, which has thick bulk silicon between the FEOL and the heatspreader; that bulk silicon is ground down here.
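    To put a rough number on the thinned-silicon point above, here is a back-of-envelope one-dimensional conductive thermal resistance sketch, R = t / (k·A). All figures (die area, thicknesses) are illustrative assumptions for the sake of the comparison, not values from the patent:

    ```python
    # Illustrative 1-D thermal-resistance estimate: R = t / (k * A),
    # where t is silicon thickness, k its thermal conductivity, A the die area.
    # All numbers below are assumptions for illustration only.

    K_SILICON = 149.0   # W/(m*K), bulk silicon thermal conductivity
    DIE_AREA = 600e-6   # m^2, i.e. a 600 mm^2 die

    def thermal_resistance(thickness_um: float) -> float:
        """Conductive resistance (K/W) of a silicon slab of the given thickness."""
        return (thickness_um * 1e-6) / (K_SILICON * DIE_AREA)

    # Full-thickness backside (typical 300 mm wafer) vs. a die thinned
    # far enough to reveal TSVs.
    r_bulk = thermal_resistance(775.0)
    r_thin = thermal_resistance(50.0)

    print(f"bulk:    {r_bulk:.2e} K/W")
    print(f"thinned: {r_thin:.2e} K/W")
    print(f"reduction factor: {r_bulk / r_thin:.1f}x")
    ```

    Under these assumed numbers the conductive resistance of the silicon itself drops by the thickness ratio (775/50 ≈ 15.5x); in practice the interface materials dominate the total thermal path, so the real-world gain is smaller, but the direction matches the post's claim.
    
    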
     
  4. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    Can't work out what's novel here. Clues?

    The claims include various "symmetrical" specifications, such as:

    Claim 8 isn't necessarily truly symmetrical, but where rotational symmetry (to bring the dies face-to-face) provides contact areas on both dies that meet each other, electrical connections can be formed. Claim 13 suggests a mirror-symmetric variation.

    And other claims specify GPUs.

    All pretty weird. It's almost as if NVidia is solely trying to patent face-to-face GPUs using TSVs and that's all.
     
    CarstenS likes this.
  5. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    A Chinese source claims NVIDIA will spend a total of $7 billion on TSMC's 5nm process.

    https://wccftech.com/nvidia-spends-...or-next-gen-geforce-rtx-40-ada-lovelace-gpus/
     
    #6 DavidGraham, Dec 29, 2021
    Last edited: Dec 30, 2021
  6. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    We have no way to disentangle consumer and data centre spending by NVidia based upon such a rumour. Presumably NVidia is prioritising data centre?

    Hopper could launch in Q1 next year?
     
  7. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,398
    I'd expect both Lovelace and Hopper to use N5.
     
  8. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Nvidia's fiscal Q1? Possibly.
     
  9. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    How long has Apple had 5nm chips in production? Is there any reason to think NVidia would be far behind?
     
  10. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,398
    How many Apple N5 chips are >600mm^2?
     
  11. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    Nvidia's fiscal 2022 Q1 is long gone; Q1 FY23 is possible.
     
  12. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    Yes, obviously, since Jawed asked about Q1 next year, not 2022; fiscally, next year is 2023.

    Moneyz? And it would depend on whether or not Hopper is ready.
     
  13. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    Yeah, those billions NVidia is spending according to that claim were the reason I made the suggestion. NVidia isn't struggling to finance R&D.

    Big Lovelace is presumably going to be about 600mm² if not more, so I don't think "big chip" is a reason for this not to be Hopper in Q1, since Lovelace is due Q2/3 next year (isn't it?).

    The iPhone 12, based on TSMC 5nm, launched in October 2020, so how much more than 18 months after that are we expecting Hopper to launch on 5nm?

    So really it's a question of whether Hopper (data centre replacement for A100) appears before or after Lovelace (consumer replacement for Ampere).

    Will there be much commonality between these two? The names seem to suggest not. A100 and Ampere had chunks of common architecture, as I understand it. Hopper and Lovelace names could imply that there's very little common architecture. In which case their timelines should be separate and mainly dependent upon the fab/process.

    In the end the claimed dates (Q3 2021 and Q1 2022) for that massive spending could be interpreted as being too early for productisation/launch of Lovelace. Though I wonder why spending in Q4 2021 wasn't mentioned and whether "Q3 2021" should be "Q4 2021".

    If Hopper is multi-die (GPU chiplets, face-to-face, whatever) perhaps there's a chance that it's not 800+mm² per die.
     
  14. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,398
    Half a year can mean a pretty big difference in wafer pricing, which would make something possible that would otherwise have had to be priced at pointless levels.
     
  15. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    Agreed. But I think it's safe to assume H100 will be priced at profit++² levels :)
     
  16. CarstenS

    Legend Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,800
    Likes Received:
    3,920
    Location:
    Germany
    It's no free rein anymore in ML/HPC for Nvidia. Other players are making serious bids now, after mostly sorting out their birthing pains over the last 2-3 years.
    The main GTC in San Jose is the second-to-last week of March. Maybe Hopper will be presented there, but I would not count on immediate availability.
     
  17. pharma

    Veteran

    Joined:
    Mar 29, 2004
    Messages:
    4,887
    Likes Received:
    4,534
    I think that's possible, since the "other" players have priced their current offerings similarly, despite performance discrepancies in most applications compared with competing solutions.
     
    PSman1700 likes this.
  18. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    Well, I'll coin a phrase, "no one ever got fired for buying NVidia". Just dropping H100 into infrastructure already running A100 is such an easy sell, I would expect.

    The birthing pains may have disappeared in the slideware for competitors, but it takes a long time to translate that into sales. Sales that would hurt NVidia.

    I'll be honest, I'm not tracking contract-winners in the HPC (mostly AI) arena, my opinion is really about whether NVidia will have to lower its margins to continue its rapid revenue growth.

    In passing I heard an interesting comment: "demand for computing power in AI doubles every 3 months". I can believe it's true. Against that background why wouldn't you buy NVidia? The risks associated with 3 to 6 months of integration pain due to some other platform look pretty scary to me.
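    The quoted doubling claim can be turned into a tiny compound-growth sketch. This takes the 3-month doubling figure at face value purely for illustration; the growth factor over a period is 2^(months/3):

    ```python
    # Back-of-envelope compound growth: if demand doubles every
    # `doubling_period` months, it grows by 2**(months/doubling_period).
    # The 3-month figure is the quoted claim, not measured data.

    def demand_growth(months: float, doubling_period: float = 3.0) -> float:
        """Growth factor after `months`, doubling every `doubling_period` months."""
        return 2.0 ** (months / doubling_period)

    print(demand_growth(12))   # one year  -> 2**4 = 16x
    print(demand_growth(24))   # two years -> 2**8 = 256x
    ```

    At that pace, demand would grow 16x in a year, which is the background against which a 3-to-6-month integration delay on a rival platform looks expensive.
    
    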

    I expect NVidia will have a serious fight on its hands in 2025. Some of the techniques in AI that are coming up may actually favour the "more general purpose" architecture of GPUs. I expect over the next few years we'll see a direct dependency upon brute-force "tensor-math" become less important as sparsity and hierarchical-mesh-connectivity based techniques rise in importance (I pay a mild amount of attention towards cutting edge algorithms).

    Yes, you're right, I shouldn't count an announcement as if it was the start of sales.
     
    CarstenS and nnunn like this.