Nvidia Hopper Speculation, Rumours and Discussion

xpea

I didn't know where to put this, so I guess it's time to start a new topic...

It's a new Nvidia patent about Face-to-Face dies with enhanced power delivery using extended TSVs:
https://www.freepatentsonline.com/20210233893.pdf

[Figure from the patent: face-to-face dies (NV_FACE-TO_FACE_DIES.jpg)]

For reference, previous 2017 Nvidia Multi-Chip-Module GPU whitepaper:
https://research.nvidia.com/publication/2017-06_MCM-GPU:-Multi-Chip-Module-GPUs

Hopper is Nvidia's new datacenter multi-die GPU, which will be sold in a module with the companion Grace ARM CPU.

It will arrive a few quarters after AMD's MI200 and before MI300...
 
How is the cooling done? Is there a copper film between the chips with holes for the TSVs?
 
Cooling works the normal way. You're not increasing the power density, rather just moving the power rails from the logic die to the overlying power delivery die. Dissipation here should actually be better than in a normal design, where thick bulk silicon sits between the FEOL and the heatspreader; here that bulk silicon is ground down.
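To put rough numbers on that last point, here's a toy back-of-envelope comparison of the conduction resistance of full-thickness versus thinned bulk silicon. The thicknesses, die area and conductivity are my own assumptions, nothing from the patent:

# Toy thermal-resistance sketch: slab resistance R = t / (k * A)
k_si = 150.0        # W/(m*K), approximate bulk silicon conductivity (assumption)
area = 600e-6       # m^2, assuming a ~600 mm^2 die
t_full = 700e-6     # m, typical un-thinned wafer thickness (assumption)
t_thinned = 50e-6   # m, assumed thickness after grinding for TSV reveal

r_full = t_full / (k_si * area)
r_thinned = t_thinned / (k_si * area)
print(f"full-thickness bulk silicon: {r_full * 1e3:.2f} mK/W")
print(f"thinned bulk silicon:        {r_thinned * 1e3:.2f} mK/W")
# Grinding removes most of the bulk-silicon resistance between the FEOL and
# the heatspreader, which is why dissipation should be no worse, if not better.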
 
Can't work out what's novel here. Clues?

The claims include various "symmetrical" specifications, such as:

8. The semiconductor device of claim 1, wherein the first semiconductor die is fabricated using a same mask set as the second semiconductor die.

[...]

13. The semiconductor device of claim 11, wherein the first semiconductor die is fully symmetrical with the second semiconductor die.

Claim 8 doesn't by itself guarantee symmetry, but where rotational symmetry (needed to put the dies face-to-face) provides contact areas on both dies that meet each other, electrical connections can be formed. Claim 13 suggests a fully mirror-symmetric variation.
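To illustrate the geometric point with a toy example (hypothetical pad coordinates, nothing from the patent): when one of two identical dies is flipped face-down onto the other, its pads land at mirrored positions, so only the pads that the mirror maps onto other pads actually meet.

# Toy sketch: two dies from the same mask set, one flipped face-down.
# Flipping about the vertical axis sends a pad at (x, y) to (-x, y).
pads = {(-2, 1), (2, 1), (0, -1), (3, 0)}        # made-up pad positions
flipped = {(-x, y) for (x, y) in pads}           # top die, face-down
matched = pads & flipped                          # pads that actually meet

print("pads that meet:", sorted(matched))                 # the symmetric ones
print("fully symmetrical layout:", flipped == pads)       # claim-13-style case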

And other claims specify GPUs.

All pretty weird. It's almost as if NVidia is simply trying to patent face-to-face GPUs using TSVs, and that's all.
 
A Chinese source is claiming NVIDIA will spend a total of $7 billion on TSMC's 5nm process.

MyDrivers reports that NVIDIA has prepaid TSMC around $1.64 Billion US in Q3 2021 and will pay $1.79 Billion US in Q1 2022. The total long-term 'Multi-Billion' dollar deal is set to cost NVIDIA an insane $6.9 Billion US, which is much higher than what they paid last year. NVIDIA will not just use this money to procure wafer supply from TSMC but also from Samsung, but it looks like the majority of the amount will be spent on TSMC's 5nm technology.

https://wccftech.com/nvidia-spends-...or-next-gen-geforce-rtx-40-ada-lovelace-gpus/
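Taking the rumoured figures at face value, the two reported payments only cover about half of the claimed total:

# Quick sanity check of the rumoured figures (face value, $B)
q3_2021 = 1.64   # reported prepayment
q1_2022 = 1.79   # reported payment
total   = 6.9    # claimed long-term deal

paid = q3_2021 + q1_2022
print(f"paid so far: ${paid:.2f}B of ${total}B, leaving ${total - paid:.2f}B")
# -> about $3.43B so far, roughly half of the claimed $6.9B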
 
We have no way to disentangle consumer and data centre spending by NVidia based upon such a rumour. Presumably NVidia is prioritising data centre?

Hopper could launch in Q1 next year?
 
Yeah, the billions NVidia is spending according to that claim were the reason I made the suggestion. NVidia isn't struggling to finance R&D.

Big Lovelace is presumably going to be about 600mm² if not more, so I don't think "big chip" is a reason for this not to be Hopper in Q1, since Lovelace is due Q2/3 next year (isn't it?).

The iPhone 12, based on TSMC 5nm, launched in October 2020, and 18 months after that is April 2022, so how much longer than 18 months are we expecting it to be before Hopper launches on 5nm?

So really it's a question of whether Hopper (data centre replacement for A100) appears before or after Lovelace (consumer replacement for Ampere).

Will there be much commonality between these two? The names seem to suggest not. The data centre A100 and the consumer Ampere chips had chunks of common architecture, as I understand it. The Hopper and Lovelace names could imply that there's very little common architecture, in which case their timelines should be separate and mainly dependent upon the fab/process.

In the end the claimed dates (Q3 2021 and Q1 2022) for that massive spending could be interpreted as being too early for productisation/launch of Lovelace. Though I wonder why spending in Q4 2021 wasn't mentioned and whether "Q3 2021" should be "Q4 2021".

If Hopper is multi-die (GPU chiplets, face-to-face, whatever) perhaps there's a chance that it's not 800+mm² per die.
 
Big Lovelace is presumably going to be about 600mm² if not more, so I don't think "big chip" is a reason for this not to be Hopper in Q1, since Lovelace is due Q2/3 next year (isn't it?).
Half a year can mean a pretty big difference in wafer pricing, which could make something viable that would otherwise have had to be priced at pointless levels.
 
Half a year can mean a pretty big difference in wafer pricing, which could make something viable that would otherwise have had to be priced at pointless levels.
Agreed. But I think it's safe to assume H100 will be priced at profit++² levels :)
 
Agreed. But I think it's safe to assume H100 will be priced at profit++² levels :)
It's no longer free rein for Nvidia in ML/HPC. There are other players making serious bids now, having mostly sorted out their birthing pains over the last 2-3 years.
The main GTC in San Jose is the second-to-last week of March. Maybe Hopper will be presented there, but I would not count on immediate availability.
 
Agreed. But I think it's safe to assume H100 will be priced at profit++² levels :)
I think that's possible, since the "other" players have priced their current offerings in a similar fashion, despite the performance discrepancies that exist in most applications compared with competing solutions.
 
It's no longer free rein for Nvidia in ML/HPC. There are other players making serious bids now, having mostly sorted out their birthing pains over the last 2-3 years.
Well, I'll coin a phrase, "no one ever got fired for buying NVidia". Just dropping H100 into infrastructure already running A100 is such an easy sell, I would expect.

The birthing pains may have disappeared in the slideware for competitors, but it takes a long time to translate that into sales. Sales that would hurt NVidia.

I'll be honest, I'm not tracking contract-winners in the HPC (mostly AI) arena, my opinion is really about whether NVidia will have to lower its margins to continue its rapid revenue growth.

In passing I heard an interesting comment: "demand for computing power in AI doubles every 3 months". I can believe it's true. Against that background why wouldn't you buy NVidia? The risks associated with 3 to 6 months of integration pain due to some other platform look pretty scary to me.
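For scale, if that "doubles every 3 months" figure were taken literally (a big if), the implied growth is enormous:

# If demand really doubled every 3 months (quoted figure taken at face value):
months_per_doubling = 3
for years in (1, 2, 3):
    doublings = 12 * years / months_per_doubling
    print(f"{years} year(s): ~{2 ** doublings:.0f}x")
# -> ~16x per year, ~256x over two years, ~4096x over three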

I expect NVidia will have a serious fight on its hands in 2025. Some of the techniques in AI that are coming up may actually favour the "more general purpose" architecture of GPUs. I expect over the next few years we'll see direct dependency upon brute-force "tensor-math" become less important as sparsity and hierarchical-mesh-connectivity based techniques rise in importance (I pay a mild amount of attention to cutting-edge algorithms).

The main GTC in San Jose is the second-to-last week of March. Maybe Hopper will be presented there, but I would not count on immediate availability.
Yes, you're right, I shouldn't count an announcement as if it was the start of sales.
 