Speculation and Rumors: Nvidia Blackwell ...

ROI is missing the point; what matters is what it could have been done for versus what it was actually done for. Spending 10x as much as needed is an "opportunity cost": they let 9 billion slip away for no reason, and that's the sort of thing companies on the downslope do. The prime example of the opposite is Apple, which has spent the past decade acting as if it weren't already at, or near, the top. Tim kept costs down, continued to wring every penny he could out of consumers, and was rewarded for it with Apple contending for the title of biggest publicly listed company around.

Nvidia paying out million-dollar-plus salaries to employees is great for the employees. But by accounts of people working with them, it's made some of those same managers and senior employees tend toward slacking off and not caring. They've got their houses paid off and enough in stock options to retire today, so what do they care whether the company succeeds a bit more or a bit less? They don't anymore, yet these are the very people Nvidia would be relying on to keep growing.

Which is why I'm going to guess consumer Blackwell isn't going to be super impressive either: enough to fight off AMD's heavily truncated RDNA4, while Intel is still struggling with software and perception on Battlemage. Not that they're going to collapse quite yet (let's see what RDNA5 brings), but as a two-year update it's probably not going to be the most impressive thing around.

You’re jumping to a lot of conclusions there based on no hard data. Nvidia didn’t pay out million-dollar salaries; the shares that employees already own appreciated significantly. The whole lazy-rich-employee thing is probably real, but we have no idea if there will be any tangible impact on their roadmap. Lots of companies have made employees rich without crashing and burning afterwards.
 
1. Don't believe the published R&D cost. It's CEO-math marketing BS; he put stuff in there that doesn't belong.

2. I can say from first-hand experience that Jensen is the most demanding CEO I have ever met. And from the feedback I get these days, he is pushing the company like never before: he is accelerating all product development and giving higher targets to every department. As a consequence, Nvidia is firing on all cylinders and most employees are doing overtime, to the point that Rubin DC will tape out in August while Blackwell won't even be shipping yet. Honestly, Jensen is very much in an Andrew Grove "Only the Paranoid Survive" moment, because he knows that everybody and his dog wants a piece of his AI lunch.
 
And that's what AMD is missing.
 
You think there’s a chance Nvidia will waste precious CoWoS capacity on a lowly 5090? Seems unlikely, especially if there’s no competition in the high end.

The margins on the ProViz stuff are nice, but probably not as nice as AI. Of all the rumored reasons for canceling RDNA 4, the most reasonable is that AMD didn’t want to burn scarce CoWoS capacity on consumer chips. B200 should be a good enough test bed for MCM coupling, and HPC workloads are more MCM-friendly anyway. I don’t see the need to rush into a super expensive and complicated MCM graphics flagship right now, given the absent competition and the opportunity cost of allocating CoWoS capacity away from AI.

CoWoS capacity is set to increase significantly this year though. As per reports from Trendforce, TSMC is set to expand capacity by 150% by Q4 - https://www.trendforce.com/news/2024/03/13/news-tsmc-reportedly-sensing-increased-orders-again-cowos-production-capacity-surges/

Probably planned as per customer requirements, so it might well be possible.
 
And that capacity could easily be eaten up entirely by B100/200 if Nvidia could have their way, given the stupid levels of demand for AI horsepower right now.
 
"Expand by 150%" is incorrect though. It's either "expand to 150%" or "expand by 50%".
Yes I know, I'm just saying people make this mistake even though they mean the same thing, which is why I said Degustator is technically correct. I'm trying to avoid further confusion or arguing over this, though that's obviously not working. lol
 
By 50% unless anything's changed.
Based on the figures quoted in the article, it is actually around 150% or more (though you have to reference an earlier article as well - https://www.trendforce.com/news/202...kor-ase-also-enter-advanced-packaging-for-ai/ ). It says capacity was 14-15k wafers per month in Dec 2023 and was to increase to ~33k-35k per month in Q4'24, which the newer article revises to ~40,000 wafers per month by Q4'24. That works out to an increase of roughly 170-180%.
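As a quick sanity check, here's a minimal back-of-the-envelope sketch in Python, using the wafer-per-month figures quoted from the TrendForce reports above:

```python
# Sanity-check the CoWoS capacity growth figures quoted above
# (wafers per month, per the linked TrendForce reports).
baseline = (14_000, 15_000)         # Dec 2023 capacity range
targets = {
    "original Q4'24 plan": 34_000,  # midpoint of the ~33k-35k range
    "revised Q4'24 plan": 40_000,
}

for label, target in targets.items():
    for base in baseline:
        growth_pct = (target / base - 1) * 100
        print(f"{label}: {base:,} -> {target:,} wafers/month = +{growth_pct:.0f}%")
```

Depending on which end of the Dec 2023 range you start from, the revised plan is a 167-186% increase, i.e. capacity growing to roughly 2.7-2.9x its starting level.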
CoWoS is the biggest capacity limitation at the moment, so it would certainly help Nvidia and other companies as well. Still, an increase on that scale is large enough that it should satisfy AI demand and leave some capacity over for consumer products too.
 
Fan out packaging enables chip manufacturers to remove faulty chips from the packaging process. FOPLP, or fan out panel level packaging, is an advanced FOWLP variant. The primary difference between wafer level and panel level packaging is that instead of reassembling the cut chips on a wafer, they are reassembled on a larger panel. This allows manufacturers to package a much larger number of chips, reducing the cost of the packaging process. It also improves packaging efficiency as chips on the edges of the panel can be packaged as well.
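To put a rough number on that edge-efficiency point, here is a minimal sketch (the 510 mm x 515 mm panel format and the 10 mm x 10 mm package size are illustrative assumptions; real panel formats and keep-out rules vary by vendor):

```python
import math

# Rough site-count comparison: 300 mm wafer vs. fan-out panel.
# The 510 x 515 mm panel and the 10 x 10 mm package size are
# assumptions for illustration only; vendor formats vary.
WAFER_DIAMETER = 300.0            # mm
PANEL_W, PANEL_H = 510.0, 515.0   # mm

def sites_on_wafer(site_w, site_h, diameter=WAFER_DIAMETER):
    """Count rectangular sites whose corners all fall inside the circle."""
    r = diameter / 2
    cols, rows = int(diameter // site_w), int(diameter // site_h)
    count = 0
    for i in range(cols):
        for j in range(rows):
            x0, y0 = -r + i * site_w, -r + j * site_h
            corners = [(x0, y0), (x0 + site_w, y0),
                       (x0, y0 + site_h), (x0 + site_w, y0 + site_h)]
            if all(x * x + y * y <= r * r for x, y in corners):
                count += 1
    return count

def sites_on_panel(site_w, site_h):
    return int(PANEL_W // site_w) * int(PANEL_H // site_h)

w = sites_on_wafer(10.0, 10.0)   # roughly 600+ sites
p = sites_on_panel(10.0, 10.0)   # 51 * 51 = 2601 sites
area_ratio = (PANEL_W * PANEL_H) / (math.pi * (WAFER_DIAMETER / 2) ** 2)
print(f"wafer: {w} sites, panel: {p} sites ({p / w:.1f}x)")
print(f"raw panel/wafer area ratio: {area_ratio:.1f}x")
```

The panel fits more sites than its raw area advantage alone would predict, because rectangular packages tile a rectangle with far less edge waste than a circle.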
 
Have we ever had graphics cards with 16 modules on one side of the PCB before?

The only 512-bit card I can even remember is the Radeon HD 2900XT, which had 8 on each side of the PCB. I don't think there's been anything beyond 384-bit since. Given the need for active cooling for the modules these days, they would likely all be on the same side.
 
Ah, you just reminded me that Hawaii (290/390 series) was 512-bit, and I just checked: the 8GB versions did indeed have sixteen 512MB memory chips on one side.
 
I'm a bit skeptical it's 512-bit, but if it is, then with GDDR7 and the maximum number of DRAM chips they could go up to 96GiB on professional cards, which is more than the original H100... that'd be quite interesting! As long as they don't include NVLink, there's still strong differentiation for B100/GB200, and they'd probably limit the consumer/GeForce version to 32GiB as well (half as many chips, and 16Gbit rather than 24Gbit).
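For reference, the capacity arithmetic behind those figures (a quick sketch; GDDR7 devices are 32 bits wide, and the 16/24 Gbit densities plus the clamshell option are the assumptions in the post above):

```python
# VRAM options for a hypothetical 512-bit GDDR7 card.
BUS_WIDTH = 512      # bits
DEVICE_WIDTH = 32    # bits per GDDR7 device

chips = BUS_WIDTH // DEVICE_WIDTH  # 16 devices, one per 32-bit channel
for density_gbit in (16, 24):
    per_chip_gib = density_gbit / 8        # 8 Gbit per GiB
    normal = chips * per_chip_gib          # one device per channel
    clamshell = 2 * normal                 # two devices per channel
    print(f"{density_gbit} Gbit devices: {normal:.0f} GiB, "
          f"or {clamshell:.0f} GiB clamshell")
```

That gives 32 GiB with sixteen 16 Gbit devices (the consumer case above) and 96 GiB with thirty-two 24 Gbit devices in clamshell (the professional case).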
 
GT200 was 512-bit as well, but in the photos I can find from back in the day, it was also 8 chips on each side of the PCB.
 