Speaking of which, I wanna discuss how the H100 and MI300X compare in die size and cost.
We know MI300X uses about 920mm² of combined TSMC 5nm chiplets (8 compute chiplets) stacked on top of 1460mm² of TSMC 6nm chiplets (4 IO chiplets), for a total silicon area of 2380mm². Compare that to the single 814mm² die of the H100.
https://www.semianalysis.com/p/amd-mi300-taming-the-hype-ai-performance
NVIDIA's total transistor footprint is much smaller, but a large monolithic die yields worse than small chiplets. However, the H100 is not a full-die product: it ships with roughly 20% of the die disabled to improve yields, so the effective yield gap between the two may not be that large.
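To put rough numbers on the yield argument, here's a minimal sketch using the simple Poisson yield model Y = exp(-A·D0). The defect density D0 is an assumed illustrative value, not a published TSMC figure, and the model ignores parametric yield and the fact that NVIDIA can still sell partially-defective dice thanks to the disabled SMs:

```python
import math

def poisson_yield(area_mm2: float, d0_per_cm2: float) -> float:
    """Classic Poisson die-yield model: Y = exp(-A * D0), with A in cm^2."""
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

D0 = 0.10  # assumed 5nm defect density (defects/cm^2) -- illustrative placeholder only

h100_area = 814.0             # monolithic H100 die, mm^2
mi300x_xcd_area = 920.0 / 8   # one MI300X 5nm compute chiplet, mm^2 (~115 mm^2)

y_h100 = poisson_yield(h100_area, D0)
y_xcd = poisson_yield(mi300x_xcd_area, D0)

# Silicon you have to start with per good unit of compute silicon.
# Bad chiplets get discarded individually, so they only pay the per-chiplet yield penalty.
eff_h100 = h100_area / y_h100   # mm^2 of 5nm per fully-good monolithic die
eff_mi300x = 920.0 / y_xcd      # mm^2 of 5nm per good set of 8 chiplets

print(f"Monolithic 814 mm^2 yield:  {y_h100:.1%}")
print(f"~115 mm^2 chiplet yield:    {y_xcd:.1%}")
print(f"5nm area per good H100 die: {eff_h100:.0f} mm^2 (before harvesting partially-defective dice)")
print(f"5nm area per good 920 mm^2 MI300X compute set: {eff_mi300x:.0f} mm^2")
```

With these placeholder inputs the monolithic die yields around 44% fully-good versus ~89% per chiplet, which is exactly the gap that harvesting (selling dice with ~20% disabled) claws back for NVIDIA.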
MI300X costs significantly more to manufacture due to its larger overall size, more complex packaging, and the need for additional 6nm chiplets.
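For the cost side, here's a back-of-the-envelope silicon-only comparison. The wafer prices are placeholder assumptions (swap in whatever numbers you believe), and yield loss, HBM, CoWoS/SoIC packaging, and test are all deliberately excluded:

```python
import math

def gross_dies_per_wafer(area_mm2: float, wafer_d_mm: float = 300.0) -> int:
    """Standard gross-die approximation: pi*r^2/A - pi*d/sqrt(2A)."""
    return int(math.pi * (wafer_d_mm / 2) ** 2 / area_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * area_mm2))

# Placeholder wafer prices in USD -- assumptions, not quotes.
N5_WAFER = 16000.0
N6_WAFER = 9000.0

# H100: one 814 mm^2 5nm-class die per package.
h100_silicon = N5_WAFER / gross_dies_per_wafer(814.0)

# MI300X: eight ~115 mm^2 5nm compute chiplets + four ~365 mm^2 6nm base dies per package.
mi300x_silicon = (8 * N5_WAFER / gross_dies_per_wafer(920.0 / 8)
                  + 4 * N6_WAFER / gross_dies_per_wafer(1460.0 / 4))

print(f"H100 raw silicon cost:   ~${h100_silicon:,.0f} per package")
print(f"MI300X raw silicon cost: ~${mi300x_silicon:,.0f} per package")
```

Even on raw silicon alone MI300X comes out well ahead of the H100 under these assumptions, and that's before the extra packaging steps and the larger HBM stack, which is where most of the additional cost is expected to come from.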
Any thoughts?