NVIDIA discussion [2024]

Nvidia released results from the ULMark "Gen AI" benchmark:
[Image: NVIDIA-AI-on-RTX-2024-Quarter-1-Sharing-Session-8.jpg]

 
I'm not sure exactly how to frame this but...

I think people should be cognizant of how important memory (VRAM, in the case of GPUs) is in creation workloads and AI specifically, and ask how much benchmarks should reflect that.

Without bringing in cross-vendor considerations, for example, the 4060 Ti 16GB is in practice a much better GPU for this purpose (I can't emphasize this enough) than the 4070, 4070 Super, and 4070 Ti, yet that isn't reflected at all in that benchmark.

It's going to be interesting to see how the industry going forward (and this isn't just Nvidia; see others such as Apple) reconciles selling on AI with the way they also artificially segment products by memory.
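
For a rough sense of the numbers behind that, here's a back-of-the-envelope sketch of how much VRAM just the model weights occupy at different precisions. This is a minimal sketch: the model sizes and the 1.2x overhead margin for activations/KV cache are my own illustrative assumptions, not figures from the benchmark.

```python
# Rough VRAM estimate for holding a model's weights at a given precision.
# Model sizes and the 1.2x overhead margin are illustrative assumptions.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_vram_gb(params_billion: float, precision: str, overhead: float = 1.2) -> float:
    """Approximate GB of VRAM for the weights plus a rough margin
    for activations, KV cache / latents, and framework buffers."""
    return params_billion * 1e9 * BYTES_PER_PARAM[precision] * overhead / 1e9

for name, size_b in [("~3.5B image model", 3.5), ("7B LLM", 7.0), ("13B LLM", 13.0)]:
    for prec in ("fp16", "int8"):
        print(f"{name:>18} @ {prec}: ~{weight_vram_gb(size_b, prec):.1f} GB")
```

Under those assumed sizes, a 13B-class model at int8 already lands around 15-16 GB, which fits a 16GB 4060 Ti but not a 12GB 4070-class card, and once the weights spill out of VRAM, throughput collapses no matter how much compute the faster card has.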
 

 
In January, Meta said it would buy 350,000 H100 GPUs from Nvidia in 2024, but a recent update from the company's head of AI, Yann LeCun, suggests that the company has bought even more H100 chips in recent months.

Speaking at the Forging the Future of Business with AI summit last month, LeCun and host John Werner said that Meta has bought an additional 500,000 GPUs from Nvidia, bringing its total to 1 million with a retail value of about $30 billion.
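
A quick sanity check on those figures, using only the numbers quoted above (nothing here comes from Meta or Nvidia directly):

```python
# Cross-check the figures quoted above (all inputs are from the post).
initial_order = 350_000      # H100s Meta said in January it would buy in 2024
additional = 500_000         # further GPUs mentioned by LeCun / Werner
stated_total = 1_000_000     # "bringing its total to 1 million"
retail_value_usd = 30e9      # "retail value of about $30 billion"

print(f"Announced 2024 purchases: {initial_order + additional:,}")               # 850,000
print(f"Stated total fleet:       {stated_total:,}")
print(f"Implied retail price per GPU: ${retail_value_usd / stated_total:,.0f}")  # ~$30,000
```

The two announced tranches add up to 850,000, so the 1 million total presumably includes GPUs Meta already had on hand; the implied ~$30,000 per unit is roughly in line with commonly quoted H100 pricing.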

 
  • Nvidia's (NASDAQ:NVDA) next-generation artificial intelligence GPU, dubbed the R100, could go into mass production late next year, a widely watched analyst said on Tuesday.
  • The R-series chip, which is likely to use Taiwan Semiconductor's (TSM) 3 nanometer node, could be mass-produced by the fourth quarter of 2025, with the system and rack solution starting mass production in the first quarter of 2026, TF International Securities analyst Ming-Chi Kuo said in a blog post.
Is it time to open the "Nvidia Rubin speculation thread" yet?
 
Top 10 IC Design Houses’ Combined Revenue Grows 12% in 2023

NVIDIA, Broadcom, and AMD benefit from a surge in demand for AI

NVIDIA, the top IC design house, boosted its 2023 revenue to $55.268 billion (a 105% year-over-year increase), driven primarily by its AI GPU, the H100. Currently, NVIDIA captures over 80% of the AI accelerator chip market, and its revenue growth is expected to continue in 2024 with the release of the H200 and next-generation B100/B200/GB200. Broadcom’s revenue reached $28.445 billion in 2023 (semiconductor segment only), growing by 7%, with AI chip income accounting for nearly 15% of its semiconductor solutions revenue.
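
Translating those percentages into dollar figures, purely as arithmetic on the numbers in the post (the derived values are implied, not separately reported):

```python
# Derive the implied dollar figures from the percentages quoted above.
nvidia_2023_rev = 55.268e9      # USD, per the post
nvidia_yoy_growth = 1.05        # +105% year over year
broadcom_semis_2023 = 28.445e9  # USD, semiconductor segment only
broadcom_ai_share = 0.15        # "nearly 15%" of semiconductor solutions

nvidia_2022_implied = nvidia_2023_rev / (1 + nvidia_yoy_growth)
broadcom_ai_implied = broadcom_semis_2023 * broadcom_ai_share

print(f"NVIDIA 2022 revenue implied by +105% growth: ~${nvidia_2022_implied / 1e9:.1f}B")  # ~$27.0B
print(f"Broadcom AI chip revenue implied by ~15%:    ~${broadcom_ai_implied / 1e9:.1f}B")  # ~$4.3B
```

So the AI surge roughly doubled NVIDIA's revenue in a single year, while Broadcom's AI chip income works out to something on the order of $4 billion.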


 
NVIDIA announced that their GH200 chip (Grace CPU + H100 GPU) has claimed nine supercomputer wins in 2023/2024.

 
A Taiwanese newspaper is reporting that Nvidia and MediaTek are partnering on a Snapdragon X competitor, built on TSMC's 3nm node.


I don't quite understand why they're partnering on it; Nvidia make their own SoCs already, but maybe MediaTek's mobile SoC experience brings plenty to the table.
 
I don't quite understand why they're partnering on it; Nvidia make their own SoCs already
They've already partnered once to make a SoC with RTX GPUs for automotive use, so they are extending the partnership to PCs.

On another note, Apple seems to be partnering with OpenAI for AI services on Mac and iOS, which explains why Microsoft (which hosts OpenAI's workloads on Azure) is doubling its purchases of NVIDIA GPUs: Apple users will require more hardware to satisfy their demands.
 
They've already partnered once to make a SoC with RTX GPUs for automotive use, so they are extending the partnership to PCs.

There's an element with automotive where it made obvious sense: MediaTek already have infotainment relationships, and Nvidia's autonomy work links into that. I can't see the same angle with chips for Windows laptops; Nvidia have relationships with those companies already.
 