Nvidia shows signs in [2023]

  • Thread starter Deleted member 2197
  • Start date
Status
Not open for further replies.
Absolutely insane. And all it would take is another “killer app” like LLMs to appear and the frenzy will continue.
 
I think it's actually always the case that people will always want more computing power. It's just that sometimes people forget about it.

I still remember in the olden days people were questioning the value of those big supercomputers. What would you need them for? Simulating nukes? You don't really need better computers because you already have good enough results, right? Then people found out that it would be nice to be able to simulate the weather, which is of course much more complex than simulating a few microseconds of a nuke exploding. Then people wanted to simulate how proteins fold. That's another step up. NN training is just the latest thing that people suddenly find they need more computing power for.

If one looks at the performance trend in the TOP500 list, it's obvious that the growth of total performance has slowed down in the last decade. One of the reasons is probably that Moore's law no longer works in our favor, but that's probably only part of it. I think we're going to see the trend reverse in the next few years (though not necessarily on the TOP500 list, as China is no longer submitting results, and many private companies using supercomputers for AI probably don't think LINPACK perfectly reflects how their computers perform).
 
AI is not a fad, but the extreme demand for AI processing is definitely leaning that way at the moment. And while the dissolution of 'Moore's Law' is absolutely hampering normal processor gains in recent times, AI hardware specialization is leading to a pretty big explosion of capabilities.

But this explosion will die out pretty quickly. And they'll soon be arriving at the same general limitations that every other processing sphere has been facing. Not to mention the fact that Nvidia is facing a whole lot more competition in the coming years.
 
I wonder if Nvidia can keep ahead of the pack through hardware and software innovations. Continually adding distinct hardware features like the Transformer Engine, improving CUDA code optimizations in the maintenance cycle, and creating more varied industry (medical, engineering, scientific, etc.) software applications makes me think Nvidia will be difficult to compete with over the next 3-5 years. Having a complete software stack designed to utilize specific GPU features will definitely have its advantages in the long run.

I can also see competitors moving to a similar software model, since their GPUs will also have distinct features not found elsewhere.
 
I wonder if Nvidia can keep ahead of the pack through hardware and software innovations. Continually adding distinct hardware features like the Transformer Engine, improving CUDA code optimizations in the maintenance cycle, and creating more varied industry (medical, engineering, scientific, etc.) software applications makes me think Nvidia will be difficult to compete with over the next 3-5 years. Having a complete software stack designed to utilize specific GPU features will definitely have its advantages in the long run.

I can also see competitors moving to a similar software model, since their GPUs will also have distinct features not found elsewhere.
Good points.
The real difficulty is creating sales opportunities for shiny new expensive hardware, and Nvidia excels at it. Hardware alone is useless in innovative fields. For example, the AI boom occurred because ChatGPT showed companies real-world applications. In the same way, NVIDIA offers a multitude of ready-to-use frameworks and APIs that shorten product development in the medical, industrial, robotics, automotive, and weather fields, just to name a few. Startups may offer faster/cheaper hardware in the near future, but their overall solution may not be the cheapest once you consider development cost and time to market.
 
Korean media outlets report that Foxconn has acquired AI-based orders from NVIDIA, with a majority of them being based on chip substrates. For those who aren't familiar with the term "chip substrates", it basically refers to the packaging process and holds significant importance, especially for NVIDIA's AI GPUs. The majority share accounts for NVIDIA's HGX/DGX supercomputers, which are currently in high demand and are categorized as the "holy grail" in the industry.
...
Foxconn aims to become the "biggest chip substrate supplier of NVIDIA", establishing its dominance over competitors like Wistron. For clarification, Foxconn's division Fii (Foxconn Industrial Internet) has taken responsibility for fulfilling DGX and HGX orders.
...
NVIDIA is looking to diversify its supply chain, since the company anticipates the AI hype to last for a decade. To prevent situations such as order backlogs and production bottlenecks, NVIDIA is also scouting potential suppliers, with the names Samsung and Micron topping its list.
 
I don't understand why people think NVIDIA will remain interested in the gaming GPU market. Whatever production bottlenecks they face with the HPC parts will surely be dealt with. Jen-Hsun may well consider every gaming GPU they sell to be a massive loss considering what they could've gotten for that silicon in an H100.
 
I don't understand why people think NVIDIA will remain interested in the gaming GPU market. Whatever production bottlenecks they face with the HPC parts will surely be dealt with. Jen-Hsun may well consider every gaming GPU they sell to be a massive loss considering what they could've gotten for that silicon in an H100.
As a company selling products, you generally want to diversify your product stack and not narrow your focus to only your largest, most profitable part. Especially when yet another boom goes bust and you're stuck with a container-ship load of GPUs that now have very few customers until your next big iteration.
 
Whatever production bottlenecks they face with the HPC parts will surely be dealt with.
The packaging capacity is severely limited compared to silicon capacity. For example (hypothetical numbers): if NVIDIA can produce 10 million H100s on 4nm per year, the CoWoS packaging capacity from TSMC is only enough to sustain 1 million. With capacity improvements next year they might be able to sustain 2 million, and with capacity expansion from other foundries they might push that to 3 million, still far below their optimal 4nm output. That leaves a huge 4nm capacity leftover that they want to keep (don't forget that other companies are competing for capacity) to make professional and gaming products; otherwise someone else will take that capacity and fill these markets with their products, kicking NVIDIA out. Obviously no sane company would let that happen, especially as these markets still make billions for NVIDIA.
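The capacity math can be put into a quick back-of-the-envelope script. All figures are the illustrative, hypothetical numbers from this post, not real fab or packaging data:

```python
# Back-of-the-envelope sketch of the CoWoS packaging bottleneck described
# above. All figures are this post's hypothetical numbers, not real data.

silicon_capacity = 10_000_000  # H100-class dies NVIDIA could fab on 4nm per year

# CoWoS packaging capacity scenarios (units per year), per the post
packaging_scenarios = {
    "TSMC today": 1_000_000,
    "TSMC after expansion": 2_000_000,
    "TSMC + other foundries": 3_000_000,
}

for scenario, packaged in packaging_scenarios.items():
    leftover = silicon_capacity - packaged
    print(f"{scenario}: {packaged:,} H100s packaged, "
          f"{leftover:,} dies' worth of 4nm capacity free for gaming/pro GPUs")
```

Even in the most optimistic scenario here, 70% of the hypothetical 4nm allocation would go unused by H100s, which is the leftover the post argues gets spent on professional and gaming parts.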

NVIDIA also still produces lots and lots of A100 and A800 chips on 7nm, which end up competing with the H100 for CoWoS capacity, so in the end NVIDIA has plenty of 4nm capacity for Ada GPUs.

Some companies even go as far as reserving silicon capacity just to block others from producing competing products on it, Intel has done so in the past repeatedly.

Not to mention that NVIDIA is still investing billions into ray tracing and path tracing algorithms, as well as DLSS and AI gaming applications... these investments are only going to be profitable if NVIDIA keeps making ever more powerful gaming GPUs.
 
I don't understand why people think NVIDIA will remain interested in the gaming GPU market. Whatever production bottlenecks they face with the HPC parts will surely be dealt with. Jen-Hsun may well consider every gaming GPU they sell to be a massive loss considering what they could've gotten for that silicon in an H100.

You don’t understand why companies sell more than one product? It’s extremely poor business to put all your eggs in one basket. The AI bubble can burst at any time.
 
Some companies even go as far as reserving silicon capacity just to block others from producing competing products on it; Intel has done so in the past repeatedly.
When did Intel do that? I'm not a fan of government interference but even I think that should be illegal.
 
You don’t understand why companies sell more than one product? It’s extremely poor business to put all your eggs in one basket. The AI bubble can burst at any time.
They don't have to leave the market completely... they just have to be uninterested enough to greatly affect how GPUs are priced. It's either pay this... or buy a low-end AMD part.
 
You don’t understand why companies sell more than one product? It’s extremely poor business to put all your eggs in one basket. The AI bubble can burst at any time.
I think NVIDIA will continue not to focus on gaming GPUs until the HPC bubble bursts, and if that ever happens it could be a long way off. They've already priced Ada Lovelace in a manner where it won't sell very well, so they won't have to make too many of them. So I'm not really saying this will happen; I'm saying it is happening.
 
The packaging capacity is severely limited compared to silicon capacity. For example (hypothetical numbers): if NVIDIA can produce 10 million H100s on 4nm per year, the CoWoS packaging capacity from TSMC is only enough to sustain 1 million. With capacity improvements next year they might be able to sustain 2 million, and with capacity expansion from other foundries they might push that to 3 million, still far below their optimal 4nm output. That leaves a huge 4nm capacity leftover that they want to keep (don't forget that other companies are competing for capacity) to make professional and gaming products; otherwise someone else will take that capacity and fill these markets with their products, kicking NVIDIA out. Obviously no sane company would let that happen, especially as these markets still make billions for NVIDIA.

NVIDIA also still produces lots and lots of A100 and A800 chips on 7nm, which end up competing with the H100 for CoWoS capacity, so in the end NVIDIA has plenty of 4nm capacity for Ada GPUs.

Some companies even go as far as reserving silicon capacity just to block others from producing competing products on it, Intel has done so in the past repeatedly.

Not to mention that NVIDIA is still investing billions into ray tracing and path tracing algorithms, as well as DLSS and AI gaming applications... these investments are only going to be profitable if NVIDIA keeps making ever more powerful gaming GPUs.
I understand there are limitations. 1000% profit margins have a way of making these limitations go away.

Sorry I messed up my reply and it got split in two.
 