Nvidia shows signs in [2023]

With their revenue increasingly dominated by far more profitable data center products, I do wonder if, like AMD, NV will start shifting wafer starts to more profitable products, which means fewer wafer starts for consumer graphics (gaming), which means higher prices.

Basically, similar to AMD, there's little incentive to allocate more wafer starts to consumer graphics if you have other more profitable (higher margin) market segments where demand exceeds your ability to supply product.

So, there's absolutely no reason at this point for NV to not drastically increase prices for consumer GPUs going forward, since there's no downside. If fewer consumers buy consumer GPUs, they can just shift their product stack more towards the more profitable data center chips.

Regards,
SB
 
Said another way -- at least from Wall Street's point of view -- AI GPUs are sold at "normal" prices, while gaming GPUs are being sold at a heavily subsidized "discount". If supply is constrained, and foundries start squeezing even more on the cost side, investors are going to demand answers on why the subsidy is necessary.

I don't think it makes sense for NV to drastically increase gaming GPU prices in response. The market just isn't there. But maybe I'm projecting and I should be more precise -- *I* don't see myself paying $2K for a gaming GPU, and I'm at a stage in life where I have enough play money to do so. At the end of the day these are just toys. I'm sure NV knows this.

In the near future I think what's likely is that they'll keep MSRPs roughly the same (or at least pegged at similar ratios to Si costs) but reduce wafer starts as you suggest. For us, this means some shortages in storefronts but I doubt it will lead to massive retail price inflation in the absence of a theoretically infinite-demand driver like crypto. As long as NV keeps the markets artificially segmented (e.g., using VRAM capacity, bandwidth, tensor-op constraints) I think gaming products can be shielded from AI hunger.

Why would NV even bother with all this? Maybe it's just wishful thinking on my part, but I do think they would want to hedge against any potential AI crashes, as unlikely as that may seem today.
 
NVIDIA Announces Financial Results for First Quarter Fiscal 2024

GAAP
($ in millions, except earnings per share)   Q1 FY24   Q4 FY23   Q1 FY23   Q/Q          Y/Y
Revenue                                       $7,192    $6,051    $8,288   Up 19%       Down 13%
Gross margin                                   64.6%     63.3%     65.5%   Up 1.3 pts   Down 0.9 pts
Operating expenses                            $2,508    $2,576    $3,563   Down 3%      Down 30%
Operating income                              $2,140    $1,257    $1,868   Up 70%       Up 15%
Net income                                    $2,043    $1,414    $1,618   Up 44%       Up 26%
Diluted earnings per share                     $0.82     $0.57     $0.64   Up 44%       Up 28%

Non-GAAP
($ in millions, except earnings per share)   Q1 FY24   Q4 FY23   Q1 FY23   Q/Q          Y/Y
Revenue                                       $7,192    $6,051    $8,288   Up 19%       Down 13%
Gross margin                                   66.8%     66.1%     67.1%   Up 0.7 pts   Down 0.3 pts
Operating expenses                            $1,750    $1,775    $1,608   Down 1%      Up 9%
Operating income                              $3,052    $2,224    $3,955   Up 37%       Down 23%
Net income                                    $2,713    $2,174    $3,443   Up 25%       Down 21%
Diluted earnings per share                     $1.09     $0.88     $1.36   Up 24%       Down 20%
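
The percentage columns above follow directly from the dollar figures; here's a quick sanity check in Python (GAAP figures copied straight from the table, so the only divergence from the press release's whole-number percentages is rounding):

```python
# Sanity-check the Q/Q and Y/Y changes in NVIDIA's GAAP table above.
# Figures in $ millions, copied straight from the press release.
gaap = {
    "Revenue":          (7192, 6051, 8288),  # (Q1 FY24, Q4 FY23, Q1 FY23)
    "Operating income": (2140, 1257, 1868),
    "Net income":       (2043, 1414, 1618),
}

for metric, (q1_fy24, q4_fy23, q1_fy23) in gaap.items():
    qoq = (q1_fy24 / q4_fy23 - 1) * 100  # sequential change
    yoy = (q1_fy24 / q1_fy23 - 1) * 100  # year-over-year change
    print(f"{metric}: Q/Q {qoq:+.1f}%, Y/Y {yoy:+.1f}%")

# Revenue: Q/Q +18.9%, Y/Y -13.2%  -> matches "Up 19%" / "Down 13%"
```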

Outlook
NVIDIA’s outlook for the second quarter of fiscal 2024 is as follows:


  • Revenue is expected to be $11.00 billion, plus or minus 2%.
  • GAAP and non-GAAP gross margins are expected to be 68.6% and 70.0%, respectively, plus or minus 50 basis points.
  • GAAP and non-GAAP operating expenses are expected to be approximately $2.71 billion and $1.90 billion, respectively.
  • GAAP and non-GAAP other income and expense are expected to be an income of approximately $90 million, excluding gains and losses from non-affiliated investments.
  • GAAP and non-GAAP tax rates are expected to be 14.0%, plus or minus 1%, excluding any discrete items.
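
Taking the midpoints of that guidance at face value, the implied GAAP bottom line can be sketched with back-of-the-envelope arithmetic. This is purely illustrative: it ignores the discrete tax items and non-affiliated investment gains/losses that the guidance itself excludes.

```python
# Rough arithmetic implied by the Q2 FY24 GAAP guidance above (midpoints).
# Illustrative only; ignores items the guidance explicitly excludes.
revenue      = 11.00   # $B, midpoint of the +/- 2% range
gross_margin = 0.686   # GAAP guidance
opex         = 2.71    # $B, GAAP guidance
other_income = 0.09    # $B
tax_rate     = 0.14

low, high    = revenue * 0.98, revenue * 1.02
gross_profit = revenue * gross_margin
op_income    = gross_profit - opex
net_income   = (op_income + other_income) * (1 - tax_rate)

print(f"Revenue range: ${low:.2f}B - ${high:.2f}B")          # $10.78B - $11.22B
print(f"Implied GAAP operating income: ~${op_income:.2f}B")  # ~$4.84B
print(f"Implied GAAP net income: ~${net_income:.2f}B")       # ~$4.24B
```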
 
Why would NV even bother with all this? Maybe it's just wishful thinking on my part, but I do think they would want to hedge against any potential AI crashes, as unlikely as that may seem today.

This is actually a very good reason, and maybe 'the' reason. :)
I don't know how supply constrained NVIDIA is (if at all) on AI products, but I know it's not easy to get an H100 right now. There are also "edge computing" AI products, but they are pretty low margin and NVIDIA is not dominating that market. However, I have heard some rumors that NVIDIA is not interested in bidding for some governments' large HPC projects. It could simply be that the price is not right, but there is "prestige" value in winning these projects, so it could also point to a supply constraint.

Another reason it's probably unwise to give up gaming is that you don't want someone to be able to come up with something from the lower end to fight you. For example, if NVIDIA put less emphasis on gaming and let AMD and Intel eat that pie, it's quite possible that AMD's and Intel's GPUs will one day somehow be more popular in AI or whatever future data center markets. People like to be able to experiment with things at home, and many interesting new markets started in a student dormitory or garage. You won't see high-end data center products there.
 

Nvidia's stock zoomed as much as 28% after the bell to trade at $391.50, its highest level ever. That increased its stock market value by about $200 billion to over $960 billion, extending the Silicon Valley company's lead as the world's most valuable chipmaker and Wall Street's fifth-most valuable company.
 
In the near future I think what's likely is that they'll keep MSRPs roughly the same (or at least pegged at similar ratios to Si costs) but reduce wafer starts as you suggest. For us, this means some shortages in storefronts but I doubt it will lead to massive retail price inflation in the absence of a theoretically infinite-demand driver like crypto. As long as NV keeps the markets artificially segmented (e.g., using VRAM capacity, bandwidth, tensor-op constraints) I think gaming products can be shielded from AI hunger.
The chips are mostly the same though. Outside of GH100, the rest are the same chips, which may go into either gaming or DC products. It doesn't make any sense to lower their production if they are in high demand on the DC side. The allocations may be skewed towards DC products - but then one has to wonder how many chips we are even talking about. DC products have high margins, but I doubt they are as high volume as gaming ones, meaning that it is unlikely that DC demand will affect gaming supply much.
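
To put the volume-versus-margin point in concrete terms, here is a toy model. Every number in it is hypothetical, chosen only to illustrate the tradeoff; these are not NVIDIA's actual ASPs, margins, yields, or die counts.

```python
# Toy model of the wafer-allocation question: hypothetical numbers only.
def gross_profit_per_wafer(chips_per_wafer, asp, margin):
    """Gross profit contributed by one wafer of a given product."""
    return chips_per_wafer * asp * margin

# Same silicon, two very different products (all inputs made up):
dc     = gross_profit_per_wafer(chips_per_wafer=60, asp=15_000, margin=0.80)
gaming = gross_profit_per_wafer(chips_per_wafer=60, asp=1_000,  margin=0.55)

print(f"Per wafer -- DC: ${dc:,.0f}, gaming: ${gaming:,.0f}")
# Per wafer -- DC: $720,000, gaming: $33,000
```

Even at roughly 20x the gross profit per wafer under these made-up inputs, if DC unit demand only fills a small number of wafers, the gaming allocation barely moves, which is exactly the point above.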
 
So, there's absolutely no reason at this point for NV to not drastically increase prices for consumer GPUs going forward since there's no downside.
There are many reasons for Nvidia not to "drastically increase" any prices, unless they aim to abandon the market altogether - which they are not.
Also - there are no "drastic price increases" anywhere in the Ada lineup. There are products with close to zero perf/price improvement, but that's the current extent of it.
Ada's lineup is offered in the exact same price range as Ampere was. Same for Turing if we account for the RTX Titan.
And since this seems to have been missed by many again - Nv has just forced AMD to lower the launch price of their new SKU, again.
 
Does the pro/creator market influence pricing or supply of gaming products, where these cards could be used for specific workflows? Depending on what application is used, it seems some of these cards could be viable workflow alternatives, based on recent Techgage reviews.
 
Jensen's thought on the custom AI ASICs, from the Q1 earnings call transcript:
Vivek Arya -- Bank of America Merrill Lynch -- Analyst

Thanks for the question. I just wanted to clarify, does visibility mean data center sales can continue to grow sequentially in Q3 and Q4, or do they sustain at Q2 level? So, I just wanted to clarify that. And then, Jensen, my question is that given this very strong demand environment, what does it do to the competitive landscape? Does it invite more competition in terms of custom ASICs? Does it invite more competition in terms of other GPU solutions or other kinds of solutions? What -- how do you see the competitive landscape change over the next two to three years?


Colette Kress -- Executive Vice President and Chief Financial Officer

Yeah, Vivek. Thanks for the question. Let me see if I can add a little bit more color. We believe that the supply that we will have for the second half of the year will be substantially larger than H1.

So, we are expecting, not only the demand that we just saw in this last quarter, the demand that we have in Q2 for our forecast, but also planning on seeing something in the second half of the year. We just have to be careful here, but we're not here to guide on the second half. But yes, we do plan a substantial increase in the second half compared to the first half.


Jensen Huang -- President and Chief Executive Officer

Regarding competition, we have competition from every direction, start-ups, really, really well funded and innovative start-ups, countless of them all over the world. We have competition from existing semiconductor companies. We have competition from CSPs with internal projects, and many of you know about most of these. And so, we're mindful of competition all the time, and we get competition all the time.

NVIDIA's value proposition at the core is we are the lowest cost solution. We're the lowest TCO solution. And the reason for that is because accelerated computing is two things that I talked about often, which is it's a full stack problem. It's a full stack challenge.

You have to engineer all of the software and all the libraries and all the algorithms, integrate them into and optimize the frameworks and optimize it for the architecture of not just one chip but the architecture of an entire data center all the way into the frameworks, all the way into the models. And the amount of engineering and distributed computing -- fundamental computer science work is really quite extraordinary. It is the hardest computing as we know. And so, number one, it's a full stack challenge, and you have to optimize it across the whole thing and across just a mind-blowing number of stacks.

We have 400 acceleration libraries. As you know, the amount of libraries and frameworks that we accelerate is pretty mind-blowing. The second part is that generative AI is a large-scale problem and it's a data center scale problem. It's another way of thinking that the computer is the data center or the data center is the computer.

It's not the chip, it's the data center. And it's never happened like this before. And in this particular environment, your networking operating system, your distributed computing engines, your understanding of the architecture of the networking gear, the switches, and the computing systems, the computing fabric, that entire system is your computer. And that's what you're trying to operate.

And so, in order to get the best performance, you have to understand full stack and understand data center scale. And that's what accelerated computing is. The second thing is that utilization, which talks about the amount of the types of applications that you can accelerate and the versatility of your architecture keeps that utilization high. If you can do one thing and do one thing only incredibly fast, then your data center is largely underutilized, and it's hard to scale that out.

NVIDIA's universal GPU, the fact that we accelerate so many of these stacks, makes our utilization incredibly high. And so, number one is throughput, and that's a software-intensive problem and a data center architecture problem. The second is the utilization versatility problem. And the third, it's just data center expertise.

We've built five data centers of our own, and we've helped companies all over the world build data centers. And we integrate our architecture into all the world's clouds. From the moment of delivery of the product to the standing up and the deployment, the time to operations of a data center is measured not -- it can -- if you're not good at it and not proficient at it, it could take months. Standing up a supercomputer -- let's see.

Some of the largest supercomputers in the world were installed about 1.5 years ago, and now they're coming online. And so, it's not unheard of to see a delivery to operations of about a year. Our delivery to operations is measured in weeks. And that's -- we've taken data centers and supercomputers, and we've turned it into products.

And the expertise of the team in doing that is incredible. And so, our value proposition, in the final analysis, is that all of this technology translates into infrastructure with the highest throughput and the lowest possible cost. And so, I think our market is, of course, very, very competitive, very large, but the challenge is really, really great.
 
Nvidia's Computex keynote is live right now. I'm going to watch it from the beginning. Been a while since Jensen has done a live presentation :)

 
MediaTek will use an NVIDIA GPU chiplet for their automotive SoCs:
With this new GPU chiplet, NVIDIA can extend its GPU and accelerated compute leadership across broader markets.
MediaTek will develop automotive SoCs and integrate the NVIDIA GPU chiplet, featuring NVIDIA AI and graphics intellectual property, into the design architecture. The chiplets are connected by an ultra-fast and coherent chiplet interconnect technology.
 