Said another way -- at least from Wall Street's point of view -- AI GPUs are sold at "normal" prices, while gaming GPUs are sold at a heavily subsidized "discount". If supply is constrained and foundries start squeezing even harder on the cost side, investors are going to demand answers on why the subsidy is necessary.

With their revenue increasingly dominated by far more profitable data center products, I do wonder if, like AMD, NV will start shifting wafer starts to more profitable products, which means fewer wafer starts for consumer graphics (gaming), which means higher prices.
Basically, similar to AMD, there's little incentive to allocate more wafer starts to consumer graphics if you have other, more profitable (higher-margin) market segments where demand exceeds your ability to supply product.
So, there's absolutely no reason at this point for NV to not drastically increase prices for consumer GPUs going forward, since there's no downside. If fewer consumers buy consumer GPUs, they can just shift their product stack further toward the more profitable data center chips.
Regards,
SB
GAAP

| ($ in millions, except earnings per share) | Q1 FY24 | Q4 FY23 | Q1 FY23 | Q/Q | Y/Y |
|---|---|---|---|---|---|
| Revenue | $7,192 | $6,051 | $8,288 | Up 19% | Down 13% |
| Gross margin | 64.6% | 63.3% | 65.5% | Up 1.3 pts | Down 0.9 pts |
| Operating expenses | $2,508 | $2,576 | $3,563 | Down 3% | Down 30% |
| Operating income | $2,140 | $1,257 | $1,868 | Up 70% | Up 15% |
| Net income | $2,043 | $1,414 | $1,618 | Up 44% | Up 26% |
| Diluted earnings per share | $0.82 | $0.57 | $0.64 | Up 44% | Up 28% |
Non-GAAP

| ($ in millions, except earnings per share) | Q1 FY24 | Q4 FY23 | Q1 FY23 | Q/Q | Y/Y |
|---|---|---|---|---|---|
| Revenue | $7,192 | $6,051 | $8,288 | Up 19% | Down 13% |
| Gross margin | 66.8% | 66.1% | 67.1% | Up 0.7 pts | Down 0.3 pts |
| Operating expenses | $1,750 | $1,775 | $1,608 | Down 1% | Up 9% |
| Operating income | $3,052 | $2,224 | $3,955 | Up 37% | Down 23% |
| Net income | $2,713 | $2,174 | $3,443 | Up 25% | Down 21% |
| Diluted earnings per share | $1.09 | $0.88 | $1.36 | Up 24% | Down 20% |
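The Q/Q and Y/Y columns in both tables can be sanity-checked with a few lines of arithmetic: dollar figures compare as percent change, while margins compare as percentage-point differences. A quick sketch using the GAAP revenue and gross-margin rows above:

```python
def pct_change(curr, prev):
    """Percent change from prev to curr."""
    return (curr / prev - 1) * 100

# GAAP revenue ($M): Q1 FY24 vs Q4 FY23 (Q/Q) and vs Q1 FY23 (Y/Y)
q1_fy24, q4_fy23, q1_fy23 = 7192, 6051, 8288
print(f"Revenue Q/Q: {pct_change(q1_fy24, q4_fy23):+.1f}%")  # ~ +18.9% -> "Up 19%"
print(f"Revenue Y/Y: {pct_change(q1_fy24, q1_fy23):+.1f}%")  # ~ -13.2% -> "Down 13%"

# Margins compare in percentage points, not percent change
gm_q1_fy24, gm_q4_fy23 = 64.6, 63.3
print(f"Gross margin Q/Q: {gm_q1_fy24 - gm_q4_fy23:+.1f} pts")  # +1.3 pts
```

Note the distinction in the margin rows: a move from 63.3% to 64.6% is "Up 1.3 pts", not up 2%.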
Why would NV even bother with all this? Maybe it's just wishful thinking on my part, but I do think they would want to hedge against any potential AI crash, as unlikely as that may seem today.
Nvidia's stock zoomed as much as 28% after the bell to trade at $391.50, its highest level ever. That increased its stock market value by about $200 billion to over $960 billion, extending the Silicon Valley company's lead as the world's most valuable chipmaker and Wall Street's fifth-most valuable company.
The chips are mostly the same though. Outside of GH100, the rest are the same chips, which may go into either gaming or DC products. It doesn't make any sense to lower their production if they are in high demand on the DC side. The allocations may be skewed toward DC products - but then one has to wonder how many chips we're even talking about. DC products have high margins, but I doubt they are as high volume as gaming ones, meaning it is unlikely that DC demand will affect gaming supply much.

In the near future, I think what's likely is that they'll keep MSRPs roughly the same (or at least pegged at similar ratios to Si costs) but reduce wafer starts as you suggest. For us, this means some shortages in storefronts, but I doubt it will lead to massive retail price inflation in the absence of a theoretically infinite-demand driver like crypto. As long as NV keeps the markets artificially segmented (e.g., using VRAM capacity, bandwidth, tensor-op constraints), I think gaming products can be shielded from AI hunger.
> So, there's absolutely no reason at this point for NV to not drastically increase prices for consumer GPUs going forward since there's no downside.

There are many reasons for Nvidia not to "drastically increase" any prices, unless they aim to abandon the market altogether - which they are not.
Vivek Arya -- Bank of America Merrill Lynch -- Analyst
Thanks for the question. I just wanted to clarify, does visibility mean data center sales can continue to grow sequentially in Q3 and Q4, or do they sustain at Q2 level? So, I just wanted to clarify that. And then, Jensen, my question is that given this very strong demand environment, what does it do to the competitive landscape? Does it invite more competition in terms of custom ASICs? Does it invite more competition in terms of other GPU solutions or other kinds of solutions? What -- how do you see the competitive landscape change over the next two to three years?
Colette Kress -- Executive Vice President and Chief Financial Officer
Yeah, Vivek. Thanks for the question. Let me see if I can add a little bit more color. We believe that the supply that we will have for the second half of the year will be substantially larger than H1.
So, we are expecting, not only the demand that we just saw in this last quarter, the demand that we have in Q2 for our forecast, but also planning on seeing something in the second half of the year. We just have to be careful here, but we're not here to guide on the second half. But yes, we do plan a substantial increase in the second half compared to the first half.
Jensen Huang -- President and Chief Executive Officer
Regarding competition, we have competition from every direction: start-ups, really, really well-funded and innovative start-ups, countless of them all over the world. We have competition from existing semiconductor companies. We have competition from CSPs with internal projects, and many of you know about most of these. And so, we're mindful of competition all the time, and we get competition all the time.
NVIDIA's value proposition at the core is we are the lowest cost solution. We're the lowest TCO solution. And the reason for that is because accelerated computing is two things that I talked about often, which is it's a full stack problem. It's a full stack challenge.
You have to engineer all of the software and all the libraries and all the algorithms, integrate them into and optimize the frameworks and optimize it for the architecture of not just one chip but the architecture of an entire data center all the way into the frameworks, all the way into the models. And the amount of engineering and distributed computing -- fundamental computer science work is really quite extraordinary. It is the hardest computing as we know. And so, number one, it's a full stack challenge, and you have to optimize it across the whole thing and across just a mind-blowing number of stacks.
We have 400 acceleration libraries. As you know, the amount of libraries and frameworks that we accelerate is pretty mind-blowing. The second part is that generative AI is a large-scale problem and it's a data center scale problem. It's another way of thinking that the computer is the data center or the data center is the computer.
It's not the chip, it's the data center. And it's never happened like this before. And in this particular environment, your networking operating system, your distributed computing engines, your understanding of the architecture of the networking gear, the switches, and the computing systems, the computing fabric, that entire system is your computer. And that's what you're trying to operate.
And so, in order to get the best performance, you have to understand full stack and understand data center scale. And that's what accelerated computing is. The second thing is that utilization, which talks about the amount of the types of applications that you can accelerate and the versatility of your architecture keeps that utilization high. If you can do one thing and do one thing only incredibly fast, then your data center is largely underutilized, and it's hard to scale that out.
NVIDIA's universal GPU, the fact that we accelerate so many of these stacks, makes our utilization incredibly high. And so, number one is throughput, and that's a software-intensive problem and a data center architecture problem. The second is the utilization versatility problem. And the third is just data center expertise.
We've built five data centers of our own, and we've helped companies all over the world build data centers. And we integrate our architecture into all the world's clouds. From the moment of delivery of the product to the standing up and the deployment, the time to operations of a data center is measured not -- it can -- if you're not good at it and not proficient at it, it could take months. Standing up a supercomputer -- let's see.
Some of the largest supercomputers in the world were installed about 1.5 years ago, and now they're coming online. And so, it's not unheard of to see a delivery to operations of about a year. Our delivery to operations is measured in weeks. And that's -- we've taken data centers and supercomputers, and we've turned it into products.
And the expertise of the team in doing that is incredible. And so, our value proposition is in the final analysis. All of this technology translates into infrastructure, the highest throughput and the lowest possible cost. And so, I think our market is, of course, very, very competitive, very large, but the challenge is really, really great.
I can remember some people were concerned about Nvidia last year because they prepaid for too many 5nm wafers from TSMC. What a twist.
Nvidia rush orders lifting TSMC 5nm fab utilization

An influx of Nvidia orders requiring super hot runs (SHR) has shored up capacity utilization rates for TSMC's 5nm process platform to almost full, according to industry sources. (www.digitimes.com)
With this new GPU chiplet, NVIDIA can extend its GPU and accelerated compute leadership across broader markets.
MediaTek will develop automotive SoCs and integrate the NVIDIA GPU chiplet, featuring NVIDIA AI and graphics intellectual property, into the design architecture. The chiplets are connected by an ultra-fast and coherent chiplet interconnect technology.