AMD Execution Thread [2024]

Pretty much on life support atm. Either RDNA 4 is a hit or it's close to gg. And that's an awful thing for consumers from a competitive standpoint.

If that's really mostly due to semi-custom volume decline, then 2% margins don't paint a very pretty picture of how much money they make on dGPUs...
 
How long until AMD gaming is spun off and Sony takes it over? lol
It won't. AMD still needs competitive graphics everywhere, even if they stopped making gaming-specific chips. And Sony couldn't do anything with AMD's gaming department by themselves; they'd still lack the IP.
 
Pretty much on life support atm. Either RDNA 4 is a hit or it's close to gg. And that's an awful thing for consumers from a competitive standpoint.
I am just shocked it dropped this low. RDNA sales must be less than $200 million at best (GeForce usually averages around $2.6 billion)! That's a massive regression.

The separation of AMD's gaming line from its data center line is hurting Radeon specifically. For reference, NVIDIA's AD102 can be sold in the RTX 4090, the Quadro RTX 6000/RTX 5000, and the L40/L20/L4 data center GPUs. NVIDIA controls three markets with one chip; they can't lose. If one market underperforms, the others pick up the difference.

Contrast that with AMD: Navi31 has only one market to succeed in (gaming, with the 7900 XTX/XT). Technically it's two markets, but the Radeon Pro lineup is just a token existence, hopelessly outmatched due to its weak ray tracing and machine learning performance. So realistically they only have gaming, and they are soundly beaten there, so they are stuck with unsold stock of Navi31 chips that can't go anywhere. This is a very bad business strategy, and Radeon pays a hefty price for it: the cost of developing a Radeon chip is much higher than NVIDIA's, with much higher risk and very little return.

Which explains why, in the future, AMD is going to unify all chips under one UDNA roof to remedy this situation. Imagine a hypothetical Navi61 chip with competent ray tracing and machine learning cores that could be sold competitively in the gaming, professional, and data center markets. If it loses in one market, it has a chance to compensate in another. That's how it should have been from the beginning.
 
What's going on with console sales and licensing that AMD's revenue can drop so precipitously in one quarter? Do Sony and Microsoft have warehouses full of chips already and don't need to order any more?
 
It very much is simple and obvious, as people have talked about before, and they're still not fully taking advantage of it yet, or at least not taking it to its logical conclusion (of what is currently possible, at least). If you make putting a cache chip underneath standard, you can remove all the L3 from the CCD, which makes up a significant portion of the die. This gives a lot of headroom in terms of what to do with the CCD. You could shrink it heavily, widen the architecture massively, add more cores, reduce transistor density to improve thermals and clock speeds, or obviously any combination of these.

You could even add more cache, or perhaps use an even lower-density process for the cache chip (though I think TSMC has limits on compatibility currently?), since you don't have to worry about it being too big and covering up logic.
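As a rough sketch of the area argument, with entirely assumed ballpark figures (neither the CCD size nor the L3 share below are official die measurements):

```python
# Back-of-envelope for removing the L3 from the CCD.
# Both figures are rough assumptions, not official die measurements.
ccd_area_mm2 = 71.0  # assumed total CCD die area
l3_share = 0.25      # assume L3 arrays + tags take ~25% of the CCD

l3_area_mm2 = ccd_area_mm2 * l3_share
cacheless_mm2 = ccd_area_mm2 - l3_area_mm2
print(f"Area freed by dropping L3: ~{l3_area_mm2:.1f} mm^2")
print(f"Cache-less CCD: ~{cacheless_mm2:.1f} mm^2")
```

Under those assumptions the CCD shrinks by about a quarter, and that freed budget is exactly the headroom being described: spend it on more cores, a wider core, or relaxed density for clocks.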

Honestly, this being the third generation of chips since Vcache was introduced, I'm a little disappointed they haven't gone this route yet.

There are no compatibility issues anymore afaik, unless you want to use something older than 7/6nm, but that would make no sense. 6nm is probably the most cost-efficient node at the moment, and you also get decent power efficiency (cache still consumes some power, remember).

The bigger issue is additional complexity, cost, packaging capacity/cost/cycle time/yield, all of which make it unviable for mainstream/volume products. Vcache is likely to remain a niche for premium products.

AMD Q3 2024 is up. Revenue: $6.82B, up 18% YoY. Data center is massively up, client is up, embedded is down, and gaming is massively down. Q4 guidance is a bit cautious.

- Data Center: $3.55B, up 122% YoY and up 25% sequentially.
- Client: $1.88B, up 29% YoY and up 26% sequentially.
- Gaming: $462M, down 69% YoY and down 29% sequentially.
- Embedded: $927M, down 25% YoY but up 8% sequentially.
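Quick sanity check that the segments add up to the reported total, plus what the YoY changes imply for the year-ago quarter (segment figures as reported above; the prior-year numbers are just derived arithmetic):

```python
# Reported Q3 2024 segment revenue, in $B.
segments = {"Data Center": 3.55, "Client": 1.88,
            "Gaming": 0.462, "Embedded": 0.927}
total = sum(segments.values())
print(f"Segment total: ${total:.2f}B (reported total: $6.82B)")

# Year-over-year change per segment, as reported.
yoy = {"Data Center": 1.22, "Client": 0.29,
       "Gaming": -0.69, "Embedded": -0.25}
for name, rev in segments.items():
    implied_prior = rev / (1 + yoy[name])
    print(f"  {name}: implied Q3 2023 revenue ~${implied_prior:.2f}B")
```

The implied year-ago figures show how stark the swing is: gaming went from roughly $1.5B to $462M in a year, while data center more than doubled.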


The Q4 guidance would still be their highest quarterly revenue ever, despite gaming being down significantly. And AMD is usually conservative with their guidance so I would expect them to hit the top end of the range.

This is also likely the first quarter ever that AMD will beat Intel in data center revenue. FPGA is recovering and AI GPU sales are increasing; they're already almost 50% of data center revenue and will likely be more than 50% in 2025. The gaming outlook is definitely not good, and they certainly could have done better. They announced that RDNA 4 is coming only in 2025, and while that will likely prop the segment up a bit, I have my doubts they will return to RDNA 2 levels.

If that's really mostly due to semi-custom volume decline, then 2% margins don't paint a very pretty picture of how much money they make on dGPUs...

Fixed costs don't really go away so naturally margins will fall. They were making reasonable margins until last year (~25-30% if I remember correctly).

It won't. AMD still needs competitive graphics everywhere, even if they stopped making gaming-specific chips. And Sony couldn't do anything with AMD's gaming department by themselves; they'd still lack the IP.

Yep, no way they're going to sell the graphics IP (they made that mistake once with Adreno, and Qualcomm is laughing all the way to the bank). They need it for the APUs, and also for next-gen consoles, which are presumably in the design phase already. Also, to a small extent, for mobile IP partnerships with Samsung, but that's pretty much a drop in the bucket.

What's going on with console sales and licensing that AMD's revenue can drop so precipitously in one quarter? Do Sony and Microsoft have warehouses full of chips already and don't need to order any more?
There are practically no new games and sales have tanked. Sony has done better than MS but both have reported sharp drops in console sales. The PS5 Pro was expected to boost sales a bit but given the pricing, even that looks unlikely.
 
That drop in revenue is wild. I guess the shift back to midrange and low end is crucial, and failure is death. Hoping both Intel and AMD succeed in making good offerings at the lower price points.
 
There are no compatibility issues anymore afaik, unless you want to use something older than 7/6nm, but that would make no sense. 6nm is probably the most cost-efficient node at the moment, and you also get decent power efficiency (cache still consumes some power, remember).
It's really not that much more complex. Much of the complexity is actually built into all of these chips in the first place to enable a Vcache add-on. Yields also don't need to be an issue, because the cache chiplets are simple and made on a super mature, relatively cheap process, and you don't need to use only the very best bins for everything, as you could sell the rest as a slightly lower model. I imagine the bonding process itself is quite mature by now as well. Plus, again, there's room to reduce costs by shrinking the CCD further without needing the L3 on there.

They could absolutely do it. And they'd absolutely monster the competition in the process, NVIDIA-style. Which they could also leverage to push up average prices for the mid- and higher-end SKUs, because everybody would want them.
 
It's really not that much more complex. Much of the complexity is actually built into all of these chips in the first place to enable a Vcache add-on. Yields also don't need to be an issue, because the cache chiplets are simple and made on a super mature, relatively cheap process, and you don't need to use only the very best bins for everything, as you could sell the rest as a slightly lower model. I imagine the bonding process itself is quite mature by now as well. Plus, again, there's room to reduce costs by shrinking the CCD further without needing the L3 on there.

They could absolutely do it. And they'd absolutely monster the competition in the process, NVIDIA-style. Which they could also leverage to push up average prices for the mid- and higher-end SKUs, because everybody would want them.

It really is that much more complex in terms of production. While the TSVs may be built into the chips, they aren't used for the majority of products. And I don't mean chip yields; I mean bonding process yields, which, while mature, can still fail, and then you have two chips wasted. You still don't reduce cost as such, as you add a second chip and a more complex production process.

Overall, as I mentioned, there is simply not enough packaging capacity, and there are additional costs, production time, and QA/QC involved, for something which offers benefits in select workloads and is not needed for most of the market. Gaming is already a small subset of the market, and X3D is an even smaller niche, along with EPYC with Vcache. For the majority of the market, the standard design and production process is the best option. If it were that easy, they would have done it by now.
 
It really is that much more complex in terms of production. While the TSVs may be built into the chips, they aren't used for the majority of products. And I don't mean chip yields; I mean bonding process yields, which, while mature, can still fail, and then you have two chips wasted. You still don't reduce cost as such, as you add a second chip and a more complex production process.

Overall, as I mentioned, there is simply not enough packaging capacity, and there are additional costs, production time, and QA/QC involved, for something which offers benefits in select workloads and is not needed for most of the market. Gaming is already a small subset of the market, and X3D is an even smaller niche, along with EPYC with Vcache. For the majority of the market, the standard design and production process is the best option. If it were that easy, they would have done it by now.
I'm sure it has nothing to do with their strategy of upselling the X3D parts for like $100-150, of course.
 
I'm sure it has nothing to do with their strategy of upselling the X3D parts for like $100-150, of course.
I don't see why it cannot be both.

To @Erinyes 's point, stacked chips are a more technically challenging process, which inevitably results in fewer successfully assembled and saleable devices. Of course, we're likely talking about a very small percentage overall, which doesn't equate to a $100+ increase in the bill of materials.

Which means we're now back to another recent conversation, where the explicit bill of materials isn't the defining factor in the retail price of a thing. Last time this conversation came up it was about GPUs, but the core of the argument remains: a more desirable product can carry a higher price tag, not because the true BOM cost is higher, but because the market will absolutely bear it. And for gamers who want the utmost in stable performance? The X3D line is where to get it.
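To put a rough number on that "very small percentage", here's a simple cost model with made-up but plausible figures (none of these are AMD's actual costs or yields):

```python
# Illustrative stacked-part cost model; every dollar figure and the
# yield below are assumptions, not AMD's actual numbers.
ccd_cost = 40.0    # known-good CCD
cache_cost = 10.0  # known-good cache chiplet (cheap, mature node)
bond_cost = 5.0    # per-attempt hybrid bonding cost
bond_yield = 0.95  # assume 95% of bonding attempts succeed

unit_cost = ccd_cost + cache_cost + bond_cost
# A failed bond scraps both dies, so spread losses over good units.
cost_per_good = unit_cost / bond_yield
print(f"Cost per good stacked part: ${cost_per_good:.2f}")
print(f"Yield-loss premium: ${cost_per_good - unit_cost:.2f}")
```

Even with a fairly pessimistic bonding yield, the loss adds a few dollars per good part, nowhere near the $100+ retail premium, which is exactly the BOM-versus-market-price point.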
 
I don't see why it cannot be both.

To @Erinyes 's point, stacked chips are a more technically challenging process, which inevitably results in fewer successfully assembled and saleable devices. Of course, we're likely talking about a very small percentage overall, which doesn't equate to a $100+ increase in the bill of materials.

Which means we're now back to another recent conversation, where the explicit bill of materials isn't the defining factor in the retail price of a thing. Last time this conversation came up it was about GPUs, but the core of the argument remains: a more desirable product can carry a higher price tag, not because the true BOM cost is higher, but because the market will absolutely bear it. And for gamers who want the utmost in stable performance? The X3D line is where to get it.

Of course it's some combination of both, but not purely upselling logic.

If they did a cache-less CCD + Vcache like @Seanspeed suggests for all parts, they would increase the BOM cost and the production time for all parts. And while some users (a small subset, as I said) might be willing to pay more, the majority would not benefit much from it or ascribe as much value to it, and thus would not be willing to pay more. Net net, AMD would take on a lot more risk and additional cost while most likely not achieving a corresponding increase in revenue, which would make it unviable.
 
There are rumours, however, that they might go this route for Venice Dense, i.e. Zen 6c, which is on N2. It is supposedly going to be a 32-core cache-less CCD with all the L3 on Vcache.

This would make sense, as SRAM scaling is practically dead beyond 5nm. So putting all the logic on N2 and the cache on N4C, perhaps, would be an optimal solution from a cost point of view.
 
There are rumours, however, that they might go this route for Venice Dense, i.e. Zen 6c, which is on N2. It is supposedly going to be a 32-core cache-less CCD with all the L3 on Vcache.

This would make sense, as SRAM scaling is practically dead beyond 5nm. So putting all the logic on N2 and the cache on N4C, perhaps, would be an optimal solution from a cost point of view.
Actually, TSMC claims they found another round of SRAM scaling with N2, but personally I'm afraid it might be a one-time thing, thanks to the move to GAA transistors.
 
Actually, TSMC claims they found another round of SRAM scaling with N2, but personally I'm afraid it might be a one-time thing, thanks to the move to GAA transistors.

Yes, there is a slight shrink, around ~17% vs N3E, but given that N3E didn't scale at all vs N5, it's pretty small. Given the high cost of and demand for N2 (likely double the cost of N5/N4-based nodes), it likely makes a lot more sense to keep as much of the SRAM on N4 as possible and put the logic on N2, which will benefit more from the shrink and the power/performance improvements. The IODs are also expected to be on N4 for Zen 6, and likely Zen 7 as well.
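Rough arithmetic on why that trade-off works, using the ~2x wafer-cost and ~17% density figures as assumed ratios (these are the estimates discussed here, not TSMC pricing):

```python
# Relative SRAM cost per bit: wafer cost divided by bit density.
# Both ratios below are assumptions, not published TSMC figures.
n2_wafer_cost_vs_n4 = 2.0     # assume N2 wafers cost ~2x N4-class
n2_sram_density_vs_n4 = 1.17  # ~17% denser than N3E, which itself
                              # matched N5/N4 SRAM density

cost_per_bit_ratio = n2_wafer_cost_vs_n4 / n2_sram_density_vs_n4
print(f"SRAM on N2 costs ~{cost_per_bit_ratio:.2f}x per bit vs N4")
```

Under those assumptions, every megabyte of SRAM moved from N2 onto an N4-class chiplet costs roughly 40% less per bit, while the logic, which does shrink well, stays on N2.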
 
Yes, there is a slight shrink, around ~17% vs N3E, but given that N3E didn't scale at all vs N5, it's pretty small. Given the high cost of and demand for N2 (likely double the cost of N5/N4-based nodes), it likely makes a lot more sense to keep as much of the SRAM on N4 and the logic on N2, which will benefit more from the shrink and the power/performance improvements.
Yes, it's expensive of course; my point was just that the wall has moved a bit from where it was.
 
Yes, it's expensive of course; my point was just that the wall has moved a bit from where it was.

Yes, but it's been moving slowly ever since N7, and it's not expected to scale much beyond N2. SRAM-heavy designs will incur large costs going forward, and AMD will have quite an advantage if the competition can't implement similar tech.
 
Yes, but it's been moving slowly ever since N7, and it's not expected to scale much beyond N2. SRAM-heavy designs will incur large costs going forward, and AMD will have quite an advantage if the competition can't implement similar tech.
Yet it wasn't expected to scale beyond 5nm either, after 3nm didn't improve.
 