AMD Execution Thread [2024]

AMD is going for a yearly cadence with Instinct. MI325 will be the same as MI300 but with faster memory; MI350 is CDNA 4 for 2025 - "the biggest generational leap in AI performance in our history" - 3nm, 288GB HBM3E, FP4/FP6, with a claimed 35x inference performance uplift vs CDNA 3 and a claimed "1.2x AI compute TFLOPs" vs B200. Then MI400 in 2026 with "CDNA Next"

Nothing on Strix GPU perf, lots of "AI" focus and partnerships with companies, and nothing about RDNA4

 
IPC =/= performance of the final product.
I'm aware, but boost clocks are also unchanged and TDPs have gone down for all products apart from the 9950X. Maybe we will see slightly higher all-core clocks in gaming, but still it's hard to see how it can be 15% faster than the 14900K.
 
Seems like AMD is playing it a bit safe with clocks in order to really market the efficiency advantages, especially knowing that Intel soon won't be a node generation behind anymore. Wouldn't be surprised if people can squeeze a bit more clock/performance out of these over stock than before.

Still, I'm happy enough to see lower TDPs since that's how many people will run them, and the gains for an extra 40-70W+ of power or whatever are usually quite minimal.

For gaming, Zen 5 looks like it probably won't be some huge leap. It's fine, but I think without a bigger process jump, Zen 4 had already used up most of the potential for further clock gains.
 
She (Su) said 16%, but whatever... anyway, it's far from all the so-called "leaks" from the Internet. The greatest performance uplift is unsurprisingly in AVX.
16% is a reasonable generational IPC increase, similar to Cortex X5/X925 vs X4, but Cortex has been on a yearly cadence. AMD has sort of slipped to a ~2-year cadence considering Zen 3 launched in Nov '20. The rumours pointed to ~20-30% or more, so it is a bit disappointing tbh. However, with no increase in L2/L3 cache, the area increase is supposedly minimal even with AVX-512, and it seems AMD is still far ahead of Intel on overall PPA. While this is understandable considering AMD's focus is the higher-margin server market, many were hoping for much more from Zen 5.
I'm aware, but boost clocks are also unchanged and TDPs have gone down for all products apart from the 9950X. Maybe we will see slightly higher all-core clocks in gaming, but still it's hard to see how it can be 15% faster than the 14900K.
As with all manufacturer-provided numbers, we should take them with a grain of salt. We don't know the exact settings and power limits used for the Intel config. We'll get independent third-party reviews closer to launch.
Seems like AMD is playing it a bit safe with clocks in order to really market the efficiency advantages, especially knowing that Intel soon won't be a node generation behind anymore. Wouldn't be surprised if people can squeeze a bit more clock/performance out of these over stock than before.
Wonder if they did end up using N4X, but in general I agree: Zen 2 and Zen 3 were clocked more conservatively, and with Zen 4 they really pushed to the max of the V/F curve to squeeze out every last bit of performance. If Zen 5 is more power efficient (which Lisa hinted at as well), it's better for consumers overall. It should also make more of a difference to the laptop parts, and Strix Point seems to have a lot more design wins vs Phoenix/Hawk Point. The Zen 5c "dense" server CCD is on 3nm, so that should at least offer disproportionately higher performance and performance/watt vs Bergamo.
 
Q: Last week you said AMD will make 3nm chips with GAA. Samsung Foundry is the only foundry doing 3nm GAA - so will AMD choose Samsung Foundry for this?

A: Referring to keynote at imec last week. What we were talking about is that AMD will always use the most advanced technology. We will use 3nm. We will use 2nm. We didn't say the vendor for 3nm or GAA. Our current partnership with TSMC is very strong - we talked about 3nm products we're doing now.

Lisa Su walked back the potential use of Samsung's GAA process. She seems to have finally realized that her earlier remarks could strain AMD's relationship with TSMC.
 
Lisa Su walked back the potential use of Samsung's GAA process. She seems to have finally realized that her earlier remarks could strain AMD's relationship with TSMC.

AMD has been working with TSMC for decades and is probably the third-largest TSMC customer after Apple and Nvidia. They used to work with both GlobalFoundries and TSMC earlier. Even Nvidia works very closely with TSMC yet used Samsung for Ampere. In the end the relationship boils down to the contracts and volumes they actually commit to, and AMD seems to be pretty happy with TSMC. Wafer availability and pricing play a part as well, and seemingly AMD will use Samsung 4nm for a low-cost Zen 5 APU (Sonoma Valley) in 2025. However what I have read is that TSMC does not work with customers on the bleeding edge nodes unless they single source from TSMC. Hence the use of Samsung 8nm by Nvidia and 4nm by AMD - both older nodes at the time - which does make sense from that perspective.
 
However what I have read is that TSMC does not work with customers on the bleeding edge nodes unless they single source from TSMC.
There are too many examples that contradict this for it to be true. Nvidia, Intel, Qualcomm, etc. have all sourced TSMC for certain parts while using a different fab for others at the same time.

Like, while Nvidia used Samsung 8nm for consumer Ampere parts, they used TSMC 7nm for A100. Or even now, Intel will be using both TSMC N3 and Intel 20A on literally the same processor! Pretty sure even AMD were using TSMC 7nm chiplets with a GF 14nm I/O die for Zen 2, right?

Or am I misunderstanding what you're saying?
 
Everything is a question of costs and prices. Someone willing to commit is more likely to receive discounts and other advantages, of course, but no business would cut itself off from potential customers just because they are using products and services from a competitor. This whole idea is just completely wrong.
 
Everything is a question of costs and prices. Someone willing to commit is more likely to receive discounts and other advantages, of course, but no business would cut itself off from potential customers just because they are using products and services from a competitor. This whole idea is just completely wrong.
Why? If you acknowledge discounts are likely, it's exactly the same thing.

Of course no one would flat-out reject a customer, but the leading edge may very well be priced too high without a "deal" to lower it.
 
Why? If you acknowledge discounts are likely, it's exactly the same thing.
Not really. For a business, those who don't use your products and services are the most obvious avenue for increasing revenues and profits. So while you're likely to offer discounts and other enticements to keep your current customers, you would look to those who are not yet customers for growth opportunities. Cutting them off just because they are not your customers is completely backwards to how any business operates.
 
There are too many examples that contradict this for it to be true. Nvidia, Intel, Qualcomm, etc. have all sourced TSMC for certain parts while using a different fab for others at the same time.

Yes, they have all dual sourced, which even I mentioned, but as I said those cases were not bleeding edge. Nvidia used Samsung 8nm, a derivative of 10nm, when TSMC 5nm was already shipping (Apple). AMD used GF's by-then-antiquated 14nm process alongside TSMC 7nm (more to satisfy the WSA with GF than anything). Neither AMD nor Nvidia is considering Samsung for 3nm/2nm while both are working on TSMC 3nm/2nm. Intel is the only one actually using leading edge from both TSMC 3nm and their own 20A, as you mentioned. But Intel is not using TSMC 2nm, and their TSMC 3nm contract itself may be a unique case where they also put down a substantial pre-payment years in advance. Anyway, I could well be wrong, but this is something I had come across from semiconductor industry insiders.
TSMC doesn’t have perfect leverage here. They need early adopters just as much as customers need advanced nodes. We will never know what these deals really look like.

Actually, TSMC usually has more leverage than its customers because demand for advanced nodes typically exceeds supply, especially in the current scenario. Samsung is not a reliable fab partner, and Intel Foundry is yet to prove itself in volume. That pretty much leaves only TSMC.
 
Could AMD be in significant trouble in the PC and laptop space in the near-to-medium term with the ARM and Qualcomm invasion if they don't go to yearly release schedules, at least for CPUs? The competition's rate of improvement could be terrifying if AMD doesn't have massive improvements of its own, and QC have said they want to be in desktops too

  • Phone makers are used to annual releases with decent improvements, whereas AMD is on roughly a two-year cadence
  • 15-20% 1T perf/efficiency yearly isn't an unfair expectation for them, whereas that's on the lower-to-average end of what's expected from AMD every 2 years
  • Intel is also yearly, with reasonably good improvements in the ballpark of ARM/Qualcomm, although granted their starting point is worse than AMD's
  • GPU-wise, a 1.3-1.4x iGPU uplift is about expected every year for phone makers, sometimes slightly higher - similar to AMD's, but those generally come on 1.5-2 year releases

Hypothetical 2029 scenario:
AMD are still 2 years/CPU gen, averaging 1.2x perf/efficiency per gen - 1.2^2 = 1.44x (their next release is a year later in 2030)
ARM/Qualcomm yearly 1.2x perf/efficiency, 1.2^5 = 2.49x
Difference = 1.728x in favour of ARM/QC

For reference, 1.6^2 = 2.56x | 1.15^5 = 2.01x | 1.4^2 = 1.96x | 1.3^2 = 1.69x
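
To make the compounding explicit, here's a minimal sketch in C of the arithmetic above (the 1.2x rates and generation counts are this post's hypothetical inputs, not measured data):

```c
/* Compounding hypothetical per-generation gains over a fixed window.
 * All rates below are the post's illustrative assumptions. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double amd_gain_per_gen = 1.2;  /* assumed gain per 2-year AMD generation */
    double arm_gain_per_year = 1.2; /* assumed yearly ARM/Qualcomm gain */
    double amd_total = pow(amd_gain_per_gen, 2);  /* 2 generations by 2029 */
    double arm_total = pow(arm_gain_per_year, 5); /* 5 generations by 2029 */
    printf("AMD %.2fx | ARM/QC %.2fx | gap %.3fx\n",
           amd_total, arm_total, arm_total / amd_total);
    /* -> AMD 1.44x | ARM/QC 2.49x | gap 1.728x */
    return 0;
}
```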

Who would buy an AMD laptop/desktop if the competition improves that much relative to them? Which vendor would want to spend money making laptops with AMD over their ARM rivals, if those rivals can make the kind of progress they have in phones? I don't want to be a doom-and-gloom person here, but those aren't unrealistic rates of perf/efficiency improvement given what's happened since 2017 (Zen's release) for phones/AMD/Intel. Obviously the future hasn't happened yet, but is that unrealistic if things continue as they are now?
 
Could AMD be in significant trouble in the PC and laptop space in the near-to-medium term with the ARM and Qualcomm invasion if they don't go to yearly release schedules, at least for CPUs? The competition's rate of improvement could be terrifying if AMD doesn't have massive improvements of its own, and QC have said they want to be in desktops too

I have similar thoughts, and even with Zen 4 I felt the IPC increase was sub-par (~13%) for what was a ~20-22 month cadence with an increased transistor budget. It was only due to the performance of the new node that they managed to stay on par with Intel. Zen 5 brought a ~16% IPC increase, which is what ARM managed in a year while AMD again took ~20-22 months. Zen 5 seems to have only a nominal increase in transistors, and seemingly they've spent a majority of that transistor budget on AVX-512, which is not a benefit to consumer workloads. It is clear AMD is focusing on the higher-margin server market and client is secondary. While you cannot fault AMD for prioritizing their higher revenue/margin lines, one hopes they focus a bit more on client as well. Part of the reason for AMD's seemingly slow progress is also simply resources: CPU and GPU design cycles take years, and it's only since 2021 that they've been making enough money to really invest more in R&D/personnel (though Zen 3's brilliant execution would seem to imply otherwise).
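
A handy way to compare cadences on equal footing is to annualize a cumulative gain as a geometric mean per year; a minimal C sketch, using the ~16%-over-~22-months figure above purely as an illustration:

```c
/* Annualizing a cumulative performance gain (geometric mean per year).
 * The inputs are illustrative, taken from the rough figures in this post. */
#include <stdio.h>
#include <math.h>

static double annualized(double total_gain, double years) {
    return pow(total_gain, 1.0 / years);
}

int main(void) {
    /* ~16% IPC delivered over a ~22-month cadence is only ~8.4%/year */
    printf("annualized gain: %.3fx\n", annualized(1.16, 22.0 / 12.0));
    return 0;
}
```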

On the GPU front as well, there seems to be a shortage of execution resources/memory bandwidth, and performance hasn't improved significantly since Rembrandt. Phoenix definitely underperformed, and early Strix Point benchmarks seem to put it about on par with Lunar Lake, when AMD has traditionally been significantly ahead.

However, it isn't as black and white as you put it. ARM, Qualcomm and Apple's CPUs are designed for lower power and inherently cannot clock as high as AMD's/Intel's. AMD seems to be ahead on overall PPA. There are also diminishing returns as you keep increasing performance. Qualcomm also seems to be on a ~2-year cadence, with their next gen due only in 2026. Apple has been on a ~12-15 month cadence, but if you take IPC from M1 to M4, it has not grown at a double-digit CAGR; the performance increase has also come from clocks.

I do agree, though, that AMD needs to get back to a 12-15 month cadence and move to leading-edge nodes faster, or else they risk being surpassed by other players.
 
Could AMD be in significant trouble in the PC and laptop space in the near-to-medium term with the ARM and Qualcomm invasion if they don't go to yearly release schedules, at least for CPUs? The competition's rate of improvement could be terrifying if AMD doesn't have massive improvements of its own, and QC have said they want to be in desktops too

Who would buy an AMD laptop/desktop if the competition improves that much relative to them? Which vendor would want to spend money making laptops with AMD over their ARM rivals, if those rivals can make the kind of progress they have in phones? I don't want to be a doom-and-gloom person here, but those aren't unrealistic rates of perf/efficiency improvement given what's happened since 2017 (Zen's release) for phones/AMD/Intel. Obviously the future hasn't happened yet, but is that unrealistic if things continue as they are now?
If ARM implementations can't feasibly emulate AVX/AVX2 instructions then they won't be taken seriously by many consumers. A major ISV like Adobe has several applications that require AVX/AVX2...
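
For context, here's a hypothetical minimal example in C of the kind of 256-bit SIMD code such applications ship (AVX2 integer intrinsics; compile with -mavx2 on x86). An x86 emulator or binary translator on ARM has to map each 256-bit operation onto narrower NEON ops, which is where the feasibility and performance concerns come from:

```c
/* Minimal AVX2 sample: one instruction performs eight 32-bit adds.
 * A translator targeting 128-bit NEON would need at least two ops here. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int out[8];
    __m256i va = _mm256_loadu_si256((const __m256i *)a);
    __m256i vb = _mm256_loadu_si256((const __m256i *)b);
    __m256i vc = _mm256_add_epi32(va, vb); /* 8 lane-wise adds at once */
    _mm256_storeu_si256((__m256i *)out, vc);
    printf("%d %d\n", out[0], out[7]); /* 11 88 */
    return 0;
}
```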
 