AMD Execution Thread [2024]

I think AMD needs to be just as worried about Intel as ARM in the CPU space.

Intel might have some goodwill to reclaim in the server space, but it wasn't so long ago that they were still dominant there, so it shouldn't be massively hard for them to win customers back if they can get back to executing, which they might well do soon.

Intel choosing to use TSMC's near-latest nodes while also making huge leaps with its own nodes perhaps signals an end to some of AMD's stronger competitive advantages of late.

For PC/laptop, ARM will be a hassle for sure. Most of the market doesn't care about high-performance applications, so all ARM-based processors need to do is offer ubiquitous product availability and competitive pricing. And there's so much misplaced hype around ARM, with everybody thinking any ARM chip will magically have Apple-like performance and efficiency, which definitely isn't the case, but reality doesn't matter as much as marketing and hype. Gonna be interesting to see how this plays out. QC/Nuvia might be the most exciting player, but other more standard ARM designs could still ride the coattails.

Gonna be a weird next 5 years or so I think. Many moving parts, with nobody looking like they're locked in as a winner in the end.
 
If ARM implementations can't feasibly emulate AVX/2 instructions then they won't be taken seriously by many consumers. A major ISV like Adobe has several applications that require AVX/2 ...

If they are willing to port to ARM, AVX/2 is not a big issue. Adobe has no problem selling an Apple Silicon version of Photoshop, for example.
 
But who's going to buy devices that won't support these applications out of the box? Most potential customers don't want to own more than one device of a similar type, so they'll eventually decide to pass on the ARM-based offerings, and the cycle repeats itself again ...
 

But not everyone needs applications requiring AVX/2. If that were the case, I believe Microsoft would already be emulating it (it's not very difficult even without vector instructions, it's just a matter of performance). It's probably going to go like this: if enough people are buying these laptops for, say, Office, application vendors will notice and will port their applications. Rosetta 2 does not support AVX/2 either.
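For what it's worth, here is a minimal sketch of what an emulator or binary translator has to do for a single AVX2 instruction; the struct and function names are made up for illustration, not taken from any real translator. The point is that reproducing the semantics with plain scalar code is easy, so the question is purely how fast it runs, not whether it can run.

```c
/* Hypothetical illustration: emulating AVX2's VPADDD (eight packed
 * 32-bit adds on a 256-bit register) with plain scalar C. Correctness
 * is trivial; the cost is performance, since one instruction becomes
 * eight adds plus loop overhead. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint32_t lane[8]; } ymm_t;   /* 256-bit register image */

ymm_t emulate_vpaddd(ymm_t a, ymm_t b)
{
    ymm_t r;
    for (int i = 0; i < 8; i++)
        r.lane[i] = a.lane[i] + b.lane[i];    /* unsigned wraparound, as on x86 */
    return r;
}

int main(void)
{
    ymm_t a = {{1, 2, 3, 4, 5, 6, 7, 8}};
    ymm_t b = {{10, 20, 30, 40, 50, 60, 70, 80}};
    ymm_t r = emulate_vpaddd(a, b);
    for (int i = 0; i < 8; i++)
        printf("%u ", r.lane[i]);
    printf("\n");
    return 0;
}
```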
 
If they don't need any high-performance applications then there'll be more freedom in terms of choice, but AVX/2 is quickly proliferating in the Windows world, so I suspect ARM devices will become a less attractive proposition over time until they can implement 256-bit SVE2 ...
 

Well, I don't think the vector length is a big issue here. One of the reasons AVX/2 is popular is the extra registers. ARMv8 already has 32 general-purpose registers, and SVE has 32 vector registers, so even at a 128-bit vector length that's the same total vector register space as AVX2's 16 × 256-bit registers (sketch below).
It's probably more important for Microsoft to quickly make sure that DirectCompute and DirectML work well on ARM for high-performance applications.
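To make that concrete, here is a sketch (the function name is made up; the intrinsics are the standard Intel and Arm ACLE ones) of the same int32 array add written for AVX2 and for SVE. The AVX2 version hard-codes eight lanes per iteration; the SVE version is vector-length agnostic, so identical source runs on 128-bit or 256-bit SVE hardware and simply processes more lanes per iteration on wider implementations.

```c
#include <stdint.h>

#if defined(__AVX2__)
#include <immintrin.h>
/* x86-64 path: fixed 256-bit vectors, 8 x int32 per iteration. */
void add_int32_arrays(int32_t *dst, const int32_t *a, const int32_t *b, int64_t n)
{
    int64_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256i va = _mm256_loadu_si256((const __m256i *)(a + i));
        __m256i vb = _mm256_loadu_si256((const __m256i *)(b + i));
        _mm256_storeu_si256((__m256i *)(dst + i), _mm256_add_epi32(va, vb));
    }
    for (; i < n; i++)        /* scalar tail */
        dst[i] = a[i] + b[i];
}

#elif defined(__ARM_FEATURE_SVE)
#include <arm_sve.h>
/* AArch64 path: vector-length agnostic, svcntw() lanes per iteration
 * (4 on 128-bit SVE, 8 on 256-bit SVE), with a predicate handling the tail. */
void add_int32_arrays(int32_t *dst, const int32_t *a, const int32_t *b, int64_t n)
{
    for (int64_t i = 0; i < n; i += svcntw()) {
        svbool_t pg = svwhilelt_b32(i, n);
        svint32_t va = svld1_s32(pg, a + i);
        svint32_t vb = svld1_s32(pg, b + i);
        svst1_s32(pg, dst + i, svadd_s32_x(pg, va, vb));
    }
}
#endif
```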
 
You keep bringing up this x86 256-bit AVX issue on ARM, but it's not really a problem. AVX, or whatever extension, is a click in the compiler options (see the sketch below), and then the software runs faster if it can take advantage of the CPU instructions available. The end result is only a matter of performance, not compatibility. When a developer targets the ARM platform, they just have different options (NEON, SVE, SME, etc.). Thus on a supported CPU, 128-bit SVE code will run faster than code without it.
BTW, Adobe is porting its entire Creative suite to WoA, like it did for Apple Silicon, and most of the large publishers are following suit. In 2~3 years' time, I expect all new software will be available simultaneously on ARM and x86.
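As an aside, a sketch of what that "click" amounts to in practice (the file name, function, and exact invocations are illustrative; GCC and Clang both accept these switches): the source stays plain scalar C, and the ISA-specific vector code comes entirely from the target flags.

```c
/* add.c -- the same scalar source built for different vector ISAs
 * purely via compiler flags (illustrative invocations for GCC/Clang):
 *
 *   x86-64 with AVX2:   cc -O3 -mavx2             -c add.c
 *   AArch64 with NEON:  cc -O3 -march=armv8-a     -c add.c
 *   AArch64 with SVE:   cc -O3 -march=armv8-a+sve -c add.c
 */
#include <stddef.h>

void add_float_arrays(float *dst, const float *a, const float *b, size_t n)
{
    /* A loop like this is auto-vectorized according to whichever
     * target flags were passed; the C source itself never changes. */
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}
```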
 
AVX/2 requirements are already undeniably a problem in modern AAA gaming and are fast gaining traction in other types of software. It's a massive issue if a new device is unable to run programs that prior systems could!

Developers are not likely to go out of their way to support ARM devices, even if it's only a compiler toggle away, if they don't see any significant share gains in the PC space ...
 
Hmmm, in case you are not aware, x86 is the minority platform, not Arm. Android phones, iPhones and the Switch are the majority.
 
The main issue to me is games/programs that have already been released (and don't get periodic refreshes). Developers are not by any means required to support a new compiler target.

@xpea 's referring to AVX as a click in the compiler options is a dishonest misrepresentation of the situation.
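To illustrate why a compiler toggle doesn't help with already-shipped binaries: many released x86-64 programs gate on AVX2 at startup with a runtime check along these lines (a sketch using the GCC/Clang __builtin_cpu_supports builtin; the function split is hypothetical). If the CPU, or the translation layer underneath it, doesn't report AVX2, the existing binary simply refuses to run, and only the original developer can rebuild it for another target.

```c
/* Sketch of a startup gate an already-shipped x86-64 title might use.
 * __builtin_cpu_supports is a GCC/Clang builtin that queries CPUID. */
#include <stdio.h>
#include <stdlib.h>

static void require_avx2(void)
{
    if (!__builtin_cpu_supports("avx2")) {
        fprintf(stderr, "This application requires a CPU with AVX2 support.\n");
        exit(1);
    }
}

int main(void)
{
    require_avx2();
    puts("AVX2 present, continuing...");
    return 0;
}
```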
 
Of course, any programmer out there who's worth their salt will understand the concept of "undefined behaviour" at the source-language level ...
 
Instead of playing the renaming game, AMD should look at their market share, because it's not going well...

[Attached chart: desktop discrete GPU market share, NVIDIA vs AMD]

 
I get why having "AI" in your product name seems attractive to marketing people (funny side note: Nvidia doesn't have "AI" in ANY of its product names presently), but I will never get why it's so important to have your chipsets start with the same number as your competitor's. It just creates more issues for your customers than it attracts new ones, IMO.
 

Yeah it’s not like people compare Samsung, Sony and LG TVs based on model #. Really weird.
 
So, NVIDIA managed to sell 10% more units in Q1/24 vs Q4/23 (7.7m vs 7.0m), thus increasing their share to 88%. Despite this, their gaming GPU revenue dropped by 8% in Q1/24 vs Q4/23. I am guessing this is due to them slashing prices with the Super series introduction?

Meanwhile, AMD's unit sales fell ~44% in Q1/24 vs Q4/23 (1.0m vs 1.8m), while their gaming revenue dropped by 33% in the same period; the smaller revenue drop could be explained by console sales picking up the difference, of course.
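Back-of-envelope check on those unit figures (assuming, for simplicity, that the discrete total is just NVIDIA + AMD, i.e. ignoring any Intel units the chart may include):

```c
/* Quick arithmetic on the quoted unit numbers (millions of units).
 * Assumes the discrete-GPU total is NVIDIA + AMD only. */
#include <stdio.h>

int main(void)
{
    double nv_q4 = 7.0, nv_q1 = 7.7;
    double amd_q4 = 1.8, amd_q1 = 1.0;

    printf("NVIDIA unit change:  %+.1f%%\n", 100.0 * (nv_q1 / nv_q4 - 1.0));   /* +10.0% */
    printf("AMD unit change:     %+.1f%%\n", 100.0 * (amd_q1 / amd_q4 - 1.0)); /* -44.4% */
    printf("NVIDIA Q1/24 share:  %.1f%%\n", 100.0 * nv_q1 / (nv_q1 + amd_q1)); /*  88.5% */
    return 0;
}
```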
 
This is nothing new really; Nvidia has held ~75-80% market share in desktop graphics ever since Maxwell, from what I remember, with maybe a few brief periods where AMD managed to grab some additional share (Polaris and RDNA2 perhaps?). Their focus for the last few years has been the higher-margin server and HPC markets, which is where they dedicated most of their resources. Nvidia is well entrenched in dGPU and, to many consumers, almost the default option.

It will take RDNA2-like execution across multiple generations to gain consumer confidence and market share, but even in the long term it looks to be almost an Airbus/Boeing kind of situation, where Airbus will have ~60% or more market share for the foreseeable future. AMD does want to compete, but it's clear the focus is server, DC, HPC, and even the laptop market ahead of dGPUs presently. This might change with RDNA5, but for now it seems the status quo will continue.
 