The Intel Execution in [2025]

By institutional shareholders. I wonder what the folks working there think about it.
Hmmmm, there is some more applause, but not from the people currently working at Intel.


on a different note....

 
Based on their AVX history, AVX10.3 will remove 512-bit completely though 🤭
They flip-flopped a bit over the years regarding AVX-512, but I assume this is one of the rare moments where AMD forced their hand for good, since AMD has had the performance lead for so long ...
 
Sigh, I liked Larrabee so much back in the day; too bad it never came to fruition, but it was a super cool concept. I remember @Shifty Geezer being very interested in it for some reason, and I also wanted some kind of console or device from Intel that used Larrabee.

At Nvidia's GTC event, Pat Gelsinger reiterated that Jensen 'got lucky with AI,' Intel missed the boat with Larrabee



 
Personally I think if Intel sticks with x86 they really can't achieve much. They were too attached to the "x86 everywhere" idea. The reality is that outside of Windows desktops/laptops and servers, x86 has very little presence. Since x86 is not a particularly good architecture, pushing it everywhere does not make much sense.
Even without AI, people in the HPC field were already starting to use GPU accelerators (most of them NVIDIA, albeit not as dominant as in AI). There are very few "pure CPU" supercomputers today.
 

"You know, it became this broader view," Geslinger added. "And then he got lucky with AI, and one time I was debating with him, he said, 'No, I got really lucky with AI workload because it just demanded that type of architecture.'"

"I had a project that was well known in the industry called Larrabee and which was trying to bridge the programmability of the CPU with a throughput-oriented architecture, and I think had Intel stayed on that path, you know, the future could have been different," Gelsinger opined during his appearance on the webcast. "I give Jensen a lot of credit; he just stayed true to that vision."

In the broadcast, Pat Gelsinger also discussed the rising costs of AI hardware, stating that it is "way too expensive" - 10,000 times more than it needs to be for AI and large-scale inferencing to reach its full potential. As for Intel, the next-generation Jaguar Shores project in its AI GPU division will arrive sometime in 2026, competing against next-gen offerings from both NVIDIA and AMD.
 
x86 specifically thrives in those domains (personal computing/servers/high-perf productivity) because of competition! If you really think about it, how many other architectures are pushing the envelope (feature extensions/vertical cache technology to minimize cache misses/low-latency inter-core communication) in terms of raw performance innovations? x86 never needed to be the "best architecture everywhere" because there were always other architectures that were inherently designed to be better suited to their applications ...

Not all HPC applications can convert their C++ code to the more feature-restricted CUDA language. Scientific simulation and modelling aren't even the biggest applications for x86 CPUs; it's security applications like database management, virtual machines, and especially anti-cheat kernel drivers where they play the most important role ...
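
To make the porting gap concrete, here's a minimal hypothetical sketch (the function and data are invented for illustration, not taken from any real codebase) of the kind of host-side C++ that has no line-for-line CUDA equivalent: std::map and C++ exceptions are host-only constructs, so a node-based container like this generally has to be flattened into plain arrays before the data can even be copied to a GPU.

```cpp
// Hypothetical host-side C++ of the kind common in simulation codes.
// None of this maps directly to a __global__ kernel: std::map and
// exceptions are unavailable in CUDA device code, and the pointer-chasing
// tree layout would first have to be flattened into contiguous arrays.
#include <map>
#include <stdexcept>
#include <string>

double total_mass(const std::map<std::string, double>& particle_mass) {
    if (particle_mass.empty())
        throw std::runtime_error("no particles");  // exceptions: host-only
    double sum = 0.0;
    for (const auto& kv : particle_mass)           // red-black tree traversal
        sum += kv.second;
    return sum;
}
```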
 

This is exactly the problem I was talking about with Intel: they really wanted to push x86 everywhere, without considering whether it was a good fit or not. Mobile is growing? Make an x86 CPU for it (Atom). Supercomputing? Make an x86 CPU array. They even wanted to push x86 into embedded computing (with limited success).

Intel was not like this in the past. They did the i860 and i960, which were not very successful but were not bad architectures. Unfortunately, after the spectacular failure of IA-64, the "x86 gang" inside Intel got to make the major decisions, hence the "x86 everywhere" nonsense. I mean, they got StrongARM from DEC and then sold it to Marvell, only to miss out on the iPhone train. Their handling of the x86 ISA is also not very good. Too much artificial fragmentation (exhibit A: AVX10; see the sketch below). AMD seems to be doing much better in this regard lately.

I don't know how powerful the "x86 gang" is (if such a mentality still exists), but Intel really needs to get rid of it. Obviously they have much bigger problems today, but they still need to produce compelling products to keep their fabs running.
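
On the fragmentation point, here's a minimal sketch (my own illustration, not Intel sample code; the function names are made up) of the runtime dispatch developers end up writing because AVX-512 may or may not be present on any given x86 part; __builtin_cpu_supports is the GCC/Clang way to check at runtime.

```cpp
// Minimal sketch of the runtime dispatch forced by x86 ISA fragmentation:
// the same vector add ships in two versions because AVX-512 support
// varies across otherwise-current x86 CPUs.
#include <immintrin.h>
#include <cstddef>

__attribute__((target("avx512f")))  // allow AVX-512 intrinsics in this function only
static void add_avx512(const float* a, const float* b, float* c, std::size_t n) {
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16) {                      // 512 bits = 16 floats per iteration
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(c + i, _mm512_add_ps(va, vb));
    }
    for (; i < n; ++i) c[i] = a[i] + b[i];              // scalar tail
}

static void add_scalar(const float* a, const float* b, float* c, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) c[i] = a[i] + b[i];
}

void vector_add(const float* a, const float* b, float* c, std::size_t n) {
    if (__builtin_cpu_supports("avx512f"))              // GCC/Clang CPU feature check
        add_avx512(a, b, c, n);
    else
        add_scalar(a, b, c, n);
}
```

Multiply that by every extension tier (SSE4/AVX/AVX2/AVX-512/AVX10) and you get the maintenance burden being complained about here.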
 
I'm not so sure whether they truly had an inherently unhealthy obsession with x86 or whether it was simply more convenient to reuse their "core IPs" ...

Also, Intel never really wanted to keep their XScale division, since it was an artifact of their lawsuit settlement with DEC and the division was losing money for the entire business too. I don't think Apple intended to have a long-term partnership with Intel either, given that they acquired P.A. Semi just a year after the launch of the iPhone, so it's not as much of a loss as everyone believes it to be, especially since Apple has a history of fostering "toxic relationships" with many of their suppliers ... (Apple's autonomous EV project ended up nowhere for that very reason)

Intel wouldn't have had much success convincing Apple to continue an expanded relationship without taking more losses ...
 

Intel did that, for example:

https://www.techcentral.ie/intel-sees-x86-everywhere-in-future/ (from 2008)
https://arstechnica.com/gadgets/2013/01/intel-at-ces-more-performance-less-power-and-x86-everywhere/ (from 2013)

The thing about XScale is not whether it was losing money at the time; they got rid of it because it's not x86. Intel did want to get into the mobile market (Apple or not), but since they had nothing else, all they could do was push an x86 CPU that was not power efficient compared to the ARM-based CPUs of the time. It might have been possible to make an x86 CPU that was competitive in power efficiency, but at the time Intel couldn't do that. Even to this day, ARM-based laptops are more power efficient than x86 ones (although neither Intel nor AMD has really made that a target).

IMHO Intel should just do whatever they can to grab all the market possible. Of course there's no guarantee of success. Even if Intel had done an XScale-based ARM SoC, that doesn't mean they could have taken the mobile market, but with Atom they certainly wouldn't. It's like wearing lead shoes to a 100m sprint. They practically sabotaged themselves.

I think it's the same with AI. When NVIDIA started doing CUDA, Intel wasn't paying attention. It's of course possible to add a wide vector instruction set to x86 (probably not the best idea, but anyway), but they weren't doing that. AVX-512 was first proposed in 2013 and implemented in 2016, almost 10 years later than CUDA. And Intel made it available only on server and workstation CPUs, so the first consumer CPU with AVX-512 (Cannon Lake) shipped 2 years later in 2018. (IMHO making new tech available on consumer-grade hardware is very important, as it's much easier for students to try it at home.)

I also remember Intel's so-called IDM 2.0, their plan to enter the foundry business, with one of the focuses being making their x86 IP available to their foundry customers. But who wants x86 IP? Who wants to put an x86 CPU inside their SoC? It makes very little sense.

Anyway, these are all old stories. Today, even if Intel has something good with their fabs, they still need a lot of chips to fill those fabs. Their CPUs alone are already not enough (note that this was not the case before, as Intel used TSMC and other fabs to make non-CPU chips). They'll have to find foundry customers, or have some compelling products other than their CPUs (e.g. AI chips), or both.
 
Even if their strategy was the direct reason why they offloaded their XScale division, losing money on it was a good enough reason to eventually get rid of it anyway ...

They could very well have kept iterating on their ARM designs to break into the mobile phone SoC market, but that idea wasn't very attractive because of the more "decentralized nature of the ARM ecosystem": the market would have to be shared with many other competitors in both chip design (ARM/QCOM/MTK/etc.) and manufacturing (TSMC/Samsung/UMC). Intel's conclusion was that the 'race' would be a race to the bottom, with whoever could most efficiently flood the market at the lowest profit/surplus having their resources exploited by the lowest bidder, since the market would be dangerously competitive. Intel also wasn't much of a player in developing wireless standards compared to Qualcomm, so they burned even more cash ($1.4B USD) to acquire a modem designer (Infineon's Wireless Solutions division) ... (AMD held similar concerns about their ARM Opteron project before discontinuing it)

By reusing x86 for mobile phones, Intel thought that their only potential threat would be AMD (the only other player that could control the distribution of x86 systems) and that they'd be closing one money pit (XScale) while they opened up another (modems) ...

For Intel, ALL of their choices (market share vs control) involved burning tons of cash one way or another ...
 
Honestly, I just think many of Intel's past decisions were sort of half-assed :p . For example, they wanted to go into the mobile market, which is why they bought a modem designer and got involved in the WiMAX standard. However, on the SoC side they insisted on doing x86, which couldn't be made very power efficient. Then they wanted to get into the foundry market, and they got an FPGA company as their first customer, which actually made sense because FPGAs are very regular and easier to make, but it ended badly and Intel even bought that company (and they finally made it independent this year).

None of this did too much damage, because Intel was so big and so profitable. I think their past successes with x86 somehow clouded their view. I mean, in a way x86 was (and in some ways still is) a moat, especially in the PC desktop/laptop market. In the server market, it's less so. And, as I mentioned, in other fields x86 is hardly used at all. It's not that they didn't try to branch out beyond PCs and servers, but when they attached an unnecessary requirement (x86), it impeded the progress.

I imagine a different timeline where Intel used their superior process (back when it was superior) to make ARM SoCs for mobile applications. Low-end mobile phone companies wouldn't have been interested, but those who wanted fast high-end devices certainly would have noticed. The most profitable smartphone today uses an SoC made with the best process money can buy. That would certainly be worth something to Intel today. But what Intel brought was an in-order x86 CPU, which was neither fast nor power efficient, and only a few vendors (such as ASUS, which I suspect did a deal with Intel just as a favor) actually used it.
 