DegustatoR
Well, that's a CEO...
The news was taken positively.

Intel Appoints Lip-Bu Tan as Chief Executive Officer
www.intc.com
Hmmm, there is some more applause, but not from the people currently working at Intel. By institutional shareholders. I wonder what the folks working there think about it.
Based on their AVX-history AVX10.3 will remove 512-bit completely though
Starting from AVX10.2, future Intel platforms will unconditionally support 512-bit AVX register file implementations ...
They flip-flopped a bit over the years regarding AVX-512, but I assume this is one of the rare moments where AMD forced their hand for good this time, since they've had the performance lead for so long ...
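The practical upshot of all that flip-flopping is that software can't just assume 512-bit support and has to check at run time. A minimal sketch of such a check (my own, not from the thread), using the GCC/Clang-specific __builtin_cpu_supports builtin; MSVC would need a CPUID-based check instead:

```cpp
// Runtime dispatch sketch: pick a code path based on what the CPU actually
// reports, since AVX-512 availability has varied across Intel generations.
// GCC/Clang only.
#include <cstdio>

int main() {
    if (__builtin_cpu_supports("avx512f")) {
        std::puts("AVX-512F available: take the 512-bit path");
    } else if (__builtin_cpu_supports("avx2")) {
        std::puts("AVX2 only: take the 256-bit path");
    } else {
        std::puts("scalar / SSE fallback");
    }
    return 0;
}
```

Real-world libraries do essentially this (often via function multi-versioning or ifunc resolvers) precisely because the feature set keeps shifting between product lines.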
"You know, it became this broader view," Geslinger added. "And then he got lucky with AI, and one time I was debating with him, he said, 'No, I got really lucky with AI workload because it just demanded that type of architecture.'"
"I had a project that was well known in the industry called Larrabee and which was trying to bridge the programmability of the CPU with a throughput-oriented architecture, and I think had Intel stayed on that path, you know, the future could have been different," Gelsinger opined during his appearance on the webcast. "I give Jensen a lot of credit; he just stayed true to that vision."
In the broadcast, Pat Gelsinger also discussed the rising costs of AI hardware, stating that it is "way too expensive": 10,000 times more than it needs to be for AI and large-scale inferencing to reach its full potential. As for Intel, the next-generation Jaguar Shores project in its AI GPU division will arrive sometime in 2026, competing against next-gen offerings from both NVIDIA and AMD.
Personally I think if Intel sticks with x86 they really can't achieve much. They were too attached to the "x86 everywhere" idea. The reality is that outside of Windows desktops/laptops and servers, x86 has very little presence. Since x86 is not a particularly good architecture, pushing it everywhere does not make much sense.
Even without AI, people in the HPC field were already starting to use GPU accelerators (most of them NVIDIA, albeit not as dominant as in AI). There are very few "pure CPU" supercomputers today.
x86 specifically thrives in those domains (personal computing/servers/high-perf productivity) because of competition! If you really think about it, how many other architectures are pushing the envelope (feature extensions/vertical cache technology to minimize cache misses/low-latency inter-core communication) in terms of raw performance innovations? x86 never needed to be the "best architecture everywhere" because there were always other architectures that were inherently designed to be more suited to their applications ...
Not all HPC applications can convert their C++ code to the more feature-restricted CUDA language. Scientific simulation and modelling aren't even the biggest applications for x86 CPUs, but it's security applications like database management, virtual machines, and especially anti-cheat kernel drivers where they play the most important role ...
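To illustrate what "feature-restricted" means in practice (my own sketch, names and scenario made up): a lot of ordinary host-side C++ leans on things CUDA device code restricts or forbids, such as exceptions and node-based standard containers like std::map, so porting isn't just a recompile:

```cpp
// Ordinary host-side C++ of the kind that does not drop straight into a GPU
// kernel: exceptions and std::map are not usable in CUDA device code, so logic
// like this has to be redesigned rather than just annotated. Illustrative only.
#include <map>
#include <stdexcept>
#include <string>

double lookup_rate(const std::map<std::string, double>& table,
                   const std::string& key) {
    auto it = table.find(key);
    if (it == table.end()) {
        throw std::runtime_error("unknown key: " + key);  // exceptions: host-only
    }
    return it->second;
}
```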
This is exactly the problem I was saying about Intel: they really wanted to push x86 everywhere, without considering whether it's a good fit or not. Mobile is growing? Make an x86 CPU for it (Atom). Supercomputing? Make an x86 CPU array. They even wanted to push x86 into embedded computing (with limited success).
Intel was not like this back then. They did the i860 and i960, which were not very successful but were not bad architectures. Unfortunately, after the spectacular failure of IA-64, the "x86 gang" inside Intel got to make the major decisions, and thus the "x86 everywhere" nonsense. I mean, they got StrongARM from DEC and sold it to Marvell, only to miss out on the iPhone train. Their handling of the x86 ISA is also not very good. Too much artificial fragmentation (exhibit A: AVX10). AMD seems to be doing much better in this regard lately.
I don't know how powerful the "x86 gang" still is (if such a mentality still exists), but Intel really needs to get rid of it. Obviously they have much bigger problems today, but they still need to produce compelling products to keep their fabs running.
I'm not so sure if they truly had an inherent unhealthy obsession with x86 or if it was just simply more out of convenience to reuse their "core IPs" ...
Also, Intel never really wanted to keep their XScale division, since it was an artifact of their lawsuit settlement with DEC and the division was losing money for the business too. I don't think Apple intended to have a long-term partnership with Intel either when they acquired P.A. Semi just a year after the launch of the iPhone, so it's not as much of a loss as everyone believes it to be, especially when Apple has a history of fostering "toxic relationships" with many of their suppliers ... (Apple's autonomous EV project ended up nowhere for that very good reason.)
In any case, Intel wouldn't have had much success convincing Apple to continue an expanded relationship without taking more losses ...
Even if their strategy served as the direct reason why they offloaded their XScale division, losing money on operating it was a good enough reason to eventually get rid of it ...

Intel did that, for example:
https://www.techcentral.ie/intel-sees-x86-everywhere-in-future/ (from 2008)
https://arstechnica.com/gadgets/2013/01/intel-at-ces-more-performance-less-power-and-x86-everywhere/ (from 2013)
The thing about XScale is not whether it was losing money at the time; they got rid of it because it's not x86. Intel did want to get into the mobile market (Apple or not), but since they had nothing else, all they could do was push an x86 CPU that was not power efficient compared to the ARM-based CPUs of the time. It might have been possible to make an x86 CPU competitive in terms of power efficiency, but at the time Intel couldn't do that. Even to this day, ARM-based laptops are more power efficient than x86 ones (although neither Intel nor AMD was really trying to make that a target).
IMHO Intel should just do whatever they can to grab as much of the market as possible. I mean, of course there's no guarantee of success. Even if Intel had done an XScale-based ARM SoC, that doesn't mean they could have taken the mobile market, but with Atom they certainly wouldn't. It's like wearing lead shoes to a 100 m sprint. They practically sabotaged themselves.
I think it's the same with AI. When NVIDIA started doing CUDA, Intel wasn't paying attention. It's of course possible to add a wide vector instruction set to x86 (probably not the best idea, but anyway), yet they weren't doing that. AVX-512 was first proposed in 2013 and implemented in 2016, almost 10 years later than CUDA. And Intel made it available only on server and workstation CPUs, so the first consumer CPU with AVX-512 (Cannon Lake) shipped two years later, in 2018. (IMHO making a new technology available on consumer-grade hardware is very important, as it's much easier for students to try it at home.)
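For a sense of what that looks like on the x86 side (my own sketch, not from the post; the function name is made up and it assumes an AVX-512F machine and a -mavx512f build), here's a SAXPY-style loop with AVX-512 intrinsics, the kind of throughput code students could only try at home once consumer chips shipped the feature:

```cpp
// SAXPY (y = a*x + y) with AVX-512F intrinsics: 16 floats per iteration plus a
// scalar tail. Build with -mavx512f; it will fault on CPUs without AVX-512.
// Illustrative sketch only.
#include <immintrin.h>
#include <cstddef>

void saxpy_avx512(float a, const float* x, float* y, std::size_t n) {
    const __m512 va = _mm512_set1_ps(a);          // broadcast the scalar a
    std::size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 vx = _mm512_loadu_ps(x + i);       // 16 floats from x
        __m512 vy = _mm512_loadu_ps(y + i);       // 16 floats from y
        vy = _mm512_fmadd_ps(va, vx, vy);         // fused multiply-add
        _mm512_storeu_ps(y + i, vy);
    }
    for (; i < n; ++i) {                          // scalar tail
        y[i] = a * x[i] + y[i];
    }
}
```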
I also remember Intel's so-called IDM 2.0, their plan to enter the foundry business, with one of the focuses being making their x86 IP available to foundry customers. But who wants x86 IP? Who wants to put an x86 CPU inside their SoC? It makes very little sense.
Anyway, these are all old stories. Today, even if Intel has something good with their fabs, they still need a lot of chips to fill those fabs. Their CPUs alone are already not enough (note that even before, Intel used TSMC or other fabs to make its non-CPU chips). They'll have to find foundry customers, or have some compelling products other than their CPUs (e.g. AI chips), or both.