The Intel Execution in [2024]

Intel has gone through some hard times as of late, but they have interesting things like vPro and now 18A, a process node Microsoft is interested in and has signed a deal with Intel to use.

 
I'm not surprised that TR7K is selling much better than the Xeons. But for R7K vs. Core-i I'd have thought AMD would do better.
 
The Mindfactory numbers frequently published on X indicate (though they don't prove) that R7K is doing better in retail.
 
I'm not surprised that TR7K is selling much better than the Xeons. But for R7K vs. Core-i I'd have thought AMD would do better.
It does in the DIY market, but OEMs/prebuilts are still overwhelmingly Intel rather than AMD, TR or not.
 
That's a training benchmark btw. The Wccftech writer only mentions this in the fifth paragraph for some reason.

And as Techpowerup uses Wccftech as their source for the same story and doesn't read the original source article, they manage to misunderstand it and claim that the benchmark in question is an inference benchmark. :oops:

 
Great news that the competition is starting to wake up, because a monopoly in the AI department is certainly not good for the future of the industry or for developing a variety of ideas and solutions. Also, this way Nvidia might start caring about their traditional GPUs too.

On a different note....


Also, there is this article showing how a chip is made. The most impressive part is the interactive real-time zoom where you can compare the size of a human hair, a red blood cell, a virus, pollen, etc. to a chip transistor. It's very well done.

 
That's a training benchmark btw. The Wccftech writer only mentions this in the fifth paragraph for some reason.

And as Techpowerup uses Wccftech as their source for the same story and doesn't read the original source article, they manage to misunderstand it and claim that the benchmark in question is an inference benchmark. :oops:

The original source (Stability AI) has both training and inference results.
 
The original source (Stability AI) has both training and inference results.
Yes, but TPU is citing training results as generation aka inference results:

With 2 nodes, 16 accelerators, and a constant batch size of 16 per accelerator (256 in all), the Intel Gaudi2 array is able to generate 927 images per second

StabilityAI in the original article:

Keeping the batch size constant at 16 per accelerator, this Gaudi 2 system processed 927 training images per second
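For anyone following along, a quick sanity check of those numbers (a minimal sketch; the figures come straight from the quoted posts, and the variable names are just illustrative): the "256 in all" is simply 16 accelerators × 16 per accelerator, and the 927 figure is training throughput, not image generation.

```python
# Sanity-check arithmetic for the Gaudi 2 training throughput quoted above.
# All input figures come from the quoted posts; the derived values are plain arithmetic.

total_accelerators = 16        # "2 nodes, 16 accelerators"
batch_per_accelerator = 16     # "constant batch size of 16 per accelerator"
images_per_second = 927        # reported *training* images/s, not inference

global_batch = total_accelerators * batch_per_accelerator      # 256, matching "(256 in all)"
per_accelerator_throughput = images_per_second / total_accelerators

print(global_batch)                          # 256
print(round(per_accelerator_throughput, 1))  # ~57.9 training images/s per accelerator
```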
 
Would've liked to have seen Battlemage by now. We're entering Q2 soon; by the time it releases it could be close to next-gen AMD & Nvidia.
Also hope their XeSS frame generation is coming along nicely, but it would've been nice to see it (if progress is looking good).
Not heard anything about either.
 