The Intel Execution in [2023]

Status
Not open for further replies.
Looks like Intel dropped the A750 down to $200. I think it's amazing value for the money, and hopefully it forces more competition.
I mean, the card is about on par with the RX 6600 (not the XT) at 1080p in modern games (which means it does considerably worse in older ones).
And you can find an RX 6600 at $200 rather easily.
So it looks more like AMD has forced Intel to drop the A750's price.
 
If Intel wants to change things, they may need to pare back the dividend and take losses to ensure people use their foundry services. They have dabbled in foundry before and never really committed. I think the perception that they are a competitor in CPUs, GPUs, etc. hurts them, but Samsung seems to have made that jump.


I hope they don't give up on GPUs either. More competing companies is always a win for consumers, and I'm tempted to try one of their GPUs if they can make them a little faster and more consistent.
 
The rumored Meteor Lake iGPU would eat entry-level dGPUs. IIRC it was rumored to have the same EU count as the Arc A300 series.
 
The rumored Meteor Lake iGPU would eat entry-level dGPUs. IIRC it was rumored to have the same EU count as the Arc A300 series.
As in the A380? I doubt that is good enough unless they make other improvements.
The A380 struggled against the RX 6500 XT, so I don't think it can match the Ryzen 7040's 3 GHz iGPU.
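Rough peak-throughput arithmetic suggests the comparison is closer than that, at least on paper. A quick sketch, assuming spec-sheet figures (Arc A380: 8 Xe cores of 128 FP32 lanes at ~2.0 GHz; Radeon 780M in the Ryzen 7040: 12 CUs of 64 lanes at ~2.9 GHz):

```python
# Peak FP32 comparison using assumed spec-sheet figures.
# Real-world performance depends heavily on drivers and memory bandwidth.
def peak_tflops(shaders, ghz, ops_per_clock=2):  # 2 ops = fused multiply-add
    return shaders * ghz * ops_per_clock / 1000

a380 = peak_tflops(8 * 128, 2.0)   # Arc A380: ~4.1 TFLOPS
igpu_780m = peak_tflops(12 * 64, 2.9)  # Radeon 780M: ~4.5 TFLOPS
print(f"A380: {a380:.1f} TFLOPS, 780M: {igpu_780m:.1f} TFLOPS")
```

Note the 780M's RDNA 3 dual-issue units can double the paper number further, so on raw throughput the iGPU is not obviously outclassed.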
 
https://www.anandtech.com/show/18894/intel-details-powervia-tech-backside-power-on-schedule-for-2024



PowerVia In Practice: Intel Finds 30% Reduction in IR Droop, 6% Higher Clockspeeds, Ready for HVM




https://videocardz.com/press-releas...formance-gain-on-meteor-lake-e-core-test-chip

Intel PowerVia technology shows 6% performance gain on Meteor Lake E-core test chip

 
I still find it very hard to believe that they're going to be able to release Arrow Lake on 20A before the end of next year. If they can, and basically go from one major node leap in a consumer product to another major node leap in a consumer product in a single year, that would be extremely impressive. But nothing Intel has done lately gives me confidence they can deliver that kind of rapid advancement.

And if they can't do this, it raises the question of what on earth Intel can do for desktops next year...
 
Found this an interesting perspective on Intel and their GPU market from hardware reviewers.
I doubt that Intel feels comfortable selling a GPU that is about twice as complex as the RX 6600 at the same price.
They may be gaining some market share with this, but it's likely costing them losses.

Also, a recent JPR report shows that Intel has actually taken about 2% of market share from Nvidia, not AMD. So that thumbnail is nothing more than clickbait.
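The "twice as complex" point above can be made concrete with rough die-size arithmetic. A sketch, assuming publicly reported die sizes (ACM-G10 in the A750 at ~406 mm² on TSMC N6, Navi 23 in the RX 6600 at ~237 mm² on N7) and ignoring yield and per-node wafer pricing:

```python
# Crude dies-per-wafer estimate: usable wafer area / die area.
# Ignores edge loss, defect yield, and N6-vs-N7 wafer cost differences.
import math

WAFER_AREA = math.pi * (300 / 2) ** 2  # 300 mm wafer, in mm^2 (~70,686)

def dies_per_wafer(die_mm2):
    return WAFER_AREA // die_mm2

acm_g10 = dies_per_wafer(406)  # Arc A750/A770 die (assumed size)
navi_23 = dies_per_wafer(237)  # RX 6600 die (assumed size)
print(acm_g10, navi_23)
```

By this crude measure AMD gets roughly 1.7x as many candidate dies per wafer, so at matching retail prices Intel's silicon cost per card is substantially higher.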
 
Also, a recent JPR report shows that Intel has actually taken about 2% of market share from Nvidia, not AMD. So that thumbnail is nothing more than clickbait.
Have they measured that current Intel owners overwhelmingly owned Nvidia cards previously? And in a significantly greater percentage than Nvidia's actual market share? Otherwise the concept of vendor X taking market share from vendor Y is purely fictional.

Even so, considering again that Nvidia is the market leader, and that a video caption is not supposed to be a paragon of statistical accuracy, I find the description appropriate. Hyperbolic, but appropriate.
 
June 22, 2023
Aurora is expected to secure the top spot on the Top500 supercomputer list once tests are complete.
Argonne National Laboratory and Intel said on Thursday that they had installed all 10,624 blades for the Aurora supercomputer, a machine announced back in 2015 with a particularly bumpy history. The system promises to deliver a peak theoretical compute performance of over 2 FP64 ExaFLOPS using its array of tens of thousands of Xeon Max 'Sapphire Rapids' CPUs with on-package HBM2E memory, as well as Data Center GPU Max 'Ponte Vecchio' compute GPUs. The system will come online later this year.
...
The machine is powered by 21,248 general-purpose processors with over 1.1 million cores for workloads that require traditional CPU horsepower and 63,744 compute GPUs that will serve AI and HPC workloads. On the memory side, Aurora has 1.36 PB of on-package HBM2E memory and 19.9 PB of DDR5 memory used by the CPUs, as well as 8.16 PB of HBM2E carried by the Ponte Vecchio compute GPUs.
...
The installation of the Aurora supercomputer marks several milestones: it is the industry's first supercomputer with performance higher than 2 ExaFLOPS and the first Intel-based ExaFLOPS-class machine. Finally, it marks the conclusion of the Aurora saga that began eight years ago, a journey that has seen its fair share of bumps.
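The headline figures check out arithmetically. A quick sanity check, assuming the publicly stated blade layout of two Xeon Max CPUs and six Ponte Vecchio GPUs per blade, with 64 GB of HBM2E per CPU and 128 GB per GPU (per-package capacities from Intel's product pages):

```python
# Verify the Aurora totals from the per-blade configuration (assumed layout).
blades = 10_624
cpus = blades * 2   # 2 Xeon Max CPUs per blade -> 21,248
gpus = blades * 6   # 6 Ponte Vecchio GPUs per blade -> 63,744

cpu_hbm_pb = cpus * 64 / 1_000_000   # 64 GB HBM2E per CPU, in PB
gpu_hbm_pb = gpus * 128 / 1_000_000  # 128 GB HBM2E per GPU, in PB
print(cpus, gpus)                                   # 21248 63744
print(round(cpu_hbm_pb, 2), round(gpu_hbm_pb, 2))   # 1.36 8.16
```

Both HBM2E totals land exactly on the 1.36 PB and 8.16 PB quoted in the article.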
 

welp there goes any chance of me buying an intel gpu

 
So you don't own any Nvidia GPUs and will never buy one?
Are they no longer optional? Or is the checkbox in the installer misleading? (It's just a checkbox for the installer's telemetry, not for the driver that will be installed.)

Btw, you can still use the Nvidia driver without the Nvidia Control Panel and GeForce Experience, right?

Edit:

LOL, I forgot that Nvidia has included an nvngx.dll auto-updater in their driver.

So yeah, other than blocking it with firewalls, you can't use it without being spied on.

Edit2:
It's AMD that has the telemetry checkbox in the installer, not Nvidia.
 
https://www.hardwaretimes.com/intel...ith-rentable-unit-cores-hyper-threading-axed/

16th Gen Panther Lake, leveraging the Cougar Cove core, will use Rentable Unit cores, offering a 40% IPC uplift over an earlier-designed 4-way hyper-threaded model. According to Moore's Law is Dead, RUs are something akin to addressable cores, packed in groups of two, each with a chunk of SRAM and sharing MLC per unit. These cores are grouped into units and take up notably less space. However, they can be boosted (four of them together) to incredible levels, providing a substantial IPC boost in lightly threaded workloads. As always, take rumors like this with a grain of salt.

Sounds like the original idea of Andy Glew's speculative multithreading for Bulldozer ( here ) and CMT.
 
The RU concept would provide an IPC boost in *lightly threaded* workloads, not highly threaded ones, since it is based on improving single-thread performance by borrowing processing units from "sister cores". A highly threaded workload would thus see far less of an IPC boost, but would theoretically gain from the number of such "sister cores", which is supposedly high (you get four small cores instead of one high-performance core).

This is an evolution of P+E cores model but now a "virtual P core" is 2-4 E-cores all working on a single thread each clock. So such a CPU with 16 cores would be able to run up to 4 threads considerably faster than a standard CPU would.

The concept is old but maybe it will get implemented now.
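The trade-off described above can be sketched as a toy throughput model. Everything here is illustrative only: the real design is unconfirmed rumor, and the 0.6 per-extra-core scaling factor is an arbitrary assumption standing in for diminishing returns when fusing cores onto one thread.

```python
# Toy model of the Rentable Unit idea (purely speculative numbers).
def fused_ipc(base_ipc, k, scaling=0.6):
    # 1 core gives base IPC; each extra fused core adds a discounted share
    return base_ipc * (1 + scaling * (k - 1))

def chip_throughput(cores, threads, base_ipc=1.0):
    if threads >= cores:             # fully loaded: one thread per core
        return cores * base_ipc
    k = min(4, cores // threads)     # fuse up to 4 sister cores per thread
    return threads * fused_ipc(base_ipc, k)

# 16 small cores, 4 threads: each thread gets a 4-wide "virtual P-core",
# well above the 4.0 that 4 plain cores would deliver.
print(chip_throughput(16, 4))
print(chip_throughput(16, 16))  # fully threaded: plain 16-core throughput
```

The shape matches the post: big single-thread gains when thread count is low, and ordinary many-core throughput when every core is busy.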
 