Intel ARC GPUs, Xe Architecture for dGPUs [2018-2022]

They've always been capable of playing some oldies. I remember playing with the i810 and being fairly satisfied with what it could do. ;) Sandy Bridge was when things started to get interesting.

I suppose I'm most interested in how they behave with SteamVR. Nvidia has a lot of VR functionality. Some games support DLSS, and they have VRSS foveated rendering capability too, which might become very important. AMD has much less going for it. I don't know if Intel has anything going for VR.
A laptop I purchased in 2011 -with an i5-2500H or something like that- had a Sandy Bridge GPU and I was soooo hyped. Especially with the Miracast feature, so I could use my Miracast-capable TV with my laptop without cables. It turns out that my particular Intel HD 3000 Graphics GPU wasn't Miracast compatible -iirc, some HD 3000 parts were compatible, and the HD 4000 was fully compatible-.

Your point makes sense. It was after Sandy Bridge that I could complete The Witcher 1 on my laptop, and I loved the feeling of having a computer I could carry around and play good games on.

Also, Diablo 3 was playable on the HD 3000. I purchased it day one, and while it rarely hit 720p at 60 fps even at the lowest possible details, the feeling of having your humble laptop running an AAA game was kinda special. The less you've got, the more you come to recognize the true worth of things. :smile2:

Did you really have the Intel 740 from 1998? Isn't that the only dedicated GPU they'd made before this?
Did you have the Intel 740? I didn't tbh; it was only when I started buying laptops that I began to use Intel GPUs.

The first laptop I ever got (2005) had some GMA-something GPU, which was REALLY bad for gaming but well, it worked. I had several laptops with integrated Intel GMA solutions until 2011, but I didn't complain; that was the period of my life (2005 'til 2015) when I played consoles the most 'cos my laptops couldn't keep up, and I prefer laptops to desktop computers.
 
Intel is really stirring the pot with their more accurate definition of a core. How are we supposed to compare with the other guys, where every SIMD lane is a “core”?

Still HDMI 2.0. I hope the entry-level desktop cards are 2.1.
What do you mean by comparing to the other guys where every SIMD lane is a “core”? Could you elaborate? Just curious...

Now that you mention it I wonder how Intel teraflops are going to compare to nVidia and AMD teraflops.

Teraflops numbers aside, something where Intel is light years ahead of the competition is their GPU control panel. It looks nice, it's fast, it's compact, and it looks really professional compared to the Nvidia Control Panel -a horrible, slow design I suffer every day, and GeForce Experience isn't much better- and AMD's designs -I purchased an RX 570 back in 2017 and also built 3 mining rigs with Vega GPUs for people who asked me to back then, and that red background colour plus the overall design was poor, although slightly better and faster than Nvidia's-.

Kinda miss that Intel HD Graphics option in the context menu that appears when you right-click on the desktop. It was simple, it was fast, and the interface was gorgeous, without being the Sistine Chapel, imho.
 
This method accounts for the change in architecture inside each GPU SM/Core.

Perhaps we should borrow from the CPU guys and introduce the concepts of cores and threads. Where a 3090Ti is composed of 84 cores, with each core having 128 threads.

That would actually make more sense. There are some other interesting architecture features that aren’t exposed in the marketing. An Xe core is configured as 16x256-bit, an Nvidia SM as 4x2x512-bit, an AMD CU as 2x1024-bit, etc.
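
To put rough numbers on that, here is a quick Python sketch using the vector-width figures quoted above; the helper name and layout are mine, not any vendor's terminology:

FP32_BITS = 32

def fp32_lanes_per_core(simd_units, simd_width_bits):
    """FP32 lanes packed into one 'core' (Xe core / SM / CU)."""
    return simd_units * simd_width_bits // FP32_BITS

xe_core   = fp32_lanes_per_core(16, 256)     # Intel Xe core: 16 x 256-bit    -> 128 lanes
ampere_sm = fp32_lanes_per_core(4 * 2, 512)  # Nvidia SM:     4 x 2 x 512-bit -> 128 lanes
rdna_cu   = fp32_lanes_per_core(2, 1024)     # AMD CU:        2 x 1024-bit    -> 64 lanes

# The "cores and threads" framing: an RTX 3090 Ti with 84 SMs
print(84, "cores x", ampere_sm, "threads =", 84 * ampere_sm, "FP32 lanes")   # 10752

All three vendors land in the 64-128 FP32 lanes per "core" range; the marketing just picks a different level to stop counting at.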
 
... Intel 740...
Despite all the vitriol that it gets, I loved the Intel 740 for what it was -- an affordable 3D accelerator that was actually available in the country where I grew up. Cards based on Nvidia and 3dfx chips were exorbitantly expensive due to the purchasing power differential in a developing country, plus markups and import duties. The only two realistic options for us at that time were the SiS 6326 and the Intel 740.

As far as I remember the SiS 6326 didn't really "accelerate" much, but I think it did provide texture filtering. As a teenager who grew up looking at blocky software textures, seeing texture filtering in action on the Half Life: Uplink demo was mind-boggling. When the game first loaded up I thought it was stuck on a pre-rendered loading screen. I distinctly remember thinking that the game had hung because it was taking so long to load! When I moved the mouse and realized the filtered textures on the walls were being rendered in real-time, it was... one of those defining moments that you remember all your life. All that from the sorry little SiS. The Intel 740 was a huge step up in comparison.

I believe the i810 was just an integrated version of the 740. I remember it being plagued by horrible driver (and/or performance?) problems that were never an issue with the 740. The problems did not go away over multiple chipset generations.
 
Did you have the Intel 740? I didn't tbh; it was only when I started buying laptops that I began to use Intel GPUs.

The first laptop I ever got (2005) had some GMA-something GPU, which was REALLY bad for gaming but well, it worked. I had several laptops with integrated Intel GMA solutions until 2011, but I didn't complain; that was the period of my life (2005 'til 2015) when I played consoles the most 'cos my laptops couldn't keep up, and I prefer laptops to desktop computers.

Back then I had a G200 together with the Voodoo 2. A year later the Matrox G400 Max. In 2000 it was the first Radeon 64MB DDR VIVO AGP (R100), then Radeon 8500, Radeon 9700 PRO, Radeon X800 XT, Radeon 4890, Radeon 5870 ... how time flies!
 
What do you mean by comparing to the other guys where every SIMD lane is a “core”? Could you elaborate? Just curious...

Nvidia and AMD market their cards based on the number of FP32 instructions per clock. So the 3070 Ti has 6144 “cores”. In Intel speak that would be 48 cores with 8 x 512-bit vector ALUs each.

Intel’s version is more accurate but not as helpful for marketing.
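
Worked through explicitly, a small sketch with the figures from this post (the variable names are just for illustration):

marketed_cores = 6144        # RTX 3070 Ti "CUDA cores", i.e. FP32 lanes
alus_per_core  = 8           # vector ALUs per Intel-style core
lanes_per_alu  = 512 // 32   # a 512-bit ALU holds 16 FP32 lanes

intel_style_cores = marketed_cores // (alus_per_core * lanes_per_alu)
print(intel_style_cores)     # 48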
 
Nvidia and AMD market their cards based on the number of FP32 instructions per clock. So the 3070 Ti has 6144 “cores”. In Intel speak that would be 48 cores with 8 x 512-bit vector ALUs each.

Intel’s version is more accurate but not as helpful for marketing.

Even Intel's terminology of what a "core" means is arguably spurious from the standpoint of CPUs. An "Xe core" is likely not an individual unit of control flow. On AMD hardware, a unit of control flow is defined per vector unit on the CUs. Without delving too much deeper, a unit of control flow could possibly be defined as either per vector engine or per pair of vector engines, which might be closer to the traditional concept of a "core"...
 
Limited Edition? Is it going to be something like the GeForce Founders Edition, where you pay more for the privilege of getting the card earlier, or is it going to be worse?
 
The reviewer points out graphics glitches in some games, and that some games do not work at all (such as Forza Horizon 5 and COD Cold War).
In gaming tests, the reviewer notices a few anomalies, such as relatively low GPU usage across the tested games and a not very stable GPU clock. This is especially visible in PUBG, which stutters a lot even though the core frequency stays at 2.2 GHz.
In the 3DMark tests, the Arc A350M is indeed faster than the NVIDIA GeForce MX450 with a 25W TDP, but visibly slower than the GTX 1650.
https://videocardz.com/newz/intel-a...sted-slower-than-gtx-1650-up-to-2-2-ghz-clock
 
That laptop is close to what I want for my future computer, a super light gaming laptop, although I'd prefer another, more capable Intel GPU -still waiting to see whether they deliver or not-.

AMD has shared a comparison of their RX 6500M vs the Arc A370M -the one with a 35W to 50W TDP-

AMD Posts Intel Arc A370M vs. Radeon RX 6500M Benchmarks | Tom's Hardware (tomshardware.com)
 
For those not familiar with Siru, it was founded by Kaj & Mika Tuomi and Mikko Alho.
Kaj & Mika have been in "the scene" since the Future Crew days, after which they founded BitBoys, which was eventually sold to ATi and later sold by AMD to Qualcomm as part of the Imageon group.
The Tuomi brothers left Qualcomm in 2010 and asked Alho to join them from Qualcomm as well to found Siru. Alho had been with them at least since the BitBoys days.

As for Siru products, their IP is in customers' production hardware, but their customers aren't public information.
 