Intel ARC GPUs, Xe Architecture for dGPUs [2018-2022]

Q1 2022.
Right around the corner.

And they're now showing gaming "video teasers", about half a year before release.
Sounds like Raja alright.


Good to see Ryan is still alive, though it seems Anandtech video card reviews are dead and buried.

IIRC Ryan's home was affected by the 2020 wildfires, but did Anandtech ever make any official statement about why they suddenly stopped publishing substantial GPU coverage, or whether they'll resume it someday?
 
If these do come in Q1 2022, it will be months until the next gen from Nvidia or AMD launches, so they will get to compete against current-gen cards in initial reviews. It will be interesting to see if they can actually match current-gen parts. I have my doubts that a 512 EU part is as good as people think it is; it just doesn't add up to much looking at current DG1 performance, unless they really amped up clocks or changed the architecture significantly and haven't discussed it at all.

I would be pleasantly surprised if it does perform better than an RTX 3070.
 
If these do come in Q1 2022, it will be months until the next gen from Nvidia or AMD launches, so they will get to compete against current-gen cards in initial reviews. It will be interesting to see if they can actually match current-gen parts. I have my doubts that a 512 EU part is as good as people think it is; it just doesn't add up to much looking at current DG1 performance, unless they really amped up clocks or changed the architecture significantly and haven't discussed it at all.

I would be pleasantly surprised if it does perform better than an RTX 3070.
When Nvidia released the 750 Ti, its performance didn't exactly portend a great future for Maxwell. It was efficient, but was barely faster than the 650 Ti. Yet obviously when the bigger Maxwell GPUs finally did hit later in the year, they were performance monsters.

DG1 never felt like a serious attempt at a GPU. It was pretty obviously a pipe-cleaner product and shouldn't be taken as representative of what more developed, scaled-up Intel GPUs will be like.
 
I'm just not sure how they can compete on supply, because it will use TSMC, and TSMC is already overloaded.
 
It was efficient, but was barely faster than the 650 Ti.
One third faster on average. And that was compared to the previous generation, not to three-year-old models as is common today. In some cases it was even more than 50% faster; in CoH2 up to 73% (which I just looked up to refresh my memory).
--

An important factor in Xe architecture performance is power. In the IGP space, it had a very limited power budget and shared bandwidth with the CPU cores. Those constraints are practically lifted on a dedicated gaming card (not DG1), which would be free to go up to 225 W realistically, or 300 W without any major uprising from gamers.
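
To put rough numbers on that power argument, here's a quick Python sketch. The wattages are assumptions for illustration only (a guessed iGPU power share, plus the 225 W / 300 W board targets mentioned above), not Intel figures:

# Back-of-envelope: how much more power budget a discrete card has vs. an iGPU.
# All wattages are assumptions for illustration, not official Intel specs.
igpu_gpu_share_w = 15        # guessed share of a ~28 W mobile package the Xe iGPU gets under load
dgpu_targets_w = [225, 300]  # board-power targets mentioned above

for target_w in dgpu_targets_w:
    print(f"{target_w} W card ~ {target_w / igpu_gpu_share_w:.0f}x the assumed iGPU power share")

# Performance doesn't scale anywhere near linearly with power, so this only bounds the headroom;
# it doesn't predict actual DG2 performance.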
 
An important factor in Xe architecture performance is power. In the IGP space, it had a very limited power budget and shared bandwidth with the CPU cores. Those constraints are practically lifted on a dedicated gaming card (not DG1), which would be free to go up to 225 W realistically, or 300 W without any major uprising from gamers.
The discrete DG1 probably isn't power or bandwidth limited and performance was still pretty meh; scale it all the way up to 512 EUs and suddenly it's going to be like 400 W, and performance still isn't that great compared to the competition.
 
The discrete DG1 probably isn't power or bandwidth limited and performance was still pretty meh; scale it all the way up to 512 EUs and suddenly it's going to be like 400 W, and performance still isn't that great compared to the competition.
DG1 was never meant to be discrete outside laptops.
 
DG1 was never meant to be discrete outside laptops.
Yes, but it gives a baseline of what kind of performance/EU or performance/watt to expect. Unless Intel does a massive boost, 512 EUs vs. the 96 in DG1 is only a 5.3x increase.

Even if it wasn't designed to be a great performer, DG1 performs poorly compared to the GT 1030 GDDR5 (which is based on Pascal, not even Turing) and AMD's current integrated Vega APUs (not RDNA 1 or 2). There would need to be major changes for it to be competitive even against current-gen GPUs.

5x a GT 1030 is roughly a GTX 1660 Super, so just scaling the EUs up clearly isn't enough. An RTX 3070 offers over 10x the performance of DG1, so DG2 would need about 2x the performance per EU of DG1 to compete with it.
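
To make that arithmetic explicit, here's a quick Python sketch. It just encodes the loose relative-performance figures from this post (DG1 roughly GT 1030 class, RTX 3070 roughly 10x DG1); they are rough estimates, not benchmark data:

# Back-of-envelope EU scaling for DG2, using rough relative-performance figures from this thread.
dg1_eus, dg2_eus = 96, 512
eu_scaling = dg2_eus / dg1_eus                 # ~5.3x more EUs

dg1_perf = 1.0                                 # normalize DG1 (roughly GT 1030 class) to 1.0
rtx_3070_perf = 10.0                           # "over 10x" DG1, per the estimate above

dg2_perf_flat_per_eu = dg1_perf * eu_scaling   # ~5.3x if perf/EU stays flat (roughly GTX 1660 Super class)
needed_per_eu_gain = rtx_3070_perf / dg2_perf_flat_per_eu

print(f"EU scaling alone: {dg2_perf_flat_per_eu:.1f}x DG1")
print(f"Per-EU gain (clocks + architecture) needed for 3070-class: {needed_per_eu_gain:.1f}x")
# Prints ~5.3x and ~1.9x, i.e. the "about 2x the performance per EU" figure above.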
 
The discrete DG1 probably isn't power or bandwidth limited and performance was still pretty meh; scale it all the way up to 512 EUs and suddenly it's going to be like 400 W, and performance still isn't that great compared to the competition.
Discrete DG1 is Xe LP; DG2 will be Xe HPG. Those could use very differently specialized transistors (LP vs. HP libraries, for example) built with different characteristics.
And yes, everything about DG1 was meh, I agree.
 
There is almost no way Intel will offer anything of value to gamers in the discrete market. Even if they do produce competitive hardware, which I find very unlikely given their inability to do so in the CPU space, the software side is a massive undertaking that Nvidia and AMD have had 20+ years to work on.
 
There is almost no way Intel will offer anything of value to gamers in the discrete market. Even if they do produce competitive hardware, which I find very unlikely given their inability to do so in the CPU space, the software side is a massive undertaking that Nvidia and AMD have had 20+ years to work on.

Haven’t Intel’s IGP drivers been pretty good for the past few years?
 
A, B, C, D... 10 points for creativity.

Yet unannounced: Exorcist?
:mrgreen: Necromancer, Amazon, Sorceress, Barbarian, Assassin, etc.
If those numbers are real, that's impressive stuff, especially with RT on. A bit meaningless without more data, though.

What intrigues me the most is the upscaling method they are going to use; that neural supersampling sounds exciting, especially the comparisons with DLSS and FSR. But much like AMD, Intel tends to publish open drivers.
 