Intel ARC GPUs, Xe Architecture for dGPUs [2018-2022]

So Intel seems to have a very strong ray tracing engine indeed, maybe even better than NVIDIA's on a theoretical level (Intel Level 4 RT vs NVIDIA Level 3 RT?). In actual games, though, that theoretical RT prowess doesn't show: Intel is throwing their top-of-the-line A770 against NVIDIA's midrange RTX 3060, a matchup in which the A770 averages out only 14% faster, with heavy RT titles showing about a 30% advantage for the A770.
 
Both impressed and not with that? The RT hardware itself looks very good for a first gen, and drivers are definitely holding it back. However, it's 21.7b transistors and 406 mm² on TSMC N6 vs the 3060/GA106's 13.25b transistors and 276 mm² on Samsung 8nm; even the 3070 Ti/GA104 is smaller at 17.4b and 392.5 mm². There are significant wins in Dying Light 2, Metro Exodus and Control vs the 3060, which are RT-demanding, and the synthetic benchmark scores are big, but again it's their top end against the much smaller GA106, and their competitors will be releasing next gen very soon, which makes it a very different game.

Overall, impressive first-gen RT hardware with hamstrung/underwhelming raster that should've released in 2021? Hope they keep making significant gains everywhere and Battlemage releases properly in 2023 with much improved drivers.
 
So Intel seems to have a very strong ray tracing engine indeed, maybe even better than NVIDIA's on a theoretical level (Intel Level 4 RT vs NVIDIA Level 3 RT?). In actual games, though, that theoretical RT prowess doesn't show: Intel is throwing their top-of-the-line A770 against NVIDIA's midrange RTX 3060, a matchup in which the A770 averages out only 14% faster, with heavy RT titles showing about a 30% advantage for the A770.
I find it a bit disingenuous to use language like "Intel's top of the line against the competitor's midrange offering" when they share the same performance bracket but not the same price point.

Isn't that the entire point of these cards that you get better performance at cheaper prices?
 
I find it a bit disingenuous to use language like "Intel's top of the line against the competitor's midrange offering" when they share the same performance bracket but not the same price point.

Isn't that the entire point of these cards that you get better performance at cheaper prices?
It depends on why the card is cheap. Bulldozer had better multi-threaded performance than its more expensive Sandy Bridge competition, yet it's still considered a disaster. It nearly bankrupted AMD because it had to be priced so low relative to its production cost to actually move any units.

In this case it's a 50% larger die than the 3060's, on a significantly better (and more expensive) node, with a wider memory bus and higher power draw. The production cost is probably about double that of a 3060, for a 9% average performance gain in first-party benchmarks (which Intel's flawed averaging methodology instead turns into 14%).
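For what it's worth, the gap between "9%" and "14%" is exactly the kind of thing an averaging choice can produce. A minimal sketch with made-up per-game ratios (illustrative only, not Intel's actual data): one outlier title drags an arithmetic mean of ratios well above the geometric mean, which is the usual way to average relative benchmark scores.

```python
import math

# Hypothetical per-game performance ratios (card A / card B).
# These numbers are illustrative, not Intel's benchmark data.
ratios = [2.00, 1.05, 1.00, 0.95, 0.90, 0.85]

# Arithmetic mean of ratios: the single outlier inflates the average.
arith = sum(ratios) / len(ratios)             # 1.125 -> "12.5% faster"

# Geometric mean: the standard way to average relative scores.
geo = math.prod(ratios) ** (1 / len(ratios))  # ~1.073 -> "7.3% faster"

print(f"arithmetic mean: {arith:.3f}, geometric mean: {geo:.3f}")
```

Same data, two headline numbers, which is why the averaging methodology matters when a vendor quotes "X% faster on average".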
 
when they share the same performance bracket but not the same price point.
When Intel wanted to show a competitor for the 3060 in raster performance, they brought the A750 (mid range), when they wanted to show the RT performance, they brought the A770 (high end), so which one is the true competitor of the 3060? The A750 or the A770?

It's Intel who is being dishonest here: the A770 was supposed to be at 3070/3060 Ti level in raster performance, and with that powerful RT engine the A770 should beat them in RT performance too, yet here it is sinking down to 3060 level! What gives?
 
It's Intel who is being dishonest here: the A770 was supposed to be at 3070/3060 Ti level in raster performance, and with that powerful RT engine the A770 should beat them in RT performance too, yet here it is sinking down to 3060 level! What gives?
Where exactly did Intel claim it's 3060 Ti/3070 level in raster?
 
When Intel wanted to show a competitor for the 3060 in raster performance, they brought the A750 (mid range), when they wanted to show the RT performance, they brought the A770 (high end), so which one is the true competitor of the 3060? The A750 or the A770?

It's Intel who is being dishonest here: the A770 was supposed to be at 3070/3060 Ti level in raster performance, and with that powerful RT engine the A770 should beat them in RT performance too, yet here it is sinking down to 3060 level! What gives?
They did have that graph that compared the A750 to the RTX 3060, so you would assume the A770 would compete at the next tier.
 
They did have that graph that compared the A750 to the RTX 3060, so you would assume the A770 would compete at the next tier.
You could assume that, but when the jump from the 3060 to the 3060 Ti is 29-33%, you could easily fit two cards in between, especially since Intel seems to have a hard time scaling at the moment (most likely due to drivers, seeing as they first thought they could get away with their iGPU driver base and realized relatively late that it's useless).
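The "two cards in between" point checks out arithmetically: splitting a ~30% multiplicative gap into three even steps gives roughly 9% per step, which is a plausible tier spacing. A quick sketch (the 30% figure is taken from the post above; the even spacing is an assumption):

```python
# If the 3060 -> 3060 Ti gap is ~30%, inserting two evenly spaced cards
# splits it into three multiplicative steps of 1.30 ** (1/3) each.
gap = 1.30
step = gap ** (1 / 3)   # ~1.091, i.e. ~9% per step

print(f"per-step gain with two cards in between: {(step - 1) * 100:.1f}%")
```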
 
I really like how Tom Petersen explains things and talks; he's pretty good at making complicated things "simple" and interesting. Now, it must be hard for him not to say "so, on paper we did everything right, but the performance sucks".
 
Where exactly did Intel claim it's 3060 Ti/3070 level in raster?
The further you go back, the further up the stack it appears Intel was really hoping to play, from their internal/partner slides.
They were definitely planning on their top end SKU going toe to toe with the 3070 at one point, but some months later the slides positioned the A770 ever so slightly below the 3060Ti line.
 

Attachments

  • Intel-Arc-Alchemist-DG2-XeHPG-Performance.jpg
  • Intel-A-Series-Desktop-GPU-Lineup.jpg
Gotta say I think Intel's messaging is really on point with this release. They are embracing the underdog position, acknowledging shortcomings, and being transparent about their challenges. I can't say either AMD or Nvidia has ever done those things. Tom Petersen is about the most perfect messenger too. He's been around forever and has a face and demeanor that's hard not to trust.

At the same time Intel is out front on promoting RT directly to gamers and seem to have the performance to back it up. Beating the 3060 is a huge deal for a first attempt and with the right price will surely turn some heads. They're hitting Nvidia where it hurts (RT/DLSS) and that's great for us because it means more awesome tech at more competitive prices.

The thread sorting unit is cool tech, but the results in the Intel sphere demo aren't super impressive. The A770 lost about 25% performance in hard mode and the 3060 lost about 35%. Doesn't seem like thread sorting is helping that much. Either way, thread sorting only works in DXR 1.0 mode, so Intel is presumably going to bang that drum even harder than Nvidia. Now let's see if AMD sticks to their DXR 1.1 tune or gets in line with RDNA3...
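One way to quantify the sphere-demo numbers quoted above: compare what each card retains in hard mode, assuming both start from the same baseline (an assumption; the demo's absolute frame rates aren't given here).

```python
# Performance retained in "hard mode", from the loss figures quoted above.
a770_retained = 1 - 0.25     # A770 keeps 75% of its performance
rtx3060_retained = 1 - 0.35  # 3060 keeps 65%

# If both cards started at parity, the relative standing after the switch
# is the ratio of what each retains.
relative_gain = a770_retained / rtx3060_retained  # ~1.154

print(f"relative advantage in hard mode: {(relative_gain - 1) * 100:.1f}%")
```

So under these assumptions the gap works out to roughly a 15% relative swing in that scene; whether that is "helping much" is the judgment call being debated.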
 
It's probably the best/only way to break into the market with a product that isn't fast enough and is buggy. Broadcast all the warm fuzzies we know and love from Intel Corp. I want them to do well now! Yeah, I dunno about all that lol, but it is nice to see them talking about the tech. The presentations are very slick (PowerPoint A team hehe) and he is great at explaining it all.
 
It depends on why the card is cheap. Bulldozer had better multi-threaded performance than its more expensive Sandy Bridge competition, yet it's still considered a disaster. It nearly bankrupted AMD because it had to be priced so low relative to its production cost to actually move any units.

In this case it's a 50% larger die than the 3060's, on a significantly better (and more expensive) node, with a wider memory bus and higher power draw. The production cost is probably about double that of a 3060, for a 9% average performance gain in first-party benchmarks (which Intel's flawed averaging methodology instead turns into 14%).
All that doesn't matter. It's cheaper and offers the same performance or slightly better performance. I specifically wrote that in the post you replied to.

Anytime power usage is brought up, it always ends in nearly the same reply: power usage on desktop doesn't matter, only performance and price. Once the drivers are sorted it will be an even better value with a more robust RT implementation.
 
Anytime power usage is brought up, it always ends in nearly the same reply: power usage on desktop doesn't matter, only performance and price. Once the drivers are sorted it will be an even better value with a more robust RT implementation.

It doesn't matter for some users and it matters for other users. Just dismissing concerns of some buyers over power consumption doesn't magically make them suddenly not care about power consumption. I don't have an Intel 12xxx CPU due to power consumption. I'd also never buy a 3090 or 6900 even if they only cost 5 dollars.

Regards,
SB
 
I don't think power usage matters for the majority of users within specific price/performance tiers. Performance/price will always be the primary factor even for those with limited budgets.
 
Nobody expects Intel to win on all fronts with their first discrete card in ages. They've got the features and the performance that most of the market cares about. They just need to fix their drivers and price it right. Power consumption is the least of their worries right now.
 
The thread sorting unit is cool tech, but the results in the Intel sphere demo aren't super impressive. The A770 lost about 25% performance in hard mode and the 3060 lost about 35%. Doesn't seem like thread sorting is helping that much. Either way, thread sorting only works in DXR 1.0 mode, so Intel is presumably going to bang that drum even harder than Nvidia. Now let's see if AMD sticks to their DXR 1.1 tune or gets in line with RDNA3...
It's a bit ironic to see people advocating for DXR 1.0, since the API uses the EXACT same PSO compilation model as the regular graphics pipeline, which is the source of the shader-compilation stutters in games for which many consider the model itself a failure. If end users and developers want more compilation stutters from the combinatorial explosion of different PSOs, then I guess it's the future they deserve...

DXR 1.0 is known as RTPSO for a reason ...
 
The further you go back, the further up the stack it appears Intel was really hoping to play, from their internal/partner slides.
They were definitely planning on their top end SKU going toe to toe with the 3070 at one point, but some months later the slides positioned the A770 ever so slightly below the 3060Ti line.
IIRC, at least some of those slides were refuted as fakes because some NVIDIA/AMD parts are way off their real performance relative to each other.
 
It's a bit ironic to see people advocating for DXR 1.0, since the API uses the EXACT same PSO compilation model as the regular graphics pipeline, which is the source of the shader-compilation stutters in games for which many consider the model itself a failure. If end users and developers want more compilation stutters from the combinatorial explosion of different PSOs, then I guess it's the future they deserve...

DXR 1.0 is known as RTPSO for a reason ...
It's not like 1.1 solves the issue...
 