Intel ARC GPUs, Xe Architecture for dGPUs [2018-2022]

Double the power draw of the NVIDIA chips they compared against, for being 15% faster? It probably won't even be faster when third-party tests come out.
And that's without even counting the TSMC 6nm full-node advantage. That's even worse than what I expected.
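As a back-of-the-envelope check of what those figures imply (a sketch assuming the roughly 2x power and 15%-faster numbers above, not measured data):

```python
# Rough perf-per-watt implied by the figures above (assumed, not measured)
power_ratio = 2.00   # Arc power draw vs. the compared NVIDIA chip
perf_ratio = 1.15    # Arc performance vs. the compared NVIDIA chip

perf_per_watt = perf_ratio / power_ratio
print(f"Relative perf/W: {perf_per_watt:.2f}x")  # ~0.58x, i.e. ~42% worse efficiency
```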
 
I'm still curious about the "big" desktop GPU, and to see whether it's a driver problem or a hardware one. Intel has been not bad with integrated GPU drivers for a while now, so I would be surprised if drivers were the main problem. For now, I believe it's another Raja GPU, so...
 
Which makes it a full node ahead of the 10nm-class Samsung 8LPP that Ampere is fabbed on.
Which makes the performance a little more embarrassing when you also consider that the RTX 3060 mobile it's being compared against is only full-fat GA106, with 13.25 billion transistors on a 276mm² die on the much less dense Samsung process.

The A770M is 21.7 billion transistors and 406mm² on TSMC N6 (!!!)
That's a 47% bigger die and 64% more transistors compared to GA106, on a process that's almost certainly more expensive per transistor, to say nothing of more expensive per die area. Yikes.

While we're picking on the poor A770M, it should also be noted that the RTX 3060 mobile gets by with a 192-bit memory bus, so only 3/4 the GDDR6 BOM cost, with some board area saved there too, versus the 256-bit bus on the A770M.
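For reference, a quick sanity check of those ratios (a minimal Python sketch using the figures quoted in the posts above; the numbers are the posters' claims, not independently verified):

```python
# Figures as quoted above (transistors in billions, die area in mm^2)
ga106 = {"transistors_b": 13.25, "die_mm2": 276, "bus_bits": 192}  # RTX 3060 mobile (Samsung 8LPP)
a770m = {"transistors_b": 21.7, "die_mm2": 406, "bus_bits": 256}   # Arc A770M (TSMC N6)

die_ratio = a770m["die_mm2"] / ga106["die_mm2"]               # ~1.47x -> "47% bigger die"
xtor_ratio = a770m["transistors_b"] / ga106["transistors_b"]  # ~1.64x -> "64% more transistors"
bus_ratio = ga106["bus_bits"] / a770m["bus_bits"]             # 0.75  -> "3/4 the GDDR6 BOM"

print(f"die area:    {die_ratio:.2f}x")
print(f"transistors: {xtor_ratio:.2f}x")
print(f"memory bus:  {bus_ratio:.2f}x")
```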

No wonder Intel's not in too much of a hurry to get these out the door en masse.
 
Alchemist could literally end up being a pipe cleaner for Battlemage at this point.
Not even due to performance, but down to the delays and whether Battlemage is on schedule.

That would not actually be a bad thing for Battlemage.

A pipe cleaner in terms of hardware, software, everything.
 
So all of the passed-around Intel GPU 3DMark numbers are useless. Intel's drivers have something called "Advanced Performance Optimizations" that added benchmark-specific optimizations for 3DMark Time Spy and Port Royal, which are against UL's rules and are of course considered a cheat. That explains why these GPUs did well in synthetic tests, but not in actual games.

Intel released a new driver today that gives the user the ability to disable such optimizations, with reduced performance, of course.

Furthermore, Intel finally adds the promised 3DMark optimizations toggle. In case you don’t remember, Intel launched its discrete GPU architecture with special architecture optimizations that boosted synthetic performance in software such as 3DMark. This is why we are seeing great results from Arc GPUs in this software, yet these cards perform much worse in games. Nevertheless, such benchmark-specific optimizations are against UL 3DMark rules, which is why all former Arc GPU tests were annotated as ‘not approved’.

Intel promised to provide a toggle in its Arc Control software that would disable said optimizations and allow Arc GPUs to perform a valid benchmark test. This was supposed to launch in April, but it took Intel another 2 months to finally enable this toggle.
 
Try running something like 1280x800 or even lower to give the cards a chance

Why would lower resolutions help slower cards catch up? Higher resolution may actually help improve BVH cache hit rates and increase coherence in ray packets, benefiting architectures that don’t handle divergence well.
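One way to see the coherence argument: with a pinhole camera, the angular gap between neighboring pixels' primary rays shrinks as resolution grows, so adjacent rays in a packet are more likely to traverse the same BVH nodes. A toy Python sketch of that intuition (idealized camera, my own illustration, not data from this thread):

```python
import math

def adjacent_ray_angle_deg(h_res, fov_deg=90.0):
    # Angular separation between the primary rays of two neighboring
    # pixels for an idealized pinhole camera (toy model).
    half_fov = math.radians(fov_deg / 2.0)
    pixel_width = 2.0 * math.tan(half_fov) / h_res  # image plane at unit distance
    return math.degrees(math.atan(pixel_width))

for width in (1280, 1920, 3840):
    print(f"{width:>4} px wide: {adjacent_ray_angle_deg(width):.4f} deg between neighboring rays")
```

At 3840 pixels wide, neighboring primary rays are about 3x closer in angle than at 1280, which is the intuition behind better packet coherence at higher resolutions; secondary rays diverge regardless.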
 
Why would lower resolutions help slower cards catch up? Higher resolution may actually help improve BVH cache hit rates and increase coherence in ray packets, benefiting architectures that don’t handle divergence well.
Don't we see the gap between AMD and NVidia widen with increased resolution?
 
Try running something like 1280x800 or even lower to give the cards a chance

Why would lower resolutions help slower cards catch up? Higher resolution may actually help improve BVH cache hit rates and increase coherence in ray packets, benefiting architectures that don’t handle divergence well.

Don't we see the gap between AMD and NVidia widen with increased resolution?
Who said it would help slower cards catch up? None of the cards is comfortable at that resolution, going by their performance; that doesn't mean the gaps would be smaller, bigger, or even the same at resolutions they're comfortable with. Whichever it is depends on several factors, obviously.
 
Who said it would help slower cards catch up? None of the cards is comfortable at that resolution, going by their performance; that doesn't mean the gaps would be smaller, bigger, or even the same at resolutions they're comfortable with. Whichever it is depends on several factors, obviously.

It’s a synthetic benchmark, so comfortable frame rates aren’t very relevant. I was referring to your earlier comment that “the differences might be quite a bit different”.
Don't we see the gap between AMD and NVidia widen with increased resolution?

In RT specifically? Not sure, as there isn’t a lot of data on pure RT benchmarks at different resolutions.
 