Intel ARC GPUs, Xe Architecture for dGPUs [2018-2022]

Lots-o-Photos

Damn, if it had a blower fan and exhausted out of the back, I might have taken a chance on one for my media PC.

Regards,
SB
 
September 28, 2022
Please note that while the best competition for Intel’s Arc A380 is AMD’s Radeon RX 6400 and NVIDIA’s GeForce GTX 1630, we don’t have either, so we’re using the next-best-thing we could find: Radeon RX 6500 XT and GeForce GTX 1660. Both of those hover around the $200 price point, so the gap is larger than we’d like, but it will still be interesting to see how Arc holds up.
 
If Alchemist scales relatively close to TFLOPS in Blender (like RDNA2, and Nvidia in CUDA) and similar workloads, then the A770 16GB would have very high value in theory, as it carries 4x+ the TFLOPS of the A380 tested here, as long as the software support is there.
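As a rough back-of-envelope check on that "4x the TFLOPS" figure (the Xe-core counts and clocks below are approximate public specs, so treat the exact numbers as assumptions):

```python
# Rough FP32 throughput: 2 ops per ALU per clock (FMA) * ALU count * clock.
# Xe-core counts and boost clocks are approximate published figures, not measurements.

def fp32_tflops(xe_cores: int, clock_ghz: float, alus_per_xe_core: int = 128) -> float:
    gflops = 2 * xe_cores * alus_per_xe_core * clock_ghz
    return gflops / 1000.0

a380 = fp32_tflops(xe_cores=8, clock_ghz=2.0)    # ~4.1 TFLOPS
a770 = fp32_tflops(xe_cores=32, clock_ghz=2.1)   # ~17.2 TFLOPS
print(f"A380 ~{a380:.1f} TFLOPS, A770 ~{a770:.1f} TFLOPS, ratio ~{a770 / a380:.1f}x")
```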
 
They're basically throwing a lot of raw hardware at you for the price. So I wouldn't be surprised if, looking at aggregate benchmarks for current mainstream/popular games and software, it does relatively well. Also, technically neither AMD nor Nvidia has released their next-gen products in this segment, and both are likely still months out.

A large concern that reviews won't show is what the actual long-term support prospects will be, from both Intel themselves and from software developers.

Another is how much variance, and how many issues, crop up once you expand the usage criteria beyond mainstream/modern games (or other software).
 
I keep reading these cards basically require resizable BAR to perform well. Why is that? Is that a "we don't feel like making our drivers perform well without it" sort of thing?
 
On the one hand, I'm happy that another serious competitor is joining. On the other hand, this is even more fragmentation in the PC space, and it might make optimizing games even more difficult than it already was, due to yet another major variable shifting.
 
I keep reading these cards basically require resizable BAR to perform well. Why is that? Is that a "we don't feel like making our drivers perform well without it" sort of thing?

I get the sense that their PCIe and/or memory controllers are just not capable of keeping enough transactions in flight simultaneously, or the latency between transactions is high enough that it craters the throughput. Tough to say why; it might just be that this is their first dGPU memory controller since the i740, and they probably don't have much experience working with GDDR in any of its forms.
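To put some toy numbers on the "transactions in flight vs. latency" point, Little's law gives effective bandwidth ~ outstanding transfers x bytes per transfer / round-trip latency. The figures below are illustrative assumptions, not measured Arc or PCIe numbers:

```python
# Little's law for a transfer pipe: bandwidth = in-flight bytes / round-trip latency.
# All values are illustrative assumptions, not measured Arc or PCIe figures.

def effective_bandwidth_gbs(outstanding: int, bytes_per_txn: int,
                            latency_us: float, link_cap_gbs: float = 16.0) -> float:
    raw = outstanding * bytes_per_txn / (latency_us * 1e-6) / 1e9
    return min(raw, link_cap_gbs)  # can never exceed the physical link

# Shallow queue + small copies + high latency -> throughput craters:
print(effective_bandwidth_gbs(outstanding=4, bytes_per_txn=4096, latency_us=5.0))    # ~3.3 GB/s
# Deeper queue + larger bursts hide the same latency and saturate the link:
print(effective_bandwidth_gbs(outstanding=32, bytes_per_txn=65536, latency_us=5.0))  # ~16 GB/s (link cap)
```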

Think about how you'd approach a blank-sheet design for storage. If you knew you had the SATA bus as one of your constraints, and you knew you wanted to be able to transfer data at, say, ~200MiB/s, you could achieve that with either a mechanical HDD or an SSD. If late in the game you suddenly realized that you also needed to be able to sustain that 200MiB/s not with sequential multi-gigabyte file transfers but with tens of thousands of tiny 4k file transfers... if you'd started down the HDD path for cost reasons and had locked that into your product, you'd now be in big trouble.
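The arithmetic behind that analogy is brutal. A quick sketch (the HDD IOPS figure is the usual rough ballpark, not a specific drive's spec):

```python
# How many random 4 KiB operations per second does 200 MiB/s require,
# and how does that compare with what a mechanical HDD can actually seek?
# The HDD figure is a rough ballpark assumption, not a measured spec.

target_mib_s = 200
io_size_kib = 4
required_iops = target_mib_s * 1024 / io_size_kib   # ~51,200 IOPS
hdd_random_iops = 150                               # typical 7200 rpm ballpark

print(f"Need ~{required_iops:,.0f} random 4 KiB IOPS; an HDD manages ~{hdd_random_iops}.")
print(f"That's a ~{required_iops / hdd_random_iops:,.0f}x shortfall.")
```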

My complete speculation is that the same sort of thing may have happened here, like if the systems they were using during development all had ReBAR enabled. They may have gotten far enough into the physical layout before they really understood what the tradeoffs were of only optimizing for those large transfers. Not enough die area spent on FIFOs/buffers, or on really optimizing out every scrap of latency they could find, that sort of thing. It's hard to believe it was an intentional tradeoff to just completely abandon all the older non-ReBAR systems; even AMD's and NV's smallest GPUs over the years haven't seemed to have issues, so it may just be that it wasn't obvious how bad the problem was until late in the game.
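For what it's worth, if you want to check whether ReBAR is actually active on a given box, one low-tech way on Linux is to look at the GPU's memory BAR sizes in sysfs: without ReBAR the CPU-visible aperture is typically capped at 256 MiB, with it one BAR should roughly match VRAM. A minimal sketch, assuming a Linux system; the PCI address is a placeholder you'd swap in from lspci:

```python
# Report BAR sizes for a PCI device from its sysfs 'resource' file.
# Each line of that file is "start end flags" in hex; unused BARs are all zeros.
# The device address below is a placeholder; find yours with lspci.
from pathlib import Path

def bar_sizes_mib(pci_addr: str) -> list[float]:
    resource = Path(f"/sys/bus/pci/devices/{pci_addr}/resource")
    sizes = []
    for line in resource.read_text().splitlines():
        start, end, _flags = (int(field, 16) for field in line.split())
        if end > start:
            sizes.append((end - start + 1) / (1 << 20))
    return sizes

if __name__ == "__main__":
    for size in bar_sizes_mib("0000:03:00.0"):   # placeholder address
        print(f"BAR: {size:.0f} MiB")
```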
 
I keep reading these cards basically require resizable BAR to perform well. Why is that? Is that a "we don't feel like making our drivers perform well without it" sort of thing?
Something they thought was a good idea during design, but they didn't foresee this issue.
So it's hardware related; they may be able to mitigate it in the driver, but don't buy one expecting it to improve.

They've talked about it a few times.
Good explanation
 
I have a blower-cooled 2080 and yeah that is not what you want on a 225W card if quietness is a priority. It will go dustbuster with demanding software.
 


Arc A770/A750 embargo:

  • Unboxing: Sep 30, 09:00 (EDT)
  • Review: Oct 5, 09:00 (EDT)
  • Launch: Oct 12, 09:00 (EDT)
 
I have a blower-cooled 2080 and yeah that is not what you want on a 225W card if quietness is a priority. It will go dustbuster with demanding software.

I remember the blower-cooled R9 290(X) GPUs. I think they reached close to 100°C under gaming loads.
 
I remember the blower-cooled R9 290(X) GPUs. I think they reached close to 100°C under gaming loads.
They were 300W cards, and AMD had the fan control set up to target 95°C. It was the same cooler as on the 7970, which was a ~250W card. I had a 290X for a while and it was a very noisy card, and annoying because the fan changed speed too much. It would even ramp up rendering the Windows GUI sometimes.

I suggest avoiding blowers unless you need to put a card into an enclosure without much airflow; that's what they are best for. They won't be a quiet solution unless the card is much lower power and the cooler is still kept beefy.
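The "fan changed speed too much" complaint above is basically what a naive temperature-target controller does without smoothing. A toy illustration (the curve and temperatures are invented, not AMD's actual firmware logic):

```python
# Toy comparison: a fan controller that chases the temperature target directly
# versus one that limits how fast the speed can change. Purely illustrative;
# the numbers and curve are invented, not AMD's firmware behaviour.

def naive_fan(temp_c: float, target_c: float = 95.0) -> int:
    # Jumps straight to a speed proportional to the error -> audible ramping.
    return max(20, min(100, int(40 + 10 * (temp_c - target_c))))

def smoothed_fan(temp_c: float, prev_speed: int, target_c: float = 95.0) -> int:
    # Moves at most a few percent per step toward the same setpoint -> steadier noise.
    desired = naive_fan(temp_c, target_c)
    return prev_speed + max(-3, min(3, desired - prev_speed))

speed = 30
for temp in [70, 92, 96, 90, 97, 88]:          # invented temperature samples
    speed = smoothed_fan(temp, speed)
    print(f"{temp}°C -> naive {naive_fan(temp)}%, smoothed {speed}%")
```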
 