Intel ARC GPUs, Xe Architecture for dGPUs [2018-2022]

My impression was improvement, but still spotty. Their drivers offer a bare minimum of features and only need to keep a tiny GPU with little parallelism busy with work.

Idk, they featured VRS and adaptive sync pretty quickly, no? I think they will be fine with drivers. I don't trust the hardware because I don't trust Raja, but I hope to be wrong, competition is good (most of the time...)
 
Idk, they featured VRS and adaptive sync pretty quickly, no? I think they will be fine with drivers. I don't trust the hardware because I don't trust Raja, but I hope to be wrong, competition is good (most of the time...)
They were also the first to start adding new features after the long post-DX11 silence, no?
 
Idk, they featured VRS and adaptive sync pretty quickly, no? I think they will be fine with drivers. I don't trust the hardware because I don't trust Raja, but I hope to be wrong, competition is good (most of the time...)
I meant user-facing features. Things like DRS, Freestyle, GeForce Experience, input lag reduction, etc.
 
IMO in the current market, all Intel needs to be ultra successful with DG2 is:

- 6700XT performance
- prices starting at $300 (512 EU version)
- lots of availability
- anti-mining enforced at the silicon level


If they get these four right, gamers won't care if the card consumes 250W or if the drivers aren't top-notch. Intel would steadily gain market share among gamers, because miners and scalpers are hoarding most Radeons and GeForces.



What intrigues me the most is the upscaling method they are going to use; that neural supersampling sounds exciting, especially the comparisons with DLSS and FSR. And much like AMD, Intel tends to publish open drivers.

Yes, let's hope they'll open up their solution. I haven't seen any sign of Intel having dedicated ML units in HPG, but their earlier Xe presentations did mention matrix multipliers (and RAMBO Cache, which sounds a whole lot like Infinity Cache).
Regardless, there's no reason to believe their AI upscaling solution isn't just coming from the EUs' FP16/INT8/INT4 throughput, in which case it should work on RDNA2 just fine.

Though the best outcome IMO would be for them to pair up with AMD and develop a temporal-based reconstruction (FSR 2.0?) that brought TSR-like quality and performance to non-UE5 games.
 
Bruh it's N6.
What lots of availability?

If it can't mine, then it'll be much more available than current Geforces and Radeons.
Unless they're not getting any relevant proportion of TSMC's N6 output.
 
Though the best outcome IMO would be for them to pair up with AMD and develop a temporal-based reconstruction (FSR 2.0?) that brought TSR-like quality and performance to non-UE5 games.
The best outcome for AMD? Can't see why Intel would want to do something like this.
As for their XeSS solution they can easily tie that down to their h/w only through things like OneAPI. It's not a driver-level solution, so them providing an open-source driver for Linux means nothing for it.
 
Unless they're not getting any relevant proportion of TSMC's N6 output.
a) no shit
b) ahoy, it ramps in the same quarter as RMB and two new MTK SoCs.
As for their XeSS solution they can easily tie that down to their h/w only through things like OneAPI.
It already runs dp4a. Any kind of packed math solution works, sorry.
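For reference, dp4a is just a 4-wide packed 8-bit dot product with an accumulator, so there's nothing Intel-specific about it. A rough Python sketch of what the signed variant computes (names and layout are mine, for illustration only, not any vendor's actual API):

```python
import struct

def dp4a(a: int, b: int, acc: int) -> int:
    """Emulate the signed dp4a operation: treat the 32-bit inputs a and b as
    four packed signed 8-bit lanes, multiply lane-wise, and add the sum of
    products to acc. (Real hardware keeps everything in 32 bits; this sketch
    doesn't bother with wraparound.)"""
    lanes_a = struct.unpack("<4b", struct.pack("<i", a))  # four signed bytes
    lanes_b = struct.unpack("<4b", struct.pack("<i", b))
    return acc + sum(x * y for x, y in zip(lanes_a, lanes_b))

# Lanes (4, 3, 2, 1) dotted with (1, 1, 1, 1), plus accumulator 0 -> 10
print(dp4a(0x01020304, 0x01010101, 0))
```

Any GPU that exposes this (or can emulate it with plain integer math) can run the same inference kernels.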
Why would Intel pair up with AMD for that ?
They won't.
I bet they have the resources to do it alone.
Yes but also no.
Intel devrel is a miserable husk.
 
As for their XeSS solution they can easily tie that down to their h/w only through things like OneAPI
Kaplanyan's previous super-resolution work at Facebook was horribly slow: 18.3 ms at 1080p (and that's for the fast version) on a Titan V, compared to something like 1 ms for DLSS on a 3090 at 4K. Both cards have comparable DL FLOPS and bandwidth, so Intel's super-resolution must be faster by a factor of 40x or slightly more to be practical in real games on Titan V-class HW (>100 FP16 TFLOPS).
They don't need SW to differentiate; they need HW to make the XeSS thingie practical, and something tells me packed math alone would not suffice because it will mean all kinds of problems with precision and the NN's generalization.
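(If anyone wants to sanity-check that factor, here's the kind of back-of-the-envelope I have in mind; the 1080p frame budget below is purely my own assumption, not a figure from Intel or the paper.)

```python
# Back-of-the-envelope check of the "~40x" figure.
measured_ms_1080p = 18.3   # Kaplanyan et al.'s fast model at 1080p on a Titan V
budget_ms_1080p = 0.5      # assumed practical per-frame budget; DLSS reportedly
                           # costs ~1 ms at 4K on a 3090, and 1080p is 1/4 the pixels

speedup_needed = measured_ms_1080p / budget_ms_1080p
print(f"Needed speedup: ~{speedup_needed:.0f}x")  # ~37x, i.e. the same tens-of-x
                                                  # ballpark as the 40x figure above
```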
 
Does any gaming API allow for packed INT?
I'm not sure that an AI upscaler would even use INTs. FP16 seems like a bare minimum.

They don't need SW to differentiate; they need HW to make the XeSS thingie practical, and something tells me packed math alone would not suffice because it will mean all kinds of problems with precision and the NN's generalization.
That's why I've said it will be interesting to see how that runs across different h/w.
Unless it's so fast that its impact is completely hidden by the lowered resolution even when running via "packed math", it will likely end up running better on Nvidia h/w, which would be a fun situation for Intel.
 
Why would Intel pair up with AMD for that? I bet they have the resources to do it alone.

It would be in both AMD's and Intel's best interests to stop an Nvidia-exclusive tech like DLSS from ever becoming the de facto standard for upsampling/reconstruction in PC gaming.
The best way to do it would be to offer decent, open-source and/or IHV-agnostic competition. A path which FSR already started.

And as @Bondrewd said, Intel's gaming devrel isn't up to par with AMD's or Nvidia's, so trying to insert another exclusive AI-driven supersampling solution into random games this late in the game doesn't exactly sound like a recipe for success to me.
 