Intel ARC GPUs, Xe Architecture for dGPUs [2018-2022]

If it's SYCL then it will run on OpenCL/CUDA/oneAPI/whatever AMD has.
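Something like this, a minimal SYCL 2020 sketch; which backend it actually runs through (OpenCL, CUDA, Level Zero, HIP) is up to the toolchain, and everything here is just the standard API, nothing XeSS-specific:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q;  // picks a default device: could be Intel, Nvidia, or AMD
    std::vector<float> data(1024, 1.0f);
    {
        sycl::buffer<float> buf(data.data(), sycl::range<1>(data.size()));
        q.submit([&](sycl::handler& h) {
            sycl::accessor acc(buf, h, sycl::read_write);
            h.parallel_for(sycl::range<1>(data.size()), [=](sycl::id<1> i) {
                acc[i] *= 2.0f;  // trivial element-wise work
            });
        });
    }  // buffer destructor copies results back into data
    std::cout << "data[0] = " << data[0] << "\n";  // prints 2
    return 0;
}
```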

Okay, maybe they will write a wrapper. I'd guess it will be much slower than DLSS on Nvidia hardware, so developers will use it like DLSS: as a library with interfaces (see the sketch below). I don't think anybody will rewrite it for Tensor Cores.
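Roughly what I mean by "a library with interfaces", a hypothetical sketch; none of these names come from any real SDK:

```cpp
#include <memory>

struct UpscaleInputs {
    const void* color;          // low-res color target
    const void* motionVectors;  // per-pixel motion
    const void* depth;          // depth buffer
};

// The engine codes against one abstract upscaler...
class IUpscaler {
public:
    virtual ~IUpscaler() = default;
    virtual void upscale(const UpscaleInputs& in, void* outColor) = 0;
};

// ...and vendor backends wrap DLSS, XeSS, or a plain compute fallback.
class DlssUpscaler : public IUpscaler {
public:
    void upscale(const UpscaleInputs&, void*) override { /* call DLSS here */ }
};

class XessUpscaler : public IUpscaler {
public:
    void upscale(const UpscaleInputs&, void*) override { /* call XeSS here */ }
};

enum class Gpu { Nvidia, Intel, Other };

std::unique_ptr<IUpscaler> makeUpscaler(Gpu gpu) {
    switch (gpu) {
        case Gpu::Nvidia: return std::make_unique<DlssUpscaler>();
        case Gpu::Intel:  return std::make_unique<XessUpscaler>();
        default:          return nullptr;  // fall back to TAA/native
    }
}
```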
 
I've been saying for some time now that Nvidia should've ported DLSS to something widely compatible too.
It would've likely led to low-resolution slideshows on cards without ML hardware, in the same fashion as RT did on Pascal.
...And everyone would be happy...
 
Intel Xe-HPC Ponte Vecchio GPU features up to 128 Xe-Cores and 128 Ray Tracing Units

Fewer SIMDs per MP, but twice as wide?
One would say such a config would be a better fit for gaming.
 
Less control logic per ALU means less performance in games (all else being equal); see the toy model below. With supercomputer-scale problems you can more easily compensate for a few underutilized cores, or even nodes.
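My own illustration, not anything from Intel: the wider the SIMD, the more often a branchy game shader forces the whole vector through both sides of a branch, while regular HPC kernels barely care.

```cpp
#include <cmath>
#include <cstdio>

// Probability that a SIMD vector of `width` lanes stays fully coherent when
// each lane independently takes a branch with probability p.
double coherentFraction(int width, double p) {
    return std::pow(p, width) + std::pow(1.0 - p, width);
}

int main() {
    const double p = 0.1;  // say 10% of pixels take the expensive path
    const int widths[] = {8, 16, 32, 64};
    for (int width : widths) {
        std::printf("width %2d: %4.1f%% of vectors stay coherent\n",
                    width, 100.0 * coherentFraction(width, p));
    }
    // Output: ~43% at width 8, ~19% at 16, ~3% at 32, ~0.1% at 64.
    // Same ALU count arranged as fewer, wider SIMDs loses utilization on
    // branchy game code; supercomputer kernels are far more uniform.
    return 0;
}
```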
 
At the timestamp, they say they "will open up the tools and SDKs" of XeSS for everyone.

Could we see XeSS on consoles?
 
I kinda wonder what backend a console would run SYCL code through. Is there anything in modern console APIs that's compatible with SYCL compilers?
 
So is Intel going to sell their own graphics cards and use their CPU distribution channels?
Intel's video mentions ISVs "and partners", but I heard no mention of any AIBs specifically.

That could be one way to avoid GPUs being hoarded straight out of the manufacturing plants.
 
Yes.
They run packed math like every other RDNA2 part out there.
Are you sure it's simply RPM? I haven't checked any of this out yet.
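For anyone wondering, "packed math"/RPM just means one 32-bit lane operates on two 16-bit values at once, doubling 16-bit throughput for the same register and ALU width. A rough emulation of the idea with integer halves; real hardware does FP16 pairs (e.g. v_pk_add_f16):

```cpp
#include <cstdint>
#include <cstdio>

// Add two pairs of 16-bit values packed into 32-bit words with one
// "instruction". The carry out of the low half is masked off, so the
// two lanes stay independent, as in real packed-math hardware.
uint32_t packedAdd16(uint32_t a, uint32_t b) {
    uint32_t lo = (a & 0xFFFFu) + (b & 0xFFFFu);
    uint32_t hi = (a >> 16) + (b >> 16);
    return (hi << 16) | (lo & 0xFFFFu);
}

int main() {
    uint32_t a = (3u << 16) | 5u;    // pair (3, 5)
    uint32_t b = (10u << 16) | 20u;  // pair (10, 20)
    uint32_t r = packedAdd16(a, b);
    std::printf("hi=%u lo=%u\n", r >> 16, r & 0xFFFFu);  // hi=13 lo=25
    return 0;
}
```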

One thing that is nice is that my interpretation of something they said months ago turned out to be right.
So I'm not surprised they have ML upscaling and plan to open-source it; it's exactly what I expected.

Depending on price they could have some very compelling products, even if it's at the mid-to-lower end of performance.
 
The same place as the native 4K FPS number there.
So nowhere. This slide means little and is basically an artistic statement of intent. We have to wait for the actual code to say anything about how it will perform on different h/w.

Codeplay is writing one.
Codeplay is targeting OpenCL, which is basically a Z-class citizen on Nv h/w.
And the options for SYCL over CUDA are down to the Intel and AMD implementations right now, I think?
 
And they need matrix engines for it when they can just use DP4a?
Turing delivers 4x more INT8 performance with Tensor Cores than with DP4a. And you're saying that 1/4 of the INT8 performance is "very much fast enough" for ML approaches?
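For reference, DP4a computes a four-way INT8 dot product accumulated into INT32, one instruction per four byte pairs; Tensor Cores do the same INT8 math across whole matrix tiles at once, hence the throughput gap. A scalar emulation of what the instruction does (mine, not vendor code):

```cpp
#include <cstdint>
#include <cstdio>

// Emulate DP4a: treat a and b as four packed signed bytes each, multiply
// pairwise, and accumulate the four products into a 32-bit accumulator.
int32_t dp4a(uint32_t a, uint32_t b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        int8_t ai = static_cast<int8_t>(a >> (8 * i));
        int8_t bi = static_cast<int8_t>(b >> (8 * i));
        acc += static_cast<int32_t>(ai) * static_cast<int32_t>(bi);
    }
    return acc;
}

int main() {
    // Pack (1, 2, 3, 4) and (5, 6, 7, 8) as bytes, low byte first.
    uint32_t a = 0x04030201u;
    uint32_t b = 0x08070605u;
    std::printf("%d\n", dp4a(a, b, 0));  // 1*5 + 2*6 + 3*7 + 4*8 = 70
    return 0;
}
```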

/edit: Looked at DLSS and it scales just fine with more compute performance:
3090 vs 2060 Super: 2.5x more FP16 Tensor Core throughput (roughly 142 vs 57 TFLOPS dense) is 2.5x faster with DLSS Performance upscaling a 1080p -> 2160p picture.
 