Intel ARC GPUs, Xe Architecture for dGPUs [2018-2022]

With C++ everybody can choose how many language features they want to use, and many people request OOP on the GPU.
Also, many consider something like OpenCL too cumbersome to work with, although it's a luxury compared to the low-level gfx APIs.
Though going from NV lock-in to Intel lock-in seems like no win.
Maybe Intel is trying to be more open. The article mentions they plan a mix of C++ and SYCL (https://www.khronos.org/sycl/).
Never tried SYCL. Could be interesting for tools development, maybe? Actually I shy away from using CL myself here - afraid of the code becoming too hard to maintain :|
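From the docs it apparently looks roughly like this - an untested sketch on my part, assuming a SYCL 2020 style compiler such as DPC++ (older implementations use <CL/sycl.hpp> and the cl::sycl namespace). The kernel is just an ordinary C++ lambda:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // Let the runtime pick a default device (a GPU if one is available).
    sycl::queue q;

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);

    {
        // Buffers handle host<->device transfers behind the scenes.
        sycl::buffer<float> ba(a.data(), sycl::range<1>(a.size()));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(b.size()));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(c.size()));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(ba, h, sycl::read_only);
            sycl::accessor B(bb, h, sycl::read_only);
            sycl::accessor C(bc, h, sycl::write_only);
            // The kernel body is plain C++ captured in a lambda.
            h.parallel_for(sycl::range<1>(c.size()), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // buffer destructors wait for the kernel and copy results back to the vectors

    std::cout << "c[0] = " << c[0] << "\n"; // expect 3
}
```

So it's single-source C++, which is presumably what "a mix of C++ and SYCL" means.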
 
What exactly does this mean?
I only mean that one can use just the C subset if they want, for example.
(And I assume CUDA does not have many OOP features, and is similarly C-like, like shading languages or OpenCL 1.x.)
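Purely as illustration of what I mean (a hypothetical SYCL-style snippet, since that's what the article talks about): the same kernel body can stick to a plain C-like style or pull in C++ features like templates, and you decide how far to go:

```cpp
#include <sycl/sycl.hpp>

// C-like: a free function using only built-in types.
inline float lerp_c(float a, float b, float t) { return a + t * (b - a); }

// C++-style: a small generic functor doing the same thing.
template <typename T>
struct Lerp {
    T operator()(T a, T b, T t) const { return a + t * (b - a); }
};

void blend(sycl::queue& q, sycl::buffer<float>& buf) {
    q.submit([&](sycl::handler& h) {
        sycl::accessor out(buf, h, sycl::read_write);
        h.parallel_for(sycl::range<1>(out.size()), [=](sycl::id<1> i) {
            // Both styles compile for the device - use as much C++ as you want.
            out[i] = lerp_c(out[i], 1.0f, 0.5f);
            out[i] = Lerp<float>{}(out[i], 1.0f, 0.5f);
        });
    });
}
```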
 
Nothing is too big to fail in the software world.

Just look at Flash and Java.

You can make the argument that Java & Flash are going away because they were not well taken care of.

Of course CUDA can fail, but as long as nVidia doesn't let it die, I don't see big companies moving away from it; it's too far ahead of other "languages" in this sector right now. And you don't just have to propose a similar alternative, but a much better one, to get all the big players to move away from it. Will it happen someday? Sure, but my guess is it will be replaced by another nVidia thing rather than taken down by anyone else.
 
It's too far ahead of other "languages" in this sector right now.
Why ahead? (Seriously asking - never used it myself. It's not an option for games.)

Will it happen someday? Sure, but my guess is it will be replaced by another nVidia thing rather than taken down by anyone else.
If Intel spans the whole field from CPUs to GPUs, FPGAs and tensor hardware as said, and they do a uniform programming model well, NV might have a hard time competing in the long run.
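A hedged sketch of what such a uniform model could look like in SYCL terms: the same kernel submitted to different device types via selectors. Only the portable part is shown - FPGAs in practice go through vendor extensions, and each queue only works if the matching runtime is installed:

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

// Run the same SAXPY kernel on whatever device the queue was built for.
void saxpy_on(sycl::queue q, float a, sycl::buffer<float>& x, sycl::buffer<float>& y) {
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";
    q.submit([&](sycl::handler& h) {
        sycl::accessor X(x, h, sycl::read_only);
        sycl::accessor Y(y, h, sycl::read_write);
        h.parallel_for(sycl::range<1>(Y.size()), [=](sycl::id<1> i) {
            Y[i] = a * X[i] + Y[i];
        });
    }).wait();
}

int main() {
    std::vector<float> xv(256, 1.0f), yv(256, 2.0f);
    sycl::buffer<float> x(xv.data(), sycl::range<1>(xv.size()));
    sycl::buffer<float> y(yv.data(), sycl::range<1>(yv.size()));

    // Same source, different hardware; each constructor throws if no such device exists.
    saxpy_on(sycl::queue{sycl::cpu_selector_v}, 2.0f, x, y);
    saxpy_on(sycl::queue{sycl::gpu_selector_v}, 2.0f, x, y);
    // FPGAs typically appear behind sycl::accelerator_selector_v or vendor-specific selectors.
}
```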
 
Why ahead? (Seriously asking - never used it myself. It's not an option for games.)

The API is very complete and lets you do the same as the competing open source APIs, but it's much faster to implement with less code needed. Then there's a company behind it with a bunch of people whose job is to provide technical support. Then Nvidia themselves provide research grants for students to work with their API.
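To give an idea of the "less code" part, this is roughly (from memory, error checking omitted) the host-side setup plain OpenCL 1.x needs before a single kernel runs - exactly the ceremony the CUDA runtime API mostly hides:

```cpp
#include <CL/cl.h>
#include <cstdio>
#include <vector>

// Minimal vector-add kernel, handed to the driver as a source string.
static const char* kSource =
    "__kernel void add(__global const float* a, __global const float* b,"
    "                  __global float* c) {"
    "    size_t i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

int main() {
    cl_int err = 0;

    // 1. Discover a platform and a GPU device.
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    // 2. Create a context and a command queue.
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    // 3. Compile the kernel source at runtime and fetch the kernel object.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "add", &err);

    // 4. Allocate device buffers and copy the input data over.
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), a.data(), &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               n * sizeof(float), b.data(), &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), nullptr, &err);

    // 5. Bind arguments, launch, read the result back.
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &dc);
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, n * sizeof(float), c.data(), 0, nullptr, nullptr);

    printf("c[0] = %f\n", c[0]); // expect 3.0

    // ...plus clRelease* calls for the buffers, kernel, program, queue and context.
    return 0;
}
```

CUDA (and SYCL too, to be fair) collapses most of that into a kernel definition plus a few lines of setup.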

None of this means that Intel + ARM + Google + AMD + PowerVR + Apple + many others can't work together towards a better solution that is IHV agnostic.
 
Nothing is too big to fail in the software world.

Just look at Flash and Java.

Java is failing? That’s news to me. What’s it being replaced by?

In the Anandtech article posted above they claim Intel is investing in CUDA conversion tools because they acknowledge Nvidia’s current API advantage. I don’t know how technically feasible that is, but the API itself is just one aspect.

Nvidia has been pushing CUDA for a long time at a grass roots level and have built a very strong ecosystem of tools and frameworks covering everything from raytracing to physics and AI. Intel has a tall hill to climb.
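To illustrate what such a conversion tool has to do, here's a made-up example of the mapping (the CUDA original is in the comment, a SYCL equivalent below; names are invented, and it assumes USM device pointers as the analogue of raw CUDA device pointers - illustrative only, not output from any actual tool):

```cpp
// CUDA original:
//   __global__ void scale(float* data, float s, int n) {
//       int i = blockIdx.x * blockDim.x + threadIdx.x;
//       if (i < n) data[i] *= s;
//   }
//   // launch: scale<<<blocks, 256>>>(d_data, 2.0f, n);

#include <sycl/sycl.hpp>

void scale(sycl::queue& q, float* data, float s, int n) {
    // 'data' is assumed to be a USM device allocation (sycl::malloc_device).
    q.parallel_for(sycl::nd_range<1>(sycl::range<1>((n + 255) / 256 * 256),
                                     sycl::range<1>(256)),
                   [=](sycl::nd_item<1> item) {
        // Equivalent of blockIdx.x * blockDim.x + threadIdx.x.
        int i = static_cast<int>(item.get_global_id(0));
        if (i < n) data[i] *= s;
    }).wait();
}
```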
 
Nvidia has been pushing CUDA for a long time at a grass roots level and have built a very strong ecosystem of tools and frameworks covering everything from raytracing to physics and AI. Intel has a tall hill to climb.
Also strengthened with CUDA now running on ARM processors.
NVIDIA's full stack of AI and HPC software is being made available to the ARM ecosystem. That includes all of its CUDA X AI and HPC libraries, GPU-accelerated AI frameworks, and software development tools, such as PGI compilers with OpenACC support.
...
This is a big move for both NVIDIA and ARM. As it pertains to the former, once stack optimization is complete, the company can boast support for every major CPU platform, including ARM, IBM Power, and of course x86. And for ARM, access to NVIDIA's CUDA stack is a major boost in GPU horsepower, and a selling point to clients.
https://hothardware.com/news/nvidia-cuda-software-stack-arm-exascale-computing
 
Why ahead? (Seriously asking - never used it myself. It's not an option for games.)

If Intel spans the whole field from CPU, GPU, FPGA, Tensors as said, and they do a uniform programming model well, NV might have a hard time to compete on the long run.

One of my questions here would be: which CPUs? Will Intel really build something with good ARM support? I have my doubts. But CUDA supports ARM now, and with Amazon, Huawei, Fujitsu etc. we also have some not-so-small players there. Not to forget IBM and Power.
Of course x86 market share at the moment is overwhelming, but maybe ARM can win some market share with the backing it has.
 
Nothing is too big to fail in the software world.

Just look at Flash and Java.
Nowadays the whole client stack has shrunk to the browser (well, in fact Chrome) and its HTML5 and JS engine. It's simple. The legacy client stuff was insecure and unfriendly by design - Flash, Java applets, and Web* or Silverlight.

The oneAPI talks remind me of the early days of the AMD Fusion fantasies. Let's see.
 