Intel ARC GPUs, Xe Architecture for dGPUs

Discussion in 'Architecture and Products' started by DavidGraham, Dec 12, 2018.

  1. JoeJ

    JoeJ Veteran

    With C++, everybody can choose how many language features they want to use, and many people request OOP on the GPU.
    Also, many consider something like OpenCL too cumbersome to work with, although it's luxury compared to low-level gfx APIs.
    Still, going from NV lock-in to Intel lock-in seems like no win.
    Maybe Intel is trying to be more open. The article mentions they plan a mix of C++ and SYCL (https://www.khronos.org/sycl/).
    Never tried SYCL. Could be interesting for tools development, maybe? Actually I shy away from using CL myself here - afraid of the code becoming too hard to maintain :|
     
  2. tuna

    tuna Veteran

    What exactly does this mean?
     
  3. tuna

    tuna Veteran

    As far as I understand, Intel's compute stack will be based on open standards with multiple implementations (unlike nVidia's CUDA).
     
  4. JoeJ

    JoeJ Veteran

    I only mean that one can use just the C subset if they want, for example.
    (And I assume CUDA doesn't have many OOP features, and is similarly C-like, like the shading languages or OpenCL 1.x.)
     
  5. Rootax

    Rootax Veteran

    CUDA is too well implemented right now to be displaced by something else, imo.
     
    egoless and xpea like this.
  6. Nothing is too big to fail in the software world.

    Just look at Flash and Java.
     
    Kej, AlphaWolf, Alexko and 2 others like this.
  7. Rootax

    Rootax Veteran

    You can make the argument that Java & Flash are going away because they were not well taken care of.

    Of course CUDA can fail, but as long as nVidia doesn't let it die, I don't see big companies moving away from it; it's too far ahead of the other "languages" in this sector right now. And you don't just have to propose a similar alternative, but a much better one, to get all the big players to move away from it. Will it happen someday? Sure, but my guess is it will be replaced by another nVidia thing rather than taken down by anyone else.
     
    pharma likes this.
  8. JoeJ

    JoeJ Veteran

    Why ahead? (Seriously asking - I've never used it myself. It's not an option for games.)

    If Intel spans the whole field from CPUs to GPUs, FPGAs and tensor hardware as stated, and they do a uniform programming model well, NV might have a hard time competing in the long run.
     
  9. The API is very complete and lets you do the same things as the competing open-source APIs, but much faster to implement, with less code needed. Then there's a company behind it with a bunch of people whose job is to provide technical support. And nvidia themselves provide research grants for students to work with their API.

    None of this means that Intel + ARM + Google + AMD + PowerVR + Apple + many others can't work together towards a better solution that is IHV agnostic.
     
    Rootax, pharma and JoeJ like this.
  10. trinibwoy

    trinibwoy Meh Legend

    Java is failing? That’s news to me. What’s it being replaced by?

    In the AnandTech article posted above they claim Intel is investing in CUDA conversion tools because they acknowledge Nvidia's current API advantage. I don't know how technically feasible that is, but the API itself is just one aspect.

    Nvidia has been pushing CUDA for a long time at a grassroots level and has built a very strong ecosystem of tools and frameworks covering everything from raytracing to physics and AI. Intel has a tall hill to climb.
     
  11. pharma

    pharma Veteran

    Also strengthened with CUDA now running on ARM processors.
    https://hothardware.com/news/nvidia-cuda-software-stack-arm-exascale-computing
     
  12. Samwell

    Samwell Newcomer

    One of my questions here would be: which CPUs? Will Intel really build something with good ARM support? I have my doubts. But CUDA is supporting ARM now, and with Amazon, Huawei, Fujitsu etc. we also have some not-so-small players there. Not to forget IBM and POWER.
    Of course x86's market share at the moment is overwhelming, but maybe ARM can win some market share with the backing it has.
     
  13. yuri

    yuri Regular

    Nowadays the whole client stack has shrunk to the browser (well, in fact Chrome) and its HTML5 and JS engines. It's simple: the legacy client stuff was insecure and unfriendly by design - Flash, Java applets and Web* or Silverlight.

    The oneAPI talks remind me of the early days of the AMD Fusion fantasies. Let's see.
     
  14. JoeJ

    JoeJ Veteran

    The day after CUDA runs on AMD :)
     
    xpea likes this.
  15. Alexko

    Alexko Veteran Subscriber

  16. digitalwanderer

    digitalwanderer Dangerously Mirthful Legend

  17. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■) Moderator Legend Alpha

    People haven't learned and keep making the same mistakes. :lol:
     
  18. yuri

    yuri Regular

    "Data analysis", ML, being the 1st (introductory) lang taught at unis, etc.
     
    Kej, pharma, Frenetic Pony and 2 others like this.
  19. pcchen

    pcchen Moderator Moderator Veteran Subscriber

    Python is quite easy to learn, and it's now the prime choice of teaching language in high schools.
     
  20. DavidGraham

    DavidGraham Veteran

    Intel shared the first-ever live demo of its dGPU, called DG1, running Destiny 2 on a laptop form factor.

    The game appears to be running at sub-30 fps, with low graphics/texture settings, and with horrible AA!

     