Intel ARC GPUs, Xe Architecture for dGPUs

Discussion in 'Architecture and Products' started by DavidGraham, Dec 12, 2018.

  1. DegustatoR

    DegustatoR Veteran

  2. Dayman1225

    Dayman1225 Newcomer

    More rumours pointing to 3070/6700XT level perf for DG2



     
  3. trinibwoy

    trinibwoy Meh Legend

    That would be amazeballs. Hopefully it actually competes and not just in 1 or 2 hand picked games.
     
    Dayman1225 and Lightman like this.
  4. pharma

    pharma Veteran

    I imagine it would definitely compete in RT games, somewhat.
     
  5. DegustatoR

    DegustatoR Veteran

    Hopefully it sucks at mining so much that all cards will be bought by gamers.
     
    Silent_Buddha and Kyyla like this.
  6. Bondrewd

    Bondrewd Veteran

    lol
     
  7. Silent_Buddha

    Silent_Buddha Legend

    If it can actually be sold for a sane price, I'd take a chance on one even knowing that Intel doesn't have the greatest track record when it comes to driver support for their GPUs.

    Regards,
    SB
     
    Dictator likes this.
  8. Intel needs to establish a footprint in the market and build up driver support with the gaming industry on top of that. Miners will probably give Intel that footprint, since mining doesn't need as much driver integration as games do. Gamers mostly expect Intel to be priced competitively so that they can buy AMD or Nvidia cards for cheaper.
     
  9. Nah, nowadays gamers will buy Intel GPUs if miners don't take them all and they ship with sufficient performance + stock.
     
  10. Bondrewd

    Bondrewd Veteran

    Do you really expect a 256b GDDR6 GPU to not be used for mining?
     
    Lightman likes this.
  11. I half-expect Intel to put hard and soft limits on hash rate. Considering Xe's development started around the time of the first crypto boom, they'd be dumb not to think of some safeguards IMO.
     
  12. Bondrewd

    Bondrewd Veteran

    lol wtf. Why?
    The only limitation there'd ever be is Intel's OpenCL driver being kinda busted.
     
    Man from Atlantis likes this.
  13. DegustatoR

    DegustatoR Veteran

  14. Xmas

    Xmas Porous Veteran Subscriber

  15. techuse

    techuse Veteran

    All Nvidia GPUs starting with Kepler have quad rate Int8 I believe.
     
  16. Xmas

    Xmas Porous Veteran Subscriber

    DP4A is not the same as supporting packed 8-bit integer math of some kind, it's a specific instruction doing 8x8->16bit multiplication and saturating accumulation into a 32-bit integer (supporting various combinations of signed and unsigned operands). Presumably DP4A and DP2A (counted as a single instruction, not a quad/double rate operation) run at the same rate as either 32-bit integer or float instructions.

    I can't find much reference to DP4A/DP2A being supported on anything other than Pascal GP102/104/106. On later chips it might run on tensor cores, it might run at float rate or at int32 rate, or it might be emulated via bit shifts/masks and int16 multiplications.


    https://developer.nvidia.com/blog/mixed-precision-programming-cuda-8/
    "For such applications, the latest Pascal GPUs (GP102, GP104, and GP106) introduce new 8-bit integer 4-element vector dot product (DP4A) and 16-bit 2-element vector dot product (DP2A) instructions."

    https://docs.nvidia.com/cuda/pascal-tuning-guide/index.html#int8
    "GP104 provides specialized instructions for two-way and four-way integer dot products. These are well suited for accelerating Deep Learning inference workloads. The __dp4a intrinsic computes a dot product of four 8-bit integers with accumulation into a 32-bit integer. Similarly, __dp2a performs a two-element dot product between two 16-bit integers in one vector, and two 8-bit integers in another with accumulation into a 32-bit integer. Both instructions offer a throughput equal to that of FP32 arithmetic."
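    For anyone who wants to sanity-check results against the docs quoted above, DP4A's semantics can be sketched in Python. This is a minimal emulation, not NVIDIA's implementation: it assumes the signed/signed variant of the instruction and a plain wrapping 32-bit accumulator.

    ```python
    def _sext8(x):
        """Interpret the low 8 bits of x as a signed byte."""
        return x - 256 if (x & 0xFF) >= 128 else (x & 0xFF)

    def dp4a(a, b, c):
        """Emulate DP4A: c + sum of the four per-byte products of a and b.

        a, b: 32-bit words holding four signed 8-bit lanes each.
        c:    signed 32-bit accumulator.
        Assumption: the result wraps to signed 32-bit two's complement.
        """
        acc = c
        for i in range(4):
            acc += _sext8(a >> (8 * i)) * _sext8(b >> (8 * i))
        # Fold back into signed 32-bit range.
        return ((acc + 0x80000000) & 0xFFFFFFFF) - 0x80000000
    ```

    E.g. `dp4a(0x01020304, 0x01010101, 10)` computes 1*1 + 2*1 + 3*1 + 4*1 + 10 = 20, and a 0xFF lane is treated as -1 in the signed variant.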
     
  17. DegustatoR

    DegustatoR Veteran

    AFAIK all GPUs starting with GP104 (so all desktop Pascal chips) support this on Nv's side.
    The rates though can differ depending on how said support is provided on chips with tensor cores.
    Would be cool if anyone would write a throughput benchmark now when both DX and VK support the feature.
     
    pharma and Man from Atlantis like this.
  18. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■) Moderator Legend Alpha

  19. Cyan

    Cyan orange Legend

    also read that it will be priced between 300€ and 500€, not bad for the medium tier.

    Also read that they are planning to launch a low-tier GPU, a market long forgotten by nVidia and AMD, meant to compete with the likes of the nVidia 1650 and priced at around 150-200€. Also not bad.
     
  20. Kaotik

    Kaotik Drunk Member Legend

    [attached image: upload_2021-9-14_7-44-58.png]
     