Intel ARC GPUs, Xe Architecture for dGPUs

Discussion in 'Architecture and Products' started by DavidGraham, Dec 12, 2018.

  1. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
    RGB Cloud

     
  2. pcchen

    pcchen Moderator
    Moderator Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,018
    Likes Received:
    581
    Location:
    Taiwan
    I think it's probably just a misunderstanding. Intel is promoting "oneAPI" to unify workloads across CPU, GPU, and FPGA (the underlying programming language is C++). There's no need for the Xe GPU to support x86 under this arrangement.
     
  3. Leovinus

    Newcomer

    Joined:
    May 31, 2019
    Messages:
    142
    Likes Received:
    73
    Location:
    Sweden
    Just thinking out loud here, considering there is so little we know about the architecture. But does Intel possess any GPU patents that hint at what they hope to bring to the table? Considering both AMD and Nvidia have a volumetric fuck-ton of patents Intel wouldn't want to step on, I'm just assuming they'll look into their own goodie-bag in parallel with developing new patents.

    Both Raja Koduri and Tom Petersen can at the very least make "educated guesses" as to where their former employers are heading without breaking NDA, I'd wager. So they'll know what to look for to be competitive. For example, we know Xe will probably be capable of ray tracing. There AMD has its texture-cache BVH patent while Nvidia has its RT core solution. Does Intel have anything on the subject?

    (Sometimes I wish there'd be a rumour wikipedia where all those tasty tidbits are collected instead of strewn out across message boards, blogs, and tech news sites.)
     
  4. BRiT

    BRiT (>• •)>⌐■-■ (⌐■-■)
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    20,502
    Likes Received:
    24,397
    There's always AGP texturing, but I don't know if they have a patent for it. :runaway:
     
  5. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,400
    Likes Received:
    1,845
    Location:
    France
    If Xe is crap for gaming, I just hope, in one last effort, they buy Img Tech and make a biiiig Series A based GPU :eek: I want my PowerVR back.
     
    digitalwanderer likes this.
  6. Leovinus

    Newcomer

    Joined:
    May 31, 2019
    Messages:
    142
    Likes Received:
    73
    Location:
    Sweden
    Why buy ImgTech when they have the glorious ghost of Larrabee for their tiled rendering needs? All hail Larrabee!
     
  7. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,987
    Likes Received:
    3,529
    Location:
    Winfield, IN USA
    Are you trying to put me out of a job?!?!


    ;)
     
    Leovinus likes this.
  8. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,400
    Likes Received:
    1,845
    Location:
    France
    Tiled rendering yes, but not tiled deferred rendering. Don't compare the two, "it's totally inappropriate. It's lewd, lascivious, salacious, outrageous!"© :grin:

    More seriously, do we know if they started from scratch (well, as close as possible in this area), or started from their iGPU tech?
     
    Leovinus likes this.
  9. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,462
    Location:
    Finland
    They have included elements of previous Gen-iGPU tech
    https://www.anandtech.com/show/1513...an-interview-with-intels-raja-koduri-about-xe
     
    Leovinus and Rootax like this.
  10. Leovinus

    Newcomer

    Joined:
    May 31, 2019
    Messages:
    142
    Likes Received:
    73
    Location:
    Sweden
    I wouldn't dream of it!

    Hmm, considering the time constraints that's probably a prudent decision. Intel's Gen architecture is a known and solid quantity. But I've never heard it described as performant, so I started to look it up. ExtremeTech had an interesting, if brief, overview of the differences between Gen 9 and Gen 11. The most important changes seem to be A) a general beefing-up of both execution units and bandwidth, and B) the introduction of their POSH (Position Only Shading) pipeline, which runs a position-only pass to cull geometry early and improve memory bandwidth efficiency. Intel's own Gen11 architecture presentation further presents Coarse Pixel Shading (a variant of variable rate shading, from what I can tell) as an important part.

    It does seem that they're focusing heavily on tile-based rendering, at least in part, with a much improved L3 cache to facilitate rapid memory reads and writes.

    All in all it seems like a solid starting point for Xe really. I wonder how they'll adapt it for ray tracing though.
     
    digitalwanderer and BRiT like this.
  11. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
    New overarching discussion on Anandtech: https://www.anandtech.com/show/1518...rete-xe-hpc-graphics-disclosure-ponte-vecchio

    The big reveal here: there are SIMT and SIMD units. I guess that's how they'll be doing raytracing, just straight-up multiple SIMD units? Seems flexible at least, as in, "whatever AMD and Nvidia are doing, and thus what gamedevs are going to do, we can do too". Well, maybe? I dunno.

    Also interesting for any number of other reasons, but details are close to none; they're just starting to drill down from high-level goals on a journey towards specifics. Though it also tells us Intel is interested in HPC, a bit obvious with the exascale contract, but there it is. Oh, and 8 stacks of HBM? Geeze, though it seems to be a weird multi-die setup or something. Edit - ok, finished reading, derr. They also mention they've got their own coding language, "Data Parallel C++", but I don't know what it's like. AFAIK the reason everyone likes CUDA is it's just so nice and clean, unlike the monster C++ has slowly become, so, I dunno there either.
     
    #71 Frenetic Pony, Dec 25, 2019
    Last edited: Dec 25, 2019
    pharma, JoeJ and DavidGraham like this.
  12. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
    Wow, this reads as a complete and ambitious vision of future hardware and software. Much bigger than just dGPUs for games. Exciting.
     
  13. Rootax

    Veteran

    Joined:
    Jan 2, 2006
    Messages:
    2,400
    Likes Received:
    1,845
    Location:
    France
    Larrabee sounded ambitious too at the time. I'll wait & see. Plus we don't know if Intel's production nodes (I mean 14/10/7nm/...) will allow the birth of the planned products, or if they will have to dial back some stuff.
     
    milk and xpea like this.
  14. rSkip

    Newcomer

    Joined:
    Jan 10, 2012
    Messages:
    18
    Likes Received:
    35
    Location:
    Shanghai
    IMO, "SIMT and SIMD units" = variable vector width (SIMD8 / SIMD16 / SIMD32). It gives you (or the compiler) an option to trade TLP <-> DLP. Gen Graphics has had this for years. It's similar to the choice of wave32 / wave64 modes on AMD RDNA.
    Intel might add more modes to the existing 8/16/32 for Gen12.

    Also, I would expect Intel to put a hardware ray-tracing block in the subslice, sharing the texture cache (or L1 cache) with the TMUs.
     
  15. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
    I'm unsure they mean it this way.
    Could it be the SIMT unit sits on top of the SIMD unit, meaning you could process, say, a vec4 as a single SIMD data element, with a single instruction?
    So you could divide your SIMT unit into 8/16/32... threads like now, which then operate on 1/2/4/8...-wide vector data elements as needed?
    Though that sounds like overkill. I did not understand this point very well.
     
  16. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
    Don't really know either. From the initial impression of variable width, it sounded something like AMD's ability to do double-rate fp16 / quad-rate int8 and their new "dual compute unit / work group processor" thing.

    But then Cutress started talking about their SIMT units basically being regular GPU threadgroup processors, and their SIMD units being more flexible CPU-like units. The graphic below seems to support something like that, showing SIMT as full baseline performance and their added "SIMD" units enhancing performance under some workloads; I'm assuming the enhanced ones are capable of GPU/CPU hybrid work. It's an odd route to go down, and smacks a little of not getting over the "Mill" series, but maybe it'll pay off, assuming that's correct at all.

    [image: Intel slide comparing SIMT baseline performance with SIMT + SIMD units under various workloads]
     
    digitalwanderer likes this.
  17. JoeJ

    Veteran

    Joined:
    Apr 1, 2018
    Messages:
    1,523
    Likes Received:
    1,772
    Makes sense, although what you say is more the other way around. But yes, I also had the impression of SIMD being more similar to CPUs.
    Not sure if this finds its way into consumer products at all. The complexity could give them a hard start, but also maybe the option to catch up faster and beat the competition later.

    I hope it works out and they also have some positive influence on software / game APIs.
     
  18. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,462
    Location:
    Finland
    What's really curious is the fact that supposedly the DG1 discrete part in fact has 96 EUs, just like the integrated GPU on Tiger Lake. Also, why do the latter two links clearly indicate it's a damn CPU (6+2, 8+2) even though it's supposed to be "Discrete Graphics 96EU DG1"?
    https://portal.eaeunion.org/sites/o...8&ListId=d84d16d7-2cc9-4cff-a13b-530f96889dbc
    https://portal.eaeunion.org/sites/o...0&ListId=d84d16d7-2cc9-4cff-a13b-530f96889dbc
    https://portal.eaeunion.org/sites/o...4&ListId=d84d16d7-2cc9-4cff-a13b-530f96889dbc
     
  19. tuna

    Veteran

    Joined:
    Mar 10, 2002
    Messages:
    3,550
    Likes Received:
    589
    How is CUDA better than modern C++? Also, if you use CUDA you are locked in to Nvidia HW, which I do not think is something Intel desires...
     
  20. Frenetic Pony

    Regular

    Joined:
    Nov 12, 2011
    Messages:
    807
    Likes Received:
    478
    EDIT - I regret everything; never get into a discussion about the merits of various programming languages, it will never end :runaway:
     
    #80 Frenetic Pony, Dec 27, 2019
    Last edited: Dec 29, 2019
    vjPiedPiper likes this.