Intel ARC GPUs, Xe Architecture for dGPUs

Discussion in 'Architecture and Products' started by DavidGraham, Dec 12, 2018.

  1. trinibwoy

    trinibwoy Meh Legend

    BRiT likes this.
  2. Seanspeed

    Seanspeed Newcomer

    That's almost assuredly Ponte Vecchio, not HPG.
     
  3. Frenetic Pony

    Frenetic Pony Regular

    Shite, the ALU allocation for matrix math is dumb here. The only GPU matrix workload is upscaling, and even that isn't necessary, as engines like UE5 and RE Engine show.

    The only other immediate application for ML in games is animation and maybe some traditional "AI" behaviour, both almost certainly to be run on the CPU. Even the physics-approximation stuff and denoising aren't really workable in real time, and that's after years and years of research. Good job Raj, once again obsessing over fancy "future tech" rather than fundamental engineering decisions.
     
  4. Wait what? Why are there RT units in Ponte Vecchio?

     
  5. Bondrewd

    Bondrewd Veteran

    Someone was too lazy to gut them out I guess.
     
  6. Could they be planning a super-expensive halo GPU for consumers?
    Though perhaps it's too hot and power-hungry to fit in an ATX case..
     
  7. Cyan

    Cyan orange Legend

    Soooooo true. If I could, I'd erect a statue in your honour. :D FXAA triumphed in a world where MSAA was an fps killer. Many years later we're still there: new times, same situation.

    FXAA might fall short of MSAA in some games, but compared to no AA it makes a world of difference. In addition, one of the best FXAA implementations ever, imho, was in Skyrim. I never saw a jaggie in that game, and I played it on an X360 at 720p.

    History tends to repeat itself, but FSR is going to be a success because, like XeSS, it's cross-platform and open source, so you can find several ways to implement it via mods, Proton, whatever.

    On a different note, I'm not sure I've fully understood the innards of the render slice, but it looks like every Xe-Core has an RT unit accompanying it.
     
  8. Esrever

    Esrever Regular

    Or maybe they want to do some extreme RT gaming on Aurora.
     
  9. trinibwoy

    trinibwoy Meh Legend

    No mention of rasterizers or ROPs. Maybe there are HPC use cases that would benefit from casting rays at stuff.
     
  10. Dictator

    Dictator Regular

    There are tons of ML-related techniques emerging to enhance video game graphics and real-time generation of surfaces.
    https://fuentitech.com/nvidia-demon...raphics-technologies-at-siggraph-2021/190386/

    ML used to augment GI, giving it more stable bounces and convergence. ML used to generate complex surfaces like fur, etc. It will just continue over time. ML is the new field for graphics alongside RT.

    ML will definitely move beyond being used just for image reconstruction as a post-process, and rather soon at that.
     
    tinokun, Jay, DavidGraham and 5 others like this.
  11. Remij

    Remij Regular

    Yep. In the future it's not simply going to augment how the GPU generates images and effects at runtime, but also, perhaps even more importantly, how content is made in the first place. AI and ML will touch every facet of game development and real-time graphics. Development studios simply won't be able to generate the assets required, at the fidelity required, in the timeframe required for the state of the art without it. Animation models, AI pathing models, audio models, physics models, world states... etc., etc. The list goes on and on. Code that programmers couldn't possibly hope to write by themselves.

    One only needs to look at the very early beginnings of these efforts in Microsoft Flight Simulator 2020 to see how far-reaching this will become. Worlds and objects generated and synthesized on the spot from databases of objects defined by deep neural networks that contain all the information required: how they should look and function, as well as how they should physically behave within a set of parameters.

    All this vast amount of data generated by humans every day is being crunched down and digested by supercomputers into networks of "intelligence" that can be used and refactored in ways we've not yet imagined, waiting to be tapped into. It's going to get real interesting real quick, IMO.
     
    tinokun and PSman1700 like this.
  12. Lurkmass

    Lurkmass Regular

    I think his argument was that in a lot of cases you don't need to design hardware for ML acceleration to make use of ML in games. For offline applications like content generation, the argument in favour of HW acceleration for ML is virtually moot, since you can just throw a supercomputer at the work for the duration of the project; when the product ships, the computational cost has been amortized across the customers. In this case the end user clearly doesn't need specific ML-acceleration hardware to reap the benefits of ML ...

    For real-time applications such as animation, physics, or AI that use ML inferencing, HW acceleration isn't even close to being necessary in a lot of cases, since the models are often small enough to be evaluated on the CPU. Again, most end users don't need ML-accelerated HW to reap most of the benefits ...
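    The point about CPU-side inferencing can be illustrated with a minimal sketch: a tiny two-layer MLP evaluated in plain NumPy. The layer sizes and the "animation model" framing are hypothetical, but networks of this scale are the kind of model a game could evaluate per frame on the CPU with no ML hardware at all.

    ```python
    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    # Hypothetical tiny animation-style MLP: 32 inputs -> 64 hidden -> 16 outputs.
    # A model this small evaluates in microseconds on a CPU; no tensor HW needed.
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((32, 64)).astype(np.float32)
    b1 = np.zeros(64, dtype=np.float32)
    w2 = rng.standard_normal((64, 16)).astype(np.float32)
    b2 = np.zeros(16, dtype=np.float32)

    def infer(x):
        # Two dense layers; this is the entire per-frame inference cost.
        return relu(x @ w1 + b1) @ w2 + b2

    out = infer(rng.standard_normal(32).astype(np.float32))
    print(out.shape)  # (16,)
    ```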

    Gaming and graphics aren't compelling reasons to include HW acceleration for ML, and they weren't the intention of the architects. The reason they're doing it is so data scientists can work with ML frameworks like PyTorch or TensorFlow. Nvidia weren't even thinking about using ML for gaming/graphics during their first implementation of tensor cores, which can't run DLSS. Right now the existence of ML hardware hinges on the whims of the data scientists, so the moment they no longer find it useful for what they want, the feature becomes grounds for removal ...
     
    milk likes this.
  13. nAo

    nAo Nutella Nutellae Veteran

    I have the feeling this statement is not going to age very well, but hey, I could be wrong. We shall see..
     
  14. DegustatoR

    DegustatoR Veteran

    The first implementation of Nvidia tensor cores can and did run DLSS. What do you think they used to train it on?
    From all the research surrounding ML applications you can clearly see that Nvidia, and just about the whole of the industry, is certainly thinking about ML for gaming and graphics.
    Having h/w-assisted ML instructions isn't that much different in concept from having full-blown matrix multiplication units: both are ML h/w added to the chip to accelerate ML.
    The question of die area is moot, since no gaming chip in existence right now is peaking at what's possible from this perspective. They are all 100% power limited.
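    Whether it's dedicated instructions or full matrix units, the underlying operation being accelerated is the same fused multiply-accumulate over a small tile: D = A·B + C with low-precision inputs and higher-precision accumulation. A NumPy sketch of that numeric contract (the 16×16 tile size is illustrative, not any specific vendor's MMA shape):

    ```python
    import numpy as np

    # Sketch of the mixed-precision matrix-multiply-accumulate contract:
    # D = A @ B + C, with FP16 inputs and FP32 accumulation over a tile.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((16, 16)).astype(np.float16)
    B = rng.standard_normal((16, 16)).astype(np.float16)
    C = np.zeros((16, 16), dtype=np.float32)

    # Widen to FP32 before accumulating, as matrix HW does internally,
    # to avoid losing precision across the summation.
    D = A.astype(np.float32) @ B.astype(np.float32) + C
    print(D.dtype, D.shape)  # float32 (16, 16)
    ```

    The design point is that one such tile op replaces hundreds of scalar FMA instructions, which is where the efficiency of matrix h/w comes from regardless of how it's exposed.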
     
    tinokun, pharma, DavidGraham and 2 others like this.
  15. Lurkmass

    Lurkmass Regular

    You know it's true, because image reconstruction is the only gaming/graphics application for HW-accelerated ML so far, after several years of going deep into the field. For end users, the majority of ML's benefits aren't gated behind specific HW ...

    It's amusing that the new competitor has decided to copy the exact same idea for their upcoming product. It's almost as if the industry can't conjure up any different utility ...

    For all I care, they could train their models on just about anything, because training is an offline process, so HW acceleration doesn't really matter there outside of iteration times. Training is different from supporting real-time use, because an offline process can take as much time as you want ...
     
    milk likes this.
  16. TopSpoiler

    TopSpoiler Newcomer

    Try to find the slides titled "Deep Learning: The Future of Real-Time Rendering?", presented by Marco Salvi from Nvidia. It was in 2017.
     
    tinokun, PSman1700 and nAo like this.
  17. nAo

    nAo Nutella Nutellae Veteran

    The field might not be moving as fast as you wish, but saying that the only application for HW-accelerated ML is image reconstruction is factually wrong.

    Just a recent example: https://www.nvidia.com/en-us/on-demand/session/gtcspring21-e31307/
    and more here: https://wccftech.com/nvidia-demonst...other-graphics-technologies-at-siggraph-2021/

    ML/DL is going to massively impact the way we render images in real-time and offline.
    While the gfx pipeline hasn't changed much in the last 10 years (excluding the introduction of RT HW), the next 10 years are going to be quite different.
    Exciting times ahead..
     
    tinokun, pjbliverpool, Cyan and 6 others like this.
  18. Lurkmass

    Lurkmass Regular

    It remains to be seen how it will compare favourably against other solutions, and it has yet to appear in any real application, so technically I'm not even wrong. Offline graphics applications aren't interesting, since throwing more hardware at them is an easy cheat, and high-end production rendering is mostly done on CPUs, which usually have no ML hardware acceleration to speak of. That leaves only real-time applications ...

    What else is out there? Denoising? The examples given thus far have simply been unimpressive, not transformative ...

    I wonder how exciting it will remain if a competitor keeps duplicating the same concepts from others?
     
  19. trinibwoy

    trinibwoy Meh Legend

    Or maybe it's the same as every other ML application: everybody is convinced of the "potential" of ML, and nobody wants to be left out when the inevitable breakthrough comes. Nvidia's recent proposal on real-time training for caching bounced lighting is just one example. You can't innovate on software without the hardware to run it.

    This is a really weird take. Why would there be faster hardware at all for offline processes if you can just "take as much time as you want"? Obviously that's not true in the real world.
     
  20. troyan

    troyan Regular

    And yet DLSS/XeSS is the biggest jump in efficiency in a decade. And ML upscaling has surpassed every other upscaling technique in games within three years.
     
    Cyan, pharma, xpea and 1 other person like this.