Intel ARC GPUs, Xe Architecture for dGPUs [2018-2022]

[Image: sqZgubF.jpg]

Is that card just a mock-up for marketing purposes, or is the GPU really that big? It looks f**kin huge. Or am I just behind the times and that's what size GPUs are these days?
That's almost assuredly Ponte Vecchio, not HPG.
 
Shite, the ALU allocation for matrix math is dumb here. The only GPU matrix workload is upscaling, and that's not even necessary, as stuff like UE5 and RE Engine show.

The only immediate application for ML in games beyond that is animation and maybe some traditional "AI" behavior, both almost certainly to be run on the CPU. Even the physics-guessing stuff and denoising aren't really workable in realtime, and that's after years and years of research. Good job Raj, once again obsessing over fancy "future tech" that's likely to matter little next to fundamental engineering decisions.
 
I think the comparison of FSR to FXAA, as if that were a "bad thing" and "it will quickly be forgotten", is really funny. Only one year after having its source code released to the public, FXAA was the most widely used form of AA in the whole market, due to how resource-efficient it was on 7th-gen console GPUs and games. And at the time it was considered a very important development for achieving better-looking games.
Even funnier is the fact that FXAA was developed by Timothy Lottes who, according to AMD, was also the "main implementer of FSR" while working there.

So yeah, let's all hope that an IHV-agnostic upscaling solution like FSR "fails" as much as FXAA did 10 years ago.
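For anyone wondering why FXAA was so cheap on 7th-gen hardware: it's essentially one post-process pass that reads a handful of neighbouring pixels, checks luma contrast, and blends across detected edges. A rough sketch of that core idea (my own toy in Python, not Lottes' actual shader; the threshold and blend rule are made-up placeholders):

```python
# Rough sketch of the luma edge-detect + blend idea behind FXAA-style
# post-process AA (not the real algorithm; threshold/blend are illustrative).

def luma(rgb):
    # Perceptual luma approximation commonly used by FXAA-like filters.
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def fxaa_like_pixel(img, x, y, contrast_threshold=0.1):
    """Return an anti-aliased colour for pixel (x, y) of a 2D grid of RGB tuples."""
    h, w = len(img), len(img[0])
    def clamp(v, lo, hi): return max(lo, min(hi, v))
    centre = img[y][x]
    # Fetch the 4-neighbourhood (clamped at the borders).
    n  = img[clamp(y - 1, 0, h - 1)][x]
    s  = img[clamp(y + 1, 0, h - 1)][x]
    e  = img[y][clamp(x + 1, 0, w - 1)]
    w_ = img[y][clamp(x - 1, 0, w - 1)]
    lumas = [luma(p) for p in (centre, n, s, e, w_)]
    contrast = max(lumas) - min(lumas)
    if contrast < contrast_threshold:
        return centre  # no visible edge: leave the pixel alone
    # On an edge, blend the centre toward the neighbourhood average.
    avg = [sum(c[i] for c in (n, s, e, w_)) / 4.0 for i in range(3)]
    return tuple(0.5 * centre[i] + 0.5 * avg[i] for i in range(3))

img = [[(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
       [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]]
print(fxaa_like_pixel(img, 0, 0))  # blended toward the bright neighbour
```

The real shader also searches along the edge direction and tunes the blend weights, but the point stands: it's a single cheap full-screen pass, whereas MSAA multiplies rasterisation and bandwidth cost.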



I'm pretty sure they could find a way to neuter SHA instruction throughput in the hardware. Even if some games may use it (GPU-based data decompression/decryption? Is that ever used on PC?), I doubt it would cause any meaningful performance impact.


And the LHR cards are only using a firmware + driver hack. It's just an afterthought of sorts.



My guess is FSR 1.0 will eventually be enabled through drivers, with FSR 2.x/3.x using linear + non-linear neural networks and probably needing more developer intervention.
Soooooo true. If I could, I'd erect a statue in your honour. :D FXAA triumphed in a world where MSAA was an FPS killer. Many years later and we are still there: new times, same situation.

FXAA might fall short in some games compared to MSAA, but compared to no AA it makes a world of difference. In addition, one of the best FXAA implementations ever, imho, was featured in Skyrim. I never saw a jaggie in that game, and I played it on an X360 at 720p.

History tends to repeat itself, but FSR is going to be a success 'cos, like XeSS, it's cross-platform and open source, so you can find several ways to implement it via mods, Proton, whatever.

On a different note, dunno if I fully understood the innards of the render slice, but it looks like every Xe-core has an RT unit accompanying it.
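For what it's worth, here's how I read the public block diagram; the counts below are my assumptions from the Architecture Day 2021 slides rather than confirmed specs:

```python
# My reading of the Xe HPG hierarchy from Intel's public material;
# all counts are assumptions from the slides, not confirmed numbers.
XE_CORE = {
    "vector_engines": 16,   # 256-bit XVEs (successor to the old "EUs")
    "matrix_engines": 16,   # 1024-bit XMX units for ML/matrix math
    "rt_units": 1,          # one ray tracing unit paired with each Xe-core
}
RENDER_SLICE = {
    "xe_cores": 4,          # so 4 RT units per render slice as well
    "fixed_function": "samplers and geometry hardware shared per slice",
}
TOP_ALCHEMIST_SOC = {
    "render_slices": 8,     # 8 x 4 = 32 Xe-cores on the biggest die
}
```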
 
Shite, the ALU allocation for matrix math is dumb here. The only GPU matrix workload is upscaling, and that's not even necessary, as stuff like UE5 and RE Engine show.

The only immediate application for ML in games beyond that is animation and maybe some traditional "AI" behavior, both almost certainly to be run on the CPU. Even the physics-guessing stuff and denoising aren't really workable in realtime, and that's after years and years of research. Good job Raj, once again obsessing over fancy "future tech" that's likely to matter little next to fundamental engineering decisions.
There are tons of ML-related techniques emerging to enhance video game graphics and real-time generation of surfaces.
https://fuentitech.com/nvidia-demon...raphics-technologies-at-siggraph-2021/190386/

ML used to augment GI, giving it more stable bounces and convergence. ML used to generate complex surfaces like fur, etc. It will just continue over time. ML is the new field for graphics, alongside RT.

ML will definitely move beyond just being used for image reconstruction as a post-process, and rather soon at that.
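To make the GI point a bit more concrete, here's a toy sketch (NumPy, my own illustration, not any shipping technique) of what "ML used to augment GI" tends to look like at runtime: a small MLP acting as a learned radiance cache that you query per shading point instead of tracing and averaging lots of extra bounce rays. The network shape and random weights are stand-ins.

```python
import numpy as np

# Toy learned radiance cache: a tiny MLP mapping (position, normal) -> RGB
# bounce radiance. Weights here are random stand-ins; in a real renderer
# they would be trained against path-traced samples.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((6, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.standard_normal((32, 3)) * 0.1, np.zeros(3)

def query_radiance_cache(position, normal):
    """Approximate incoming bounce radiance at a shading point."""
    x = np.concatenate([position, normal])   # 6 input features
    h = np.maximum(x @ W1 + b1, 0.0)         # ReLU hidden layer
    return np.maximum(h @ W2 + b2, 0.0)      # non-negative RGB estimate

# One cache query replaces tracing/averaging many secondary rays, which is
# where the "more stable bounces and convergence" claim comes from.
rgb = query_radiance_cache(np.array([0.2, 1.0, -3.0]), np.array([0.0, 1.0, 0.0]))
print(rgb)
```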
 
There are tons of ML-related techniques emerging to enhance video game graphics and real-time generation of surfaces.
https://fuentitech.com/nvidia-demon...raphics-technologies-at-siggraph-2021/190386/

ML used to augment GI, giving it more stable bounces and convergence. ML used to generate complex surfaces like fur, etc. It will just continue over time. ML is the new field for graphics, alongside RT.

ML will definitely move beyond just being used for image reconstruction as a post-process, and rather soon at that.
Yep. In the future it's not simply going to augment how the GPU generates images and effects at runtime, but also, perhaps even more importantly, how content is made and generated in the first place. AI and ML will touch every single facet of game development and real-time graphics. Development studios will simply not be able to generate the type of assets required, at the fidelity required, in the timeframe required for the state of the art without it. Animation routine models, AI pathing models, audio models, physics models, world states... etc., etc., the list goes on and on. Code that programmers couldn't possibly hope to write by themselves.

One only needs to look at the very early beginnings of these efforts with Microsoft Flight Simulator 2020 to see how far-reaching this will become in the future. Worlds and objects are generated and synthesized on the spot from databases of objects defined by DL NNs which contain all the information required: how they should look and function, as well as how they should physically behave within a set of parameters.

All this vast amount of data generated every day by humans is being crunched down and digested by supercomputers into networks of "intelligence" that can be used and refactored in ways we've not yet even imagined, waiting to be tapped into. It's going to get real interesting real quick, IMO.
 
There are tons of ML-related techniques emerging to enhance video game graphics and real-time generation of surfaces.
https://fuentitech.com/nvidia-demon...raphics-technologies-at-siggraph-2021/190386/

ML used to augment GI, giving it more stable bounces and convergence. ML used to generate complex surfaces like fur, etc. It will just continue over time. ML is the new field for graphics, alongside RT.

ML will definitely move beyond just being used for image reconstruction as a post-process, and rather soon at that.

I think his argument was that you don't need to design hardware for ML acceleration in a lot of cases to make use of ML in games. For offline applications like content generation, the argument in favour of HW acceleration for ML is virtually moot, since you can just throw a supercomputer at the work for the duration of the project; when the product is released, the computational cost is amortized among the customers. In this case the end user clearly doesn't need specific hardware with ML acceleration to reap the benefits of ML ...

For real-time applications such as animation, physics, or AI that use ML inferencing, HW acceleration isn't even close to being necessary in a lot of cases, since the models are often evaluated on the CPU. Again, most end users don't need ML-accelerated HW to reap most of the benefits ...

Gaming and graphics aren't compelling reasons to include HW acceleration for ML, and that wasn't the architects' intention. The basis for doing it is so data scientists can work with ML frameworks like PyTorch or TensorFlow. Nvidia weren't even thinking about using ML for gaming/graphics initially during their first implementation of tensor cores, which can't run DLSS. Right now the existence of ML hardware hinges on the whims of the data scientists, so the instant they no longer find it useful for what they want, the feature becomes grounds for removal ...
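On the "evaluated on the CPU" point: gameplay-side models (animation blending, steering and the like) are usually tiny, so a per-frame CPU evaluation barely dents the frame budget. A back-of-the-envelope sketch, with the layer sizes purely made up:

```python
import time
import numpy as np

# Hypothetical gameplay-side model: 64 inputs -> 128 hidden -> 16 outputs.
# At this size, a per-frame CPU evaluation is a rounding error in a 16.6 ms
# (60 fps) frame budget, so no dedicated ML hardware is required.
rng = np.random.default_rng(1)
W1 = rng.standard_normal((64, 128))
W2 = rng.standard_normal((128, 16))

def infer(features):
    return np.maximum(features @ W1, 0.0) @ W2   # ReLU MLP forward pass

features = rng.standard_normal(64)
start = time.perf_counter()
for _ in range(1000):
    infer(features)
total_s = time.perf_counter() - start
print(f"~{total_s / 1000 * 1e6:.1f} microseconds per inference")
```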
 
Nvidia weren't even thinking about using ML for gaming/graphics initially during their first implementation of tensor cores, which can't run DLSS.
The first implementation of Nvidia's tensor cores can and did run DLSS. What do you think they used to train it on?
From all the research surrounding ML applications you can clearly see that Nvidia, and just about the whole of the industry, is certainly thinking about ML for gaming and graphics.
Having h/w-assisted ML instructions isn't that much different in concept from having full-blown matrix multiplication units: both are ML h/w added to the chip to perform ML acceleration.
The question of die area is a moot one, since no gaming chip in existence right now is peaking at what's possible from that perspective. They are all 100% power limited.
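To put the "not that much different in concept" point in code terms: a DP4a-style instruction and a matrix engine both boil down to packed low-precision multiply-accumulate; the matrix unit just does a whole tile of them per step. A purely conceptual sketch (plain Python, not any vendor's ISA):

```python
# Conceptual sketch only: what a DP4a-style instruction and a small matrix
# engine each compute, written out in plain Python.

def dp4a(a4, b4, acc):
    """Dot product of two packed groups of four int8 values, accumulated
    into a 32-bit integer: the kind of op exposed as an ML instruction."""
    return acc + sum(x * y for x, y in zip(a4, b4))

def mma_4x4(A, B):
    """A 4x4 int8 matrix multiply-accumulate tile, i.e. what a dedicated
    matrix unit performs per step: 16 dp4a-sized dot products at once."""
    return [[dp4a(A[i], [B[k][j] for k in range(4)], 0) for j in range(4)]
            for i in range(4)]

A = [[1, 2, 3, 4]] * 4
B = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(dp4a([1, 2, 3, 4], [4, 3, 2, 1], 0))   # 20
print(mma_4x4(A, B))                          # A again, since B is identity
```

Exposing the former as instructions reuses the existing ALU datapaths, while the latter adds dedicated silicon, but the work being accelerated is the same.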
 
I have the feeling this statement is not going to age very well, but hey, I could be wrong. We shall see..

You know it's true, because image reconstruction is the only gaming/graphics application for HW-accelerated ML so far, after several years of going deep into the field. For end users, the majority of the benefits of ML aren't gated behind needing specific HW ...

It's amusing how the new competitor who decided to do the same for their upcoming product is copying the exact same idea as well. It's almost as if the industry can't conjure up any different utility ...

The first implementation of Nvidia's tensor cores can and did run DLSS. What do you think they used to train it on?
From all the research surrounding ML applications you can clearly see that Nvidia, and just about the whole of the industry, is certainly thinking about ML for gaming and graphics.
Having h/w-assisted ML instructions isn't that much different in concept from having full-blown matrix multiplication units: both are ML h/w added to the chip to perform ML acceleration.
The question of die area is a moot one, since no gaming chip in existence right now is peaking at what's possible from that perspective. They are all 100% power limited.

For all I care, they could train their models on just about anything, because that's an offline process, so HW acceleration doesn't really matter there outside of iteration times. Training is different from supporting it for real-time use, because you can take as much time as you want with an offline process ...
 
You know it's true, because image reconstruction is the only gaming/graphics application for HW-accelerated ML so far, after several years of going deep into the field. For end users, the majority of the benefits of ML aren't gated behind needing specific HW ...
The field might not be moving as fast as you wish, but saying that the only application for HW-accelerated ML is image reconstruction is factually wrong.

Just a recent example: https://www.nvidia.com/en-us/on-demand/session/gtcspring21-e31307/
and more here: https://wccftech.com/nvidia-demonst...other-graphics-technologies-at-siggraph-2021/

ML/DL is going to massively impact the way we render images in real-time and offline.
While the gfx pipeline hasn't changed much in the last 10 years (excluding the introduction of RT HW) the next 10 years are going to be quite different.
Exciting times ahead..
 
The field might not be moving as fast as you wish, but saying that the only application for HW-accelerated ML is image reconstruction is factually wrong.

Just a recent example: https://www.nvidia.com/en-us/on-demand/session/gtcspring21-e31307/
and more here: https://wccftech.com/nvidia-demonst...other-graphics-technologies-at-siggraph-2021/

ML/DL is going to massively impact the way we render images in real-time and offline.
While the gfx pipeline hasn't changed much in the last 10 years (excluding the introduction of RT HW) the next 10 years are going to be quite different.
Exciting times ahead..

It remains to be seen how favourably it'll compare to other solutions, and it hasn't been seen in any real application yet, so technically I'm not even wrong. Offline graphics applications aren't interesting, since using more hardware is an easy cheat, and high-end production rendering is mostly done on CPUs, which usually have no ML hardware acceleration to speak of. That just leaves real-time applications ...

What else is there out there? Denoising? The examples given thus far have simply been unimpressive and not transformative ...

I wonder how exciting it will remain if a competitor keeps duplicating the same concepts from others?
 
It's amusing how the new competitor who decided to do the same for their upcoming product is copying the exact same idea as well. It's almost as if the industry can't conjure up any different utility ...

Or maybe it's the same as every other ML application. Everybody is convinced of the "potential" of ML and doesn't want to be left out when there's the inevitable breakthrough. Nvidia's recent proposal on real-time training for caching bounce lighting is just one example. You can't innovate on software without the hardware to run it.
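For anyone who missed it, the "real-time training for caching bounce lighting" work roughly means nudging a small network every frame with a handful of freshly traced samples. A very hand-wavy sketch of what one per-frame update step could look like (my own toy linear model, not Nvidia's actual method):

```python
import numpy as np

# Toy per-frame update of a small radiance model against a few freshly
# path-traced training samples (my illustration, not the real technique).
rng = np.random.default_rng(2)
W = rng.standard_normal((6, 3)) * 0.01           # tiny linear "cache"

def train_step(W, inputs, traced_radiance, lr=1e-2):
    pred = inputs @ W                            # predict RGB radiance
    grad = inputs.T @ (pred - traced_radiance) / len(inputs)
    return W - lr * grad                         # one SGD step per frame

# Each frame: trace a small batch of ground-truth samples, nudge the model.
for frame in range(3):
    inputs = rng.standard_normal((128, 6))       # (position, normal) features
    traced = rng.standard_normal((128, 3))       # stand-in path-traced results
    W = train_step(W, inputs, traced)
```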

For all I care, they could train their models on just about anything, because that's an offline process, so HW acceleration doesn't really matter there outside of iteration times. Training is different from supporting it for real-time use, because you can take as much time as you want with an offline process ...

This is a really weird take. Why should there be faster hardware at all for offline processes if you can just "take as much time as you want"? Obviously this is not true in the real world.
 
You know it's true, because image reconstruction is the only gaming/graphics application for HW-accelerated ML so far, after several years of going deep into the field. For end users, the majority of the benefits of ML aren't gated behind needing specific HW ...

And yet DLSS/XeSS is the biggest jump in efficiency in a decade. And ML upscaling has surpassed every other upscaling technique in games within three years.
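The efficiency part is easy to put numbers on; assuming the commonly quoted 1440p-internal-to-4K-output case:

```python
# Back-of-the-envelope shading savings from ML reconstruction: render at a
# lower internal resolution, then reconstruct to the output resolution.
native_4k = 3840 * 2160            # pixels shaded without reconstruction
internal_1440p = 2560 * 1440       # typical "quality mode" internal resolution
print(native_4k / internal_1440p)  # -> 2.25x fewer pixels shaded per frame
```

The reconstruction pass has its own fixed cost, so the realised speed-up is smaller than 2.25x, but it is still a large net win over shading natively.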
 