Intel ARC GPUs, Xe Architecture for dGPUs [2018-2022]

The degree to which Raja may or may not have been responsible for this, however, I do not know. The fact that (to my knowledge) no quantitative claims were made about Navi prior to its launch suggests that at least some lessons were learned.
Raja was the lead of Radeon Technologies Group during that period. So he was fully responsible for that situation.
 
Anyway, it's nice to see such a major shift in the industry. A few years ago some people were arguing that dGPUs were going to die; now we have the largest semiconductor company in the world (Intel) making huge dGPUs and betting big on them to drive its future growth, to the point that it's downplaying CPUs in favor of GPUs and FPGAs. We've come full circle!

I'm trying to destroy the thinking about having 90% share inside our company, because I think it limits our thinking. I think we miss technology transitions, we miss opportunities, because we're in some ways preoccupied with protecting 90 instead of seeing a much bigger market with much more innovation going on, both inside our four walls and outside our four walls. So we come to work in the morning with a 30% share of all silicon, with every expectation over the next several years that we will play a larger and larger role in our customers' success. And that doesn't just mean CPUs.

It means GPUs, it means AI, it does mean FPGAs, it means bringing these technologies together so we're solving customers' problems. So we're looking at a company with roughly 30% share of a $288 billion silicon TAM, not a CPU TAM but a silicon TAM.

https://wccftech.com/intel-ceo-beyond-cpu-7nm-more/
 
Raja was the lead of Radeon Technologies Group during that period. So he was fully responsible for that situation.

Ultimately, yes, but what I mean is that I don't know whether he was responsible for the decision to put so much pressure on engineering teams, whether he knew said engineers weren't telling the truth and just ran with it or whether he was fooled, whether he tried to set more realistic targets and/or budgets and was overruled, and if so, how much he fought, etc. I only have one person's perspective on what happened, and I don't know what role he played exactly—or at all, actually.
 
Raja was the lead of Radeon Technologies Group during that period. So he was fully responsible for that situation.
There was a LOT of infighting at AMD during that time, much of it between the CPU division and RTG. Ryzen's success was a huge driver of internal politicking, and I think Threadripper threw everyone for a loop, even at AMD; they weren't sure how to deal with it for a while.

I haven't heard all the details, or any from Raja himself, but I know RTG was being controlled more by the bean counters than by Raja after a while, and I'm pretty sure that had a large part to do with the whole RTG exodus.

It's not all on Raja, he had bosses too. I had a smoke and a good conversation with one at Siggraph in 2016 (I think; give or take a year because I'm foggy), and he seemed really cool, but then again I didn't work for him.

RTG wasn't independent of AMD; it wasn't like Raja had total say. AMD held the purse strings and could (and did) steal talent from his group at its discretion. (All my own conjecture based on a myriad of things.)

He had a ton of responsibility, but nowhere near the authority to carry it out. He was fighting with both arms tied behind his back at some points. I don't think it ended very pretty. :(

That being said, I haven't kept up on where the Intel GPU is at, except for the hirings and firings there, and those have been pretty damned interesting, especially of late! AdoredTV released a new video today that I still haven't fully digested, but it's looking like we're in for a wild ride! I mean seriously, don't the terms "real world vs synthetic" benchmarks just take you back to the days of FX? I get warm fuzzies. :D
 
Anyway, it's nice to see such a major shift in the industry. A few years ago some people were arguing that dGPUs were going to die; now we have the largest semiconductor company in the world (Intel) making huge dGPUs and betting big on them to drive its future growth, to the point that it's downplaying CPUs in favor of GPUs and FPGAs. We've come full circle!

https://wccftech.com/intel-ceo-beyond-cpu-7nm-more/
What bothers me a bit with posts like this and the "gaming tech web" in general is the idea that Intel will be targeting gaming with their dGPUs, whereas everything I've heard out of them speaks of general compute.

I can't for the life of me believe that Intel could make a competitive gaming GPU, at least not without subsidising it heavily. And their $10 billion mobile debacle should have made them a bit cautious when it comes to buying their way into markets. The gaming dGPU market is not exactly promising huge revenue for the future either; at least buying their way into mobile made some sense.

No, Intel needs graphics for their CPUs, and they need stronger parallel compute for certain server tasks, and they can combine these to some extent. But going after gamers? Nah. I haven't heard anything in their PR that says this is their goal. Which makes perfect sense. That impression is created elsewhere.
 
Even worse for consoles, as margins are even lower there.
Of course.
Gaming revenue expansion is in mobile, not PC or console, where revenue stays largely constant while player numbers are dropping (essentially, fewer people paying more).

If you are going to invest to break into a market it had better be an expanding one, otherwise the best you can hope for is a slice of the pie with depressed margins all around. (You want a market where you can help shape future development and directions, thus steering the future revenue your way more exclusively, improving margins.)

Investing in data center computation makes sense for Intel. Creating dGPUs for gamers to play Red Dead Redemption a bit more cheaply doesn't.
 
I think going after gamers would be the cherry on the cake: if their architecture does well in games too, then why not go there...
It's a big if. But it could be a way to fight AMD on another front.
 
This really doesn't look like a GPU destined for the data center.
[Image: Intel GPU boss Raja Koduri teases the "mother of all GPUs", Xe]
 
He had a ton of responsibility, but nowhere near the authority to carry it out. He was fighting with both arms tied behind his back at some points. I don't think it ended very pretty. :(

I believe that's the recipe for burning people out. And that situation is sort of what I gleaned during and after Vega's launch. Which is why I give Raja the benefit of the doubt generally. Anyone in those positions, to be honest. I just hope he finds the work on Xe fulfilling and restorative.

Moving to the present. I haven't had time to check the AdoredTV video yet. Has anyone gleaned what the rumoured reasons behind the rumoured performance deficits are?
 
Just don't plug any speakers into those data center servers. You'd be amazed at what sometimes makes it into production code, like playing an audio file every time an exception is thrown.
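For the curious, something like the following (purely hypothetical) helper is all it takes; the aplay call and the .wav path are made up for this sketch, not taken from any real codebase.

```cpp
#include <cstdlib>
#include <stdexcept>
#include <string>

// Hypothetical illustration of the anecdote above: a throw helper that plays
// a sound before raising the exception. The command and path are invented.
[[noreturn]] void throw_with_fanfare(const std::string& msg) {
    // Fire-and-forget: the trailing '&' keeps the throw from blocking on audio.
    std::system("aplay /opt/app/sounds/sad_trombone.wav &");
    throw std::runtime_error(msg);
}

int main() {
    try {
        throw_with_fanfare("database connection lost");
    } catch (const std::exception&) {
        // Normal error handling continues; the server room just got noisier.
    }
    return 0;
}
```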
 
You'd be amazed at what sometimes makes it into production code, like playing an audio file every time an exception is thrown.
The first thing my BIOS seems to do is run a lightweight libJpg to display the logo, with shiny compression artifacts :O

This really doesn't look like a GPU destined for the data center.
I think all those designs were fanart.
 
BTW, what really confuses me are those x86 rumors: https://www.techpowerup.com/261125/7nm-intel-xe-gpus-codenamed-ponte-vecchio
Other sites have already confirmed this, but I guess they are misinformed. Likely they mean C++ more than x86? Otherwise I'd be reminded of Larrabee and think this is no real GPU at all, or not related to the expected Xe dGPU.

I read that as meaning that the architecture is capable of running x86 code, not that it is essentially an x86 design. So far as I know, most (all?) x86 CPUs these days are more akin to RISC cores with an x86 translation layer on top anyway (an extremely simplified explanation...). So for Intel to develop an architecture, and a compatibility layer in hardware or software, to maximise execution efficiency while still remaining, in essence, a traditional GPU wouldn't be entirely beyond the realm of feasibility. But it sounds dubious to me to let x86 compatibility in any way shape GPU design, at least if it in any way threatened to introduce inefficiencies within the GPU realm of things.
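To make the "translation layer" idea concrete, here's a deliberately over-simplified C++ toy of a front end that cracks x86-flavoured instructions into internal RISC-like micro-ops. The opcode values are loosely borrowed from real one-byte x86 encodings, but the micro-op names and splits are invented purely for illustration; this says nothing about how Intel's actual decoders, let alone Xe, work.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Internal "RISC-like" operations the execution core understands.
enum class UopKind { LOAD, ADD, STORE };

struct MicroOp {
    UopKind kind;
    int dst;  // destination register / temp (toy numbering)
    int src;  // source register / temp (toy numbering)
};

// Toy front end: one architectural instruction may expand into several micro-ops.
std::vector<MicroOp> decode_x86_like(std::uint8_t opcode) {
    switch (opcode) {
    case 0x03:  // "add reg, [mem]"-style: cracked into a load plus an add
        return { {UopKind::LOAD, 1, 0}, {UopKind::ADD, 2, 1} };
    case 0x89:  // "mov [mem], reg"-style: a single store
        return { {UopKind::STORE, 0, 2} };
    default:    // everything else is unsupported in this toy
        return {};
    }
}

int main() {
    for (std::uint8_t opcode : { std::uint8_t{0x03}, std::uint8_t{0x89} }) {
        const auto uops = decode_x86_like(opcode);
        std::printf("opcode 0x%02X -> %zu micro-op(s)\n", unsigned(opcode), uops.size());
    }
    return 0;
}
```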
 
Yeah, a translation layer in every core does sound inefficient and wasteful. It already did for Larrabee. What's the point? You need to rewrite code for GPUs anyway; x86 won't help here.
But it may be a marketing stunt to compete with CUDA, and I hope this does not end up in gaming GPUs - if true at all.
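For what it's worth, here's a minimal sketch of what "rewriting for the GPU" tends to look like in practice, using a SYCL-style kernel (the model Intel's oneAPI builds on) purely as a generic example. The vector-add workload and every name in it are my own illustration, nothing confirmed about Xe; the point is just that the work has to be re-expressed as a data-parallel kernel regardless of what ISA the device front end speaks.

```cpp
#include <sycl/sycl.hpp>
#include <cstdio>
#include <vector>

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // default selector: may pick a GPU if one is available
    {
        // Buffers manage host<->device data movement for the vectors.
        sycl::buffer bufA(a), bufB(b), bufC(c);

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only);

            // The per-element kernel body replaces the CPU-side for loop.
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }  // buffers go out of scope here, copying results back to the vectors

    std::printf("c[0] = %f\n", c[0]);  // expected: 3.0
    return 0;
}
```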
 