Intel CEO confirms first dGPUs in 2020

So you really think they went out and said "We will build a dGPU", made a snazzy promo video for it, announced it at a conference, created some hype, and then, after all that was said and done, they called up their engineers and said "ok guys, so now you have to start thinking about how we are gonna make this."
Of course not but it's likely all in design phase. It's way too early for any hardware IMO.
 
Of course not but it's likely all in design phase. It's way too early for any hardware IMO.

I actually disagree. I imagine Intel dGPU N is essentially done, design for GPU N+1 is probably wrapping up (at least the "big picture aspects") and GPU N+2 has at least been started. FWIW I'm betting it's based off their current iGPU architecture (some enhancements + the ability to scale up EUs).

I think people greatly underestimate how early hardware design is "done". Very rarely do companies make "bad hardware", they make bad guesses about where the market will be. ;)
 
So you really think they went out and said "We will build a dGPU", made a snazzy promo video for it, announced it at a conference, created some hype, and then, after all that was said and done, they called up their engineers and said "ok guys, so now you have to start thinking about how we are gonna make this."

I think releasing on 14nm is a bad idea; just look at NV, with the new 104 being something like 500 mm² and the 102 around 750 mm². A 2019 Intel dGPU on 14nm is going to be slaughtered by both AMD and NV, and a 2020 one would be beyond a joke. But if Intel can't get Ice Lake out the door before H2 2020, do you really think they will get a 10nm GPU out the door?
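The die-size argument above can be put on a back-of-envelope basis: given a transistor budget and a rough logic density for each process, estimated die area is just transistors divided by density. A minimal sketch; the density figures are rough public estimates used purely for illustration, not official Intel or TSMC numbers:

```python
# Back-of-envelope: die area ~= transistor budget / process logic density.
# Density figures below are rough public estimates (MTr/mm^2); they are
# assumptions for illustration, not official Intel/TSMC numbers.
DENSITY_MTR_PER_MM2 = {
    "Intel 14nm": 37.5,  # assumed
    "TSMC 7nm": 91.2,    # assumed
}

def die_area_mm2(transistors_millions: float, node: str) -> float:
    """Estimated die area in mm^2 for a given transistor budget."""
    return transistors_millions / DENSITY_MTR_PER_MM2[node]

# A hypothetical ~13,600 MTr GPU (roughly a Turing-class transistor budget):
for node in DENSITY_MTR_PER_MM2:
    print(f"{node}: ~{die_area_mm2(13_600, node):.0f} mm^2")
```

Under these assumed densities, the same transistor budget comes out well over twice as large on 14nm, which is the crux of the "slaughtered on die size" concern.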

Here's a question for you: what are your expectations of an Intel GPU? Die size, process, performance, etc.

I suppose Intel could go use a working foundry 7nm process... :runaway:
 
4 years ago, sure. But now the reality is their 10nm will likely be worse than TSMC's 7nm, and will arrive over a year later.


Not really... We'll see, but when you see the capability of their 14nm++ (or whatever it's called), I'm not worrying too much about Intel vs others' 7nm.

On the subject, I liked this exchange with David Kanter from Real World Tech:

 
I actually disagree. I imagine Intel dGPU N is essentially done, design for GPU N+1 is probably wrapping up (at least the "big picture aspects") and GPU N+2 has at least been started.
I actually disagree ... though it depends on what you mean by “essentially done” and “design”.

If Intel needs at least 18 months to go from a chip being “essentially done” to going to market, they have a really major problem with execution.

Even 12 months would be kind of pathetic, though I’d give them a pass on that if it’s a brand new architecture.

And before everybody jumps with snarky comments about execution and their 10nm CPUs: that’s different and kind of irrelevant. First, because the comment I’m replying to is a more general statement and not really Intel related. And second because high end CPUs are much harder to get to market than a GPU. The latter may be “the most complex chips in the world” according to some CEOs, but they are really not. Or only if you use transistors as a measure of complexity.
 
To be clear I mean design, not production.

When I interviewed at [mobile gpu company], they had just released [some gpu] the week before. I congratulated them on the launch and found it very interesting that [some gpu + 1] was already about complete, design-wise. They mentioned work had even started on [some gpu + 2]. [Some gpu] had just launched the week before! So by the time we (the consumers) get our hands on a new GPU, it's already old news. :p
 
To be clear I mean design, not production.
I know. But I still don’t know what you mean by “design”.

Does that mean some grand ideas on a slide? The general architecture? Does it mean RTL complete but not yet fully verified? Does it mean frozen netlist? Or does it mean layout complete and ready for tape-out?

The delta between the first and last milestone can easily be 2 years, but IMO “design complete” is at the very least RTL complete. I’ve seen substantial diving-catch new features added to a chip *after* RTL complete.

When I interviewed at [mobile gpu company], they had just released [some gpu] the week before.
The mobile part is the big clue there, since it’s only a smaller part of a larger SOC that will probably run a full OS. This chip will be marketed to customers who very often wait to decide until they’ve seen working silicon. Eventually the chip will end up in a complex system (a mobile phone) that will go through endless iterations of all kinds of certs.

An Intel discrete GPU won’t have to go through most of that.
 
Wouldn't there be more than one scenario, so it is possible you are both right?
Pascal and Volta would fall under willardjuice's concept, while Turing, as an example, would align more with silent_guy's.
Volta's Tensor Cores were an R&D design-input decision made quite early in the Volta phase (according to a CUDA engineer involved in that input), even while Pascal was still in design; there is a large degree of synergy between Pascal and Volta from a Tesla/Quadro perspective, and this can be seen from design all the way to production deployment.
Turing stands out as a more separate product entity.

Just using those as an example as to me they fit both your points.
 
Don't Intel have a different ruler to everyone else, making it not entirely straightforward to compare processes?

Everyone has the same rulers, but no one uses them. Process names are not related to physical properties of the associated transistors, but determined by marketing considerations.

That being said, most foundries have, for comparably named processes, comparable feature sizes—except for Intel.
 
I know. But I still don’t know what you mean by “design”.

Does that mean some grand ideas on a slide? The general architecture? Does it mean RTL complete but not yet fully verified? Does it mean frozen netlist? Or does it mean layout complete and ready for tape-out?

The delta between the first and last milestone can easily be 2 years, but IMO “design complete” is at the very least RTL complete. I’ve seen substantial diving-catch new features added to a chip *after* RTL complete.


The mobile part is the big clue there, since it’s only a smaller part of a larger SOC that will probably run a full OS. This chip will be marketed to customers who very often wait to decide until they’ve seen working silicon. Eventually the chip will end up in a complex system (a mobile phone) that will go through endless iterations of all kinds of certs.

An Intel discrete GPU won’t have to go through most of that.

Yes, you are correct: I don't mean the RTL is ready to be shipped to the fab, but rather that it's already too late for Intel dGPU N+1 to make radical/large changes. The context of the discussion I was responding to was "Maybe it was a mistake that Nvidia started the ray tracing race early. Now Intel knows their goals and can react." My point was that not only do I think it's too late for Intel to make changes to dGPU N based on whatever Nvidia showed, but it's also too late to make meaningful changes to N+1. If Intel was not planning on adding non-trivial RT hardware to their GPUs (not saying this is the case; I'm sure RT in DX was not a surprise to them...), I don't think they could "scramble" and add it to N or N+1 (other than some small tweaks that might "help", but not in a revolutionary way).
 
Intel Iris Gallium3D Forming As Their Future OpenGL Driver, Promising Early Results

Looks like Intel is planning to fully replace their 'classic' i965 driver for Linux with a new ground-up driver built on top of Gallium.

Interesting results so far.
 
So Tom's has revealed some info from a recent driver that shows codenames.

https://www.tomshardware.com/news/intel-dg1-dg2-discrete-graphics-xe-gpus-rocket-lake,40029.html

They're speculating that the 128, 256, and 512 would indicate EU counts, putting these GPUs at 1024, 2048, and 4096 shaders respectively. A scaled-up Iris 580 would have 4096 shaders, 512 TMUs, and 64 ROPs... which seems kinda imbalanced, but who knows. Very powerful!
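That EU-to-shader arithmetic can be sketched in a couple of lines, assuming the Gen-architecture convention of 8 FP32 lanes ("shaders") per EU; whether Xe keeps that per-EU lane count is the one assumption here:

```python
# Each Intel Gen EU exposes 8 FP32 lanes (two 4-wide SIMD ALUs), which is
# what the "shader count" speculation multiplies by.
LANES_PER_EU = 8  # Gen-architecture convention; assumed to carry over to Xe

def shaders_from_eus(eu_count: int) -> int:
    """FP32 'shader' count implied by a given EU count."""
    return eu_count * LANES_PER_EU

for eus in (128, 256, 512):
    print(f"{eus} EUs -> {shaders_from_eus(eus)} shaders")
# 128 EUs -> 1024 shaders
# 256 EUs -> 2048 shaders
# 512 EUs -> 4096 shaders
```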

Looking forward to it out in the wild, pricing and such, very exciting!
 