Intel CEO confirms first dGPUs in 2020

Discussion in 'Graphics and Semiconductor Industry' started by Dayman1225, Jun 12, 2018.

  1. Malo

    Malo Yak Mechanicum
    Legend Veteran Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    7,029
    Likes Received:
    3,101
    Location:
    Pennsylvania
    Of course not, but it's likely all in the design phase. It's way too early for any hardware, IMO.
     
  2. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,373
    Likes Received:
    242
    Location:
    NY
    I actually disagree. I imagine Intel dGPU N is essentially done, design for GPU N+1 is probably wrapping up (at least the "big picture aspects") and GPU N+2 has at least been started. FWIW I'm betting it's based off their current iGPU architecture (some enhancements + the ability to scale up EUs).

    I think people greatly underestimate how early hardware design is "done". Very rarely do companies make "bad hardware", they make bad guesses about where the market will be. :wink:
     
    Silent_Buddha and BRiT like this.
  3. itsmydamnation

    Veteran Regular

    Joined:
    Apr 29, 2007
    Messages:
    1,298
    Likes Received:
    396
    Location:
    Australia
    I think releasing on 14nm is a bad idea. Just look at NV, with the new 104 being something like 500mm² and the 102 around 750mm². A 2019 Intel dGPU on 14nm is going to be slaughtered by both AMD and NV, and a 2020 one would be beyond a joke. But if Intel can't get Ice Lake out the door before H2 2020, do you really think they'll get a 10nm GPU out the door?

    Here's a question for you: what are your expectations for an Intel GPU? Die size, process, performance, etc.

    I suppose Intel could go use a working foundry 7nm process.......... :runaway:
     
    DavidGraham likes this.
  4. cheapchips

    Regular Newcomer

    Joined:
    Feb 23, 2013
    Messages:
    702
    Likes Received:
    425
    Don't Intel have a different ruler to everyone else, making it not entirely straightforward to compare processes?
     
  5. itsmydamnation

    Veteran Regular

    Joined:
    Apr 29, 2007
    Messages:
    1,298
    Likes Received:
    396
    Location:
    Australia
    4 years ago, sure. But now the reality is their 10nm will likely be worse than TSMC's 7nm, and will arrive over a year later.
     
  6. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,173
    Likes Received:
    576
    Location:
    France

    Not really... We'll see, but when you look at the capability of their 14nm++ (or whatever it's called), I'm not too worried about Intel vs. the others' 7nm.

    On the subject, I liked this exchange with David Kanter from Real World Tech :

     
    cheapchips likes this.
  7. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    I actually disagree ... though it depends on what you mean by “essentially done” and “design”.

    If Intel needs at least 18 months to go from a chip being “essentially done” to going to market, they have a really major problem with execution.

    Even 12 months would be kind of pathetic, though I’d give them a pass on that if it’s a brand new architecture.

    And before everybody jumps with snarky comments about execution and their 10nm CPUs: that’s different and kind of irrelevant. First, because the comment I’m replying to is a more general statement and not really Intel related. And second because high end CPUs are much harder to get to market than a GPU. The latter may be “the most complex chips in the world” according to some CEOs, but they are really not. Or only if you use transistors as a measure of complexity.
     
  8. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,373
    Likes Received:
    242
    Location:
    NY
    To be clear I mean design, not production.

    When I interviewed at [mobile gpu company], they had just released [some gpu] the week before. I congratulated them on the launch and found it very interesting that [some gpu + 1] was already about complete, design-wise. They mentioned work had even started on [some gpu + 2]. [Some gpu] had just launched the week before! So by the time we (the consumers) get our hands on a new GPU, it's already old news. :p
     
    Silent_Buddha, Kej, Lightman and 2 others like this.
  9. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
    I know. But I still don’t know what you mean by “design”.

    Does that mean some grand ideas on a slide? The general architecture? Does it mean RTL complete but not yet fully verified? Does it mean frozen netlist? Or does it mean layout complete and ready for tape-out?

    The delta between the first and last milestone can easily be 2 years, but IMO “design complete” is at the very least RTL complete. I’ve seen substantial diving-catch new features added to a chip *after* RTL complete.

    The mobile part is the big clue there, since it’s only a smaller part of a larger SOC that will probably run a full OS. This chip will be marketed to customers who very often wait to decide until they’ve seen working silicon. Eventually the chip will end up in a complex system (a mobile phone) that will go through endless iterations of all kinds of certs.

    An Intel discrete GPU won’t have to go through most of that.
     
    #69 silent_guy, Aug 19, 2018
    Last edited: Aug 19, 2018
  10. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Wouldn't there be more than one scenario, so it's possible you are both right?
    Pascal and Volta would fall under willardjuice's concept, while Turing, as an example, would align more with silent_guy's.
    The Volta Tensor Cores were an R&D design input decision made quite early in the Volta phase (according to a CUDA engineer involved in such input), even while Pascal was still in design; there is a large degree of synergy between Pascal and Volta from a Tesla/Quadro perspective, and this can be seen from design through to production deployment.
    Turing stands out as a more separate product entity.

    Just using those as examples, as to me they fit both of your points.
     
  11. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    Everyone has the same rulers, but no one uses them. Process names are not related to physical properties of the associated transistors, but determined by marketing considerations.

    That being said, most foundries have, for comparably named processes, comparable feature sizes—except for Intel.
     
  12. willardjuice

    willardjuice super willyjuice
    Moderator Veteran Alpha Subscriber

    Joined:
    May 14, 2005
    Messages:
    1,373
    Likes Received:
    242
    Location:
    NY
    Yes, you are correct. I don't mean the RTL is ready to be shipped to the fab, but rather that it's already too late for Intel dGPU N+1 to make radical/large changes. The context of the discussion I was responding to was "Maybe it was a mistake that Nvidia started the ray tracing race early. Now Intel knows their goals and can react." My point was that not only do I think it's too late for Intel to make changes to dGPU N based on whatever Nvidia showed, but it's also too late to make meaningful changes to N + 1. If Intel was not planning on adding non-trivial RT hardware to their GPUs (not saying this is the case, I'm sure RT in DX was not a surprise to them...), I don't think they could "scramble" and add it to N or N + 1 (other than some small tweaks that might "help", but not in a revolutionary way).
     
    pharma, Silent_Buddha and BRiT like this.
  13. Dayman1225

    Newcomer

    Joined:
    Sep 9, 2017
    Messages:
    57
    Likes Received:
    78
  14. Dayman1225

    Newcomer

    Joined:
    Sep 9, 2017
    Messages:
    57
    Likes Received:
    78
  15. snarfbot

    Regular Newcomer

    Joined:
    Apr 23, 2007
    Messages:
    574
    Likes Received:
    188
    So Tom's has revealed some info from a recent driver that shows codenames.

    https://www.tomshardware.com/news/intel-dg1-dg2-discrete-graphics-xe-gpus-rocket-lake,40029.html

    They're speculating that the 128, 256, and 512 would indicate EU counts, putting these GPUs at 1024, 2048, and 4096 shaders respectively. A scaled-up Iris 580 would have 4096 shaders, 512 TMUs, and 64 ROPs... which seems kind of imbalanced, but who knows. Very powerful!

    Looking forward to seeing it out in the wild, pricing and such. Very exciting!
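    The arithmetic behind those speculated shader counts is just the EU figure times the lanes per EU. A quick sketch, assuming (as the speculation does) that 128/256/512 are EU counts and that each EU carries 8 ALU lanes ("shaders"), as on existing Intel Gen iGPUs:

    ```python
    # Back-of-the-envelope shader counts for the rumored DG1/DG2 variants.
    # Assumption: each EU packs 8 FP32 ALU lanes (2x SIMD-4), matching
    # current Intel Gen iGPUs; the 128/256/512 figures are EU counts.
    ALUS_PER_EU = 8

    for eus in (128, 256, 512):
        print(f"{eus} EUs -> {eus * ALUS_PER_EU} shaders")
    ```

    Which reproduces the 1024/2048/4096 figures in the article. The same ratio is why a 72-EU Iris 580 is usually quoted as 576 shaders.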
     
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.