AMD and Samsung Announce Strategic Partnership in Mobile IP

Discussion in 'Mobile Graphics Architectures and IP' started by del42sa, Jun 3, 2019.

  1. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,998
    Likes Received:
    4,571
    Especially if it never materialized into an end-user product after 7 years of development.
     
  2. Nebuchadnezzar

    Legend

    Joined:
    Feb 10, 2002
    Messages:
    974
    Likes Received:
    141
    Location:
    Luxembourg
    More on this later.
    Keep in mind the first 5 years were just R&D; only after 2017 did they put it into silicon. They had 10nm and 7nm test chips, and performance looked alright, albeit nothing special.
     
  3. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,062
    Likes Received:
    1,021
    I have to admit, what is quite opaque to me is just what Samsung gets out of this deal.
    What can AMD bring to the table that ARM/Qualcomm/et al. cannot in this market segment?
    Why would AMD's RTG provide better IP for ultra-low-power GPUs than AMD's own former low-power GPU group, which was sold to Qualcomm, kept working on the problem, and has lived a successful and relatively well-funded life since?
    Nvidia, even though they've thrown silicon at the problem with their Tegras, don't really impress with their efficiency in the mobile space at the Maxwell/Pascal tech level (which is quite competitive with anything out of AMD in the desktop space in terms of power efficiency). Intel...well...(*cough*). So what exactly can AMD's RTG bring to the table that would provide a decisive advantage over players who already have long experience designing for mobile? Is it simply mostly about dodging patent litigation?
     
    Picao84 likes this.
  4. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,998
    Likes Received:
    4,571
    For starters, Samsung doesn't want to depend on Qualcomm as much as they do, which is part of the reason they have kept funding S.LSI over the years. Same thing with Huawei and HiSilicon, AFAIK.

    So in this context, AMD's first advantage is that they're not Qualcomm.
    As for ARM, the logical conclusion is that their Mali GPUs haven't kept up with Adreno in performance or efficiency.


    I guess (and hope) the latest Adreno 6xx GPUs have little to nothing in common with the ~12 year-old AMD Z430 / Adreno 200 that was sold to Qualcomm back in 2009.
    Just like RDNA has very little in common with the DX10 Terascale 1 GPUs of that time.
    Both architectures have evolved in parallel and should be very different at the moment.

    Both RTG and nVidia have very close relationships with game and application developers, and they both offer development optimization tools for their GPU architectures.
    Switch and Tegra/Shield-optimized AAA games seem to show that if Android is ever going to step up its game on decent ports from PC and consoles, most devs will need these tools. Otherwise they're stuck with 6th-gen (PS2, Xbox) era-looking games, despite the higher-end SoCs being more powerful than the PS360.

    And then I'd guess that for Samsung the difference between AMD and nVidia is that nVidia might offer a higher-performing architecture at iso-power and die area, and has better tools, but should be considerably more expensive and less permissive about how much the customer can customize the tech (they probably try to put in as many black boxes as possible).
     
    #64 ToTTenTranz, Jun 6, 2019
    Last edited: Jun 6, 2019
    Picao84 likes this.
  5. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,062
    Likes Received:
    1,021
    You have a point, but the optimum for Samsung is to hold their own IP, not shift IP providers. We'll see how the newest Mali performs; I hope Nebuchadnezzar will graciously provide us with data eventually.

    And the one that has evolved to provide optimum power/performance/area characteristics for mobile applications is...?

    I think you overstate this. Unity/Unreal Engine and the like are probably more important when it comes to the look of the games (and some are far beyond PS2 level).
    The main issue with mobile game graphics quality is that, similar to how PC games typically need to run on a run-of-the-mill Intel laptop, they are developed to a lowest common denominator, which unfortunately tends to be a three-year-old cheap phone.

    No, I still can't see (apart from the patent angle) the reasoning here. But maybe that is enough.
     
    #65 Entropy, Jun 6, 2019
    Last edited: Jun 6, 2019
  6. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,998
    Likes Received:
    4,571
    It's the one that Samsung can't implement in their own SoCs. None of this would be happening if Qualcomm had Adreno IP for sale.

    Not the look of the games, but rather their performance. They should also have the capability to bring down clocks and power consumption once sufficient performance is reached, plus features similar to AMD's Chill.
    Games like Hearthstone and Clash of Clans shouldn't be battery killers.
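    The general idea is easy to sketch. Below is a minimal, purely illustrative frame-rate-targeted governor, assuming hypothetical driver hooks (read_frame_time_ms, set_gpu_clock_mhz) and made-up clock limits; it is not a description of how Chill is actually implemented.

    ```python
    # Purely illustrative sketch of a frame-rate-targeted GPU governor, NOT
    # AMD Chill's actual implementation. The driver hooks and clock range
    # below are hypothetical.

    TARGET_FRAME_MS = 1000.0 / 60.0      # aim for 60 fps
    MIN_CLK_MHZ, MAX_CLK_MHZ = 300, 900  # made-up clock limits
    STEP_MHZ = 50                        # adjustment per decision

    def govern(read_frame_time_ms, set_gpu_clock_mhz, frames=10_000):
        clk = MAX_CLK_MHZ
        for _ in range(frames):
            frame_ms = read_frame_time_ms()          # duration of the last frame
            if frame_ms < 0.8 * TARGET_FRAME_MS and clk > MIN_CLK_MHZ:
                clk -= STEP_MHZ                      # headroom to spare: save power
            elif frame_ms > TARGET_FRAME_MS and clk < MAX_CLK_MHZ:
                clk += STEP_MHZ                      # missing the target: clock back up
            set_gpu_clock_mhz(clk)
    ```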
     
  7. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    10,873
    Likes Received:
    767
    Location:
    London
    10 years more IP is a hell of a lot...

    Maybe. AMD owns approximately half of all graphics IP in theory.
     
  8. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,062
    Likes Received:
    1,021
    Well, it is in plain writing: "As part of the partnership, Samsung will license AMD graphics IP and will focus on advanced graphics technologies and solutions that are critical for enhancing innovation across mobile applications including smartphones."
    AMD is not actually going to do anything in particular according to the press release. Their bullet point looks like this:
    • Samsung will pay AMD technology license fees and royalties.
    So if I have the order of things correct: Samsung, like Qualcomm/Apple/et cetera, figured that it would be a good idea to be in control of their own destiny when it came to the GPU of their mobile SoCs. However, it seems they couldn't quite get it to the point where it was sufficiently competitive in terms of Power/Performance/Area, meaning that using their own design would presumably have been a competitive liability compared to those who simply licensed from ARM. Also, presumably, during the course of their work they had to work around some patents and identified issues with their design where having access to some key IP would help out. Hence licensing IP from AMD. AMD, on their part, picks up license fees and royalties from products in segments where they weren't competing anyway. Pure win, essentially. Samsung is going to do all the work.

    Why does this matter at all? Well, it sets the stage for an interesting article from Nebuchadnezzar. :) Once the fruits of this show up in SoCs, not only will it be interesting to compare it to other mobile graphics solutions, it will also be interesting to make a straight comparison to AMD's PC space offerings: similarities, differences, a discussion about designing for the mobile space vs. the PC. I doubt it will be a "small Navi", but then, that and the discussion around it would be the juicy stuff.
     
    snarfbot likes this.
  9. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,420
    Likes Received:
    179
    Location:
    Chania
    Actually, some sort of Navi grandchild makes more sense than anything else. AMD's GPU architectures so far weren't designed for anything as low-power as ULP SoCs, and GPU designers there always cut quite a few corners compared to desktop solutions. I find it hard to believe that any of AMD's designs to date are low-power and small enough in die area to compete on comparable terms with a recent Adreno. The million-dollar question is rather whether we're talking about pure GPU IP here, because if so, it's not really in AMD's best business interest either. The way IHVs like AMD or NV are structured, IP development costs are way too high compared to the ridiculously low royalty amounts the IP provider gets per unit sold.

    I don't have insight into how much ARM charges for its Mali GPU IP; however, for IMG/Apple the newest high-end GPU IP was somewhere around $1 per unit, and IMG's average royalty per IP core sold was, in their heyday, less than a third of that.
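    To put those numbers in perspective, a back-of-envelope calculation; the royalty rate is the upper bound of the average figure above, while the development budget is a hypothetical round number, not a real AMD or IMG figure:

    ```python
    # Back-of-envelope only. The ~$1/unit top-end royalty and "less than a
    # third of that" average come from the post above; the development budget
    # is a hypothetical round figure.

    avg_royalty_per_unit = 1.0 / 3     # $ per unit shipped (optimistic average)
    architecture_dev_cost = 300e6      # hypothetical cost of one GPU IP generation

    units_to_recoup = architecture_dev_cost / avg_royalty_per_unit
    print(f"{units_to_recoup / 1e9:.1f} billion units to recoup the R&D")  # ~0.9
    ```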

    IMHO AMD came up with a formula, with something like Navi (or even one architecture beyond?), for an architecture that is scalable from ULP mobile all the way to high-end HPC GPUs.

    The most interesting comparison of the future will be: Apple's own GPU vs. Qualcomm Adreno vs. Samsung with AMD GPU IP.

    ***edit: just reading it https://www.anandtech.com/show/14492/samsung-amds-gpu-licensing-an-interesting-collaboration

    ******edit 2: ok the theory of some sort of joint development makes far more sense and answers most of the questions.
     
    #69 Ailuros, Jun 12, 2019
    Last edited: Jun 12, 2019
    AlBran likes this.
  10. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    When I first saw this announcement, Samsung's internal GPU effort was what came to my mind as well. If the narrative that there were patent concerns with Nvidia has merit, it seems interesting that a product that hadn't been rolled out would be spiked before becoming publicly visible. The Apple/IMG spat didn't seem to enter lawsuit territory until Apple declared that it had created supposedly non-infringing IP and wouldn't be paying IMG anymore.
    Getting preemptive notice that Nvidia could be a problem could mean that Nvidia was already involved or made aware of internal details. If it were just a matter of finding someone else to license from, there might have to be something more concretely Nvidia-specific that another cross-license couldn't salvage (perhaps a claim of willful use of IP, staff transfer, or a failed undisclosed collaboration with Nvidia).

    I do recall some patents AMD has filed to the effect of creating customized CUs with most of the SIMDs removed, or with whole instruction types and their CU subsections stripped out, which may point to a desire to make the CU flexible as a template.
    It's tough to compare mobile graphics to the implementations AMD has, which can operate at a steady-state power consumption two orders of magnitude above where some of Samsung's devices might go.
    I'm curious to find out what elements Samsung would really want to leverage, and what it means to be RDNA-based in this context--just the CUs, or some of the surrounding SoC?
    Another random thought is that perhaps AMD's offer is flexible enough for creating GPU compute elements that can interface with a heterogeneous memory and execution environment. I thought there was mention that Samsung's custom ARM cores did not (or could not?) use ARM's coherent interconnect, and perhaps there are elements of Samsung's proprietary system that it'd like to embed in its graphics architecture that an ARM-derived GPU would not provide the leeway for.
     
    AlBran likes this.
  11. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,420
    Likes Received:
    179
    Location:
    Chania
    Meh, as others have said many times, every idea worth a damn has been patented at least twice. And I'm not really sure that NV's patent lawsuit attempts were actually directly related to Samsung already working on its own GPU design; it would be absurd in any case, because without final hardware in NV's hands to investigate, they wouldn't have had a case. On the flip side of things, if (mark IF in block letters) Apple's own GPU design, which will appear in future Axx SoCs at some point, infringes any of IMG's patents, it will be interesting to see whether IMG, or rather Canyon Bridge, will file any suit against Apple in the future. Without a released GPU to investigate, there's no case until then.

    Other than that, it's my layman's understanding that one of the major differences between desktop and ULP mobile GPU designs is things like the TMU-to-ALU dependency, which seems to be somewhat common ground in the latter, as one example. It's also still my understanding that things like SIMD lanes are, compared to other elements in a GPU, among the cheapest.
     
  12. itsmydamnation

    Veteran Regular

    Joined:
    Apr 29, 2007
    Messages:
    1,298
    Likes Received:
    396
    Location:
    Australia
    There was an update in the Q2 call: AMD are doing actual work for Samsung right now, somewhere to the tune of $100 million a quarter.
     
  13. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    From the transcript I read, the $100 million was for 2019. Some in Q2, but most split between Q3 and Q4.

    $100 million per quarter would be noteworthy in that some old estimates (circa 2013 AMD Seamicro statements: https://www.semiaccurate.com/2013/05/15/amds-andrew-feldman-talks-about-arm/) for designing an x86 server processor put chip design cost in the $400 million range over roughly 3 years.
    The time frame seemed a bit compressed for a full architecture, but may have excluded parts of the later stages of validation, ramp, and market rollout that went into the ~5 year rule of thumb.
    Putting aside the likely increase in costs since then, the quarterly burn rate for a server chip would have been a third of the Samsung revenue if it were all pulled in during a single quarter.

    Spreading the revenue over three quarters makes it seem more in line with one or more chip projects, and unlike an internal chip design, AMD would presumably expect some profit from this revenue instead of it translating directly into design cost. The size of the project, given that, might still be significant, although a deal with a large IP component can pull in more money as margin rather than as a reflection of the project's actual costs (the latter number may be a better proxy for project size/complexity). Perhaps seeing whether this income is sustained over time would give hints as to what sort of legs this project has.
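    For reference, the rough arithmetic behind the "a third" remark, using only the figures quoted above ($400 million over roughly three years for a server chip, $100 million of Samsung-related revenue in 2019):

    ```python
    # Rough arithmetic only, reproducing the comparison above.
    server_chip_cost = 400e6           # 2013-era estimate for an x86 server chip
    quarters = 3 * 4                   # roughly three years of design work
    burn_per_quarter = server_chip_cost / quarters   # ~ $33M per quarter

    samsung_revenue_2019 = 100e6       # full-year figure from the Q2 call
    print(burn_per_quarter / samsung_revenue_2019)   # ~ 0.33, i.e. about a third
    ```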
     
    w0lfram and Lodix like this.