AMD and Samsung Announce Strategic Partnership in Mobile IP

Sounds like an oxymoron to me. If something is "that" much better, how can it not be a flop?
Especially if it never materialized into an end-user product after 7 years of development.
 
Sounds like an oxymoron to me. If something is "that" much better, how can it not be a flop?
More on this later.
Especially if it never materialized into an end-user product after 7 years of development.
Keep in mind the first 5 years were just R&D; only after 2017 did they put it into silicon. They had 10nm and 7nm test chips, and performance looked alright, albeit nothing special.
 
More on this later.
Keep in mind the first 5 years were just R&D; only after 2017 did they put it into silicon. They had 10nm and 7nm test chips, and performance looked alright, albeit nothing special.
I have to admit, what is quite opaque to me is just what Samsung gets out of this deal.
What can AMD bring to the table that ARM/Qualcomm/et al. cannot in this market segment?
Why would the AMD RTG provide better IP for ultra-low-power GPUs than AMD's own low-power GPU group, which they sold to Qualcomm, which kept working on the problem, and which has lived a successful and relatively well-funded life since?
Nvidia, even though they've thrown silicon at the problem with their Tegras, don't particularly impress with their efficiency in the mobile space at the Maxwell/Pascal tech level (which is quite competitive with anything out of AMD in the desktop space in terms of power efficiency). Intel... well... (*cough*). So what exactly can AMD's RTG bring to the table that would provide a decisive advantage over players who have had long experience designing for mobile already? Is it simply mostly about dodging patent litigation?
 
I have to admit, what is quite opaque to me is just what Samsung gets out of this deal.
What can AMD bring to the table that ARM/Qualcomm/et al. cannot in this market segment?
For starters, Samsung doesn't want to depend on Qualcomm as much as they do, which is part of the reason why they have kept funding S.LSI throughout the years. Same thing with Huawei and HiSilicon, AFAIK.

So in this context, AMD's first advantage is they're not Qualcomm.
As for ARM, the logical conclusion should be that their Mali GPUs haven't kept up with Adreno in performance or efficiency.


Why would the AMD RTG provide better IP for ultra-low-power GPUs than AMD's own low-power GPU group, which they sold to Qualcomm, which kept working on the problem, and which has lived a successful and relatively well-funded life since?
I guess (and hope) the latest Adreno 6xx GPUs have little to nothing in common with the ~12 year-old AMD Z430 / Adreno 200 that was sold to Qualcomm back in 2009.
Just like RDNA has very little in common with the DX10 Terascale 1 GPUs of that time.
Both architectures have evolved in parallel and should be very different at the moment.

So what exactly can AMD's RTG bring to the table that would provide a decisive advantage over players who have had long experience designing for mobile already?
Both RTG and nVidia have very close relationships with game and application developers, and they both offer development optimization tools for their GPU architectures.
Switch and Tegra/Shield-optimized AAA games seem to have shown that if Android is ever going to step up its game on decent ports from PC and consoles, most devs need these tools. Otherwise they're stuck with 6th-gen (PS2, Xbox) era-looking games, despite the higher-end SoCs being more powerful than the PS360.

And then I'd guess for Samsung the difference between AMD and nVidia is that nVidia might offer a higher-performing architecture at iso-power and die area and has better tools, but should be considerably more expensive and less permissive about how much the customer can customize their tech (they probably try to put as many black boxes in it as possible).
 
For starters, Samsung doesn't want to depend on Qualcomm as much as they do, which is part of the reason why they have kept funding S.LSI throughout the years. Same thing with Huawei and HiSilicon, AFAIK.

So in this context, AMD's first advantage is they're not Qualcomm.
As for ARM, the logical conclusion should be that their Mali GPUs haven't kept up with Adreno in performance or efficiency.
You have a point, but the optimum for Samsung is to hold their own IP, not shift IP provider. We'll see how the newest Mali performs; I hope Nebuchadnessar will graciously provide us with data eventually.

I guess (and hope) the latest Adreno 6xx GPUs have little to nothing in common with the ~12 year-old AMD Z430 / Adreno 200 that was sold to Qualcomm back in 2009.
Just like RDNA has very little in common with the DX10 Terascale 1 GPUs of that time.
Both architectures have evolved in parallel and should be very different at the moment.
And the one that has evolved to provide optimum power/performance/area characteristics for mobile applications is...?

Both RTG and nVidia have very close relationships with game and application developers, and they both offer development optimization tools for their GPU architectures.
Switch and Tegra/Shield-optimized AAA games seem to have shown that if Android is ever going to step up its game on decent ports from PC and consoles, most devs need these tools. Otherwise they're stuck with 6th-gen (PS2, Xbox) era-looking games.
I think you overstate this. Unity/Unreal Engine and so on are probably more important when it comes to the look of the games (and some are far beyond PS2 level).
The main issue with mobile game graphics quality is that, similar to how PC games typically need to run on a run-of-the-mill Intel laptop, they are developed to a lowest common denominator, which unfortunately tends to be a three-year-old cheap phone.

No, I still can't see (apart from the patent angle) the reasoning here. But maybe that is enough.
 
And the one that has evolved to provide optimum power/performance/area characteristics for mobile applications is...?
It's the one that Samsung can't implement in their own SoCs. None of this would be happening if Qualcomm had Adreno IP for sale.

I think you overstate this. Unity/Unreal Engine and so on are probably more important when it comes to the look of the games (and some are far beyond PS2 level).
Not the look of the games, but rather performance. They should also have the capability of bringing down the clocks and power consumption when sufficient performance is reached, and stuff similar to AMD's Chill.
Games like Hearthstone and Clash of Clans shouldn't be battery killers.
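
For what it's worth, here's a rough sketch of the kind of frame-rate-aware clock governor I have in mind, in the spirit of AMD's Chill. The clock states, thresholds and function names are made up for illustration, not taken from any actual driver:

```python
# Hypothetical sketch of a frame-rate-aware GPU clock governor, in the spirit
# of features like AMD Chill: if the target frame rate is comfortably met,
# step the clock down to save power; if it's missed, step back up.
# Clock states and thresholds are illustrative, not any vendor's actual values.

GPU_CLOCKS_MHZ = [305, 400, 500, 600, 700]  # available DVFS states (made up)

def next_clock_index(current_idx: int, frame_time_ms: float,
                     target_frame_time_ms: float = 16.7,
                     slack: float = 0.2) -> int:
    """Pick the next DVFS state based on the measured frame time."""
    if frame_time_ms < target_frame_time_ms * (1.0 - slack):
        # Plenty of headroom (e.g. a card-game menu): drop one clock step.
        return max(current_idx - 1, 0)
    if frame_time_ms > target_frame_time_ms:
        # Missing the target: raise the clock if possible.
        return min(current_idx + 1, len(GPU_CLOCKS_MHZ) - 1)
    return current_idx  # close enough, hold steady

# Example: a 9 ms frame rendered at 700 MHz suggests we can downclock.
idx = len(GPU_CLOCKS_MHZ) - 1
idx = next_clock_index(idx, frame_time_ms=9.0)
print(GPU_CLOCKS_MHZ[idx], "MHz")  # -> 600 MHz
```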
 
What can AMD bring to the table that ARM/Qualcomm/et al. cannot in this market segment?
Why would the AMD RTG provide better IP for ultra-low-power GPUs than AMD's own low-power GPU group, which they sold to Qualcomm, which kept working on the problem, and which has lived a successful and relatively well-funded life since?
10 years more IP is a hell of a lot...

Nvidia, even though they've thrown silicon at the problem with their Tegras, don't particularly impress with their efficiency in the mobile space at the Maxwell/Pascal tech level (which is quite competitive with anything out of AMD in the desktop space in terms of power efficiency). Intel... well... (*cough*). So what exactly can AMD's RTG bring to the table that would provide a decisive advantage over players who have had long experience designing for mobile already? Is it simply mostly about dodging patent litigation?
Maybe. AMD owns approximately half of all graphics IP in theory.
 
Maybe. AMD owns approximately half of all graphics IP in theory.
Well, it is in plain writing: "As part of the partnership, Samsung will license AMD graphics IP and will focus on advanced graphics technologies and solutions that are critical for enhancing innovation across mobile applications including smartphones."
AMD is not actually going to do anything in particular according to the press release. Their bullet point looks like this:
  • Samsung will pay AMD technology license fees and royalties.
So if I have the order of things correct, Samsung, like Qualcomm/Apple/et cetera, figured that it would be a good idea to be in control of their own destiny when it came to the GPU of their mobile SoCs. However, it seems like they couldn't quite get it to the point where it was sufficiently competitive in terms of power/performance/area, meaning that using their own design would presumably be a potential competitive liability compared to those that simply licensed from ARM. Also presumably, during the course of their work, they had to work around some patents, and they also identified issues with their design where having access to some key IP would help out. Hence licensing IP from AMD. AMD for their part pick up license fees and royalties from products in segments where they weren't competing anyway. Pure win, essentially. Samsung is going to do all the work.

Why does this matter at all? Well, it sets the stage for an interesting article from Nebuchadnessar. :) Once the fruits of this show up in SoCs, not only will it be interesting to compare it to other mobile graphics solutions, it will also be interesting to make a straight comparison to AMD's PC-space offerings. Similarities, differences, a discussion about designing for the mobile space vs. the PC. I doubt it will be a "small Navi", but then, that and the discussion around it would be the juicy stuff.
 
Actually, some sort of Navi grandchild makes more sense than anything else. AMD's GPU architectures so far weren't designed for something as low-power as ULP SoCs, and GPU designers there always cut quite a few corners compared to desktop solutions. I find it hard to believe that any of AMD's designs so far are as low-power and small in die area as needed to compete on comparable terms with a recent Adreno. Here the million-dollar question is rather whether we're talking about pure GPU IP, because if yes, it's not really in AMD's best business interest either. The way IHVs like AMD or NV etc. are structured, IP development costs are way too high compared to the ridiculously low royalty amounts the IP provider gets per unit sold.

I don't have insight into how much ARM charges for its Mali GPU IP; however, for IMG/Apple the newest high-end GPU IP was somewhere around $1 per unit, and IMG's average royalty per IP core sold was, in their heyday, less than a third of that.

IMHO AMD came up, with something like Navi (or even one architecture beyond?), with a formula for an architecture that is scalable from ULP mobile all the way to high-end HPC GPUs.

The most interesting comparison of the future will be: Apple's own GPU vs. Qualcomm's Adreno vs. Samsung with AMD GPU IP.

***edit: just reading it https://www.anandtech.com/show/14492/samsung-amds-gpu-licensing-an-interesting-collaboration

******edit 2: ok the theory of some sort of joint development makes far more sense and answers most of the questions.
 
When I first saw this announcement, Samsung's internal GPU effort was what came to my mind as well. If the narrative that there were patent concerns with Nvidia has merit, it seems interesting that a product that hadn't been rolled out would be spiked before being publicly visible. The Apple/IMG spat didn't seem to enter lawsuit territory until Apple announced to shareholders that it had created supposedly non-infringing IP and wouldn't be paying IMG anymore.
Getting preemptive notice that Nvidia could be a problem could mean that Nvidia was involved or was made aware of internal details already. If it were just a matter of finding someone to license from, there might have to be something more concretely Nvidia-specific that another cross-license couldn't salvage (perhaps a claim of willful use of IP, staff transfer, or a failed undisclosed collaboration with Nvidia).

I do recall some patents AMD has filed to the effect of creating customized CUs with most of the SIMDs removed, or with whole instruction types and their CU subsections stripped out, which may point to a desire to make the CU flexible as a template.
It's tough to compare mobile graphics to the implementations AMD has that can operate at a steady state of power consumption two orders of magnitude above where some of Samsung's devices might go.
I'm curious to find out what elements Samsung would really want to leverage, and what it means to be RDNA-based in this context--just the CUs, or some of the surrounding SoC?
Another random thought is that perhaps AMD's offer is flexible enough for creating GPU compute elements that can interface in a heterogeneous memory and execution environment. I thought there was mention that Samsung's custom ARM cores did not (or could not?) use ARM's coherent interconnect, and perhaps there are elements of Samsung's proprietary system that it'd like to embed in its graphics architecture that an ARM-derived GPU would not provide the leeway for.
 
Meh, as others have noted many times, every idea worth a damn has been patented at least twice. And I'm not really sure that NV's patent lawsuit attempts were actually directly related to Samsung already working on its own GPU design; it would be absurd in any case, because without final hw in NV's hands to investigate they wouldn't have had a case. On the flip side of things, if (mark IF with block letters) Apple's own GPU design, which will appear some time in future Axx SoCs, infringes any of IMG's patents, it will be interesting to see whether IMG, or rather Canyon Lake, will file any suit against Apple in the future. Without a released GPU to investigate, there's no case until then.

Other than that, it's my layman's understanding that one of the major differences between desktop and ULP mobile GPU designs is things like the TMU-to-ALU dependency, which seems to be somewhat common ground in the latter, as one example. It's still my understanding that things like SIMD lanes are, compared to other elements in a GPU, among the cheapest.
 
From the transcript I read, the $100 million was for 2019. Some in Q2, but most split between Q3 and Q4.

$100 million per quarter would be noteworthy in that some old estimates (circa 2013 AMD Seamicro statements: https://www.semiaccurate.com/2013/05/15/amds-andrew-feldman-talks-about-arm/) for designing an x86 server processor put chip design cost in the $400 million range over roughly 3 years.
The time frame seemed a bit compressed for a full architecture, but may have excluded parts of the later stages of validation, ramp, and market rollout that went into the ~5 year rule of thumb.
Putting aside the likely increase in costs since then, the quarterly burn rate for a server chip would have been a third of the Samsung revenue if it were all pulled in during a single quarter.

Spreading the revenue over three quarters makes it seem more in line with one or more chip projects, and unlike an internal chip design, AMD would presumably expect to make some profit from this revenue instead of it translating directly to design cost. The size of the project, given that, might still be significant, although a deal with a large IP component can pull in more money as margin versus being a reflection of what's going into the project's actual costs (the latter number may be a better proxy for project size/complexity). Perhaps seeing whether this income is sustained over time would give hints as to what sort of legs this project has.
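
A quick back-of-envelope check of the numbers above. The $400 million over ~3 years and the ~$100 million of 2019 revenue are from the sources quoted; the even split across quarters is my simplifying assumption:

```python
# Rough sanity check of the burn-rate comparison in the post above.
server_chip_cost_musd = 400      # ~2013 estimate for an x86 server chip design
design_years = 3
quarterly_burn = server_chip_cost_musd / (design_years * 4)   # ~33 M$/quarter

samsung_2019_musd = 100          # licensing revenue attributed to 2019

print(f"Server-chip design burn rate ~ {quarterly_burn:.0f} M$/quarter")
print(f"Share of the revenue if all booked in one quarter: "
      f"{quarterly_burn / samsung_2019_musd:.0%}")            # ~33%, about a third
print(f"Revenue spread over three quarters: ~ {samsung_2019_musd / 3:.0f} M$/quarter")
```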
 
Possible leak of an AMD ULP SoC, Ryzen C7:



[Image: leaked "Ryzen C7" specification slide]




4 things that set me back on this leak:


1 - The Samsung name doesn't appear anywhere, and AFAIK AMD can't do any chips that compete with Samsung in the ultra-low-power market. Also, 5nm TSMC, so definitely not Samsung?
2 - 4 CUs at 700MHz means ~358 GFLOPS (see the quick check after this list). This is supposed to be 45% faster than an Adreno 650? Sounds hard to believe.
3 - MediaTek 5G modem? Again, not from Samsung.
4 - Real-time raytracing on a ~358 GFLOPS GPU?
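
On point 2, the raw GFLOPS figure at least checks out arithmetically, assuming an RDNA-style CU with 64 FP32 lanes and counting an FMA as two FLOPs (both of which are my assumptions, not something stated on the slide):

```python
# Quick check of the "4 CUs @ 700 MHz = ~358 GFLOPS" claim.
cus = 4
lanes_per_cu = 64                 # assumed RDNA-style CU width
flops_per_lane_per_clock = 2      # fused multiply-add counted as 2 FLOPs
clock_ghz = 0.7

gflops = cus * lanes_per_cu * flops_per_lane_per_clock * clock_ghz
print(f"{gflops:.1f} GFLOPS")     # -> 358.4 GFLOPS, matching the slide's figure
```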
 
Possible leak of an AMD ULP SoC, Ryzen C7:



[Image: leaked "Ryzen C7" specification slide]




4 things that set me back on this leak:


1 - The Samsung name doesn't appear anywhere, and AFAIK AMD can't do any chips that compete with Samsung in the ultra-low-power market. Also, 5nm TSMC, so definitely not Samsung?
2 - 4 CUs at 700MHz means ~358 GFLOPS. This is supposed to be 45% faster than an Adreno 650? Sounds hard to believe.
3 - MediaTek 5G modem? Again, not from Samsung.
4 - Real-time raytracing on a ~358 GFLOPS GPU?

It also spells Gauguin wrong. And ARM's DynamIQ allows one X1-based core (or rather, one "fastest core"), not two.
 
Fake slide aside, I can't figure out what theoretically speaks against implementing RT on a hypothetical 358 GFLOPS GPU that is meant for a low-power mobile SoC.
Nothing, of course. Theoretically.
The question is whether it's a sensible idea or not.

My problem with RTRT in the PC space is that although it makes sense for nVidia to attack the rendering market and let gamers foot the bill, the technology as such is costly, both in die area (money) and power. And while it may suit a producer of add-in graphics solutions, it is really tone-deaf in the PC market as a whole, which has been shrinking consistently since 2011 and gravitating away from desktop systems in favour of laptops (two thirds of the market), while the laptop segment itself is gravitating towards lighter units with longer battery life.
RTRT fits this like a foot in a glove. It specifically emphasises aspects of PCs that the general market finds undesirable, and very justifiably so.

This is even more true in the mobile space. Why would I want a technology that makes my device cost more and draw more power, while only being useful to produce slightly more physically correct aspects of game rendering that my mind does its best to discard anyway?
Who wouldn't take a cheaper device with longer battery life instead, or spend those gates in better places for more generally applicable computing power?
 
This is even more true in the mobile space. Why would I want a technology that makes my device cost more and draw more power, while only being useful to produce slightly more physically correct aspects of game rendering that my mind does its best to discard anyway?
Who wouldn't take a cheaper device with longer battery life instead, or spend those gates in better places for more generally applicable computing power?

AR/VR, probably? (Just one example.) If an IHV integrates RT into the pipeline and the transistor investment is not exclusively just for RT, how big is the added die area in the end? As for the AMD GPU IP, I could imagine Samsung wanting it, among other things, for something like a Nintendo Switch successor, as an example. If yes, then skipping RT would be a rather dumb idea.
 
Nothing, of course. Theoretically.
The question is whether it's a sensible idea or not.

My problem with RTRT in the PC space is that although it makes sense for nVidia to attack the rendering market and let gamers foot the bill, the technology as such is costly, both in die area (money) and power. And while it may suit a producer of add-in graphics solutions, it is really tone-deaf in the PC market as a whole, which has been shrinking consistently since 2011 and gravitating away from desktop systems in favour of laptops (two thirds of the market), while the laptop segment itself is gravitating towards lighter units with longer battery life.
RTRT fits this like a foot in a glove. It specifically emphasises aspects of PCs that the general market finds undesirable, and very justifiably so.

This is even more true in the mobile space. Why would I want a technology that makes my device cost more and draw more power, while only being useful to produce slightly more physically correct aspects of game rendering that my mind does its best to discard anyway?
Who wouldn't take a cheaper device with longer battery life instead, or spend those gates in better places for more generally applicable computing power?

This demonstrates how ray tracing (hybrid rendering) delivers significant reductions in memory bandwidth and power consumption over traditional rasterized methods (i.e. cascaded shadow maps):

For ray tracing, there is an initial one-time setup cost of 61 MB due to the acceleration structure that must be built for the scene.

This structure can be reused from frame to frame, so it isn’t part of the totals for a single frame. We’ve also measured the G-Buffer independently to see how much of our total cost results from this pass.

Therefore, by subtracting the G-Buffer value from the total memory traffic value, shadowing using cascaded maps requires 136 MB while ray tracing is only 67 MB, a 50% reduction in memory traffic.
Memory Bandwidth Saving
[Chart: PowerVR-Ray-Tracing-efficiency-analysis.png]


Frame Time Saving
[Chart: PowerVR-Ray-Tracing-efficiency-analysis-3.png]


Source

With memory bandwidth and frame time savings averaging around 50%, I think dedicated ray tracing in the mobile space is very suitable.
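
For reference, the ~50% figure follows directly from the per-frame numbers quoted from the article (the measurements are theirs; I'm only redoing the division):

```python
# Memory traffic per frame after subtracting the shared G-buffer cost,
# per the PowerVR article quoted above. The 61 MB acceleration structure
# is a one-time setup cost and is excluded from the per-frame totals.
cascaded_shadow_maps_mb = 136
ray_traced_shadows_mb = 67

reduction = 1 - ray_traced_shadows_mb / cascaded_shadow_maps_mb
print(f"Memory traffic reduction ~ {reduction:.0%}")   # ~51%, roughly half
```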
 