AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Status
Not open for further replies.
AMD on numerous occasions have said APUs will be monolithic.
Have they, though? I remember everyone thinking they had said it after Matisse was revealed with just one chiplet, when people were asking about putting a GPU on the empty spot and AMD denied it. But they never said it wouldn't or couldn't use chiplets; they specifically said it wouldn't use the Matisse design.
That said, I do believe it will be monolithic.
 

Exactly.
They said they wouldn't use the same Matisse MCM arrangement by replacing one CCD with a GPU chiplet.

That somehow resulted in people assuming AMD wouldn't just use a different I/O chip with an IGP.

There isn't any official source claiming Renoir is monolithic.

And I don't think it is monolithic BTW.
2020 will see Intel launching the 10-core Comet Lake APU in both desktops and 35W laptops.
AMD has a real chance to launch 10-16 core mobile APUs by using a dual CCD + I/O&GPU approach, and offer an unstoppable alternative to Intel in the mobile space for a long, long time.
 
Intel has yet to show an APU.
Except every CPU with on-die graphics since 2010? And certainly the *mont SoCs.

Granted, not as performant in graphics as AMD's APUs (at least until Ice Lake), and (deliberately, I would assume) not using the APU moniker, but otherwise what's the difference?
 

The problem with chiplets is that in very low-power configurations the extra bus inside the package still costs the same power, and that can really hurt. If they want to ship high-core-count APUs in the AM4 socket, I think a rather cheap method that still keeps their low-core-count APUs optimally power-efficient would be to build a monolithic 4C APU but include a single inter-chiplet link. That way they can ship up to 12 cores in an APU while still using the same cheap chip for the 4C parts.
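To put rough numbers on the low-power penalty, here is a minimal sketch, assuming a near-constant on-package link cost. All wattages are invented for illustration, not measured AMD figures:

```python
# Illustrative only: invented numbers, not measured AMD data.
# The point: a roughly fixed inter-chiplet link power is a small tax on a
# 65 W desktop part but a large slice of a 15 W mobile budget.

IF_LINK_POWER_W = 1.5  # assumed near-constant on-package link cost

def link_overhead(package_budget_w: float) -> float:
    """Fraction of the package power budget eaten by the chiplet link."""
    return IF_LINK_POWER_W / package_budget_w

for budget in (65.0, 35.0, 15.0):
    print(f"{budget:>4.0f} W budget -> {link_overhead(budget):.1%} lost to the link")
```

Under these assumed numbers the link eats about 2% of a 65W budget but a full 10% of a 15W one, which is the argument for keeping the low-power parts monolithic.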
 
Granted, not as performant in graphics as AMD's APUs (at least until Ice Lake), and (deliberately, I would assume) not using the APU moniker, but otherwise what's the difference?
I'm wondering that myself.
One could assume he meant "full SoC", i.e. one not needing an additional north/southbridge chipset, but that would disqualify AMD's first three generations of APUs from being called APUs.



Well, if you lower the CPU and cache clocks on mobile versions, perhaps you can lower the IF clocks in the same fashion, saving power without the link becoming too much of a bottleneck.
Extra chip I/O is indeed more power-hungry, but the fact of the matter is that Intel is still not using "full SoCs" even on their 4.5W Y-series offerings.

My suggestion of using the 8-core Zen 2 CCD was to take advantage of a massive economy of scale, effectively using the same small die everywhere from 35W APUs to the 280W Epyc.
But for 15W I concede that an 8-core CCD might be way too much, unless they throttle it very aggressively.
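The intuition that lowering IF clocks saves power is just the usual dynamic-power relation P ≈ C·V²·f. A minimal sketch, with made-up voltage and clock points (neither the coefficients nor the operating points are real Infinity Fabric figures):

```python
# Dynamic CMOS power scales roughly as P ~ C * V^2 * f.
# All numbers below are placeholders to illustrate the scaling only.

def dynamic_power(c_eff: float, volts: float, freq_ghz: float) -> float:
    """Relative dynamic power for an effective capacitance, voltage, and clock."""
    return c_eff * volts**2 * freq_ghz

base = dynamic_power(c_eff=1.0, volts=1.0, freq_ghz=1.8)   # desktop-like IF clock
slow = dynamic_power(c_eff=1.0, volts=0.85, freq_ghz=1.2)  # throttled mobile point

print(f"relative IF power at the lower point: {slow / base:.0%}")
```

Because voltage enters squared, even a modest voltage drop alongside the clock reduction roughly halves the link's dynamic power in this toy example.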
 

They managed 4 cores on the previous node, so they could probably manage 6 in 15 watts in the upcoming mobile release, which would necessitate two chiplets anyway. That being said, don't mobile variants switch from high-performance libraries to power-efficient libraries anyway?
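As a back-of-envelope check on the core-count claim, the arithmetic is just a budget division; every figure below is an assumption for illustration, not a measured value:

```python
# Back-of-envelope: how many cores fit in a 15 W package if we assume some
# fixed uncore/iGPU power and a per-core power at low mobile clocks.
# All numbers are invented for illustration.

PACKAGE_W = 15.0
UNCORE_AND_IGP_W = 6.0   # assumed fixed cost: fabric, memory PHY, iGPU
PER_CORE_W = 1.5         # assumed per-core power at low mobile clocks

cores_that_fit = int((PACKAGE_W - UNCORE_AND_IGP_W) // PER_CORE_W)
print(cores_that_fit)  # -> 6 under these assumptions
```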
 
AFAIK Raven Ridge and Picasso use the same chips for desktop and mobile, just binned differently.
 
I'm pretty sure we're going to need more than Fuad's word for that, considering the Samsung and TSMC processes definitely aren't design-compatible with each other.

True. However, it would make sense for AMD to use Samsung as a fab. AMD has already licensed its RDNA IP to Samsung, so the cost of designing and pushing RDNA chips out of Samsung's fabs would be shared across multiple chips. Plus, it would be easier for Samsung to work on an existing AMD design before pumping out its own wares.
 

For the layman among us, how incompatible are we talking? Redesign everything or "general overhaul"? If it's the latter I can see how it'd make at least a little sense.

TSMC is suffering from constrained production slots, and AMD has problems delivering in its time to shine. If one or more large OEMs like Apple or Dell chose to standardise on the 5000M series for parts of their lineup, it just might be worth it. Customers with comparatively large volumes and mindshare could get RTG through a strategically important door.

Not suggesting Fuad is right, but under the right circumstances it might make sense. It obviously hinges on acceptable ROI either way. And, you know, whether or not the processes are too alien to each other.
 

I honestly don't know, but my uneducated guess would be completely new physical design from the IP blocks for different processes.

Also, one needs to remember that even disregarding the redesign work, the "same node" from two manufacturers can have quite different properties, not to mention that Samsung only has a 7nm EUV process, while AMD uses TSMC's 7nm DUV process.
 
Samsung is using RDNA in their phones next year.
I suspect AMD is positioning itself for when the 8cx stuff starts kicking off and is getting into the phone GPU game. A new handheld Surface device (on ARM?), using the same uarch (for gaming) as the new Xbox? (RDNA).

win/win..
 
The problem with chiplets is that in very low-power configurations the extra bus inside the package still costs the same power, and that can really hurt. If they want to ship high-core-count APUs in the AM4 socket, I think a rather cheap method that still keeps their low-core-count APUs optimally power-efficient would be to build a monolithic 4C APU but include a single inter-chiplet link. That way they can ship up to 12 cores in an APU while still using the same cheap chip for the 4C parts.
Mobile would also likely want the IO die to be as low-power as AMD can make it, and the 12/14nm process is unlikely to provide this option. If the IO section is 7nm, there's less benefit in keeping it separate since the IO tends to be more defect-tolerant and its area cost is moderated by the node transition and likely more mature yields at this point.
AMD seems content in the desktop and server markets to keep the IO die's power consumption at a consistent baseline for some reason. It should be able to manage itself due to the in-built platform processor, but could there be some function the CCD might occasionally perform for transitioning the IO die's activity level that makes it preferable to have the IO die idling at a level mobile wouldn't accept?

One-die integration would ditch the external link and its housekeeping, and could allow for shorter response times for waking up the various blocks even with the IO and CPU sections deeply throttled separately.
Perhaps a newer PSP could come with a newer node, although the A5 core does go to 7nm.

Samsung is using RDNA in their phones next year.
I suspect AMD is positioning itself for when the 8cx stuff starts kicking off and is getting into the phone GPU game. A new handheld Surface device (on ARM?), using the same uarch (for gaming) as the new Xbox? (RDNA).

win/win..
Are you speculating that AMD itself would get into the phone GPU field separately from Samsung? Wouldn't Samsung want to restrict AMD from taking their money and then competing with them right away?
 
I think a shrunk-down Xbox mobile device will eventually be a thing, or even a Surface phone with scaled-down RDNA graphics. It has to be better than Adreno, etc.
 