Intel Kaby Lake + AMD Radeon product *spin-off*

The leaks also said it uses PCIe x8 for connecting the CPU to the GPU, which makes sense considering they aren't using a custom die tailored for this product. The package size is 58.5mm x 31mm, which seems to fit with the promo shots they are showing.

Hmm... PCIe x8 might be a problem for HBCC to work properly. Besides, what would they need the other 8 lanes for?
The H-series CPUs still need a separate PCH for southbridge functionality, and the dedicated DMI with 4GB/s should suffice for SSDs and USB.
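For anyone who wants to sanity-check those link figures, here's a quick back-of-the-envelope (assuming plain PCIe 3.0 signalling; the numbers are mine, not from the leak):

```python
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
# DMI 3.0 is electrically equivalent to a x4 Gen3 link.

def pcie3_bandwidth_gbs(lanes: int) -> float:
    """Usable one-way bandwidth of a PCIe 3.0 link in GB/s."""
    gt_per_s = 8.0                  # giga-transfers per second, per lane
    encoding = 128.0 / 130.0        # 128b/130b line-coding efficiency
    return lanes * gt_per_s * encoding / 8.0  # 8 bits per byte

print(f"x8 CPU<->GPU link: {pcie3_bandwidth_gbs(8):.2f} GB/s")   # ~7.88 GB/s
print(f"x16 for reference: {pcie3_bandwidth_gbs(16):.2f} GB/s")  # ~15.75 GB/s
print(f"DMI 3.0 (x4):      {pcie3_bandwidth_gbs(4):.2f} GB/s")   # ~3.94 GB/s
```

The x8 CPU-to-GPU link lands at roughly half of a discrete card's x16, and the x4 DMI matches the ~4GB/s figure above.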


So not a mobile SKU, but meant for all-in-ones (iMac)
Three years ago the MacBook Pros (15" late 2014) were being sold with a 40W GK107 together with a 47W Haswell. Besides, TDP numbers might vary between the NUC and laptop versions.
Not to mention the versatility of TDP-down configurations.

This is most definitely going into MacBook Pros. You can write that down.
Apple wouldn't demand a solution like this just for iMacs.
 
So not a mobile SKU, but meant for all-in-ones (iMac)

How disappointing.

Cheers

Why not? That's what they are presenting it for. Besides, a 45W part is going to severely limit the GPU portion's performance. Seems like a good replacement for an HQ chip + 1050 Ti laptop, not just sucking up to Apple.
 
Wasn't it clever to make HBM's base die compatible with EMIB? Well, a happy accident; after all, it makes sense that the HBM die biases its data interface to a single edge. So that means cheap HBM incoming...

Also, AMD keeps its "crown jewels" (Vega, whoops) while Intel is stuck with Polaris for the next 2 years. So Intel gets Vega just as AMD launches Navi? Well, if AMD actually launches Navi...

Is there any reason for AMD to make discrete GPUs after this launches?
 
Is there any reason for AMD to make discrete GPUs after this launches?
I'm not sure whether this affects the need for discrete GPUs or not. Are you saying we should move to this design on a motherboard with CPU and GPU built in or slotted in, killing the enthusiast PC builder market?
 
Wasn't it clever to make HBM's base die compatible with EMIB? Well, a happy accident; after all, it makes sense that the HBM die biases its data interface to a single edge. So that means cheap HBM incoming...
It could be the other way around. HBM's data rates are not going to be sustainable if the most important traces are not kept short, which means close proximity to the neighboring chip. The rest of the HBM base doesn't need the interposer, or is substantially hindered by it. Fury partially worked around area constraints by having parts of the HBM base extending outside the patterned area of the interposer.
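To put numbers on why proximity matters, a rough sketch (HBM2-class figures from memory; treat them as illustrative):

```python
# HBM trades pin speed for width: huge bus, slow pins. That only routes
# cleanly if the traces stay a few millimetres long.

hbm2_width_bits  = 1024   # data bus per HBM2 stack
hbm2_pin_gbps    = 2.0    # Gb/s per pin (HBM2-class)
gddr5_width_bits = 256    # a typical discrete card's bus
gddr5_pin_gbps   = 8.0    # Gb/s per pin

hbm2_gbs  = hbm2_width_bits * hbm2_pin_gbps / 8    # GB/s per stack
gddr5_gbs = gddr5_width_bits * gddr5_pin_gbps / 8  # GB/s for the card

print(f"HBM2 stack: {hbm2_gbs:.0f} GB/s over {hbm2_width_bits} slow pins")
print(f"GDDR5 card: {gddr5_gbs:.0f} GB/s over {gddr5_width_bits} fast pins")
# Same headline bandwidth, but HBM needs 4x the traces at 1/4 the rate,
# hence the dense, short escape to a neighbouring die.
```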

EMIB's chief insight is that in a passive interposer of up to ~1000mm², of the sort AMD uses, only the area under the bridge is actually needed for the microbumps and traces.
The test, power, and ground pads use the same comparatively massive solder balls, and the vias drilled through the interposer to service them are part of the additional expense of the interposer.
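Rough numbers on how little of that area the bridge actually is (the interposer figure is Fiji's ~1011mm²; the bridge size is purely my guess):

```python
# How much silicon a bridge needs versus a full passive interposer.
# Bridge dimensions below are a hypothetical placeholder, not a spec.

interposer_mm2 = 1011    # Fiji-class passive interposer, approx.
bridge_mm2     = 5 * 8   # guess: one ~5mm x 8mm bridge under the HBM edge

share = 100.0 * bridge_mm2 / interposer_mm2
print(f"bridge: {bridge_mm2} mm^2, i.e. ~{share:.0f}% of a full interposer")
```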

This and the apparent use of a standard PCIe interface make me think AMD's chip effort was a modest customization in this regard, since the bumps do not change and Intel would likely handle most of the mechanical/alignment/assembly concerns.
 
I'm not sure whether this affects the need for discrete GPUs or not. Are you saying we should move to this design on a motherboard with CPU and GPU built in or slotted in, killing the enthusiast PC builder market?
And even worse: how does one then "upgrade" to the next generation of GPU? Consumers will not spend a small fortune replacing the whole EMIB solution just for a next-generation GPU; consumers upgrade GPUs much more frequently than CPUs.
 
This product basically wipes out $200 and cheaper discrete. That's the bulk of all discrete GPUs sold.

People who like to build their own PC can carry on buying NVidia enthusiast class GPUs. Pity that the entry level for these will be $1000 after a few years, but I'm sure the NVidia shareholders in this thread are more than happy at that prospect.
 
I'm not sure about this getting to the $200 and cheaper range just yet. EMIB might, if the issues are worked out, get closer to it than an interposer, but an interposer likely costing more than that doesn't by itself mean the threshold has been reached. There's still the HBM and non-standard packaging, for example.

2.5D integration does stand to remove some of the barriers to the bog-standard discrete setup, although this specific package is acting more like a really close discrete module than a truly integrated solution.
 
This product basically wipes out $200 and cheaper discrete. That's the bulk of all discrete GPUs sold.
Yeah, it probably does... It's going to create some very interesting gaming-capable tiny PCs. That market doesn't exactly upgrade CPUs often, or separately from the GPU. An 8GB HBM version? Ouch.
 
If they are cheap it might go somewhere. The PC gaming mini boxes haven't exactly set the world on fire thus far AFAIK.
 
I'm not sure about this getting to the $200 and cheaper range just yet. EMIB might, if the issues are worked out, get closer to it than an interposer, but an interposer likely costing more than that doesn't by itself mean the threshold has been reached. There's still the HBM and non-standard packaging, for example.

2.5D integration does stand to remove some of the barriers to the bog-standard discrete setup, although this specific package is acting more like a really close discrete module than a truly integrated solution.
Yeah, they are targeting enthusiast mobile, albeit the lower end of that market for now, if the following is correct.
PCWorld reports that an AMD representative told them (context is the whole product):
The idea, according to an AMD representative, is that these notebooks won’t be priced in the value segment at all, but in the neighborhood of $1,200 to $1,400 apiece.
https://www.pcworld.com/article/323...md-ship-a-core-chip-with-radeon-graphics.html
 
This product basically wipes out $200 and cheaper discrete. That's the bulk of all discrete GPUs sold.
If that happens then AMD have shot themselves in the foot! All those CPU sales are now going to Intel instead of AMD. AMD will also have to pump out GPUs competitive with NVIDIA's, as they can't afford to remain the less attractive option anymore.
 
It could be the other way around. HBM's data rates are not going to be sustainable if the most important traces are not kept short, which means close proximity to the neighboring chip. The rest of the HBM base doesn't need the interposer, or is substantially hindered by it. Fury partially worked around area constraints by having parts of the HBM base extending outside the patterned area of the interposer.
I was being sarcastic...

EMIB's chief insight is that in a passive interposer of up to ~1000mm², of the sort AMD uses, only the area under the bridge is actually needed for the microbumps and traces.
The test, power, and ground pads use the same comparatively massive solder balls, and the vias drilled through the interposer to service them are part of the additional expense of the interposer.
The costly part of an interposer is the ultra-fine-pitch bumping, and performing assembly that doesn't break the ultra-fine-pitch joints. The extra area of an interposer for the low-density TSVs is pretty much free. Don't forget this is area on super-low-tech wafers, 65nm or 130nm (I can't remember which).

https://www.3dincites.com/2016/04/2-5d-and-3d-opportunities-for-cost-reduction/

OK so we don't have a TSV cost per mm² there, but the 2.5D pie chart looks favourably upon the costs of TSVs.

Also can you test the EMIB with just the memory in place? With an interposer you can test with just the memory installed, before installing the GPU, defraying some of the assembly yield loss (GPUs lost to bad HBM assembly).

So EMIB costs are not looking favourable head-to-head versus an interposer. And, well, we'll never know for sure, because the kind of detailed cost analysis we're looking for is never made public...

This and the apparent use of a standard PCIe interface make me think AMD's chip effort was a modest customization in this regard, since the bumps do not change and Intel would likely handle most of the mechanical/alignment/assembly concerns.
Oh, there's no doubt AMD's effort was low, and as you noted earlier in this thread, it frees AMD from having to pay for interposer assembly and HBM inventory.

Unless, erm, AMD actually supplies the complete EMIB sub-assembly, with HBM. Hmm, now I think about it, I wouldn't be surprised if AMD is doing that. It would make sense for Intel to let someone else pay for packaging yield. Especially as there is no EMIB twixt GPU and CPU.
 
The costly part of an interposer is the ultra-fine-pitch bumping, and performing assembly that doesn't break the ultra-fine-pitch joints. The extra area of an interposer for the low-density TSVs is pretty much free. Don't forget this is area on super-low-tech wafers, 65nm or 130nm (I can't remember which).

https://www.3dincites.com/2016/04/2-5d-and-3d-opportunities-for-cost-reduction/

OK so we don't have a TSV cost per mm² there, but the 2.5D pie chart looks favourably upon the costs of TSVs.

To quote the text below that chart:
"The TSV create and reveal activities account for about 17% of the total cost for this case, and the combination of BEOL and FEOL accounts for 36%. Most of this 36% is from the RDLs on both the top and bottom of the interposer. The raw interposer cost is significant given that the interposer must be large enough to accommodate four HBM stacks and one large ASIC."

I forgot that the adjunct to the TSVs for IO and power/ground pass-through is the connectivity for their interface with the other layers. EMIB is one-sided, as it no longer needs to pass things through, and since it is purely a bridge most of the RDL is in the standard package material.
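Just to read that quoted breakdown back as arithmetic (the shares are from the article; the base cost is an arbitrary stand-in, not a real price):

```python
# Cost split from the 3dincites case study, applied to a made-up base.

interposer_cost = 100.0   # arbitrary units, NOT a real dollar figure
tsv_share       = 0.17    # "TSV create and reveal ... about 17%"
feol_beol_share = 0.36    # "combination of BEOL and FEOL ... 36%", mostly RDLs

print(f"TSV create/reveal : {interposer_cost * tsv_share:.0f}")
print(f"FEOL+BEOL (RDLs)  : {interposer_cost * feol_beol_share:.0f}")
print(f"everything else   : {interposer_cost * (1 - tsv_share - feol_beol_share):.0f}")
# EMIB drops the TSVs entirely and pushes most RDL into the organic package,
# so roughly half the pie is what it is actually attacking.
```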

Also can you test the EMIB with just the memory in place? With an interposer you can test with just the memory installed, before installing the GPU, defraying some of the assembly yield loss (GPUs lost to bad HBM assembly).
I am not aware of the specific testing paths for that stage. If there is ubump evaluation, EMIB's behavior is supposed to match the ubump behavior of the interposer. If it's using the coarser contacts that do not use ubumps, I'm not sure why that would be different.

It's likely harder to get the alignment and consistency, particularly for a less-resourced company than Intel.

Unless, erm, AMD actually supplies the complete EMIB sub-assembly, with HBM. Hmm, now I think about it, I wouldn't be surprised if AMD is doing that. It would make sense for Intel to let someone else pay for packaging yield. Especially as there is no EMIB twixt GPU and CPU.
EMIB is embedded in the package, and the process may involve building the substrate around it. This is proprietary to Intel's vertically integrated packaging division. AMD's already spun off what package/test infrastructure it had. 2.5D products depend on companies like Amkor or TSMC's in-house solution, with whom Intel hasn't shared anything.
 
If they are cheap it might go somewhere. The PC gaming mini boxes haven't exactly set the world on fire thus far AFAIK.
This is for medium to high-end laptops, all-in-ones and NUCs.

If we take those three categories of PC and compare their sales against conventional "tower" PCs (including mini-ITX mini-towers) what do we see? I don't know, but we do know that laptops are already way ahead of desktops:

https://www.statista.com/statistics...forecast-for-tablets-laptops-and-desktop-pcs/
And we haven't even got to whatever tiny fraction of sales is left for "self-build"...
 
If that happens then AMD have shot themselves in the foot!
No. AMD is still selling at least the same number of GPUs. In fact, AMD is selling GPUs for laptops, NUCs and AIOs, products that it has mostly been absent from for, oh about a decade. Though I admit Apple AIOs are the main exception. So that could be a bump in sales numbered in millions per year. AMD isn't cannibalising itself when AMD isn't currently selling anything to that market (except Apple, and well, it's clearly going to be selling exactly the same category of GPU there).

All those CPU sales are now going to Intel instead of AMD.
When AMD gets its arse in gear and produces a Ryzen based "APU" in the same power/performance category, it will have better graphics. ODMs will have an AMD part to drop into their designs, if they can overcome the Thunderbolt and "interposer thickness" problems. Intel marketing will have laid the foundations for "gaming without the desktop" (I'm not sure what else non-Apple users would buy these things for) and then it's a question of whether AMD can succeed as a parasite.

AMD will also have to pump out GPUs competitive with NVIDIA's, as they can't afford to remain the less attractive option anymore.
I can't work out what you're saying here. AMD doesn't need discrete performance and enthusiast GPUs if it's selling all the performance GPUs it can make to Intel. Unless you think Intel will switch to using NVidia after a year or two?

The real question ought to be, will these sell (excluding the Apple incarnations, which will obviously sell)?
 
For desktop, if these are considerably more powerful than AMD's new APUs then they seem fantastic for high-end AIOs and NUCs. If not, then... idk.
 
This is for medium to high-end laptops, all-in-ones and NUCs.

If we take those three categories of PC and compare their sales against conventional "tower" PCs (including mini-ITX mini-towers) what do we see? I don't know, but we do know that laptops are already way ahead of desktops:

https://www.statista.com/statistics...forecast-for-tablets-laptops-and-desktop-pcs/
And we haven't even got to whatever tiny fraction of sales is left for "self-build"...
I'm very aware that self built boxes and towers aren't popular anymore. The world is notebooks, phones and consoles.

I look forward to pricing, power consumption and seeing where it actually slots in on performance. How it does against 3-year-old 980M + Haswell notebooks and such. Also whether driver support appears to require special attention; I would want to avoid that from AMD.
 
This is most definitely going into MacBook Pros. You can write that down.
Apple wouldn't demand a solution like this just for iMacs.
The only problem with this idea is that it was announced by Intel and confirmed by AMD. Apple has said absolutely nothing as far as I'm aware. If it was a custom design intended for Apple, they'd have seized the marketing opportunity or kept it NDA'd until they announced it. NUCs are far more appealing targets for these things, although I'd think an even larger design may be warranted. There's really no reason a Nano equivalent couldn't be created. A couple hundred watts for a heavy NUC wouldn't be a problem. The power supply might be interesting, however.

And even worse: how does one then "upgrade" to the next generation of GPU? Consumers will not spend a small fortune replacing the whole EMIB solution just for a next-generation GPU; consumers upgrade GPUs much more frequently than CPUs.
The upgrade cycle wouldn't be all that different from most smartphones. Costs may come down as well once the design scales up. NUCs, for example, would be far cheaper than laptop designs. Outside of enterprise, I'm unsure how much demand will exist for discrete cards. There's really no reason a larger or more power-efficient GPU couldn't be included. Power usage might be high, but with an embedded chip the size of a Threadripper or larger, cooling isn't that difficult. It also means one giant heatsink instead of multiple smaller ones with airflow issues.
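A crude cooling sanity check with the textbook formula (all numbers mine, just to show the scale involved):

```python
# Required heatsink thermal resistance: theta = (Tj_max - T_ambient) / Power.

def required_theta_c_per_w(power_w: float,
                           tj_max_c: float = 95.0,
                           ambient_c: float = 35.0) -> float:
    """Thermal resistance (C/W) one big heatsink must achieve."""
    return (tj_max_c - ambient_c) / power_w

for watts in (65, 100, 200):
    print(f"{watts:>3} W package -> theta <= {required_theta_c_per_w(watts):.2f} C/W")
# ~0.3 C/W at 200 W is big-tower-cooler territory, but it's one heatsink
# with one fan, not two constrained ones fighting for airflow.
```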

I can't work out what you're saying here. AMD doesn't need discrete performance and enthusiast GPUs if it's selling all the performance GPUs it can make to Intel. Unless you think Intel will switch to using NVidia after a year or two?
It wouldn't be unreasonable to see mostly embedded solutions in the future. So Nvidia and Intel could do something similar.

A more interesting question might be whether the CPU and GPU can share memory more freely, moving towards only having HBM (or a similar stacked solution) present in the design. AMD also has an x86 license, so they may be able to make a GPU work more closely with Intel's memory controller, or vice versa. It's going to be interesting to see how that plays out, or whether Nvidia is largely removed from the graphics market.

But I'm just not sure I grasp what's so exciting about this product. It saves some space over more typical discrete hardware, but I don't see it having special performance, and I have doubts about it being priced in an exciting way considering its performance level. Though I sense that some people here are excited about having MacBook Pros with it for some reason.
Form factor is the most interesting part: smaller size and even cooling in the case of NUCs. One big heatsink can cool everything far more efficiently than several smaller ones. Further, the case and board could be designed with almost no wasted internal space, with a single fan pulling cool outside air right across the chips. That should provide some headroom to start pushing clocks. Shorter data paths and power-efficient memory certainly won't hurt either.
 
The i7 G-series must not be a niche or premium product; it needs to be mainstream instead. The i7G in $1,000 notebooks would be awesome.
 