Intel Kaby Lake + AMD Radeon product *spin-off*

  • Thread starter Deleted member 13524
  • Start date
But it also stops AMD's CPU division from launching any other product above those, because the EMIB deal cements Intel's position in that market, all in exchange for low-margin revenue for Radeon, which is what I said from the beginning; no one knows what other products the CPU division had lined up or was considering.
Not necessarily, as the deal could be structured so Intel is always a generation behind on GPUs: Polaris vs. Vega currently, ignoring the fact that AMD doesn't currently have a competing Vega part. Possibly a 32 CU Nano or Vega 11, if the Intel part isn't it.

My money would be on iMacs and eventually blade products as well. A quad-core CPU (15W) plus HBM2 (~10W) plus a mid-sized GPU (~45W best case) is too much TDP to dissipate in a more business-oriented chassis with acceptable noise levels. Users of specialized gaming laptops put up with high noise output, but on a MacBook (Pro), chances are users want quieter operation, and for that you need cooling material (bulk), not high airflow.
If it were Apple, I'd have expected them to say more about it and/or Intel and AMD to keep quiet. I really think they're just pushing high-end NUCs to the PC sector. Hades Canyon lines up perfectly with past products, and a larger case with higher power requirements is a possibility. AIOs for everyone but high-end gamers, and there's the possibility they stick them into TVs or STBs. Fast enough that one could sit in an entertainment center and process and stream content everywhere in the house.
 
Not necessarily, as the deal could be structured so Intel is always a generation behind on GPUs: Polaris vs. Vega currently, ignoring the fact that AMD doesn't currently have a competing Vega part. Possibly a 32 CU Nano or Vega 11, if the Intel part isn't it.
Putting aside that none of us will know whether the CPU division intended to target this segment anytime soon, the deal is still a risk, because Intel is the entrenched brand in customers' perception; the EMIB deal has made it that much harder for AMD in general to compete in this segment, even if there is a generation difference.
How many times have people here lamented consumer perception when choosing between Nvidia/AMD products below the enthusiast line or those thinking automatically of Intel over AMD?
We also do not know any details on how the deal protects AMD, if at all: no guarantee of which GPU design is used, no knowledge of upgrade/update options going forward, no knowledge of whether it also applies to other model options, no knowledge of whether this will creep up and down the consumer ladder.
 
Nvidia is currently dominating the laptop GPU market, and Nvidia is always paired with an Intel CPU. If a customer instead uses an Intel EMIB-based solution, that customer is using an AMD GPU instead of an Nvidia one, and AMD gets money. This is one way for AMD to get back into the laptop discrete GPU business.
You said in your previous post that you could see EMIB being used only in the MacBook Pro since competing Windows laptops don't have enough volume. But Nvidia isn't used in MacBooks and therefore wouldn't be affected by EMIB.
 

Is nobody going to point out that the board doesn't have a secondary main chip, aka a PCH? Does the chip have everything, making it a real SoC? Is the "custom" AMD part providing the PCH function? Or is the leftover x8 slot on the CPU used for I/O? I think this information is significant in itself.

Not necessarily, as the deal could be structured so Intel is always a generation behind on GPUs: Polaris vs. Vega currently, ignoring the fact that AMD doesn't currently have a competing Vega part. Possibly a 32 CU Nano or Vega 11, if the Intel part isn't it.

Why does it have to be that complicated? It may be just a matter of Vega part Intel wanted not being ready.

By the way, doesn't EMIB allow each chiplet to have its own communication interface? Meaning even if the GPU didn't support HBM2, the EMIB chiplet could.
 
Last edited:
But it also stops AMD's CPU division from launching any other product above those, because the EMIB deal cements Intel's position in that market, all in exchange for low-margin revenue for Radeon, which is what I said from the beginning;

It won't stop AMD from launching a product they effectively don't have.



no one knows what other products the CPU division had lined up or was considering.
AMD knows. If they had a 50-100W mobile APU with HBM2 lined up for 2018/2019 release, perhaps this deal wouldn't even exist.
Instead, what they have right now is only 2 dies: 2*CCX GPU-less and 1*CCX + 11*NCU.

The only other APU solutions we've heard about so far in leaks are the 4*CCX (maybe just 2 of the current dies) + Vega Greenland for HPC which is definitely unable to fit within that 50-100W budget even at 7nm, and the 0.5*CCX + 3 NCU SoC at 4-15W.



Furthermore, Intel will now be doing their own discrete GPUs, meaning this semi-custom deal from AMD won't be a long one.
If Intel comes up with their own GPU family in 3/4 years, that's how much time AMD has to develop a high performance APU within the 50-100W range.

Until then, AMD just gains marketshare in the notebook market.



A quad-core CPU (15W) plus HBM2 (~10W) plus a mid-sized GPU (~45W best case) is too much TDP to dissipate in a more business-oriented chassis with acceptable noise levels.
10W for a single stack of HBM2? Where have you seen that? Last I checked HBM1 was at less than 5W per stack.
EDIT: All I could find was AnandTech's math on HBM1:
https://www.anandtech.com/show/9883/gddr5x-standard-jedec-new-gpu-memory-14-gbps

15W for the 4 stacks in Fiji, i.e. less than 4W per HBM1 stack. You think a single HBM2 stack is going to consume 10W?
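A quick sanity check of the arithmetic here (the 15W-for-Fiji figure is the one quoted from the AnandTech article above, so treat it as an assumption):

```python
# Back-of-envelope HBM power per stack, using the Fiji numbers
# quoted above from AnandTech (assumed figures, not measured here).
fiji_hbm_total_w = 15            # reported power for Fiji's HBM1 memory
fiji_stacks = 4                  # Fiji uses 4x HBM1 stacks
per_stack_w = fiji_hbm_total_w / fiji_stacks
print(f"HBM1 per stack: ~{per_stack_w:.2f} W")  # ~3.75 W
```

Even doubling that for HBM2's higher bandwidth lands well short of 10W per stack.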



You said in your previous post that you could see EMIB being used only in the MacBook Pro since competing Windows laptops don't have enough volume. But Nvidia isn't used in MacBooks and therefore wouldn't be affected by EMIB.
The MacBook Pro has enough demand to justify the existence of this chip, unlike sparse models from other OEMs within the same price range.
This doesn't stop other OEMs from purchasing Kaby Lake G solutions from Intel, when it becomes available.

Same thing happened when Apple was partially responsible for Intel developing their GT3 and GT3e solutions. Apple was the customer whose demand justified the development, but then other OEMs adopted the solution.


If Kaby Lake G has better performance and efficiency, other OEMs will want to use it instead of the typical KBL-H + GTX 1050 combo. At the appropriate price points, of course.


Is nobody going to point out that the board doesn't have a secondary main chip, aka a PCH? Does the chip have everything, making it a real SoC? Is the "custom" AMD part providing the PCH function? Or is the leftover x8 slot on the CPU used for I/O? I think this information is significant in itself.
Core H models use an external PCH in the motherboard. Only the dual-core Y/U series have a PCH in the substrate.

I don't think the semi-custom GPU has southbridge functionality. Intel's Core H have a dedicated 4GB/s connection (4*DMI 3 links) for the PCH, and I've seen several sources claiming the CPU-GPU connection is simply the PCIe links from the integrated northbridge.
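For reference, the ~4GB/s figure can be sanity-checked from the standard DMI 3.0 link parameters (four lanes at 8 GT/s with 128b/130b encoding; these are Intel's published link specs, not figures from this thread):

```python
# Rough check of the "4GB/s" figure for the Core H PCH link.
# DMI 3.0 is electrically equivalent to 4 PCIe 3.0 lanes.
lanes = 4
gt_per_s = 8e9               # 8 GT/s per lane
encoding = 128 / 130         # 128b/130b line coding overhead
bytes_per_s = lanes * gt_per_s * encoding / 8
print(f"DMI 3.0: ~{bytes_per_s / 1e9:.2f} GB/s each way")  # ~3.94 GB/s
```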
 
Last edited by a moderator:
I do not get this "AMD's CPU division can only do 15W" mindset, when they have an incredibly competitive Ryzen CPU in both performance and efficiency (showing a high level of expertise and resources behind the CPU division).

AMD was pretty good in the past with APUs, so why is everyone now assuming they could only do the 15W variants and would take over two years to do anything else?
Ryzen is clearly an efficient, high-performance CPU; it shows they do have the expertise and resources.

As I mentioned, by this logic everyone should have written off AMD's enthusiast/HPC GPUs, because Radeon initially brought out only non-enthusiast GPUs (let's say the 1070 is the enthusiast line), and their budget/resources are substantially less than the CPU division's, so one should expect more where resources are pumped in.
By that logic, would it be OK for AMD's CPU division to work with Nvidia on Tesla, since the CPU division needs to break Intel's stranglehold on the HPC/science/analytics/compute space, which is also the most margin-lucrative?
I would probably say no, as it consolidates Nvidia in HPC even if it means the CPU division combats Intel better; and I would say the same about this EMIB deal, which helps Intel at possible risk to any plans that would overlap, now or eventually, with AMD's CPU division.
One aspect that is fundamentally different between Radeon and the CPU division is that the control of information is better within the CPU division; in other words, we really do not know much about the CPU division's unannounced product/R&D strategy.
 
Last edited:
Core H models use an external PCH in the motherboard. Only the dual-core Y/U series have a PCH in the substrate.

I don't think the semi-custom GPU has southbridge functionality. Intel's Core H have a dedicated 4GB/s connection (4*DMI 3 links) for the PCH, and I've seen several sources claiming the CPU-GPU connection is simply the PCIe links from the integrated northbridge.

Why point out the obvious? I know that H models are two-chip and U/Y models use one chip.

This is an H-class chip, yet it's only one chip. The question remains: where is the PCH chip?
 
I do not get this "AMD's CPU division can only do 15W" mindset, when they have an incredibly competitive Ryzen CPU in both performance and efficiency (showing a high level of expertise and resources behind the CPU division).

AMD was pretty good in the past with APUs, so why is everyone now assuming they could only do the 15W variants and would take over two years to do anything else?
Ryzen is clearly an efficient, high-performance CPU; it shows they do have the expertise and resources.

Raven Ridge will scale up to 35W in the mobile version and up to 95W in the AM4 version.

[attached images: Raven Ridge roadmap slides]



The 35W version might get its GPU clocks up to 1.3GHz or so, and the desktop AM4 version might go above 1.6GHz and support DDR4-3200, effectively making Polaris 12 and GP108 worthless to pair with it.
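To put rough numbers on that claim (the 1.3GHz Raven Ridge clock is the speculation above; the GP108 shader count and ~1.47GHz boost are the GT 1030's public specs):

```python
def fp32_tflops(shaders, clock_ghz):
    # 2 FLOPs per shader per cycle (one FMA)
    return shaders * 2 * clock_ghz / 1000

# Raven Ridge: 11 NCUs x 64 shaders, at the speculated 1.3 GHz
rr = fp32_tflops(11 * 64, 1.3)
# GP108 (GT 1030): 384 CUDA cores at ~1.47 GHz boost
gp108 = fp32_tflops(384, 1.47)
print(f"Raven Ridge ~{rr:.2f} TFLOPs vs GP108 ~{gp108:.2f} TFLOPs")
```

On raw FP32 alone the APU would be well ahead; memory bandwidth is the caveat, since it would share DDR4 with the CPU.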




Why point out the obvious? I know that H models are two-chip and U/Y models use one chip.

This is an H-class chip, yet it's only one chip. The question remains: where is the PCH chip?

Not so obvious if you still didn't get it.
In H solutions, the PCH is located in the motherboard's PCB. In Y/U solutions, it's located in the main CPU's substrate.

In the picture linked by @iamw the PCH is partially visible under the M.2 SSD. Again, in the motherboard's PCB like in all H series.
 
Last edited by a moderator:
We have yet to see exactly how many CUs there will be, but if there are 24 NCUs at 1.2GHz in the higher-end model, as seen in some of the leaks, I think this is what to expect.
Though the size of the GPU in Intel's promo screenshot seems to point towards >200mm^2, which makes me question those 24 NCUs.

Has anyone tried to calculate the GPU die size using the HBM2 stack at the far left as comparison?
Also compare with the Kaby Lake-H die at the right, which is 123mm^2. The GPU seems to be twice as big.
From the photo in this post, and assuming the four holes around the SoC form a perfect square (for perspective correction), I get
  • ~222 mm^2 from the SK Hynix HBM2 dimensions (but I don't know which HBM2 is being used here),
  • ~205 mm^2 from the Kaby Lake-H 123 mm^2 die size.
I agree the 24 CU number seems odd. Could some CUs be disabled to reach 24?
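The estimation method above can be sketched as a simple pixel-to-area scaling from a reference die of known size. The pixel measurements below are made-up placeholders (chosen so the result lands near the ~205 mm^2 figure), not actual measurements from the photo:

```python
# Estimate an unknown die's area by scaling from a reference die of
# known area in the same (perspective-corrected) photo.
def estimate_area(ref_area_mm2, ref_px_w, ref_px_h, tgt_px_w, tgt_px_h):
    mm2_per_px2 = ref_area_mm2 / (ref_px_w * ref_px_h)
    return tgt_px_w * tgt_px_h * mm2_per_px2

# e.g. Kaby Lake-H (123 mm^2) measuring 100x82 px in the photo,
# GPU die measuring 130x105 px (illustrative numbers only)
area = estimate_area(123.0, 100, 82, 130, 105)
print(f"estimated GPU die: ~{area:.0f} mm^2")
```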
 
Last edited:
From the Chiphell photo here, and assuming the four holes around the SoC form a perfect square (for perspective correction), I get
  • ~222 mm^2 from the SK Hynix HBM2 dimensions (but I don't know which HBM2 is being used here),
  • ~205 mm^2 from the Kaby Lake-H 123 mm^2 die size.
Sounds like what should be a Polaris 10 with a few mm^2 shaved off from trading a 256bit GDDR5 bus for a single 1024bit HBM2.
Except Polaris 10 has 36 CUs, which is 50% more than the rumored 24 NCUs in this solution.



I agree the 24 CU number seems odd. Could some CUs be disabled to reach 24?
Could be. Another thing that gets me is the supposed VR certification for the higher-end models.
The lowest-end VR solution that I know of is the RX470, which does close to 5 TFLOPs FP32.
A 24 NCU GPU would need over 1.5GHz to reach that kind of performance. And if Vega 10 is used as reference, those clocks are really hard to reach with a TDP limit of 75W or so.
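That clock figure follows from the usual FLOPs formula (64 shaders per CU, 2 FLOPs per shader per clock):

```python
# Clock needed for a 24-NCU part to reach RX 470-class (~5 TFLOPs) FP32
ncus = 24
shaders = ncus * 64               # 64 shaders per CU/NCU
target_tflops = 5.0
clock_ghz = target_tflops * 1000 / (shaders * 2)
print(f"required clock: ~{clock_ghz:.2f} GHz")  # ~1.63 GHz
```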

So the way I see it, here are some options:

- There are way more than 24 CUs/NCUs, with the full count only enabled in e.g. a high-end NUC version
- The clocks can go up to ~1.6GHz, but only in a high-end NUC version
- A combination of the above (e.g. 28 NCUs/CUs at 1.35GHz)
- The VR "stamp" doesn't refer to the original R9 290 / RX470 performance level, but it's rather based on the "Minimum VR" specs dictated by Oculus when they enabled asynchronous timewarp.
 
You said in your previous post that you could see EMIB being used only in the MacBook Pro since competing Windows laptops don't have enough volume. But Nvidia isn't used in MacBooks and therefore wouldn't be affected by EMIB.
I meant that no Windows laptop manufacturer has enough influence to force Intel and AMD to work together on an integrated solution. Apple uses both Intel and AMD products, buys big batches of top-end, high-profit CPU/GPU models, and is nowadays developing their own CPUs and GPUs. If Apple wants more integration and higher memory bandwidth (their iPad Pro exceeds an Intel i7 in bandwidth), then Intel and AMD will provide that. That's my prediction.

Why else would the product launch of a chip like this be so strange? No news about products using this chip at all. Of course, I might be completely wrong here, and this announcement was made early because Raja Koduri joined Intel. Who knows. I am just speculating. Nobody expected this product, so speculation is warranted.
 
Raven Ridge will scale up to 35W in the mobile version and up to 95W in the AM4 version.

[attached images: Raven Ridge roadmap slides]



The 35W version might get its GPU clocks up to 1.3GHz or so, and the desktop AM4 version might go above 1.6GHz and support DDR4-3200, effectively making Polaris 12 and GP108 worthless to pair with it.






Not so obvious if you still didn't get it.
In H solutions, the PCH is located in the motherboard's PCB. In Y/U solutions, it's located in the main CPU's substrate.

In the picture linked by @iamw the PCH is partially visible under the M.2 SSD. Again, in the motherboard's PCB like in all H series.

12 Next Gen GFX Cores and 95W for the desktop RR-APU. Not really.
 
No, 11 nCUs with 10 being active for 2700U and 8 for 2500U.
That alone doesn't invalidate the whole roadmap though, as it could have been a late change.

Regardless, AMD themselves have confirmed that Raven Ridge will be available for socket AM4. If the Core i7 7700K has a 91W TDP then why wouldn't the AM4 version of a multiplier-unlocked Raven Ridge go up to a similar TDP as well?
And there's really no reason for AMD not to release lower-binned Raven Ridge FP5 parts at 35W, either.




No unboxing video and no release-date reviews. That's as strange as it gets IMO.
 
Last edited by a moderator:
It's not strange, because HP, Lenovo, etc. send review samples to reviewers, not AMD. Some bigger online magazines buy laptops and review them.

It's not strange, but it tells you how much OEMs are interested in Raven Ridge: not a whole lot. Maybe Intel is playing shenanigans again? Or the AMD brand on laptops got so burned in recent years that they have no confidence in sales. Unfortunately for me, that means I ended up ordering a Dell XPS 13 with the i7-8550U (a little over my budget, but Dell is giving a £100 discount plus £100 cashback, so I find it worth it). I was holding out for the release, but the lack of benchmarks and reviews just made me lose my appetite for it. I can understand that supply would come later, but there needed to be at least some previews of the hardware itself...
 