AMD Ryzen CPU Architecture for 2017

No, no, they didn't go through official channels, it was a leak.

Anyway, Sam also says that the L1 has a latency of 4~5 cycles (7~8 when loading into FP pipes), the L2's latency is 12 cycles, and the L3's, 35 cycles. I might have missed something, but I think this is new information.
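Latency figures like these are usually measured with a pointer-chasing microbenchmark: a randomly ordered cyclic chain defeats the hardware prefetchers, so each hop pays the full latency of whichever cache level holds the working set. A minimal sketch in Python (interpreter overhead dominates here, so this illustrates the technique rather than reproducing the cycle counts; real tools do this in C or assembly):

```python
import random
import time

def make_chain(n):
    """Build a random cyclic pointer chain over n slots. Chasing it
    defeats hardware prefetching, so each hop pays the full latency
    of whichever cache level holds the working set."""
    order = list(range(1, n))
    random.shuffle(order)
    chain = [0] * n
    prev = 0
    for idx in order:
        chain[prev] = idx
        prev = idx
    chain[prev] = 0  # close the cycle back to slot 0
    return chain

def chase(chain, hops):
    """Follow the chain for `hops` steps; return nanoseconds per hop."""
    i = 0
    t0 = time.perf_counter()
    for _ in range(hops):
        i = chain[i]
    return (time.perf_counter() - t0) / hops * 1e9

# A working set that fits in L1 vs. one that spills far beyond L3:
small = make_chain(1024)      # a few KB of indices
large = make_chain(1 << 21)   # tens of MB as a Python list
for name, c in (("small", small), ("large", large)):
    print(name, round(chase(c, 200_000), 1), "ns/hop")
```

A chain that fits in L1 should chase measurably faster per hop than one spanning tens of MB; sweeping the chain size is exactly how per-level latencies like 4~5 / 12 / 35 cycles are teased apart.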
It is, and it's pretty good news.

It's quite amazing how far AMD has managed to come in just one generation, from something like BZ to something 180° in the opposite direction like Zen. It will be interesting to see how this design evolves in the near future, but things look really good right now.

Sent from my HTC One using Tapatalk
 

Well, they "took their time" though...

I'm curious about the real product, turbo frequencies, temp, etc. I'm good with my 5820k@4.2ghz, but for the sake of competition, I hope AMD delivers.
 
(I'd expect a base clock of 3.5GHz, with Turbo up to 3.8GHz, or perhaps a bit more. That should be enough to beat most Intel CPUs at most multithreaded tasks, though the Core i7-7700K (with its 4.2GHz base clock and 4.5GHz Turbo) should be untouchable in games, at least in non-DX12/Vulkan games.)

By the way, Sam from CPC also says that the Zen-based APU will have 11 CUs (704 ALUs) at 1.2GHz, with 4MB of L3. It will also decode H.265 (Main 10) at 4K@60FPS. I can't remember whether that's news.

Interestingly, Summit Ridge (the 8-core, big Ryzen) is actually an SoC. Beyond the dual DDR4-3200 controllers, it features 8 SATA 3.0 links, three USB 3.0 links, four (yes, the article says four) Ethernet links, HDA support, etc. There are also two PCIe 3.0 controllers with 16 lanes each. Of those 32 lanes, 8 are dedicated to internal communication between all the IP blocks, while the remaining 24 can be used externally: 16 for graphics, 4 for external storage (NVMe, SATA Express) and 4 to connect to a southbridge. Obviously, the latter isn't strictly necessary and is entirely optional.

Presumably, those 16 graphics lanes can be split into 2×8 in Crossfire configuration.
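The lane budget described above is simple arithmetic, but it's easy to sanity-check. A small sketch (the constants are just the figures from the article as quoted here, not confirmed specs):

```python
TOTAL_LANES = 2 * 16   # two PCIe 3.0 controllers, 16 lanes each
INTERNAL = 8           # reserved for communication between the IP blocks

external = TOTAL_LANES - INTERNAL            # 24 lanes usable off-chip
graphics, storage, southbridge = 16, 4, 4
assert graphics + storage + southbridge == external

def split_graphics(lanes=16, gpus=1):
    """Divide the graphics lanes evenly across GPUs, e.g. 2x8 for Crossfire."""
    assert lanes % gpus == 0, "lanes must split evenly"
    return [lanes // gpus] * gpus

print(split_graphics())        # [16]
print(split_graphics(gpus=2))  # [8, 8]
```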

Given the lack of a southbridge, and the fairly reasonable 95W TDP, I'd expect AM4 motherboards to be very affordable. Motherboard manufacturers might not be thrilled about this, but at this point it's hardly a surprise.
 
Well, they "took their time" though...

I'm curious about the real product, turbo frequencies, temp, etc. I'm good with my 5820k@4.2ghz, but for the sake of competition, I hope AMD delivers.
It may look like a long time, but it's just the normal time AMD takes to change architectures, so it's one generation for them, and with a lot of first implementations as well (14nm, SMT, etc.). Coming from BZ to fighting Intel on equal terms in only four years is pretty impressive.
 
Interestingly, Summit Ridge (the 8-core, big Ryzen) is actually an SoC. Beyond the dual DDR4-3200 controllers, it features 8 SATA 3.0 links, three USB 3.0 links, four (yes, the article says four) Ethernet links, HDA support, etc. There are also two PCIE 3.0 controllers with 16 lanes each.

This is sounding better and better. Does it support ECC? I guess Intel might be able to counter with higher wattage Xeon-D SKUs, but otherwise, this is at the top of my list for my next desktop CPU.
 
Even if Intel does, it'll be a niche market (the top of the top), and AMD could fight in the rest, the vast majority of the market below.

I don't think Zen will support ECC, because that's Naples territory.

Sent from my HTC One using Tapatalk
 
Some of this I/O might be disabled on desktop, perhaps not connected to the socket. E.g. the 4x 10GbE might be there on the die, but might be expensive to support on the motherboard and would even eat into the TDP, so it'd be walled off and only enabled in Naples.
Similarly with the eight SATA ports: on existing AM4 motherboards, I don't believe that many SATA ports come out of the CPU - I only know of the aforementioned 4 lanes for storage, which might typically be partitioned as two SATA ports and one x2 PCIe slot.

What I believe to be likely, in addition to Naples and AM4 (assuming there really is that much I/O built into the die), would be a server version soldered onto the motherboard: low wattage just like the Xeon D, no unused display connectors.
That would come out later, as I think wrestling out a place in the more traditional two-socket, high-wattage, high-perf/watt server market is a bigger priority. The idea that people would strive to replace servers with 4x-10x smaller servers, or even 100x microservers, turned out to be hilariously wrong.
 
This is sounding better and better. Does it support ECC? I guess Intel might be able to counter with higher wattage Xeon-D SKUs, but otherwise, this is at the top of my list for my next desktop CPU.

Yes, ECC is mentioned in the article, and it's necessarily available in hardware, since Naples is nothing more than an MCM based on four Summit Ridge dies. Whether commercial SKUs (as opposed to this ES) actually support ECC in practice is another story.
 
Interestingly, Summit Ridge (the 8-core, big Ryzen) is actually an SoC. Beyond the dual DDR4-3200 controllers, it features 8 SATA 3.0 links, three USB 3.0 links, four (yes, the article says four) Ethernet links, HDA support, etc. There are also two PCIE 3.0 controllers with 16 lanes each. Of those 32 lanes, 8 are dedicated to internal communication between all the IP blocks, while the remaining 24 can be used externally: 16 for graphics, 4 for external storage (NVMe, SATA Express) and 4 to connect to a southbridge. Obviously, the latter isn't strictly necessary and entirely optional.

It's curious that the controllers lose 8 lanes to internal communication. The southbridge complex itself seems to be modularized, but I feel like the bandwidth offered indicates it's not part of the main data fabric.

Earlier in the thread, I tried to reconcile some of the slides and rumors surrounding Summit Ridge, the HPC APU, and the server variants.
One of the items was trying to make sense of the claim that the HPC APU (using Zeppelin) was going to have 64 lanes of PCIe while at the same time being able to interface with 4 GMI links.
The guess at the time was that GMI was overlaid on the PCIe links, but if Zeppelin is 2x Summit Ridge, the above information would contradict that guess: Zeppelin would need more than 32 lanes per chip to have anything left over to connect to the GPU.

Since then, Gen-Z was announced, and it uses Ethernet (IEEE 802.3) for its physical layer. It's not coherent, but is agnostic to coherence primitives being passed through it. That might free up the PCIe lanes, although it's still not a full match, since Summit Ridge is reserving links for some reason and some of the disk I/O doesn't add up.
This might give an idea as to why AMD is showing up in OpenCAPI (PCIe), CCIX (PCIe), and Gen-Z (IEEE 802.3).

That doesn't rule out Zeppelin being something other than a straightforward doubling of Summit Ridge, or possibly a more flexible link strategy where lane reservations change or can be augmented depending on whether GMI, an MCM, or a GPU is being linked. (ed: For example, if GMI goes over Ethernet, this might allow a Vega GPU's own x16 PCIe link to be used externally. This implies a Vega variant with a significant amount of network communication capability, if there's a standalone version.)

Motherboard manufacturers, however, might not be thrilled about this, but at this point it's hardly a surprise.
If there's a cost reduction of some kind, it might encourage some board makers to not sideline AMD as much, or at least leave them fewer corners to cut.
However, that would depend on the quality of the IP AMD has integrated. AMD has some partnerships for southbridge IP that raised some questions as to overall competitiveness. I'm not sure where its current and prior generation CPU memory controllers were sourced, but they haven't impressed either.
 
Don't forget that PGA boards are cheaper as well. And the 6- and 4-core Zen parts will have lower TDPs, maybe 65W, so board prices won't be a problem for AMD.
 
It's curious that the controllers lose 8 lanes to internal communication. The southbridge complex itself seems to be modularized, but I feel like the bandwidth offered indicates it's not part of the main data fabric.
It feels more like the Ethernet, USB and SATA are what occupy those eight (internal) lanes. That is actually the case in the current APUs, where the integrated FCH sits behind the GNB, which owns the PCIe root complex.

Earlier in the thread, I tried to reconcile some of the slides and rumors surrounding Summit Ridge, the HPC APU, and the server variants.
One of the items was trying to make sense of the claim that the HPC APU (using Zeppelin) was going to have 64 lanes of PCIe while at the same time being able to interface with 4 GMI links.
The guess at the time was that GMI was overlaid on the PCIe links, but if Zeppelin is 2x Summit Ridge the above information would contradict Zeppelin needing more than 32 lanes per chip to have anything left over to connect to the GPU.
IMO three to four 16-lane links are highly possible, considering that the 32-core Naples is allegedly an MCM of four 8-core chips. Assuming the chips in the MCM are fully interconnected, that would take three links per chip. Accounting for a fourth link would put four 16-lane links available for external connections.

Then in the workstation APUs' case, two links can be used to glue the two CPU dies together, while the rest (2 per die) goes to the GPU. Though in this case, I am not sure if it would be possible to share a socket with Naples.
 
IMO three to four 16-lane links are highly possible, considering that 32-core Naples is allegedly MCM of four 8-core chips.
Is this 4 Summit Ridge chips or something different?
The claim is that Summit Ridge has 32 lanes per-die, and 8 of them are unavailable off-die.
That's 1.5 16-lane links.

Assuming the chips in the MCM would be fully interconnected, that would take three links per chip. Accounting of a fourth link would put four 16-lane links available for external connections.
There are rumors that Naples has 128 external lanes, and making the dies fully connected means there's a negative number of links available to the outside world. Something would need to be invalidated, since Summit Ridge provides no room for that much external connectivity if PCIe is limited to 24 external lanes per chip.

Then in the workstation APUs' case, two links can be used to glue the two CPU dies together, while the rest (2 per die) goes to the GPU. Though in this case, I am not sure if it would be possible to share a socket with Naples.
Since the HPC APU is supposedly using Zeppelin, does it have more than 2x the claimed link count of Summit Ridge?

There's still a mathematical pathway to getting 64 links of external connectivity with a 2x Summit Ridge scenario, even if that's only 48 lanes available externally from the CPUs, but only if, via some coherent magic, a Vega GPU's own x16 PCIe link can be used.
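The link-counting argument in these last few posts can be checked in a few lines. A sketch under the thread's assumptions (four dies, 16-lane links, a full die-to-die interconnect; all rumored, not confirmed):

```python
def mcm_link_budget(dies, links_per_die, lanes_per_link=16):
    """Count the links (and lanes) left for external I/O when every
    pair of dies in the MCM gets a dedicated point-to-point link."""
    internal_links = dies * (dies - 1) // 2   # full interconnect
    total_ports = dies * links_per_die
    # each internal link consumes one link port on each of its two dies
    external_links = total_ports - 2 * internal_links
    return external_links, external_links * lanes_per_link

# Four dies with four 16-lane links each: three links per die go to
# the other dies, one is left over per die.
print(mcm_link_budget(4, 4))  # (4, 64)

# The rumored 128 external lanes would instead require five links per die:
print(mcm_link_budget(4, 5))  # (8, 128)
```

So a fully connected four-die package and 128 external lanes can't both hold with only four 16-lane links per die, which is the contradiction being pointed out.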
 
Is this 4 Summit Ridge chips or something different?
The claim is that Summit Ridge has 32 lanes per-die, and 8 of them are unavailable off-die.
That's 1.5 16-lane links.

According to Canard PC, it's 4 × Summit Ridge.
 
The Canard PC magazine had a word in binary saying "ZenOCAir@5G". Hmm, what could that mean... :oops:
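Decoding a binary Easter egg like that is just grouping the bits into 8-bit bytes and mapping them to ASCII. A round-trip sketch (I don't have the magazine's exact bit string, so the encoding below is reconstructed from the decoded phrase):

```python
def to_bits(text):
    """Encode text as space-separated 8-bit binary groups."""
    return " ".join(f"{ord(c):08b}" for c in text)

def from_bits(bits):
    """Decode 8-bit binary groups (spaces optional) back to text."""
    bits = bits.replace(" ", "")
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

msg = "ZenOCAir@5G"
encoded = to_bits(msg)
print(encoded[:26] + " ...")   # 01011010 01100101 01101110 ...
assert from_bits(encoded) == msg
```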

Has AMD put a Samsung 5G LTE module in it? I joke...

Well, if they overclock that well, it will be much more interesting, and I feel the quad-core could have a nice base/turbo clock. (We still need to see the scaling.)
 