AMD Ryzen CPU Architecture for 2017

By the way, would having more than 2-way SMT be beneficial for customers? And what changes do you think we'll see in Zen+? Yes, it's early, but I'm bored :p
 
I don't know about the performance. From what I understand there are two versions, FP5 and AM4, both up to 4 cores / 8 threads and coupled with Vega (12 and 16 CUs), at 4-35 W and up to 95 W respectively for the AM4 version. The AM4 version has HBM2 + DDR4.
Are you sure the AM4 version will have HBM2?
I hope there will be at least some models with HBM2, but I haven't seen it suggested anywhere so far.

I'm also not sure about the 16 CU amount. All I've seen so far is 12 CU for the 4-35W mobile parts.
 
On reflection, I wonder if a performance-targeted quad-core Zen with higher clocks would be better off being a Summit Ridge with half its cores off. The power delivery would be sized to the 8-core, and the MIM capacitor layer might provide the remaining cores with more decoupling capacity per core than a single CCX, depending on how much is truly per-core.
The downside could be the now split LLC versus quad when it comes to application performance.
 
Are you sure the AM4 version will have HBM2?
I hope there will be at least some models with HBM2, but I haven't seen it suggested anywhere so far.

I'm also not sure about the 16 CU amount. All I've seen so far is 12 CU for the 4-35W mobile parts.
A rumor from Bits 'n Chips mentions two Raven Ridge APUs, one with 12 CUs and DDR4 and the other with 16 CUs and DDR4 + HBM2. But I don't think there's anything official (yet) regarding Raven Ridge and HBM.
 
A rumor from Bits 'n Chips mentions two Raven Ridge APUs, one with 12 CUs and DDR4 and the other with 16 CUs and DDR4 + HBM2. But I don't think there's anything official (yet) regarding Raven Ridge and HBM.

This is what I found, though from another site:

Socket        FP5            AM4
TDP           4-35 W         35-95 W
CPU uArch     Zen            Zen
Core/Thread   4/8            4/8
GPU uArch     Vega           Vega
GPU CUs       12             16
IMC           DDR4           DDR4 + HBM2
Process Node  14nm FinFET    14nm FinFET
Die Size      ~170 mm²       ~210 mm²

My bad, I was under the impression it had been confirmed, but as far as I can see, nothing is official yet. Maybe it just made too much sense to me (a good thing, with that memory setup).
 
A rumor from Bits 'n Chips mentions two Raven Ridge APUs, one with 12 CUs and DDR4 and the other with 16 CUs and DDR4 + HBM2. But I don't think there's anything official (yet) regarding Raven Ridge and HBM.

That could explain why they're going again with the "4GB is more than enough" statements. A Raven Ridge with a 16CU GPU and a single 4GB HBM2 stack would be perfect for so many applications.
Now if only those AM4 HBM2 APUs could find their way into some laptops..
 
That could explain why they're going again with the "4GB is more than enough" statements. A Raven Ridge with a 16CU GPU and a single 4GB HBM2 stack would be perfect for so many applications.
Now if only those AM4 HBM2 APUs could find their way into some laptops..

In my opinion, if RR has HBM2 (I don't think it does), it will be a 1 or 2 GB stack. That's kind of the whole point of the HBCC in Vega (RR is Vega-level IP). So 2x DDR4 and 1-2 GB of HBM.

I also think you're massively off in terms of Zen's performance in the sub-35 W space. Look at Carrizo's performance (the parts actually running dual channel) vs Intel. That's 28nm vs 14nm FinFET on a "poor" core. So with Zen you get the 40% per-clock performance improvement at the same power, then the benefit of FinFETs for low power, and then the actual process improvement on top. Zen isn't going to sit between the i3 and i5; it's going to be at i7 level on the CPU side and it's going to pummel it on the GPU side.
 
In my opinion, if RR has HBM2 (I don't think it does), it will be a 1 or 2 GB stack. That's kind of the whole point of the HBCC in Vega (RR is Vega-level IP). So 2x DDR4 and 1-2 GB of HBM.
I could see an argument for a 4-8GB HBM2 option that doesn't use DDR. Far more compact and power efficient for a SFF design or laptop without DIMMs.
 
In my opinion, if RR has HBM2 (I don't think it does), it will be a 1 or 2 GB stack. That's kind of the whole point of the HBCC in Vega (RR is Vega-level IP). So 2x DDR4 and 1-2 GB of HBM.
1 or 2GB would certainly make more sense for a 16 CU GPU, but only 4GB stacks are being produced.
Unless Hynix is still making HBM1 stacks for AMD's Fiji products and they use one of those: 1GB at 128 GB/s plus a large DDR4 pool at 40-60 GB/s is probably plenty for a GPU in the RX 460 performance range (rough numbers sketched below).
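
For rough context, here's a quick back-of-envelope bandwidth comparison. It's only a sketch under my own assumptions: a single HBM1 stack with a 1024-bit bus at 1 Gbps/pin, and a plain dual-channel (2x 64-bit) DDR4 interface; the DDR4 speeds are placeholders, not anything confirmed for Raven Ridge.

```c
/* Back-of-envelope peak memory bandwidth in GB/s.
 * Assumptions: one HBM1 stack = 1024-bit bus at 1 Gbps/pin,
 * dual-channel DDR4 = 2 x 64-bit at the listed transfer rate.
 */
#include <stdio.h>

static double bus_bandwidth_gbs(int bus_width_bits, double transfer_rate_gtps)
{
    /* bits per transfer * transfers per second, divided by 8 for bytes */
    return bus_width_bits * transfer_rate_gtps / 8.0;
}

int main(void)
{
    printf("HBM1 stack     : %.0f GB/s\n", bus_bandwidth_gbs(1024, 1.0));
    printf("DDR4-2400 dual : %.1f GB/s\n", bus_bandwidth_gbs(128, 2.4));
    printf("DDR4-3200 dual : %.1f GB/s\n", bus_bandwidth_gbs(128, 3.2));
    return 0;
}
```

Dual-channel DDR4-2400 lands at roughly 38 GB/s and DDR4-3200 at roughly 51 GB/s, which is where the 40-60 GB/s figure above comes from.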

Again, it's a shame if this doesn't come to laptops. Apple would probably give an arm and a leg for this.

I also think you're massively off in terms of Zen's performance in the sub-35 W space. Look at Carrizo's performance (the parts actually running dual channel) vs Intel. That's 28nm vs 14nm FinFET on a "poor" core. So with Zen you get the 40% per-clock performance improvement at the same power, then the benefit of FinFETs for low power, and then the actual process improvement on top. Zen isn't going to sit between the i3 and i5; it's going to be at i7 level on the CPU side and it's going to pummel it on the GPU side.

I don't think I'm massively off. For multitasking and DX12/Vulkan, sure, it'll be close to the Core i7 products. But for single-threaded programs and DX11 games (using discrete GPUs), everything Skylake and newer will be superior.
There's a reason why AMD is pricing the 8-core Ryzen 7 against the 4-6 core i7, the 4-6 core + SMT Ryzen 5 against the 4-core i5 without HT, and the 4-core Ryzen 3 against the 2-core + HT i3.
They know they'll be losing on single-threaded performance at the same power consumption.



I could see an argument for a 4-8GB HBM2 option that doesn't use DDR. Far more compact and power efficient for a SFF design or laptop without DIMMs.

Probably overkill in both bandwidth and price. Samsung has LPDDR4 packages of up to 6GB. Two of those in clamshell gets you 12GB; two clamshells side by side gets you a whopping 24GB of RAM occupying very little space.
 
$79.90 AMD FX-4350 versus Skylake i3-6100 (dual core, 3.7 GHz with Hyper-Threading):
http://www.anandtech.com/bench/product/1682?vs=1273

The i3-6100 is 200 MHz faster (3.7 GHz) than the Pentium G4560 (3.5 GHz). The Pentium doesn't have Hyper-Threading, meaning it is significantly slower in some multi-threaded tests. The FX-4350 beats the i3 in many benchmarks, and it would beat the 3.5 GHz Pentium in most benchmarks (as Kaby Lake has the same IPC as Skylake). The AMD FX-4350 is definitely not "worse in every possible way" compared to the Pentium G4560. The FX-4350 beats the Pentium especially badly in integer-heavy multi-threaded tasks.

The Pentium G4560 actually does have Hyper-Threading.
 
There's a reason why AMD is pricing the 8-core Ryzen 7 against the 4-6 core i7, the 4-6 core + SMT Ryzen 5 against the 4-core i5 without HT, and the 4-core Ryzen 3 against the 2-core + HT i3.
They know they'll be losing on single-threaded performance at the same power consumption.
That's an assumption you and I don't share. How do you go from matching performance and power consumption in power-hungry workloads like rendering and encoding, to then losing on power once clocks and cores are scaled down in a comparable manner? Also, The Stilt has said that Zen's power consumption per core is relatively low*; he hasn't said whether he is under NDA or has a chip (of course), but he has confirmed he has several X370 motherboards for testing :)

How does AMD take market share, revenue, and gross profit by pricing like Intel? Maybe you need to go back and look at things like the launch prices of Kentsfield or Yorkfield; you're the one who has been conditioned to spend $1700 USD on the high end. Back then, even with the drubbing those chips gave AMD, the high end was still around $500-700 USD.
AMD doesn't have the same cost structure as Intel: Intel has roughly 100k more employees and far higher OPEX, while the manufacturing costs themselves are very low.
AMD is still charging consumers up to $500 USD for a 180-190 mm² piece of silicon; I bet Nvidia wishes they could do that.

*= https://forums.anandtech.com/threads/summit-ridge-zen-benchmarks.2482739/page-213#post-38736348
 
There are Xeons based on Broadwell EP that are cut down to 4 cores with a 140W TDP. Single-core turbo can get to 4 GHz.
A design targets a certain level of power delivery and dissipation per core, and turbo already allows a single core to act a lot like it is nearly alone or in a low-count chip--with some added dissipation area for good measure. I'm not sure if single-core turbo can max out at 140W on a sustained basis like having all cores running.

The gains may be somewhat limited due to this, and getting more out of it may require designing a core or CCX for an increasingly niche level of power delivery/density that raises the question of who buys a value-oriented 4-core that needs a custom water loop. In that regard, modern GPUs take advantage of parallelism (area, pins, units) and specialization (dedicated cooler, own VRMs) to eat up the extra TDP over a physically smaller CPU.

The power density problem can be acute enough that the measures taken to combat it can themselves hurt a core's expected performance. A core's thermal density drops if it has more area, but there's a limit to how much heat can be transferred to neighboring power-gated silicon (having an inactive GPU or CCX can actually help), and a smaller native quad has less of it. Adding dissipation area inside the core also increases wire length, which counters clock or power scaling.
I would like to see more consumer desktop CPU models at 95W-125W. Give the mainstream consumer the option to choose between an energy-efficient model and a performance-oriented model. Intel already has some energy-efficient models and K models, but only in the top price category. Clocking desktop chips a bit past the optimal power curve isn't a big problem in a desktop (big chassis, big cooler, always-active ventilation, power always connected). The 220W of the FX-9590 is obviously too much for consumer products, as it needs an extreme cooling solution, but 125W can be cooled efficiently with a normal air cooler.

A higher TDP would be especially important when AMD introduces a Vega + Zen APU. All the current Intel models with the high-end GPU (GT4e) are TDP-constrained. The 6785R has 72 EUs but only a 65W TDP; a roughly 3x faster GPU and a 30% lower TDP don't go together well. As a result we see a 700 MHz reduction in CPU clock rate compared to the 6700K, and you still can't run the CPU and the GPU at full steam without throttling the GPU. This is not an optimal product for entry-level gaming (not even talking about the price).
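
To put the "3x" in perspective, here is a rough peak FP32 throughput sketch, under my own assumptions: a Gen9 EU issuing 16 FP32 FLOPs per clock, a GCN/Vega CU issuing 128 (64 lanes doing FMA), and placeholder clocks that are not confirmed figures for either the GT4e parts or Raven Ridge.

```c
/* Rough peak FP32 throughput in GFLOPS.
 * Assumptions: Gen9 EU = 16 FP32 FLOPs/clock, GCN/Vega CU = 128 FLOPs/clock
 * (64 lanes x 2 for FMA); the clock speeds are placeholders.
 */
#include <stdio.h>

static double peak_gflops(int units, int flops_per_clock, double clock_ghz)
{
    return units * flops_per_clock * clock_ghz;
}

int main(void)
{
    printf("GT2  (24 EU @ 1.15 GHz): %.0f GFLOPS\n", peak_gflops(24, 16, 1.15));
    printf("GT4e (72 EU @ 1.10 GHz): %.0f GFLOPS\n", peak_gflops(72, 16, 1.10));
    printf("Vega (16 CU @ 1.00 GHz): %.0f GFLOPS\n", peak_gflops(16, 128, 1.00));
    return 0;
}
```

Peak numbers say nothing about the TDP needed to sustain those clocks, which is exactly the constraint above.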

Hopefully AMD introduces aggressively priced quad-core Zens with a fat Vega GPU, and a 125W TDP to ensure that applications stressing both CPU and GPU (like games) run perfectly. AMD has the experience from consoles of integrating a fat GPU into an APU. This could be the perfect cost-efficient entry-level gaming desktop. Intel is pricing the GT4e models as luxury items, so there wouldn't be any competition in this field either. An iGPU is more cost-efficient to manufacture than a low-end discrete GPU (a board filled with electronics). AMD should be able to price it aggressively (less than a Pentium plus a cheap discrete GPU).
The Pentium G4560 actually does have Hyper-Threading.
I confused it with the Skylake Pentiums, which didn't have Hyper-Threading (but otherwise have identical IPC). The Kaby Lake Pentiums are definitely a nice improvement over the Skylake ones. However, both Skylake and Kaby Lake Pentiums do not support AVX. All other CPUs have supported AVX since Sandy Bridge (including the AMD Jaguar cores found in current-gen consoles). You will see a big performance difference in software that uses AVX extensively, and when future apps/games begin to require AVX, these Skylake/Kaby Lake Pentiums simply won't be able to run them. But future proofing isn't the main priority when getting the best value for the buck; the G4560 is definitely a highly competitive CPU right now. Still, the FX-4350 supports AVX and has more (integer) cores, so it might actually be more future proof.
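
As a minimal illustration of the "future apps may require AVX" point, a program can test for AVX at runtime and pick a code path accordingly. This sketch uses the GCC/Clang builtins, so it shows the idea rather than how any particular game actually does its dispatch.

```c
/* Minimal runtime AVX check using GCC/Clang builtins (x86 only).
 * On a Skylake/Kaby Lake Pentium this reports no AVX, while an FX-4350
 * or any Core i3/i5/i7 from Sandy Bridge onwards reports AVX support.
 */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();                 /* populate the CPU feature flags */
    if (__builtin_cpu_supports("avx"))
        printf("AVX supported: AVX code path available\n");
    else
        printf("No AVX: falling back to an SSE code path (or refusing to run)\n");
    return 0;
}
```

Software that ships only an AVX build of its hot loops is exactly the case where these Pentiums get locked out.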
 
It's like you're ignoring Raven Ridge on purpose :D

No, hence the reference to APUs, but my understanding is that it will be released later, perhaps several months later. I could be wrong about that, of course.

For what it's worth, 45W 8-core CPUs are an intriguing proposition as well, but I don't know that there's a market for that.
 
No, hence the reference to APUs, but my understanding is that it will be released later, perhaps several months later. I could be wrong about that, of course.

For what it's worth, 45W 8-core CPUs are an intriguing proposition as well, but I don't know that there's a market for that.

Outside of laptops (professional laptops used as "mobile" workstations), the only other thing is maybe all-in-ones...? But that's more a question for integrators than for CPU makers (specific demand for a specific case).
 
Outside of laptops (professional laptops used as "mobile" workstations), the only other thing is maybe all-in-ones...? But that's more a question for integrators than for CPU makers (specific demand for a specific case).
Could be useful in embedded systems. Power efficient micro servers, blades, and virtualization.
 
I don't think AMD plans on putting Summit Ridge into laptops, but I guess >4-core CPUs in desktop replacements could make some sense at 10nm.
 
A bit off topic, but I've always wondered why AMD never produced APU PCIe cards. What would stop them from putting Raven Ridge on a PCIe board with a lot of RAM, essentially an add-on SoC?

Would there be some value in adding a small CPU component to all their products on the other side of the PCIe bus?
 