AMD Ryzen CPU Architecture for 2017

Both Windows XP and Server 2003 have x64 versions, which can run on Ryzen. The issue is therefore with existing deployments rather than new ones.

What that article seemed to imply was that this info was under NDA for a while. For me, that's the alarming bit. Doesn't that sound unprofessional on AMD's part?

The article indicated a section of AMD's processor documentation is under NDA, which I think is the overall errata section for the chip, not just this specific problem.
Bugs are inevitable, and one possible reason for the NDA is that there are other bugs in that section that AMD doesn't want to talk about yet.

As far as professionalism goes, the x86 vendors have historically been above-average in how much they have publicly documented hardware faults, even if timeliness or transparency are imperfect. In various ways, x86 in modern times has regressed to the mean when it comes to openness, but whether that's in play for the Ryzen errata section is unclear.


One random coincidence is that the TDP of the Banded Kestrel or River Hawk embedded APUs is roughly in the acceptable range for AMD's various processor/DRAM stacks. The logic layer is capped at ~10W, and a stack of DRAM could be ~4-5W. CPU hot spots might make the situation too disparate from the HPC concepts, however.
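As a rough sanity check on that power budget, the figures quoted above can be added up directly (a minimal sketch; the ~10 W logic cap and ~4-5 W DRAM figure are the estimates from the post, not measured values):

```python
# Back-of-envelope power budget for a logic-on-DRAM stack, using the
# ballpark figures quoted above (assumptions, not measurements).
LOGIC_CAP_W = 10.0   # assumed cap for the logic layer
DRAM_STACK_W = 4.5   # midpoint of the ~4-5 W DRAM stack estimate

stack_total_w = LOGIC_CAP_W + DRAM_STACK_W
print(f"Estimated stack power: {stack_total_w:.1f} W")
```

That ~14-15 W total is what puts the stack in roughly the same range as the low-power embedded APU TDPs mentioned, though as noted, localized CPU hot spots could still break the comparison.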
 
AMD probably just doesn't bother to market it anymore. Its ROCm stack is built upon parts from the HSA effort anyway, and AMD can very well roll cache coherency and platform atomics for Raven Ridge in ROCm without HSA compliance at launch.

It costs more or less the same effort as building a normal monolithic SoC, with extra cost in assembly. TBH it would just be a giant APU with a different configuration of GPU local memory. Whether it is marketable is another story.

Why would it? The so-called "HBCC" already addresses the issue, and the same approach is working very well for Nvidia under the name Unified Virtual Memory. The HMM patch in Linux is also on its way upstream. Not to mention that the APU may already be able to access pageable system memory directly without it.

Well, yeah, but you'd need to have some kind of main (DDR4?) memory available in the system for that, which would require a standard memory controller + PHY… unless of course you were to use APUs on add-in cards only, but that wouldn't be very convenient.
 
Well, yeah, but you'd need to have some kind of main (DDR4?) memory available in the system for that, which would require a standard memory controller + PHY
TBH I don't see why this is a problem. Such a memory controller has been in literally every machine with a dGPU, serving as the platform memory controller, for decades. Now it is just a matter of integrating what is literally a dGPU with its own local memory subsystem and leveraging the interconnect for performance and/or power efficiency.
 
The data center "Naples" chip is officially branded as EPYC.

 
Jim Anderson just showed a slide that claims the Ryzen ultra-mobile SoCs coming Q3 will bring Vega graphics, 55% more CPU performance, 40% more GPU performance and 50% less power consumption.
I'm guessing it's the big Raven Ridge (4-core, 11CU) that will consume half of Intel's current 45W models (22.5W).


Also, Threadripper HEDT just announced. New socket, 16-cores, quad-channel.
 
big Raven Ridge (4-core, 11CU) that will consume half of Intel's current 45W models (22.5W).
Well heck, with that kind of TDP I can certainly forgive lack of HBM :yep2:
Vegan FTW!

"All-new HEDT platform" implies not pin compatible after all...
 
Jim Anderson just showed a slide that claims the Ryzen ultra-mobile SoCs coming Q3 will bring Vega graphics, 55% more CPU performance, 40% more GPU performance and 50% less power consumption.
I'm guessing it's the big Raven Ridge (4-core, 11CU) that will consume half of Intel's current 45W models (22.5W).
7th Gen APU. Bristol Ridge.
 
I need that right into my veins! I would go $700 for this.
Core i7-6950X (10 cores, 3.0 GHz) costs $1723. It has 62.5% of the core count and a lower clock rate. Rumors (a few months ago) speculated on a $999 price point for the 16-core (32-thread) Ryzen flagship. $999 would be a steal for this CPU; at that price point it would sell like hot cakes. I would assume that quad-channel memory solves Ryzen's memory bottlenecks. Eagerly waiting for benchmarks.

It's going to be interesting to see how Intel prices the forthcoming i9 CPUs, especially the 12-core flagship. That's going to be the main competitor for the 16-core AMD CPU. Maybe they need to lower prices a bit; I'd expect something around $1500. Even at that price point, it would be a steal compared to the current 12-core (single-socket) Xeon flagship (which is two generations older in architecture and lower clocked).
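The value argument in these posts reduces to simple price-per-core arithmetic. A minimal sketch using only the figures quoted in the thread (the $999 Ryzen figure is an unconfirmed rumor):

```python
# Price-per-core comparison from the figures quoted in the thread.
# The 16-core Ryzen price is a rumor, included for illustration only.
cpus = {
    "i7-6950X (10 cores, $1723)": (1723, 10),
    "Ryzen flagship (16 cores, rumored $999)": (999, 16),
}

for name, (price_usd, cores) in cpus.items():
    print(f"{name}: ${price_usd / cores:.2f} per core")

# Core-count ratio mentioned in the post: 10/16 = 62.5%
print(f"i7-6950X has {10 / 16:.1%} of the Ryzen's core count")
```

At the rumored price, the Ryzen part would come in at roughly a third of the i7-6950X's cost per core, which is why the posts call it a steal.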
 
Regarding Ryzen's memory bottlenecks, the speculation was that they're partially caused by interconnect fabric limitations. Quad-channel doesn't appear to help there.

But yeah, the clocks look good for this one, and so do the Skylake-X's.

I find this exciting, as it means such high-end desktop workstations give up less performance when gaming.
 
But yeah, the clocks look good for this one, and so do the Skylake-X's.
I expect the 12 core Intel i9 to trade blows with the 16 core AMD R9 in multi-threaded benchmarks, with R9 being slightly ahead in general. Broadwell was already winning clock to clock, and Skylake-X improves both IPC and clock rate. Should be enough to reclaim most of the performance lost by having 4 cores less. In productivity apps and games which commonly do not scale much beyond 4 cores the i9 will be obviously faster thanks to higher clock rate, better IPC and the big shared L3 cache. Thus i9 should be generally a bit better CPU. But I also expect the 12 core i9 to cost at least 50% more than the 16 core R9.
 
I expect the 12 core Intel i9 to trade blows with the 16 core AMD R9 in multi-threaded benchmarks, with R9 being slightly ahead in general. Broadwell was already winning clock to clock, and Skylake-X improves both IPC and clock rate. Should be enough to reclaim most of the performance lost by having 4 cores less. In productivity apps and games which commonly do not scale much beyond 4 cores the i9 will be obviously faster thanks to higher clock rate, better IPC and the big shared L3 cache. Thus i9 should be generally a bit better CPU. But I also expect the 12 core i9 to cost at least 50% more than the 16 core R9.

Has Intel given clock speeds for them? I have only seen the leaked benchmark ones (I ask because I may have missed an announcement), which had an effectively high turbo clock (4.3 GHz on the 10-core) but a low base clock (3.1-3.3 GHz). Those are slightly odd numbers (more than 1 GHz of turbo headroom), which made me wonder whether they weren't a bit overclocked.

It is clear that Intel has every reason to play the MHz race on single-core (turbo mode). That said, I'm not quite sure the 12-core will really match the 16-core in multithreaded scenarios.
 
It is clear that Intel has every reason to play the MHz race on single-core (turbo mode). That said, I'm not quite sure the 12-core will really match the 16-core in multithreaded scenarios.
The quad-core i7 7700K (8 threads) fares very well against the 6-core (12-thread) Ryzen (+50% cores) in MT benchmarks. Most software doesn't scale perfectly to 32 threads. I expect the 24-thread Intel CPU with better IPC to be pretty close. Let's wait for benchmarks.
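The intuition that 24 well-fed threads can stay close to 32 follows from Amdahl's law: once a workload has any serial fraction, extra threads give diminishing returns. A minimal sketch (the 90% and 95% parallel fractions are illustrative assumptions, not benchmark data):

```python
def amdahl_speedup(parallel_fraction: float, n_threads: int) -> float:
    """Ideal speedup over one thread for a workload whose parallel
    portion scales perfectly across n_threads (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_threads)

# Illustrative parallel fractions (assumptions, not measurements):
for p in (0.90, 0.95):
    s24 = amdahl_speedup(p, 24)
    s32 = amdahl_speedup(p, 32)
    print(f"p={p:.2f}: 24 threads -> {s24:.1f}x, "
          f"32 threads -> {s32:.1f}x, ratio {s32 / s24:.2f}x")
```

Even with 33% more threads, the ideal gain is only about 7-12% at these parallel fractions, which is why a per-thread IPC and clock advantage could plausibly close the gap.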
 
The quad-core i7 7700K (8 threads) fares very well against the 6-core (12-thread) Ryzen (+50% cores) in MT benchmarks. Most software doesn't scale perfectly to 32 threads. I expect the 24-thread Intel CPU with better IPC to be pretty close. Let's wait for benchmarks.

I agree that this completely depends on the software (I'm not talking about gaming, of course).
 
I agree that this completely depends on the software (I'm not talking about gaming, of course).
I am personally mostly interested in C++ compile benchmarks. If the 16-core Threadripper beats the 12-core i9 in C++ compile benchmarks, my choice will be clear. Both will be perfectly adequate for gaming (high turbo clocks in low-core-count situations). I am a game dev after all, so my CPU choice needs to run games as well. The i9 will certainly be a bit better for gaming at 1080p with a 144 Hz monitor, but I have a Titan X + 60 Hz 4K display on my workstation. I don't play at 1080p.
 
Don't forget that SMT performs better than HT in MT applications, so it's not only a core-count advantage: the extra threads from SMT perform better than the extra threads from HT. It'll most probably cost less as well; hopefully they'll announce a price point at Computex.
 
Don't forget that SMT performs better than HT in MT applications, so it's not only a core-count advantage: the extra threads from SMT perform better than the extra threads from HT. It'll most probably cost less as well; hopefully they'll announce a price point at Computex.
Hyper-Threading is simply Intel's marketing name for their SMT implementation. I don't see any big differences between Intel's and AMD's SMT implementations. Do you have links to professional workload benchmarks showing better scaling with AMD's SMT implementation vs Intel's?
 