AMD Ryzen CPU Architecture for 2017

From the reddit AMA:

Those 4W minimum on the mobile Raven Ridge aren't just for show.
:D

So we can finally have affordable and somewhat lightweight (not expecting an ultrabook; 2 kg would be fine) quad-core laptops. I have no need for a gaming laptop, but it is hard as hell to find a quad-core laptop that is not built for gaming and costs a max of 700 pounds!
 
Core 2 Duo is dead, but you can still use your Core 2 Quad for gaming.
That one has been dead as well. By the time the Core 2 Quad became relevant, people had moved on from Core 2 completely; the i7 was introduced. The Core 2 Quad now lags behind even the basic Pentium G.
http://www.techspot.com/article/1313-intel-q6600-ten-years-later/
Vulkan and DX12 are gaining traction. In 2 years there will be AAA game engines designed solely around Vulkan and DX12.
I am sorry, but too much of what you say is still not set in stone. Ryzen doesn't provide much of a boost in DX12 compared to Kaby Lake; in fact it seems to lose just as badly as in DX11, even in AMD's poster-child benchmark "Ashes". Too many optimistic variables: DX12 and Vulkan might not get "enough" traction, and 8 cores might not get traction due to disappointing or small sales.
We have already seen huge gains in some games with SMT disabled
Despite that, Ryzen 7 is still behind in those games. Some games actually scale worse with it disabled as well. Most titles just don't care though.
but it also helps a lot in cooking data and recompiling shaders. Ryzen will become the first AMD CPU in 10 years that's used by professional game developers.
It may help or it may not, we've seen how having competent hardware can mean nothing if market share doesn't increase significantly. Developers have the tendency to compile for the vendor with the largest market share.
Let's see how the 7700K (quad) performs vs 6-core and 8-core CPUs in 2 years. My bet is that the 1800X will have overtaken the 7700K in some of the latest AAA games.
Let's not forget that the 7700 is cheaper and faster right now, if consumers have to wait 2 years for Ryzen to gain traction, then I can understand people's decision not to wait, and they can't be faulted for that.

The original statement was that the media is focusing on a small minority of players ("those who have 144 Hz panels"). However, that is not true; the crux of the matter is certain usability right now and in the future, in which Ryzen doesn't inspire much confidence based on today's variables. Some would say too many things have to change to make Ryzen work in the future, and they can't be faulted for not wanting to stick around based on a myriad of "ifs".
 
Going for a 7700K purely for high-fps gaming now, and even likely for the next couple of years, absolutely makes sense. If you're the kind of PC builder that swaps out every couple of years, then I doubt you can go wrong.

IMO the industry is clearly moving to more cores/threads for gaming, not fewer with a focus on single-thread performance, especially as a result of the high-core-count consoles and games being developed on that platform. They absolutely have to be written to take advantage of highly threaded engines since the single-thread performance is so weak.

Everyone has their own opinion on how the industry is moving and will base their decisions upon those opinions. Who will be "right" in the end? Who the fuck knows.
 
Hard to understand some people's negativity on Ryzen. It is a huge comeback for AMD and will surely benefit us as consumers. Plus, it is a brand new architecture, so it has good prospects of bringing increasing parity with Intel. It is also quite energy efficient at the same time, even beating Intel somewhat, while Intel has been busier striving for efficiency than pushing absolute performance.
 
Nobody is saying that the 6900K isn't a future-proof CPU. It is a very good and very well balanced CPU indeed. I would personally feel more future proof with a 6900K than a 7700K, including for my gaming needs. But $1089 is too much for most people to spend on a CPU. The $499 of the Ryzen 1800X is more reasonable. The 1800X is not as good as the 6900K, but it has a fantastic price/performance ratio. This is the first time we have a competitive 8-core CPU in the same price class as fast quads.
Don't forget that the 1700 gives the same performance with an OC for $330, or less than a third(!) of the price.
 
The original statement was that the media is focusing on a small minority of players ("those who have 144 Hz panels"). However, that is not true; the crux of the matter is certain usability right now and in the future, in which Ryzen doesn't inspire much confidence based on today's variables. Some would say too many things have to change to make Ryzen work in the future, and they can't be faulted for not wanting to stick around based on a myriad of "ifs".

That's not true. In a real-life situation you will use a much less powerful GPU to play at 1080p or higher resolutions, and you will also have background apps like browsers, antivirus (yes, people still use them), a music player, etc. So in a real-life scenario the differences will be negligible; you will simply not see them.

And Ryzen is not more expensive than the 7700K; it's a little bit cheaper, the 1700 is $330. Either way, if people don't like Ryzen, then why did the 6900K win so many awards when it came out, having the same performance for more than three times the price?

I would wait for the 6-core to come out and see how things settle with everything patched and optimized. People don't realize the importance of Ryzen: for the first time we (normal people) have access to an 8-core CPU, so it is not an enthusiast feature anymore. The market will react and adapt, and in the near future we will get better performance not just from AMD but also Intel, and Intel will have to lower the prices of their CPUs to compete.
 
That one has been dead as well. By the time the Core 2 Quad became relevant, people had moved on from Core 2 completely; the i7 was introduced. The Core 2 Quad now lags behind even the basic Pentium G.
No Man's Sky launch proved that lots of people still use Core 2 Quad for gaming. That game required SSE4 and crashed on startup on Intel Core 2 and AMD Phenom. Lots of complaints.

I remember playing Xbox One and PS4 launch games just fine with my Core 2 Quad. It was the summer of 2014. I bought my Core 2 Quad in 2006. Eight years of gaming is pretty good for a single CPU. The Core 2 Duo didn't even last half of that. Reviewers recommended the Core 2 Duo for gamers; it was cheaper and slightly faster in games. Just like the i7 7700K currently is vs the Ryzen 1800X.
I am sorry, but too much of what you say is still not set in stone. Ryzen doesn't provide much of a boost in DX12 compared to Kaby Lake; in fact it seems to lose just as badly as in DX11, even in AMD's poster-child benchmark "Ashes". Too many optimistic variables: DX12 and Vulkan might not get "enough" traction, and 8 cores might not get traction due to disappointing or small sales.
AMD 8-cores are sold out already. Coffee Lake standard (non-HEDT) 6-core i7s will be coming soon (with dual-channel memory controllers = cheaper mobos). I am sure Intel will clock these chips at ~4.0 GHz. AAA game developers will certainly ensure that their games take advantage of Intel's latest and greatest. 4-core will not be the most important enthusiast CPU in the near future. Both Intel and AMD are pushing higher core counts.

A brand new Frostbite rendering architecture was presented at GDC. It will power all future EA games. This clearly shows how big a jump an architecture designed from the ground up for DX12 and Vulkan offers. They show huge GPU memory consumption improvements (over their older DX11-based tech that was also ported to DX12). DAG execution is trivial to parallelize over multiple CPU cores and multiple GPU pipelines.

This is what a properly done DX12 renderer looks like:
http://www.frostbite.com/2017/03/framegraph-extensible-rendering-architecture-in-frostbite/
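The frame-graph idea is straightforward to sketch: each pass declares what it reads and writes, the dependencies form a DAG, and passes with no mutual dependencies can be recorded on different CPU cores. A toy illustration (pass and resource names are made up for the example, not Frostbite's actual API):

```python
# Toy frame graph: each pass declares the resources it reads and writes.
# Passes within the same level have no dependencies on each other, so they
# can be recorded in parallel on separate CPU cores.
passes = {
    "gbuffer":  {"reads": set(),                 "writes": {"gbuf"}},
    "shadows":  {"reads": set(),                 "writes": {"shadowmap"}},
    "lighting": {"reads": {"gbuf", "shadowmap"}, "writes": {"hdr"}},
    "post":     {"reads": {"hdr"},               "writes": {"backbuffer"}},
}

def build_levels(passes):
    """Group passes into dependency levels (assumes the graph is acyclic)."""
    producers = {res: name for name, p in passes.items() for res in p["writes"]}
    deps = {name: {producers[r] for r in p["reads"]} for name, p in passes.items()}
    levels, done = [], set()
    while len(done) < len(passes):
        ready = sorted(n for n in passes if n not in done and deps[n] <= done)
        assert ready, "cycle detected in frame graph"
        levels.append(ready)
        done.update(ready)
    return levels

print(build_levels(passes))
# [['gbuffer', 'shadows'], ['lighting'], ['post']]
```

The g-buffer and shadow passes share no resources, so they land in the same level and can run concurrently; lighting waits on both, and post-processing waits on lighting.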
It may help or it may not, we've seen how having competent hardware can mean nothing if market share doesn't increase significantly. Developers have the tendency to compile for the vendor with the largest market share.
I am not talking about the compile target. I am talking about the workstations used to develop the game. When we started our company one year ago, we bought i7 6700Ks for everybody. At that time Haswell-based 8-cores were $1000 and weren't universally better in all cases. $650 extra per CPU is a considerable cost. You also need more expensive HEDT quad-channel mobos and four memory sticks. The situation hasn't changed: the 6900K is $700 more than the 7700K. It doesn't matter whether you are an indie or a 200+ person AAA team; this much extra for each CPU is a noticeable addition to your project cost. If I were buying new computers for a game development studio right now, I would choose the Ryzen 1800X over both the 7700K and the 6900K. Many others would do the same.

And even if your company doesn't buy Ryzens, it doesn't matter. Intel is dropping 6900K prices and single-socket 6-core and 8-core Xeon prices to be competitive in the current market situation. Coffee Lake 6-cores will also drop the prices of the single-socket 6-core and 8-core Xeons. Game developers love higher core counts. Faster compile and cooking times save lots of time every day in long projects. The 8-core CPU price is the only barrier right now, but this is changing rapidly. Nobody in the game development business will use 4-core CPUs in a few years.
Let's not forget that the 7700 is cheaper and faster right now, if consumers have to wait 2 years for Ryzen to gain traction, then I can understand people's decision not to wait, and they can't be faulted for that.
100% agreed. The 7700K is faster for a pure gaming PC right now. But I disagree that it's more future proof than 8-core CPUs. It is entirely possible that you will want to upgrade your 7700K in 3 years, but your 1800X will last for 4 years. There are so many things going on that make 8-core CPUs better in the future. There are many "ifs", but not all of them need to happen.
The original statement was that the media is focusing on a small minority of players ("those who have 144 Hz panels"). However, that is not true; the crux of the matter is certain usability right now and in the future, in which Ryzen doesn't inspire much confidence based on today's variables. Some would say too many things have to change to make Ryzen work in the future, and they can't be faulted for not wanting to stick around based on a myriad of "ifs".
Ryzen is currently faster in Battlefield 1 (multiplayer) and Mafia 3 and ties with 7700K in Mirror's Edge and For Honor. This is at 1080p. These are all modern games on modern AAA game engines. More similar games with similar tech will certainly come in the future, while engines that favor 2-4 core CPUs will be on decline. 7700K is currently the better gaming CPU, but there's no indication that 7700K will be a more future proof choice.

Good review about gaming on Ryzen (released today):
http://www.techspot.com/review/1348-amd-ryzen-gaming-performance/
 
Playstation VR games are commonly 60 Hz with two time-warps per one rendered frame -> 120 Hz output. It offers reduced head tracking latency compared to 90 Hz. Head movement is 120 fps, animation is 60 fps. IMHO this is the best compromise right now. Obviously 90 fps rendering at 180 Hz time-warp would be even better, but it is not cost effective right now.

By compromise, do you mean the best option for a given performance budget, or the best option regardless of available performance? I.e. even when not performance-constrained at all (already running at max settings and resolution), would it be preferable for the end user to run at 60 re-projected to 120 than to run at 90 Hz native, if given the option? If so, then that makes a strong case for the next gen of PC headsets to do away with the 90 Hz mode altogether and target an exclusive 60 fps re-projection mode like the PS4. That would in turn make Ryzen a more realistic target as a future VR-ready CPU, as we can assume that will be the direction of the market if it's the better solution.
 
By compromise, do you mean the best option for a given performance budget, or the best option regardless of available performance? I.e. even when not performance-constrained at all (already running at max settings and resolution), would it be preferable for the end user to run at 60 re-projected to 120 than to run at 90 Hz native, if given the option? If so, then that makes a strong case for the next gen of PC headsets to do away with the 90 Hz mode altogether and target an exclusive 60 fps re-projection mode like the PS4.
90 Hz is obviously better than 60 Hz. 90 Hz + 2x time-warp to 180 Hz is obviously better than 60 Hz + 2x time-warp to 120 Hz.

But 60 Hz + 2x time-warp to 120 Hz vs 90 Hz (no double time-warp) is a harder comparison. Both have advantages. The first option has less head movement lag and smoother head movement, but the second has smoother animation. The first option looks better when the scene is mostly static or objects move slowly, but the second option is better for fast-moving scenes.

Also, Oculus drops to 45 fps + double time-warp when it fails to reach the 90 fps lock. They even added a more advanced version of this recently (https://www.pcper.com/news/Graphics-Cards/Oculus-Launches-Asynchronous-Spacewarp-45-FPS-VR). Reviewers are saying that the results are acceptable. 60 fps with double time-warp to 120 fps is obviously better than 45 fps -> 90 fps. Maybe 90 fps is too ambitious with current hardware. 60 fps with 2x async time-warp to 120 fps allows better image quality with the same GPU. But obviously it also needs an HMD with a 120 Hz display.

For VR gaming I would buy a 7700K at the moment. VR has so many question marks in the future. We don't even know what kind of hardware and software solutions the next gen VR headsets are going to have.
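The trade-off discussed above is just rate arithmetic: N-x time-warp multiplies the head-pose update rate, while the animation rate stays at the render rate. A quick sanity check (the function name is purely illustrative):

```python
def vr_rates(render_hz, warps_per_frame):
    """Display rate and per-update intervals (ms) for a given render rate
    plus N-x time-warp. Head pose is re-sampled for every warped frame;
    the world only advances once per rendered frame."""
    display_hz = render_hz * warps_per_frame
    return {
        "display_hz": display_hz,
        "head_update_ms": round(1000 / display_hz, 2),
        "animation_ms": round(1000 / render_hz, 2),
    }

print(vr_rates(60, 2))  # PSVR-style: 120 Hz head tracking, 60 fps animation
print(vr_rates(90, 1))  # native 90 Hz: 11.11 ms for both
print(vr_rates(45, 2))  # Oculus fallback: 90 Hz head tracking, 45 fps animation
```

This makes the comparison concrete: 60 fps + 2x warp samples head pose every 8.33 ms (vs 11.11 ms at native 90 Hz) but only advances animation every 16.67 ms.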
 
It's interesting to speculate about what AMD might do next year. My guess would be that given Intel's intention to move to 6 cores in mainstream CPUs, given Ryzen's fairly modest size, and given the likelihood that top HEDT parts from Intel will include 12 cores or more, AMD will move to 12 cores, and perhaps 3 memory channels.

Whether that ends up being with 3 CCXs or 2 CCXs with 6 cores each, or a monolithic architecture, I don't know, but I think it just makes sense, otherwise AMD's competitive position will suffer. Similarly, in 2019, with the expected move to 10nm, I would anticipate 16 cores from AMD, at a 200~280mm² die size.
 
Probably time to start a Zen 2 thread.

I've been wondering: will the AMD socket support a major iteration (Zen 2)? Will the Intel socket (for the 7700K) support a major iteration?

Is the ability to change processor while not changing mobo/RAM of interest to people who build their own PC? Is it a factor in their purchase decision?
 
Wasn't 90 Hz settled upon initially because of latency and persistence, back before there was certainty that low-latency warp support would be available on a large chunk of the graphics hardware?
Also, wouldn't these methods have needed the time period up to now to be researched and developed on HMDs based on the older assumptions?

Back on the topic of Zen, the most curious discrepancies to me are Zen's CS compile times doing so much better with only one CCX and a large deficit on a DX12 draw call benchmark.
Those would seemingly be stressing some corner cases of the architecture. I would expect a modest penalty in various scenarios, but I'm curious what it might take to get that level of drop-off, such as some kind of unfriendly synchronization methods or pathological access pattern.

The nature of the L3 and the communications between the CCXs is still unclear to me.

For one thing, AMD has chosen to keep its cache protocol as MOESI, and the L3 is a "mostly" exclusive victim cache.
The S state doesn't forward data between processors per the description of MOESI, and there is no F state like Intel's.
The name seems to conflict with a generally exclusive L3, since even more cores have a line when in this state than at any other time.
The L3's ability to conserve or magnify bandwidth for commonly shared lines is limited.
This may also be one reason why the L2s are larger in AMD's design, as they need to absorb more traffic in part to compensate for the fact that multiple cores are going to be pulling data from DRAM despite their neighbors having the same line.

MOE would seem to matter the most for the inter-CCX scenario.
Exclusive is funny in that by its name it would seemingly be unlikely to be found in the L3, for most of its lifetime--being exclusive to a requesting core. The instant another core tries to read it, it goes to shared and would then seemingly drop from the L3 and from snooping for the reasons noted above (and to avoid any confusing cases of an E state in more than one cache).

Modified or Owned might be where the L3 and inter-CCX transfers matter.
Modified/owned lines can forward data to the other CCX, but it might be operating as an independent processor. If the remote L2s are reading the data, they get those lines as Shared--and so each core would need to request the same data fresh each time if there's heavy sharing, and as shared lines they might not be populating the other L3.
The lines are dirty, so perhaps some of these transitions may force a writeback to DRAM or otherwise require more complex transitions, which might bottleneck a bandwidth test that is trying to pass the same cache lines back and forth.
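The holder-side behavior speculated about above can be sketched as a toy transition table. This is a textbook MOESI simplification for illustration only, not AMD's actual implementation, which has transient states and probe-filter details not modeled here:

```python
# Toy MOESI model: what happens to a line's current holder when a core in
# the other CCX issues a read probe. Simplified textbook behavior; in this
# model, memory supplies clean data and only dirty holders forward.
#
# Maps holder state -> (new holder state, does the holder forward data?)
ON_REMOTE_READ = {
    "M": ("O", True),   # dirty owner demotes to Owned, forwards the data
    "O": ("O", True),   # Owned keeps supplying the dirty line on each probe
    "E": ("S", False),  # clean exclusive demotes to Shared; memory supplies
    "S": ("S", False),  # plain MOESI Shared never forwards (no Intel-style F)
    "I": ("I", False),  # not present; the request is filled from DRAM
}

def remote_read(holder_state):
    """Return (new holder state, whether the holder forwarded the data)."""
    return ON_REMOTE_READ[holder_state]

# A line dirtied in one CCX and read repeatedly from the other: the M->O
# owner services every probe with the dirty data.
state, log = "M", []
for _ in range(3):
    state, fwd = remote_read(state)
    log.append(fwd)
print(state, log)  # O [True, True, True]

# A clean line: E demotes to S on the first probe and, with no F state,
# never forwards again; each reader goes to memory.
state, log = "E", []
for _ in range(3):
    state, fwd = remote_read(state)
    log.append(fwd)
print(state, log)  # S [False, False, False]
```

The two traces mirror the argument in the post: dirty (M/O) lines are the case where inter-CCX forwarding matters, while clean shared lines leave each CCX pulling the same data from DRAM.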
 
Probably time to start a Zen 2 thread.

I've been wondering: will the AMD socket support a major iteration (Zen 2)? Will the Intel socket (for the 7700K) support a major iteration?

Is the ability to change processor while not changing mobo/RAM of interest to people who build their own PC? Is it a factor in their purchase decision?

AMD has pledged to stick to AM4 for four years if I remember correctly. That does have some appeal to me, at least.
 
AMD has pledged to stick to AM4 for four years if I remember correctly. That does have some appeal to me, at least.
So that's not very long. Makes me think of gaming Steam sales: "How long do you want to wait until a game you would like to play hits your target price during a sale?"

Does Intel ever make this kind of statement for consumer sockets?

Is it quite likely that AMD will introduce a "workstation" socket? E.g. a quad-memory-channel, cut-down server socket supporting 16 cores in its first iteration? Would that socket be more attractive to "balanced gamer-and-productivity" type users, which appears to be a substantial portion of the users that AMD is targeting with Ryzen?
 
So that's not very long. Makes me think of gaming Steam sales: "How long do you want to wait until a game you would like to play hits your target price during a sale?"

Does Intel ever make this kind of statement for consumer sockets?

Is it quite likely that AMD will introduce a "workstation" socket? E.g. a quad-memory-channel, cut-down server socket supporting 16 cores in its first iteration? Would that socket be more attractive to "balanced gamer-and-productivity" type users, which appears to be a substantial portion of the users that AMD is targeting with Ryzen?

Wouldn't that just be an Opteron? Or whatever Zen-based server CPUs will be called.
 
I trickle everything down in a mobo+CPU+memory package. The most CPU-intensive thing I do is gaming, and in that case you can easily do 4-5 years with a CPU. I've got an i7 4770 now, but replacing that with the latest i7 equivalent would offer little real-world extra performance, so usually by the time I want to upgrade there are so many advances in other areas that anything but a completely new system doesn't make sense to me. The only thing you could potentially carry over is memory.

E.g. my latest-gen Intel NUC i3 boots sooo much faster than my main PC even though both have the same Samsung SSD. I believe this is because the chipset in the NUC is much newer and makes better use of things like SSDs.
 