AMD RyZen CPU Architecture for 2017

As for pricing, I think I said this many pages back, but if they have a product that's really competitive with a $1k+ Intel product and they are going to sell it for half that then if I was an AMD shareholder (or debt-holder) I'd be booting up my lawyers.

And you would get your butt kicked. AMD has made it very clear it is targeting a gross margin in the range of 40% across its business. Currently they are in the low 30s with CPUs selling for at most $200 USD. There is nothing in Zeppelin's pricing that even remotely indicates prices that won't enable that (an 1800X will carry somewhere around a 500-750% markup over manufacturing cost). AMD knows the quickest way to win back brand recognition and grow revenue and the bottom line (shareholders like this, by the way) is to move product, and AMD knows there is a large audience of Nehalem-to-Ivy Bridge quad-core owners who can't justify the price of upgrading to a Skylake/HEDT Intel processor.
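As a side note on the terminology: "500-750% gross margin" only makes sense as markup, since gross margin is a fraction of selling price and caps at 100%. A quick sketch of the difference, using made-up illustrative numbers (the $499 price and $70 per-chip cost are assumptions, not actual AMD figures):

```python
# Back-of-envelope: gross margin vs. markup for a hypothetical CPU.
# All numbers here are illustrative guesses, not actual AMD figures.

def gross_margin(price, cost):
    """Gross margin as a fraction of the selling price."""
    return (price - cost) / price

def markup(price, cost):
    """Markup as a multiple of the manufacturing cost."""
    return (price - cost) / cost

price = 499.0  # assumed 1800X launch price, USD
cost = 70.0    # assumed per-chip manufacturing cost (pure guess)

print(f"gross margin: {gross_margin(price, cost):.0%}")  # 86%
print(f"markup:       {markup(price, cost):.0%}")        # 613%
```

So a chip can carry a 600%+ markup while its gross margin is still well under 100%.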

Server market will take longer to ramp and Raven Ridge isn't ready yet.

Also, legally there is no requirement to put shareholder value above other business considerations; it's just that if your shareholders don't agree with you, it can get messy. But given the last 12 months of stock prices, it seems shareholders are happy.
 
Disappointing.
I'd been rather hoping it would just keep going until it got too hot.
Perhaps someone can de-lid the chip to check the quality of the interface material?

The nature of AMD's increasingly precise DVFS means that the margins for overclocking are increasingly gone, and this time Intel's cores have been ticking along on a well-characterized process, gaining manufacturing improvements for an unusually long time.

If it's not a matter of insufficient heat transport, it can be that Zen is close to balancing the limits of motherboard VRM delivery, the on-chip power delivery and regulation, and what it thinks it can safely provide given the current understanding of the physical cores. Perhaps over time, process improvements and better measurements of real-life operation can give a later bin more margin for turbo. However, it might take a corner case for such margin not to also translate into higher speeds overall, leaving a similar XFR gain.
 
There will always be fans that consider it the second coming. The sensible ones will see it for what it is: AMD finally has an (apparently) high-performing CPU that is competitive with Intel, allowing consumers not specifically targeting ultra-cheap PC builds to have a choice.

To make it clear: I absolutely detest the Windows PC; the operating system, the fan base, the lack of AAA exclusive development (aside from Crytek Space Simulator). But I really love technology, and AMD as well.
The salty Intel tears are a nice bonus.

Though Intel tears are not enough; I want AMD to bring me Nvidia tears as well. I remember the R300 from my Windows days. Rocking a Ti 4600 myself, the first Nvidia tears I tasted were my own. I was able to sell the Ti 4600 for a good price and got a 9700 Pro. Best card I ever owned.
 
To make it clear: I absolutely detest the Windows PC; the operating system, the fan base, the lack of AAA exclusive development (aside from Crytek Space Simulator). But I really love technology, and AMD as well.

There's also the Total War series. The Warhammer 40K series from Relic (new one in development). Civilization series. Etc.

And the most important genre to me, RPGs, has far more quality exclusives on PC (either timed, like Wasteland 2, or permanent, like Pillars of Eternity). Those are a step down from AAA games in terms of budget, but not in terms of quality.

However, and here's the thing. I don't dislike any platform for games. I do, however, dislike exclusives. I believe the more people that can play a game regardless of their chosen platform, the better. I wish Mac got more games. I wish Linux got more games. I wish PC got all the console games. I wish consoles got all the vast library of PC exclusives.

When the majority of games I play don't even exist on console it makes it difficult to justify going with a console. Lack of keyboard and mouse controls is also a hindrance. Being forced to control a camera with an analog stick on console is so archaic and unrealistic. Controlling camera with mouse is far quicker, more accurate, and more like how you look around in real life. But that's all personal preference, I can understand how some prefer a console controller for those things.

Bleh. Sorry for the OT, everyone. Mods can feel free to delete this. :p

Regards,
SB
 
At some point Flash games were taking over the world. Flash video likely drove CPU upgrades (a whole new PC, for most people).
These worked anywhere, a lot better than HTML5 games did :)

(rest of the post deleted to make room)
 
That would be PCWorld grasping at straws; I mean, they already know the difference:

http://www.pcworld.com/article/2982...e-shocking-truth-about-their-performance.html

edit: they already state this in their article, lol.
This benchmark was using an older six-core CPU. Quad channel isn't that useful for slower 4/6-core CPUs; faster 8/12/16-core CPUs show larger gains. It is all about balance: more CPU throughput needs more bandwidth.

If you want to see the real advantage of a quad-channel memory config, you need to run HPC and database benchmarks, not consumer apps. This benchmark has just games and some ALU-heavy tests (which are the opposite of bandwidth-heavy). The only bandwidth-bound applications in that benchmark were the compression tests. 7-Zip (7% gain) has proven to be bandwidth-bound in the past (noticeable gains with the eDRAM in Broadwell 5775C).

Even if quad-channel memory gives only a few percent gain in select highly threaded consumer apps, there's no acceptable reason to disable quad channel in benchmarks. Let the consumer see every CPU the way it is meant to be used (four memory sticks for quad-channel CPUs). In the past, we have seen awful benchmarks where AMD's APUs (dual channel) were equipped with a single memory stick, halving the bandwidth (iGPU performance drops drastically). This is not good benchmarking practice.

I would be happy to see Ryzen (dual channel) beating a 2x more expensive quad-channel system. Ryzen seems to be the perfect CPU for programmers: eight cores boost compile times noticeably = less waiting = more productivity. But I would also love to see a few benchmarks selected purely to show the performance difference in AVX-heavy and bandwidth-heavy use cases. As the person responsible for recommending hardware for our company, I also want to know whether a product I am interested in has downsides, which applications are affected, and how big a performance impact these downsides have in the worst case. It is up to me to decide whether those downsides matter to our use cases.
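The "more CPU throughput needs more bandwidth" point can be made concrete with a per-core bandwidth budget. The channel figures below are standard DDR4-2666 numbers (64-bit channel, 2666 MT/s); the core counts are just example configurations:

```python
# Rough per-core bandwidth budget for dual- vs. quad-channel DDR4.
# 2666 MT/s and 8-byte (64-bit) channels are standard DDR4-2666
# figures; the core counts are illustrative examples.

def peak_bw_gb_s(channels, mt_per_s=2666, bus_bytes=8):
    """Theoretical peak memory bandwidth in GB/s."""
    return channels * mt_per_s * bus_bytes / 1000  # MB/s -> GB/s

for channels, cores in [(2, 8), (4, 6), (4, 16)]:
    bw = peak_bw_gb_s(channels)
    print(f"{channels} ch, {cores:2d} cores: "
          f"{bw:5.1f} GB/s total, {bw / cores:4.1f} GB/s per core")
```

The interesting case: a quad-channel 16-core ends up with the same per-core bandwidth as a dual-channel 8-core, which is why quad channel matters more as core counts climb.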
 
Disappointing.
I'd been rather hoping it would just keep going until it got too hot.

Since XFR is about exploiting thermal headroom, it would make sense if it mostly raises the clock in many-core situations. I doubt desktop CPUs are anywhere close to their TDP when running single-threaded at their max turbo (that would probably take some serious extra voltage), so better cooling shouldn't matter too much there anyway.
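The intuition that one core at max turbo sits far under TDP follows from the usual dynamic-power model, P ≈ k·V²·f per active core. A toy sketch with entirely made-up constants (k, voltages, and clocks are illustrative, calibrated so the all-core case lands near TDP):

```python
# Toy dynamic-power model: P = k * V^2 * f per active core.
# k, the voltages, and the clocks are made-up illustrative values,
# chosen so that 8 cores at base clock roughly fill a 95 W TDP.

def core_power(v, f_ghz, k=2.3):
    """Dynamic power (watts) of one core at voltage v and clock f."""
    return k * v**2 * f_ghz

tdp = 95.0  # watts, a typical desktop TDP

all_core = 8 * core_power(v=1.20, f_ghz=3.6)  # 8 cores, base clock
one_core = 1 * core_power(v=1.35, f_ghz=4.0)  # 1 core, max turbo

print(f"8 cores @ 3.6 GHz: {all_core:.0f} W vs. TDP {tdp:.0f} W")
print(f"1 core  @ 4.0 GHz: {one_core:.0f} W")
```

Even with the extra voltage, the single turboing core draws a small fraction of the package budget, so extra cooling buys little in that case.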
 
This benchmark was using an older six-core CPU. Quad channel isn't that useful for slower 4/6-core CPUs; faster 8/12/16-core CPUs show larger gains. It is all about balance: more CPU throughput needs more bandwidth.

If you want to see the real advantage of a quad-channel memory config, you need to run HPC and database benchmarks, not consumer apps. This benchmark has just games and some ALU-heavy tests (which are the opposite of bandwidth-heavy). The only bandwidth-bound applications in that benchmark were the compression tests. 7-Zip (7% gain) has proven to be bandwidth-bound in the past (noticeable gains with the eDRAM in Broadwell 5775C).

Even if quad-channel memory gives only a few percent gain in select highly threaded consumer apps, there's no acceptable reason to disable quad channel in benchmarks. Let the consumer see every CPU the way it is meant to be used (four memory sticks for quad-channel CPUs). In the past, we have seen awful benchmarks where AMD's APUs (dual channel) were equipped with a single memory stick, halving the bandwidth (iGPU performance drops drastically). This is not good benchmarking practice.

I would be happy to see Ryzen (dual channel) beating a 2x more expensive quad-channel system. Ryzen seems to be the perfect CPU for programmers: eight cores boost compile times noticeably = less waiting = more productivity. But I would also love to see a few benchmarks selected purely to show the performance difference in AVX-heavy and bandwidth-heavy use cases. As the person responsible for recommending hardware for our company, I also want to know whether a product I am interested in has downsides, which applications are affected, and how big a performance impact these downsides have in the worst case. It is up to me to decide whether those downsides matter to our use cases.

I largely agree, but to be fair, using two more memory channels would also increase power draw, perhaps by more than it would improve performance.
 
If you want to see the real advantage of a quad-channel memory config, you need to run HPC and database benchmarks, not consumer apps. This benchmark has just games and some ALU-heavy tests (which are the opposite of bandwidth-heavy). The only bandwidth-bound applications in that benchmark were the compression tests. 7-Zip (7% gain) has proven to be bandwidth-bound in the past (noticeable gains with the eDRAM in Broadwell 5775C).

Even if quad-channel memory gives only a few percent gain in select highly threaded consumer apps, there's no acceptable reason to disable quad channel in benchmarks. Let the consumer see every CPU the way it is meant to be used (four memory sticks for quad-channel CPUs).

I can't find any benchmarks where the difference is not in the single digits, aside from a "maximum bandwidth test" :)
But fair enough; they should have shown the Intel chip in the best possible light, and if that means all memory slots filled, then they should have done that.
 
I can't find any benchmarks where the difference is not in the single digits, aside from a "maximum bandwidth test" :)
But fair enough; they should have shown the Intel chip in the best possible light, and if that means all memory slots filled, then they should have done that.
To be fair, some of the Intel machines had 32GB against 16GB in the corresponding Ryzen machine, and the Intel chip was set to its higher boost mode. All things they never needed to do.
It seems like they've been pretty fair (as fair as a company can be); now it's up to independent reviews and benchmarks to show where Ryzen actually stands.
 
Ever since the i7-860 came out (not long after I had bought an X58 920 with a stupidly expensive motherboard because I had no other choice), I have been firmly in the camp that people vastly overestimate the amount of memory bandwidth actually needed for the vast majority of tasks enthusiasts perform.

Unless your hobby is running something like LINPACK, 7-Zip, etc.
 
If AMD went with dual channel, it's because it's a better compromise overall. Also, right now there is no Intel CPU at the same price as a Ryzen that has better overall performance.

There is a GTA benchmark; I don't know if it's real, but if it is, the 1700 has the same clock-for-clock performance as a 7700K, which is just astonishing.

Sent from my HTC One via Tapatalk

 
If AMD went with dual channel, it's because it's a better compromise overall. Also, right now there is no Intel CPU at the same price as a Ryzen that has better overall performance.

There is a GTA benchmark; I don't know if it's real, but if it is, the 1700 has the same clock-for-clock performance as a 7700K, which is just astonishing.
Oh please, no, don't go posting those GTA V benchmarks floating around. They're all different and all fake.
 
Well all I know for sure is that my sense of panic/urgency to find a way to get a Ryzen system by launch date is starting to make my brain go something something....
 
Ever since the i7-860 came out (not long after I had bought an X58 920 with a stupidly expensive motherboard because I had no other choice), I have been firmly in the camp that people vastly overestimate the amount of memory bandwidth actually needed for the vast majority of tasks enthusiasts perform.

Unless your hobby is running something like LINPACK, 7-Zip, etc.

Let's say that these days RAM bandwidth is no longer a bottleneck. 16-32GB of DDR4, even in dual channel, is already more than enough. With quad channel, outside specific cases, the bandwidth is far higher than the need.

That is a 7700K OC with fast DDR4 in dual channel:

[benchmark screenshots]
 
Since XFR is about exploiting thermal headroom, it would make sense if it mostly raises the clock in many-core situations. I doubt desktop CPUs are anywhere close to their TDP when running single-threaded at their max turbo (that would probably take some serious extra voltage), so better cooling shouldn't matter too much there anyway.

Well, I don't know if there's a way to set it differently, and you can still overclock all cores yourself.

Here's a very recent test of DDR4 memory speeds from 2133 up to 4000 in applications and games.

http://www.techspot.com/article/1171-ddr4-4000-mhz-performance/page3.html

However, you need to be at the high end of the gaming-rig scale to see those very noticeable differences. Awesome for Photoshop and Handbrake, though!

Honestly, I'm surprised by those results.
 
Do not confuse throughput and latency; a certain troll on another forum tried to use that to prove that the 6700K/7700K is bandwidth "starved" in games. While I don't have a 6700K/7700K, I do have an IVB 3770K @ 4.3GHz.

http://www.users.on.net/~rastus/ScreenshotWin32_0003_Final 13-14-13.png
http://www.users.on.net/~rastus/ScreenshotWin32_0003_Final 10-11-10.png

2000MHz 10-11-10 vs. 2000MHz 13-14-13, DIA, all GPU settings at lowest, resolution 3840x1024, on a 290 @ 1050/1250.

10% difference in performance in a low-load situation. I'm going to test some higher-load sections across a greater variance of timings and speeds.
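The throughput-vs-latency distinction above is easy to quantify: tightening the timings at the same transfer rate changes absolute latency without touching bandwidth. A small sketch converting CAS cycles to nanoseconds (the 2000 MT/s rate and CAS values come from the test above; the conversion itself is the standard DDR relationship):

```python
# Convert DDR timings (in cycles) to absolute latency (in ns).
# A DDR "2000MHz" rating is the effective transfer rate; the clock
# the timings count against is half that (two transfers per cycle).

def cas_latency_ns(ddr_rate_mt_s, cas_cycles):
    """Absolute CAS latency in nanoseconds for a DDR config."""
    clock_mhz = ddr_rate_mt_s / 2          # real clock, MHz
    return cas_cycles / clock_mhz * 1000   # cycles/MHz -> us -> ns

print(cas_latency_ns(2000, 10))  # 10.0 ns
print(cas_latency_ns(2000, 13))  # 13.0 ns
```

So 10-11-10 vs. 13-14-13 at the same rate is roughly a 30% swing in access latency at identical peak bandwidth, which lines up with the observed gap coming from latency, not throughput.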
 