AMD RyZen CPU Architecture for 2017

Replicating the user experience is the number one rule in benchmarking, bar none [...].

And no, benchmarks are designed to replicate the user experience; apps like Cinebench and Handbrake simulate rendering of a specific template/workload [...]

Again, the number one, undeniable rule is to replicate the end user's configuration. Anything else is irrelevant to benchmarking and encroaches on the academic side. [...]

Sure, as if the speed of computing millions of decimals of pi were the very definition of user experience.

Benchmarks provide scenarios which may or may not replicate plausible uses of a system. That's up to the benchmark's designers.

Your overall point might be that it's very hard, if not impossible, to devise a useful interpretation of the benchmark result in this particular case (3D games, HPET on).
But stick to that, please; don't try to generalize the findings here to the very concept of benchmarking. Not like this.

It's interesting to note that back when Ryzen 1000 launched, AMD was advising testers to completely switch off HPET in the BIOS, as it was negatively impacting Ryzen scores. Most people here let that slide; now that the same thing is impacting Intel more than AMD, it suddenly became more accurate to test with it forced (which isn't even the default option). Consistency should be key.

I can't fathom what you are implying. Yes, I am saying just above that it would be more accurate to use HPET. I definitely didn't say the opposite when Ryzen 1000 launched, nor have I heard of anyone proposing to use HPET for benchmarking now, with its just-discovered ill effects.
So what do you find so interesting?

If HPET proved to have no negative impact on the experience of the user, by all means use it. But when it is having this much negative impact (70% fewer fps, seriously?!), then no, it's not relevant.

yep
 
But BTW, for all the length of that AT article, they still don't tell us how to check on our own systems whether HPET is forced or not.
Yeah, when I first started reading about HPET here in this thread I did some basic googling, and almost all the discussions I found (guides, advice, benchmarks and so on) were seriously outdated, like from half a decade ago OR MORE. :(

There should be some investigation done here and now, in this new post-Meltdown/Win10 era, that goes through this subject more thoroughly and includes advice on checking whether HPET is active, how to activate/deactivate it, and if or when you'd want to do so. That would be real useful, methinks. :)
 
Well, AT said HPET is forced via the Windows BCD. I don't know if that's a single file or not, but there are BCD editors out there. I tried https://www.boyans.net/DownloadVisualBCD.html, but for me it crashes, sadly.
 
And by default HPET is not on; it needs to be enabled. But that is a lot of faith without being able to check for sure.

To enable/disable it, the commands seem to be these (they work for me, anyway):
To disable: bcdedit /deletevalue useplatformclock
To enable: bcdedit /set useplatformclock true
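
As a minimal sketch of wrapping those two commands (Python, my own illustration, not from the thread; the function name is made up):

import subprocess

def set_hpet_forced(enable):
    # Needs an elevated (admin) prompt; bcdedit fails otherwise.
    # The change only takes effect after a reboot.
    if enable:
        cmd = ["bcdedit", "/set", "useplatformclock", "true"]
    else:
        # Deleting the value restores the Windows default (HPET not forced).
        cmd = ["bcdedit", "/deletevalue", "useplatformclock"]
    subprocess.run(cmd, check=True)

set_hpet_forced(False)  # e.g. back to the default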

Edit:
A search suggests some motherboard BIOSes also support enabling/disabling HPET, even up to X299/X370/etc.
It's usually under the more advanced options; MSI is one vendor that definitely supports it on certain motherboards.
 
From the AT article, comments section:
bcdedit /enum (from an admin cmd)
If there is a line like "useplatformclock Yes", then it is forced on.
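
As a cross-check, here's a small Python sketch (mine, not from the article): forcing useplatformclock makes QueryPerformanceCounter run off the HPET, whose frequency is roughly 14.318 MHz, so the reported QPC frequency hints at the source. The exact threshold is a heuristic assumption:

import ctypes

freq = ctypes.c_int64()
ctypes.windll.kernel32.QueryPerformanceFrequency(ctypes.byref(freq))
mhz = freq.value / 1_000_000
print("QPC frequency: %.3f MHz" % mhz)
# ~14.318 MHz: HPET is the QPC source (useplatformclock likely set).
# ~10 MHz (or a few MHz on older builds): the TSC-backed default.
if abs(mhz - 14.318) < 0.5:
    print("HPET appears to be forced on.")
else:
    print("HPET does not appear to be the QPC source.")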
 
Seems to be pretty stable with the setup of PE3 + XFR2 and a -0.1 V vcore offset:

[RealBench result screenshot]


4.125 GHz at 1.22 V vcore is the minimum at 100% load with AVX instructions; temperature maxed out at 60°C. It runs anywhere between 4.125 and 4.35 GHz during gameplay. The 2700X is a solid improvement over my 1700, although I don't think such a jump is worth it; I just did it because I like hardware :)

Edit: Euler3D benchmark at the same settings:
[Euler3D result screenshot]


And Corona
[Corona result screenshot]
 
Overclocking is not really worth it unless you want to undervolt (4.0 GHz at 1.15 V vcore is stable on the 2700X). XFR2 is pretty smart; better to just focus on memory overclocking instead, imo. I was able to run all cores at 4.35 GHz at 1.45 V vcore, but that's only for benchmarking :)

So mine is not as good: 1.2 V for 4.0 GHz stability. Still, it boosts to 4.35 GHz quite often, and I need to find out whether I can get it to 4.3 GHz all-core at all.
 
Some here probably already follow The Stilt's advice on overclocking/voltage for Ryzen, but it's worth putting in here:
The Stilt (on another forum) said:
The seen behavior suggests that the full silicon reliability can be maintained up to around 1.330V in all-core workloads (i.e. high current) and up to 1.425V in single core workloads (i.e. low current). Use of higher voltages is definitely possible (as FIT will allow up to 1.380V / 1.480V when scalar is increased by 10x), but it more than likely results in reduced silicon lifetime / reliability.
By how much? Only the good folks at AMD who have access to the simulation data will know for sure.
.....
Also note that the figures stated here relate to the actual effective voltage, and not to the voltage requested by the CPU. The CPU is aware of the actual effective voltage, so things like load-line adjustments and voltage offsets will modify the CPU's voltage request from the VRM controller accordingly. The most accurate method to measure the effective voltage on the AM4 platform is to monitor the "VDDCR_CPU SVI2 TFN" voltage, which is available in HWiNFO.
I emphasized in bold the important aspects in the context of his values: how to monitor the voltage, and that the figures refer to the actual effective voltage.
Full quote and context here (relevant aspect is where he quotes himself from a few days ago): http://www.overclock.net/forum/27240681-post220.html
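
If you log that rail, one way to sanity-check it against the 1.330 V all-core figure is to scan a HWiNFO CSV log with a few lines of Python. A rough sketch; the file name and column label are assumptions you'd have to match to your own log's header:

import csv

LOG_FILE = "hwinfo_log.csv"          # assumed path of the HWiNFO CSV log
COLUMN = "VDDCR_CPU (SVI2 TFN) [V]"  # assumed column label; check your header

with open(LOG_FILE, newline="", errors="ignore") as f:
    readings = [float(row[COLUMN]) for row in csv.DictReader(f)
                if row.get(COLUMN, "").strip()]

print("samples: %d  peak: %.3f V" % (len(readings), max(readings)))
if max(readings) > 1.330:
    print("Peak effective voltage exceeds the 1.330 V all-core guideline.")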
 
Anyone going to try BCLK OC? 103 MHz will get you about 4.5 GHz on XFR2.
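
For reference, the arithmetic (assuming the ~4.35 GHz single-core XFR2 ceiling reported elsewhere in this thread): 4.35 GHz × 103/100 ≈ 4.48 GHz, i.e. roughly the quoted 4.5 GHz.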

My board lacks BCLK adjustment, but if this works as expected, I will finally have a reason to upgrade my X370 to a better X470 board with BCLK adjustments.

BTW, I've tried 4.3 GHz on all cores, but it would not finish the CB15 test even with 1.425 V (CPU LLC overvolts that to 1.45 V). I will not try to go higher, as there is no 24/7 stability to be had at that clock. Thankfully 4.25 GHz is fully stable at 1.425 V, and 4.2 GHz at 1.35 V. Not the best sample, but it is still about 250 MHz better at Fmax than my 1700, and a lot more efficient at anything below 4.1 GHz.
Now I'm waiting for a tool to adjust individual boost levels and increase the XFR2 frequency to 4.4 GHz on the best cores (my CPU has 2 really good ones).
 
Raven Ridge without graphics should still be different from a Zeppelin with half the cores enabled, since the latter has a lot more L3 cache. The Raven Ridge Ryzen 5 2400G has 4 MB of L3, while the Ryzen 5 1400 has 8 MB and the 1500/1500X has a whopping 16 MB.
I guess we'll know for sure whether they're using Raven Ridge or Zeppelin (or Zeppelin+?) cores when they say how much L3 there is.


There's another Raven Ridge coming, the 2800U. My guess is it's a high-binned 15 W Raven Ridge with the full Vega 11 and somewhat higher clocks than the 2700U.
 
Pure quad-cores without graphics?
Would that be a cut-down quad-core with graphics disabled, or a cut-down octa-core with one CPU quad disabled...?

I'm assuming it's not a unique die of its own, seeing as AMD probably doesn't want to spend the resources validating a ton of different die designs...
 
I'm a little bit surprised; I thought RR desktop APUs would replace the quad-cores.

PS:
 
Apparently, Raven Ridge actually has a bigger die than Pinnacle Ridge (about 210 mm² vs. 192 mm²). So it makes some sense to use Pinnacle Ridge dies for low-end machines if they do not require integrated graphics: you save a couple of bucks and get more cache.
 