AMD Ryzen CPU Architecture for 2017

Let us know how the voltage/thermals pan out when you decide to either push all cores to a higher fixed frequency or do some general overclocking.
The core voltage at stock seems pretty nice relative to the previous Ryzen generation.

Overclocking is not really worth it unless you want to undervolt (4.0GHz at 1.15V vcore is stable on the 2700X). XFR2 is pretty smart; better to just focus on memory overclocking instead imo. I was able to run all cores at 4.35GHz at 1.45V vcore, but that's only for benchmarking :)
 
Definitely an improvement for vcore relative to the previous gen then, and that is really nice within the power envelope along with the Precision Boost 2 improvements.
What cooling solution are you using, and what all-core frequency are you personally comfortable with, or is it just as good left dynamic? Also, separately, what DDR4 memory are you using?

I wonder if the Infinity Fabric influences the CPU's TDP/voltages and the Precision Boost 2 behaviour depending upon the DDR4 memory used, since the fabric clock is tied to the memory clock; the highest memory clocks might then push Ryzen TDP/vcore higher (just musing, I do not know, and it is not a behaviour any reviewers have looked at, as the benefits of faster memory are too good to ignore anyway). Indirectly it may feed back into SenseMI Pure Power/Precision Boost-XFR, but considering the benefit of higher DDR4 speeds it is more academic musing than something that should influence memory decisions.
Thanks
 

I've upgraded to a watercooling loop with CPU and GPU blocks on a single 360 rad. I'm running my fans at very low RPM though (600-800) because I like a silent system :)

I think anything up to 1.35-1.4V is fine on air to be honest, and the CPU will cap at 4.1-4.25GHz in that range. 4.0-4.1GHz should be achievable on pretty much all 2xxx series CPUs, but like I said, it's better to just leave it at auto, it's pretty smart!

As for the memory thing, I've noticed that it's harder to reach stable CPU overclocks when running faster RAM, and that might be because of increased power draw; I'm not 100% sure that is the case.
 

The CPU simply gets less rest when it's fed by faster memory, so if you are right on the edge of stability, those extra idle cycles might have been needed to maintain it.
 
Yeah, it seems the age of manual overclocking is over. The C6H has the Asus Precision Boost Overdrive secret sauce as well. This results in a 4.1GHz all-core boost with my NH-D15. I can even keep the fans reasonably quiet. Not whisper quiet like with stock settings though.
 

Attachments

  • 2700x 3200 14 LL boost on (screenshots)
I set PE3 with a -0.1V vcore offset, which results in 1.24V vcore in all-core loads and 1.375V in single-core loads, with this result:

[screenshot: stock_1bdul6.png]


55C max :)
 
Is the CPU inserting some NOPs or something after reading the HPET, to fudge any hostile algorithms trying to suss out system secrets, or why else would reading a timer cause performance degradation?
 

Probably because the timers are memory-mapped. And because the memory region is privileged, you need to go through the kernel to access it; you now feel the full brunt of KPTI/Meltdown. If you sample the timers 500,000 times per second, you get a 20% performance hit (per the chart in this article).
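If you want a rough feel for what a single timer read costs, here is a minimal Python sketch (just a toy of mine, not AT's harness); time.perf_counter_ns() sits on top of the OS high-resolution timer (QueryPerformanceCounter on Windows), which is the path that falls back to HPET when the platform clock is forced:

```python
# Minimal sketch: hammer the high-resolution timer and report the average cost
# of one read. Loop overhead is included, so treat the number as a rough upper
# bound rather than a precise measurement.
import time

def timer_read_cost_ns(samples: int = 1_000_000) -> float:
    """Average cost, in nanoseconds, of one high-resolution timer read."""
    start = time.perf_counter_ns()
    for _ in range(samples):
        time.perf_counter_ns()          # the read whose cost we are measuring
    return (time.perf_counter_ns() - start) / samples

if __name__ == "__main__":
    cost = timer_read_cost_ns()
    # At 500,000 reads per second, the fraction of each second spent just
    # reading the timer is roughly cost_ns * 500,000 / 1e9.
    print(f"~{cost:.0f} ns per read, "
          f"~{cost * 500_000 / 1e9:.1%} of CPU time at 500k reads/s")
```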

Cheers
 
Anandtech's gaming results discrepancy from other sites was caused by their use of forced HPET; apparently the Spectre/Meltdown fixes may have increased the performance penalty when HPET is forced.
That was painfully obvious; Anand's results were so far off the mark there was bound to be a major flaw in their methodology.
Anyway, here is their corrected gaming benchmark, 8700K vs 2700X:

[chart: GamingResults_575px.png]

https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results/5
 
Is the CPU inserting some NOPs or something after reading the HPET, to fudge any hostile algorithms trying to suss out system secrets, or why else would reading a timer cause performance degradation?

Intuitively, HPET is an IO operation, and IO was hit particularly hard by Spectre/Meltdown.
Also, AT suggested that Intel's timers are set to a higher frequency, hence they get called more often.

That was painfully obvious; Anand's results were so far off the mark there was bound to be a major flaw in their methodology.

Putting that bombastic style aside, there was no flaw in Anand's methodology. It was just different, and that seemed to matter more than expected; now we know why.

___

But BTW, after all that, the AT article still doesn't tell us how to check on our own systems whether HPET is forced or not.
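For what it's worth (my own note, not from the article): on Windows, "forced HPET" corresponds to the useplatformclock boot option (set with bcdedit /set useplatformclock true, removed with bcdedit /deletevalue useplatformclock), so a rough check is to read it back out of the boot configuration, something like this sketch (needs an elevated prompt):

```python
# Rough sketch, assuming "forced HPET" means the Windows useplatformclock boot
# option. bcdedit needs an administrator prompt to read the boot store.
import subprocess

def hpet_forced() -> bool:
    out = subprocess.run(
        ["bcdedit", "/enum", "{current}"],
        capture_output=True, text=True, check=True,
    ).stdout.lower()
    if "useplatformclock" not in out:
        return False                      # value absent: the OS picks the timer itself
    # The entry reads "useplatformclock        Yes" when the HPET path is forced.
    return "yes" in out.split("useplatformclock", 1)[1].splitlines()[0]

if __name__ == "__main__":
    print("HPET forced" if hpet_forced() else "HPET not forced (default behaviour)")
```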
 
That was painfully obvious; Anand's results were so far off the mark there was bound to be a major flaw in their methodology.
AT's results are not off the mark, they are on the mark; everybody else is off. The whole point of HPET is to facilitate cheap, precise timers for performance analysis, something that is no longer viable on Intel platforms.

What is the point of having 40ns resolution if you can't use it?

Cheers
 
AT's results are not off the mark, they are on the mark; everybody else is off. The whole point of HPET is to facilitate cheap, precise timers for performance analysis, something that is no longer viable on Intel platforms.
It's not the default OS behavior nor the user's behavior, and the whole point of benchmarking in the first place is to capture the user's experience and compare it across different platforms. Thus their old results are massively off the mark. And this is by their own admission, no less. In fact, they will be retesting ALL of their old and new CPUs going forward on account of the anomalies they experienced before.

First and foremost, we have decided that force-enabling HPET is not how we want to test systems, as this is non-default behavior. While it has an important role in extreme overclocking, to verify accurate timing, ultimately it was akin to taking a sledgehammer to cracking an egg for our testing - we need to be testing systems at stock. So all further CPU testing going forward will be using HPET's default behavior, and we have even put checks in our scripts to ensure this is now the case.

https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results/5

This isn't the first time Anandtech's game bench results have been odd.
Their Coffee Lake review showed the i5-8400 often clearly beating the i7-8700K.
Now we know why.
 
It's not the default OS behavior nor the user's behavior, and the whole point of benchmarking in the first place is to capture the user's experience and compare it across different platforms.

Au contraire, the point of benchmarking is not to capture user experience but to measure performance with high accuracy, in repeatable conditions.

In this case, of course, it didn't work out well. So AT will change their methodology, consequently losing some of the precision of their benchmarks (as the alternative seems to be even worse, in this case).
So it is a tradeoff, not a question of a wrong method versus a right one.
 
Au contraire, the point of benchmarking is not to capture user experience but to measure performance with high accuracy, in repeatable conditions.
In this case the HPET is far from accurate, since it impacts the user experience negatively even on AMD's platform. Worse yet, it's not a default option, and is only really useful in certain overclocking scenarios. Going so far as to claim all other sites are off the mark and Anand's flawed methodology is on the mark is nothing short of ridiculous, especially in light of recent discoveries. I don't even know why we are still debating this.
 
I don't even know why we are still debating this.

Because of your incorrect claim that benchmarking is about reproducing user experience (which by definition isn't reproducible), perhaps?

I guess the way to formulate this more precisely would be: accuracy vs overhead.
AT's benchmarks with their methodology were the most accurate by design, in the sense that all timing data was the closest to what was really happening.

So the issue was that the price for their accuracy was an unexpectedly large overhead (underestimated by AT, but apparently also by Intel themselves). And worse, this overhead is different for different processors.

We're discussing this (not for the tongue-in-cheek reason this time) just to emphasize that once the overhead of timers is fixed, we should all want to go back to using HPET everywhere for benchmarks.
 
Because of your incorrect claim that benchmarking is about reproducing user experience (which by definition isn't reproducible), perhaps?
Replicating the user experience is the number one rule in benchmarking, bar none, especially in reviewing products. In this case testing at stock settings is the optimal solution; if testing at anything other than the stock user configuration, the tester should take care to ensure his method doesn't skew results negatively or positively. That's how proper benchmarking is done. Anything else is not representative of the end-user experience, and its flaws rest entirely on the shoulders of the tester.

And no, benchmarks are designed to replicate user experience: apps like Cinebench and Handbrake render a specific template/workload to simulate an equal user experience on different platforms, and so do timedemos and built-in game benchmarks.

Again, the number one undeniable rule is to replicate the end-user configuration. Anything else is irrelevant to benchmarking and encroaches on the academic side. You don't castrate the clock speeds of an 8700K down to 1800X speeds to ensure equal conditions. You test at default speeds and compare products; you may castrate clocks when doing IPC comparisons, but that's academic and irrelevant to the end-user experience or to evaluating a product for a purchasing decision.
AT's benchmarks with their methodology were the most accurate by design, in the sense that all timing data was the closest to what was really happening.
They were not. The very act of inducing overhead is enough to invalidate the whole methodology. They are not accurate by design (by academic standards), nor by the end-user experience.

just to emphasize that once the overhead of timers is fixed, we should all want to go back to using HPET everywhere for benchmarks
It's interesting to note that back when Ryzen 1000 launched, AMD was advising testers to completely switch off HPET in the BIOS, as it was negatively impacting Ryzen scores. Most people here let that slide; now that the same thing is impacting Intel more than AMD, suddenly it becomes more accurate to test with it forced (which isn't even the default option). Consistency should be key.

If HPET proved to have no negative impact on the user's experience, by all means use it, but when it is having this much negative impact (70% less FPS, seriously?!), then no, it's not relevant.
 