AMD RyZen CPU Architecture for 2017

Discussion in 'PC Industry' started by fellix, Oct 20, 2014.

  1. Clukos

    Clukos Bloodborne 2 when?
    Veteran Newcomer

    Joined:
    Jun 25, 2014
    Messages:
    4,462
    Likes Received:
    3,793
    Overclocking is not really worth it unless you want to undervolt (4.0GHz at 1.15 vcore is stable on the 2700X). XFR2 is pretty smart; better to just focus on memory overclocking instead, imo. I was able to run all cores at 4.35GHz at 1.45 vcore, but that's only for benchmarking :)
     
    DavidGraham, Lightman, BRiT and 2 others like this.
  2. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
    Definitely an improvement in vcore relative to the previous gen, then; that is really nice within the power envelope, along with the Precision Boost 2 improvements.
    What cooling solution are you using, and what all-core frequency are you personally comfortable with, or is it just as good left dynamic? And, separately, what DDR4 memory are you using?

    I wonder if the Infinity Fabric influences the TDP/voltages and the Precision Boost 2 mechanism depending on the DDR4 memory used, since the two are linked; the highest memory clocks might create a higher Ryzen TDP/vcore. Just musing, I don't know, and it's not a behaviour any reviewers have looked at, as the benefits of faster DDR4 are too good to ignore anyway. Indirectly it may feed back into SenseMI Pure Power/Precision Boost-XFR, but considering the benefit of higher DDR4 it is more academic musing than something that should influence memory decisions.
    Thanks
     
    #2882 CSI PC, Apr 22, 2018
    Last edited: Apr 22, 2018
    Lightman likes this.
  3. Clukos

    Clukos Bloodborne 2 when?
    Veteran Newcomer

    Joined:
    Jun 25, 2014
    Messages:
    4,462
    Likes Received:
    3,793
    I've upgraded to a watercooling loop and have a CPU/GPU block in a single 360 rad. I'm running my fans at very low RPM though (600-800) because I like a silent system :)

    I think anything up to 1.35 - 1.4 vcore is fine on air, to be honest, and the CPU will cap at 4.1 - 4.25GHz in that range. 4.0 - 4.1GHz should be achievable on pretty much all 2xxx series CPUs, but like I said, it's better to just leave it on auto, it's pretty smart!

    As for the memory thing, I've noticed that it's harder to reach stable CPU overclocks when running faster RAM, and that might be because of increased power draw; I'm not 100% sure that's the case.
     
    DeeJayBump and CSI PC like this.
  4. Dygaza

    Newcomer

    Joined:
    Aug 27, 2015
    Messages:
    40
    Likes Received:
    39
    The CPU simply gets less rest when it's fed by faster memory, so if you are right on the edge of stability, those extra idle cycles might have been what kept it stable.
     
    Clukos, pharma and Lightman like this.
  5. Kyyla

    Veteran

    Joined:
    Jul 2, 2003
    Messages:
    1,004
    Likes Received:
    293
    Location:
    Finland
    Yeah, it seems the age of manual overclocking is over. The C6H has the Asus Precision Boost Overdrive secret sauce as well, which results in a 4.1GHz all-core boost with my NH-D15. I can even keep the fans reasonably quiet, though not whisper quiet like with stock settings.
     

    Attached Files:

  6. Clukos

    Clukos Bloodborne 2 when?
    Veteran Newcomer

    Joined:
    Jun 25, 2014
    Messages:
    4,462
    Likes Received:
    3,793
    I set PE3 with a -0.1 vcore offset, which results in 1.24 vcore under all-core loads and 1.375 under single-core loads, with this result:

    [IMG]

    55C max :)
     
    Grall, fellix, Lightman and 2 others like this.
  7. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,804
    Likes Received:
    475
    Location:
    Torquay, UK
    Excellent results from undervolting!
    I'm ordering my 2700X tomorrow, should be with me in time for this weekend :)
     
  8. HMBR

    Regular

    Joined:
    Mar 24, 2009
    Messages:
    416
    Likes Received:
    105
    Location:
    Brazil
    DavidGraham and Lightman like this.
  9. Grall

    Grall Invisible Member
    Legend

    Joined:
    Apr 14, 2002
    Messages:
    10,801
    Likes Received:
    2,172
    Location:
    La-la land
    Is the CPU inserting some NOPs or something after reading the HPET to fudge any hostile algorithms trying to suss out system secrets? Why else would reading a timer cause performance degradation?
     
  10. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,528
    Likes Received:
    862
    Probably because the timers are memory mapped, and because the memory region is privileged you have to go through the kernel to access it; you then feel the full brunt of KPTI/Meltdown. If you sample the timers 500,000 times per second, you get a 20% performance hit (per the chart in the article).
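
    A rough sketch if you want to measure this yourself on Windows (this assumes the game reads the timer via QueryPerformanceCounter; the absolute numbers are illustrative, not from the article):

        #include <stdio.h>
        #include <windows.h>

        /* Count how many QueryPerformanceCounter calls complete in one
         * second. With the default TSC-backed QPC this is typically tens
         * of millions; with HPET forced, every call becomes a trip through
         * the kernel to a memory-mapped timer register, so the rate
         * collapses. */
        int main(void)
        {
            LARGE_INTEGER freq, start, now;
            long long calls = 0;

            QueryPerformanceFrequency(&freq);   /* ticks per second */
            QueryPerformanceCounter(&start);
            do {
                QueryPerformanceCounter(&now);
                calls++;
            } while (now.QuadPart - start.QuadPart < freq.QuadPart);

            printf("%lld QPC calls per second\n", calls);
            return 0;
        }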

    Cheers
     
    Grall likes this.
  11. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,791
    Likes Received:
    2,602
    That was painfully obvious; Anand's results were so far off the mark that there was bound to be a major flaw in their methodology.
    Anyway, here is their corrected gaming benchmark, 8700K vs 2700X:

    [IMG]
    https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results/5
     
  12. entity279

    Veteran Regular Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,229
    Likes Received:
    422
    Location:
    Romania
    Intuitively, HPET access is an IO operation, and IO was particularly hurt by Spectre/Meltdown mitigations.
    Also, AT suggested that Intel's timers are set to a higher frequency, hence they get called more often.

    Putting that bombastic style aside, there was no flaw in Anand's methodology. It was just different, that difference turned out to matter more than expected, and now we know why.

    ___

    But BTW, for all that the AT article covers, it still doesn't tell us how to check whether HPET is forced on our own systems.
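
    (For what it's worth: on Windows you can run bcdedit /enum {current} from an elevated command prompt and look for a useplatformclock entry set to Yes, which means HPET is forced; bcdedit /deletevalue useplatformclock removes the override and restores the default timer source.)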
     
    Laurent06, Grall and Alexko like this.
  13. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,528
    Likes Received:
    862
    They don't get called more often. They have higher resolution. It's the client code (games in this case) that makes the syscalls.
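
    To put it concretely, here's a sketch of a hypothetical frame loop (update_simulation and render are empty stand-ins for the game's work): the client code decides the sampling rate, and the timer's resolution never enters into it.

        #include <windows.h>

        /* Hypothetical stand-ins for the game's per-frame work. */
        static void update_simulation(void) { }
        static void render(void) { }

        /* Three timer reads per frame at 100 fps is 300 syscalls per
         * second, whether the counter resolves 40 ns or 1 us: the game
         * sets the call rate, not the timer hardware. */
        void run_frame(void)
        {
            LARGE_INTEGER t_start, t_update, t_render;

            QueryPerformanceCounter(&t_start);   /* frame start */
            update_simulation();
            QueryPerformanceCounter(&t_update);  /* after simulation */
            render();
            QueryPerformanceCounter(&t_render);  /* after rendering */
        }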

    Cheers
     
    entity279 likes this.
  14. entity279

    Veteran Regular Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,229
    Likes Received:
    422
    Location:
    Romania
    Of course, silly me. The higher resolution by itself should make no difference.
     
  15. Gubbi

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    3,528
    Likes Received:
    862
    AT's results are not off the mark; they are on the mark, and everybody else is off. The whole point of HPET is to facilitate cheap, precise timers for performance analysis, something that is no longer viable on Intel platforms.

    What is the point of having 40ns resolution if you can't use it?
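
    (For scale: 40 ns per tick corresponds to a 25 MHz counter, since 1 / 40 ns = 25,000,000 ticks per second.)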

    Cheers
     
    Grall and entity279 like this.
  16. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,791
    Likes Received:
    2,602
    It's not the default OS behavior, nor the user's behavior, and the whole point of benchmarking in the first place is to capture the user's experience and compare it across different platforms. Thus their old results are massively off the mark, and that is by their own admission, no less. In fact, they will be retesting ALL of their old and new CPUs going forward on account of the anomalies they experienced before.

    https://www.anandtech.com/show/12678/a-timely-discovery-examining-amd-2nd-gen-ryzen-results/5

    Now we know why.
     
    pharma likes this.
  17. entity279

    Veteran Regular Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,229
    Likes Received:
    422
    Location:
    Romania
    Au contraire, the point of benchmarking is not to capture user experience but to measure performance with high accuracy, under repeatable conditions.

    In this case, of course, it didn't work out well. So AT will change their methodology, consequently losing some of the precision of their benchmarks (as the alternative seems to be even worse, in this case).
    So it is a tradeoff, not a wrong method versus a right one.
     
  18. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,791
    Likes Received:
    2,602
    In this case the HPET is far from accurate, since it impacts the user experience negatively even on AMD's platform. Worse yet, it's not a default option, and it's only really useful in certain overclocking scenarios. Going so far as to claim that all other sites are off the mark and Anand's flawed methodology is on the mark is nothing short of ridiculous, especially in light of the recent discoveries. I don't even know why we are still debating this.
     
    pharma and homerdog like this.
  19. entity279

    Veteran Regular Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,229
    Likes Received:
    422
    Location:
    Romania
    Because of your incorrect claim that benchmarking is about reproducing user experience (which by definition isn't reproducible), perhaps?

    I guess the more precise way to formulate this would be: accuracy vs overhead.
    AT's benchmarks, with their methodology, were the most accurate by design, in the sense that all the timing data was the closest to what was really happening.

    So the issue was that the price for that accuracy was an unexpectedly large overhead (underestimated by AT, but apparently also by Intel themselves). And worse, this overhead differs between processors.

    We're discussing this (not for the tongue-in-cheek reason this time) to emphasize that once the overhead of the timers is fixed, we should all want to go back to using HPET everywhere for benchmarks.
     
  20. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,791
    Likes Received:
    2,602
    Replicating the user experience is the number one rule in benchmarking, bar none, especially when reviewing products. In this case testing at stock settings is the optimal solution; if testing at anything other than the stock user configuration, the tester should take care to ensure the method doesn't skew results negatively or positively. That's how proper benchmarking is done. Anything else is not representative of the end user experience, and its flaws rest entirely on the shoulders of the tester.

    And no, benchmarks are designed to replicate user experience: apps like Cinebench and Handbrake simulate rendering of a specific template/workload to achieve a simulated, equal user experience on different platforms, and so do timedemos and internal benchmarks for games.

    Again, the number one undeniable rule is to replicate the end user configuration. Anything else is irrelevant to benchmarking and encroaches on the academic side. You don't castrate the clock speeds of an 8700K down to 1800X speeds to ensure equal conditions; you test at default speeds and compare products. You may clock down when doing IPC comparisons, but that's academic and irrelevant to the end user experience or to product evaluation for purchasing decisions.
    They were not. The very act of inducing overhead is enough to invalidate the whole methodology. They are accurate neither by design (academic standards) nor by the end user experience.

    It's interesting to note that back when Ryzen 1000 launched, AMD was advising testers to completely switch off HPET in the BIOS, as it was negatively impacting Ryzen scores. Most people here let that slide; now that the same thing is impacting Intel more than AMD, it has suddenly become more accurate to test with it forced (which isn't even the default option). Consistency should be key.

    If HPET proved to have no negative impact on the user's experience, by all means use it, but when it is having this much negative impact (70% less fps, seriously?!), then no, it's not relevant.
     