AMD RyZen CPU Architecture for 2017

Discussion in 'PC Industry' started by fellix, Oct 20, 2014.

  1. Osamar

    Newcomer

    Joined:
    Sep 19, 2006
    Messages:
    231
    Likes Received:
    43
    Location:
    40,00ºN - 00,00ºE
    What it is really well optimized for is the i3 :roll:
     
  2. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    The article indicated a section of AMD's processor documentation is under NDA, which I think is the overall errata section for the chip, not just this specific problem.
    Bugs are inevitable, and one possible reason is that there are other bugs in that section that AMD doesn't want to talk about yet.

    As far as professionalism goes, the x86 vendors have historically been above-average in how much they have publicly documented hardware faults, even if timeliness or transparency are imperfect. In various ways, x86 in modern times has regressed to the mean when it comes to openness, but whether that's in play for the Ryzen errata section is unclear.


    One random coincidence is that the TDP of the Banded Kestrel or River Hawk embedded APUs is roughly in the acceptable range for AMD's various processor/DRAM stacks. The logic layer is capped at ~10W, and a stack of DRAM could be ~4-5W. CPU hot spots might make the situation too disparate from the HPC concepts, however.
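    The power-budget arithmetic in that last paragraph can be made explicit. A minimal sketch, using only the rough wattage estimates quoted above (not published figures):

    ```python
    # Rough power-budget check for a logic-die + stacked-DRAM concept.
    # All figures are the ballpark estimates from the post, not official specs.
    logic_layer_w = 10.0        # logic layer capped at ~10 W
    dram_stack_w = (4.0, 5.0)   # a stack of DRAM could be ~4-5 W

    total_low = logic_layer_w + dram_stack_w[0]
    total_high = logic_layer_w + dram_stack_w[1]
    print(f"Stack total: {total_low:.0f}-{total_high:.0f} W")  # 14-15 W
    ```

    A 14-15 W total is what puts the stack in the same ballpark as a low-TDP embedded APU, per the comparison above.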
     
  3. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,541
    Likes Received:
    964
    Well, yeah, but you'd need to have some kind of main (DDR4?) memory available in the system for that, which would require a standard memory controller + PHY… unless of course you were to use APUs on add-in cards only, but that wouldn't be very convenient.
     
  4. pTmdfx

    Regular

    Joined:
    May 27, 2014
    Messages:
    415
    Likes Received:
    379
    TBH I don't see why this is a problem. Such a memory controller has literally been in every machine with a dGPU for decades, as the platform memory controller. Now it's just a matter of integrating what is literally a dGPU with its own local memory subsystem, and leveraging the interconnect for performance and/or power efficiency.
     
    #1944 pTmdfx, May 15, 2017
    Last edited: May 15, 2017
  5. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,929
    Likes Received:
    5,529
    Location:
    Pennsylvania
    The data center "Naples" chip is officially branded as EPYC.

     
    Cyan, hoom, Heinrich4 and 3 others like this.
  6. Jim Anderson just showed a slide claiming that the Ryzen ultra-mobile SoCs coming in Q3 will bring Vega graphics, 55% more CPU performance, 40% more GPU performance, and 50% lower power consumption.
    I'm guessing it's the big Raven Ridge (4-core, 11 CU) that will consume half of what Intel's current 45W models do (22.5W).


    Also, Threadripper HEDT was just announced. New socket, 16 cores, quad-channel.
     
  7. Clukos

    Clukos Bloodborne 2 when?
    Veteran

    Joined:
    Jun 25, 2014
    Messages:
    4,688
    Likes Received:
    4,353

    A 16 core part released before the new 12 core i9, that's interesting :yep2:
     
  8. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    13,878
    Likes Received:
    4,724
    I need that right into my veins! I would go $700 for this.
     
  9. hoom

    Veteran

    Joined:
    Sep 23, 2003
    Messages:
    3,261
    Likes Received:
    813
    Well heck, with that kind of TDP I can certainly forgive the lack of HBM :yep2:
    Vegan FTW!

    "All-new HEDT platform" implies not pin compatible after all...
     
    #1949 hoom, May 17, 2017
    Last edited: May 17, 2017
  10. pTmdfx

    Regular

    Joined:
    May 27, 2014
    Messages:
    415
    Likes Received:
    379
    7th Gen APU: Bristol Ridge.
     
  11. entity279

    Veteran Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,332
    Likes Received:
    500
    Location:
    Romania
    That was very much to be expected, due to AM4 being limited to 2 memory channels.
     
    BRiT likes this.
  12. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
    The Core i7-6950X (10 cores, 3.0 GHz) costs $1,723. It has 62.5% of the core count and a lower clock rate. Rumors (a few months ago) speculated on a $999 price point for the 16-core (32-thread) Ryzen flagship. $999 would be a steal for this CPU; at that price point it would sell like hot cakes. I would assume that quad-channel memory solves Ryzen's memory bottlenecks. Eagerly waiting for benchmarks.

    It's going to be interesting to see how Intel prices the forthcoming i9 CPUs, especially the 12-core flagship. That's going to be the main competitor for the 16-core AMD CPU. Maybe they need to lower prices a bit. I'd expect something around $1,500. Even at that price point, it would be a steal compared to the current 12-core (single socket) Xeon flagship (which is two generations older in architecture and lower clocked).
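    The core-count and price comparison above works out like this (only the figures quoted in the post, i.e. the i7-6950X's list price and the rumored $999 flagship):

    ```python
    # Price-per-core comparison, using the numbers quoted in the post.
    i7_6950x = {"cores": 10, "price": 1723}       # Core i7-6950X
    ryzen_16c_rumor = {"cores": 16, "price": 999}  # rumored 16-core flagship

    core_ratio = i7_6950x["cores"] / ryzen_16c_rumor["cores"]
    print(f"Intel core count vs AMD: {core_ratio:.1%}")  # 62.5%
    print(f"Intel $/core: {i7_6950x['price'] / i7_6950x['cores']:.2f}")
    print(f"Rumored AMD $/core: {ryzen_16c_rumor['price'] / ryzen_16c_rumor['cores']:.2f}")
    ```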
     
    #1952 sebbbi, May 17, 2017
    Last edited: May 17, 2017
    BRiT likes this.
  13. entity279

    Veteran Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,332
    Likes Received:
    500
    Location:
    Romania
    With regard to Ryzen's memory bottlenecks, the speculation was that they're partially caused by interconnect-fabric limitations. Quad-channel doesn't appear to help there.

    But yeah, the clocks look good for this one, and so do Skylake-X's.

    I find this exciting, as it means such high-end desktop workstations perform less poorly when gaming.
     
  14. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
    I expect the 12-core Intel i9 to trade blows with the 16-core AMD R9 in multi-threaded benchmarks, with the R9 slightly ahead in general. Broadwell was already winning clock for clock, and Skylake-X improves both IPC and clock rate. That should be enough to reclaim most of the performance lost by having four fewer cores. In productivity apps and games, which commonly don't scale much beyond 4 cores, the i9 will obviously be faster thanks to its higher clock rate, better IPC, and big shared L3 cache. Thus the i9 should generally be a bit better as a CPU. But I also expect the 12-core i9 to cost at least 50% more than the 16-core R9.
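    The trade-off being argued here can be sketched with a crude cores × clock × IPC throughput model. The clock and IPC numbers below are purely illustrative assumptions chosen to match the post's reasoning, not measurements of either chip:

    ```python
    # Back-of-the-envelope MT throughput model: cores * clock * relative IPC.
    # All inputs are illustrative assumptions, not specs or benchmark results.
    def throughput(cores, clock_ghz, rel_ipc):
        return cores * clock_ghz * rel_ipc

    amd_16c = throughput(16, 3.4, 1.00)    # assumed all-core clock, baseline IPC
    intel_12c = throughput(12, 3.8, 1.10)  # assumed higher clock, ~10% IPC edge

    print(f"16-core (relative units): {amd_16c:.1f}")
    print(f"12-core (relative units): {intel_12c:.1f}")
    ```

    With these made-up inputs the 12-core lands within about 10% of the 16-core, which is the shape of the "trade blows, R9 slightly ahead" prediction.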
     
  15. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    Has Intel given clock speeds for them? I've only seen the leaked benchmarks (I ask because I may have missed them), which did show a high turbo clock (4.3 GHz on the 10-core) but a low base clock (3.1-3.3 GHz)... slightly odd numbers (more than 1 GHz of turbo headroom), which made me wonder whether they weren't a bit overclocked.

    It's clear that Intel has every reason to play the MHz race on single core (turbo mode). That said, I'm not quite sure the 12-core will really match the 16-core in multithreaded scenarios.
     
    #1955 lanek, May 17, 2017
    Last edited: May 17, 2017
  16. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
    The quad-core i7-7700K (8 threads) fares very well against the 6-core (12-thread) Ryzen (+50% cores) in MT benchmarks. Most software doesn't scale perfectly to 32 threads. I expect the 24-thread Intel CPU with better IPC to be pretty close. Let's wait for benchmarks.
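    The "+50% cores doesn't mean +50% performance" point above is just Amdahl's law. A minimal sketch, where the parallel fraction p is an illustrative assumption rather than a measured property of any benchmark:

    ```python
    # Amdahl's-law sketch of why +50% threads rarely yields +50% throughput.
    # The parallel fraction p below is an illustrative assumption.
    def speedup(threads, p):
        """Amdahl's law: speedup over 1 thread with parallel fraction p."""
        return 1.0 / ((1.0 - p) + p / threads)

    p = 0.85  # assume 85% of the workload parallelizes
    s8 = speedup(8, p)
    s12 = speedup(12, p)
    print(f"8 threads:  {s8:.2f}x")
    print(f"12 threads: {s12:.2f}x")
    print(f"Gain from +50% threads: {s12 / s8 - 1:.0%}")  # about 16%, not 50%
    ```

    At p = 0.85, going from 8 to 12 threads only buys ~16% more throughput, which is why a part with fewer, faster cores can stay competitive.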
     
  17. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    I agree that this depends completely on the software (I'm not talking about gaming, of course).
     
  18. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
    I am personally mostly interested in C++ compile benchmarks. If the 16-core Threadripper beats the 12-core i9 in C++ compile benchmarks, my choice will be clear. Both will be perfectly adequate for gaming (high turbo clocks in low-thread-count situations). I am a game dev after all, so my CPU choice needs to run games as well. The i9 will certainly be a bit better for gaming at 1080p with a 144 Hz monitor, but I have a Titan X + 60 Hz 4K display on my workstation. I don't play at 1080p.
     
    Lightman, DavidGraham and BRiT like this.
  19. Clukos

    Clukos Bloodborne 2 when?
    Veteran

    Joined:
    Jun 25, 2014
    Messages:
    4,688
    Likes Received:
    4,353
    Don't forget that AMD's SMT gains more in MT applications than Intel's HT, so it's not only a core-count advantage: the extra threads from SMT perform better than the extra threads from HT. It'll most probably cost less as well; hopefully they'll announce a price point at Computex.
     
    BRiT likes this.
  20. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,296
    Location:
    Helsinki, Finland
    Hyper-Threading is simply Intel's marketing name for their SMT implementation. I don't see any big differences between Intel's and AMD's SMT implementations. Do you have links to professional workload benchmarks showing better scaling with AMD's SMT implementation vs Intel's?
     
    Gubbi likes this.
