AMD Ryzen Threadripper Reviews

Discussion in 'PC Industry' started by Clukos, Aug 10, 2017.

  1. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,423
    Likes Received:
    1,804
    AMD deserves kudos for bringing a NUMA CPU into the consumer space, though it's not without its challenges.
     
    w0lfram, xEx, Lightman and 1 other person like this.
  2. monstercameron

    Newcomer

    Joined:
    Jan 9, 2013
    Messages:
    127
    Likes Received:
    101
    I won't claim to be smarter than these professional reviewers, but CPU performance is only half the story. The other half is the 64 PCIe lanes, and neither PCPer nor AnandTech really showed us what that means.


    Sent from my iPhone using Tapatalk
     
    digitalwanderer likes this.
  3. Rootax

    Regular Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    928
    Likes Received:
    423
    Location:
    France
    Yeah, I don't like it. I read the AnandTech and hardware.fr reviews, and it seems like you need to switch between NUMA and UMA a lot to get the best performance for each application. Caring about NUMA/UMA was server stuff to me; I don't want to deal with this at home too...
     
  4. fellix

    fellix Hey, You!
    Veteran

    Joined:
    Dec 4, 2004
    Messages:
    3,445
    Likes Received:
    326
    Location:
    Varna, Bulgaria
    This time AMD did a much better job than their first attempt a decade ago.
     
    Lightman and digitalwanderer like this.
  5. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    16,784
    Likes Received:
    1,412
    Location:
    Winfield, IN USA
    Someone asked about that at the AMD press events before Capsaicin. The ability to turn it on and off was included as just that: a choice. Under some circumstances games/apps will run better with it, others without... they give you the option to choose between the two, but you don't have to.

    There were some oddball games/apps that really respond well to it, and others just hate it. Again, it's just another thing to play with to try and allow you to maximize your own experience. (Blargh, that last sentence tasted bad coming out of my brain...made me feel like I was in marketing! Next I'll be talking about how this is a paradigm shift in the industry that's long overdue with a full vertical integration plan along with a robust new product cycle should bring exciting changes to AMD's future outlook....I gotta go smoke some pot before it takes effect permanently!!!)
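    (For the curious: one way to see which memory access mode a Threadripper box is currently in is to ask the OS how many memory nodes it exposes - roughly, Distributed/UMA shows up as one node and Local/NUMA as two. A minimal sketch, assuming Linux with libnuma installed, compiled with -lnuma; this is an illustration, not anything AMD ships:)

        #include <numa.h>
        #include <stdio.h>

        int main(void) {
            /* numa_available() returns -1 if the kernel exposes no NUMA API at all */
            if (numa_available() < 0) {
                printf("No NUMA topology visible (Distributed/UMA mode, or a non-NUMA CPU)\n");
                return 0;
            }
            /* Expectation on a 1950X: 2 nodes in Local (NUMA) mode, 1 in Distributed (UMA) mode */
            printf("Memory nodes visible to the OS: %d\n", numa_num_configured_nodes());
            return 0;
        }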
     
    hoom, Jawed, Lightman and 1 other person like this.
  6. Rootax

    Regular Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    928
    Likes Received:
    423
    Location:
    France
    Yeah, but in the end you have to try and see if you want the best performance. It's not the end of the world, but still, it's not a great solution imo. I want some "predictability" from my CPU, not some "well, let's see..."

    Anyway, it's still great for the price (except for gaming), don't get me wrong, but I hope they can find a way to fix this with Zen 2 (though maybe it's not even possible with their multi-die architecture...)
     
    digitalwanderer likes this.
  7. rcf

    rcf
    Regular Newcomer

    Joined:
    Nov 6, 2013
    Messages:
    323
    Likes Received:
    247
    But if you don't choose, you may be leaving a lot of performance on the table.
    And you won't be able to choose anything if you're multitasking with radically different kinds of programs.
     
  8. swaaye

    swaaye Entirely Suboptimal
    Legend

    Joined:
    Mar 15, 2003
    Messages:
    8,391
    Likes Received:
    531
    Location:
    WI, USA
    Maybe something like automatic per-app CPU profiles could make sense. Yay, extra complexity.

    Or just leave it manual.
     
  9. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    15,409
    Likes Received:
    4,324
    Well, is that worse than the situation now, where you can't choose? Where you never get the benefits of one of them and always get the drawbacks of the current one?

    Or to put it another way: would it be better if it were UMA only, so you always have worse performance in apps that could benefit from NUMA?

    After all, if they did that, you'd never have known that the performance in those apps was lower than it could be.

    It's not much different from the Hyperthreading (HT) on/off choice we've had to deal with on Intel CPUs for well over a decade now. Granted, the performance differences between on and off are generally smaller now, but they're still there. Would it have been better if Intel had never introduced HT to the consumer market and instead reserved it only for the server market?

    Regards,
    SB
     
    Lightman likes this.
  10. Rootax

    Regular Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    928
    Likes Received:
    423
    Location:
    France
    "You can't choose now" ? Well yeah because before TR you didn't have 2 Numa node/domains CPU in that market...(If I consider than TR is against Intel 2011/2066).
    They introduce a new variable... About HT, like you said, the variation is often less important, and in most situation it's better when enabled. You also can say that now on TR you have to worry about SMT and Numa/uma.
    I really don't get how you can spin the numa apparition in that segment as a good thing...
     
  11. Entropy

    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    2,894
    Likes Received:
    745
    The benefit is obvious. You have access to an unprecedented number of processing cores in a consumer-oriented platform at costs that aren't too prohibitive.
    Memory latency hasn't been deterministic for ages. This is really not a big deal. Come back when we are talking memory accesses between racks!

    If it disturbs you for whatever reason, and you don't typically need 12+ physical cores, you might prefer the 8-core option, where you get the platform benefits at a small price penalty vs. the regular platform. We'll see what the motherboard options will be.
     
    Kaarlisk, Lightman and sebbbi like this.
  12. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,283
    Location:
    Helsinki, Finland
    Threadripper and i9 aren't exactly mainstream consumer-oriented products. If you run only consumer software (office/productivity apps, games, web browsing, lightweight image/video editing), you don't gain anything by buying HEDT (Threadripper or i9). If you need the best consumer software performance, then you choose the 7700K. Intel is bringing 6-core consumer models (Coffee Lake, with 4.7 GHz turbo) later this year, and Ryzen already scales up to 8 cores (albeit with a separate L3 cache per 4-core cluster). These options are both much cheaper and perform (slightly) better than the i9 or Threadripper for mainstream consumers.

    Threadripper and i9 are aimed mainly at workstation use: programmers and video/audio professionals who need to do large-scale batch processing (compiling code/shaders, video processing, batch image/audio conversion, etc.) or large-scale data visualization, data mining, or simulation. Most software that scales well to 32 threads is already written to be NUMA aware. NUMA isn't a big problem for most applications in this segment. Of course, if you need a fast workstation CPU but also want to run performance-intensive consumer software, then the 10-core i9 is a good trade-off. It has almost as good gaming perf as the 7700K, but is 2x+ faster in professional software.
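    (As a rough illustration of what "NUMA aware" means in practice: such software pins its worker threads to a node and allocates their working buffers from that node's local memory, instead of letting allocations land on the far die. A minimal sketch, assuming Linux with libnuma; the helper name is made up and not taken from any particular application:)

        #include <numa.h>
        #include <stdlib.h>
        #include <string.h>

        /* Hypothetical helper: run the calling worker on `node` and hand it a node-local buffer. */
        static void *alloc_node_local(int node, size_t bytes) {
            if (numa_available() < 0)                    /* no NUMA support: plain allocation */
                return malloc(bytes);
            numa_run_on_node(node);                      /* keep this thread's CPU time on that die */
            void *buf = numa_alloc_onnode(bytes, node);  /* back the buffer with that node's DRAM */
            if (buf)
                memset(buf, 0, bytes);                   /* touch the pages so they are actually placed */
            return buf;                                  /* free later with numa_free(buf, bytes) */
        }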
     
  13. xEx

    xEx
    Regular Newcomer

    Joined:
    Feb 2, 2012
    Messages:
    898
    Likes Received:
    366
    So will you buy the TR or the i9? We all wanna know :-D
     
    digitalwanderer likes this.
  14. w0lfram

    Newcomer

    Joined:
    Aug 7, 2017
    Messages:
    83
    Likes Received:
    20
    I'd like to see an AM4 8-core vs. TR4 8-core comparison when it's released.

    It would be interesting to note the difference in memory too.
     
    Kej and Lightman like this.
  15. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,737
    Likes Received:
    1,970
    Location:
    Germany
    A big problem, as sebbbi already explained, is the lack of mainstream software that really makes use of all those cores - and yet AMD and Intel continue to push these platforms for the "enthusiast" segment. Despite all the marketing about creative professionals, you can clearly see it in the obligatory "bling bling" the X299 and X399 motherboards come with. You can barely get a board for the workstation market, trimmed for efficiency.

    That said, consumer software for real-world applications really is challenged by 16 cores. Take 7-Zip for example. The integrated benchmark lets me choose up to twice the number of (virtual) threads my system is capable of - on my personal rig, an SMT-enabled quad-core, that would be 16 threads total. Now, with a TR 1950X, I can go to 64 threads in benchmark mode. Very nice! But for real-world packing, my limit is 32 - not the doubling I was expecting, or was led to believe exists, from the benchmark mode. Especially sad, because for normal operation, ultra compression with LZMA2 mostly runs faster if you oversubscribe threads - I've found 1.5× the virtual threads to be a good measure (see the small sketch at the end of this post). With 7-Zip, that gets me 85% CPU load on a TR 1920X. Now why use 7-Zip? WinRAR 5.50 (August '17), for example, gives me a larger archive as well as only 30-60% CPU load. :( NUMA mode here is actually one example I found to be counter-productive, with significantly longer compression time.

    Another example of why Threadripper is a daring move, and maybe too far ahead of its time in the consumer space, is RAW image conversion. Lightroom 5 - which is the last non-cloud version many professionals are still using because they don't want the cloud - doesn't scale very well beyond 8 cores. Capture One Pro 10, however, does use all 32 threads on a TR 1950X - but, insert sad face, it also has OpenCL acceleration available, which speeds up RAW conversion batches by a large amount, depending on your GPU of course.

    So, for normal operation, 8 cores is already more than enough, and for jobs that can be parallelized effectively, even 16 CPU cores are usually inferior to just using the GPU via OpenCL in the first place. That puts the use cases for home applications into a very tight spot IMHO - the same is/will be true, of course, for Intel's i9 lineup.
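    (Regarding the oversubscription heuristic mentioned above: the arithmetic is simply 1.5× the logical threads the OS reports. A minimal sketch, assuming Linux/POSIX; the 1.5 factor is CarstenS's empirical value, not a 7-Zip default:)

        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            long hw = sysconf(_SC_NPROCESSORS_ONLN);   /* logical (SMT) threads the OS exposes */
            long workers = hw + hw / 2;                /* 1.5x oversubscription, e.g. 24 -> 36 on a 1920X */
            printf("%ld hardware threads -> %ld compression threads\n", hw, workers);
            return 0;
        }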
     
    Lightman, DavidGraham and BRiT like this.
  16. Clukos

    Clukos Bloodborne 2 when?
    Veteran Newcomer Subscriber

    Joined:
    Jun 25, 2014
    Messages:
    4,426
    Likes Received:
    3,737
    Lightman, BRiT and Rootax like this.
  17. Alessio1989

    Regular Newcomer

    Joined:
    Jun 6, 2015
    Messages:
    554
    Likes Received:
    264
    Lightman and CarstenS like this.
  18. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,737
    Likes Received:
    1,970
    Location:
    Germany
    I did not, and did not mean to, attribute Adobe's sluggishness to TR alone, but as I said, there are very much diminishing returns above 4-6 cores, becoming almost unnoticeable at 8+ cores, be they Intel or AMD.
     
  19. Lightman

    Veteran Subscriber

    Joined:
    Jun 9, 2008
    Messages:
    1,782
    Likes Received:
    445
    Location:
    Torquay, UK
    Your only option to utilize these many-core platforms is to multitask, running more than one heavy application at a time.
     