AMD Ryzen Threadripper Reviews

Clukos

http://www.anandtech.com/show/11697/the-amd-ryzen-threadripper-1950x-and-1920x-review
https://www.techspot.com/review/1465-amd-ryzen-threadripper-1950x-1920x/
http://www.guru3d.com/articles-pages/amd-ryzen-threadripper-1950x-review,1.html
http://www.guru3d.com/articles-pages/amd-ryzen-threadripper-1920x-review,1.html
https://www.pcper.com/reviews/Processors/AMD-Ryzen-Threadripper-1950X-and-1920X-Review

Apparently TR is only using the top 5% of dies:

[image: AMD slide on Threadripper die binning]
 
I won't claim to be smarter than these professional reviewers, but CPU performance is only half the story. The other half is the 64 PCIe lanes, and neither PCPer nor AnandTech really showed us what those mean.


 
AMD deserves kudos for bringing a NUMA CPU into the consumer space, though it's not without its challenges.

Yeah, I don't like it. I read the AnandTech and hardware.fr reviews, and it seems like you need to switch between NUMA and UMA a lot to get the best performance out of each application. Caring about NUMA/UMA used to be server stuff for me; I don't want it at home too...
 
Yeah, I don't like it. I read the AnandTech and hardware.fr reviews, and it seems like you need to switch between NUMA and UMA a lot to get the best performance out of each application. Caring about NUMA/UMA used to be server stuff for me; I don't want it at home too...
Someone asked about that at the AMD press event before Capsaicin. The ability to turn it on and off was included as just that, a choice. Under some circumstances games/apps will run better with it, some without... they give you the option to choose between the two, but you don't have to.

There are some oddball games/apps that respond really well to it, and others that just hate it. Again, it's just another knob to play with to try and let you maximize your own experience. (Blargh, that last sentence tasted bad coming out of my brain... made me feel like I was in marketing! Next I'll be talking about how this is a long-overdue paradigm shift for the industry, and how a full vertical integration plan along with a robust new product cycle should bring exciting changes to AMD's future outlook... I gotta go smoke some pot before it takes effect permanently!!!)
 
Yeah, but in the end you have to try and see if you want the best performance. It's not the end of the world, but it's still not a great solution, IMO. I want some predictability from my CPU, not "well, let's see..."

Anyway, it's still great for the price (except for gaming), don't get me wrong, but I hope they can find a way to fix this with Zen 2 (though maybe that's not even possible with their multi-die architecture...)
 
Someone asked about that at the AMD press event before Capsaicin. The ability to turn it on and off was included as just that, a choice. Under some circumstances games/apps will run better with it, some without... they give you the option to choose between the two, but you don't have to.
But if you don't choose, you may be leaving a lot of performance on the table.
And you won't be able to choose anything if you're multitasking with radically different kinds of programs.
 
Maybe something like automatic CPU app profiles could make sense. Yay, extra complexity.

Or just leave it manual.
 
Yeah, but in the end you have to try and see if you want the best performance. It's not the end of the world, but it's still not a great solution, IMO. I want some predictability from my CPU, not "well, let's see..."

Anyway, it's still great for the price (except for gaming), don't get me wrong, but I hope they can find a way to fix this with Zen 2 (though maybe that's not even possible with their multi-die architecture...)

Well, is that worse than the situation now, where you can't choose? Where you'll never get the benefits of one mode and will always be stuck with the drawbacks of the other?

Or to put it another way: would it be better if it were UMA only, so you'd always have worse performance in apps that could benefit from NUMA?

After all, if they did that, you'd never have known that performance in those apps was lower than it could be.

It's not much different from the Hyper-Threading (HT) on/off choice we've had to deal with on Intel CPUs for well over a decade now. Granted, the performance difference between on and off is generally smaller these days, but it's still there. Would it have been better if Intel had never introduced HT to the consumer market and had instead reserved it for the server market?

Regards,
SB
 
"You can't choose now" ? Well yeah because before TR you didn't have 2 Numa node/domains CPU in that market...(If I consider than TR is against Intel 2011/2066).
They introduce a new variable... About HT, like you said, the variation is often less important, and in most situation it's better when enabled. You also can say that now on TR you have to worry about SMT and Numa/uma.
I really don't get how you can spin the numa apparition in that segment as a good thing...
 
"You can't choose now" ? Well yeah because before TR you didn't have 2 Numa node/domains CPU in that market...(If I consider than TR is against Intel 2011/2066).
They introduce a new variable... About HT, like you said, the variation is often less important, and in most situation it's better when enabled. You also can say that now on TR you have to worry about SMT and Numa/uma.
I really don't get how you can spin the numa apparition in that segment as a good thing...
The benefit is obvious. You get access to an unprecedented number of processing cores on a consumer-oriented platform at a cost that isn't too prohibitive.
Memory latency hasn't been deterministic for ages. This is really not a big deal. Come back when we're talking about memory accesses between racks!

If it disturbs you for whatever reason, and you really don't need 12+ physical cores, you might prefer the 8-core option, where you get the platform benefits at a small price premium over regular Ryzen. We'll see what the motherboard options look like.
 
Threadripper and i9 aren't exactly mainstream consumer-oriented products. If you run only consumer software (office/productivity apps, games, web browsing, lightweight image/video editing), you gain nothing by buying HEDT (Threadripper or i9). If you need the best consumer software performance, you choose the 7700K. Intel is bringing 6-core consumer models (Coffee Lake) later this year (6 cores with a 4.7 GHz turbo), and Ryzen already scales up to 8 cores (albeit with a separate L3 cache per 4-core cluster). These options are both much cheaper and perform (slightly) better than i9 or Threadripper for mainstream consumers.

Threadripper and i9 are aimed mainly at workstation use: programmers and video/audio professionals who need to do large-scale batch processing (compiling code/shaders, video processing, batch image/audio conversion, etc.) or large-scale data visualization, data mining, or simulation. Most software that scales well to 32 threads is already written to be NUMA aware, so NUMA isn't a big problem for most applications in this segment. Of course, if you need a fast workstation CPU but also want to run performance-intensive consumer software, the 10-core i9 is a good trade-off: it has almost as good gaming performance as the 7700K, but is 2x+ faster in professional software.
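
To illustrate what "NUMA aware" means in practice: keep a thread and the memory it touches on the same node. A minimal Windows sketch (the 64 MB buffer and node choice are made up for illustration, and error handling is mostly omitted):

Code:
// Pin a thread to a NUMA node and allocate its working buffer
// from that node's local memory.
#include <windows.h>
#include <cstdio>

int main()
{
    ULONG highestNode = 0;
    if (!GetNumaHighestNodeNumber(&highestNode))
        return 1;
    printf("NUMA nodes: %lu\n", highestNode + 1);

    // Allocate 64 MB preferring node 0's local memory (hypothetical size).
    SIZE_T size = 64u * 1024u * 1024u;
    void* buf = VirtualAllocExNuma(GetCurrentProcess(), nullptr, size,
                                   MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE,
                                   0 /* preferred node */);
    if (!buf)
        return 1;

    // Keep the thread that touches the buffer on the same node.
    GROUP_AFFINITY affinity = {};
    GetNumaNodeProcessorMaskEx(0, &affinity);
    SetThreadGroupAffinity(GetCurrentThread(), &affinity, nullptr);

    // ... do the actual work here ...

    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}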
 
Threadripper and i9 aren't exactly mainstream consumer-oriented products. If you run only consumer software (office/productivity apps, games, web browsing, lightweight image/video editing), you gain nothing by buying HEDT (Threadripper or i9). If you need the best consumer software performance, you choose the 7700K. Intel is bringing 6-core consumer models (Coffee Lake) later this year (6 cores with a 4.7 GHz turbo), and Ryzen already scales up to 8 cores (albeit with a separate L3 cache per 4-core cluster). These options are both much cheaper and perform (slightly) better than i9 or Threadripper for mainstream consumers.

Threadripper and i9 are aimed mainly at workstation use: programmers and video/audio professionals who need to do large-scale batch processing (compiling code/shaders, video processing, batch image/audio conversion, etc.) or large-scale data visualization, data mining, or simulation. Most software that scales well to 32 threads is already written to be NUMA aware, so NUMA isn't a big problem for most applications in this segment. Of course, if you need a fast workstation CPU but also want to run performance-intensive consumer software, the 10-core i9 is a good trade-off: it has almost as good gaming performance as the 7700K, but is 2x+ faster in professional software.
So, will you buy the TR or the i9? We all wanna know :D
 
A big problem, as sebbbi already explained, is the lack of mainstream software that really makes use of all those cores - and yet AMD and Intel continue to push these platforms at the "enthusiast" segment. Despite all the marketing about creative professionals, you can clearly see it in the obligatory bling the X299 and X399 motherboards come with. You can barely get a board aimed at the workstation market, trimmed for efficiency.

That said, consumer software is really challenged by 16 cores in real-world use. Take 7-Zip, for example. The integrated benchmark lets me choose up to twice the number of (virtual) threads my system is capable of - on my personal rig, an SMT-enabled quad-core, that's 16 threads total. With a TR 1950X, I can go up to 64 threads in benchmark mode. Very nice! But for real-world packing, the limit is 32 - not the doubled amount the benchmark mode led me to expect. That's especially sad because, in normal operation, ultra compression with LZMA2 mostly runs faster if you oversubscribe threads - I've found 1.5x the virtual thread count to be a good measure. With 7-Zip, that gets me 85% CPU load on a TR 1920X. Now why use 7-Zip? WinRAR 5.50 (August '17), for example, gives me a larger archive as well as only 30-60% CPU load. :( NUMA mode here is actually one example I found to be counterproductive, with significantly longer compression times.
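
To put that rule of thumb in code form, here's a trivial sketch of deriving a worker count from the logical thread count (the 1.5x factor is just my own observation, not something 7-Zip actually does):

Code:
// Derive an oversubscribed worker count from the logical (SMT) thread count.
#include <thread>
#include <cstdio>

int main()
{
    unsigned logical = std::thread::hardware_concurrency(); // e.g. 24 on a TR 1920X
    unsigned workers = logical + logical / 2;               // 1.5x -> 36 on a 1920X
    printf("logical threads: %u, workers: %u\n", logical, workers);
    return 0;
}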

Another example of why Threadripper is a daring move, and maybe too far ahead of its time in the consumer space, is RAW image conversion. Lightroom 5 - the last non-cloud version, which many professionals still use because they don't want the cloud - doesn't scale very well beyond 8 cores. Capture One Pro 10, however, does use all 32 threads on a TR 1950X - but, insert sad face, it also has OpenCL acceleration available, which speeds up RAW conversion batches by a large amount, depending on your GPU of course.

So, for normal operation, 8 cores is already more than enough, and for jobs that can be effectively parallelized, even 16 CPU cores are usually inferior to just using the GPU via OpenCL in the first place. That puts the use cases for home applications in a very tight spot, IMHO - and the same is/will be true of Intel's i9 lineup.
 
I did not, and did not mean to, attribute Adobe's sluggishness to TR alone, but as I said, there are very much diminishing returns above 4-6 cores, becoming almost unnoticeable at 8+ cores, be they Intel or AMD.
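
That's basically Amdahl's law. As a rough illustration, assuming a made-up 90% parallel fraction (not a measured figure):

Code:
// Amdahl's law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p on n cores.
#include <cstdio>

int main()
{
    const double p = 0.90; // assumed parallel fraction, for illustration only
    const int counts[] = {2, 4, 6, 8, 12, 16};
    for (int n : counts)
        printf("%2d cores -> %.2fx\n", n, 1.0 / ((1.0 - p) + p / n));
    return 0;
}

With those assumptions you'd get roughly 4.7x on 8 cores but only 6.4x on 16, which lines up with the "almost unnoticeable" feeling.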

Your only option for utilizing these many-core platforms is to multitask more than one heavy application at a time.
 