AMD Ryzen CPU Architecture for 2017

AFAIK Ryzen supports up to 2666 MHz DDR4, but they used 2133 MHz sticks in the review. Meanwhile Intel CPUs support up to 2400 MHz DDR4, but they used 3200 MHz sticks in the review? Is that correct?
 
Maybe we shouldn't pay attention to some review of an engineering sample with very little information and just wait until tomorrow.
 
Performance seems to be quite good. The WinRAR result is weird, though. Still happy with my preorder.
WinRAR scales really well with cache and the memory subsystem, so testing with DDR4-2133 is a definite handicap. Still, the gap to the best Intel CPUs is quite large in this test, so faster RAM should allow Ryzen to surpass the i7 7700K, but not Intel's 8-core or 10-core monsters.

Overall performance is really good, and the pricing is fantastic for what it offers!
I will be switching from an i7 4790K @ 4.4 GHz to hopefully a 4.1 or 4.2 GHz Ryzen and will not look back :)
 
Ryzen supports 3400 MHz memory as well. This is more like a worst-case scenario for Ryzen, and it still manages to show pretty well.
 
Yep. The gigabyte motherboard I'm planning on getting lists up to 3200 support and that's only B350.
 

One small tidbit from that is that there's a recent-branch predictor included in the BTB and ITA section that enables 0-cycle branch prediction. It sounds like it can predict a small number of recently hit branches without injecting a bubble of several cycles for each branch, as AMD predictors have done for generations.

Also, the L3 apparently matches the clock of the fastest core. To what extent that influences how turbo, XFR, and power consumption behave in overclocking or at the highest bins is unclear.
 
Common wisdom says they will tune for lower cost and lower power consumption per transistor and stack things vertically.
That is not really scalable: if you have 3 stacks, you have 3 times the power consumption and temperature.
I marked the important part.

Of course cooling will be a challenge. But what else do you want to do if one can't make transistors any smaller (that was your premise)? You increase power efficiency, lower the cost per transistor, and try to cram more into the same area by stacking, if area is a constraint, which it is in a lot of applications (the outer dimensions of the device or, ultimately, the distance one has to drive a signal).
 
I marked the important part.

But isn't there a limit on how efficient you can make them? Even if you can get the same performance with half the energy, you would only gain access to one more stack. So it can make up for some years, but it is not a solution for the long term.
 
But isn't there a limit on how efficient you can make them? Even if you can get the same performance with half the energy, you would only gain access to one more stack. So it can make up for some years, but it is not a solution for the long term.
Every scaling ultimately has its limit somewhere. Making the transistors smaller is also not a solution for the long term, but it served us for a few decades (and will continue to do so for at least another). I see no argument there.
Or in other words: nobody can tell you today how to solve the problems arising 50 years from now.
Some wise man once said: predictions are difficult, especially about the future. ;)
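The trade-off being argued here is simple arithmetic: at a fixed power (and cooling) budget, the number of stackable layers scales inversely with per-layer power, so each halving of power per transistor buys only one doubling of stack height. A minimal sketch of that relationship, with hypothetical numbers chosen purely for illustration:

```python
def affordable_layers(power_budget_w: float, per_layer_w: float) -> int:
    """How many stacked layers fit inside a fixed power/cooling budget."""
    return int(power_budget_w // per_layer_w)

# Hypothetical numbers: a 100 W budget with 50 W per layer allows 2 layers.
print(affordable_layers(100.0, 50.0))  # 2

# Halving per-layer power only doubles the stack: efficiency gains buy
# linear headroom, which is why stacking alone is argued above not to be
# a long-term scaling path.
print(affordable_layers(100.0, 25.0))  # 4
```

The point of the sketch is the shape of the curve, not the numbers: to afford N layers you need an N-fold reduction in per-layer power, so exponential stacking requires exponential efficiency gains.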
 
But isn't there a limit on how efficient you can make them? Even if you can get the same performance with half the energy, you would only gain access to one more stack. So it can make up for some years, but it is not a solution for the long term.
There's a theoretical limit to the energy efficiency of computing anything at all (per the common use of the term), although silicon transistors are nowhere near it and might not be the way to get close.
Thermodynamics dictates that a computation costs at least something that cannot be gotten back; see https://en.wikipedia.org/wiki/Landauer's_principle.
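For a sense of scale, the Landauer bound can be computed directly. A quick sketch in Python; the Boltzmann constant is the exact SI value, while the 300 K room temperature and the 1e18 erasures/second throughput are assumptions chosen only for illustration:

```python
import math

# Landauer's principle: erasing one bit of information dissipates at
# least k*T*ln(2) joules of heat, where k is the Boltzmann constant.
BOLTZMANN = 1.380649e-23  # J/K (exact by the 2019 SI redefinition)

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum energy in joules to erase one bit at the given temperature."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

# At an assumed room temperature of ~300 K:
e_bit = landauer_limit(300.0)
print(f"{e_bit:.3e} J per bit")  # roughly 2.9e-21 J

# A hypothetical chip erasing 1e18 bits per second right at the limit
# would dissipate only a few milliwatts; real silicon sits many orders
# of magnitude above this floor.
print(f"{e_bit * 1e18 * 1e3:.2f} mW")
```

That gap of several orders of magnitude between real transistors and the thermodynamic floor is exactly the "nowhere near it" in the post above.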

It might also help to define long term. Long term could mean until the heat death of the universe, which for our purposes is effectively forever if we don't want to go ahead and say forever. And no improvement curve goes on forever.

There's not going to be a catch-all answer for all situations, especially going forward at the end of an era of relatively straightforward physical improvement.
Specialization, tuning to serial or parallel scaling, higher-temperature substrates, microfluidics or solid-state heat transport, and different models of computation or operation are being looked into.

Either that, or you can hope for a set of science-fiction answers, which ironically ties back into all of this:
http://multivax.com/last_question.html
 
I think long term means until we can make quantum PCs. For example, if we discover a way of using graphene to make CPUs, we can probably get significant improvements in performance for decades or more.

Sent from my HTC One using Tapatalk
 
I think long term means until we can make quantum PCs.
Even that would only get you so far, the many challenges to that method aside.

For example, if we discover a way of using graphene to make CPUs, we can probably get significant improvements in performance for decades or more.
Not for serial execution. Silicon processors are in certain cases already limited by propagation delay, which comes down to the speed of light or a fraction of it. Graphene doesn't change that universal constant, and, being made of a similar lattice of atoms, it can only do so much for the distances traveled.
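To put numbers on that: even at the vacuum speed of light, a signal covers only a few centimeters per clock cycle at GHz frequencies, and on-chip signals travel at some fraction of c. A small sketch; the 0.5c fraction is an illustrative assumption, not a measured figure for any real interconnect:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def max_distance_per_cycle(freq_hz: float, fraction_of_c: float = 1.0) -> float:
    """Farthest a signal can possibly travel in one clock period, in meters."""
    return (C / freq_hz) * fraction_of_c

# At 4 GHz, light in vacuum covers about 7.5 cm per cycle...
print(f"{max_distance_per_cycle(4e9) * 100:.1f} cm")  # 7.5 cm

# ...and an on-chip signal at an assumed 0.5c covers half that, which a
# pipeline stage must fit inside with timing margin to spare.
print(f"{max_distance_per_cycle(4e9, 0.5) * 100:.2f} cm")  # 3.75 cm
```

Since the bound scales as 1/frequency, doubling the clock halves the reachable distance per cycle regardless of what material the transistors are made of, which is the serial-execution point being made above.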
 
You are entering the philosophical aspects... I'm talking about practical things: what is practical to do and what is not.

Sent from my HTC One using Tapatalk
 
You are entering the philosophical aspects... I'm talking about practical things: what is practical to do and what is not.

Propagation delay is not a philosophical musing; it is a major constraint on wire length and pipeline staging. There are already portions of CPUs at 4 GHz whose designs reached an inflection point where, if they were any bigger, they would be slower, because delays would run into the fundamental limits of how fast signals can cross the longer distances.
 
Propagation delay is not a philosophical musing; it is a major constraint on wire length and pipeline staging. There are already portions of CPUs at 4 GHz whose designs reached an inflection point where, if they were any bigger, they would be slower, because delays would run into the fundamental limits of how fast signals can cross the longer distances.
I was referring to that "there's a limit to everything." Of course there is; my point is that we are approaching the limit of current CPU manufacturing technology and that there is no clear answer to what will come next.
 
I was referring to that "there's a limit to everything." Of course there is; my point is that we are approaching the limit of current CPU manufacturing technology and that there is no clear answer to what will come next.

When getting down to a few atomic layers and picoseconds, those fundamental limits show up.
Attempts to work around the fundamental cost of computation include trying to perform computation differently.
Per the Landauer article, there is research into forms of logic and computation that try to avoid or reduce how much is changed in the system. That includes items like reversible computing and also one of the attempts at getting quantum computing via adiabatic computation: https://en.wikipedia.org/wiki/Adiabatic_quantum_computation
 