AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

I don't know if there will be anything interesting, but here is AMD's streaming schedule for the week: link

Also, in this video Linus said that AMD is keeping the AM4 socket until 2020, when DDR5 and PCIe 4 come out.
 
Btw, reading an old Anandtech article I found this: "The timeframe for Raja’s influence depends on what you’re talking about. Raja’s immediate goal is to ensure that AMD has the best GPU architecture/hardware possible. Unfortunately, it will likely take 2 - 3 years to realize this goal - putting the serious fruits of Raja’s labor somewhere around 2015 - 2016. Interestingly enough, that’s roughly the same time horizon for the fruits of Jim Keller’s CPU work at AMD." I'm surprised how prophetic it was, and also how planned it seems, since I don't see this as a coincidence: Lisa joined AMD in January 2012, six months later they got Jim back, and a year later Raja, and all of that gets us to this point where AMD is presenting its (apparently) bright future. I would really like to ask Lisa if she planned it all or the pieces just came together at the right place at the right time (but I don't believe much in coincidences like that).
 
She needs to rock a leather jacket like Jen-Hsun. Or maybe she should go for an Eliza Cassan look and one-up him...
 
I do not get the hype. When they started with Ryzen they probably knew everything about Sandy Bridge. The problem for AMD was that the FX series was a monumentally wrong decision, and Ryzen brings them back to where they should be.
 
The problem for AMD was that the FX series was a monumentally wrong decision, and Ryzen brings them back to where they should be.
Intel made similarly bad decisions: Pentium 4 and Itanium. Both companies noticed that scaling up the clock rate of their single-core designs was doomed, and tried to make bold moves and failed. Unfortunately AMD had fewer resources, so it took them longer to get back to business. It's not easy to make the right decisions when the projections said 30+ GHz in 2010 :). Of course you could argue that Core 2's success should have pointed AMD in the right direction years earlier than Bulldozer shipped.
 
Intel made similarly bad decisions: Pentium 4 and Itanium. Both companies noticed that scaling up the clock rate of their single-core designs was doomed, and tried to make bold moves and failed. Unfortunately AMD had fewer resources, so it took them longer to get back to business. It's not easy to make the right decisions when the projections said 30+ GHz in 2010 :). Of course you could argue that Core 2's success should have pointed AMD in the right direction years earlier than Bulldozer shipped.

Sure, nobody is free of mistakes. It seems like the old Pentium ideas and the hope of a quick breakthrough in multicore software pointed AMD in the FX direction. Do not get me wrong, I am happy if Ryzen turns out great, but bettering the FX line was a very easy target when you had full knowledge of Intel's Sandy Bridge and follow-up CPUs during your development. If Ryzen ends up on the Broadwell level, it would deliver what can be expected, and it would nearly be back to the very early days, with AMD being a few percent behind Intel but still competitive. This might sound strange, but in the last 16 years we were never at this point.
 
Sure, nobody is free of mistakes. It seems like the old Pentium ideas and the hope of a quick breakthrough in multicore software pointed AMD in the FX direction. Do not get me wrong, I am happy if Ryzen turns out great, but bettering the FX line was a very easy target when you had full knowledge of Intel's Sandy Bridge and follow-up CPUs during your development. If Ryzen ends up on the Broadwell level, it would deliver what can be expected, and it would nearly be back to the very early days, with AMD being a few percent behind Intel but still competitive. This might sound strange, but in the last 16 years we were never at this point.

Except it looks like Zen is clocking higher than Broadwell-E with low power consumption (the 6-core has at least a 3.4 GHz base at a 65 W TDP :) ).

The big one I want to know is: does Zeppelin have the on-SoC crypto engines like the early leaks said? If it does, and it can match the throughput of the network interfaces, then that's a massive ace in the hole for Naples vs. E5 v5. Massive on-SoC PCIe and NVMe also looks to be a very nice server advantage.

We also know that the follow-on to Zen is a "tock" followed by a "tock" followed by another "tock". Intel has Skylake Mk3, which is really Haswell Mk5, in Cannonlake.

I think things are looking better for AMD than how you painted them :).
 
I want an Opteron desktop edition of Naples with quad-penta channels, with blackjack and hookers...
... (and to be honest, "Naples", or "Napoli" in Italian, is not the best codename I've ever heard...).
 
Yes, AMD made two mistakes, the K9(?) and BZ, but at least with BZ the concept itself wasn't the failure but the lack of resources put into it. BZ was a big bet and so is Ryzen; the difference between them is two things: Jim Keller and hundreds of millions of dollars. If BZ had had those, the story could have ended differently. PD vs. Nehalem wasn't that bad, it was actually competitive, but they never met in the field. PD was what BZ should have been, but it came too late. And even worse, AMD did not talk to MS to support their new CPUs on Windows, making them consume a lot more power and perform worse (it was a case where everything went wrong for AMD), and I don't blame them for not wanting to keep burning resources on a failed product without a future.

And what I meant by that is not that BZ was the right decision, but that what caused it to be so bad was a combination of things. Also, that was the AMD before Lisa, and we have seen what the AMD after Lisa looks like (a company that really knows how to use its limited resources, and also its talent, to create good products that last).

All in all, the hype is actually well deserved, and it's not just hype for Ryzen: part of it is to see Ryzen itself, part is to see what Intel's response is, and part is to get used CPUs at actually normal (low) prices. So it's not just hype for AMD but for the turning point the industry is about to reach, which we will all benefit from.
 
I did not make any predictions about the future. If the IPC of Zen turns out to be on the Broadwell level and the frequency and power consumption are competitive with Skylake, it is a good CPU. How the future will unfold is something we will see. In the end, Bulldozer was received so badly not only because AMD went for this design, but also because they failed in its execution, and in making the Windows environment more suitable for the needs of the CPU.

A line of mistakes led to BZ; so far Lisa seems to have avoided making such mistakes.
 
[offtopic]

Yes, AMD made two mistakes, the K9(?) and BZ, but at least with BZ the concept itself wasn't the failure but the lack of resources put into it. BZ was a big bet and so is Ryzen; the difference between them is two things: Jim Keller and hundreds of millions of dollars. If BZ had had those, the story could have ended differently. PD vs. Nehalem wasn't that bad, it was actually competitive, but they never met in the field. PD was what BZ should have been, but it came too late.

No, the Bulldozer concept was bad. The idea of sacrificing ANY single-thread performance for multi-threaded performance in a CPU is stupid when there is enough die area available to put multiple cores onto the die anyway.

The 2 high-level design "directions" were:
1) They could trade a little single-thread performance for more multi-threaded performance.
2) They could trade some IPC for clock rate, if the increase in clock rate was higher than the decrease in IPC.

1 was simply the wrong decision. Single-thread performance is the most important thing for a CPU.
2 is where they failed in implementation:
2A) Even though they designed the long pipeline and suffered the IPC decrease, they still could not reach very high clock speeds, and had to use very high voltages (increasing power consumption) to achieve the frequencies they did. The reason was mostly some bad critical paths in their L2 caches.
2B) ILP suffered more than originally planned: their cache prefetchers did not work as well as they were supposed to, they had aliasing problems in the L1I cache, etc. They had put in an unbalanced cache structure (small WT L1D, big slow L2) hoping the prefetchers would make it work, but they did not.
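The two directions above boil down to the identity performance = IPC × frequency, so a quick sketch shows why direction 2 only pays off if the clock actually arrives. The IPC and frequency values here are made-up illustrative numbers, not Bulldozer measurements:

```python
# Toy model: single-thread performance = IPC x clock frequency.
# A clock-for-IPC trade only wins if the clock gain outweighs the IPC loss.
# All numbers are illustrative assumptions, not real Bulldozer figures.

def st_perf(ipc, freq_ghz):
    """Relative single-thread performance."""
    return ipc * freq_ghz

baseline = st_perf(ipc=1.0, freq_ghz=3.3)   # short-pipeline reference design
planned  = st_perf(ipc=0.8, freq_ghz=4.5)   # 20% IPC loss, ~36% clock gain: net win
achieved = st_perf(ipc=0.8, freq_ghz=3.6)   # clock target missed: net loss

print(planned > baseline)    # True: the trade works if the clocks arrive
print(achieved > baseline)   # False: pay the IPC cost, miss the clocks, lose outright
```

The second case is the 2A failure mode described above: the IPC price was paid up front, but the compensating clock speed never materialized.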

And even worse, AMD did not talk to MS to support their new CPUs on Windows, making them consume a lot more power and perform worse (it was a case where everything went wrong for AMD), and I don't blame them for not wanting to keep burning resources on a failed product without a future.

Any basis for these claims?

[/offtopic]
 
[offtopic]



No, the Bulldozer concept was bad. The idea of sacrificing ANY single-thread performance for multi-threaded performance in a CPU is stupid when there is enough die area available to put multiple cores onto the die anyway.

The 2 high-level design "directions" were:
1) They could trade a little single-thread performance for more multi-threaded performance.
2) They could trade some IPC for clock rate, if the increase in clock rate was higher than the decrease in IPC.

1 was simply the wrong decision. Single-thread performance is the most important thing for a CPU.
2 is where they failed in implementation:
2A) Even though they designed the long pipeline and suffered the IPC decrease, they still could not reach very high clock speeds, and had to use very high voltages (increasing power consumption) to achieve the frequencies they did. The reason was mostly some bad critical paths in their L2 caches.
2B) ILP suffered more than originally planned: their cache prefetchers did not work as well as they were supposed to, they had aliasing problems in the L1I cache, etc. They had put in an unbalanced cache structure (small WT L1D, big slow L2) hoping the prefetchers would make it work, but they did not.

And what I meant to that is not that BZ was the right decision, is that what cause it to be so bad was a combination of things.

You should have read my whole comment :p

Any basis for these claims?

[/offtopic]

That was after someone found that the OS was scheduling one thread per module and jumping to the next, instead of filling a module first, making the modules consume something like twice as much power; AMD said they were in talks with MS to fix the issue.

What I said was not that BZ was the correct decision; what I said was that if AMD had put more resources and talent into BZ's development, it could have ended up being competitive in performance. Also, ST performance is a combination of IPC and frequency; AMD on paper didn't try to trade away ST performance, they just tried to achieve it in a different way, and it's extremely rare for there to be just one answer to a problem. AMD tried to find another one; whether or not it was the correct one, we can't be sure. I would really love to see Excavator with Zen's caches and front end to see how well it could do with proper resources. I know I probably won't, but it could be the definitive proof of concept.

Also, we need balance; a super-IPC design is not the answer either. I remember that IBM chip with giant, enormous, huge, immense, massive INT cores. I can't remember the name, but I do remember that it was a failure.

In summary, I am not saying that BZ is the future of CPU design... What I said was that the BZ failure was more a matter of resources and mistakes than of concept. And yes, a high-IPC design is better in my (extremely ignorant) opinion, because it doesn't rely as much on process technology and it's easier to manufacture and to achieve goals with, but I am sure it's not the only way to make a CPU.
 
[offtopic]

You should have read my whole comment :p
That was after someone found that the OS was scheduling one thread per module and jumping to the next, instead of filling a module first, making the modules consume something like twice as much power; AMD said they were in talks with MS to fix the issue.

Treating it exactly as SMT easily gave the best performance in most workloads, by a considerable margin.

And "like twice as much power" is a big exaggeration.
What I said was not that BZ was the correct decision; what I said was that if AMD had put more resources and talent into BZ's development, it could have ended up being competitive in performance.

They could have fixed the speedpath issues and gotten perhaps 10% higher clock speeds with perhaps 10% lower power consumption, yes.
But it would still have had considerably lower single-thread performance than Sandy Bridge.

Also, ST performance is a combination of IPC and frequency; AMD on paper didn't try to trade away ST performance, they just tried to achieve it in a different way, and it's extremely rare for there to be just one answer to a problem. AMD tried to find another one; whether or not it was the correct one, we can't be sure. I would really love to see Excavator with Zen's caches and front end to see how well it could do with proper resources. I know I probably won't, but it could be the definitive proof of concept.

Processor core parts are not Lego bricks; you cannot just "switch caches".

The Bulldozer L2 cache was slow because:
1) It was big. This could be changed. Halving the L2 cache size saves 2 cycles of L2 access time on Bulldozer (found in some AMD technical manuals; they were considering a smaller L2 cache even before Excavator).
2) It had to serve three L1 caches that were far away from each other, which needed extra cycles for data routing. This was fundamental to the Bulldozer CMT design. So CMT increased L2 cache latency and thereby decreased single-thread performance.

Also, the floating-point load and store datapaths needed to be much more complex to be able to load from either L1D cache, which probably added at least one clock to floating-point load latency.

So, CMT definitely did hurt single-thread performance. And AMD knew very well that it would.
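The cost of those extra L2 cycles can be put in rough perspective with a standard average-memory-access-time calculation. This is a generic textbook model, not AMD's data; the hit time, miss rate, and latencies are made-up illustrative values:

```python
# Average memory access time: AMAT = L1 hit time + L1 miss rate * L2 latency.
# All numbers below are illustrative assumptions, not official Bulldozer latencies.

def amat(l1_hit_cycles, l1_miss_rate, l2_latency_cycles):
    """Average cycles per load, ignoring L2 misses for simplicity."""
    return l1_hit_cycles + l1_miss_rate * l2_latency_cycles

# A small write-through L1D misses more often, so L2 latency sits on the
# critical path more of the time; extra routing cycles make that worse.
fast_l2 = amat(4, 0.10, 18)
slow_l2 = amat(4, 0.10, 20)   # +2 cycles of CMT data routing

print(round(slow_l2 - fast_l2, 3))
```

Only a fraction of a cycle per load on average, but loads are frequent, so it compounds with the prefetcher and aliasing problems listed above.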

Also, we need balance; a super-IPC design is not the answer either. I remember that IBM chip with giant, enormous, huge, immense, massive INT cores. I can't remember the name, but I do remember that it was a failure.

You don't remember anything about the chip except that it was a failure? What if you also misremember the "failure" part? Or the IBM part? Or the "super IPC" part?
Maybe you are confusing this with the Alpha EV8, which was canned (because Itanium was going to take over the world), never finished, and never failed on technical grounds.

In summary, I am not saying that BZ is the future of CPU design... What I said was that the BZ failure was more a matter of resources and mistakes than of concept.

It was a mistake on both counts, but the bigger mistakes were on the concept side.

And yes, a high-IPC design is better in my (extremely ignorant) opinion, because it doesn't rely as much on process technology and it's easier to manufacture and to achieve goals with, but I am sure it's not the only way to make a CPU.

Wrong. If you have a nice long pipeline and very short critical paths, you don't need a good process to achieve high clock speeds, and if your manufacturing process sucks, then your low-clock, short-pipeline architecture with long delays clocks even lower. We have a multiplication here, not an addition.
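That multiplication can be made explicit with a toy cycle-time model: frequency ≈ 1 / (gate delay × logic levels per stage), so process quality scales both designs by the same factor. The gate delays and logic depths below are hypothetical values for illustration only:

```python
# Toy model: cycle time = process gate delay x logic levels per pipeline stage.
# Frequency = 1 / cycle time, so process quality and stage depth multiply.
# All numbers are hypothetical illustrative values.

def freq_ghz(gate_delay_ps, gates_per_stage):
    cycle_ps = gate_delay_ps * gates_per_stage
    return 1000.0 / cycle_ps   # 1000 ps per ns -> GHz

good_process, bad_process = 15.0, 20.0   # ps per gate level (hypothetical)
long_pipe, short_pipe = 16, 24           # logic levels per stage (hypothetical)

# A worse process slows BOTH designs by the same factor; the short-pipeline
# design starts lower and ends lower still.
print(freq_ghz(good_process, long_pipe))   # ~4.17 GHz
print(freq_ghz(bad_process, long_pipe))    # ~3.13 GHz
print(freq_ghz(bad_process, short_pipe))   # ~2.08 GHz
```

Note the ratio between the two processes is the same (0.75) regardless of pipeline depth; a bad process never lets the shallow design catch up.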

And you can equally screw up a short-pipeline design: the AMD K5 is a good example of a failed short-pipeline design, as is the Rise mP6.


What matters is how balanced the pipeline is and what kind of tradeoffs are made to achieve the high clock speeds.
Both Bulldozer and the P4 had sucky L1D caches to achieve those high clock speeds, and so went too far to the "high clock speed" side.
Bulldozer might have done OK with its L1D cache if it had had a faster L2 cache, but it did not.

The P4 also went even further and had multi-cycle schedulers, which required the "replay" mechanism that considerably increased power consumption.

[/offtopic]
 
Intel made similarly bad decisions: Pentium 4 and Itanium. Both companies noticed that scaling up the clock rate of their single-core designs was doomed, and tried to make bold moves and failed. Unfortunately AMD had fewer resources, so it took them longer to get back to business. It's not easy to make the right decisions when the projections said 30+ GHz in 2010 :). Of course you could argue that Core 2's success should have pointed AMD in the right direction years earlier than Bulldozer shipped.
I am working off hearsay and fuzzy memory, but AMD's P4-like direction was K9, which (I think?) was cancelled in 2004, the same year Prescott showed how badly that direction was faring.

According to the following, K9 had a pipeline that would have had approximately 2/3 the complexity per stage that BD wound up having.
https://groups.google.com/d/msg/comp.arch/RQGL8yFPYEk/oFeUF4cueKMJ

If that timing is accurate, Bulldozer was AMD's turnaround from a P4 core.
With regards to Core2, the success was 2006 and later, and Bulldozer's development process may have been underway for some time. Despite Bulldozer's 2011 release date and the 5-year rule of thumb for rolling out a new high-end core, 32nm Bulldozer was the result of a delay from 45nm, so that on top of other rumored problems in development might have meant a fair amount was decided upon prior to then.


Yes, AMD made two mistakes, the K9(?)
I've read about a brief 6-month foray into a wide brainiac-type core as a K8 replacement, which I suppose isn't quite like a mistake beyond the lost time and expense. There was also speculation on patents seemingly covering a different Bulldozer-type core, perhaps an alternate candidate.

and BZ, but at least with BZ the concept itself wasn't the failure but the lack of resources put into it. BZ was a big bet and so is Ryzen; the difference between them is two things: Jim Keller and hundreds of millions of dollars.
AMD had way more of those hundreds of millions of dollars when BD was being designed. Even if we accept that BD was somehow much more constrained, then it seems having a top-flight executive and money leads to the conclusion of not doing BD.

And even worse, AMD did not talk to MS to support their new CPUs on Windows, making them consume a lot more power and perform worse (it was a case where everything went wrong for AMD), and I don't blame them for not wanting to keep burning resources on a failed product without a future.
That was a pretty minor penalty, and I would primarily blame AMD on that one. AMD's supposed optimal point was a module-oriented thread allocation policy to fill up one module at a time so that the others could be gated off for turbo headroom.
Turbo's ceiling was too low, modules too leaky, multithreading penalties too high, base performance per core too low, and penalties for twitchy module gating too excessive. The best solution was to do the opposite of what AMD said and just treat the modules like HT cores, since the design was apparently going to burn too much power anyway and not go anywhere with turbo if you managed to guess the future on module/thread utilization.
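The two allocation policies being contrasted can be sketched with a toy model: pack threads two-per-module and gate the idle modules (what AMD suggested, hoping for turbo headroom), versus spread one thread per module first (treating modules like HT cores). The sharing penalty and per-module power below are hypothetical values, not measurements:

```python
# Toy comparison of two thread-allocation policies on a 4-module CMT chip.
# All numbers are hypothetical assumptions, not Bulldozer measurements.
import math

MODULES = 4
SHARE = 0.8        # per-thread throughput when two threads share a module
MODULE_W = 25.0    # watts per powered-on module (hypothetical)

def pack(threads):
    """Fill one module at a time (the suggested policy): gates idle modules."""
    active = math.ceil(threads / 2)
    paired = 2 * (threads // 2)          # threads paying the sharing penalty
    perf = paired * SHARE + (threads - paired) * 1.0
    return perf, active * MODULE_W

def spread(threads):
    """One thread per module first (treat modules like HT cores): full speed."""
    assert threads <= MODULES
    return float(threads), threads * MODULE_W

for t in (2, 4):
    print(t, pack(t), spread(t))
```

The toy model shows the intended trade: packing burns fewer module-watts but gives up per-thread throughput, while spreading keeps every thread at full speed at the cost of lighting up every module. The post's point is that in practice the packed modules never converted their power savings into turbo, so spreading won outright.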
 
I've read about a brief 6-month foray into a wide brainiac-type core as a K8 replacement, which I suppose isn't quite like a mistake beyond the lost time and expense. There was also speculation on patents seemingly covering a different Bulldozer-type core, perhaps an alternate candidate.


AMD had way more of those hundreds of millions of dollars when BD was being designed. Even if we accept that BD was somehow much more constrained, then it seems having a top-flight executive and money leads to the conclusion of not doing BD.


That was a pretty minor penalty, and I would primarily blame AMD on that one. AMD's supposed optimal point was a module-oriented thread allocation policy to fill up one module at a time so that the others could be gated off for turbo headroom.
Turbo's ceiling was too low, modules too leaky, multithreading penalties too high, base performance per core too low, and penalties for twitchy module gating too excessive. The best solution was to do the opposite of what AMD said and just treat the modules like HT cores, since the design was apparently going to burn too much power anyway and not go anywhere with turbo if you managed to guess the future on module/thread utilization.

I agree that the design was pretty bad, but I'm not convinced that the concept was. We also need to remember the talent exodus from AMD that made things harder and worse. If we compare, the 9590 is not bad against the 2500K in CPU benchmarks, although it's below an i3 in most games. But if we imagine that 9590 as the first BZ launched back in 2011, then it would have been a good processor in terms of performance. Of course that was not what happened, but it's a way of visualizing what could have happened.

It's also kind of funny/sad how the 9590, the latest and fastest BZ CPU, is actually what AMD tried to make when designing the concept for BZ, and how they finally managed to achieve the 5 GHz goal, but unfortunately it was three years (to say the least) too late.

Also, the IBM core was the P6, and by failure I mean in terms of design, where IBM put a lot of money in and then had to throw it away and backtrack, like Intel with the P4.
 
I agree that the design was pretty bad, but I'm not convinced that the concept was.
Per the creator of the concept, it was to enable one or two things: a very tight critical loop for very high clocks, or clustered execution that allows for fast thread-level migration and speculation.

The first seems definitively out, and the latter appears to be problematic enough that attempts at it from before BD to now have failed, and no major vendor has approached it. Does it make the concept a bad one? Perhaps not, but it doesn't seem to make it good either.
Perhaps the charitable interpretation is that it wasn't the best fit for what could be done then or up to now.

We also need to remember the talent exodus from AMD that made things harder and worse.
Perhaps, although a good chunk of those were working on things not-Bulldozer, and might not have been working to make it better.
Another interpretation is that they were working on things not-Bulldozer, but Bulldozer for all its flaws was the only one AMD could make into a product.
The loss of talent may have stemmed from, among other things, a wrongly-decided or absent vision on the part of leadership, and they settled on the BD concept.

It's also kind of funny/sad how the 9590, the latest and fastest BZ CPU, is actually what AMD tried to make when designing the concept for BZ, and how they finally managed to achieve the 5 GHz goal, but unfortunately it was three years (to say the least) too late.
The 9590 is a water-cooled 220W 8-core Vishera, which isn't the newest in the line, and it wouldn't have been acceptable in 2011 as a mainline product.

 
Per the creator of the concept, it was to enable one or two things: a very tight critical loop for very high clocks, or clustered execution that allows for fast thread-level migration and speculation.

The first seems definitively out, and the latter appears to be problematic enough that attempts at it from before BD to now have failed, and no major vendor has approached it. Does it make the concept a bad one? Perhaps not, but it doesn't seem to make it good either.
Perhaps the charitable interpretation is that it wasn't the best fit for what could be done then or up to now.

Yes, but a lack of technical knowledge/expertise on how to do something doesn't make that "something" bad per se. A concept and an implementation are separate things. That doesn't mean that we should just try to create things without looking at viability, our technical knowledge/expertise, cost, etc. Just that sometimes you have to take the bet and try, and sometimes you will succeed and other times you will fail. AMD saw the BZ concept on paper and it looked pretty good for theoretical performance and especially marketing.

The 9590 is a water-cooled 220W 8-core Vishera, which isn't the newest in the line, and it wouldn't have been acceptable in 2011 as a mainline product.

Yes, I didn't say it would have been the best CPU in the world, but if it had been competitive against Intel, things would have ended differently for AMD. If your CPU consumes more but beats the rival, then it's a half win, but BZ consumed a lot more and performed a lot worse; it was just a massive loss. Also, it could have been a much better starting point, and perhaps AMD wouldn't have needed to stop making [strike]high end[/strike] mid-range CPUs until Zen arrived.
 
Yes, but a lack of technical knowledge/expertise on how to do something doesn't make that "something" bad per se. A concept and an implementation are separate things.
What if a concept is found to only allow for bad implementations?
Half of the concept's motivation was found to be physically impractical, for example.
The other half has been found wanting by an industry that's spent years trying, and other techniques have been implemented successfully and sometimes to better results.

Yes, I didn't say it would have been the best CPU in the world, but if it had been competitive against Intel, things would have ended differently for AMD. If your CPU consumes more but beats the rival, then it's a half win, but BZ consumed a lot more and performed a lot worse; it was just a massive loss. Also, it could have been a much better starting point, and perhaps AMD wouldn't have needed to stop making [strike]high end[/strike] mid-range CPUs until Zen arrived.
That's just saying that if BD had performed up to the level of a vanity SKU, like a heavily binned and water-cooled future core, things wouldn't have been bad. This cannot be done without separating the performance from all the unacceptable things it takes to get it.
Why stop there? If BD had launched with Zen's performance during an LN2 suicide run, AMD would have done better as well.
 