AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

What was wrong with Llano? The Stars cores were competitive enough back in 2011, and the integration with the TeraScale 2 GPU was very decent too. It blew past the Sandy Bridge models in gaming loads.


IIRC, what went wrong with the later solutions was that they were carrying Bulldozer CPUs. Then there was Kaveri, which brought a powerful 8 CU GPU but was starved for bandwidth because AMD originally planned to pair it with GDDR5M, which went down with Elpida's bankruptcy. Then they had a huge fabrication node disadvantage because GF took ages to get off 28nm.

And then laptop OEMs just associated AMD with budget solutions and paired their decent APUs with single-channel RAM and spinning hard disks, dragging AMD's reputation even lower.
 
What was wrong with Llano?
Llano was delayed.
And delayed.
And delayed.
Then they lied about yields to shareholders.
The product itself was decent but it arrived too late to compete against SB.
 
Ah, of course. Had Llano arrived a year earlier, during the Phenom II and Nehalem era, it could have taken the market by storm for sure. And AMD could even have pitched a variant of it for the Wii U.

But once it was released to the public, there wasn't anything broken about Llano. The iGPU performed close to its discrete Redwood counterpart.
 
But once it was released to the public, there wasn't anything broken about Llano.
It took some time to make it work.
The schedule slipped, and by the time Llano launched Intel was offering mobile SB parts, which quite frankly were better.
Making a good design is one thing; launching it in a timely manner (and in an actual working state, what the hell is this Vega mess) is another.
That's where Zen shines and Vega fails.
 
Yes, but here we have a fundamental µarch feature being marketed despite being possibly broken down at the hardware level. It's not FP coprocessing or turbo misses; it's something fundamental to Vega being competitive.
The fundamental need or competitive necessity of a feature is not on the same axis as its difficulty or complexity, however.

AMD may have a severe need for primitive shaders to be effective in order to compete, but that's not the same thing as being able to make it so. The problem space from external descriptions appears to span a number of complex projects, such as modifications to the geometry pipeline hardware, revamped shader generation, optimal solutions looking like inline assembly, and at least some changes bleeding into the ISA.

Choices made years ago likely didn't get an acid test in the real world until more recently, and the full range of unknowns wouldn't be felt until something closer to the final product was available.
Perhaps reality has shown some of the early bets didn't pay off as hoped, or some of the inevitable unknowns hit harder than expected.

Llano backfired on AMD hard enough already for them not to market something fundamentally broken that will take a few years to fix.
The definition of broken versus competitive may need to be clarified. The already-mentioned Zen money machine is (potentially) an example of a fix to a multi-year problem.
AMD has had a number of initiatives that took years to show meaningful results, assuming they have by now.
Some items, like the original form of TrueAudio, were damp squibs. I can point to some ancillary features or design efforts that made it to presentation slides and either went nowhere or were touted as new multiple launches in a row.

Also, RTG's marketing has been a bit less controlled in what it's promised or marketed. Whatever lessons may have been learned from Llano have not been integrated well into the messaging of marketing or leadership.
I've been cautiously supportive of some of the more assertive stance, and of the recognition that graphics needed investment, but one thing I've commented on for several product generations now is that RTG's bluster has managed to stay bigger than what it's delivered.
 
What I really don't understand is that this time they promoted the new features really strongly. There was a 30-minute interview on GamersNexus with Mike Mantor just about primitive shaders, and a big part of the white paper was about primitive shaders too. Why did they do this? I thought they would have known before release what was broken, and would then have shut down the advertising for this feature.
 
What I really don't understand is that this time they promoted the new features really strongly. There was a 30-minute interview on GamersNexus with Mike Mantor just about primitive shaders, and a big part of the white paper was about primitive shaders too. Why did they do this? I thought they would have known before release what was broken, and would then have shut down the advertising for this feature.
Looks like everything related to Radeon Pro took a huge chunk of resources.
Support for Radeons in most major CAD/VFX applications does not appear overnight.
They seem to have started developing gaming (well, consumer) drivers somewhere around mid-July, considering fetch-once appeared literally days before the RX Vega launch.
Juggling three lineups' worth of products while creating an ecosystem from scratch is a difficult task, I admit.
 
AMD sure is pumping out new gaming drivers at a rapid pace, though.

What I really don't understand is that this time they promoted the new features really strongly. There was a 30-minute interview on GamersNexus with Mike Mantor about primitive shaders. Why did they do this?

They also demoed Deus Ex's benchmark run at 4K with and without DSBR, and it showed a very large performance difference, yet nowadays Vega behaves just like an overclocked Fury X.
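
(For anyone wondering what the binning in DSBR actually refers to: the rasterizer sorts primitives into screen-space tiles and then shades a tile's worth of work at a time, so pixels covered several times over can be fetched and shaded once. Here is a toy sketch of just the bucketing step, with made-up resolution and tile size, and emphatically not AMD's hardware algorithm:)

```c
/* Toy sketch of screen-space binning, the idea behind DSBR.
 * NOT AMD's hardware algorithm: a real binned rasterizer batches
 * draws and defers shading per tile; this shows only the bucketing. */
#include <stdio.h>

#define WIDTH   1920                          /* assumed render target */
#define HEIGHT  1080
#define TILE    32                            /* assumed tile size, px */
#define TILES_X ((WIDTH  + TILE - 1) / TILE)  /* 60 tiles across */
#define TILES_Y ((HEIGHT + TILE - 1) / TILE)  /* 34 tiles down */

static int clampi(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Bump the counter of every tile the primitive's screen-space
 * bounding box overlaps. A real binner would append the primitive
 * to each tile's list instead of just counting. */
static void bin_bbox(float x0, float y0, float x1, float y1,
                     int counts[TILES_Y][TILES_X])
{
    int tx0 = clampi((int)x0 / TILE, 0, TILES_X - 1);
    int tx1 = clampi((int)x1 / TILE, 0, TILES_X - 1);
    int ty0 = clampi((int)y0 / TILE, 0, TILES_Y - 1);
    int ty1 = clampi((int)y1 / TILE, 0, TILES_Y - 1);
    for (int ty = ty0; ty <= ty1; ++ty)
        for (int tx = tx0; tx <= tx1; ++tx)
            counts[ty][tx]++;
}

int main(void)
{
    static int counts[TILES_Y][TILES_X];
    bin_bbox(100.0f, 50.0f, 300.0f, 200.0f, counts); /* one triangle's bbox */
    printf("tile (3,3) holds %d primitive(s)\n", counts[3][3]);
    return 0;
}
```

The bucketing above is the trivial part; the hard part, which a toy like this omits entirely, is batching draws and deferring shading per tile without stalling the front end, and that is presumably where the driver work lies.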
 
Makes you wonder even more, doesn't it?
I don't even know what they were thinking, launching RX Vega as it is now.
I mean, happy shareholders are nice, but people (as in your average consumer) want an NVidia killer, and hopefully a cheaper one.
 
Many people just wanted an alternative to nvidia at a similar price/performance but with FreeSync support.


I know I did.
Those people are few.
Most people always want the flagship Radeon to be an nVidia killer so that nVidia cuts prices and they can get their GeForce cheaper.

It's all about mindshare and to regain it AMD needs to launch something that mercilessly murders nVidia, R300-style. That's not going to happen though.
 
Those people are few.
Most people always want the flagship Radeon to be an nVidia killer so that nVidia cuts prices and they can get their GeForce cheaper.

And where are the numbers to back up those expectations?

How many times have AMD come up with a "flagship nvidia killer" within the last 10 years, so that people should expect it to happen again this year?
 
How many times have AMD come up with a "flagship nvidia killer" within the last 10 years, so that people should expect it to happen again this year?
Cypress, Tahiti, Hawaii and Fiji were all that, some more successful than others.
Still not enough to get the mindshare back.

Actually, Polaris was the first time AMD went back to a small-die strategy in ages, and that was because the architecture doesn't scale well with ALU count, as evidenced by current Vega.
 
What I really don't understand is that this time they promoted the new features really strongly. There was a 30-minute interview on GamersNexus with Mike Mantor just about primitive shaders, and a big part of the white paper was about primitive shaders too. Why did they do this? I thought they would have known before release what was broken, and would then have shut down the advertising for this feature.
The interview wasn't marketing promoting primitive shaders, though it partially had that effect. GamersNexus asked AMD to explain how culling works, and AMD obliged. Polaris and Vega both advertised features that targeted improving culling performance, so GamersNexus wanted to explain the concept to their audience.
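
(For readers who haven't watched the interview: "culling" here just means rejecting triangles that can't contribute to the image before they cost any further work. A minimal sketch of the classic back-face/zero-area test, purely illustrative and not AMD's implementation, looks like this:)

```c
/* The classic back-face / zero-area rejection test, illustrative
 * only, not AMD's implementation. Primitive shaders and DSBR aim to
 * apply tests like this earlier and at higher rates, so that dead
 * triangles never reach the rasterizer. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { float x, y; } Vec2;    /* post-projection coordinates */

/* Computes twice the signed area of the triangle. Assuming a
 * counter-clockwise front-face winding, area <= 0 means the triangle
 * is back-facing or degenerate and can be discarded. */
static bool should_cull(Vec2 a, Vec2 b, Vec2 c)
{
    float area = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    return area <= 0.0f;
}

int main(void)
{
    Vec2 a = {0, 0}, b = {1, 0}, c = {0, 1};
    printf("CCW culled? %d\n", should_cull(a, b, c)); /* 0: kept */
    printf("CW  culled? %d\n", should_cull(a, c, b)); /* 1: discarded */
    return 0;
}
```

The test itself is trivial; the engineering question is how early in the pipeline and at how many primitives per clock the hardware can apply it, which is what the culling-related features advertised for Polaris and Vega were about.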
 
Looks like everything related to Radeon Pro took a huge chunk of resources.
Support for Radeons in most major CAD/VFX applications does not appear overnight.
They seem to have started developing gaming (well, consumer) drivers somewhere around mid-July, considering fetch-once appeared literally days before the RX Vega launch.
Juggling three lineups' worth of products while creating an ecosystem from scratch is a difficult task, I admit.

If you cannot do it, then you need to have less market coverage, or invest heavily in your software development and grow the manpower there by a lot. NV often calls itself a software company, and that is the right mindset in this market. You can have the world's most advanced GPU; it is worth nothing if the drivers cannot support it.
 
How many times have AMD come up with a "flagship nvidia killer" within the last 10 years, so that people should expect it to happen again this year?
RV770, Tahiti and Hawaii come to mind; if you extend the 10-year timeframe a little, also R580 and, before that, of course R300/350/360. All of those times, AMD or ATi was able to regain market share. With generously applied artistic freedom, you could even conjure up a paradigm of every other major generation featuring such a killer. After Fiji was only halfway successful at this and Polaris was explicitly a mid-range offering, I think expectations were high for Vega, and AMD itself at least willfully endured, if not actively influenced, the hype building up.

But it's basically a moot point. Vega, if not broken as suggested, was at least rushed out the door months too early. AMD's CEO committed early to a launch in H1 2017, but I do not think pushing gaming Vega out soon after helped much.
 
They also demoed Deus Ex's benchmark run at 4K with and without DSBR, and it showed a very large performance difference, yet nowadays Vega behaves just like an overclocked Fury X.
Well, it really doesn't surprise me - AMD's GFX marketing stuff has been totally fake since Fury.

The drivers/fw/sw side of AMD is just plain sad. For instance, check AMD's ROCm issues on GitHub... I'm wondering who would pick Vega/AMD with this kind of support.
 
RV770, Tahiti and Hawaii come to mind

The comment was "most people always want the flagship Radeon to be an nVidia killer".
RV770 wasn't a flagship killer, as the GTX 280 was definitely faster. Tahiti was the fastest single-GPU card for one month until the GTX 680 released; it wasn't an nvidia killer, and AMD had to release the "GHz Edition" only 5 months later to counter GK104. Hawaii was the fastest card for one week until the 780 Ti released, but it was panned by reviewers due to its loud cooling solution, and 3rd-party solutions took several months to appear, so in the end it didn't really matter.

R300 is no less than 15 (fifteen) years old. Does it make any sense to expect new AMD GPUs to repeat the success of a single 15-year-old release?


If we look at Radeon history, we can definitely expect them to release cards that are simply competitive-per-price at launch and then age better than nvidia counterparts.
We definitely cannot count on new Radeons being flagship killers during their launch period.
 