AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

16K isn't even close to a paper launch. Many high-end parts have launched with even lower GPU stock.
16K is only bad if someone believes it's going to be a GTX 1070 competitor (an upper-mainstream part). But then we would have to assume Vega is barely faster than Fury X. The clock difference alone would make it a GTX 1080 competitor. With all the hardware improvements AMD has revealed and the rumoured 1.5 GHz clocks, Vega should land somewhere between the GTX 1080 and the 1080 Ti. With that kind of performance we are talking about a $500+ card, not exactly a high-volume product.
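For what it's worth, here is the napkin math behind that clock argument. It's only a rough sketch: it assumes Vega keeps Fiji's 4096 shaders and actually reaches the rumoured 1.5 GHz, and the GeForce numbers are approximate peak-FP32 reference points, not a performance prediction.

```python
# Napkin math only. Assumes Vega keeps Fiji's 4096 shaders (an assumption,
# not a confirmed spec) and actually reaches the rumoured 1.5 GHz.

def fp32_tflops(shaders, clock_ghz):
    # Peak FP32: 2 FLOPs per shader per clock (fused multiply-add).
    return shaders * 2 * clock_ghz / 1000

fury_x = fp32_tflops(4096, 1.05)  # ~8.6 TFLOPS
vega   = fp32_tflops(4096, 1.50)  # ~12.3 TFLOPS
print(f"Fury X {fury_x:.1f} TFLOPS -> rumoured Vega {vega:.1f} TFLOPS "
      f"(+{(vega / fury_x - 1) * 100:.0f}% from clocks alone)")
# Rough reference points: GTX 1080 ~8.9 TFLOPS, GTX 1080 Ti ~11.3 TFLOPS peak FP32.
```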
 
Well, something surprises me a little: we would have heard something from SK Hynix (I doubt the interposer is the problem), because being able to ship only 30-50K chips seems way off target.

I imagine there are already more Teslas with HBM2 out in the wild.

I'm always a little cautious with this type of information that seems to come from nowhere (but it is quite possible).
 
I could have sworn AMD confirmed in a conference call a few quarters ago that they had pre-purchased several tens of millions of dollars of HBM2... I'll see if I can find it, but these two ideas seem mutually exclusive.

edit: I can find references to the statement but I can't find the statement itself. At the end of Q4 2016 AMD exercised an option to spend $80 million on GPU components it didn't need at the time. It doesn't say HBM2 specifically, but that is the most logical candidate.
 
It was a question in the February 1 earnings call, in reference to inventory coming in ~$90M over previous guidance.

First of all, it was higher than anticipated due to product ramps, product mix, and also our higher expected revenue in the first half of 2017. We also had an opportunity to purchase some inventory in a tight PC supply environment at commercially favorable terms. And we took the opportunity to go ahead and purchase the inventory, given what we see from a revenue standpoint for the first half of 2017.

https://finance.yahoo.com/news/edited-transcript-amd-earnings-conference-042023719.html
 
Well, something surprises me a little: we would have heard something from SK Hynix (I doubt the interposer is the problem), because being able to ship only 30-50K chips seems way off target.
Each Vega has 2 HBM2 stacks, so 2 × 16K = 32K chips.


It was a question in the February 1 earnings call, in reference to inventory coming in ~$90M over previous guidance.
That quote does sound like they purchased all the available inventory. It could be the main reason why SK Hynix took the 2 Gbps HBM2 stacks out of its catalogue this past quarter.
 
BTW, did HBM "1" become easier to produce and cheaper over the Fury X life cycle?

Can it be known? Perhaps they just kept the production line running until it was no longer needed, rather than spend resources improving it.
What we do know is that HBM "3" was said to be the cheaper one. So easier production, lower cost and high volume will only really arrive with HBM3 and later. I have the feeling they iterate rather than try to beat a dead horse.

If HBM2 is a dead horse, it is a fine and fascinating one, though!
 
What we do know is that HBM "3" was said to be the cheaper one. So easier production, lower cost and high volume will only really arrive with HBM3 and later. I have the feeling they iterate rather than try to beat a dead horse.

From what I remember of last year's Hot Chips presentations from Samsung and SK Hynix, HBM3 and Low-cost HBM aren't necessarily the same thing.
Samsung's proposal for Low-cost HBM actually has lower bandwidth-per-stack than HBM2:

[Image: Samsung Low-cost HBM proposal slide (udUVkZD.png)]


OTOH, SKHynix's presentation clearly mentioned HBM3 getting higher bandwidth than HBM2.

We may be seeing two different new HBM standards from JEDEC this year or the next. For example, HBM3 would go into high-end graphics cards (e.g. 2 stacks giving close to 1 TB/s), with Low-cost HBM going into mid-range graphics cards and APUs (e.g. a single stack doing 200 GB/s).
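As a quick sanity check on those figures, per-stack bandwidth is just I/O width times per-pin data rate. The HBM2 line below uses the 1024-bit, 2 Gbps spec values; the HBM3 and Low-cost HBM lines are speculative back-calculations from the numbers above, not spec values.

```python
# Per-stack bandwidth = I/O width (bits) * per-pin data rate (Gbps) / 8.
# HBM2 figures are spec values; the HBM3 and Low-cost HBM rows are guesses
# back-calculated from the bandwidth figures floated above.

def stack_bandwidth_gbs(io_width_bits, pin_rate_gbps):
    return io_width_bits * pin_rate_gbps / 8  # GB/s per stack

print(stack_bandwidth_gbs(1024, 2.0))  # HBM2: 256 GB/s -> 2 stacks = 512 GB/s
print(stack_bandwidth_gbs(1024, 4.0))  # HBM3 guess: 512 GB/s -> 2 stacks ~ 1 TB/s
print(stack_bandwidth_gbs(512, 3.2))   # Low-cost HBM guess: ~205 GB/s on a narrower bus
```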
 
What I find odd is that a couple of posts below, a company called System Plus Consulting was linked portraying the cost difference at the European 3D Summit (http://www.semi.org/eu/european-3d-summit-2016), seemingly an industry conference from/by companies involved in promoting 2.5D/3D production designs. There, the cost difference could be read as not so substantial, even though at the time of the conference GDDR5 was well matured and HBM had been available from a single vendor for half a year.

Now Samsung, who has always been an advocate of margins and high volume, is linked here with a Hot Chips presentation proposing to reduce cost for another iteration of HBM in order to increase competitiveness. Somehow, these two things do not mix well in my mind.

Or is "consumer segment" as of Samsungs proposal to be interpreted as mainstream/entry-level market as opposed to the high-end cards that were referenced by System Plus Consulting?
 
We may be seeing two different new HBM standards from JEDEC this year or the next.
JEDEC has two tiers of 2.5D/3D memory standards: HBM is one, Wide I/O the other. Wide I/O does have a x512 interface, but it doesn't seem to have much traction. I guess perhaps they are going to unify both standards, at least in their names, huh?
 
JEDEC has two tiers of 2.5D/3D memory standards: HBM is one, Wide I/O the other. Wide I/O does have a x512 interface, but it doesn't seem to have much traction. I guess perhaps they are going to unify both standards, at least in their names, huh?

They're both JEDEC stacked (2.5D/3D) memory standards but very different in target applications. Wide IO is for ultra-low-power devices (the PS Vita uses it for VRAM).
AFAIR from older docs, predicted bandwidth-per-watt was actually similar between HBM and Wide I/O 2, but the latter is supposed to be implemented as a single stack on top of an ULP SoC using TSVs (no interposer).
HBM uses higher clocks, more pins and multiple stacks through an interposer to achieve very high bandwidth levels.


If I had to guess, Wide I/O didn't really take off in smartphone/tablet SoCs because:

- Implementation cost is high compared to LPDDR, so it could only go into high-end SoCs in high-end devices.
- Memory density is limited. This article from last year shows a 1GB module, which was obviously not enough for high-end devices in 2016.
- High-end SoCs ballooned in power consumption and heat output over the past few years, peaking in pre-FinFET SoCs like the Snapdragon 810 and Tegra X1. Soldering a memory chip on top of those SoCs would present an even larger problem when you can only fit tiny heatsinks at best.


Wide IO2 would have been very interesting for an SoC that needed high bandwidth to prioritize graphics performance but had to keep power and heat low through moderate clocks on both the CPU and GPU.
This means Wide IO2's 68 GB/s would have been a good fit as VRAM for a mobile console, just like Wide IO was great for the Vita back in 2011.
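For anyone wondering where the 68 GB/s comes from, it's the same width-times-rate arithmetic. The x512 interface is mentioned above; the ~1066 MT/s per-pin figure is my recollection of the Wide I/O 2 maximum, so treat this as a rough sketch rather than spec quotes.

```python
# Same width-times-rate arithmetic for Wide I/O 2 versus a single HBM2 stack.
# Wide I/O 2 is x512 at up to ~1066 MT/s per pin (my recollection of the spec
# ceiling), which is where the ~68 GB/s figure comes from; one HBM2 stack
# doubles the width and roughly doubles the per-pin rate.

def bandwidth_gbs(io_width_bits, pin_rate_mtps):
    return io_width_bits * pin_rate_mtps / 8 / 1000  # GB/s

print(bandwidth_gbs(512, 1066))   # Wide I/O 2: ~68 GB/s
print(bandwidth_gbs(1024, 2000))  # one HBM2 stack: 256 GB/s
```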



But to end the rambling, the point was that we might see Low-cost HBM (they'll probably call it HBM-LE or something) and HBM3, but neither has a reason to fuse with Wide IO.
 
We may be seeing two different new HBM standards from JEDEC this year or the next. For example, HBM3 would go into high-end graphics cards (e.g. 2 stacks giving close to 1 TB/s), with Low-cost HBM going into mid-range graphics cards and APUs (e.g. a single stack doing 200 GB/s).

Indeed "Remove ECC" gives it away, and might even call into question whether it will be used with an APU. By that I mean an APU for traditional desktop might be branded as Xeon or Radeon Pro / Opteron, if it supports ECC (or Xeon M)
If Low-cost HBM gets adopted on a laptop GPU, that will give it a large market.
 
What I find odd is that a couple of posts below, a company called System Plus Consulting was linked portraying the cost difference at the European 3D Summit (http://www.semi.org/eu/european-3d-summit-2016), seemingly an industry conference from/by companies involved in promoting 2.5D/3D production designs. There, the cost difference could be read as not so substantial, even though at the time of the conference GDDR5 was well matured and HBM had been available from a single vendor for half a year.

Now Samsung, who has always been an advocate of margins and high volume, is linked here with a Hot Chips presentation proposing to reduce cost for another iteration of HBM in order to increase competitiveness. Somehow, these two things do not mix well in my mind.

Or is "consumer segment" as of Samsungs proposal to be interpreted as mainstream/entry-level market as opposed to the high-end cards that were referenced by System Plus Consulting?

Yes, I interpret "consumer segment" as applying to devices cheaper than enthusiast/professional graphics cards, or machines using enthusiast/professional-level graphics cards. As such it could be used to target more price-sensitive segments of the market. One area of interest for a cheaper HBM implementation would be APU/SoC-based consumer devices, especially Windows devices. Probably not smartphones, as would they actually benefit from the bandwidth?

However, will it be priced competitively with GDDR in those segments? Or would a slight price premium be justified by the increased bandwidth and power savings, combined with PCB savings (a smaller main board), in certain scenarios? A smaller PCB wouldn't save much in cost, but could be attractive for space-limited designs.

A gaming-oriented Windows NUC-sized (or smaller) device featuring an APU with HBM would certainly be interesting if the APU had a capable CPU and GPU (Raven Ridge or something similar?).

Regards,
SB
 
This has no credibility at all, but if it's true...

https://www.reddit.com/r/Amd/comments/6aeat3/full_vega_lineup_and_release_date_revealed/

-
RX Vega "Core" - RRP: $399.99 (1070 perf or better)

RX Vega "Eclipse" - RRP: $499.99 (1080 perf or better)

RX Vega "Nova" - RRP: $599.99 (1080TI perf or better)

Supposedly he has a friend who worked at AMD who told him that. Again, this is just for fun, I think.

Eclipse could be a good upgrade for my 290. Would have preferred it a bit cheaper, but we'll see. Not sure if I'm upgrading my CPU or GPU this year.
 
Eclipse could be a good upgrade for my 290. Would have preferred it a bit cheaper, but we'll see. Not sure if I'm upgrading my CPU or GPU this year.

I've got a 290 and I'm also looking to upgrade. A $500 1080-like Vega doesn't really do it for me though. I'd just get a 1080 instead. They've gone as low as $420 in the past month and there are routine $440-460 sales.

But ~30% more performance for only 20% more cost makes that Nova look appetizing.
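Quick perf-per-dollar sketch of the rumoured line-up; the prices are from the reddit post, but the relative performance column is a rough assumption on my part (1080 ≈ 25% over a 1070, 1080 Ti ≈ 30% over a 1080), so it's only napkin math.

```python
# Prices are from the reddit post above; the relative performance numbers are
# my own rough assumptions, not measured data.

lineup = {
    "Core (1070-ish)":    (399.99, 1.00),
    "Eclipse (1080-ish)": (499.99, 1.25),
    "Nova (1080 Ti-ish)": (599.99, 1.60),
}

for name, (price, perf) in lineup.items():
    print(f"{name:20s} ${price:6.2f}  relative perf per $1000 = {perf / price * 1000:.2f}")
```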

Honestly, it's the Nova that really kills this rumor's credibility. There's no way AMD doesn't release a $700 Vega. Even if it needs a CLC and a 300W TDP to barely keep up with the 1080 Ti, AMD will do it. Otherwise they are leaving money on the table via unexploited market segmentation.
 
Anyone can make up rumours, and this one sounds like BS. That's an awfully wide range for Vega to cover, from the 1070 to the 1080 Ti or better... It would be nice, but I just don't believe they have an answer for the 1080 Ti. I hope it can beat the 1080 in a wide range of games, not just in Doom under Vulkan.
 
Anyone can make up rumours, and this one sounds like BS. That's an awfully wide range for Vega to cover, from the 1070 to the 1080 Ti or better... It would be nice, but I just don't believe they have an answer for the 1080 Ti. I hope it can beat the 1080 in a wide range of games, not just in Doom under Vulkan.
I mean, looking at performance, a 64-ROP Polaris with higher clocks would be at 1080 performance levels. So unless AMD did nothing but that, they should be close to 1080 Ti performance, IMO.
 