AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

I think the plus signs are adopted (involuntarily) from Intel's marketing slides, which have lately fallen in love with them. Whatever one assumes that means for Intel, it's a starting point for interpreting AMD's slides. So a comparison is not unwarranted, IMO.
 
It wouldn't be the first time GloFo silently developed a process for AMD without ever listing it. Carrizo used such a process. 28nm SHP IIRC.

LPP is designed for operating in the lower frequency range, while going up causes it to spike in power and Fmax is low compared to what they could do with a process specifically designed for HP.

If they pull another Carrizo, there's a process in development without being publicly documented.
 
I think this goes here.


HBM2 costs 175 dollars, 3 times more than GDDR5, but Vega with GDDR5 would consume around 100W more, which would basically make it a nuclear reactor.
I stopped reading when they butchered that cost estimate, with half the bandwidth supposedly costing a third as much. It would have been much simpler to just say HBM2 costs ~50% more for 75% less power in 75% less area. That seems a rather easy choice to make. I believe Raja stated the HBM was the largest component by cost as well.
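To make that sanity check concrete, here's the arithmetic with illustrative figures (the exact dollar and watt numbers below are my own assumptions for the example, not official BOM data):

```python
# Back-of-envelope memory comparison; all figures are illustrative
# assumptions, not official BOM numbers.
gddr5 = {"cost_usd": 60, "power_w": 100}   # hypothetical wide GDDR5 setup
hbm2  = {"cost_usd": 90, "power_w": 25}    # hypothetical 2-stack HBM2 setup

cost_premium = hbm2["cost_usd"] / gddr5["cost_usd"] - 1   # 0.5 -> ~50% more cost
power_saving = 1 - hbm2["power_w"] / gddr5["power_w"]     # 0.75 -> 75% less power

print(f"HBM2: ~{cost_premium:.0%} more cost for ~{power_saving:.0%} less power")
```

Framed that way, the trade-off reads as a premium you pay for efficiency, not a 3x price gulf.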

Wait, so they're using memory that requires far less power than gddr5, but they're still significantly higher total power than the competing Nvidia cards?
With the features that reduce the amount of work required not enabled, that's not all that surprising. That also lets the clocks come down from the insane levels they seem to be at currently.
 
That's why the improved performance with undervolting intrigues me so much, as do all the power saving features that none of the reviewers seem to bring up. They REALLY save some power with very minimal impact on performance.

I'm not being a fanboy; these are just the bits I remember liking and getting excited about at some presentations that I'm just not seeing in the reviews.

Doesn't really matter, I won't be able to get one and if I did my rig couldn't power it. :)
 
That's why the improved performance with undervolting intrigues me so much, as do all the power saving features that none of the reviewers seem to bring up. They REALLY save some power with very minimal impact on performance.

I'm not being a fanboy; these are just the bits I remember liking and getting excited about at some presentations that I'm just not seeing in the reviews.

Doesn't really matter, I won't be able to get one and if I did my rig couldn't power it. :)

I agree about the power saving mode. Makes a huge difference, but it still draws more than the GTX cards. Undervolting is more a "mileage may vary" thing, so I can see why it's not brought up in reviews of standard performance.
 
With the features that reduce the amount of work required not enabled, that's not all that surprising. That also lets the clocks come down from the insane levels they seem to be at currently.
AMD spent (according to themselves) an enormous transistor budget and engineering resources to reach higher clocks than with Polaris. AMD somehow aimed for this "insanity"...
 
Interesting! From what I see there, they only talk in an abstract manner about "the HBM". Neither "gen2" (even though that would make sense given the capacity) nor exactly how many stacks. From what's shown there, and given how much emphasis they put on power-optimized operation, 4-GByte stacks with ~150 GB/s each seem more likely.
 
Vega competes with 1 card using HBM2, 2 cards using GDDR5X and 1 card using GDDR5. Why the fixation on comparing HBM2 to GDDR5?

Then they say 2-stack HBM2 consumes up to 20W in Vega FE and that a 512-bit GDDR5 setup would consume 80-100W. Then they say using GDDR5 would consume 100W more. So first they imagine a GPU memory config that wouldn't make sense to exist, because GDDR5X on a 384-bit bus would be a much more logical choice (GP102 says hi). Then they also continually ignore HBM2's own power consumption just to claim using GDDR5 would consume 100W more (as if HBM2 consumed 0W), when in reality it would be an 80W difference.
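For what it's worth, the delta the article should have quoted falls straight out of its own numbers (taking the 20W and 80-100W figures as given):

```python
# The article's own figures: HBM2 up to 20W, a 512-bit GDDR5 setup 80-100W.
hbm2_w = 20
gddr5_w_low, gddr5_w_high = 80, 100

# An honest comparison subtracts HBM2's own draw instead of treating it as 0W.
delta_low = gddr5_w_low - hbm2_w    # 60W
delta_high = gddr5_w_high - hbm2_w  # 80W
print(f"GDDR5 would cost {delta_low}-{delta_high}W extra, not 100W")
```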

And that 3x price comparison refers to which type of HBM2? To Samsung's stacks (4-Hi or 8-Hi?) or to SK Hynix's stacks? Supposedly Samsung is making the 1.9Gbps 4-Hi stacks for the Vega 64 and SK Hynix is making the 1.6Gbps 4-Hi stacks for the Vega 56 (which is the card actually being compared to the GDDR5 GTX 1070). I'd have a hard time believing Samsung and SK Hynix are charging the same for their differently-performing HBM2 stacks. Even more so with SK Hynix having shared HBM R&D with AMD (meaning there are most probably deals involving IP sharing).
Furthermore, the $175 price difference magically turns into a $200 price difference midway through the article?


It really sounds like Gamers Nexus wanted to exaggerate a certain conclusion before the article was even written.
They claimed they were coming out with an article that displays the total costs for Vega and "how RX Vega isn't making a profit at $500", but this clearly isn't it.



Interesting to see Google's TPU2 achieves 600 GB/s from its two HBM stacks, a fair bit more than Vega.
https://www.servethehome.com/case-study-google-tpu-gddr5-hot-chips-29/
Where does it say it's 2 stacks?
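A quick sanity check supports that question. Assuming the JEDEC HBM2 figures as I remember them (1024-bit interface per stack, up to 2.0 Gbps per pin), two stacks at spec speeds top out below 600 GB/s:

```python
# Peak bandwidth of one HBM2 stack: pin rate (Gbps) * 1024 pins / 8 bits per byte.
def stack_bw_gbs(pin_gbps: float, bus_bits: int = 1024) -> float:
    return pin_gbps * bus_bits / 8

per_stack_max = stack_bw_gbs(2.0)     # 256 GB/s at the spec ceiling
two_stack_max = 2 * per_stack_max     # 512 GB/s, short of the claimed 600 GB/s
stacks_needed = 600 / per_stack_max   # ~2.34, i.e. at least 3 stacks at spec speeds
print(two_stack_max, stacks_needed)
```

So either it's more than two stacks, or the pins run well above the 2.0 Gbps I'm assuming here.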
 
Vega competes with 1 card using HBM2, 2 cards using GDDR5X and 1 card using GDDR5. Why the fixation on comparing HBM2 to GDDR5?
I think the much more pressing problem for AMD might be that, at the moment, Vega is the only consumer part competing for HBM2 allocation against professional SKUs like Vega FE, GP100, GV100 and, come end of year, Google's TPU2 as well as Intel's Crest family starting with Lake Crest. If demand outpaces supply, which is more likely the more demand there is, chances are that prices go up from whatever level they are right now.

And IMHO, that would mean a world of monetary pain for AMD.
 
AMD spent (according to themselves) an enormous transistor budget and engineering resources to reach higher clocks than with Polaris. AMD somehow aimed for this "insanity"...
NVIDIA made the same investment from Kepler to Maxwell. The difference is they had the efficiency to back it up.
 
I think the much more pressing problem for AMD might be that, at the moment, Vega is the only consumer part competing for HBM2 allocation against professional SKUs like Vega FE, GP100, GV100 and, come end of year, Google's TPU2 as well as Intel's Crest family starting with Lake Crest. If demand outpaces supply, which is more likely the more demand there is, chances are that prices go up from whatever level they are right now.

And IMHO, that would mean a world of monetary pain for AMD.

Neither Lake Crest nor TPU2 are high-volume parts, certainly not in the same order of magnitude as Vega 10, which reaches a multitude of markets including gamers and miners.

I also doubt the sales contracts for HBM2 (or any other component) lack price-regulating clauses covering at least a medium term of a year or more. They can probably charge as much as they want for new contracts (e.g. Google and Intel), but not for contract extensions/renewals.
 
Not compared to mid-range graphics cards, true. But in order to discuss this, you probably need to define "high volume". Google's TPUv2, for example, will probably be used in the 100k region, given that Google would use some amounts themselves, rent some out via cloud, and even made 1000 Cloud TPUs (4 TPUv2 chips each, so that's 4k chips already, near Vega's launch quantity) available for free to researchers. Depending on the ongoing success of AI applications, and a lot of companies are betting on this, there could be tremendous amounts of "AI-PUs" needed. Large supercomputer installations already use tens of thousands of accelerators each.

I also doubt the sales contracts for HBM2 (or any other component) lack price-regulating clauses covering at least a medium term of a year or more. They can probably charge as much as they want for new contracts (e.g. Google and Intel), but not for contract extensions/renewals.
Since its inception, HBM has been touted as being useful in power-constrained appliances, be they accelerators or network gear. The vendors must have been deaf, dumb and blind if they really let themselves be locked in via pre-determined prices for commodity hardware, knowing that the professional market would pay through the nose once memory capacities sufficed for their applications. That was not the case with 4-Gb HBM gen1, but very much so with gen2. So maybe you're right, and if so, each consumer of HBM gen2 could count themselves very lucky to have struck such a deal. But if you're right, investors in the memory sector will surely be after the memory vendors' sales people with pitchforks and torches.
 
I think this goes here.


HBM2 costs 175 dollars, 3 times more than GDDR5, but Vega with GDDR5 would consume around 100W more, which would basically make it a nuclear reactor.
So, they're basing it on just David Kanter's estimate?
According to analysts, the 4 stacks of HBM1 on Fiji cost under $50. I seriously doubt 2 stacks of HBM2 would cost $150.
 