AMD Radeon VII Announcement and Discussion

[Yields] are less than 50%, probably less than 40%. So if they can get 60 functional chips and have to pay $12k for a wafer, it is ~$200 per die

Is there anything even remotely based on reality to back that up? You're basically saying that after ~7 months of 7nm mass production, including having more capacity than what Apple needs, yields are in the toilet. If they are, why aren't they cutting more CUs, let alone ROPs and TMUs? They have a 64 CU MI60, a 60 CU MI50, and the Radeon VII.


edit: taken from TSMC's annual report; can't find anything remotely credible for yields, positive or negative.

A very fast yield ramp-up is expected as more than 95% of tools for 7nm FinFET technology are compatible with those for 10nm FinFET technology. Compared to 10nm FinFET technology, 7nm FinFET offers approximately a 25% speed improvement or a 35% power reduction. In addition, 7nm FinFET technology can be optimized for mobile applications and high-performance computing devices.
 
7nm is the most expensive and most complex process so far, and the increase is exponential, not linear. I.e. I think this die costs ~$200 or even more. Add complex packaging, 16GB of HBM, an expensive board (VRM, cooling)... and the fact that maybe 0.5-1% of gamers are willing to pay $500+ for an AMD card, and it seems like ~$700 is more or less the lowest reasonable price.
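Spelling out the quoted back-of-the-envelope math (every input here is either an assumption taken from this thread or an illustrative placeholder, not a known AMD/TSMC figure; the HBM2, packaging and board numbers are pure guesses):

Code:
# Rough sketch of the quoted cost argument. All numbers are assumptions
# from this thread or illustrative placeholders, not real figures.
wafer_cost = 12_000                 # assumed 7nm wafer price (USD), from the quote
good_dies_per_wafer = 60            # assumed good dies per wafer, from the quote

die_cost = wafer_cost / good_dies_per_wafer   # ~$200 per functional die
hbm2_16gb = 300                     # guess: four 4GB HBM2 stacks
packaging = 50                      # guess: interposer + assembly
board_and_cooler = 100              # guess: PCB, VRM, cooling

bill_of_materials = die_cost + hbm2_16gb + packaging + board_and_cooler
print(f"die ~${die_cost:.0f}, rough BOM ~${bill_of_materials:.0f}")
# Add AMD's and the board partners' margins on top of a BOM in this
# ballpark and ~$700 ends up looking like the floor, which is the
# argument the quoted post is making.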

The assumptions around that math seem very misguided.
TSMC has their 7nm process underutilized, which goes hand-in-hand with Apple severely downsizing their iPhone sales estimates for 2019 and rumors/confirmations that the current 7nm process isn't anywhere near TSMC's performance projections.
TSMC isn't charging as much of a premium for 7nm wafers as you apparently believe.

Samsung just announced a 30% drop in quarterly revenue from last year due to lower-than-expected demand for memory chips. This means Samsung isn't charging more for HBM2 or any other form of memory; they're charging less. Those statements about HBM2 demand are 7 months old, and they don't hold true today, after recent news.

Also, less than 40% yields on a 330 mm² chip based on what (see the defect-density sketch after this post)? TSMC kicked off 7nm volume production 9 months ago. They've been making at least 5 different chips since then.


Unless you have close ties with someone at AMD, you have absolutely no idea how much the Radeon VII costs to make, or how much AMD makes from it by selling it at $700.
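For anyone who wants to sanity-check the "<40% on ~330 mm²" claim, here is the standard defect-density back-of-the-envelope. It uses the simple Poisson yield model with made-up defect densities, so it only shows what D0 such a yield would imply, not what TSMC's actual numbers are:

Code:
import math

# Poisson yield model: yield = exp(-A * D0).
# The defect densities below are illustrative values, not TSMC data.
die_area_cm2 = 3.30                    # ~330 mm^2 die, as discussed above

for d0 in (0.1, 0.2, 0.3, 0.5):        # assumed defects per cm^2
    y = math.exp(-die_area_cm2 * d0)
    print(f"D0 = {d0:.1f}/cm^2 -> die yield ~ {y:.0%}")

# A yield below 40% on a ~330 mm^2 die would need a D0 of roughly
# 0.28/cm^2 or worse, which is exactly the kind of number the claim
# above would have to be backed up with.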
 
TSMC isn't charging as much of a premium for 7nm wafers as you apparently believe.

Your implied assumption, then, is that TSMC's 7 nm process is not (significantly) more expensive than 12 nm? That TSMC is offering 7 nm at (heh, just think about the amount of R&D and billions of $ they put into it) a discount?

Which is the cause and which is the effect? Maybe 7 nm is underutilized also because it's too expensive (low yields factor into making it expensive from a customer's point of view as well). Underutilized could mean "cheap", but it's definitely not a requirement.


On the other hand, I'm sure AMD has made their math work too.
If the VII dies are indeed harvested products, then they are free profit anyway, regardless of their cost.
If not, and the margins are low, AMD will order low quantities and mostly get away with it (not that this is an easy thing to do; the point is there are variables they can play with).
 
By best you mean highest? Because there have been way better price/die-size ratios in the past.
And may I remind you that GK104 on the GTX 680 was a fully enabled die?

Yes, highest price for a smaller die. I ended up forgetting the part about the cut-down die. In that case this part is quite unique in that respect, yes.
 
7nm costs 2 to 3 times the cost of 16/14nm. And that's a fact.

[attached cost slides]
 
7nm costs 2 to 3 times the cost of 16/14nm. And that's a fact.

[attached cost slides]
We have discussed these slides before. While they are qualitatively valid, quantitatively they are not.
(It will be interesting to see how EUV actually affects cost. Reduced number of masks, and possibly higher yields vs. throughput limitations, mask yields, resist and pellicle issues.)
 
Or maybe they ended up surprised by the "Meh" RTX series.

The RTX series is anything but meh. In normal rasterization the 2080/Ti is much faster than the 1080/Ti, and the 2060 performs like a 1080. The added features are a bonus to me; not that RT, DLSS, VRS and mesh shading are bad things. It's a whole new architecture as well, with very reasonable performance/watt/heat stats.

The only thing that is meh is the price and the lack of competition from AMD. They should have shown Navi, or a high-end part if that exists.
 
The only thing that is meh is the price and the lack of competition from AMD. They should have shown Navi, or a high-end part if that exists.

Precisely. A product's merit cannot be evaluated independently of its price. However, lack of competition is not the sole factor behind the pricing. RTX die sizes are huge compared to previous generations, so NVIDIA was always going to price them high (when was the last time NVIDIA was cheaper than AMD anyway? Three generations ago, when the GTX 680 was cheaper than the HD 7970?). Had NVIDIA chosen not to invest in RT cores, the die size would have been smaller and/or there would have been more resources for traditional rendering.

So from the point of view of die size and price points vs. performance, the RTX performance is meh, judging by past years' "standards".

Edit - If AMD had heard that the RTX 2080 was a 545 mm² monster, they would surely have expected more performance than it actually delivers.
 
Oh really? And these tests have what connection to utilizing the SSD in the memory pool?
Absolutely nothing to do with games. For large data sets it's for commercial applications like large-format video rendering, AI, etc. Games obviously aren't relevant when we're discussing any possible limitations of 16GB of memory.
 