AMD: Navi Speculation, Rumours and Discussion [2017-2018]

60% of the cost compared to a monolithic die with 10% overhead from using inter-chip IF.

With the cost difference they could even quadruple the IF overhead and it would still be worth it.
Vega 20 might be AMD's last big GPU.

It would indicate that AMD has some work to do on its GMI links, and should consider not making an MCM GPU fully connected like EPYC. Matching fabric bandwidth to memory bandwidth would mean the IF overhead increases by an order of magnitude with the current tech.
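A rough sanity check on that order-of-magnitude claim, using public ballpark figures for EPYC's per-die dual-channel DDR4 and Vega 10's HBM2 (the per-link figure is an assumption, roughly matched to memory bandwidth as on EPYC):

Code:
# Back-of-envelope: how much wider would the inter-die fabric need to be
# if each GPU die's fabric bandwidth had to match its local memory bandwidth?
# Figures below are rough public numbers used only for illustration.

epyc_per_die_mem_bw = 42.7   # GB/s, 2-channel DDR4-2666 per Zeppelin die
epyc_per_link_bw    = 42.7   # GB/s, assumed roughly matched to memory by design
vega_hbm2_bw        = 484.0  # GB/s, Vega 10 with 2 stacks of HBM2

scale_factor = vega_hbm2_bw / epyc_per_die_mem_bw
print(f"Fabric bandwidth would need to grow by ~{scale_factor:.1f}x per die")
# => ~11x, i.e. roughly an order of magnitude more IF bandwidth (and the
#    area/power overhead that comes with it) if the current links were reused.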
 
What would the ideal configuration be?
- multiple mini complete GPUs linked together, with a memory channel for every GPU
or
- a master block holding the non-redundant blocks (like video decode), the memory channels and the management logic, linked to multiple slave NCU blocks
 

The proposals given so far tie memory to the largest direct consumer, which is at a minimum the CU array but usually a mostly complete graphics SoC.

Centralizing links and memory to a single die is going to put a hard limit on how far the system can be scaled. That has a large number of connections running into one spot, and the master die's area and perimeter cap what the overall MCM can implement in terms of the number of chips and bandwidth.
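A small sketch of why that matters as the chip count grows; the two topologies are just the obvious options being discussed here, and the numbers are plain combinatorics:

Code:
# Sketch: number of inter-die links needed as an MCM scales, comparing a
# fully connected mesh (every die talks to every other die directly) with
# a star topology where everything routes through one master die.

def fully_connected_links(n_dies: int) -> int:
    return n_dies * (n_dies - 1) // 2   # one link per die pair

def star_links(n_dies: int) -> int:
    return n_dies - 1                   # one link per slave to the master

for n in (2, 4, 8, 16):
    print(f"{n:2d} dies: mesh={fully_connected_links(n):3d} links, "
          f"star={star_links(n):3d} links (all {n - 1} converging on one die)")
# The mesh spends perimeter on every die; the star concentrates it all on the
# master, whose area and perimeter then cap how many chips and how much
# bandwidth the package can support.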
 
I have a hard time figuring out how 4 or more HBCCs can collaborate, or basically how latencies can be hidden with more than 2 blocks.
It must be driver hell :S

Has anyone done the math to estimate the size of a GPU with 16 NCUs, one memory channel, and all the rest of the supporting stuff (like for EPYC's dies)?
 
I have a hard time figuring out how 4 or more HBCCs can collaborate, or basically how latencies can be hidden with more than 2 blocks.
I'm not sure they would. Each would page as necessary, with locality left to the programmer or driver. As a victim cache it would just pull and duplicate data as needed. The GPU should already mask the latency sufficiently.
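As a conceptual illustration of that "each die pages independently" model (a toy sketch only, not AMD's actual HBCC implementation; the page size and names are made up):

Code:
# Toy model of per-die HBCC behaviour as described above: each die just pages
# in (and duplicates) remote data on demand, with no cross-die coordination.

PAGE_SIZE = 4096  # bytes; HBCC granularity is an assumption here

class DieHBCC:
    def __init__(self, name):
        self.name = name
        self.local_pages = set()   # pages resident in this die's HBM

    def access(self, address):
        page = address // PAGE_SIZE
        if page not in self.local_pages:
            # Miss: pull the page from system memory / another die, keep a copy.
            print(f"{self.name}: paging in page {page}")
            self.local_pages.add(page)
        # Hit or freshly paged in: the wavefront scheduler is assumed to have
        # enough other work in flight to hide the latency either way.

dies = [DieHBCC("die0"), DieHBCC("die1")]
dies[0].access(0x12345)   # miss, pages in
dies[1].access(0x12345)   # independent miss, duplicates the same page
dies[0].access(0x12345)   # hit, no traffic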

Has anyone done the math to estimate the size of a GPU with 16 NCUs, one memory channel, and all the rest of the supporting stuff (like for EPYC's dies)?
I'm thinking a pair of NCUs may make more sense, similar to the 8-core, two-cluster Ryzen design. Otherwise you would double bandwidth and capacity relative to Vega. With no die shots, I doubt there is a basis for estimating size.
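For what it's worth, here is a purely illustrative back-of-envelope, scaling linearly from Vega 10's die; every share and overhead figure below is an assumption, so treat it as a guess rather than an estimate:

Code:
# Very rough area guess for a 16-NCU chiplet, scaled linearly from Vega 10.
# All inputs are assumptions/public ballpark figures; as noted above, without
# die shots there is no solid basis for this.

vega10_die_mm2  = 486.0   # Vega 10 at 14nm
vega10_ncus     = 64
cu_array_share  = 0.55    # assumed fraction of Vega 10 spent on the NCU array
uncore_share    = 0.30    # assumed front end, memory PHY, display, video, etc.
fabric_overhead = 0.10    # assumed extra area for inter-die IF links

ncu_area = vega10_die_mm2 * cu_array_share * (16 / vega10_ncus)
uncore   = vega10_die_mm2 * uncore_share          # kept whole per chiplet
chiplet  = (ncu_area + uncore) * (1 + fabric_overhead)
print(f"~{chiplet:.0f} mm^2 per 16-NCU chiplet at 14nm-class density")  # ~234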
 
I know this refers to Epyc, but these numbers sure make an MCM GPU very appealing.

[EPYC slide: MCM vs. monolithic die cost comparison]


60% of the cost compared to a monolithic die with 10% overhead from using inter-chip IF.

With the cost difference they could even quadruple the IF overhead and it would still be worth it.
Vega 20 might be AMD's last big GPU.
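Taking the slide's numbers at face value, the "quadruple the IF overhead" point checks out if cost scales roughly with total silicon area (a simplification):

Code:
# Sanity check on the quoted EPYC-style cost numbers: MCM at ~60% of the cost
# of an equivalent monolithic die, with ~10% die area spent on inter-chip IF.
# Assumes cost scales roughly with total silicon area; purely illustrative.

monolithic_cost = 1.00
mcm_cost        = 0.60
if_overhead     = 0.10

def mcm_cost_with_overhead(overhead):
    # Scale the MCM cost by how much extra silicon the fatter links would need.
    return mcm_cost * (1 + overhead) / (1 + if_overhead)

for ovh in (0.10, 0.20, 0.40):
    print(f"IF overhead {ovh:.0%}: MCM at ~{mcm_cost_with_overhead(ovh):.2f}x "
          f"of monolithic cost")
# Even at 40% overhead (4x the quoted figure) the MCM lands around 0.76x,
# still comfortably cheaper than the monolithic die.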

If Vega 20 is built on 7nm as the rumors suggest, it's hardly big.
 
If Vega 20 is built on 7nm as the rumors suggest, it's hardly big.
With a 1:2 DP rate, twice the memory channels and maybe some more low-hanging fruit over Vega 10, it might be over 300mm^2 even at 7nm. With Navi being a modular multi-chip GPU, I think each die would be in the 200-250mm^2 range. That's the die size AMD is using for Zen and Polaris, for example.
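A quick sketch of where that "over 300mm^2" figure could come from; the 7nm scaling factor and the area added for the extra HBM PHYs and 1:2 DP are assumptions:

Code:
# Back-of-envelope for the ">300 mm^2 even at 7nm" claim. The scaling factor
# and the added-block estimates are assumptions, not measured values.

vega10_mm2       = 486.0   # Vega 10 at GloFo 14nm
area_scaling_7nm = 0.55    # assumed ~0.5-0.6x area at 7nm for GPU logic + SRAM
extra_hbm_phys   = 30.0    # mm^2, assumed cost of doubling to 4 HBM2 PHYs
extra_dp_logic   = 40.0    # mm^2, assumed cost of 1:2 DP rate in the NCUs

vega20_estimate = vega10_mm2 * area_scaling_7nm + extra_hbm_phys + extra_dp_logic
print(f"Vega 20 ballpark: ~{vega20_estimate:.0f} mm^2")   # ~337 mm^2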
 
If Vega 20 is built on 7nm as the rumors suggest, it's hardly big.

If Vega 20 is built on 7nm, it's not coming out any time soon either; however, somebody correct me if I am wrong, but every slide mentioning 7nm does so in conjunction with Navi. What is the source of the Vega 20 7nm rumor?
 
According to these leaks, which have been pretty accurate so far, Vega 20 is coming next year at 7nm:

[Leaked roadmap slide: Vega 20 on 7nm]


The roadmap doesn't show it as a gaming card, though. It's a direct replacement for Hawaii for HPC DP compute. The 4 stacks might be there mostly to reach the 32GB of total HBC, which seems to be pretty relevant for DP compute.
 

I would have expected there to have been a heck of a lot more news regarding 7nm progress if it is going to be used for mass production of 15B-transistor chips less than a year from now. In fact, Vega 20 should be taping out right about now to hit a 2H18 schedule.
 
I would have expected there to have been a heck of a lot more news regarding 7nm progress if it is going to be used for mass production of 15B-transistor chips less than a year from now. In fact, Vega 20 should be taping out right about now to hit a 2H18 schedule.
Well, GloFo does claim risk production in H1/18 ramping to mass production in H2/18.
 
The roadmap doesn't show it as a gaming card, though.

Unless GF's 7nm process turns out to be a complete failure with very little improvement in performance or power, it's definitely a gaming card. AMD just cannot afford to have something that's maybe 20~50% better than Vega and not bring it to market as a gaming product.
 