But then there's the NAND production crash from the 13-minute power outage at the WD/Toshiba fabs, which account for around 25% of the market. So prices could rise from that.
It's a shame all these memory manufacturers keep having mishaps. Fortunately, they only seem to occur when the market is oversupplied, so the net effect is just higher prices.
A return to standard manufacturing rates is not expected until mid-July.
The damage includes wafers that were in process, the facilities themselves, and production equipment, hence the need for an extended shutdown to properly assess the damage and the required reinvestment. About 35% of the world's NAND supply is produced at the Yokkaichi Operations campus (which includes six factories and an R&D center), so this outage and the lost NAND flash are likely to affect global markets.
This isn't going to affect anybody who has a contract for NAND production for mass-market products; it will merely limit the amount of NAND available outside contracted bulk production, which will likely drive consumer prices up a little for a few months if stock runs low.
I wonder what they categorize as "Large Die". 300 mm² and up, like what's likely for monolithic console chips, or ridiculous dies like Knights Landing (682 mm²) and Nvidia's TU102 (754 mm²)?
TSMC kicked off high-volume manufacturing (HVM) of 7nm in April 2018, so the large-die graph covers roughly Q3 2018-Q1 2019. You wouldn't expect a next-gen chip to be in high-volume production during that window.
- Sony is further ahead with their console
- We know for a fact that they sent dev kits in January, but there are also rumors of dev kits being sent in 2018
- Dev kits sent in Jan '19, or half a year before that, would not have an SoC containing Navi
- January (let alone '18) was too early for 7nm and too early for production silicon
- The January SDK contains a Vega 56 GPU running at 1.8GHz, equaling 12.9TF
- The Gonzalo chip, spotted back in January running at 1.1GHz and then four months later at 1.8GHz, is the Navi GPU found in the retail PS5 unit
- The PS5 GPU will be around 8.5-9.5TF plus RT hardware, instead of the dev kits' Vega 56
This is, IMO (knowing the TDP and die sizes of Navi, and when chips tape out), the only possible explanation.
Who cares what was in the early dev kit? We know that at the beginning of the year there was no Navi, and we know Navi has a palpable perf/TFLOP advantage, therefore they had to put some big Vega inside the dev kits to emulate Navi itself.
Here are my theoretical performance estimates based on 7nm DUV & EUV designs.
On 7nm DUV (9-10TF conservative estimate)
60CUs total - 54CU enabled - 320 bit bus - 3 Shader Engines -> 404mm2
But following design rules that prioritize density over frequency (note the empty spaces on the 5700 die shots), it should come closer to 390mm2, then shrink to 331mm2 when 6nm becomes available in 2021-2022.
75mm2 for CPU
81.6mm2 for 320 bit bus (20GB)
69.99mm2 for 3SEs (ROPs, cache, etc.)
IO 20.72mm2
10.85mm2 GPU CC (CP, ACEs etc.)
146.7mm2 30DCUs with RT (60CUs)
Total:
404.86mm2
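As a sanity check on the arithmetic, the per-component figures above can be summed in a quick script (all values are this post's own estimates, not official numbers):

```python
# Sum the 7nm DUV die-area components quoted above (all in mm^2).
components = {
    "CPU": 75.0,
    "320-bit bus (20GB)": 81.6,
    "3 SEs (ROPs, cache, etc.)": 69.99,
    "IO": 20.72,
    "GPU CC (CP, ACEs, etc.)": 10.85,
    "30 DCUs with RT (60 CUs)": 146.7,
}
total = sum(components.values())
print(f"Total: {total:.2f} mm^2")  # Total: 404.86 mm^2
```

The components do add up to the 404.86mm2 total.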
54CUs @ 1400MHz = 9.67TF (lowball)
54CUs @ 1500MHz = 10.36TF (very likely)
54CUs @ 1550MHz = 10.7TF
54CUs @ 1592MHz = 11TF (best case scenario)
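These TF numbers follow from the usual FP32 formula (CUs x 64 shaders per CU x 2 ops per clock for FMA x clock); a quick sketch to reproduce them, using the speculative clock points above (note the figures in the list are truncated rather than rounded):

```python
# FP32 throughput in TF: CUs * 64 shaders/CU * 2 ops/clock (FMA) * clock.
def tflops(cus: int, mhz: float) -> float:
    return cus * 64 * 2 * mhz * 1e6 / 1e12

for mhz in (1400, 1500, 1550, 1592):
    print(f"54CUs @ {mhz}MHz = {tflops(54, mhz):.2f}TF")
```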
On 7nm EUV (11-12TF conservative estimate)
66CUs total - 60CU enabled - 384bit bus - 3 Shader Engines -> 348.68mm2 (335mm2 with a 320 bit bus)
75mm2 for CPU
97.92mm2 for 384 bit bus (24GB)
69.99mm2 for 3SEs (ROPs, cache, etc.)
IO 20.72mm2
10.85mm2 GPU CC (CP, ACEs etc.)
161.37mm2 33DCUs with RT (66CUs)
Total:
435.85mm2 7nm DUV
348.68mm2 7nm EUV
Why 3 Shader Engines?
GCN had a limit of 16 CUs per Shader Engine, with 4 Shader Engines used for 64 CUs.
RDNA uses 10 Double CUs per Shader Engine, the equivalent of 20 GCN CUs per SE.
As we can see, the 16 limit is beaten if we count RDNA CUs, but not if we compare GCN CUs against RDNA Double CUs.
If RDNA simply created a new CU that is a doubled GCN CU, that would mean we can have 16 Double CUs per Shader Engine, or 32 GCN CUs per Shader Engine. In that case, 2 Shader Engines would hold 64 GCN CUs, allowing RDNA to reach 128 GCN CUs while keeping the same efficiency GCN had at 64 CUs.
And in this case you would only need to disable 2 Double CUs, or 4 GCN CUs. This would allow 56 CUs.
At 1800 MHz, this would come to 12.9 TFLOPS.
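That 12.9 TFLOPS figure checks out with the standard FP32 formula (a quick sketch; the 56 CUs and 1800 MHz are the numbers from the reasoning above):

```python
# 56 CUs at 1800 MHz, with 64 shaders/CU and 2 FP32 ops/clock (FMA).
cus, mhz = 56, 1800
tf = cus * 64 * 2 * mhz * 1e6 / 1e12
print(f"{tf:.1f} TFLOPS")  # 12.9 TFLOPS
```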