China Debuts First 7nm Data Center GPU

China Debuts First 7nm Data Center GPU To Rival Nvidia, AMD | Tom's Hardware
January 18, 2021
Shanghai Tianshu Zhixin Semiconductor Co., Ltd., commonly known as Tianshu Zhixin, announced that its Big Island (BI) GPGPU has come to life.
...
The BI packs 24 billion transistors and is based on a home-grown GPU architecture. The chip is built on a cutting-edge 7nm process node with 2.5D CoWoS (chip-on-wafer-on-substrate) packaging. Tianshu Zhixin doesn't explicitly reveal the foundry responsible for producing the BI, but the description of the node matches one of TSMC's manufacturing processes.
...
Tianshu Zhixin is staying tight-lipped about the BI's performance, but the company has teased FP16 throughput of up to 147 TFLOPS. For comparison, the Nvidia A100 and AMD Instinct MI100 deliver FP16 figures of up to 77.97 TFLOPS and 184.6 TFLOPS, respectively. Note, however, that Nvidia's A100 also has Tensor cores that reach 312 FP16 TFLOPS (624 TFLOPS with sparsity).
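
For a sense of where those peak numbers come from, here is a minimal Python sketch that reconstructs the Nvidia and AMD figures from their published unit counts and boost clocks (peak FLOPS = units × ops per unit per clock × clock). Big Island's clocks and unit counts haven't been disclosed, so its 147 TFLOPS can't be derived the same way; the constants below are the publicly listed A100/MI100 specs, and the per-unit rates are taken from the vendors' datasheets.

```python
# Sketch: reconstructing peak FP16 figures from published specs.
# Peak TFLOPS = units * FLOPs per unit per clock * clock (GHz) / 1000.

def peak_tflops(units: int, flops_per_unit_per_clock: int, clock_ghz: float) -> float:
    return units * flops_per_unit_per_clock * clock_ghz / 1000

# Nvidia A100 (SXM): 108 SMs x 64 CUDA cores, ~1.41 GHz boost.
# Non-tensor FP16 runs at 4x the FP32 FMA rate; each FMA counts as 2 FLOPs.
a100_fp16 = peak_tflops(108 * 64, 4 * 2, 1.41)      # ~77.97 TFLOPS

# Each A100 SM has 4 third-gen Tensor cores, 256 FP16 FMAs per clock apiece.
a100_tensor = peak_tflops(108 * 4, 256 * 2, 1.41)   # ~311.9 TFLOPS

# AMD MI100: 120 CUs x 64 stream processors, ~1.502 GHz boost.
# Matrix-core FP16 runs at 8x the FP32 FMA rate.
mi100_fp16 = peak_tflops(120 * 64, 8 * 2, 1.502)    # ~184.6 TFLOPS

print(f"A100 FP16 (non-tensor): {a100_fp16:.2f} TFLOPS")
print(f"A100 FP16 (tensor):     {a100_tensor:.1f} TFLOPS")
print(f"MI100 FP16 (matrix):    {mi100_fp16:.1f} TFLOPS")
```

Running it reproduces the article's 77.97, 312, and 184.6 TFLOPS figures, and it shows why the comparison is apples-to-oranges: the BI's 147 TFLOPS lands between the A100's non-tensor and Tensor-core rates.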
 
Shanghai Tianshu Intellectual Semiconductor Co. (Tianshu Zhixin) announced Wednesday that it's nearing "mass production and commercial delivery" of Big Island, China's first domestically produced 7nm general-purpose GPU (GPGPU).

Tianshu Zhixin said in January that BI was made using an unidentified 7nm process node and 2.5D chip-on-wafer-on-substrate (CoWoS) packaging. On Wednesday, it confirmed our suspicion that BI was made using TSMC's 7nm FinFET process.

https://www.tomshardware.com/news/china-first-7nm-gpu-heads-to-production
 
Unless China forces these to be used internally, I doubt they will be able to compete with NVIDIA or Cerebras.

Data center GPUs are only a small part of the GPU market, outside of short-term swings in Ethereum mining demand that Bitmain can't respond to in time, and those swings are risky to stake a GPU company on. It will be hard to compete with NVIDIA and AMD, which can recoup most of their one-time costs across larger markets and have the most developer buy-in. Cerebras at least has some massive architectural advantages for large MLPs.
 