Cerebras Completes Series F Funding, Another $250M for $4B Valuation

Every now and then a startup comes up with something quite out of the ordinary. In this generation of AI hardware, Cerebras holds that title with its Wafer Scale Engine. The second-generation product, built on TSMC 7nm, is a full wafer packed with cores, memory, and performance. Using proprietary manufacturing and packaging techniques, a Cerebras CS-2 features a single chip, bigger than your head, with 2.6 trillion transistors. The cost of a CS-2, with adequate cooling, power, and connectivity, is ‘a few million’, they tell us, and Cerebras has customers in research, oil and gas, pharmaceuticals, and defense, all after the unique proposition that a wafer-scale AI engine provides. Today’s news is that Cerebras is still in full start-up mode, closing out a Series F funding round.

The new Series F funding round gives the company another $250 million in equity, bringing the total raised through venture capital to $720 million. Speaking with Cerebras ahead of this announcement, we were told that this $250 million was effectively for 6% of the company, putting Cerebras’ valuation at $4 billion. Compared to Cerebras’ last funding round, a Series E in 2019 that valued the company at $2.4 billion, that works out to roughly an additional $800 million in value per year. This round was led by Alpha Wave Ventures, a partnership between Falcon Edge and Chimera, which joins other Cerebras investors such as Altimeter, Benchmark, Coatue, Eclipse, Moore, and VY.
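For those following the numbers, here is a minimal back-of-the-envelope sketch of the arithmetic behind the quoted figures (the $250M-for-roughly-6% stake and the valuation gain since the 2019 Series E). The inputs are the figures stated above; the variable names and the two-year averaging are our own framing, not Cerebras’.

```python
# Back-of-the-envelope check of the quoted figures; inputs are the
# article's numbers, variable names are illustrative only.
series_f = 250e6              # new equity raised in the Series F
valuation = 4.0e9             # reported post-round valuation
stake = series_f / valuation  # implied stake sold
print(f"Implied stake: {stake:.2%}")  # ~6.25%, i.e. 'roughly 6%'

series_e_valuation = 2.4e9    # Series E valuation, 2019
years = 2                     # 2019 -> 2021
gain_per_year = (valuation - series_e_valuation) / years
print(f"Added value per year: ${gain_per_year / 1e9:.1f}B")  # ~$0.8B/year
```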

Cerebras explained to us that it is best to raise a round of funding before you actually need it – the company says the next 2-3 years are already funded and planned, and this additional round provides headroom on top of that, allowing the company to grow as needed. This covers not only the next generations of wafer scale (apparently a 5nm tape-out costs around $20 million), but also the new memory scale-out systems that Cerebras announced earlier this year. Cerebras currently has around 400 employees across four sites (Sunnyvale, Toronto, Tokyo, San Diego) and is looking to expand to 600 by the end of 2022, with a heavy focus on engineering and full-stack development.

Cerebras Wafer Scale

| AnandTech | Wafer Scale Engine Gen1 | Wafer Scale Engine Gen2 | Increase |
|---|---|---|---|
| AI Cores | 400,000 | 850,000 | 2.13x |
| Manufacturing | TSMC 16nm | TSMC 7nm | - |
| Launch Date | August 2019 | Q3 2021 | - |
| Die Size | 46,225 mm² | 46,225 mm² | - |
| Transistors | 1.2 trillion | 2.6 trillion | 2.17x |
| (Density) | 25.96 MTr/mm² | 56.25 MTr/mm² | 2.17x |
| On-board SRAM | 18 GB | 40 GB | 2.22x |
| Memory Bandwidth | 9 PB/s | 20 PB/s | 2.22x |
| Fabric Bandwidth | 100 Pb/s | 220 Pb/s | 2.2x |
| Cost | $2 million+ | arm + leg | |
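As a quick sanity check on the table, here is a short sketch (ours, not Cerebras’) that recomputes the density and scaling figures from the published transistor counts and the unchanged die area:

```python
# Sanity check of the Gen1 -> Gen2 ratios in the table above, using the
# published transistor counts and the (unchanged) 46,225 mm^2 die area.
DIE_AREA_MM2 = 46225

gen1_tr = 1.2e12   # Gen1: 1.2 trillion transistors
gen2_tr = 2.6e12   # Gen2: 2.6 trillion transistors

d1 = gen1_tr / DIE_AREA_MM2 / 1e6   # density in MTr/mm^2
d2 = gen2_tr / DIE_AREA_MM2 / 1e6
print(f"Gen1 density: {d1:.2f} MTr/mm^2")   # ~25.96
print(f"Gen2 density: {d2:.2f} MTr/mm^2")   # ~56.25
print(f"Transistor (and density) scaling: {gen2_tr / gen1_tr:.2f}x")  # ~2.17x
print(f"SRAM scaling:      {40 / 18:.2f}x")    # 18 GB -> 40 GB, ~2.22x
print(f"Fabric BW scaling: {220 / 100:.1f}x")  # 100 Pb/s -> 220 Pb/s, 2.2x
```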

To date, Cerebras customers have been, in the company’s own words, from markets that have traditionally understood HPC and are exploring the boundary between HPC and AI. This means traditional supercomputing sites, such as Argonne, Lawrence Livermore, and PSC, but also commercial companies that have traditionally relied on heavy compute, such as pharmaceuticals (AstraZeneca, GSK), medical, and oil and gas. Part of Cerebras’ roadmap is to expand beyond ‘traditional’ HPC customers and bring the technology to other areas, such as the cloud: Cirrascale recently announced a cloud offering based on the latest CS-2.


Upcoming is the annual Supercomputing conference, where more clients and deployments are likely to be announced.
