Wafer-scale AI chip maker Cerebras raises $250 million in new round of funding

25-11-2021 | By Robin Mitchell

Cerebras, an AI chip company, has recently raised $250 million in a new round of funding, putting its total valuation at $4 billion. How does Cerebras take advantage of wafer-scale chips to improve AI operations, what is its latest server product, and how will the funding help drive AI research?


What is wafer-scale technology?


Wafer-scale technology is a microchip technology that uses an entire wafer for a single device, unlike traditional devices, which only use a small portion of a wafer. Using the entire wafer allows wafer-scale devices to be far more powerful than traditional devices, with transistor counts in the trillions instead of the billions.

However, wafer-scale devices are never built as a single monolithic processor; instead, they are designed as many-core processors with shared resources. This is because wafers inevitably contain point defects that would render a monolithic design unusable, whereas splitting the design into many identical parallel units means that only the defective units need to be disabled and ignored, while the rest of the device continues to operate.
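To see why this redundancy matters, the short sketch below uses a simple Poisson yield model, a common first-order approximation rather than Cerebras's actual methodology, to compare one monolithic wafer-sized die against the same silicon area split into many small tiles that can be individually disabled. The wafer area, defect density, and tile count are illustrative assumptions only.

```python
import math

# Illustrative assumptions, not Cerebras figures:
WAFER_AREA_CM2 = 460.0   # roughly the usable square of a 300 mm wafer
DEFECT_DENSITY = 0.1     # defects per cm^2, a plausible order of magnitude
NUM_TILES = 10_000       # the same area split into small redundant tiles

def poisson_yield(area_cm2: float, d0: float) -> float:
    """Probability that a region of the given area contains zero defects."""
    return math.exp(-d0 * area_cm2)

# Case 1: one monolithic wafer-sized device -- a single defect kills it.
monolithic_yield = poisson_yield(WAFER_AREA_CM2, DEFECT_DENSITY)

# Case 2: many small tiles -- defective tiles are simply switched off,
# and the remaining tiles keep the device fully functional.
tile_yield = poisson_yield(WAFER_AREA_CM2 / NUM_TILES, DEFECT_DENSITY)
expected_good_tiles = NUM_TILES * tile_yield

print(f"Monolithic device yield: {monolithic_yield:.2e}")  # effectively zero
print(f"Expected good tiles:     {expected_good_tiles:.0f} / {NUM_TILES}")
```

Even with a modest defect density, a monolithic wafer-sized die would almost never yield, whereas the tiled design keeps well over 99% of its compute, which is precisely why wafer-scale devices are built from many small, disableable units.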

Cerebras, a company that develops AI hardware, was featured in Electropages last year when it announced its first wafer-scale AI chip, the Cerebras WSE. The device contained 1.2 trillion transistors, compared to the largest GPU of the time, which had 21.1 billion transistors. The wafer-scale device was stated to be faster than 10,000 GPUs and 200 times faster than the Joule supercomputer, while standing only 26 inches tall when installed in a computer system.

Cerebras devices achieve such speeds because the hundreds of thousands of compute cores and their local memory are directly linked to one another on the same piece of silicon. This reduces the overall length of the conductors that signals must travel along while simultaneously making them more immune to interference. Essentially, integrating the SRAM, cores, and data channels on a single die allows for one of the fastest possible designs.


Cerebras gains $250 million in new funding for next-generation devices


Recently, Cerebras announced that it has raised $250 million in a round of funding, which now puts the company's value at $4 billion. The funding round comes after the development of a new wafer-scale device, along with a complete supporting computer built to make use of the chip.

The newest wafer-scale device, called the WSE-2, increases the total number of transistors from 1.2 trillion to 2.6 trillion. The device is built on TSMC's 7 nm process and integrates 850,000 cores, 40 GB of high-speed on-chip SRAM, and an internal fabric bandwidth of 220 Pb/s.

The WSE-2 has also been integrated into a fully functional computing system called the CS-2, which fits into a single server rack and could potentially replace many hundreds of high-end GPUs used in AI tasks. Furthermore, the CS-2 uses standard 100 Gb Ethernet links and communicates over TCP/IP, meaning that it can be easily integrated into any server environment. However, the device is not available for physical purchase by customers; instead, it will be offered via cloud services to allow for testing, algorithm refinement, and practical applications.
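Because the system speaks standard TCP/IP over Ethernet, moving data to it requires nothing more exotic than ordinary networking code. The sketch below is a generic illustration only: the host name, port, and length-prefixed framing are hypothetical placeholders, not Cerebras's actual data-ingest interface, which is handled by the company's own software stack.

```python
import socket
import struct

# Hypothetical endpoint -- a real deployment would use the vendor's own
# software stack and addressing rather than a hand-rolled socket protocol.
APPLIANCE_HOST = "cs2.example.internal"
APPLIANCE_PORT = 9000

def send_batch(samples: bytes) -> None:
    """Stream one length-prefixed batch of data over plain TCP/IP."""
    with socket.create_connection((APPLIANCE_HOST, APPLIANCE_PORT)) as sock:
        # Simple framing: 8-byte big-endian payload length, then the payload.
        sock.sendall(struct.pack(">Q", len(samples)))
        sock.sendall(samples)

if __name__ == "__main__":
    send_batch(bytes(1024))  # placeholder payload
```

The point is simply that any server room already moving data over 100 Gb Ethernet can, in principle, accommodate such a system without exotic interconnect hardware.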



How will the funding help drive AI research?


While Cerebras is certainly an industry leader when it comes to AI hardware, its work on wafer-scale devices is arguably more interesting with regard to the future of computing. Wafer-scale devices are fundamentally different from standard silicon devices in that they pack enormous computational capability into an extremely small space. In the case of the CS-2, a system with the capability of roughly 100 top-end GPUs is essentially shrunk into half a rack with lower power requirements.

Developments made by Cerebras and TSMC could see wafer-scale devices become more mainstream in server environments in general. Wafer-scale devices are not limited to AI applications either; their high core counts would make them ideal for any compute-intensive task. The wafer-scale format also allows for large amounts of high-speed memory, such as SRAM, which would further improve the performance of any computing system.

If wafer-scale devices prove to be highly efficient, one could then ask the question: will wafer-scale devices become the norm in households? While some would be quick to say that households would never use such devices because they do not require the processing power, and that wafer-scale devices would be too expensive, it should be remembered that households already spend huge amounts on property and on luxuries such as vehicles. In a future where virtual reality will most likely become the norm, households could potentially invest more money in computing systems than they currently do in vehicles. Computers built on wafer-scale technology that cost tens of thousands of pounds would provide massive performance, and such machines could remain in operation for decades.


By Robin Mitchell

Robin Mitchell is an electronic engineer who has been involved in electronics since the age of 13. After completing a BEng at the University of Warwick, Robin moved into the field of online content creation, developing articles, news pieces, and projects aimed at professionals and makers alike. Currently, Robin runs a small electronics business, MitchElectronics, which produces educational kits and resources.