New supercomputer contains 16 chips the size of a dinner plate

Cerebras Systems has announced Andromeda AI, a supercomputer with 13.5 million compute cores built for artificial-intelligence and deep-learning workloads. According to the company, Andromeda delivers 1 exaflop (a quintillion operations per second) at half precision (16-bit), the numeric format widely used for training artificial neural networks.
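The "half precision" in that exaflop figure is the IEEE 754 binary16 (FP16) format: 16 bits per number instead of the 32 or 64 used by single and double precision, which is why FP16 throughput numbers are so much higher. A minimal stdlib-only sketch of what that reduced precision looks like in practice:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision (binary16)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# FP16 has only a 10-bit mantissa, so values near 1.0 are spaced
# about 0.001 apart; finer detail is simply rounded away:
print(to_fp16(1.0001))        # rounds back down to 1.0
print(to_fp16(1.001))         # lands on the nearest representable value

# Each FP16 value occupies 2 bytes, versus 8 for a Python double:
print(struct.calcsize('<e'))  # 2
```

Neural-network training tolerates this coarseness well, which is why hardware vendors quote FP16 (rather than FP64) performance for AI systems.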

Andromeda AI is a cluster of 16 interconnected Cerebras CS-2 systems. At the heart of each CS-2 is the WSE-2 (Wafer Scale Engine 2), a square die measuring 21.59 centimeters (8.5 inches) on a side — currently the world's largest silicon chip. It packs 2.6 trillion transistors organized into 850,000 compute cores.
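The cluster-level figures follow directly from the per-chip specs: 16 WSE-2 chips at 850,000 cores each give the roughly 13.5 million cores quoted above. A quick check of the arithmetic:

```python
# Aggregate Andromeda's totals from the per-WSE-2 figures given above.
CS2_SYSTEMS = 16
CORES_PER_WSE2 = 850_000
TRANSISTORS_PER_WSE2 = 2.6e12  # 2.6 trillion

total_cores = CS2_SYSTEMS * CORES_PER_WSE2
total_transistors = CS2_SYSTEMS * TRANSISTORS_PER_WSE2

print(f"{total_cores:,} cores")                # 13,600,000 (~13.5 million as quoted)
print(f"{total_transistors:.2e} transistors")  # 4.16e+13
```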

The Andromeda AI supercomputer was built in a data center in Santa Clara, California, at a cost of $25 million. "Thanks to its massive parallelism, Andromeda is an ideal computing system for large GPT-class language models," Cerebras representatives write. "The supercomputer works effectively with the GPT-3, GPT-J, and GPT-NeoX models and is already being used for both academic and commercial workloads."

According to Cerebras Systems, adding CS-2 systems to the cluster reduces neural-network training time in "perfect inverse proportion": doubling the number of systems halves the training time. And in its current configuration, Andromeda AI can also run workloads originally built with GPUs in mind.
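That "perfect inverse proportion" claim is just ideal linear scaling: training time is inversely proportional to the number of CS-2 systems. A hypothetical sketch (the single-system baseline of 96 hours is an illustrative assumption, not a Cerebras figure):

```python
def training_time(base_hours: float, n_systems: int) -> float:
    """Ideal linear scaling: time is inversely proportional to system count.

    `base_hours` is a hypothetical training time on a single CS-2.
    """
    return base_hours / n_systems

BASE = 96.0  # assumed single-CS-2 training time, for illustration only
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} x CS-2 -> {training_time(BASE, n):5.1f} h")
```

Real clusters rarely achieve this ideal because inter-node communication adds overhead; Cerebras's claim is that Andromeda's interconnect keeps that overhead negligible.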

The capabilities of Andromeda AI were demonstrated on GPT-J training runs with 2.5 and 25 billion parameters at a maximum sequence length (MSL) of 10,240 tokens. According to Cerebras, the same workload "on a Polaris system with 2,000 compute nodes based on Nvidia A100 GPUs failed due to limited GPU memory and memory bandwidth."

Of course, the Andromeda AI supercomputer doesn't look like much next to the fastest traditional supercomputers. The most powerful of them, Frontier at Oak Ridge National Laboratory, delivers 1.103 exaflops at double (64-bit) precision. However, Frontier cost more than 20 times as much to build, at $600 million, occupies a far larger footprint, and consumes significantly more energy than Andromeda AI.

Interested organizations can now get remote access to the Andromeda AI system. Its users already include specialists from the University of Cambridge, Argonne National Laboratory, JasperAI, and other organizations working on artificial intelligence.