Elon Musk has revealed plans to build a supercomputer dubbed the "Gigafactory of Compute" to power his AI chatbot Grok. According to sources cited by The Information, the supercomputer is expected to come online in the fall of 2025, and xAI may partner with Oracle, spending up to 1 billion USD during development.

The "Gigafactory of Compute" supercomputer will use H100 graphics processing units (GPUs), Nvidia's flagship product, and is designed to be more compact than today's largest GPU clusters. The H100 is currently the leading GPU for AI data centers but remains scarce due to high demand.

xAI, the company Musk founded in July 2023, is currently Oracle's largest customer for the H100, running more than 15,000 of the Nvidia-made AI chips. Tesla, another of Musk's companies, also uses Nvidia-supplied supercomputers in its electric-vehicle production.

The plan underscores Musk's ambition to build an advanced AI chatbot that can compete with rivals such as Microsoft-backed OpenAI and Alphabet's Google. Musk himself is a co-founder of OpenAI.

Musk has said that training the Grok 2 model took about 20,000 Nvidia H100 GPUs, and he expects Grok 3 and later models to require as many as 100,000 H100 chips.

A supercomputer of this scale would let xAI process vast amounts of data and train more complex AI models, improving the performance and capabilities of the Grok chatbot.

Although neither xAI nor Oracle has officially commented on the report, Musk's plan to build a powerful supercomputer for Grok has drawn the attention of the technology world.

If realized, the "Gigafactory of Compute" could mark a major step forward in the global AI race, positioning Grok as a formidable competitor among AI chatbots.