Billionaire Elon Musk announced that his AI company xAI has launched Colossus, a training cluster built on 100,000 Nvidia H100 GPUs, according to Cointelegraph. Musk said the team brought the system online in 122 days and plans to double it to 200,000 GPUs (including 50,000 H200s) in the coming months.

Colossus is described as the most powerful AI training system currently available, judged by GPU count. For comparison, OpenAI's strongest model reportedly used 80,000 GPUs. The cluster was built in collaboration with Nvidia, which congratulated the team on the launch.

Nvidia's H200 has 141 GB of HBM3E memory and 4.8 TB/s of memory bandwidth. The latest Blackwell chip offers 36.2% more capacity and 66.7% more bandwidth.
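The percentage figures above can be sanity-checked against the raw specs. A minimal sketch, assuming the Blackwell chip in question is the B200 with 192 GB of HBM3E and 8.0 TB/s of bandwidth (figures consistent with the article's percentages):

```python
def pct_increase(old: float, new: float) -> float:
    """Return the percentage increase from old to new, rounded to one decimal."""
    return round((new - old) / old * 100, 1)

# H200 specs from the article; B200 specs are assumed values.
h200_capacity_gb, b200_capacity_gb = 141, 192
h200_bw_tbs, b200_bw_tbs = 4.8, 8.0

print(pct_increase(h200_capacity_gb, b200_capacity_gb))  # capacity gain
print(pct_increase(h200_bw_tbs, b200_bw_tbs))            # bandwidth gain
```

Running this reproduces the article's 36.2% capacity and 66.7% bandwidth figures.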

The industry has reacted enthusiastically to the achievement, with ARK Invest CEO Cathie Wood calling it an “impressive” accomplishment and heralding “big announcements” to come.

Musk plans to launch another supercomputer in the fall of 2025 in partnership with Oracle. He has also voiced support for AI safety regulation along the lines of the rules that govern other products and technologies posing potential risks to the public.