Tech giants including Intel, Google, Microsoft, and Meta are collaborating on an industry standard for connecting AI chips in data centers, aiming to reduce dependence on Nvidia.

On May 15, the UALink Promoter Group (UALink) officially launched with the participation of AMD, Hewlett Packard Enterprise, Broadcom, Cisco, and other major players, with Arm and Nvidia notably absent. Their goal is to develop an open standard for connecting AI accelerator chips in data centers and break the industry's dependence on Nvidia's proprietary technology.

The first version, UALink 1.0, will connect up to 1,024 GPUs – the most popular type of AI accelerator chip – in a computer “pod” (defined as one or more server racks). Built on open standards like AMD's Infinity Fabric, UALink 1.0 enables direct data transfer between the memory of AI accelerator chips, thereby increasing processing speed and reducing data transfer latency compared to other existing connection standards.
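To make the pod limit concrete, here is a minimal Python sketch. Only the 1,024-accelerator ceiling comes from the article; the class name, the rack/GPU split, and the validation logic are illustrative assumptions, not anything from the actual UALink specification:

```python
# Illustrative sketch only: everything except the 1,024-accelerator pod
# limit stated for UALink 1.0 is an assumption for demonstration.
UALINK_1_0_MAX_ACCELERATORS = 1024


class Pod:
    """A 'pod': one or more server racks of directly connected accelerators."""

    def __init__(self, accelerators_per_rack: int, racks: int):
        total = accelerators_per_rack * racks
        if total > UALINK_1_0_MAX_ACCELERATORS:
            raise ValueError(
                f"UALink 1.0 pods top out at {UALINK_1_0_MAX_ACCELERATORS} "
                f"accelerators; got {total}"
            )
        self.total = total


# For example, 16 racks of 64 GPUs would exactly fill one maximal pod.
pod = Pod(accelerators_per_rack=64, racks=16)
print(pod.total)  # 1024
```

Any rack/GPU combination works as long as the product stays within the pod limit; a 17th rack in the example above would exceed it.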

The group expects to establish the UALink Consortium in the third quarter of 2024 to oversee development of the standard. The UALink 1.0 specification will be made available to member companies at that time, while an upgraded version with higher bandwidth, UALink 1.1, is expected to launch in the fourth quarter of 2024.

The most notable absence from the group is Nvidia, the world's largest AI chipmaker with an estimated market share of 80 to 95%. It is understandable that Nvidia is not interested in UALink: the company has its own proprietary interconnect technology for GPUs and holds a dominant market position, with data center revenue (including AI chips) growing more than 400% year-on-year.

Besides AMD and Intel, the biggest beneficiaries of UALink are probably Microsoft, Meta and Google, companies that have spent billions of dollars on Nvidia GPUs to operate cloud services and train AI models. All three are looking for alternatives to Nvidia, a partner that keeps growing stronger and risks becoming a monopoly in the AI hardware segment.

According to Gartner, the market for AI accelerator chips in servers will reach 21 billion USD this year and is expected to grow to 33 billion USD by 2028, while AI chip revenue is forecast to reach 33.4 billion USD by 2025.
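For a sense of what those Gartner figures imply, a one-line calculation gives the compound annual growth rate. This assumes "this year" means 2024, so the 21-to-33 billion USD growth spans four years; the helper below is purely illustrative:

```python
# Implied compound annual growth rate (CAGR) of the server AI-accelerator
# market, per the Gartner figures cited: 21B USD -> 33B USD over 2024-2028.
# The four-year span is an assumption about which year "this year" refers to.
start_usd_billions = 21.0
end_usd_billions = 33.0
years = 4

cagr = (end_usd_billions / start_usd_billions) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 12% per year
```

A roughly 12% annual growth rate helps explain why so many vendors want an open alternative to a single supplier's interconnect.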

Google is developing its own TPU and Axion chips for training and running AI models. Amazon also owns several lines of AI chips. Last year, Microsoft joined the race with Maia and Cobalt, and Meta is in the game with accelerator chips of its own.

Recently, Microsoft and OpenAI were said to have planned to spend at least $100 billion on AI training supercomputers, using future versions of Cobalt and Maia chips. These chips will need a connectivity solution – and UALink could be the answer.

The launch of UALink signals intensifying competition in the AI chip market. Whether it succeeds in challenging Nvidia will depend on its ability to attract more members, the speed of standards development, and market acceptance.