Author: Ian King, Bloomberg; Translated by: Baishui, Golden Finance

When a new product sets the tech world on fire, it’s usually a consumer product like a smartphone or a gaming console. This year, tech watchers zeroed in on an obscure computer part that most people can’t even see. The H100 processor powers a new generation of artificial intelligence tools that promise to transform entire industries, and it has helped propel its developer, Nvidia Corp., to become one of the world’s most valuable companies. It shows investors that the buzz around generative AI is translating into real revenue, at least for Nvidia and its most important supplier. Demand for the H100 is so high that some customers have had to wait as long as six months to receive it.

1. What is Nvidia's H100 chip?

The H100, named in honor of computer science pioneer Grace Hopper, is a souped-up version of the graphics processing units typically found in PCs that help video gamers get the most realistic visuals. It includes technology that turns clusters of Nvidia chips into single units that can crunch huge amounts of data and perform high-speed calculations. That makes it well suited for the energy-intensive task of training the neural networks that generative AI relies on. The company, founded in 1993, pioneered the market with investments made nearly two decades ago, betting that the ability to work in parallel would someday make its chips valuable in applications beyond gaming.

Nvidia H100. Photographer: Marlena Sloss/Bloomberg

2. What makes H100 so special?

Generative AI platforms learn to do tasks like translating text, summarizing reports and synthesizing images by absorbing reams of existing material. The more they see, the better they get at recognizing human speech or writing a job application letter. They develop through trial and error, making billions of attempts to reach proficiency and consuming vast amounts of computing power in the process. Nvidia says the H100 is four times faster than its predecessor, the A100, at training so-called large language models (LLMs) and 30 times faster at responding to user prompts. Since releasing the H100 in 2023, Nvidia has announced what it says are faster successors: the H200 and the Blackwell-based B100 and B200. That growing performance advantage is crucial for companies racing to train LLMs for new tasks. Many of Nvidia’s chips are seen as key to developing artificial intelligence, so much so that the U.S. government has restricted sales of the H200 and several less powerful models to China.
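The "trial and error" learning described above is, at its core, gradient descent: the model makes a guess, measures its error, and nudges its parameters to do slightly better, millions or billions of times over. A toy Python sketch of that loop (illustrative only; real LLM training works on billions of parameters across many GPUs, not one number):

```python
# Toy sketch of the trial-and-error loop behind AI training:
# guess, measure the error, nudge the parameter, repeat.
def train(xs, ys, steps=1000, lr=0.01):
    w = 0.0                       # single model "parameter", starts wrong
    for _ in range(steps):
        # average gradient of squared error across the training examples
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad            # nudge w in the direction that reduces error
    return w

# Learn y = 3x from examples; w converges toward 3.0.
w = train([1, 2, 3, 4], [3, 6, 9, 12])
```

The expensive part in real systems is that every "nudge" requires enormous matrix multiplications, which is exactly the workload GPUs accelerate.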

3. How did Nvidia become a leader in AI?

The Santa Clara, California-based company is a world leader in graphics chips, the components of computers that generate images on screens. The most powerful of these chips consist of thousands of processing cores that can execute multiple computational threads simultaneously, rendering complex 3D effects like shadows and reflections. Nvidia engineers realized in the early 2000s that they could repurpose these graphics accelerators for other applications by breaking tasks into smaller pieces and processing them simultaneously. AI researchers saw that this type of chip could finally make their work practical.

4. Does Nvidia have any real competitors?

Nvidia currently controls about 92% of the data center GPU market, according to market research firm IDC. Dominant cloud computing providers such as Amazon.com Inc.’s AWS, Alphabet Inc.’s Google Cloud and Microsoft Corp.’s Azure are trying to develop their own chips, as are Nvidia’s rivals Advanced Micro Devices Inc. and Intel Corp. So far, those efforts haven’t made much headway in the AI accelerator market, and Nvidia’s growing dominance has become a concern for industry regulators.

5. How does Nvidia stay ahead of its competitors?

Nvidia has updated its products, including the software that supports the hardware, at a pace no other company can match. The company has also designed a variety of cluster systems to help its customers buy the H100 in bulk and deploy it quickly. Chips like Intel's Xeon processors are capable of more complex data processing, but they have fewer cores and are much slower at processing the large amounts of information typically used to train AI software.

6. How do AMD and Intel compare to Nvidia in terms of AI chips?

AMD, the second-largest maker of computer graphics chips, last year launched a version of its Instinct series aimed at conquering a market dominated by Nvidia products. In early June, at the Computex trade show in Taipei, AMD CEO Lisa Su announced that an upgraded version of its MI300 AI processor would be available in the fourth quarter and said more products would be launched in 2025 and 2026, indicating the company's commitment to this business area. Intel is designing chips for AI workloads, but it acknowledges that demand for data center graphics chips is currently growing faster than demand for server processors, which have traditionally been its strong point. Nvidia's advantage is not only the performance of its hardware. The company invented a language called CUDA for its graphics chips that allows them to be programmed to perform the type of work that supports AI programs.
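CUDA programs express work as a "kernel": a small function that thousands of GPU threads each run on one element of an array at the same time. Real CUDA is written in a C++ dialect; this Python sketch (illustrative only, with a thread pool standing in for GPU threads) conveys only the data-parallel pattern:

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(x):
    # Per-element work: in CUDA, each GPU thread would run this
    # for exactly one array index.
    return x * 2.0

def launch(data, workers=4):
    # "Launch" the kernel across every element in parallel, the way
    # a CUDA grid assigns one thread per array position.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, data))
```

Because the same tiny function is applied independently to every element, the hardware can scale the work across thousands of cores, which is why the pattern maps so well to the matrix math in AI training.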

7. What product does Nvidia plan to release next?

The most anticipated product is Blackwell, and Nvidia said it expects to generate "significant" revenue from the new product line this year. However, the company has encountered obstacles in the development process that will slow down the release of some products.

Meanwhile, demand for H-series hardware continues to grow. CEO Jensen Huang has been acting as an ambassador for the technology and trying to entice governments and private businesses to buy in early or risk being left behind by those embracing AI. Nvidia also knows that once customers choose its technology for their generative AI projects, it will be easier to sell them upgrades than competitors hoping to woo users.