On June 2, NVIDIA announced at Computex that the world's 28 million developers can now download NVIDIA NIM (NVIDIA Inference Microservices) to deploy AI models in the cloud, in data centers, or on workstations, and easily build generative AI applications such as copilots (AI assistants) and ChatGPT-style chatbots. According to NVIDIA, NIM cuts model deployment time from weeks to just a few minutes.
NVIDIA NIM is a set of optimized cloud-native inference microservices that NVIDIA launched in March this year, designed to make it easier for enterprises of all sizes to deploy AI services. A Securities Times reporter checked NVIDIA's website and confirmed that NIM is now listed there and available for users to download and deploy.
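To illustrate the kind of deployment NIM targets, below is a minimal sketch of how an application might call a NIM microservice running locally. It assumes the container exposes an OpenAI-compatible chat endpoint; the port, model name, and API key are illustrative placeholders, not details from the announcement.

```python
# Hypothetical sketch: calling a locally deployed NIM microservice.
# NIM containers expose an OpenAI-compatible HTTP API, so a standard
# OpenAI client can be pointed at the local endpoint. The port, model
# name, and API key below are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local NIM endpoint (assumed)
    api_key="not-used-locally",           # placeholder; a local deployment may not need a real key
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",      # example model identifier (assumed)
    messages=[{"role": "user", "content": "Summarize what NVIDIA NIM is."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```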
In addition, NVIDIA founder and CEO Jensen Huang announced that Blackwell, the company's new generation of AI chips and supercomputing platform, has entered production, and that a Blackwell Ultra AI chip will follow in 2025. The next-generation AI platform, named Rubin, is currently under development and is expected to be released in 2026; it will use HBM4 memory chips.
Blackwell is a new growth engine on which NVIDIA is pinning high hopes. After the company released quarterly results on May 22 that far exceeded market expectations, Jensen Huang said on the earnings call that Blackwell-architecture chips would begin shipping in the second quarter and ramp up production in the third, and that they are expected to bring the company substantial revenue this year.