According to Cointelegraph, OpenAI's paid user base across its business segments, including ChatGPT Enterprise, Team, and Edu, has grown nearly 67% since April, surpassing one million users as of September 5. The San Francisco-based artificial intelligence firm's chatbot continues to thrive on the strength of its underlying language models.

A Reuters report highlights that OpenAI's business products have seen significant growth, increasing from 600,000 users in April to one million. OpenAI is reportedly planning to introduce higher-priced subscription plans for its upcoming large language models, such as the Strawberry and Orion AI models, with subscriptions potentially priced at up to $2,000 per month.

This development follows the launch of xAI's Grok-2 AI assistant, available to X users with Premium or Premium+ memberships. Although xAI is a relatively new venture, launched in July 2023, Elon Musk said at Viva Tech Paris 2024 that it could become a competitor to OpenAI by the end of 2024.

OpenAI's valuation could reach $100 billion, with Apple and US chipmaker Nvidia reportedly interested in investing in the company's upcoming funding round. Microsoft, which reportedly holds a 49% stake in OpenAI after investing $13 billion since 2019, is also participating in the round. On August 29, OpenAI announced that ChatGPT's weekly active users had doubled over the past year, surpassing 200 million. Despite this growth, the company's revenue remains below expectations, with annualized sales of around $3.4 billion as of May 2024.

OpenAI has expressed support for California's AB 3211 AI bill, which would require watermarks in the metadata of AI-generated photos, videos, and audio clips. However, the company opposed another AI-related bill, SB 1047. Introduced on February 7 by California State Senator Scott Wiener and co-authored by Senators Richard Roth, Susan Rubio, and Henry Stern, SB 1047 would require AI developers to conduct safety testing on some of their models.

On September 5, the United States, the European Union, and the United Kingdom signed the Framework Convention on AI, emphasizing the importance of human rights and democratic values in regulating both public- and private-sector AI models. It is the first legally binding international treaty on AI, holding signatories accountable for any harm or discrimination caused by AI systems. However, enforcement mechanisms, such as penalties for violations, have yet to be established.