Since November 2022, when the artificial intelligence (AI) chatbot ChatGPT was made available for public use, its developer, OpenAI, has been battling global regulators.
This week, the company’s CEO, Sam Altman, spoke with officials in Brussels about the upcoming EU AI Act, about which he has “many concerns.”
Altman emphasized that the proposed regulations would cover general-purpose AI technology such as OpenAI's GPT-4.
He warned that the company's future operations in Europe could be at risk if overly restrictive regulation is enacted, stating that "details matter" in such legislation. "We will try to comply, but if we can't comply we will cease operating," he said.
This week, Sundar Pichai, the CEO of Google, also traveled to European capitals to speak with regulators as they develop “guardrails” for AI regulation.
According to representatives present at the meetings, Pichai advocated for rules that do not impede innovation.
The EU AI Act, which is expected to be finalized within the next year, will be one of the world’s leading AI technology regulation packages.
Initially, the legislation was designed to address specific, high-risk AI applications.
However, following a surge in the technology's popularity and accessibility, rules covering "foundation models" such as ChatGPT were added to hold developers accountable for how their applications are used, even if they have no control over that use.
In addition, companies will be required to disclose summaries of copyrighted materials used to train AI, and policymakers will classify the technology according to the risk it poses to society.
While firms across the industry agree that some regulation is necessary, executives have been vocal in opposing what they see as excessive regulation.
A week before he met with European leaders, Altman testified before the United States Congress in a “historic” hearing.
There, the OpenAI CEO and other industry leaders argued in favor of government regulation of AI technology.