Artificial intelligence (AI) models are advancing at a remarkable pace. Major developers are making significant strides in enhancing these models’ ability to comprehend complex queries and deliver more insightful, well-reasoned responses.

This was highlighted in a Sept. 12 announcement from OpenAI, the creators of the widely known ChatGPT model, regarding their new “Strawberry” model.

This development, also known as the OpenAI o1 model series, allows the models to spend more time thinking about problems before responding, “much like a person would.”

We're releasing a preview of OpenAI o1—a new series of AI models designed to spend more time thinking before they respond.

These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math. https://t.co/peKzzKX1bu

— OpenAI (@OpenAI) September 12, 2024

The models will also be able to “refine their thinking process, try different strategies, and recognize their mistakes,” according to the developer.

While AI certainly isn’t taking over the world — nor is that the goal of those developing the technology — its rapid advancement has legislators worried about whether such models can be controlled if they go rogue, and has prompted calls for safety measures to be implemented during the developmental stages.

Bills on the table

Over the past week, California lawmakers have continued to pass AI-related bills affecting residents and developers in California. 

This includes Assembly Bill 1836, prohibiting unauthorized AI-generated replicas of deceased personalities without prior consent to protect performers’ rights and likenesses.

However, one of the major bills contested among industry insiders is Senate Bill (SB) 1047, also known as the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.”

If passed, the bill will mainly impact major AI developers — like OpenAI, Google, and Microsoft — that have the resources to develop AI models requiring more than 10^26 integer or floating-point operations (FLOPs) to train and costing over $100 million.

Developers will be required to train and fine-tune the models to implement the safety features outlined in the bill. This includes AI model shutdown capabilities, creating and retaining a written safety protocol, ensuring third-party annual audits, and submitting compliance statements and incident reports to California’s attorney general.

The bill is facing backlash from developers of all sizes within the industry, who say it stifles innovation. Cointelegraph spoke with Dina Blikshteyn, a partner at the law firm Haynes Boone, to understand just how that could happen.

Impact on developers

Blikshteyn said that the bill could also extend to smaller developers that fine-tune AI models using computing power greater than or equal to 3 x 10^25 integer or floating-point operations at a cost of over $10 million.

“The bill aims to prevent disasters caused by AI models, particularly through the implementation of shutdown capabilities,” she said.

“However, it may not fully eliminate risks, as an AI model could trigger a chain reaction with harmful consequences even after shutdown.”

She also pointed out that: 

“While the bill's intent is positive, the requirements for safety protocols, audits, and compliance reports might be seen as excessive, potentially imposing burdensome disclosure and bureaucratic demands that could hinder innovation in California’s AI industry.”

The United States currently has no federal framework in place for regulating the outputs of AI models. However, Blikshteyn points out that states like California and Colorado are enacting their own regulations. 

The regulations on Governor Gavin Newsom’s desk would affect Californians who train and access the covered AI models. 

“The larger AI companies would have more manpower to handle the bill’s requirements,” she pointed out, “which may be considered a drain on smaller companies’ resources.”

“While large AI companies are unlikely to leave California, the variation in state laws and lack of federal oversight could push smaller developers to relocate or conduct AI work in states with fewer regulations on AI governance.”

California leads legislation

Nonetheless, Blikshteyn highlights what many in the industry see as a truth: “Legislation on a federal level that sets basic requirements for powerful AI models would be beneficial for both consumers and developers. It would also provide a baseline for all states as to what those requirements are.”

SB-1047 was submitted to Governor Newsom on Sept. 9 and is still awaiting a decision. Newsom has commented on the bill, saying that he’s been working on “rational regulation that supports risk-taking, but not recklessness.” However, he has also expressed concern over its potential impact on competitiveness.

With California being a global leader in tech innovation, its legal decisions regarding AI are something the entire world is watching with bated breath. 
