Paolo Ardoino, CEO of stablecoin issuer Tether, claims that people and their privacy will only be protected if artificial intelligence (AI) models are run locally.

Ardoino said that locally executable AI models would also ensure their resilience and independence.

He pointed to the latest devices available to users, such as smartphones and laptops, saying they are powerful enough to “fine tune general large language models (LLMs) with user's own data, preserving enhancements locally to the device.”
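To make the idea concrete, below is a minimal sketch of that kind of on-device fine-tuning using the open-source Hugging Face transformers, datasets and peft libraries. The model name, training data and hyperparameters are illustrative assumptions, not anything Tether or Ardoino has specified.

```python
# Minimal sketch: LoRA fine-tuning of a small open-weight LLM on local text,
# entirely on-device. Model, data and settings are illustrative assumptions.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # small enough for a laptop
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with low-rank adapters; only the adapters are trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# "User's own data" stands in here as a couple of local strings.
texts = ["My notes about project X...", "Draft email to the team..."]
ds = Dataset.from_dict({"text": texts}).map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=256), batched=True
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="local_adapter", num_train_epochs=1,
                           per_device_train_batch_size=1, report_to="none"),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("local_adapter")  # the learned enhancements stay on the device
```

Only the small adapter weights are trained and written to disk, which is what makes this style of fine-tuning feasible on consumer hardware rather than a data center.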

The X post ended with “WIP,” the common acronym for “work in progress.”

Speaking to Cointelegraph, Ardoino said that locally executable AI models represent a “paradigm shift” in user privacy and independence.

“By running directly on the user’s device, be it a smartphone or laptop, these models eliminate the need for third-party servers. This not only ensures that data stays local, enhancing security and privacy, but also allows for offline use.”
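As a rough illustration of what running a model directly on the user's device looks like in practice, the sketch below loads a small open-weight model with the Hugging Face transformers library and runs it entirely on the local machine. The model name is an assumption chosen for illustration; the only network access needed is the one-time download of the weights.

```python
# Minimal sketch: running a small open-weight LLM entirely on-device.
# Assumes the checkpoint has already been downloaded and cached locally;
# after that, inference works with no network connection at all.
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # block any further calls to the model hub

from transformers import pipeline

chat = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

prompt = "Summarize these private notes in one sentence: ..."
print(chat(prompt, max_new_tokens=64)[0]["generated_text"])
```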

He said this allows users to enjoy “powerful AI-driven experiences and data analysis” while still maintaining complete control over their information.

Tether recently announced its expansion into AI, and Ardoino said the company is “actively exploring” the integration of locally executable models into its AI solutions.

The comments come in response to recent reports of a hack at OpenAI, the AI developer behind the popular chatbot ChatGPT.

According to reports, a hacker gained access to OpenAI’s internal messaging systems in early 2023 and was able to obtain details about the design of the company's AI technologies.

Two sources close to the matter said the hacker stole details from sensitive discussions among OpenAI employees about the company’s technologies.

Related: Big Tech faces financial reckoning if human-level AI doesn’t happen soon

Additionally, ChatGPT users were in for a wake-up call after it was revealed that conversations in the macOS app were being stored in plain-text files rather than encrypted.

The revelation came shortly after Apple announced the integration of ChatGPT into its new “Apple Intelligence” lineup of AI-powered products and services.

The issue has reportedly since been resolved, though the community is still questioning why it happened in the first place. One hypothesis is that plain-text storage would make it easy for OpenAI to access the chat logs to further develop and train ChatGPT.
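For contrast, storing such logs encrypted at rest is straightforward. The sketch below uses the Python cryptography library's Fernet scheme purely as a generic illustration of encrypted local storage; it is not a description of how OpenAI actually fixed the issue.

```python
# Generic illustration of encrypting chat logs at rest, for contrast with
# the plain-text storage described above.
from pathlib import Path
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, kept in the OS keychain
cipher = Fernet(key)

log = "user: draft my resignation letter\nassistant: ..."
Path("chat.log.enc").write_bytes(cipher.encrypt(log.encode()))

# Only a process holding the key can read the conversation back.
print(cipher.decrypt(Path("chat.log.enc").read_bytes()).decode())
```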

OpenAI is one of the major AI developers currently releasing high-level AI models, alongside Google, Meta and Microsoft. Industry analysts, and even governments around the world, have been concerned about what a Big Tech monopoly over AI models could mean for users in terms of privacy and control of data.

Many have even called for the decentralization of AI, with initiatives like 6079 explicitly advocating a challenge to Big Tech’s dominance of the AI industry in pursuit of a fairer future.

Magazine: ‘Raider’ investors are looting DAOs — Nouns and Aragon share lessons learned