Blockchain could become the “bodyguard” that generative AI needs, protecting intellectual property (IP), mitigating cybersecurity and regulatory risks, and opening up new revenue streams.

With the explosion of generative AI tools like ChatGPT and DALL-E, companies are racing to adopt the technology to drive innovation and increase profits. The question remains, however: how can AI be turned into a tool that delivers real benefits to businesses?

Certainly, generative AI has created a buzz, and it clearly has significant potential. But harnessing that potential while confronting AI's major risks is now a pressing concern for business leaders, as AI strategy has graduated from the innovation lab to the boardroom.

According to KPMG's survey of 300 CEOs across industries, 77% of executives believe generative AI is the technology with the greatest impact, and 73% believe it will increase labor productivity. 71% plan to deploy a generative AI solution within the next two years, and 64% believe generative AI will give their businesses a competitive advantage.

A difficult issue, however, is protecting intellectual property (IP) rights when AI learns from available data, including both your own data and your competitors'. How can a company avoid infringing others' intellectual property rights while protecting its own IP?

The answer is still unclear, which is why 92% of the leaders KPMG surveyed consider implementing generative AI a moderate- to high-risk endeavor.

In this context, blockchain emerges as a potential solution that can act as a “bodyguard” for generative AI, helping to solve security, intellectual property and regulatory challenges.

Has Blockchain found mainstream application yet?

Blockchain—a decentralized, distributed technology primarily used to develop and manage cryptocurrencies, smart contracts, and non-fungible tokens (NFTs)—is increasingly seen as the safeguard for AI that the business world needs today.

KPMG, in its new report “Blockchain and Generative AI: A Perfect Combination?”, points out that the decentralization and transparency of blockchain can play an important role in protecting the intellectual property (IP) used to train generative AI.

Here are some key benefits of using blockchain to combat AI abuse, described in more detail in KPMG's report:

  1. Accurate authentication and identification: Blockchain's distributed identity and authentication capabilities can verify the authenticity and ownership of IP used to train AI chatbots. Blockchain enables the creation of an immutable transaction record, assigning IP to its rightful owners and preventing unauthorized use.

  2. Fair copyright and royalties: By converting pieces of IP into NFTs with smart contracts stored on the blockchain, companies can specify which pieces of IP can be used freely, which require attribution, and which require royalty payments or compensation.

  3. Protection against infringement: Storing and tagging IP on the blockchain before providing it to generative AI chatbots can significantly reduce the risk of violating others' intellectual property rights. Companies can compare their content with existing blockchain records to identify potential duplicates and avoid lawsuits or copyright violations.

  4. Legal certainty: AI is still the Wild West, from a legal standpoint. The legal frameworks around copyright protection, ownership and liability are just beginning to take shape. By applying blockchain technology to IP protection, companies can proactively address these uncertainties and establish transparent and traceable records of their content.

  5. Data security and compliance: Generative AI raises data security concerns, especially when AI models are trained on public data. By integrating blockchain into this process, companies can ensure compliance with data protection regulations and demonstrate responsible use of AI technology.
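To make the first and third points concrete, here is a minimal Python sketch of the underlying idea: content is fingerprinted with a hash, registered in an append-only ledger together with ownership and licensing metadata, and looked up before it is fed to a generative model. The `IPLedger` class and its fields are illustrative assumptions for this sketch, not the API of any real blockchain platform.

```python
import hashlib
import json
import time


class IPLedger:
    """Toy append-only ledger of IP fingerprints (illustrative only;
    a real deployment would record entries on an actual blockchain)."""

    def __init__(self):
        self.blocks = []  # each block links to the previous one via its hash

    def _hash_block(self, block) -> str:
        # Deterministic hash of a block's JSON representation.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def register(self, content: bytes, owner: str, license_terms: str) -> str:
        """Record a content fingerprint with ownership and licensing metadata."""
        fingerprint = hashlib.sha256(content).hexdigest()
        block = {
            "fingerprint": fingerprint,
            "owner": owner,
            "license": license_terms,
            "timestamp": time.time(),
            # Linking to the previous block's hash makes tampering detectable.
            "prev_hash": self._hash_block(self.blocks[-1]) if self.blocks else None,
        }
        self.blocks.append(block)
        return fingerprint

    def lookup(self, content: bytes):
        """Check whether content is already registered, e.g. before training on it."""
        fingerprint = hashlib.sha256(content).hexdigest()
        for block in self.blocks:
            if block["fingerprint"] == fingerprint:
                return block
        return None


# Register a piece of IP, then check it before using it as training data.
ledger = IPLedger()
ledger.register(b"proprietary training corpus v1",
                owner="Acme Corp",
                license_terms="attribution-required")

match = ledger.lookup(b"proprietary training corpus v1")
```

In practice the lookup step is what enables the duplicate detection described above: content that already matches a registered fingerprint can be excluded from training or flagged for licensing review.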

Securing the blockchain-based chain of custody

In many ways, blockchain is the ideal complementary technology to generative AI—the grounding rod for AI's lightning bolt. But, like AI itself, blockchain is not a universal solution for unleashing AI innovation. Technology is certainly important, but just as important are the people and process factors.

The work ahead is huge. Organizations must adopt and implement a responsible AI framework—an approach to designing, building, and deploying AI systems safely, reliably, and ethically that spans every part of that “people-process-technology” equation. Ensuring data integrity, statistical validity, and the accuracy of predictive models are the top three concerns cited by executives in the report.

There is still a lot of work to be done in this area, as KPMG points out in the report, with some areas falling short of expectations. Notably, 82% of participants in KPMG's AI risk survey say their company has a clear definition of AI and a clear understanding of its predictive models. Yet many of these companies use third-party AI solutions that operate as “black boxes,” meaning they cannot understand or control how those solutions work and lack the necessary transparency.

Only 19% of respondents said their company currently has the expertise needed to manage the full range of AI risk management tasks.

To successfully manage AI risk, companies need to consider their entire AI ecosystem, including the full lifecycle of everything related to AI. They then need to design operating models and processes that reflect leading practices for establishing a responsible AI framework.