MIT researchers, together with collaborators at other organizations, have produced the AI Risk Repository, a broad database of documented risks posed by AI systems. The release comes as the technology evolves at a rapid pace, and the risks of using AI systems evolve with it.

The repository seeks to help decision-makers in government, research, business, and industry assess the emerging risks associated with AI, even as the technology's transformative potential continues to grow.

The repository brings order to the documentation of AI risks

Although many organizations and researchers have acknowledged the importance of addressing AI risks, efforts to document and categorize those risks have been largely uncoordinated, resulting in a fragmented landscape of conflicting classification systems.

“We wanted a fully comprehensive overview of AI risks to use as a checklist,” MIT FutureTech incoming postdoc and project lead Peter Slattery told VentureBeat.

“But when we looked at the literature, we found that existing risk classifications were like pieces of a jigsaw puzzle: individually interesting and useful, but incomplete,” Slattery said.

The AI Risk Repository addresses this challenge by consolidating information from 43 existing taxonomies, drawn from peer-reviewed articles, preprints, conference papers, and reports.

This rigorous curation process has produced a database of more than 700 unique risks. The repository uses a two-dimensional classification system.

First, risks are classified by their causes, taking into account the responsible entity (human or AI), the intent (intentional or unintentional), and the timing (pre-deployment or post-deployment).

According to MIT, this causal categorization helps in understanding the circumstances and mechanisms through which AI risks can emerge.

MIT researchers categorized AI risks into seven domains

Second, risks are categorized into seven domains, including misinformation, malicious actors and misuse, discrimination and toxicity, and privacy and security.
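To make the two-dimensional scheme concrete, here is a minimal Python sketch of how a single repository entry might be modeled. The field names, types, and example values are illustrative assumptions for this article, not the repository's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

# Causal dimension: who causes the risk, why, and when it arises.
class Entity(Enum):
    HUMAN = "human"
    AI = "ai"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass
class Risk:
    description: str
    entity: Entity   # causal dimension: responsible entity
    intent: Intent   # causal dimension: intentional or not
    timing: Timing   # causal dimension: before or after deployment
    domain: str      # one of the seven domains

# Hypothetical example entry, not taken from the repository itself.
risk = Risk(
    description="Model produces biased hiring recommendations",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="discrimination and toxicity",
)
print(risk.domain)
```

Keeping the causal attributes separate from the domain label mirrors the repository's two-pronged design: the same risk can be examined either by how it arises or by what kind of harm it produces.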

The AI Risk Repository is designed to be a living database and is publicly available; institutions can download it for their own use.

The research team plans to update the database regularly with new risks, fresh findings, and emerging trends.

The AI Risk Repository is also intended to be a practical resource for businesses across sectors. For organizations developing AI systems, the repository serves as a valuable checklist for assessing and mitigating risks.

“Organizations using AI may benefit from employing the AI Risk Database and taxonomies as a helpful foundation for comprehensively assessing their risk exposure and management,” the MIT researchers wrote.

“The taxonomies may also prove helpful for identifying specific behaviors which need to be performed to mitigate specific risks,” added the researchers.

An organization building an AI-powered hiring system, for instance, can use the repository to identify potential risks associated with discrimination and bias.

Similarly, a company using AI for content moderation can draw on the “misinformation” domain to understand the potential risks of AI-generated content and put the necessary safeguards in place.
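Because the repository can be downloaded, a team could filter it by domain before a risk review. The sketch below assumes a CSV export with hypothetical `domain` and `description` columns and invented example rows; the real repository's column names and contents may differ.

```python
import csv
import io

# Hypothetical miniature export; not actual repository data.
csv_text = """domain,description
misinformation,AI-generated content spreads false claims at scale
discrimination and toxicity,Model outputs reflect biases in training data
privacy and security,Model leaks personal data seen during training
"""

def risks_in_domain(csv_file, domain):
    """Return descriptions of all risks tagged with the given domain."""
    reader = csv.DictReader(csv_file)
    return [row["description"] for row in reader if row["domain"] == domain]

# A team building a hiring system might start from the bias-related domain.
hiring_review = risks_in_domain(io.StringIO(csv_text), "discrimination and toxicity")
print(hiring_review)
```

The same one-line filter, pointed at the “misinformation” domain, would give a content-moderation team its starting checklist.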

MIT researchers collaborated with colleagues at the University of Queensland, the Future of Life Institute, KU Leuven, and AI startup Harmony Intelligence to scour academic databases and retrieve documents relating to AI risks.

The researchers also said the AI Risk Repository will inform future research as they identify more gaps that need attention.

“We will use this repository to identify potential gaps or imbalances in how risks are being addressed by organizations,” said Neil Thompson, head of the MIT FutureTech Lab.