AI News: US Commerce Dept Launches Tool to Test AI Model Risks 🧰

The growing demand for and popularity of Artificial Intelligence (AI) models have pushed authorities in several regions to lay down standards, and introducing an AI tool in those regions requires compliance with them. Against this backdrop, the United States Commerce Department, in collaboration with several partners, has launched Dioptra.

US to Tackle Malicious AI Attacks With Dioptra

Dioptra is a modular, open-source, web-based tool that the US Commerce Dept re-released in collaboration with the National Institute of Standards and Technology (NIST).

The relaunch of this initiative comes at a critical time, as the AI landscape continues to evolve quickly. The testbed is designed to measure how badly malicious attacks can degrade the performance of an AI system, with a particular focus on attacks that “poison” AI model training data.
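To make the “data poisoning” idea concrete, here is a minimal, self-contained Python sketch (not Dioptra itself, and not any NIST code) of one classic poisoning technique, label flipping: an attacker corrupts the labels on a fraction of the training data, and a toy 1-nearest-neighbor classifier trained on the poisoned set loses accuracy compared with one trained on clean data.

```python
import random

def predict_1nn(train, x):
    """Classify x by the label of its nearest training point (1-NN)."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, test):
    """Fraction of test points the 1-NN classifier labels correctly."""
    return sum(predict_1nn(train, x) == y for x, y in test) / len(test)

random.seed(0)

# Two well-separated 1-D classes: class 0 near 0.0, class 1 near 5.0.
clean = [(random.gauss(0, 1), 0) for _ in range(200)] + \
        [(random.gauss(5, 1), 1) for _ in range(200)]
test = [(random.gauss(0, 1), 0) for _ in range(100)] + \
       [(random.gauss(5, 1), 1) for _ in range(100)]

# Label-flipping poisoning attack: flip ~40% of the training labels.
poisoned = [(x, 1 - y) if random.random() < 0.4 else (x, y)
            for x, y in clean]

clean_acc = accuracy(clean, test)
poisoned_acc = accuracy(poisoned, test)
print(f"trained on clean data:    {clean_acc:.2f}")
print(f"trained on poisoned data: {poisoned_acc:.2f}")
```

A testbed like Dioptra is described as automating this kind of before/after comparison across real models and attack types; the sketch above only illustrates why poisoned training data measurably hurts model performance.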

Notably, Dioptra’s initial release took place two years ago, with a focus on helping companies that train AI models and the organizations that deploy them.

Its functionality at the time encompassed assessing, analyzing, and tracking AI risks. According to NIST, the tool can be used to benchmark and research models. NIST also said, “Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra.”

Like several other tools and AI regulations, Dioptra is one approach to achieving AI model safety and protecting users.

Governments and Tech Giants Set Rules to Keep AI Models in Check

A few days ago, tech behemoths including Google, Microsoft, Nvidia, and OpenAI took AI safety into their own hands and launched the Coalition for Secure AI (CoSAI). CoSAI focuses on establishing robust security frameworks and standards for AI development and deployment.
