According to Cointelegraph, researchers at the University of Chicago have created a tool called Nightshade that allows artists to 'poison' their digital art to prevent developers from using it to train artificial intelligence (AI) systems. The tool subtly alters images so that, when they are scraped into AI training data sets, they feed the model incorrect information and cause it to misinterpret the image. For example, it could make the AI believe that a picture of a cat is actually a dog, and vice versa.
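To make the idea concrete, the sketch below shows a generic data-poisoning setup of the kind the article describes: an image is nudged toward a different concept while its original caption is kept, so that a scraper ingesting it adds a misleading image-caption pair to a training set. This is a minimal illustration, not the Nightshade algorithm; the pixel-space interpolation, the epsilon bound, and the placeholder images are all assumptions made for the example.

```python
# Conceptual sketch of a data-poisoning attack (NOT the Nightshade method).
# A "cat" image is nudged a small, bounded amount toward a "dog" image while
# keeping its original caption, so a model trained on many such pairs learns
# the wrong association between the word "cat" and cat-like pixels.
import numpy as np

def poison_image(image: np.ndarray, target_image: np.ndarray,
                 epsilon: float = 0.05, steps: int = 10) -> np.ndarray:
    """Shift `image` toward `target_image` in small steps, bounded by `epsilon`.

    Real poisoning attacks optimise the perturbation against a model's
    feature space; pixel-space interpolation here is only a stand-in.
    """
    perturbation = np.zeros_like(image)
    for _ in range(steps):
        direction = target_image - (image + perturbation)
        perturbation += (epsilon / steps) * np.sign(direction)
    perturbation = np.clip(perturbation, -epsilon, epsilon)
    return np.clip(image + perturbation, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cat_image = rng.random((64, 64, 3))   # placeholder for a real cat photo
    dog_image = rng.random((64, 64, 3))   # placeholder for a real dog photo

    poisoned = poison_image(cat_image, dog_image)

    # The poisoned sample keeps its original caption, so the training set
    # now contains a (dog-shifted pixels, "cat" caption) pair.
    training_example = {"image": poisoned, "caption": "a photo of a cat"}
    print("max pixel change:", float(np.max(np.abs(poisoned - cat_image))))
```

The key property, under these assumptions, is that the perturbation stays small enough to go unnoticed by a human viewer while the accumulated effect of many such samples skews what the model learns for a given concept.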
As a result, the AI's ability to generate accurate, coherent outputs would be compromised. If a user requested an image of a 'cat' from the tainted AI, they might receive a dog labeled as a cat, or a blend of all the 'cats' in the AI's training set, including those that are actually images of dogs modified by Nightshade. Vitaly Shmatikov, a professor at Cornell University, stated that researchers 'don't yet know of robust defenses against these attacks,' suggesting that even large models such as OpenAI's ChatGPT could be vulnerable.
The research team behind Nightshade, led by Ben Zhao, a professor at the University of Chicago, built the tool as an extension of its existing artist-protection software, Glaze. In earlier work, the team developed a method for artists to obfuscate, or 'glaze,' the style of their artwork. Nightshade will eventually be integrated into Glaze, which is currently available for free on the web or for download.