Introducing Nightshade: Empowering artists to ‘taint’ AI models


Artists Fight Back Against Generative AI Models With Nightshade, an Open Source Tool

Generative AI has surged in popularity, with ChatGPT leading the way. But the boom has also drawn opposition from artists and performers who say their work is being used to train models without their consent. Several lawsuits have been filed against AI companies, including OpenAI, for training on copyrighted material without artists’ permission.

One of the primary concerns is the practice of scraping material from the web to build AI training datasets. Many artists tolerated web scraping when it served search indexing, because search results pointed readers back to their work. They object now that the same scraped material trains AI models that generate competing works.

To push back against the unauthorized use of their work, artists are turning to technology of their own. MIT Technology Review recently showcased Nightshade, an open source tool developed by University of Chicago researchers led by computer science professor Ben Zhao. Artists run Nightshade on their images before uploading them to the web; it makes invisible pixel-level alterations that “poison” the art for any AI model that later trains on it.

Unlike tools that merely mask the style of digital artwork, Nightshade goes a step further: it causes models that train on the images to learn the wrong names for the objects and scenery they contain. For example, images of dogs can be poisoned so that AI models perceive them as cats. With only a small number of poisoned samples in the training data, a model starts generating distorted dog images; as the number of poisoned samples grows, it consistently produces the wrong result, such as a cat when prompted for a dog.

The researchers tested Nightshade against Stable Diffusion, an open source text-to-image model, and successfully tricked it into generating cats when prompted for dogs. The data poisoning is also hard to detect: the altered pixels are invisible to the human eye and difficult even for software tools to identify.
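For readers curious about the general shape of such an attack, here is a minimal sketch, assuming a projected-gradient-style perturbation against an open-source CLIP encoder. This illustrates the broad class of technique, not the Nightshade authors’ actual method; the model choice, loss, step size, and pixel budget are all assumptions invented for the example.

```python
# A minimal sketch of embedding-space poisoning, NOT Nightshade's actual
# algorithm: nudge an image's pixels, within a small budget, so that an
# open-source CLIP encoder embeds it near a different concept ("cat").
# If the perturbed image is later scraped alongside its original "dog"
# caption, a model training on it learns a corrupted association.
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation is optimized

def poison(image: Image.Image, target_text: str = "a photo of a cat",
           epsilon: float = 8 / 255, steps: int = 100,
           lr: float = 1 / 255) -> torch.Tensor:
    """Return pixel values perturbed toward the target concept.

    For simplicity the L-infinity budget is applied in the processor's
    normalized pixel space; a real tool would constrain raw pixels and
    perceptual distance instead.
    """
    pixels = processor(images=image, return_tensors="pt")["pixel_values"]
    target = processor(text=[target_text], return_tensors="pt", padding=True)
    with torch.no_grad():
        target_emb = F.normalize(model.get_text_features(**target), dim=-1)

    delta = torch.zeros_like(pixels, requires_grad=True)
    for _ in range(steps):
        img_emb = F.normalize(
            model.get_image_features(pixel_values=pixels + delta), dim=-1)
        loss = -(img_emb * target_emb).sum()  # maximize similarity to "cat"
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # signed gradient descent step
            delta.clamp_(-epsilon, epsilon)   # keep the change imperceptible
            delta.grad.zero_()
    return (pixels + delta).detach()
```

Note that in this sketch the poisoned image keeps its original caption; the mislabeling lives in the pixels rather than the text, which is why a simple audit of captions would not catch it.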

Defending against Nightshade means finding and removing poisoned images from training datasets, then retraining any model that has already ingested them, both costly steps at the scale of modern training sets. While the researchers acknowledge the potential for malicious use of their tool, they hope it will help shift the power balance back toward artists and protect their copyright and intellectual property.
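To illustrate why that cleanup is hard, here is a hedged sketch of one naive filtering heuristic a model trainer might try: flag training pairs whose image matches some rival concept better than its own caption, according to CLIP. The rival captions and threshold here are assumptions invented for this example, and since perturbations of this kind are crafted against exactly such encoders, a filter like this should not be expected to catch them reliably.

```python
# A naive poisoned-pair filter, sketched for illustration only. It scores
# an (image, caption) pair with CLIP and flags the pair when a rival
# caption outscores the true caption by a margin. The rival list and
# margin are illustrative assumptions, not a tested defense.
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def looks_poisoned(image: Image.Image, caption: str,
                   rivals=("a photo of a cat", "a photo of a car"),
                   margin: float = 0.05) -> bool:
    """Flag the pair if any rival caption beats the true caption by `margin`."""
    inputs = processor(text=[caption, *rivals], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img = F.normalize(
            model.get_image_features(pixel_values=inputs["pixel_values"]), dim=-1)
        txt = F.normalize(
            model.get_text_features(input_ids=inputs["input_ids"],
                                    attention_mask=inputs["attention_mask"]), dim=-1)
    sims = (img @ txt.T).squeeze(0)                 # cosine similarity per caption
    return bool(sims[1:].max() - sims[0] > margin)  # rival wins -> suspicious
```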

The researchers have submitted a paper on Nightshade for peer review to the USENIX computer security conference. They believe that by giving artists a credible deterrent against companies that disregard their work and intellectual property, Nightshade can help restore the balance of power and protect artists’ rights in the AI era.
