Data 'poisoning' tool allows artists to corrupt AI models trained with unlicensed images
Visitor takes picture with mobile phone of image designed with artificial intelligence by Berlin-based digital creator Julian van Dieken (Photo by SIMON WOHLFAHRT/AFP via Getty Images)

A new data "poisoning" tool lets artists fight back against generative artificial intelligence systems trained on unlicensed works by corrupting the images those systems ingest.

The tool, Nightshade, was created to help artists protect their unique work from AI companies seeking to train their models on unlicensed images.

Nightshade allows artists to change the pixels in their art before uploading images online. The slight alterations are invisible to the human eye but can manipulate the outputs of machine learning models in unpredictable ways. For instance, dogs can become cats, and cars can become cows, MIT Technology Review reported.
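Nightshade's actual perturbations are carefully optimized and the tool has not been published, so the sketch below is only a generic, hypothetical illustration of the basic idea: nudging each pixel by an amount far too small for the human eye to notice while still changing the raw data a model trains on. The file names and the simple random-offset routine are assumptions made for this example, not Nightshade's method.

```python
# Illustrative sketch only: adds a tiny, imperceptible pixel-level offset to an image.
# This is NOT Nightshade's algorithm, which computes targeted, optimized perturbations.
import numpy as np
from PIL import Image

def perturb_image(path_in: str, path_out: str, epsilon: int = 2) -> None:
    """Shift every pixel channel by at most `epsilon` intensity levels (out of 255).
    Changes this small are generally invisible to a human viewer, yet they alter
    the exact pixel values an image-generation model would be trained on."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    perturbed = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(perturbed).save(path_out)

# Hypothetical usage: write a subtly altered copy before uploading the artwork online.
perturb_image("artwork.png", "artwork_perturbed.png")
```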

AI developers have come under fire recently for using copyrighted materials without compensation or permission to train their models. Those unlicensed "inputs" allow the AI-powered systems to mimic an artist's style, creating new outputs that resemble that artist's works.

Nightshade was developed by University of Chicago professor Ben Zhao and his research team to deter AI developers from using copyrighted materials without permission.

The poisoned samples cause AI models trained on them to malfunction, forcing companies to tediously find and delete each corrupted image the model has ingested in order to prevent further glitches.

Zhao and his team of researchers tested Nightshade on Stable Diffusion and on an AI model they trained themselves. After feeding in 50 corrupted pictures, prompts for an image of a dog produced animals with too many limbs and cartoon-like faces. After 300 poisoned images, the system generated pictures of dogs that resembled cats.

The researchers noted that "poisoning is not new" but that corrupting generative AI models "at scale is new."

While there is a risk that some could use the tool for malicious purposes, Zhao stated that it would be difficult to do real damage to larger AI models trained on billions of data samples.

Junfeng Yang, a computer science professor at Columbia University, told MIT Technology Review that tools like Nightshade could go a long way toward protecting artists' work from unlicensed use.

"It is going to make [AI companies] think twice, because they have the possibility of destroying their entire model by taking our work without our consent," Yang said.

Nightshade has not yet been released to the public. However, Zhao and his team also developed and launched Glaze, a tool that allows artists to make subtle changes to their work to mask their style.

"For example, human eyes might find a glazed charcoal portrait with a realism style to be unchanged, but an AI model might see the glazed version as a modern abstract style," Glaze's website states.

The researchers called the "power asymmetry" between AI developers and artists "ridiculous."

"If you're a movie studio, gaming company, art gallery, or indep artist, the only thing you can do to avoid being sucked into a model is 1) opt-out lists, and 2) do-not-scrape directives," the researchers wrote on Glaze's X account. "None of these mechanisms are enforceable, or even verifiable. Companies have shown that they can disregard opt-outs without a thought. But even if they agreed but acted otherwise, no one can verify or prove it (at least not today). These tools are toothless."

Candace Hathaway

Candace Hathaway is a staff writer for Blaze News.
@candace_phx