Meta's Oversight Board investigates two cases of AI-generated explicit content posted to Facebook and Instagram

Meta's Oversight Board is reportedly investigating some of the tech company's policies and past actions concerning AI-generated explicit images and deepfakes of women, according to PC Mag.

The Oversight Board's investigation cites two specific instances where AI-generated explicit images of real-life women were posted to both Facebook and Instagram. One took place in India, and the other reportedly happened in the U.S.

In one case, an AI-generated image of a nude woman being groped by a man was posted to a Facebook group dedicated to AI-generated content. The post was eventually taken down. The board declined to reveal the woman's identity to protect her privacy and prevent further harassment, saying only that she is a well-known American public figure.

The image of the nude woman was submitted to a media-matching database so that it would be detected if it appeared on any other Meta-owned platform, per PC Mag.

The Oversight Board noted that the second case involved "an AI-generated image of a nude woman posted on Instagram. The image has been created using artificial intelligence (AI) to resemble a public figure from India. The account that posted this content only shares AI-generated images of Indian women. The majority of users who reacted have accounts in India, where deepfakes are increasingly a problem."

The woman reportedly flagged the content to Meta as pornography, but the image was left up because the company did not respond to the report within 48 hours. She appealed Meta's initial decision to leave the content up, but that appeal, too, was automatically closed.

"The user then appealed to the Board," according to the Oversight Board's own report. "As a result of the Board selecting this case, Meta determined that its decision to leave the content up was in error and removed the post for violating the Bullying and Harassment Community Standard."

The BBC reported that deepfakes originating in India have become a serious problem, with most of them targeting celebrities.

AI expert Aarti Samani told the BBC that "Hollywood has borne the brunt of it so far," adding that actresses such as Natalie Portman and Emma Watson have been among the high-profile victims.

Samani added that AI has made it easier to create fake video and audio of people.

"The tools have become so much more sophisticated over the past six months to a year, which explains why we are seeing more of this content in other countries," Samani said.

"Many tools are available now, which allow you to create realistic synthetic images at little or no cost, making it very accessible."

Samani went on to say that India faces a unique problem with deepfakes, suggesting that its large, young population has driven increased social media use, which in turn fuels a "fascination with Bollywood and obsession with celebrity culture."

"Bollywood celebrity content makes an attractive clickbait, generating large ad revenue. There is also the possibility of selling data of people who engage with the content, unknown to them."
