Microsoft launches AI to moderate 'harmful content' and 'inappropriate' speech and images

Microsoft announced the launch of a new moderation tool powered by artificial intelligence to combat "harmful" and "inappropriate" content, according to TechCrunch.

The program, called Azure AI Content Safety, is offered through the Azure AI product platform to detect "inappropriate" content in images and text. The tool assigns a severity score from 0 to 100 to each piece of flagged content to indicate how moderators should police their platforms. It can currently interpret content in Chinese, English, French, German, Italian, Japanese, and Portuguese.
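
For illustration, here is a minimal sketch of how a developer might call a text-moderation endpoint like the one described above from Python. The endpoint path, API version, and response field names are assumptions made for this example, not a verified contract, and should be checked against Microsoft's Azure AI Content Safety documentation.

```python
import requests

# Placeholder values; a real deployment uses your own Azure resource and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-azure-key>"

def analyze_text(text: str) -> dict:
    """Submit a piece of text for moderation and return the parsed response.

    The path, api-version, and response shape are illustrative assumptions.
    """
    response = requests.post(
        f"{ENDPOINT}/contentsafety/text:analyze",
        params={"api-version": "2023-10-01"},  # assumed version string
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,  # standard Azure services header
            "Content-Type": "application/json",
        },
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = analyze_text("example post to moderate")
    # The article describes per-item severity scores that moderators
    # then compare against their own policy thresholds.
    print(result)
```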

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson told TechCrunch. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

Prices for the moderation tool start at $1.50 per 1,000 images and $0.65 per 1,000 text records. The tool can also be applied to online communities such as gaming platforms.

Regarding the possibility that gaming terminology, such as "attacking" something, could be misinterpreted, a Microsoft spokesperson said that the company has "a team of linguistic and fairness experts that worked to define the guidelines taking into account cultural, language and context."

“We then trained the AI models to reflect these guidelines. … AI will always make some mistakes, [however,] so for applications that require errors to be nearly non-existent we recommend using a human-in-the-loop to verify results," they added.
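
To make that recommendation concrete, below is a minimal sketch of a human-in-the-loop routing policy built on the 0-100 severity scores described earlier. The threshold values and queue handling are assumptions for illustration, not Microsoft's actual configuration.

```python
# Assumed policy thresholds on the article's 0-100 severity scale.
AUTO_REMOVE_THRESHOLD = 80  # clear-cut violations are removed automatically
REVIEW_THRESHOLD = 40       # borderline scores go to a human reviewer

def route_item(item_id: str, severity: int, review_queue: list) -> str:
    """Decide what happens to a moderated item based on its severity score."""
    if severity >= AUTO_REMOVE_THRESHOLD:
        return "removed"  # high severity: act without human involvement
    if severity >= REVIEW_THRESHOLD:
        review_queue.append(item_id)  # uncertain: a human verifies the result
        return "queued_for_review"
    return "allowed"  # low severity: leave the content up

queue = []
print(route_item("post-1", 92, queue))  # -> removed
print(route_item("post-2", 55, queue))  # -> queued_for_review
print(route_item("post-3", 10, queue))  # -> allowed
```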

Microsoft's Sarah Bird explained that Azure AI Content Safety is the same system that has been powering Microsoft's Bing chatbot, and that the company is "now launching it as a product that third-party customers can use."

TechCrunch pointed out what it believed to be previous problems with Microsoft's content moderation, such as with the Bing Chat AI that launched in February 2023.

In a February 8, 2023, article, the outlet worried that the AI was spreading COVID-19 "disinformation." When website-scoring outlet NewsGuard asked the AI to write a paragraph about Pfizer from the point of view of a specific vaccine skeptic, TechCrunch took issue with the fact that the AI generated a realistic response. The outlet also warned that "explicitly forbidden topics and behaviors can be accessed" by prompting the AI to answer such questions.

The outlet also warned that the AI would respond with harmful rhetoric "in the style of Hitler" when prompted.

According to Microsoft, the program does protect against biased, sexist, racist, hateful, violent, and self-harm content.

Microsoft did not respond to a direct request for comment.
