YouTube used AI to remove more than 8 million offensive videos from platform in just 3 months

According to YouTube, artificial intelligence has been instrumental in flagging and removing offensive videos from the platform. The company says it deleted 8 million videos in the last quarter of 2017 alone.

What are the details?

The videos were reportedly taken down from the platform for violating the site's terms and conditions.

Of the 8 million videos removed from the Google-owned platform, AI reportedly flagged about 6.7 million of those videos.

Fortune reported that the majority of the videos contained pornography or were spam. Other videos flagged and removed featured violence.

According to the report, AI was so proficient in detecting offensive videos that 76 percent of those flagged were removed from the site before even gaining a single view.

"Machines are allowing us to flag content for review at scale, helping us remove millions of violative videos before they are ever viewed," the YouTube team said. "And our investment in machine learning to help speed up removals is paying off across high-risk, low-volume areas (like violent extremism) and in high-volume areas (like spam)."

Despite AI's success at work also performed by human reviewers (machines, rather than people, flagged about 83 percent of the videos that were later manually removed), the company continues to hire more humans for those roles.

"Deploying machine learning actually means more people reviewing content, not fewer," the YouTube team explained. "Our systems rely on human review to assess whether content violates our policies. Last year, we committed to bringing the total number of people working to address violative content to 10,000 across Google by the end of 2018."

"We've also hired full-time specialists with expertise in violent extremism, counterterrorism, and human rights, and we've expanded regional expert teams," the team added.

Anything else?

According to a New York Times report, striking a balance between removing unwanted videos and maintaining free speech could be a "major challenge" for the company's future.

Eileen Donahoe, executive director at Stanford University's Global Digital Policy Incubator, said, "It's basically free expression on one side and the quality of discourse that's beneficial to society on the other side. It's a hard problem to solve."
