Facebook has removed millions of posts, photos, and other forms of content that violated its policies against hate speech, terrorist propaganda and child exploitation, as well as more than a billion fake accounts, the social media giant announced Thursday.
What was removed?
The purge included more than 1.5 billion fake accounts from April through September, according to the company’s report on community standards enforcement. Removals in that period also included 12.4 million pieces of terrorist propaganda, 2.2 billion pieces of spam and 66 million pieces of other content that violated Facebook policies.
The report also showed an increase over the previous six-month period, when 1.3 billion fake accounts had been removed.
Facebook’s announcement underscored the company’s struggles to uncover hate speech and bullying amid a rising tide of what some call “racist, sexist and anti-Semitic content online,” the Washington Post reported.
To enforce so-called community standards, Facebook hired thousands of people to monitor posts and also deployed artificial intelligence.
Facebook claims it catches more than “95 percent of nudity, fake accounts and graphic violence” before users report it. Its record on hate speech and bullying is far weaker: the company catches only 51.6 percent and 14.9 percent of such incidents, respectively, before users flag them.
Facebook chief executive officer and co-founder Mark Zuckerberg said during a call with reporters Thursday that the company is changing how it decides which content to remove. It will soon establish an independent board to review users’ appeals of those decisions.
Also, Facebook plans to publish the minutes of its meetings when it decides new content policies.
The company has faced criticism for a lack of transparency about how and why it removes content. Earlier this year, Facebook publicly disclosed some of its content-removal protocols.
The company also plans to share more information about what constitutes hate speech and its takedown of fake accounts, according to reports.