Twitter is updating how it decides which tweets to feature prominently and which to hide. The social media company says the goal is to weed out comments that are abusive before users have to report them.
But once again, it’s leading to questions about whether conservative voices will be silenced, an accusation Twitter has faced in the past. This time around, Twitter plans to use behavioral analysis to determine what represents “healthy conversations” on the site.
How will this work?
In a blog post, Twitter said accounts without a confirmed email address, for example, could be targeted. The same is true for users who sign up for multiple accounts at once, or who repeatedly mention and retweet accounts that do not follow them. Your visibility could also be reduced if you interact with accounts that violate Twitter’s code of conduct.
“These signals will now be considered in how we organize and present content in communal areas like conversation and search,” the blog post states. “Because this content doesn’t violate our policies, it will remain on Twitter, and will be available if you click on ‘Show more replies’ or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search.”
Twitter’s written policy forbids attacks based on race, religion, and ethnicity, for example. But conservatives have said the rules appear to be applied more strictly to them, and critics have argued that anti-Christian and anti-white views are often allowed to prevail on the platform.
British far-right activist Tommy Robinson was banned in March after writing a tweet that stated “Islam promotes killing people.” The remark was deemed hateful.
In an investigation of the company’s alleged left-wing bias, Project Veritas showed software engineer Abhinov Vadrevu on camera explaining how the company “shadow bans” users.
“One strategy is to shadow ban so you have ultimate control,” Vadrevu said. “The idea of a shadow ban is that you ban someone but they don’t know they’ve been banned, because they keep posting and no one sees their content.”
When asked whether algorithms would target conservative or liberal users, another software engineer said the “majority of [them] are for Republicans.”
In early testing of the system, Twitter saw a 4 percent drop in abuse reports from search and an 8 percent drop in abuse reports from conversations, its blog post states.
What did the Twitter CEO say?
In March, Twitter founder and CEO Jack Dorsey said the platform has “witnessed abuse, harassment, troll armies, manipulation through bots and human-coordination, misinformation campaigns, and increasingly divisive echo chambers.”
He added that the company has failed to “fully predict or understand the real-world negative consequences” of trolling and insulting behavior.
In its blog post, Twitter said the new behavioral analysis is just one attempt to weed out abuse.
“There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter,” the post stated.