A spokesperson for Twitter has issued an apology for the social networking site's "racist" photo algorithm.
The algorithm in question handles photos too large to display in full as a preview: it attempts to crop to the most important part of the photo, rather than defaulting to, say, the middle or the top. Twitter users alleged that when a photo contains both a black person and a white person, the algorithm is biased toward selecting the white person's face for the preview image.
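The general technique behind such cropping is saliency-based crop selection: score each pixel's "importance" (Twitter has said it uses a neural network for this), then pick the crop window with the highest total score. The following is a minimal illustrative sketch, not Twitter's actual implementation; the brute-force window search and the saliency map itself are assumptions for clarity.

```python
import numpy as np

def crop_by_saliency(saliency: np.ndarray, crop_h: int, crop_w: int):
    """Return (top, left) of the crop window with the highest summed saliency.

    `saliency` is a 2-D array of per-pixel importance scores. Production
    systems predict such maps with a trained model; any scoring works here.
    """
    h, w = saliency.shape
    best_score, best_pos = -np.inf, (0, 0)
    # Slide the crop window over every valid position (brute force for clarity).
    for top in range(h - crop_h + 1):
        for left in range(w - crop_w + 1):
            score = saliency[top:top + crop_h, left:left + crop_w].sum()
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos
```

The bias complaints amount to the claim that the learned saliency scores are systematically higher for lighter-skinned faces, which would steer this window search toward them.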
The issue was initially brought to light when Colin Madland, a white education tech researcher, tweeted about how Zoom cropped a black colleague out of a call because it reportedly failed to detect his face as a human face. When Madland tweeted a picture of himself and that colleague together, he noticed that the preview image Twitter selected displayed only his own face.
What are the details?
On Sunday, social media users shared their experiences with how Twitter reportedly displays users of different colors and insisted that the algorithm and its artificial intelligence, loosely modeled after a human brain, are biased against faces with darker skin tones.
Cryptography engineer Tony Arcieri was one of those users.
In a tweet, he shared a photo of Senate Majority Leader Mitch McConnell (R-Ky.) and former President Barack Obama. A preview of the photo cropped out Obama and showed only McConnell. Regardless of whether Obama or McConnell was positioned at the top of the photo, the algorithm always displayed only McConnell's face.
He wrote, "Trying a horrible experiment... Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama?"
According to Newsweek, Dantley Davis — Twitter's chief design officer — also acknowledged "evidence of racial bias in how a neural network used by the platform generates photo previews."
"Our team did test for bias before shipping the model and we did not find evidence of racial or gender bias in our testing," Davis told a user. "But it's clear from these examples that we've got more analysis to do. We're looking into this and will continue to share what we learn and what actions we take."
The outlet reported that Davis on Saturday carried out his own experiments with the social media giant's algorithm. He reportedly discovered the same issue.
In response to a fellow Twitter user who complained about the algorithm, Davis said, "It's 100% our fault. No one should say otherwise. Now the next step is fixing it."
A spokesperson for Twitter told Newsweek, "We'll continue to share what we learn, what actions we will take, & will open source it so others can review and replicate."
Newsweek reported that on Sunday, chief technology officer Parag Agrawal said it was not known how long it would take Twitter to fully investigate its AI system.
"This is a very important question," Agrawal wrote. "To address it, we did analysis on our model when we shipped it, but needs continuous improvement. Love this public, open, and rigorous test — and eager to learn from this."