The Twitter “Ethical AI” team was fired by Elon Musk.
A group trying to improve the transparency and fairness of Twitter’s algorithms was terminated by the new CEO as part of a round of layoffs.
When Elon Musk revealed his intention to buy Twitter in March of last year, he hinted that “the algorithm” that controls how tweets are displayed in user feeds would be made public so that it could be checked for bias.
His supporters were overjoyed, as were others who believe the social media site has a left-wing tilt.
But today, Musk’s management team fired a group of AI researchers who were working to make Twitter’s algorithms more transparent and fair, as part of an aggressive cost-cutting strategy that also involves eliminating thousands of Twitter jobs.
Rumman Chowdhury, director of Twitter’s ML Ethics, Transparency, and Accountability (META, not to be confused with the company of that name) team, tweeted that she had been fired as part of a round of mass layoffs initiated by the new management, though she did not appear to have been looking forward to working for Musk in any case.
Earlier this week, Chowdhury said the group’s work had been placed on hold as a result of Musk’s takeover. “We were told very clearly not to rock the boat,” she said. Chowdhury also noted that her team had been conducting significant new research on political bias, which might have helped Twitter and other social media platforms avoid unfairly penalising particular viewpoints.
A senior manager at Twitter’s META division, Joan Deitchman, confirmed that the entire staff had been let go. The “entire META team minus one” had been fired, according to Kristian Lum, a former member of the team and a machine learning researcher. This morning, no one from the team or Twitter could be reached for comment.
As more and more problems with AI, such as biases related to race, gender, and age, have come to light, many tech companies have established “ethical AI” teams ostensibly dedicated to identifying and mitigating those harms.
Twitter’s META unit was more forthcoming than most, publishing details of flaws in the company’s AI systems and allowing outside researchers to test its algorithms for new problems.
Last year, after users noticed that a photo-cropping algorithm appeared to favour white faces when deciding how to crop images, Twitter made the unusual decision to let its META unit publish details of the bias it uncovered.
The team also ran one of the first-ever “bias bounty” contests, inviting outside researchers to probe the algorithm for other problems. And in October of last year, Chowdhury’s team published details of unintentional political bias on Twitter, showing that right-leaning news sources were in fact amplified more than left-leaning ones.
Many outside experts viewed the layoffs as a setback, not just for Twitter but for efforts to advance AI ethics more broadly. “What a disaster,” tweeted Kate Starbird, an associate professor at the University of Washington who studies online misinformation.
According to Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco, “The META team was one of the only good case studies of a tech company running an AI ethics division that engages with the public and academia with substantive credibility.”
Alkhatib says Chowdhury is extremely well regarded within the AI ethics community and that her team did genuinely useful work holding Big Tech accountable. He says there are few corporate ethics teams worth taking seriously, and that META’s work was among the material he discussed in his classes.
The algorithms used by Twitter and other social media giants have an enormous impact on people’s lives and deserve to be studied, says Mark Riedl, a professor who works on AI at Georgia Tech. “Whether META had any impact inside Twitter is hard to discern from the outside, but the promise was there,” he says.
Riedl adds that letting outsiders probe Twitter’s algorithms was an important step toward more transparency and understanding of issues around AI. The group acted as a watchdog that could help the rest of us understand how AI might be affecting us, he says, and the researchers at META had exemplary credentials and long records of studying AI for social good.
As for Musk’s proposal to open-source the Twitter algorithm, the reality would be far more complicated. Many different algorithms affect the way content is surfaced, and it is difficult to understand them without the real-time data they are fed in the form of tweets, views, and likes.
The idea that there is one algorithm with an explicit political lean may oversimplify a system that can harbour more subtle biases and problems. Uncovering these was exactly the kind of work that Twitter’s META team was doing. “There aren’t a lot of groups that rigorously study their own algorithms’ biases and errors,” says Alkhatib of the University of San Francisco. META did that. And it no longer does.