Addressing Hate & Bias in Search Algorithms

For the last two weeks, the city of Chemnitz in Germany has been rocked by right-wing demonstrations that have escalated into violence against migrants and other minority communities. Internet platforms have not only played a mobilizing role for Germany’s far-right movement, but have also served the right’s agenda by spreading misinformation and extremist viewpoints. This is in line with researcher and CUNY professor Jessie Daniels’ discussion of the “Algorithmic Rise of the ‘Alt-Right’,” in which she argues that the Internet’s distributed design allows supremacists to spread hate on an unprecedented scale. The news coming out of Chemnitz also demonstrates the harmful, real-life consequences that biased search results can have. Research has shown that YouTube, in particular, has played a significant role in mainstreaming extremist viewpoints and conspiracy theories in search results. According to this research, YouTube’s search and recommendation algorithms were significantly less likely to present users with mainstream news and balanced content.

The Chemnitz riots are not the only example of algorithms amplifying hate. Google has similarly come under scrutiny for promoting supremacist websites and Holocaust denial in response to the search query “did the Holocaust happen?” Researchers and journalists have also shown how search algorithms can undermine social justice by suppressing marginalized voices and feeding into societal biases. In her seminal book “Algorithms of Oppression,” Dr. Safiya Umoja Noble highlights the racist and sexist notions underlying Google’s search algorithms, with searches for women of color routinely returning sexualized images. On YouTube, algorithms have not only demonetized videos by transgender creators, but also continue to block and demote LGBTQ-related videos in search results.

Driving the search algorithms of Internet platforms are advertising-based business models that maximize revenue by presenting users with the kind of content that keeps their attention on the platform for as long as possible. These algorithms constantly evolve by learning from users’ engagement. However, because companies treat their algorithms as trade secrets, they continue to leave users in the dark about the logic behind the search results presented to them.
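
To make that incentive concrete, the minimal sketch below ranks hypothetical items purely by expected time-on-platform. It is an illustrative assumption about how engagement-driven ranking behaves in general, not a description of any company’s actual system, and every name, weight, and number in it is invented for the example.

```python
# Illustrative sketch only: a toy, engagement-maximizing ranker. This is NOT
# any platform's actual algorithm; every name, weight, and number here is a
# hypothetical stand-in used to show the incentive structure described above.
from dataclasses import dataclass
from typing import List


@dataclass
class Item:
    item_id: str
    p_click: float                 # predicted probability the user clicks
    expected_watch_minutes: float  # expected time spent if clicked


def engagement_score(item: Item) -> float:
    """Score content purely by expected time-on-platform."""
    return item.p_click * item.expected_watch_minutes


def rank(candidates: List[Item]) -> List[Item]:
    """Order results to maximize expected engagement. Nothing in this
    objective accounts for accuracy, balance, or harm, which is how
    attention-grabbing extreme content can outrank sober reporting."""
    return sorted(candidates, key=engagement_score, reverse=True)


if __name__ == "__main__":
    results = rank([
        Item("balanced_news_report", p_click=0.10, expected_watch_minutes=4.0),
        Item("sensational_conspiracy_clip", p_click=0.25, expected_watch_minutes=9.0),
    ])
    for item in results:
        print(item.item_id, round(engagement_score(item), 2))
```

Even in this toy version, the sensationalist item wins simply because it is predicted to hold attention longer, which is why systems that learn only from engagement signals can end up amplifying extreme content.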

Making algorithms accountable to vulnerable users

Given the secretive and opaque nature of these algorithms, researchers and civil society actors struggle with the question of how best to hold Internet companies accountable for their algorithmic designs. Public outcry and activist pressure have yielded some positive change. After Safiya Umoja Noble and other researchers drew attention to the racist notions underlying Google Search results, the company modified its algorithm to represent Black women in a less sexualized way. Likewise, the company demoted websites promoting Holocaust denial from top search results. However, companies continue to respond reactively rather than proactively to public concerns about their automated decision-making.

Tech companies must make significant changes to ensure that the algorithms they create do not reflect the societal hate and biases encountered by minority users. They also have a unique opportunity to challenge racism, anti-Semitism and other forms of prejudice through educational efforts. Journalist Yair Rosenberg argues that removal of hate speech “suppresses a symptom of hate, not the source.” Instead, he maintains that companies should work closely with groups like the Anti-Defamation League to develop counter-speech measures, such as disclaimers warning users that they are engaging with content promoting Holocaust denial. This could also be a helpful strategy for YouTube and other companies wanting to create educational opportunities for users searching for news on events like the Chemnitz protests.

To ensure that search algorithms do not promote content that undermines social justice, companies should also take a stand against hate in their terms of service. Enforcement of these terms should not rely solely on automated systems; it must also involve humans making decisions about the types of content that violate the company’s terms.

Calling for greater transparency in how companies enforce their terms of service, initiatives like the Ranking Digital Rights project have prompted companies to improve their transparency reporting by informing users about the number of accounts and pieces of content taken down for violating their terms. With its recent Transparency Report, Facebook has taken an important first step in this direction: the report publishes statistics on the company’s responses to hate speech and other types of content prohibited in its Community Standards. Companies should also provide appeals mechanisms that allow users to submit a complaint if they think their content was taken down wrongfully.

These strategies will only have long-term impact if companies work closely with affected communities when making decisions that shape user rights. Discussing algorithmic bias against vulnerable users, Farhad Manjoo writes in the New York Times, “These people — women, minorities and others who lack economic, social and political clout — fall into the blind spots of companies run by wealthy men in California.” It is critical for anti-hate organizations and other groups representing minority users to have a seat at the table when companies make changes to their policies and technological designs.

Some of the civil rights challenges that algorithmic designs have created for minority communities directly contradict recent claims that tech companies have a liberal bias. In her work, Safiya Umoja Noble argues that automated decision-making and algorithms will be among the major human rights issues of this century. Creating algorithms with companies’ vulnerable users in mind will be critical to ensuring that an increasingly automated online environment protects the human rights of all users.