Lack of diversity is at the core of social media's harassment problem

Right-wing figures and far-right trolls mocked the questions Facebook's Mark Zuckerberg faced about diversity. But the issue is crucial to understanding how platforms enable harassment.

Illustration: Sarah Wasko / Media Matters

This week, Facebook CEO Mark Zuckerberg was questioned about racial diversity within his company as he appeared before House and Senate committees to address Facebook’s handling of user data. Facebook -- and the tech industry more generally -- has often been criticized for its lack of diversity, an issue that, as members of Congress pointed out, can hinder the platform’s ability to respond to both discrimination against African-American users and fake news.

Rep. Yvette Clarke (D-NY) discussed the relationship between Facebook’s fake news problem and the lack of diversity within the company itself.

Sen. Cory Booker (D-NJ) asked Zuckerberg about racial discrimination enabled by Facebook and pointed to a “growing distrust ... about Facebook’s sense of urgency” in addressing such discrimination.

Rep. G.K. Butterfield (D-NC) questioned Zuckerberg on Facebook’s lack of diversity:

REP. G.K. BUTTERFIELD (D-NC): You and your team certainly know how I feel about racial diversity in corporate America, and [Facebook Chief Operating Officer] Sheryl Sandberg and I talk about that all of the time. Let me ask you this, and the Congressional Black Caucus has been very focused on holding your industry accountable -- not just Facebook, your industry -- accountable for increasing African-American inclusion at all levels of the industry. And I know you have a number of diversity initiatives. In 2017, you’ve increased your black representation from 2 to 3 percent. While this is a small increase, it's better than none. And this does not nearly meet the definition of building a racially diverse community. CEO leadership -- and I have found this to be absolutely true -- CEO leadership on issues of diversity is the only way that the technology industry will change. So, will you commit, sir, to convene, personally convene a meeting of CEOs in your sectors -- many of them, all of them perhaps, are your friends -- and to do this very quickly to develop a strategy to increase racial diversity in the technology industry?

MARK ZUCKERBERG: Congressman, I think that that's a good idea and we should follow up on it. From the conversations that I have with my fellow leaders in the tech industry, I know that this is something that we all understand, that the whole industry is behind on, and Facebook is certainly a big part of that issue. We care about this not just from the justice angle, but because we know that having diverse viewpoints is what will help us serve our community better, which is ultimately what we're here to do. And I think we know that the industry is behind on this.

Right-wing media figures and far-right trolls scoffed at the idea of questioning the tech industry’s lack of diversity

Right-wing figures and far-right trolls scoffed at these questions on different social media platforms -- including Gab, an alternative to Twitter that has been called a “haven for white nationalists” and has on occasion served as a platform to coordinate online harassment -- dismissing them as “insane” and describing efforts to increase racial diversity as discrimination “against white people.” 

But experts have criticized Facebook and other platforms for the lack of racial diversity within their ranks and explained that this lack of diversity is at the core of social media’s harassment problems

Members of Congress were not alone in their concern that Facebook’s racial homogeneity might diminish its capacity to create a safe environment for every user and protect user data. Bärí A. Williams, formerly a senior commercial attorney at Facebook, explained that racial diversity specifically would improve the platform’s ability to respond to data breaches, “fill blind spots,” and improve “cultural competency” through “lived experience.”

While Zuckerberg announced Facebook’s intention to rely on artificial intelligence (AI) to address many of the social network’s shortcomings, Molly Wood, host of the Marketplace Tech radio show, pointed out that AI is not a substitute for a racially inclusive workforce.

A lack of racial diversity in companies’ ranks is at the core of the harassment problem on their social media platforms, as online harassment disproportionately targets people of color. According to Pew, “harassment is often focused on personal or physical characteristics; political views, gender, physical appearance and race are among the most common,” with African-Americans experiencing more harassment because of their race or ethnicity than other groups, and women experiencing more harassment than men:

Some 14% of U.S. adults say they have ever been harassed online specifically because of their political views, while roughly one-in-ten have been targeted due to their physical appearance (9%), race (8%) or gender (8%). Somewhat smaller shares have been targeted for other reasons, such as their religion (5%) or sexual orientation (3%).

Certain groups are more likely than others to experience this sort of trait-based harassment. For instance, one-in-four blacks say they have been targeted with harassment online because of their race or ethnicity, as have one-in-ten Hispanics. The share among whites is lower (3%). Similarly, women are about twice as likely as men to say they have been targeted as a result of their gender (11% vs. 5%).

During a conversation with Wired about how Silicon Valley can address harassment on social media platforms, Black Lives Matter’s Chinyere Tutashinda talked about her experiences online as a black social justice activist, echoing Pew’s findings by remarking on the ways that people of color are disproportionately targeted online:

CHINYERE TUTASHINDA: I work within the social justice movement, and there’s no one, especially in the black community, who doesn’t expect harassment online. It’s just replicating what happens in the real world, right? How do we make other people know and care?

[...]

There is a lack of diversity in who’s creating platforms and tools. Too often it’s not about people, it’s about how to take this tool and make the most money off it. As long as people are using it, it doesn’t matter how they’re using it. There’s still profit to earn from it. So until those cultures really shift in the companies themselves, it’s really difficult to be able to have structures that are combating harassment.

[...]

Diversity plays a huge role in shifting the culture of organizations and companies. Outside of that, being able to broaden the story helps. There has been a lot of media on cyberbullying, for example, and how horrible it is for young people. And now there are whole curricula in elementary and high schools. There’s been a huge campaign around it, and the culture is shifting. The same needs to happen when it comes to harassment. Not just about young people but about the ways in which people of color are treated.

Experts have weighed in on the specific implications of social media platforms lacking racial diversity among their ranks. As Alice Marwick, a fellow at the Data & Society Research Institute, pointed out on Quartz, “the people who build social technologies are primarily white and Asian men,” and because “white, male technologists don’t feel vulnerable to harassment” in the same way that minorities or people of color do, they often fail to incorporate protections against online abuse in their digital designs.

To illustrate Marwick’s point, consider Twitter’s mute button, a feature that filters unwanted content from users’ timelines, making it easier to avoid abusive content directed at them. As Leslie Miley -- a black former engineering manager at Twitter who left the company specifically because of how it was addressing diversity issues -- told The Nation, the feature wasn’t perfected until a diverse team worked together to fix it:

[Leslie] Miley was a part of a diverse team at Twitter that he says proves his point. His first project as the engineering manager was to fix Twitter’s “mute” option, a feature that allows users to filter from their timelines unwanted tweets, such as the kind of harassment and personal attacks that many prominent women have experienced on the platform.

“Twitter released a version in the past that did not go over well. They were so badly received by critics and the public that they had to be rolled back. No one wanted to touch the project,” says Miley. So he pulled together a team from across the organization, including women and people of color. “Who better to build the feature than people who often experience abuse online?” he asks. The result was a new “mute” option that was roundly praised as a major step by Twitter to address bullying and abuse.

The blind spots caused by racial homogeneity might also delay platforms’ responses to rampant harassment. As documented by the magazine Model View Culture, far-right troll and white nationalist sympathizer Milo Yiannopoulos was allowed to harass users on Twitter for years before he was permanently banned for his “sustained racist and sexist” harassment of African-American comedian Leslie Jones. As Model View Culture points out, racial diversity could be extremely helpful in addressing the challenge social media platforms face in content moderation:

From start to finish of the moderation pipeline, the lack of input from people who have real, lived experience with dealing with these issues shows. Policy creators likely aren’t aware of the many, subtle ways that oppressive groups use the vague wording of the TOS to silence marginalized voices. Not having a background in dealing with that sort of harassment, they simply don’t have the tools to identify these issues before they arise.

The simple solution is adding diversity to staff. This means more than just one or two people from marginalized groups; the representation that would need to be present to make a real change is far larger than what exists in the population. Diversity needs to be closer to 50% of the staff in charge of policy creation and moderation to ensure that they are actually given equal time at the table and their voices aren’t overshadowed by the overwhelming majority. Diversity and context must also be considered in outsourcing moderation. The end moderation team, when it comes to social issues specific to location, context and identity, needs to have the background and lived experience to process those reports.

To get better, platforms must also address how user-generated reports are often weaponized against people of color. Although there’s nothing that can be done about the sheer numbers of majority-White users on platforms, better, clearer policy that helps them question their own bias would likely stop many reports from being generated in the first place. It may also help to implement more controls that would stop targeted mass-reporting of pages and communities by and for marginalized people.

Ultimately, acknowledging these issues in the moderation pipeline is the first step to correcting them. Social media platforms must step away from the idea that they are inherently “fair,” and accept that their idea of “fairness” in interaction is skewed simply by virtue of being born of a culture steeped in White Supremacy and patriarchy.