Twitter's new privacy policy is ill-conceived, potentially helping bad actors avoid accountability
Far-right extremists are already weaponizing the change to target activists and researchers
Written by E. Rosalie Li & Madelyn Webb
Research contributions from Danil Cuffe
On Tuesday, Twitter announced a policy prohibiting users from sharing “media of private individuals without the permission of the person(s) depicted” and promising to remove images and videos that violate the new rule. However, despite what may be the best of intentions, Twitter’s new private media policy is vague and difficult -- if not impossible -- to equitably enforce. Worse still, the policy seems uniquely vulnerable to abuse.
The new rule expands on Twitter’s existing private information safety policy, which already banned the sharing of personal information like phone numbers and addresses without permission, as well as the spread of nonconsensual sexual images (so-called “revenge porn”). Criticism of the policy was swift, with critics noting a number of likely problems both in identifying violative content and in Twitter’s ability to consistently or productively enforce the new rule.
Twitter has historically allowed users to share footage or images taken in public spaces -- including depictions of people who would likely be considered private individuals.
Under U.S. law, it is legal to take and share an image of a person -- even a private individual -- in a public space where that person has “no reasonable expectation of privacy.” With this new policy, however, Twitter has inverted the norms of longstanding legal precedent around public expectations of privacy and instead placed the burden on its users to obtain consent where none is legally required, with no evidence that this will serve the policy’s stated objective of protecting users from abuse.
The policy is so vague that its scope is unclear
The language of the new Twitter policy is vague about what content is included and how the platform would enforce potential violations. The updated Twitter policy reads:
You may not publish or post other people's private information without their express authorization and permission. We also prohibit threatening to expose private information or incentivizing others to do so.
In addition, you may not share private media, such as images or videos of private individuals, without their consent. However, we recognise that there are instances where users may share images or videos of private individuals, who are not public figures, as part of a newsworthy event or to further public discourse on issues or events of public interest. In such cases, we may allow the media to remain on the platform.
How Twitter defines “private media,” “private individual,” “newsworthy,” and “public interest” is unclear, making it nearly impossible to assess the policy’s impact. But even robust definitions would not resolve the policy’s many potential problems with identifying this content and subsequently enforcing the rule.
For example, the policy does not address whether consent is revocable, failing to specify how Twitter would handle fluctuating demands, which could effectively turn an acceptable post into a violation on a whim. The word “depictions” also leaves open the question of whether Twitter’s new policy would include non-photographic images of a person’s likeness, such as a cartoon.
The new policy may result in Twitter helping bad actors conceal criminal acts, chilling public accountability efforts
As the Columbia Journalism Review noted, Twitter’s enforcement of this policy may also be at odds with journalistic practices and wider public accountability measures, such as the documentation of extremism or potentially criminal acts.
The police murder of George Floyd is but one example of a situation where this new policy could have potentially devastating consequences. The video of Floyd’s death, posted online by a teenager who filmed a Minneapolis police officer kneeling on Floyd’s neck for several minutes, could arguably be found to violate Twitter’s new terms. Given that law enforcement’s original statement about Floyd’s killing was titled “Man Dies After Medical Incident During Police Interaction,” a policy prohibiting or deterring the sharing of such videos seems likely to have negative consequences for public accountability.
Excessive force and police brutality cases are far from the only instance where this policy could prevent public accountability for potentially criminal acts. On January 6, a crowd of pro-Trump rioters participated in an attack on the U.S. Capitol seeking to prevent the certification of Joe Biden’s victory in the 2020 election. Many of the individuals who participated in this attack were identified using public videos of the events -- some of which were posted online by participants themselves.
The FBI itself has put out images that would now seemingly violate Twitter’s new policy. Extremist groups and others who engage in violence in the future might be able to rely on Twitter to take down such law enforcement attempts to crowdsource help in identifying them. The policy could conceivably embolden bad actors to engage in violence, vandalism, and targeted harassment with little fear of being publicly identified on social media.
Twitter’s existing policies, if enforced, already cover concerns like nonconsensual nudity and sensitive media, so it’s unclear how the new policy would prevent “the misuse of private media,” which Twitter notes “can have a disproportionate effect on women, activists, dissidents, and members of minority communities.” In fact, far-right extremists in the U.S. have already used the new rule to target the accounts of activists and researchers on Twitter for removal.
Twitter’s policy will empower autocrats and hurt people living under them
With social media providing networks of communication across the globe, we must also look beyond the U.S. to understand the potential consequences of Twitter’s new policy.
Left to decide moderation policies on their own, social media companies have enabled atrocities in countries where the public has little recourse against those in power. These atrocities have included genocide and a military coup in Myanmar. Now, authoritarian leaders may have a new way to silence opposition while sidestepping the threat of unpleasant sanctions.
In Belarus, a country ruled by an autocratic dictator, one of the public’s only protections against a murderous regime is the ability to circulate video and images of violence by security forces, which have already retaliated against critics on social media. (Indeed, Facebook reported removing an information operation that it traced to the Belarusian KGB just days ago.) With far-right groups already weaponizing the policy within the U.S., it seems reasonable to consider how Belarusian authorities might weaponize this ill-conceived policy to take down evidence of the government’s atrocities -- and how much more extreme those consequences might be for Twitter’s other global users.
Narendra Modi, the leader of India known for his autocratic tendencies, has repeatedly demanded that Twitter remove posts critical of his government, under threat of arresting the company’s local employees. The vagueness of Twitter’s policy makes it seem as much a gift to those in power as a new tool for punishing already oppressed people.
Even a well-crafted policy still requires enough moderators to equitably enforce it
Even if the new rule were well-crafted, Twitter hasn’t indicated that it plans to make the increased investment in or expansion of its moderation workforce that enforcing this policy would require. It is also unlikely that artificial intelligence will be sufficient -- or nuanced enough -- to handle the increased workload in an equitable and consistent manner. Limited resources will likely contribute to unequal application, as seen on other platforms.
AI-driven moderation is also unlikely to succeed on its own and could even make the situation worse, given the documented inequities of AI-first enforcement, including unwarranted suspensions and biased moderation against activists and others. Facebook, for example, has struggled to moderate hate speech with algorithms that were found to be biased against people of color.
The question for Twitter remains: How will this new policy be equitably enforced? Equitable enforcement currently seems impossible, as the policy requires subjective judgments about critical terms like “newsworthy” and “public figure,” leaving ample opportunity for biases to influence moderation decisions.
This is not a surprise — Twitter can and should do better
Tech companies have repeatedly prioritized positive public relations over effective policymaking and enforcement: Facebook’s failure to adequately define and moderate hate speech has had well-documented consequences, while Google has struggled to build robust moderation protocols for YouTube.
Twitter’s irresponsible policy announcement is part of a pattern in which platforms roll out splashy updates with no regard for the consequences of enforcement or the investment required.
Social media serves one of the largest customer bases, if not the largest, in human history. The attention and care given to content moderation policies should reflect the vast implications of even minute changes.
Twitter can and should do better than a vague policy that, at best, is unworkable and unenforceable. At worst, it could end up emboldening bad actors, inhibiting public accountability, and having a chilling effect on journalism.