
The Many Ways Twitter Is Bad at Responding to Abuse


Hundreds of tributes have already been written since the world learned of Robin Williams’ death by suicide on August 11. The loss of his genius and warmth was clearly a terrible blow to his fans, friends, and especially his family. His 25-year-old daughter, Zelda Williams, wrote a beautiful farewell to her father on Instagram, including a quote from Antoine de Saint-Exupéry’s The Little Prince:

You - you alone will have the stars as no one else has them... In one of the stars I shall be living. In one of them I shall be laughing. And so it will be as if all the stars were laughing, when you look at the sky at night... You - only you - will have stars that can laugh.

But less than 48 hours after her father’s body was found, two Twitter users tweeted graphic photographs of a dead body directly at Zelda Williams, claiming the body was her father’s.

The photos were fake, but the impact on Williams was devastating. In a tweet that she later deleted, Williams pleaded for other users to report the two accounts that sent the photos: "Please report @PimpStory @MrGoosebuster. I'm shaking. I can't. Please. Twitter requires a link and I won't open it. Don't either. Please." Within a few hours Williams deactivated her Twitter account, writing in a final post, “I'm sorry. I should've risen above. Deleting this from my devices for a good long time, maybe forever. Time will tell. Goodbye.”

On Wednesday, after Zelda Williams’ horrific (and very public) torment, Twitter announced that it would revise its rules regarding abuse. (Twitter hasn't outlined how or when the rules will change.) But the Williams case hardly came as a shock to many Twitter users, especially women, who are subjected to vicious personal abuse on a near-daily basis. The problem extends far beyond Twitter, too: Across other social media and the entire Internet, women and minorities face rampant vitriol. That social media superpowers have done so little to change this suggests that the abuse isn’t a bug; it’s a feature.

Twitter, though, has structured its architecture for reporting abuse particularly poorly: It effectively rewards abusers while discouraging support, solidarity, and intervention for their victims. As Larry Lessig observed more than a decade ago, “code is law.” The fantasy of a free, unregulated space, whether online or offline, is just that: a fantasy. Every platform has values and regulation built into its very structure by human designers who make choices about which values to promote and which to inhibit.

Twitter allows anyone to create an account, meaning anyone can harass other users in a way visible to the larger community. This both increases the impact of such abuse and encourages other abusers to join in the fray. Mass abuse happens fast, and targeted users can drown in a sea of abuse within minutes: The journalist Caroline Criado-Perez received one rape threat per minute after daring to suggest that a woman be featured on British currency.

By comparison, the process of reporting abuse is slow, painful, and often ineffective. By actively discouraging third parties from reporting the abuse of others and by making the reporting process burdensome, Twitter has set up a game that targets of abuse can never win.

From the Twitter Help Center (emphasis mine):

Who can report abusive behavior on Twitter?

In order to investigate reports of abusive behaviors, violent threats or a breach of privacy, we need to be in contact with the actual person affected or their authorized representative. We are unable to respond to requests from uninvolved parties […] If you are not an authorized representative but you are in contact with the individual, encourage the individual to file a report through our forms.

In May, I had the opportunity to speak with Twitter officials on the phone about their abuse and harassment policies. I suggested that this policy of discouraging bystanders from reporting the abuse of others contributed to abusive practices. There is a real psychological cost, I said, to being forced to read the abusive messages. (If the abuser did not tweet the abuse directly at his target, the victim may be reading it for the first time; either way, she must revisit the messages in order to provide the direct links that Twitter’s abuse report requires.) This policy directly brought about Zelda Williams’ despairing reaction:

Twitter requires a link and I won’t open it. Don’t either. Please.

This policy also inhibits the larger Twitter community from engaging in positive actions of support and solidarity with the victims of harassment. Instead of bystander intervention, we get bystander silencing.

Moreover, the report-abuse form seems almost designed to discourage reports even from direct targets of abuse. For those who haven’t used it: the form confronts a user with a series of questions about their involvement with the abuse and the kind of abuse being reported, the answers to which generate further options (a bit like one of those old “Choose Your Own Adventure” books), followed by the aforementioned required links, a description of the problem, a strangely cheery request to “Tell us about yourself,” and an electronic signature. And here’s what happens once you submit the form:

What happens when Twitter receives a valid report?

Once you have submitted your report, we will review the reported account, including the links to Tweets you’d like us to investigate. If the account is in violation of our policies, we will take action, ranging from warning the user up to permanently suspending the account.

Consider this: If it is common knowledge that only targets, not bystanders, are encouraged to submit complaints, what conclusion are abusers likely to draw about who reported them? This serves as yet another deterrent to users who fear retaliation and escalation from their abusers.

Contrast this with the procedure for reporting spam. To report spam, a user must click a button that says “This account is spam.”

That’s it. Twitter is oddly unconcerned about false or unauthorized reports of spam: There are no questions about the user’s involvement with the alleged spam, no requirement to provide links or explain how the content qualifies as spam, no requirement of a signature, no need to fear retaliation from the reported spammer.
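The asymmetry between the two flows is stark enough to sketch in code. The following TypeScript is purely illustrative: it models the two reporting paths as described above, and the type and field names are my own, not any actual Twitter API.

// Illustrative only: hypothetical models of the two reporting flows described above.

// Reporting spam: a single click, no questions asked.
interface SpamReport {
  reportedAccount: string; // the "@handle" of the suspected spammer
}

// Reporting abuse: only the target or an authorized representative may file,
// and the form demands links to the abusive tweets, a narrative, and a signature.
interface AbuseReport {
  reporterRole: "target" | "authorized_representative"; // bystanders are turned away
  reportedAccount: string;
  linksToAbusiveTweets: string[]; // the victim must open and copy each abusive tweet
  descriptionOfProblem: string;
  aboutYourself: string;          // the "strangely cheery" biographical field
  electronicSignature: string;
}

Every additional required field is a point at which a shaken victim, like Zelda Williams, can reasonably refuse to continue.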

As Amanda Hess wrote recently in Slate, Twitter essentially tells users who are being harassed to “shut up.” Twitter sagely advises users who are being harassed that “abusive users often lose interest once they realize that you will not respond.” Forcing victims to adjust their behavior is rarely the right response to acts of willful abuse by others. Indeed, Twitter seems to believe that abuse on its service primarily involves junior high schoolers sniping at each other. The service recommends:

If the user in question is a friend, try addressing the issue offline. If you have had a misunderstanding, it may be possible to clear the matter up face to face or with the help of a trusted individual.

This simplistic perception of abuse clearly underlies Twitter’s belief in the effectiveness of the block function. Blocking means that a person being harassed will no longer see the abuser’s tweets in her mentions. It does not prevent the abuser from tweeting at his target, nor does it hide the abuse from anyone else; it just means the target can no longer see it. This is the equivalent of responding to someone yelling in your face as you walk down the street by putting on a blindfold and earplugs.
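A rough sketch of those semantics, assuming (as described above) that a block only filters the target’s own view, might look like this hypothetical TypeScript:

// Illustrative only: a toy model of what blocking does and does not do.
type Tweet = { author: string; mentions: string[]; text: string };

// The target's mentions: blocking hides the abuser's tweets from this view alone.
function visibleMentions(viewer: string, blocked: Set<string>, tweets: Tweet[]): Tweet[] {
  return tweets.filter((t) => t.mentions.includes(viewer) && !blocked.has(t.author));
}

// Everyone else's view: the abusive tweets remain public and unchanged.
function visibleToEveryoneElse(tweets: Tweet[]): Tweet[] {
  return tweets; // blocking changes nothing here
}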

Twitter demonstrates a fundamental lack of understanding of the dynamics of social media abuse by touting block and mute functions as adequate responses to it. People who use social media to abuse other people do so at least in part because of its public dimension: They want not only to force themselves into their targets’ line of vision, but to ensure that other people see them doing it. That’s part of the harasser’s game: not merely to attack an individual, but to attack, discredit, and humiliate a target in front of as large an audience as possible.

In her powerful article, “Why Women Aren’t Welcome on the Internet,” Amanda Hess detailed how online harassment is disproportionately targeted at women, and that:

this type of gendered harassment—and the sheer volume of it—has severe implications for women’s status on the Internet. Threats of rape, death, and stalking can overpower our emotional bandwidth, take up our time, and cost us money through legal fees, online protection services, and missed wages.

The impact of online harassment is not limited to the victims. Like sexual harassment in workplaces, schools, and on the street, this abuse can drive women out of public spaces and inhibit their contributions to public discourse. All of society suffers from the loss of women’s and girls’ voices in professional, creative, and social life. Harassment directed at racial and sexual minorities has similar effects, depriving social spaces of the diversity and innovation those groups might offer.

This isn’t just a Twitter problem, of course. Gawker Media, Facebook, and Google have also come under fire for inadequate responses to abuse. The question on many people’s minds is why social media superpowers cannot, or will not, design their platforms to optimize creativity and exchange rather than let them be swallowed up by the dark noise of abusers and trolls. After all, we have seen what their power looks like when they choose to exercise it. After learning that “mug shot websites” were extorting money from individuals facing financial, professional, and personal ruin as a result of the prominent display of their arrest records in search-engine results, Google changed its search algorithm to push those results down. Major credit card companies responded as well, terminating the accounts of the mug shot sites so that they could not receive payments.

Twitter now has an opportunity to devise structural changes to its platform that facilitate meaningful interaction while discouraging mob mentality. It shouldn’t have taken a public attack on a beloved celebrity’s memory and family to rouse any of these powerful companies from their slumber, but if this is the proverbial straw, let us hope that they will respond with wise, thoughtful, and lasting corrections to the architecture of their platforms. As Lessig wrote in 2000:

Our choice is not between “regulation” and “no regulation.” The code regulates. It implements values, or not. It enables freedoms, or disables them. It protects privacy, or promotes monitoring. People choose how the code does these things. People write the code. Thus the choice is not whether people will decide how cyberspace regulates. People—coders—will. The only choice is whether we collectively will have a role in their choice—and thus in determining how these values regulate—or whether collectively we will allow the coders to select our values for us.

All of the major tech companies claim to be built around the interests of “users” to promote robust and diverse interaction. The question, then, is who counts as “users.” By rewarding abusers and isolating targets of abuse, these platforms are driving away creative, valuable users in favor of malicious, repressive users. This isn’t just a heartless response; it will likely turn out to be a self-defeating one.








