The Primary Way to Report Harassment Online Is Broken

Death threats, violent misogyny, child pornography, copyright infringement. 

See any of those four very different things on a social media site and you pretty much have only one technical option: You can hit a button to mark the content as objectionable, “flagging” it for review by the site’s moderators. It’s a feature as simple as it is widespread. Vine, YouTube, Facebook, and Twitter all implement the flag to some degree.

It’s also a procedure under new scrutiny. Last week, the actress Zelda Williams quit Twitter and Instagram after users on both harassed her over the death of her father, Robin Williams. Anonymous accounts were tweeting graphic images of dead bodies at her, falsely claiming that they showed her father.

Following her public departure, Twitter announced that it will revise its rules about harassment. While the company hasn’t indicated what changes it plans to make, they will almost certainly involve the flag.

And that’s going to be hard. A paper released last month in New Media and Society explored the many problems of relying on the flag to moderate content. Its two authors—Kate Crawford, a researcher at Microsoft, and Tarleton Gillespie, a professor at Cornell University—suggest that, while the flag had its moment, it might be time for social media sites to migrate to something new.

However, the flag, as Crawford and Gillespie point out, is not a singular thing. Different sites have tailored flags to their various purposes—and to the various degrees that their users have demanded. Vine only has a report button, a single method of telling the platform’s moderators: “I object to this.” Facebook, meanwhile, has a multi-level display for flagging content. It even has a “support dashboard,” where users can see the status of their flags and cancel them before they’re reviewed. 

In a useful appendix, the authors break out various sites’ flagging interfaces. The chart below (here’s the full-size version) shows the huge variance among Vine’s, Twitter’s, and YouTube’s flagging features in the levels of objection a user can file. (That fourth, most-specific level of objection a user can file on Twitter? Reporting an ad.)


(Chart: New Media and Society)

But regardless of how deep you can go in your flagging, the system itself has drawbacks. The flag can be gamed, whether as a prank or as part of a sustained campaign. In 2012, a conservative group was accused of “flagging” pro-gay-rights Facebook pages as objectionable content. When questioned, one of the group’s leaders said he had only encouraged users to flag as a tactic after other groups flagged anti-gay-rights pages.

And in some places only the abused can flag content, not bystanders or friends. In the hours before she quit Twitter, Williams asked her followers to report some of the worst abuse. But Twitter’s corporate policies permit only the targets of abuse themselves to flag content as objectionable. Other users’ flags did nothing, and the abuse continued.

Most of all, flags are purposefully “thin” means of signaling dissent. Users don’t have many ways to indicate how strongly they object or how urgent their complaint is. While they can sometimes type in text, they ultimately have only a binary choice: flag or no flag. In some ways, the flag’s job is to make sites appear to be moderating offensive content, and, furthermore, it lets them justify any kind of content removal.

And, for all their flaws, flags are at bottom technical solutions to social problems. Twitter can improve how it handles mass abuse, but no algorithm or mechanism can completely remove the potential for harassment, nor can it decide what is and isn’t abusive. This doesn’t mean Twitter shouldn’t try (far from it!), and it doesn’t mean its attempts are in vain. But trying to engineer away a social problem will always be hard, and it will ultimately require a human mind to make the final call. Crawford and Gillespie write:

The flag is merely the first step in a process of content regulation, a process that most sites hope either disappears completely beneath the smooth operation of the site, or when it must appear, to present itself as a rational and fair policing mechanism. But the regulation of contentious user content is an invariably messy process, fraught with the vagaries of human interpretation and shaped by competing institutional pressures. In fact, it benefits social media platforms to retain the ability to make judgments on content removal, based on ad hoc and often self-interested assessments of the case at hand. 

So what’s to be done, given that no fix will be perfect? The two authors propose something that might seem counterintuitive: a social fix to what is, at heart, a social problem. Rather than having conversations about flagged content behind closed doors, they suggest putting content decisions up for public debate.

They point to the massively multiplayer game League of Legends as an example. Its developers have built something called the Tribunal, a communal moderating system where players can debate and vote on offenses. Instead of invisible moderation, there’s visible debate: a solution that produces both more conflict and more honesty.

“Flags,” write Crawford and Gillespie, “may allow us to disapprove of what others say”:

But without any public record or space for contestation, they leave us little chance to defend the right for those things to be said, or to rally against something so egregious that it warrants more than a quiet deletion. 








