
The Test We Can—and Should—Run on Facebook

In 1959, the sociologist Edward Shils wrote an influential essay called “Social Inquiry and the Autonomy of the Individual.” He discussed the nature of studying humans with new techniques—which, for him, included concealed cameras, microphones, and forms of “chemical and psychological manipulation.” These could be powerful tools, but they came at a great cost:

There is no doubt that some social scientists, with their zeal for novelty, will be attracted by the possibilities offered by these means of manipulating the external and internal lives of other persons. It is all the more necessary therefore, that the leading persons in these fields should declare themselves as strenuously and decisively opposed to such tampering with human autonomy.

Shils expresses many of the anxieties and conundrums we’ve heard this week about massive human studies on networked platforms.

For a widely criticized study, the Facebook emotional contagion experiment—which deployed new techniques of its own—managed to make at least one significant contribution. It has triggered the most far-reaching debate we’ve seen on the ethics of large-scale user experimentation: not just in academic research, but in the technology sector at large.

Most of the attention has focused on the particulars: how almost 700,000 Facebook users were subjected to a psychological experiment without their knowledge or explicit consent, how their News Feeds were manipulated to suppress either positive or negative updates, and how the study was accepted by a journal without academic ethics approval. But beyond this horizon, some truly difficult questions lie in wait: What kinds of accountability should apply to experiments on humans participating on social platforms? Apart from issues of consent and possible harm, what are the power dynamics at work? And whose interests are being served by these studies?

But this is not the first time emerging technologies have come into conflict with the reigning beliefs about how human experiments should be done. Not unlike today, the late 1950s and early 1960s were a time of rapid change for social science, with new approaches and a growing appetite for experimental interventions. “Manipulative experimentation,” according to Shils, “is not a relation between equals; it is a relationship in which power is exercised.” For him, the less a subject is informed about or agrees with the aims of the experimenter, and the less intelligible the means of the study, the more ethically problematic it becomes.

We have now had a glimpse within the black box of Facebook’s experiments, and we’ve seen how highly centralized power can be exercised. It is clear that no one in the emotional contagion study knew they were participants, and even now, the full technical means and mechanisms of the study are only legible to the researchers. Nor can we know if anyone was harmed by the negatively skewed feeds. What we do know is that Facebook, like many social media platforms, is an experiment engine: a machine for making A/B tests and algorithmic adjustments, fueled by our every keystroke. This has been used as a justification for this study, and all studies like it: Why object to this when you are always being messed with? If there is no ‘natural’ News Feed, or search result or trending topic, what difference does it make if you experience A or B?

The difference, for Shils and others, comes down to power, deception and autonomy. Academics and medical researchers have spent decades addressing these issues through ethical codes of conduct and review boards, which were created to respond to damaging and inhumane experiments, from the Tuskegee syphilis experiment to Milgram’s electric shocks. These review boards act as checks on the validity and possible harms of a study, with varying degrees of effectiveness, and they seek to establish traditions of ethical research. But what about when platforms are conducting experiments outside of an academic context, in the course of everyday business? How do you develop ethical practices for perpetual experiment engines?

There is no easy answer to this, but we could do worse than begin by asking the questions that Shils struggled with: What kinds of power are at work? What are the dynamics of trust, consent and deception? Who or what is at risk? While academic research is framed in the context of having a wider social responsibility, we can consider the ways the technology sector also has a social responsibility. To date, Silicon Valley has not done well in thinking about its own power and privilege, or what it owes to others. But this is an essential step if platforms are to understand their obligation to the communities of people who provide them with content, value and meaning.

Perhaps we could nudge that process with Silicon Valley’s preferred tool: an experiment. But this time, it would be an experiment run on Facebook and similar platforms themselves. Rather than assuming Terms of Service are equivalent to informed consent, platforms should offer opt-in settings where users can choose to join experimental panels. If they don’t opt in, they aren’t forced to participate. This could work like the array of privacy settings that already exist on these platforms. Platforms could even offer more granular options, letting users specify what kinds of research they are prepared to participate in, from design and usability studies through to psychological and behavioral experiments.
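
To make the proposal a little more concrete, here is a minimal sketch, in Python, of what granular, opt-in research consent could look like as a data model. It is purely illustrative: the category names, the ConsentSettings structure, and the can_enroll check are hypothetical, not part of any existing platform’s API.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ResearchCategory(Enum):
    """Hypothetical research categories, from least to most invasive."""
    DESIGN_USABILITY = auto()          # interface and usability studies
    PRODUCT_AB_TESTING = auto()        # routine A/B tests of features
    BEHAVIORAL_PSYCHOLOGICAL = auto()  # studies like the emotional contagion experiment

@dataclass
class ConsentSettings:
    """Per-user research preferences; every category defaults to opted out."""
    opted_in: set = field(default_factory=set)

    def opt_in(self, category: ResearchCategory) -> None:
        self.opted_in.add(category)

    def opt_out(self, category: ResearchCategory) -> None:
        self.opted_in.discard(category)

def can_enroll(settings: ConsentSettings, category: ResearchCategory) -> bool:
    """A study may only enroll users who explicitly opted into its category."""
    return category in settings.opted_in

# Example: a user accepts usability studies but not psychological experiments.
user = ConsentSettings()
user.opt_in(ResearchCategory.DESIGN_USABILITY)
assert can_enroll(user, ResearchCategory.DESIGN_USABILITY)
assert not can_enroll(user, ResearchCategory.BEHAVIORAL_PSYCHOLOGICAL)
```

The point of the sketch is simply the default: no one is enrolled in anything unless they have made an explicit, category-level choice, rather than having consent inferred from the Terms of Service.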

Of course, there is no easy technological solution to complex ethical issues, but this would be a significant gesture on the part of platforms towards less deception, more ethical research, and more agency for users.

Some companies might protest that this will reduce the quality of their experimental studies because fewer people will choose to opt in. There is a tendency in big data studies to accord merit to massive sample sizes, regardless of the importance of the question or the significance of the findings. But if there’s one thing we’ve learned from the emotional contagion study, it is that a large number of participants and data points does not necessarily produce good research.
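
A small numerical illustration of why sample size alone proves little, using made-up numbers rather than the study’s actual data: with hundreds of thousands of participants per condition, even a difference of a few hundredths of a standard deviation sails past a conventional significance test while remaining practically negligible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 350_000  # participants per condition (hypothetical, roughly the study's scale)

# Two groups whose true means differ by a trivial amount (Cohen's d = 0.02).
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.02, scale=1.0, size=n)

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (treated.var(ddof=1) + control.var(ddof=1)) / 2
)

print(f"p-value: {p_value:.3g}")     # far below 0.05: "statistically significant"
print(f"Cohen's d: {cohens_d:.4f}")  # yet the effect is practically negligible
```

With samples this large, statistical significance says almost nothing about whether a finding matters.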

It is a failure of imagination and methodology to claim that it is necessary to experiment on millions of people without their consent in order to produce good data science. Shifting to opt-in panels of subjects might produce better research, and more trusted platforms. It would be a worthy experiment. 

