Our information networks no longer even try to optimize for truth. Here's how we worked with the imperfect system.

First came the fakes. Old storm photos dredged up and labeled Sandy. Photoshopped sharks in a flooded New Jersey town. A still of the Statue of Liberty from *The Day After Tomorrow*. An empty Times Square. A scuba diver in the Times Square subway station. A lost seal borrowed from Duluth.
But very early Tuesday morning, a few hours after Sandy had made landfall, the flow of fake images of the storm began to slow, and by Tuesday afternoon, the unbelievable photographs of damage were almost all, tragically, real. For two days, our team here at The Atlantic along with journalist Tom Phillips (@flashboy) did our best to reduce the amount of disinformation spreading on the web and to confirm the work that amateurs and pros alike were publishing about the storm. Through the hours of detail-oriented tasks, some thoughts accumulated in my head about the state of our information ecosystem. I'm not sure if they're waste products -- like leftover browser tabs from a wild Internet goose chase -- or if they're an interesting distillate, but I thought I'd share them. I'm wary of overlearning from one case. And yet this is what I saw.
What is it to experience a major and fast-moving news event primarily through the Internet? I don't think we've done nearly enough anthropological research on this topic. We know what it is to sit in front of network news or cable news or even the radio. (One of my most distinct memories of childhood is sitting in front of the TV watching the LA riots unfold, drawing fantastical guns in a sketchbook.) Without really knowing it, you learned how to discount or rely on information depending on where it was coming from. If you saw a shot of dozens of fires across Los Angeles taken from a helicopter, you could count on that being real. If anchors said on the air that there were snipers on the 405, you knew to weight that report appropriately.
The nominal authority of the media and the natural authority of their live, on-sceneness combined to create an experience you could pretty much believe. What you watched was certainly mediated and in no way unproduced. The reports media outlets produced could be biased or wrong or framed in an idiotic way or otherwise terrible, but during a breaking news event, they were rarely total and complete bullshit. (Caveat granted that this is not 100 percent true: see, for example, the toppling of the Saddam statue in Baghdad.) But at the very least, some reporter had to stand on television and put his or her name on a story. In our mostly correct and psychologically satisfying desire to criticize the mainstream media's flaws, we sometimes forget how many good techniques and procedures they developed over the 20th century. Not out of the goodness of their own hearts, but because the incentives of their industry aligned to encourage veracity and the culture stuck. Judith Miller and the New York Times at least had to answer for her inaccurate reports about Iraq's nonexistent weapons of mass destruction.
The same incentives do not exist for most of the people who post the things you see when you're paging through Facebook, reading forwarded emails, scrolling through Tweets, or thumbing around Instagram. All of these platforms *want* you to post photographs. The algorithms at Facebook privilege photographs because they are what people are most likely to interact with. And users love a picture that's worth a thousand words, four thousand Facebook likes, 900 retweets, a bunch of hearts, and some reblogs: everyone likes being an important node. The whole system tilts towards the consumption of visual content, of pictures and infographics and image macros.
Particularly in a situation like the build-up to Sandy's landfall, everyone is just itching itching itching to post something cool and interesting about the storm. As Rebecca Greenfield noted in The Atlantic Wire, people really *wanted* to believe certain kinds of fake photos because they wanted there to be something to say. After the fake photos stopped popping up, my friend Rob Dubbin (a Colbert Report writer) tweeted to me, "classic case of supply finally meeting demand." Once there were real storm photos, people were more than happy to post those. They were going to post storm photos, whether they existed or not.
In the drive to flatten the production of media, to make everyone a publisher, we've ended up destabilizing the system we have for surfacing bits of truth. All pictures are the same on Facebook (or any other social network). Fake photo from 2004. Stock photo from 2009. AP photo from last night. Your mom's friend's cousin's flight attendant sister's friend's photo. They're all in the stream, just as likeable. And if one turns out to be fake, well, no one's career is on the line. No one is held responsible for amplifying bad information, and more often than not, it's impossible to figure out where it originated.
I'm not one for writing GET OFF MY LAWN posts about the social web. People have been creating and spreading bullshit since language was invented. But the way that the sites work is part of the problem. Right now, social networks are platforms of decontextualization. They could make creating chains of attribution easier. They could preserve the data embedded in photographs better. Instagram and Facebook, especially, in their closedness, make it more difficult to find any given source of information. Sooner or later, all the networks are going to have to take on the responsibility that comes with being millions of people's window on the world. Facebook, in particular, optimizes what you see for what you're most likely to click on. Is that the appropriate way to deal with news about a massive, dangerous storm?
Of course, the people on these social networks (i.e. all of us) are partially responsible as well. So many people rip photographs out of context. But cut and paste also tends to erase: erase provenance, erase responsibility, erase the internal integrity of a digital object. For example, photos can come loaded with metadata that helps interested parties understand where they came from. Reuters, say, puts out photos with captions, credit, and dates embedded in them. An Instagram photo can carry a geotag. Facebook photos (obviously) indicate who posted them. But all it takes is a simple cut and paste to get rid of that stuff. What was a photograph by Reuters becomes just another pic on Instagram, with the same level of built-in verification -- zero -- as the pics from your cousin Anthony's cell phone. This is a huge loss of value.
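To make that concrete, here is a minimal sketch of what that embedded metadata looks like to a machine, using Python's Pillow library. The filename is a placeholder, and wire-service captions and credits actually live in separate IPTC fields that need another library to read; this only pulls the EXIF tags that a cut-and-paste typically destroys.

```python
# A minimal sketch: read the EXIF metadata embedded in a photo file.
# Assumes Pillow is installed (pip install Pillow); the filename is
# hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return EXIF tags as a human-readable dict, if any survived."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

meta = read_exif("storm_photo.jpg")
# An old storm photo dredged up and relabeled "Sandy" gives itself away
# here: the capture date predates the storm.
print(meta.get("DateTime"), meta.get("Artist"), meta.get("Copyright"))
```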
Some things are the same as ever: The bigger your distribution platform, the more impact you have. But in the old days, people with the big guns developed an ethical code to deal with this responsibility. They created a culture that said, "You must get things right or you will pay for it with your job and credibility." Nowadays, there are many more small distribution channels, people who influence their friends. And they don't have the same sense of responsibility, nor any incentives to encourage them to get one.
Perhaps rightly so. It's not their job to get it right. And yet they are doing the job of information dissemination that used to be done by professionals. That's the rub.
Given this current system and the tide of fake photos Sandy had brought, I decided to do the only thing that seemed likely to help, in some small way: create content that would A) counter the misinformation, B) have authority, and C) be as viral as the bad information. I began, with the help of my team, to try to verify each viral photo, collecting the different investigations together in one post. The results of our work reached nearly a million people.
The first decision we made was to focus on photographs. Those are relatively easy to debunk or confirm. And we could take advantage of the preference for visuals that I noted above.
Then, we realized that we needed a way to post the photos without adding to the problem. So, anytime we posted a fake photo, it carried a prominent (digital) red sticker that said "FAKE." And I overlaid text on the photo itself explaining how we knew it was fake, with a shortened URL pointing to that evidence. That way, even if people did cut and paste the images, the key information would remain attached to them. In fact, the branding practically encouraged people to take the images and post them to social media. (We tried to meet the problematic system on its own terms.)
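The technique is simple to reproduce. This is not our actual tooling, just a sketch of the idea in Python with Pillow; the filenames and URL are placeholders:

```python
# A sketch of burning a verdict and a source link into a photo's pixels,
# so that cut-and-paste can't strip them. Not production tooling;
# filenames and the URL below are placeholders.
from PIL import Image, ImageDraw

def stamp_fake(in_path, out_path, source_url):
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Prominent red "FAKE" sticker in the top-left corner.
    draw.rectangle([10, 10, 150, 50], fill=(200, 0, 0))
    draw.text((25, 22), "FAKE", fill=(255, 255, 255))
    # How we know, plus a shortened URL, along the bottom edge.
    draw.text((10, img.height - 20), "Debunked: " + source_url,
              fill=(255, 255, 0))
    img.save(out_path)

stamp_fake("shark_street.jpg", "shark_street_fake.jpg",
           "https://example.com/sandy-debunks")
```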
The hardest part was actually (B), authority. Sure, we're The Atlantic. That helped. But I think the key was the tone of the post. We were not pronouncing from on high but reporting from the Internet trenches. We were transparent about how we knew what we knew and left room for doubt where appropriate.
We also drew heavily on our community, asking for help, and incorporating our networks into the search. People could see us thinking on their screens, and I think they saw that we were approaching this task fairly and with the pursuit of truth as a goal. I know that sounds high-minded, but you can't do this sort of thing for craven reasons or people smell it. I hope.
Along the way, I noted the techniques we were using to verify information. This actually became part of the post's virality. Each photograph was a mini-mystery that we were trying to unravel.
The second component of creating the conditions for viral spread was keeping every update in the same place. We didn't want to split the traffic and social media momentum between multiple URLs. In today's information marketplace, big posts tend to go VERY BIG, and everything else tends to sink like a rock.
The last part of the virality was that we used my Twitter account to talk about individual photos. I probably tweeted the link to the post 20 times, each time focusing on the latest photograph we'd demystified. Each tweet reinjected the entire idea back into the social ecosystem. Some got 500 or more retweets. And then, as the story rippled outwards, people found ways to scavenge the post for their own purposes. Oftentimes, people who'd tweeted fake photographs would tweet the post and apologize for having done so.
Some things you can never make go viral, no matter how hard you try. In this case, though, all of our efforts paid off. Something like 900,000 people visited the post, and many, many more saw tweets, Facebook posts, or references to the work. (For reference, that's more than twice the audience any post of mine had ever gathered.)
What we did was pair the long-developed media desire to get at the truth with the tools and ethos of the new ecosystem. I think we helped increase the truth-quotient of the social media posts out there by some small but measurable percentage. And that's one of the things I'm most proud of during my time at The Atlantic.
And yet, I know it was not enough. Millions of people think things happened in the world that did not.
With old media still largely moribund and no impending changes in the information ecosystem at the major social networks, the only current systematic answer is the laissez-faire one: over time, people will learn who to trust and who not to trust based on what they post. The people who "provide value" will win. I can't say that I saw such utopian ideas actually working during Sandy's media explosion on Monday. In fact, I saw the opposite. The best fake things -- like the sharks of New Jersey -- traveled further than anything real, precisely because they were designed as fictions to press our narrative buttons.
