
The Fight Between Waymo and Uber Intensifies


The legal showdown between Uber and Waymo appears to be escalating, weeks after Waymo accused Uber of stealing its top-secret designs for driverless vehicles.

An Uber spokesperson on Friday dismissed Waymo’s claims as a “baseless attempt to slow down a competitor,” in the ridesharing giant’s strongest public response yet to a federal lawsuit filed in February. The companies are in an intense battle for the future of what many believe could be a trillion-dollar industry, and the development of a technology that will dramatically reshape the way people move through the world.

Uber’s statement came shortly after Waymo asked a federal court to force Uber to stop its work on self-driving cars. (Waymo is a new company that spun out from Google's self-driving car project last year.) The request for an injunction was based on Waymo’s claim that Anthony Levandowski, a former leading engineer for Google’s self-driving car project, secretly stole 14,000 files from the company before he quit to start his own self-driving truck company. Uber acquired Levandowski’s startup, Otto, for $680 million shortly after it launched last year.

Waymo further claims that it has proof—via an email that seemed to have been sent to Waymo accidentally—that Uber copied Waymo’s laser-radar (lidar) system, the crucial component of what makes a self-driving car drive itself.

The request for a preliminary injunction was a natural next step, given the gravity of Waymo’s accusations against Uber. “The circumstances of this case are such that why wouldn’t you want your stuff out of Uber’s hands while this case is pending,” said Courtland Reichman, a trial lawyer who specializes in intellectual property law in Silicon Valley. “If this is such a big deal, as Google says it is, then why wouldn’t they?”

“We are incredibly proud of the progress that our team has made,” an Uber spokesperson said in a statement provided to The Atlantic. “We have reviewed Waymo’s claims and determined them to be a baseless attempt to slow down a competitor and we look forward to vigorously defending against them in court. In the meantime, we will continue our hard work to bring self-driving benefits to the world.”

Waymo fired back with its own response: “Competition should be fueled by innovation in the labs and on the roads,” a spokesperson said in an email, “not through unlawful actions. Given the strong evidence we have, we are asking the court to step in to protect intellectual property developed by our engineers over thousands of hours and to prevent any use of that stolen IP.”

Reichman, who isn’t involved with the case, says he expects things will only intensify. “These are big companies locked into a big battle,” he said. “They are going to fight.”


Why Is Silicon Valley So Awful to Women?


One weekday morning in 2007, Bethanye Blount came into work early to interview a job applicant. A veteran software engineer then in her 30s, Blount held a senior position at the company that runs Second Life, the online virtual world. Good-natured and self-confident, she typically wore the kind of outfit—jeans, hoodie, sneakers—that signals coding gravitas. That day, she might even have been wearing what’s known as the “full-in start-up twin set”: a Second Life T-shirt paired with a Second Life hoodie.

In short, everything about her indicated that she was a serious technical person. So she was taken aback when the job applicant barely gave her the time of day. He knew her job title. He knew she would play a key role in deciding whether he got hired. Yet every time Blount asked him a question about his skills or tried to steer the conversation to the scope of the job, he blew her off with a flippant comment. Afterward, Blount spoke to another top woman—a vice president—who said he’d treated her the same way.


Obviously Second Life wasn’t going to hire this bozo. But what the heck: He was here, and they had a new employee, a man, who needed practice giving interviews, so they sent him in. When the employee emerged, he had an odd look on his face. “I don’t know what just happened,” he said. “I went in there and told him I was new, and all he said was he was so glad I was there: ‘Finally, somebody who knows what’s going on!’ ”

All Blount could do was laugh—even now, as she looks back on the incident. In the hierarchy of sexist encounters, it didn’t rank very high. Still, it was a reminder that as a woman in tech, she should be prepared to have her authority questioned at any moment, even by some guy trying to get a job at her company.

One reason her career had gone so well, she thinks, is that she’d made a point of ignoring slights and oafish comments. Awkward silences, too. Over the years, she’s experienced—many times—the sensation of walking up to a group of male colleagues and noticing that they fell quiet, as though they’d been talking about something they didn’t want her to hear. She’s been asked to take notes in meetings. She’s found herself standing in elevators at tech conferences late at night when a guy would decide to get, as she puts it, handsy. When she and a male partner started a company, potential investors almost always directed their questions to him—even when the subject clearly fell in Blount’s area of expertise. It drove him crazy, and Blount had to urge him to curb his irritation. “I didn’t have time to be pissed,” she says.

Bethanye Blount, co-founder and CEO, Cathy Labs (Jason Madara)

But at some point, something inside her broke. Maybe it was being at tech conferences and hearing herself, the “elder stateswoman,” warning younger women to cover their drinks, because such conferences—known for alcohol, after-parties, and hot women at product booths—have been breeding grounds for unwanted sexual advances and assaults, and you never knew whether some jerk might put something in your cocktail. She couldn’t believe that women still had to worry about such things; that they still got asked to fetch coffee; that she still heard talk about how hiring women or people of color entailed “lowering the bar”; that women still, often, felt silenced or attacked when expressing opinions online.

“I am angry that things are no better for a 22-year-old at the beginning of her career than they were for me 25 years ago when I was just starting out,” Blount says. “I made decisions along the way that were easier for me and helped me succeed—don’t bring attention to being a woman, never talk about gender, never talk about ‘these things’ with men,” unless the behavior was particularly egregious. “It helped me get through. But in retrospect I feel I should have done more.”

Blount decided it was never too late to start speaking out, and teamed up with other women who had undergone a similar awakening. This past May, they formed a group called Project Include, which aims to provide companies and investors with a template for how to be better. One of her collaborators on the effort, Susan Wu, an entrepreneur and investor, says that when she was teaching herself to code as a teenager, she was too naive to perceive the sexism of internet culture. But as she advanced in her career and moved into investing and big-money venture capitalism, she came to see the elaborate jiu-jitsu it takes for a woman to hold her own. At one party, the founder of a start-up told Wu she’d need to spend “intimate time” with him to get in on his deal. An angel investor leading a different deal told her something similar. She became a master of warm, but firm, self-extrication.

Looking back, Wu is struck by “the countless times I’ve had to move a man’s hand from my thigh (or back or shoulder or hair or arm) during a meeting (or networking event or professional lunch or brainstorming session or pitch meeting) without seeming confrontational (or bitchy or rejecting or demanding or aggressive).” In a land of grand ideas and grander funding proposals, she found that the ability to neatly reject a man’s advances without injuring his ego is “a pretty important skill that I would bet most successful women in our industry have.”

Wu learned how to calibrate the temperature of her demeanor: friendly and approachable, neither too intimate nor too distant. She learned the fine art of the three-quarters smile, as well as how to deflect conversation away from her personal life and return it to topics like sports and market strategy. She learned to distinguish between actual predators and well-meaning guys who were just a bit clueless. And yet to not be overly wary, because that, too, can affect career prospects.

The dozens of women I interviewed for this article love working in tech. They love the problem-solving, the camaraderie, the opportunity for swift advancement and high salaries, the fun of working with the technology itself. They appreciate their many male colleagues who are considerate and supportive. Yet all of them had stories about incidents that, no matter how quick or glancing, chipped away at their sense of belonging and expertise. Indeed, a recent survey called “Elephant in the Valley” found that nearly all of the 200-plus senior women in tech who responded had experienced sexist interactions. (And just as the print version of this article went to press, a former Uber engineer added to the evidence of Silicon Valley’s gender problem when she wrote a blog post detailing what she said was a pattern of sexist behavior at the company.)

As Bethanye Blount’s and Susan Wu’s examples show, succeeding in tech as a woman requires something more treacherous than the old adage about Ginger Rogers doing everything Fred Astaire did, only backwards and in high heels. It’s more like doing everything backwards and in heels while some guy is trying to yank at your dress, and another is telling you that a woman can’t dance as well as a man, oh, and could you stop dancing for a moment and bring him something to drink?

Such undermining is one reason women today hold only about a quarter of U.S. computing and mathematical jobs—a fraction that has actually fallen slightly over the past 15 years, even as women have made big strides in other fields. Women not only are hired in lower numbers than men are; they also leave tech at more than twice the rate men do. It’s not hard to see why. Studies show that women who work in tech are interrupted in meetings more often than men. They are evaluated on their personality in a way that men are not. They are less likely to get funding from venture capitalists, who, studies also show, find pitches delivered by men—especially handsome men—more persuasive. And in a particularly cruel irony, women’s contributions to open-source software are accepted more often than men’s are, but only if their gender is unknown.

Stephanie Lampkin, founder and CEO, Blendoor (Jason Madara)

For women of color, the cumulative effect of these slights is compounded by a striking lack of racial diversity—and all that attends it. Stephanie Lampkin, who was a full-stack developer (meaning she had mastered both front-end and back-end systems) by age 15 and majored in engineering at Stanford, has been told when applying for a job that she’s “not technical enough” and should consider sales or marketing—an experience many white women in the field can relate to. But she has also, for instance, been told by a white woman at a conference that her name ought to be Ebony because of the color of her skin.

In the past several years, Silicon Valley has begun to grapple with these problems, or at least to quantify them. In 2014, Google released data on the number of women and minorities it employed. Other companies followed, including LinkedIn, Yahoo, Facebook, Twitter, Pinterest, eBay, and Apple. The numbers were not good, and neither was the resulting news coverage, but the companies pledged to spend hundreds of millions of dollars changing their work climates, altering the composition of their leadership, and refining their hiring practices.

At long last, the industry that has transformed how we learn, think, buy, travel, cook, socialize, live, love, and work seemed ready to turn its disruptive instincts to its own gender inequities—and in the process develop tools and best practices that other, less forward-looking industries could copy, thus improving the lives of working women everywhere.

Three years in, Silicon Valley diversity conferences and training sessions abound; a cottage industry of consultants and software makers has sprung up to offer solutions. Some of those fixes have already started filtering out to workplaces beyond the tech world, because Silicon Valley is nothing if not evangelical. But the transformation hasn’t yet materialized: The industry’s diversity numbers have barely budged, and many women say that while sexism has become somewhat less overt, it’s just as pernicious as ever. Even so, there may be reason for hope as companies begin to figure out what works—and what doesn’t.

When Silicon Valley was emerging, after World War II, software programming was considered rote and unglamorous, somewhat secretarial—and therefore suitable for women. The glittering future, it was thought, lay in hardware. But once software revealed its potential—and profitability—the guys flooded in and coding became a male realm.

The advent of the home computer may have hastened this shift. Early models like the Commodore 64 and the Apple IIc were often marketed as toys. According to Jane Margolis, a researcher at UCLA, families bought them and put them in their sons’ rooms, even when they had technologically inclined daughters. By the time the children of the ’80s and ’90s reached college, many of the boys already knew how to code. Fewer girls did.

But that was a long time ago. Consider where we are today. More than half of college and university students are women, and the percentage of women entering many STEM fields has risen. Computer science is a glaring exception: The percentage of female computer- and information-science majors peaked in 1984, at about 37 percent. It has declined, more or less steadily, ever since. Today it stands at 18 percent.

Claudia Goldin, a Harvard economist, told me that tech would seem to be an attractive field for women, since many companies promise the same advantages—flexibility and reasonable hours—that have drawn women in droves to other professions that were once nearly all male. The big tech companies also offer family-friendly perks like generous paid parental leave; new moms at Google, for instance, get 22 paid weeks. “These should be the best jobs for people who want predictability and flexibility,” Goldin said. “So what’s happening?”

A report by the Center for Talent Innovation found that when women drop out of tech, it’s usually not for family reasons. Nor do they drop out because they dislike the work—to the contrary, they enjoy it and in many cases take new jobs in sectors where they can use their technical skills. Rather, the report concludes that “workplace conditions, a lack of access to key creative roles, and a sense of feeling stalled in one’s career” are the main reasons women leave. “Undermining behavior from managers” is a major factor.

The hostility of the culture is such an open secret that tweets and essays complaining of sexism tend to begin with a disclaimer acknowledging how shopworn the subject feels. “My least favorite topic in the world is ‘Women in Tech,’ so I am going to make this short,” wrote one blogger, noting that after she started speaking at conferences and contributing to open-source projects, she began to get threatening and abusive emails, including from men who said they “jerked off to my conference talk video.” Another woman tweeted that, while waiting to make a presentation at Pubcon, a prestigious conference, she was told by a male attendee, “Don’t be nervous. You’re hot! No one expects you to do well.”

In the office, sexism typically takes a subtler form. The women I spoke with described a kind of gaslighting: They find themselves in enviably modern workspaces, surrounded by right-thinking colleagues and much talk of meritocracy, yet feel disparaged in ways that are hard to articulate, let alone prove.

Telle Whitney, the president and CEO of the Anita Borg Institute, a nonprofit that supports women in technology, says gender bias is a big problem in start-ups, which are frequently run by brotherhoods of young men—in many cases friends or roommates—straight out of elite colleges. In 2014, for instance, Snapchat CEO Evan Spiegel was two years out of Stanford and already leading a $10 billion company when his frat-boy-at-his-misogynistic-worst undergraduate emails were published and went viral. In them, his only slightly younger self joked about shooting lasers at “fat girls,” described a Stanford dean as “dean-julie-show-us-your-tits,” and for good measure, saluted another fraternity because it had decided to “stop being gay.”

But while start-ups may be the worst offenders, it’s notable how often the staid older companies also make missteps. Just last year, Microsoft hosted a party that featured “schoolgirl” dancers wearing short uniform-type skirts and bra tops, dancing on podiums. The event followed the Game Developers Conference in San Francisco—where, earlier that day, the company had sponsored a Women in Gaming Luncheon to promote a culture of inclusivity.

And then there are the public utterances that reveal what some leading men in tech think of women and their abilities. When Sir Michael Moritz, the chair of Sequoia Capital, one of Silicon Valley’s most venerable venture-capital firms, was asked by a Bloomberg reporter why the firm had no female investing partners in the U.S., he responded, “We look very hard,” adding that the firm had “hired a young woman from Stanford who’s every bit as good as her peers.” But, he added, “what we’re not prepared to do is to lower our standards.”

When Ellen Pao sued another prominent venture-capital firm, Kleiner Perkins Caufield & Byers, for gender discrimination, the 2015 trial sent a frisson through the tech world. Former Yahoo President Sue Decker wrote an essay for Recode, the tech-industry website, saying that she had been obsessively following the trial because it resonated so deeply with her. She took her daughters out of school to hear the closing arguments. “I, and most women I know, have been a party to at least some sexist or discriminatory behavior in the workplace,” she wrote, explaining that she and many other women had witnessed things like “locker-room discussion during travel with colleagues,” which they tried to brush aside, since “any individual act seems silly to complain about.” The Pao trial, however, shifted her attitude.

Pao lost the case, but the trial was a watershed. Afterward, a group of seven senior women in tech conducted the “Elephant in the Valley” survey. Eighty-four percent of the respondents had been told they were too aggressive; 66 percent had felt excluded from key networking opportunities because of their gender; 90 percent had witnessed sexist behavior at conferences and company off-site meetings; 88 percent had had clients and colleagues direct questions to male peers that should have been addressed to them; and 60 percent had fended off unwanted sexual advances (in most cases from a superior). Of those women, one-third said they had feared for their personal safety.

Pao went on to co-found Project Include with Blount, Wu, and others, including Tracy Chou. A software engineer who graduated from Stanford, Chou told me about working at a start-up where a co-founder would often muse that a man they’d just hired would turn out to be better and faster than she was. When Chou discovered a significant flaw in the company’s code and pointed it out, her engineering team dismissed her concerns, saying that they had been using the code for long enough that any problems would have been discovered. Chou persisted, saying she could demonstrate the conditions under which the bug was triggered. Finally, a male co-worker saw that she was right and raised the alarm, whereupon people in the office began to listen. Chou told her team that she knew how to fix the flaw; skeptical, they told her to have two other engineers review the changes and sign off on them, an unusual precaution. Her co-workers rationalized their scrutiny by explaining that the bug was important, and so was the fix.

Tracy Chou, co-founder, Project Include (Erik Tanner)

“I knew it was important,” she told me recently. “That’s why I was trying to flag it.”

For Chou, even the open-office floor plan was stressful: It meant there was no way to escape a male co-worker who liked to pop up behind her and find fault with her work. She was called “emotional” when she raised technical concerns and was expected to be nice and never complain, even as people around her made excuses for male engineers who were difficult to work with. The company’s one other female engineer felt the same way Chou did—as if they were held to a different standard. It wasn’t overt sexism; it was more like being dismissed and disrespected, “not feeling like we were good enough to be there—even though, objectively speaking, we were.”

Video: How Did Tech Become So Male Dominated?

That the tech industry would prove so hostile to women is more than a little counterintuitive. Silicon Valley is populated with progressive, hyper-educated people who talk a lot about making the world better. It’s also a young field, with none of the history of, say, law or medicine, where women were long denied spots in graduate schools intended for “breadwinning men.”

“We don’t have the same histories of exclusion,” says Joelle Emerson, the founder and CEO of Paradigm, a firm in San Francisco that advises companies on diversity and inclusion. But being new comes with its own problems: Because Silicon Valley is a place where a newcomer can unseat the most established player, many people there believe—despite evidence everywhere to the contrary—that tech is a meritocracy. Ironically enough, this very belief can perpetuate inequality. A 2010 study, “The Paradox of Meritocracy in Organizations,” found that in cultures that espouse meritocracy, managers may in fact “show greater bias in favor of men over equally performing women.” In a series of three experiments, the researchers presented participants with profiles of similarly performing individuals of both genders, and asked them to award bonuses. The researchers found that telling participants that their company valued merit-based decisions only increased the likelihood of their giving higher bonuses to the men.

Such bias may be particularly rife in Silicon Valley because of another of its foundational beliefs: that success in tech depends almost entirely on innate genius. Nobody thinks that of lawyers or accountants or even brain surgeons; while some people clearly have more aptitude than others, it’s accepted that law school is where you learn law and that preparing for and passing the CPA exam is how you become a certified accountant. Surgeons are trained, not born. In contrast, a 2015 study published in Science confirmed that computer science and certain other fields, including physics, math, and philosophy, fetishize “brilliance,” cultivating the idea that potential is inborn. The report concluded that these fields tend to be problematic for women, owing to a stubborn assumption that genius is a male trait.

The study authors considered several alternative explanations for the low numbers of women in those fields—including that women might not want to work long hours and that there might be more men at the high end of the aptitude spectrum, an idea notoriously put forward in 2005 by then–Harvard President Larry Summers.

But the data did not support these other theories.

“The more a field valued giftedness, the fewer the female PhDs,” the study found, pointing out that the same pattern held for African Americans. Because both groups still tend to be “stereotyped as lacking innate intellectual talent,” the study concluded, “the extent to which practitioners of a discipline believe that success depends on sheer brilliance is a strong predictor of women’s and African Americans’ representation.”

That may be why, for years, the tech industry’s gender disparity was considered almost a natural thing. When Tracy Chou was an intern at Google in 2007, she says, people would joke about the fact that the main Mountain View campus was populated mostly by male engineers, and that women tended to be relegated to other parts of the operation, such as marketing. But for all the joking, Chou says, it was strangely difficult to have a conversation about why that was, how women felt about it, and how it could be changed.

In October 2013, Chou attended the Grace Hopper conference, an annual gathering for women in computing, where Sheryl Sandberg, Facebook’s chief operating officer, warned that the number of women in tech was falling. Chou was startled. She realized that for such a data-driven industry, few reliable diversity statistics were available. That same month, she wrote a post on Medium in which she called on people to share data from their own companies, and she set up a spreadsheet where they could do so. “This thing that had been an open secret in Silicon Valley became open to everybody,” Chou told me.

At the time, some of the big tech firms were fighting a Freedom of Information Act request from the San Jose Mercury News asking the Department of Labor to release data on the makeup of their workforces. The companies contended that such statistics were a trade secret, and that exposing them would hurt their competitive edge. But Chou was not the only voice calling for transparency. Jesse Jackson and his Rainbow PUSH Coalition were advocating on behalf of both women and people of color, and activist investors began pressuring companies to reveal information about salaries and gender pay gaps.

In January 2015, in a keynote speech at the International Consumer Electronics Show, in Las Vegas, Brian Krzanich, the CEO of Intel, announced that his company would devote $300 million to diversity efforts over the next five years. Two months later, Apple pledged $50 million to partner with nonprofits that work to improve the pipeline of women and minorities going into tech, and that spring Google announced that it would increase its annual budget for promoting diversity from $115 million to $150 million. This past June, 33 companies signed a pledge to make their workforces more diverse.

According to Nancy Lee, Google’s vice president of people operations until she retired in February, the company saw both a business imperative—it is, after all, designing a global product—and a moral one. She points to the “original vision” of Google’s founders, which was that “we’re going to build this company for the long haul. We’re not going to be evil.” Google released detailed information on its workforce, and because “our numbers weren’t great,” Lee told me, other companies felt safe releasing theirs. Google wanted to disclose its data, she said, because “then we’re on the hook. There’s no turning back.”

Indeed. At Google, the initial tally showed that just 17 percent of its technical employees were women. The female technical force was 10 percent at Twitter, 15 percent at Facebook, and 20 percent at Apple. Granted, women currently make up just 18 percent of computer-science majors, but these companies are so well funded and attractive that they should be able to get a disproportionate percentage of the pipeline. The firms resolved to do better, and began looking for new ways to attract and retain women. Their approaches include measures like recruiting from a broader array of colleges and creating more internships. But the flashiest—and most copied—approach is something called unconscious-bias training.

Lately, unconscious-bias training has emerged as a ubiquitous fix for Silicon Valley’s diversity deficit. It’s diversity training for the new millennium, in which people are made aware of their own hidden biases. It rests on a large body of social-psychology research—hundreds of studies showing how women and minorities are stereotyped. Google turned to it, Lee told me, in part because the company felt that its engineers would appreciate an approach grounded in social science: “That sort of discipline really, really resonated effectively with the hard scientists we have here.” Facebook put unconscious-bias training front and center in its diversity efforts, too; both companies have posted online videos of their training modules, to offer a model for other workplaces. Since then, talk of unconscious bias has spread through Silicon Valley like—well, like a virus.

On a Thursday morning last summer, Joelle Emerson, the diversity consultant, visited a midsize start-up to give a talk on unconscious bias. Emerson knows employees don’t like being dragged to diversity-training sessions, so she strives to keep her presentations upbeat and funny and full of intriguing findings, much like a TED Talk. “We as individuals become smarter, better versions of ourselves when we are working on teams that are diverse,” she told the audience, pointing out that when you’re in a meeting with people who don’t share your background or demographic profile, you sit up a little straighter, intellectually. Expecting more pushback, you become more persuasive. “Our brains just function a little bit differently; we’re more vigilant, we’re more careful,” she said, citing a study that found diverse juries demonstrate better recall of courtroom proceedings. Her talk then segued—as many training sessions do—into what’s known as an implicit-association test.

An implicit-association test is a popular way to demonstrate how unconscious bias works. It was pioneered by Anthony G. Greenwald, a psychology professor at the University of Washington, in 1995. The idea is to have people very quickly sort words and concepts, revealing the implicit, or hidden, associations their brains make and the stereotypes that underlie them.

Joelle Emerson, founder and CEO, Paradigm (Jason Madara)

Emerson started by having everybody practice raising his or her right hand and saying “right,” then raising his or her left hand and saying “left.” “I know it feels condescending that I make you practice, but the goal here is to be as quick as you can,” she said winningly. The audience obeyed, and there was clapping and laughter.

Then she gave the test, flashing a series of words on a screen and having the audience members raise their left hand if the word referred to a male—son, say, or uncle—and their right if it referred to a female. She then flashed words pertaining to science (right hand) or liberal arts (left hand). Next she upped the ante: They had to raise their right hand if the word pertained to a male or to science, and their left hand if it was female- or liberal-arts-related. The audience accomplished this without much trouble. But then came the revelatory moment. “This time we’re going to swap the categories,” Emerson said, instructing the group to raise their left hand if a word was male- or liberal-arts-oriented, and their right hand for a female- or science-leaning term. A series of words flashed on the screen—chemistry, history, sister, son, English, grandpa, math, girl, physics, niece, boy—and the room devolved into chaos and chagrined laughter: People’s brains just wouldn’t go there. They couldn’t keep up.

Emerson explained that regardless of what order the tasks are presented in, about three-quarters of the people who take the test are slower to respond when asked to link women with science and men with liberal arts. She talked about her own first time taking a version of the test, but with the categories of family and work. “I thought, I’m going to nail this,” she said, but confessed that even with a working mother, a career, and years of immersion in gender research, she had a tendency to associate women with family and men with work. Unconscious bias, revealed.
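
In code, what the exercise exposes is simply a difference in response times between the two pairings. Here is a toy illustration in Python, with invented timings; the published test reports a standardized score rather than this raw gap, so treat it only as a sketch of the idea.

```python
# Invented response times for one hypothetical participant. The real IAT
# computes a standardized "D score"; the raw gap shown here just conveys
# the intuition behind Emerson's exercise.
from statistics import mean

def association_gap(congruent_ms, incongruent_ms):
    """Average slowdown, in milliseconds, when the pairings are swapped."""
    return mean(incongruent_ms) - mean(congruent_ms)

congruent = [642, 580, 611, 598, 655]    # male + science, female + liberal arts
incongruent = [801, 773, 840, 912, 768]  # male + liberal arts, female + science

print(f"Slowdown after the categories are swapped: "
      f"{association_gap(congruent, incongruent):.0f} ms")
```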

The idea that everyone holds biases and that there is nothing wrong with having them is a core tenet of the training. Presenters often point out that bias and stereotyping are a natural, evolutionary defense, a mechanism that goes back to our early human roots: When primitive man saw a snake, he didn’t have time to determine whether it was poisonous or harmless; his brain said Snake! and he reacted. Our brains today take in more than 11 million pieces of information at any given moment; because we can process only about 40 of those consciously, our nonconscious mind takes over, using biases and stereotypes and patterns to filter out the noise.

The message of these sessions is that snap judgments are usually biased. This is a problem in a field like tech, where hiring managers may have to fill hundreds of positions. Too many decisions are made on gut instinct, the training argues: A time-pressed hiring manager looks at a résumé and sees a certain fraternity or hobby, or a conventionally white or male name, and bang—thanks to the unconscious brain making shortcuts, that person gets an interview. People listen respectfully to that person, while others—women, people of color—are interrupted and scrutinized.

Shelley Correll, the faculty director of the Clayman Institute for Gender Research at Stanford, gave her first unconscious-bias talk, at Cornell University, in 2003, when, she says, the topic was mostly of interest to academic departments. Now, she says, demand has spiked as tech companies have adopted the training. “Virtually every company I know of is deploying unconscious-bias training,” says Telle Whitney of the Anita Borg Institute. “It’s a fast and feel-good kind of training that helps you feel like you’re making a difference.”

But there’s a problem. Unconscious-bias training may not work. Some think it could even backfire. Though the approach is much more congenial than the “sensitivity training” popular in the 1980s and ’90s—in which white men were usually cast as villains—it suffers from the same problem: People resent being made to sit in a chair and listen to somebody telling them how to act. Forcing them to do so can provoke the fundamental human urge to reply: No thanks, I’ll do the opposite.

Worse, repeatedly saying “I am biased and so are you” can make bias seem inescapable, even okay. People feel more accepting of their own bias, or throw their hands up, figuring that nothing can be done.

They may even become more biased. A 2015 study by Michelle M. Duguid of Cornell University and Melissa C. Thomas-Hunt of the University of Virginia demonstrates the peril of normalizing bad behavior. Stigmatizing certain behaviors, such as littering and alcohol abuse, makes people realize they are acting outside the norm and has proved to be a powerful way of changing these behaviors. Conversely, messages presenting good behavior as a social norm—“the majority of guests reuse their towels”—can make people embrace this behavior.

So what happens when you say that bias is natural and dwells within all of us? Duguid and Thomas-Hunt found that telling participants that many people hold stereotypes made them more likely to exhibit bias—in the case of the study, against women, overweight people, or the elderly. The researchers also suggest, provocatively, that even just talking too much about gender inequities can serve to normalize them: When you say over and over that women come up against a glass ceiling, people begin to accept that, yes, women come up against a glass ceiling—and that’s just the way it is.

I talked about all these issues with Maxine Williams, the global director of diversity at Facebook, who conducts part of the company’s online training module. Williams is originally from Trinidad and Tobago; in the module, she mentions a study that found that dark-skinned people of color are seen by white job interviewers as less smart than light-skinned people of color. She told me she finds such studies hard to talk about, and had to force herself to do so.

At Facebook, she says, “managing bias” sessions are “suggested,” not mandated, which she hopes cuts down on any resentment. The goal is to create a culture where, even if you opt out of training, you can’t avoid the lessons, because managers come around talking about bias, and people are encouraged to call out colleagues in meetings when, say, they interrupt someone. “Have you interrupted an interrupter recently?,” Williams likes to ask audiences. She believes that talking about the pervasiveness of bias serves to disabuse people of the meritocracy fallacy.

She also told me that if you are going to be serious about bias training, you have to create a workplace where people feel safe giving voice to their own biases—where they can admit to thinking that men are better at math, for instance, or that new moms are less committed to their work—a perilous task, she acknowledges. “Once you start going down that road and saying to people, ‘Be open!,’ all sorts of things are going to come out,” Williams said. “We’re going to have to go through this mud together. It means you have to be forgiving as well.” She added that it’s necessary to assume that people, no matter what bias they are confessing, are well intentioned. “Presuming good intent” is crucial.

When I mentioned this conversation to Bethanye Blount, who is a former Facebook employee (and thinks it’s a great place to work), she laughed at the “presuming good intent” part. “They’re catering to the engineers,” Blount said—engineers constituting a coveted and often sensitive cohort who like to think of themselves as “special snowflakes” and whom Facebook is smart to handle with care. One of the unspoken advantages of unconscious-bias training is that in an environment where companies are competing for talent, it promises to help attract talented women without scaring away talented men.

Bo Ren, product manager, Tumblr (Erik Tanner)

I also talked with Bo Ren, a former Facebook employee who’s now a product manager at Tumblr. Ren said the atmosphere at Facebook was tranquil and feel-good on the surface, but—as in all workplaces—there were power dynamics underneath. To succeed anywhere in Silicon Valley, she said, you need to have social credibility, to be able to bring people around to your point of view and get them on board with a new product or solution—to be able to “socialize” your ideas. “You would think all things are equal,” she said, “but these backdoor conversations are happening in settings that women are not invited to. The whole boys’-club thing still applies. If you party with the right people at Burning Man, you’re going to be part of this boys’ club.” As for calling people out in meetings, it sounds like a good idea, she said, but she never saw anyone do it. “It’s just—are you really going to be that person?”

Of late, the problems with unconscious-bias training have become more widely known. None other than Anthony Greenwald, the inventor of the implicit-association test, has expressed his doubts. “Understanding implicit bias does not actually provide you with the tools to do something about it,” he told Forbes. Kara Swisher, a co-founder of Recode, has said that talk about unconscious-bias training is “exhausting to listen to,” and an excuse for not trying hard enough. One tech executive, Mike Eynon, wrote in a Medium post that bias training makes “us white guys feel better” and lets the “privileged realize everyone has bias and they aren’t at fault,” while nothing changes for discriminated groups.

In 2016, Google reported incremental improvements: 31 percent of its overall workforce is now female, up one percentage point over the previous year. Nineteen percent of technical roles are held by women, also up a percentage point. At Facebook, women’s overall representation went up from 32 percent to 33 percent. In technical roles, women’s representation also increased a single percentage point, from 16 percent to 17 percent.

Telle Whitney points out that for a large workforce like Google’s, a one-percentage-point rise is not peanuts. But while the companies’ commitment seems genuine, the slow pace of change underscores how far they have to go. If they want to truly transform, they may need to take more-drastic measures.

Anti-Bias Apps

A wealth of apps and software platforms now exists to circumvent unconscious bias. Here are a few of the offerings:

Textio uses data and machine learning to scan job postings and flag phrases that are likely to repel women. Some are obvious: rock star, Ping-Pong, Nerf gun. But Kieran Snyder, Textio’s co-founder, says that other words can exhibit a subtler masculine bias. Examples include language that is what she calls “turned all the way up”: phrases like hard-driving and crush it as well as superlatives like flawless, relentless, and extremely. The software suggests gender-neutral alternatives.

GapJumpers hides résumés and other identifying information, including gender, until job applicants perform a test devised to assess their skills. It’s an attempt to duplicate one of the most renowned studies in the gender-bias genre: In 2000, Claudia Goldin and Cecilia Rouse showed that when major U.S. orchestras allowed musicians to audition behind a screen that hid their gender, the percentage of women selected rose dramatically. They demonstrated that when people are assessed on pure ability, women are much more likely to make the cut.

Blendoor is “Tinder for recruiting,” as its founder, Stephanie Lampkin, calls it. The app lets job candidates and recruiters check each other out: Candidates can see how a company rates on diversity; recruiters can see a person’s skills, education, and work history, but not his or her race, age, and gender.

Interviewing.io offers a free platform that lets engineers do mock technical interviews, giving women (and anyone else who might feel out of place) a chance to practice. It also has software that companies can use to mask applicants’ voices during actual interviews.

Unitive is based on the philosophy of “nudges,” or small changes that have a big effect. It guides managers through the hiring process, finding ways to prevent them from acting on bias. Names and gender are masked during résumé evaluation, for instance, and during interviews the software guides the managers through questions designed to evaluate relevant skills.

Lately, a new fix has emerged. Trying to change people’s unconscious attitudes is messy and complicated. But if you can’t easily dispel bias, what you can do is engineer a set of structural changes that prevent people from acting on it. Joelle Emerson talks about this a lot in her presentations, and works with companies to embed the insights of anti-bias training into hiring and promotion processes. One way to head off bias in hiring is to make sure that the job interviewer writes down a defined skill set beforehand, asks every applicant the same questions, and assesses the quality of answers according to a rubric, rather than simply saying, after the fact, “I really liked that person who went to the same school I did and likes ice hockey just as much as I do.”
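
To make that concrete, here is a purely hypothetical rubric in Python; the skills, questions, and weights are inventions for illustration, not any company’s actual process. The criteria are fixed before the first interview, and every candidate is scored the same way.

```python
# A hypothetical structured-interview rubric: the skills, the questions, and
# the weights are all decided before anyone is interviewed.
RUBRIC = {
    # skill: (question asked of every applicant, weight)
    "system design": ("How would you scale this service tenfold?", 0.4),
    "code quality": ("What do you look for when reviewing code?", 0.3),
    "collaboration": ("Describe a technical disagreement and how it was resolved.", 0.3),
}

def score(ratings):
    """ratings: skill -> the interviewer's 1-5 rating against the shared rubric."""
    return sum(weight * ratings[skill] for skill, (_, weight) in RUBRIC.items())

# Two hypothetical candidates, rated on the same questions with the same rubric.
print(round(score({"system design": 4, "code quality": 5, "collaboration": 3}), 2))  # 4.0
print(round(score({"system design": 3, "code quality": 4, "collaboration": 5}), 2))  # 3.9
```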

Google has been a proponent of such changes. In his 2015 book, Work Rules!, Laszlo Bock, who was the company’s senior vice president of people operations until last summer, cited a study from the University of Toledo that found that the first 20 seconds often predict the outcome of a 20-minute interview. The problem, he wrote, is that such quick impressions are meaningless. He added that Google strongly encourages interviewers to use a combination of skill assessments and standard questions rather than relying on subjective impressions.

Other experts say that what companies need is an anti-bias checklist. The idea is spreading—Pinterest, for one, has worked with Emerson to develop a six-point checklist that includes measures such as reserving plenty of time for evaluating an employee’s performance, to counteract cognitive shortcuts that can introduce bias. But it’s early days: At Emerson’s talk on unconscious bias last summer, someone in the audience asked her which Silicon Valley companies are managing bias well. “No one,” she said, “because the idea of embedding it into organizational design is pretty new.”

This being Silicon Valley, new companies have already cropped up to digitize the checklist idea, offering tech solutions to tech’s gender problem: software that masks an applicant’s gender, or that guides hiring managers through a more objective evaluation process. (See the “Anti-Bias Apps” sidebar above.)
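
The masking half of that idea is simple enough to sketch. Here is a toy Python version, with hypothetical field names rather than any vendor’s real schema: identifying details are dropped before a reviewer ever sees the profile.

```python
# Hypothetical field names; real products choose this list far more carefully.
MASKED_FIELDS = {"name", "photo_url", "gender", "age", "graduation_year"}

def mask(profile: dict) -> dict:
    """Return a copy of a candidate profile with identifying fields removed."""
    return {field: value for field, value in profile.items()
            if field not in MASKED_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "F",
    "age": 29,
    "skills": ["Python", "distributed systems"],
    "years_experience": 7,
    "work_sample_score": 92,
}

print(mask(candidate))
# {'skills': ['Python', 'distributed systems'], 'years_experience': 7, 'work_sample_score': 92}
```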

Even when they work, however, these bias interventions get you only so far. Diversity consultants and advocacy groups say they remain frustrated by tech companies’ unwillingness to change core parts of their culture. It is, for example, a hallowed tradition that in job interviews, engineers are expected to stand up and code on whiteboards, a high-pressure situation that works to the disadvantage of those who feel out of place. Indeed, whiteboard sessions are rife with opportunities for biased judgment. At Stanford, Shelley Correll works with a graduate student who, for his dissertation, sat in on a whiteboarding session in which a problem had an error in it; when one female job candidate sensed this and kept asking questions, evaluators felt that all her questions suggested she wasn’t competent.

“Until we see changes in the way we work, I don’t think we’re going to crack this nut,” Correll says. “I worked with one company that insisted that the best way for good ideas to emerge was to have people on teams screaming their ideas at each other. When you watch these teams work, they literally scream at each other and call each other names. They believe this dynamic is essential to scientific discovery—absolutely essential. I said, ‘Could you at least say you disagree with someone without saying you think they are an idiot?’ ”

There’s a term for the screaming-and-name-calling approach to scientific discovery. It’s called “constructive confrontation,” and it was pioneered by the company that helped give Silicon Valley its name. That would be Intel, maker of the silicon chip. Intel came into existence in a postwar America in which corporate offices were male as far as the eye could see. It and other early tech companies “were founded exclusively by men, and for better or worse they just had a male sensibility,” says Telle Whitney. As the former Intel CEO Andrew Grove put it in his book Only the Paranoid Survive: “From all the early bickering, we developed a style of ferociously arguing with one another while still remaining friends.”

Now, of course, the talk is of inclusion, not confrontation. And I was surprised to hear Intel—old-fashioned Intel—mentioned as one of the companies successfully innovating around gender. It had been releasing diversity numbers since 2000, though not with as much fanfare as some of its peers, and without much improvement. But in the past couple of years, Intel decided to try a few other approaches, including hiring quotas.

Well, not quotas. You can’t say quotas. At least not in the United States. In some European countries, like Norway, real, actual quotas—for example, a rule saying that 40 percent of a public company’s board members must be female—have worked well; qualified women have been found and the Earth has continued turning. However, in the U.S., hiring quotas are illegal. “We never use the word quota at Intel,” says Danielle Brown, the company’s chief diversity and inclusion officer. Rather, Intel set extremely firm hiring goals. For 2015, it wanted 40 percent of hires to be female or underrepresented minorities.

Now, it’s true that lots of companies have hiring goals. But to make its goals a little more, well, quota-like, Intel introduced money into the equation. In Intel’s annual performance-bonus plan, success in meeting diversity goals factors into whether the company gives employees an across-the-board bonus. (The amounts vary widely but can be substantial.) If diversity efforts succeed, everybody at the company gets a little bit richer.

Granted, Intel has further to go than some other companies, in part because most of its workforce is technical, unlike newer social-media companies. And with about 100,000 employees worldwide and decades of entrenched culture, it’s a slow and hulking ship to turn around.

But since it began linking bonuses to diversity hiring, Intel has met or exceeded its goals. In 2015, 43 percent of new hires were women and underrepresented minorities, three percentage points above its target. Last year, it upped its goal to 45 percent of new hires, and met it. These changes weren’t just happening at the entry level: 40 percent of new vice presidents were women and underrepresented minorities. Intel’s U.S. workforce in 2014 was just 23.5 percent female. By the middle of last year, the percentage had risen nearly two points, to 25.4 percent.

Intel has also introduced efforts to improve retention, including a “warm line” employees can use to report a problem—feeling stuck in their career, or a conflict with a manager—and have someone look into it. A new initiative will take data from the warm line and from employee exit interviews to give managers customized playbooks. If a group is losing lots of women, for instance, the manager will get data on why they’re leaving and how to address the issue.

Intel isn’t perfect—its $300 million pledge for diversity efforts was seen by some as an effort to rehabilitate its image after the company got caught up in Gamergate, a complex scandal involving much gender-related ugliness. And women who have worked there say Intel’s not immune to the sexism that plagues the industry. But I was struck by how many people talk about the company’s genuine commitment.

Elizabeth Land, who worked at Intel for 18 years before leaving in 2015, says the hiring goals did foster some resentment among men. Still, she wishes more companies would adopt a similar approach, to force hiring managers to look beyond their immediate networks. “If you’re willing to spend the effort and the time to find the right senior-level females, you can.”

Shelley Correll, faculty director, Clayman Institute for Gender Research (Jason Madara)

Shelley Correll agrees. “Tying bonuses to diversity outcomes signals that diversity is something the company cares about and thinks is important,” she says. “Managers will take it seriously.” In fact, she points out, the idea has history: PepsiCo did something similar starting in the early 2000s. When, in the second year, the company didn’t meet its goal of 50 percent diversity hires, executive bonuses suffered. But eventually the company’s workforce did become more diverse. From 2001 to 2006, the representation of women and minorities among executives increased from 34 percent to 45 percent.

There are other reasons for hope: Venture-capital firms have formed specifically to invest in start-ups run by women, and certain colleges—notably Carnegie Mellon, Stanford, and Harvey Mudd—have dramatically increased the number of female students in their computer-science programs.

Perhaps most encouraging is that as new companies come along, some of them are preemptively adopting the lessons that places like Intel and Google have already learned. Among these is Slack, the group-messaging company, which is widely praised for having made diversity a priority from early on, rather than having to go back and try to reengineer it in. Last year, when Slack received the TechCrunch award for Fastest Rising Startup, the company sent four black female software engineers—rather than the CEO, Stewart Butterfield (who’s white)—onstage to accept the award. “We’re engineers,” one of the women, Kiné Camara, said, meaningfully. From September 2015 to February 2016, as Slack grew, its technical workforce went from 18 percent to 24 percent female. However slowly, the industry seems to be changing its mind about innate talent and where genius comes from.


Rise of the Robolawyers


Near the end of Shakespeare’s Henry VI, Part 2, Dick the Butcher offers a simple plan to create chaos and help his band of outsiders ascend to the throne: “Let’s kill all the lawyers.” Though far from the Bard’s most beautiful turn of phrase, it is nonetheless one of his most enduring. All these years later, the law is still America’s most hated profession and one of the least trusted, whether you go by scientific studies or informal opinion polls.

Thankfully, no one’s out there systematically murdering lawyers. But advances in artificial intelligence may diminish their role in the legal system or even, in some cases, replace them altogether. Here’s what we stand to gain—and what we should fear—from these technologies.

1 | Handicapping Lawsuits

For years, artificial intelligence has been automating tasks—like combing through mountains of legal documents and highlighting keywords—that were once rites of passage for junior attorneys. The bots may soon function as quasi-employees. In the past year, more than 10 major law firms have “hired” Ross, a robotic attorney powered in part by IBM’s Watson artificial intelligence, to perform legal research. Ross is designed to approximate the experience of working with a human lawyer: It can understand questions asked in normal English and provide specific, analytic answers.

Beyond helping prepare cases, AI could also predict how they’ll hold up in court. Lex Machina, a company owned by LexisNexis, offers what it calls “moneyball lawyering.” It applies natural-language processing to millions of court decisions to find trends that can be used to a law firm’s advantage. For instance, the software can determine which judges tend to favor plaintiffs, summarize the legal strategies of opposing lawyers based on their case histories, and determine the arguments most likely to convince specific judges. A Miami-based company called Premonition goes one step further and promises to predict the winner of a case before it even goes to court, based on statistical analyses of verdicts in similar cases. “Which attorneys win before which judges? Premonition knows,” the company says.
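
None of these companies publish their models, but the kind of statistic behind a claim like “which judges favor plaintiffs” is, at bottom, an aggregation over case records. A toy Python version with invented data (nothing like Lex Machina’s or Premonition’s actual systems) looks like this:

```python
from collections import defaultdict

# Invented case records; a real system would extract these from millions of
# court documents with natural-language processing.
cases = [
    {"judge": "Judge A", "winner": "plaintiff"},
    {"judge": "Judge A", "winner": "defendant"},
    {"judge": "Judge A", "winner": "plaintiff"},
    {"judge": "Judge B", "winner": "defendant"},
    {"judge": "Judge B", "winner": "defendant"},
]

tallies = defaultdict(lambda: {"plaintiff_wins": 0, "total": 0})
for case in cases:
    tally = tallies[case["judge"]]
    tally["total"] += 1
    if case["winner"] == "plaintiff":
        tally["plaintiff_wins"] += 1

for judge, tally in tallies.items():
    rate = tally["plaintiff_wins"] / tally["total"]
    print(f"{judge}: plaintiffs win {rate:.0%} of {tally['total']} cases")
```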

If you can predict the winners and losers of court cases, why not bet on them? A Silicon Valley start-up called Legalist offers “commercial litigation financing,” meaning it will pay a lawsuit’s fees and expenses if its algorithm determines that you have a good chance of winning, in exchange for a portion of any judgment in your favor. Critics fear that AI will be used to game the legal system by third-party investors hoping to make a buck.
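
The funding decision itself comes down to an expected-value check. A sketch with invented numbers, and nothing resembling Legalist’s actual underwriting:

```python
def worth_funding(win_probability, expected_judgment, funder_share, expected_fees):
    """Fund the suit only if the expected return on the funder's share beats the fees."""
    expected_return = win_probability * expected_judgment * funder_share
    return expected_return > expected_fees

# Invented numbers: a 70% predicted chance of a $200,000 judgment, a 30% share
# for the funder, and $25,000 in anticipated fees and expenses.
print(worth_funding(0.70, 200_000, 0.30, 25_000))  # True: $42,000 expected vs. $25,000 at risk
```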

2 | Chatbot Lawyers

Technologies like Ross and Lex Machina are intended to assist lawyers, but AI has also begun to replace them—at least in very straightforward areas of law. The most successful robolawyer yet was developed by a British teenager named Joshua Browder. Called DoNotPay, it’s a free parking-ticket-fighting chatbot that asks a series of questions about your case—Were the signs clearly marked? Were you parked illegally because of a medical emergency?—and generates a letter that can be filed with the appropriate agency. So far, the bot has helped more than 215,000 people beat traffic and parking tickets in London, New York, and Seattle. Browder recently added new functions—DoNotPay can now help people demand compensation from airlines for delayed flights and file paperwork for government housing assistance—and more are on the way.
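
At heart, a bot like this is a short questionnaire wired to a letter template. A stripped-down Python sketch, with invented questions and wording rather than DoNotPay’s actual logic, shows the shape of it:

```python
# Each question the bot asks maps to a possible ground for appeal.
GROUNDS = {
    "Were the signs unclear, missing, or contradictory?": "the restriction was not clearly signposted",
    "Did a medical emergency force you to stop?": "the stop was necessitated by a medical emergency",
}

def draft_appeal(answers, ticket_id):
    """answers: question -> True/False, as collected by the chatbot."""
    reasons = [ground for question, ground in GROUNDS.items() if answers.get(question)]
    if not reasons:
        return "No grounds for appeal identified; consider paying the ticket."
    return (f"To whom it may concern: I am contesting ticket {ticket_id} because "
            + " and ".join(reasons) + ". I respectfully request that it be dismissed.")

print(draft_appeal({"Were the signs unclear, missing, or contradictory?": True}, "NYC-12345"))
```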


DoNotPay is just the beginning. Until we see a major, society-changing breakthrough in artificial intelligence, robolawyers won’t dispute the finer points of copyright law or write elegant legal briefs. But chatbots could be very useful in certain types of law. Deportation, bankruptcy, and divorce disputes, for instance, typically require navigating lengthy and confusing statutes that have been interpreted in thousands of previous decisions. Chatbots could eventually analyze most every possible exception, loophole, and historical case to determine the best path forward.

As AI develops, robolawyers could help address the vast unmet legal needs of the poor. Roland Vogl, the executive director of the Stanford Program in Law, Science, and Technology, says bots will become the main entry point into the legal system. “Every legal-aid group has to turn people away because there isn’t time to process all of the cases,” he says. “We’ll see cases that get navigated through an artificially intelligent computer system, and lawyers will only get involved when it’s really necessary.” A good analogy is TurboTax: If your taxes are straightforward, you use TurboTax; if they’re not, you get an accountant. The same will happen with law.

3 | Minority Report

We’ll probably never see a court-appointed robolawyer for a criminal case, but algorithms are changing how judges mete out punishments. In many states, judges use software called COMPAS to help with setting bail and deciding whether to grant parole. The software uses information from a survey with more than 100 questions—covering things like a defendant’s gender, age, criminal history, and personal relationships—to predict whether he or she is a flight risk or likely to re-offend. The use of such software is troubling: Northpointe, the company that created COMPAS, won’t make its algorithm public, which means defense attorneys can’t bring informed challenges against judges’ decisions. And a study by ProPublica found that COMPAS appears to have a strong bias against black defendants.
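Because Northpointe keeps its algorithm secret, any illustration is necessarily hypothetical. Still, a toy weighted-questionnaire score shows how survey answers get collapsed into a single risk number, and why hidden weights are so hard to challenge in court:

```python
# Purely illustrative: COMPAS's model is proprietary, so these factors
# and weights are invented. The point is that a questionnaire becomes a
# single opaque number, which is what defense attorneys cannot inspect.
HYPOTHETICAL_WEIGHTS = {
    "prior_arrests": 2.0,
    "age_under_25": 1.5,
    "unstable_housing": 1.0,
}

def risk_score(answers):
    """Weighted sum of questionnaire answers (higher means 'riskier')."""
    return sum(HYPOTHETICAL_WEIGHTS[key] * value for key, value in answers.items())

defendant = {"prior_arrests": 3, "age_under_25": 1, "unstable_housing": 0}
print(risk_score(defendant))  # 7.5
```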

Forecasting crime based on questionnaires could come to seem quaint. Criminologists are intrigued by the possibility of using genetics to predict criminal behavior, though even studying the subject presents ethical dilemmas. Meanwhile, brain scans are already being used in court to determine which violent criminals are likely to re-offend. We may be headed toward a future when our bodies alone can be used against us in the criminal-justice system—even before we fully understand the biases that could be hiding in these technologies.

4 | An Explosion of Lawsuits

Eventually, we may not need lawyers, judges, or even courtrooms to settle civil disputes. Ronald Collins, a professor at the University of Washington School of Law, has outlined a system for landlord–tenant disagreements. Because in many instances the facts are uncontested—whether you paid your rent on time, whether your landlord fixed the thermostat—and the legal codes are well defined, a good number of cases can be filed, tried, and adjudicated by software. Using an app or a chatbot, each party would complete a questionnaire about the facts of the case and submit digital evidence.

“Rather than hiring a lawyer and having your case sit on a docket for five weeks, you can have an email of adjudication in five minutes,” Collins told me. He believes the execution of wills, contracts, and divorces could likely be automated without significantly changing the outcome in the majority of cases.

There is a possible downside to lowering barriers to legal services, however: a future in which litigious types can dash off a few lawsuits while standing in line for a latte. Paul Ford, a programmer and writer, explores this idea of “nanolaw” in a short science-fiction story published on his website—lawsuits become a daily annoyance, popping up on your phone to be litigated with a few swipes of the finger.

Or we might see a completely automated and ever-present legal system that runs on sensors and pre-agreed-upon contracts. A company called Clause is creating “intelligent contracts” that can detect when a set of prearranged conditions is met (or broken). Though Clause deals primarily with industrial clients, other companies could soon bring the technology to consumers. For example, if you agree with your landlord to keep the temperature in your house between 68 and 72 degrees and you crank the thermostat to 74, an intelligent contract might automatically deduct a penalty from your bank account.
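A toy version of that thermostat clause, with invented thresholds and penalty amounts, shows how little logic such a contract needs once the sensor data is trusted (Clause's actual product is aimed at industrial clients and is far more elaborate):

```python
# A toy version of the article's thermostat clause. The temperature
# range, penalty amount, and readings are all invented.
AGREED_MIN, AGREED_MAX = 68, 72  # degrees Fahrenheit
PENALTY = 25                     # hypothetical dollars per violation

def settle(readings, balance):
    """Deduct a penalty for every reading outside the agreed range."""
    for temperature in readings:
        if not (AGREED_MIN <= temperature <= AGREED_MAX):
            balance -= PENALTY
    return balance

print(settle([70, 71, 74, 69], balance=500))  # 475: one violation detected
```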

Experts say these contracts will increase in complexity. Perhaps one day, self-driving-car accident disputes will be resolved with checks of the vehicle’s logs and programming. Your grievance against the local pizza joint’s guarantee of a hot delivery in 10 minutes will be checked by a GPS sensor and a smart thermometer. Divorce papers will be prepared when your iPhone detects, through location tracking and text-message scanning, that you’ve been unfaithful. Your will could be executed as soon as your Fitbit detects that you’re dead.

Hey, anything to avoid talking to a lawyer.

Alvaro Dominguez

What Happens If a Nuclear Bomb Goes Off in Manhattan?


On a quiet afternoon, two medium-sized nuclear blasts level portions of Manhattan.

If this were a movie, hordes of panicked New Yorkers would pour out into the streets, running around and calling out for their loved ones. But reality doesn’t usually line up with Hollywood’s vision of a disaster scene, says William Kennedy, a professor in the Center for Social Complexity at George Mason University. Instead, he expects people would stay in place, follow instructions, and tend to the injured nearby.

To come up with a picture of what would really happen, Kennedy and Andrew Crooks, another researcher at the center, are working with a pair of Ph.D. candidates to study the immediate social aftermath of a nuclear blast in an American megacity.

The Center for Social Complexity was awarded a grant worth more than $450,000 last May to develop a computer model that simulates how as many as 20 million individuals would react in the first 30 days after a nuclear attack in New York City. The grant, which came from the nuclear-focused Defense Threat Reduction Agency, or DTRA, will fund a three-year project. In the simulation, individual “agents” will make decisions and move about the area based on their needs, their surroundings, and their social networks.

I spoke with Kennedy about his progress, and the challenges of simulating the aftermath of a disaster in one of the world’s biggest cities. A transcript of our conversation, lightly edited for concision and clarity, follows.


Waddell: How will your computer model be able to accurately simulate people’s responses?

Kennedy: First, we’re doing basic research to try and identify how we expect people to respond, and how the environment and infrastructure and facilities would respond. When we get verbal descriptions that we are comfortable with, we will represent them more precisely as computer programs. We’ll start with the environment, the weapon and its effects, and then move on to the people, the infrastructure, and their response.

We’ve done other models of similar-sized areas, modeling natural disasters and things like that. So we have some infrastructure to support us. We’re using the MASON framework: It’s open source—our computer-science department distributes it and maintains it—and we’ve used it in several projects here.

We’ll be bringing in geographical information on New York City and the surrounding area, and we’ll model a small nuclear weapon—or possibly multiple small nuclear weapons—going off. They’ll be in the neighborhood of 5 to 10 kilotons: That’s half of Nagasaki/Hiroshima, which was in the 20-kiloton range. The Oklahoma City bomber used 5,000 pounds of TNT, so that’s two and a half tons. That destroyed most of the one building where it blew up, but it affected something like 16 city blocks.

Waddell: So we’re talking damage to something like a chunk of Manhattan?

Kennedy: Yes, something like that. The number 10 isn’t driven by any particular intelligence, but to put it into perspective, North Korea has done tests in the neighborhood of two kilotons—or maybe as many as five. So we’re talking a relatively small, though still nuclear, weapon.

Waddell: What are the social responses you’ll look at?

Kennedy: We’re planning to model at the individual level. A megacity is more than 10 million, and in the region we’re talking about, we’ll potentially get to 20 million agents.

We’ve found that people seem to be reasonably well behaved and do what they’ve been trained to, or are asked or told to do by local authorities. Reports from 9/11 show that people walked down many tens of flights of stairs, relatively quietly, sometimes carrying each other, to escape buildings.

We’re finding those kinds of reports from other disasters as well—except after Hurricane Katrina. There, we have reports that people already didn’t trust the government, and then with the isolation resulting from the flooding, they were actually shooting at people trying to help.

Waddell: So is the difference between the two disasters trust and communication?

Kennedy: I suspect that’s a large part of it, yes.

Waddell: Do you mainly build the verbal models using interviews and reports?

Kennedy: We’re reading studies about disasters, and we’re looking back to events like the Halifax munitions explosion of 100 years ago—that was in the kiloton range, and followed immediately by a blizzard—and natural disasters like earthquakes, flooding, and hurricanes.

We’re going to have millions of agents, each with characteristics like where the agent lives, where it works, if it’s part of a family, where the other members of the family are. That’s the first network that people respond to. But they’re also closely linked to the people they’re working with, or the people who are a part of their new family when they become isolated: the other people going down the stairs at the same time. That community now has a common experience.

So we’re going to model individuals responding to the immediate situation around them. They’re trying to leave the area, find food, water, and shelter: basic Maslow-like necessities.

DTRA wanted us to look at the reaction, not the recovery. They want to limit it to the first 30 days. Emergency responders will try to respond within minutes, so there will be some response. But no recovery, in the sense that infrastructure and business will start reestablishing normal behaviors.

Waddell: So the players will include individuals and rescue agents—who else is involved in the model?

Kennedy: When you broaden it more than just the collection of city blocks that are affected, you get into other infrastructure like police departments—not just fire and rescue that respond immediately, but others in the area, local governments, school systems, the utilities that provide food, water, clothing, shelter, etc. It’s a significant undertaking.

Where we are is that we have done the basic literature research on how people respond to this kind of a disaster, and we are starting now to collect the geographic information system data—GIS data—on the New York area: the road systems, subway, bus routes, bridges, and things like that.

It’s frustrating us a little bit that the publicly available data is not very clean. We’ve found lots of road segments that aren’t connected. We can’t just import somebody else’s map of New York and the surrounding areas and have our agents fleeing the area, so we’re spending some effort in the last several weeks trying to collect and clean up that data so that we can actually use it.

Waddell: What are the goals that each individual agent will be balancing? Safety, hunger, family and friends, getting out of the area—how will the model treat those needs?

Kennedy: One of the aspects we’ll be modeling is the individual agents’ social networks. Communications with those people, and confirmation of their status, seems to be one of the first urgencies that people feel, after their immediate survival of the event.

Part of our modeling challenge is going to be figuring out if a parent would go through a contaminated area to retrieve a child at a daycare or school, putting themselves at risk in the process, because it’s important to them to physically be there with their children. Or do they realize that they’re isolated, that communications aren’t going to be available in the near term, and they only deal with their local folks who are now their family? That’s the sense we get.

For a small nuclear weapon, especially a ground burst rather than an air burst, communications will not be as affected as they might be with an airborne electromagnetic pulse weapon. So communications may be available in the not-terribly-distant future from the initial event.

Waddell: So your hypothesis is that a parent will tend to the people immediately around them, in hopes that better communication will allow them to get in touch with family later?

Kennedy: Yes. And that changes what we expect our model will show from the Hollywood version of a disaster—people running down the streets. For disasters where people are injured, like a nuclear blast, as opposed to threatened with injury, we expect that everybody won’t exit the area en masse, because there will be people who need immediate help.

Waddell: And all of this intelligence is coming from research and reports from previous disasters?

Kennedy: Yes. Computational social science is not experimental. We don’t terrorize people and see how they behave.

Waddell: How do you take these insights from research and build them into a model, so that your agents mimic reality?

Kennedy: Through code that implements decision trees, or needs that need to be fulfilled, for the individual agents in the models themselves. To give you an example from a previous model: We modeled herders and farmers in East Africa for the Office of Naval Research. We developed models of household units that had to make a living. They would make the living based on the terrain fertility, the water availability, and the weather in the local area. So we had a basic household unit that had these capabilities and then its behavior was, in a sense, driven by the environment that it’s in.

Here, the environment is a lot more hostile, and we’re going all the way down to the individual level rather than the household level.
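The project's own code is written for the Java-based MASON toolkit, but the needs-driven loop Kennedy describes can be sketched in a few lines of Python. Everything below, from the agent attributes to the priorities, is a hypothetical illustration rather than the team's model:

```python
import random

# Hypothetical needs-driven agents, not the project's MASON/Java code:
# each agent checks its unmet needs in priority order at every step.
class Agent:
    def __init__(self, name):
        self.name = name
        self.injured = random.random() < 0.2
        self.has_water = False
        self.with_family = False

    def step(self):
        """Choose one action per time step based on the most urgent need."""
        if self.injured:
            return f"{self.name}: seek medical help"
        if not self.has_water:
            self.has_water = True
            return f"{self.name}: search for water"
        if not self.with_family:
            return f"{self.name}: try to contact family"
        return f"{self.name}: shelter in place"

agents = [Agent(f"agent-{i}") for i in range(3)]
for minute in range(0, 15, 5):  # five-minute increments, as in the interview
    print(minute, [agent.step() for agent in agents])
```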

Waddell: Can you also model psychological effects, like terror?

Kennedy: That certainly does affect how people behave. Some people will be frozen and unable to function as a result of terror, as well as their injuries and the environment around them. We will be modeling those effects. But we’re not, per se, modeling the internal states of those individuals. We’re primarily modeling their behavior.

Waddell: But to some extent, don’t you need to understand essentially what a person is feeling to try and guess what they’re going to do?

Kennedy: We will be modeling people very carefully. The challenge is how precisely we can do that.

I sometimes work on modeling individuals using a cognitive model that deals with memory and perception and actions at the near-millisecond level. Here, we probably don’t need a research-level cognitive model of every individual at the millisecond level. We are anticipating modeling people in five-minute increments for the first several hours, and then expanding those steps, so that we talk about their actions in 15-minute intervals. That’s driven partly by the number of people, and the duration of the study.

Waddell: With so many agents and such a long period of time, how much computation power will this require?

Kennedy: It’s a lot! We do have significant university resources: We have a couple of clusters of computer systems, which we will probably tax significantly. We’re going to start small and find out how much we need to tax them. We may be enlarging those facilities to provide the computation that we need.

But to provide some scale, we did modeling for the National Science Foundation on the effects of climate change in Canada and how people might migrate. We were modeling millions of people, moving over the course of 100 years. That would run slowly on a desktop, and to do experiments, we went to the cluster so that we could run different scenarios.

We don’t need to go 100 years, but we do need to have more people. We expect it’ll be taxing but within our resources.

Waddell: Do you have an estimate of how long a single simulation run might take?

Kennedy: I expect a single run may take a couple of days—and that’s at full scale, with all the people.

Waddell: Will you be able to make changes to the model while a simulation is running?

Kennedy: Not in the sense that you could change agents’ behaviors. But you might realize that, because of the setup you have, people are not behaving as you expect. So the modeling of their behavior doesn’t make sense, and you have to go back and reconsider the model.

Waddell: It sounds like common sense plays a big role. If you see people acting irrationally, would you stop the scenario to try and figure out what’s going on and fix it?

Kennedy: It depends on what you mean by irrational. Sometimes, it might be appropriate to have irrational behavior.

What you’re describing about common sense is referred to as “face validity.” If you have a simulation that comes up with something that’s just incredible on its viewing, it’s very hard to convince anybody that that’s reality.

Waddell: And how do you separate the actually inaccurate from the surprising but true?

Kennedy: There’s a couple of methods. One is that you try to be very careful about the reality of the model as you’re building it. This is sometimes called unit testing: You want small pieces to behave appropriately so that when you assemble them, the overall behavior is credible.

You can also, where available, run scenarios that are intended to replicate historical events so that you can compare how your model behaves with the actual, in a sense, natural experiment. I expect we’ll be able to do that. We have a surprising amount of data on how people responded in Nagasaki and Hiroshima. The U.S. occupied Japan immediately afterwards, and photographed, interviewed, and tracked people over a period of time. A lot of that data is available.

Waddell: What will your final product look like?

Kennedy: It’s interesting how DTRA describes what they want as a result: They told us that they are funding basic research. They’re expecting published papers and academic advancement of students. We are not expected to deliver them a model, or a system they can go play with.

Waddell: What’s the timeline going to look like?

Kennedy: The funding is for three years, with a possibility of two additional years. We hope to have something running in the three- to six-month time frame that we can use for codifying our practical theories about how people behave. Our basic plan is to have something running and then try to set up experiments, run those, and do the validation and verification, so that we are comfortable starting to report results in a year or so.

Waddell: How much of the work will be handled by the existing MASON system, and how much will need to be built from scratch?

Kennedy: I would expect that most of what we’re dealing with in this project is doable within the current MASON. It will support very large numbers of agents over very large areas, and their interactions, reasonably responsively. The code is very fast—it is an industrial simulation system.

We are exploring whether we should model individuals taking up a square meter of space when they move; we are wrestling with whether we need to model doors for each building or let people leave from anywhere around the block.

One of the interesting challenges we’re facing is that we don’t have a lot of data about the height and number of floors of buildings. We have population density, and from that we can extrapolate how many floors are in the buildings and how many people are on each floor, so that we can deal with evacuations.

Waddell: How does this simulation compare with others you’ve done on scale?

Kennedy: There’s some work at the center that involves modeling the U.S. economy at full scale, which is over 100 million agents. But those are what you might call lighter agents: They are simpler so that we can model them at that scale. They’re individual people, but all they do is their business. They don’t do anything else.

This is on the larger side of heavy-agent modeling, on the 20 million agent scale. We’ve been in the 5 to 10 million agent scale before.

We can make things easier by modeling different parts of the system separately. When we modeled climate change, for example, the climatologists did their simulation of the environment and provided us with that data, so we didn’t have to spend computer time on those calculations. We could process that sequentially, day after day, for several years. So it’s breaking down the simulation into its parts, which allows us to pre-process them so that they’re easier to deal with in the overall social simulation.

Lucas Jackson / Reuters

Welcome, Please Remove Your Shoes


I hoard slippers—the thin-soled, terry kind that many hotels include in their amenity packages. My house is full of them, some still plastic-wrapped. Shoes that will never be good for anything but indoor wear. Yet to me, they are simply too precious to leave behind.

I grew up in the USSR, where tapochki—indoor slippers—were worn habitually. We changed into them when we came home, leaving the dirt of the outdoors at the entrance. We carried them to school, where fellow students, posted at the door by the principal, stood guard with the sole purpose of checking our bags for smenka, the change of footwear. Museums provided containers of felt mules by the entrance for visitors to don over boots before entering the halls. And we knew that when we visited a friend, we would be expected to take off our shoes and wear the slippers the host owned just for that occasion. Walking inside a home—any home—while still wearing outdoor shoes was bad form.

* * *

The origin of the habit is mysterious, but tapochki occupy an important part of the Russian psyche. The pragmatic benefits are obvious—casting off outdoor shoes keeps the floors and rugs clean. But the real benefit is symbolic.

A decade ago, a monument to Oblomov—the titular character of Ivan Goncharov’s famous novel about a lackadaisical Russian nobleman—was installed in the city of Ulianovsk. The monument features Oblomov’s couch, with his slippers underneath it. Created by a local welder, the mules celebrate the novelist’s ability to infuse personal objects with a symbolism that captured the Russia of his day. In the novel, Ilya Ilich Oblomov spends most of his waking hours in his robe lying on a couch and doing nothing. The novel had political overtones; it was published two years before the abolition of serfdom in Russia and has been credited by some as a portrayal of general apathy among the Russian nobility. Oblomov’s robe, the couch, and the slippers represent the hero’s indifference to life outside his home. But they also symbolize the domestic space, the feeling of leaving the worries of the world at the door, and the safety and comfort that only one’s abode can offer.

Personal objects separating the outside and the inside can be found in European paintings as early as the 15th century. In The Arnolfini Portrait (1434), Jan Van Eyck included two pairs of pattens—the wooden clogs usually worn over the indoor shoes to protect footwear from the mud and dirt of the outside. The 1514 engraving Saint Jerome in His Study, by Albrecht Dürer, also features shoes that seem to indicate domestic use—a pair of mules in the foreground, stored under a bench with books and pillows. Whether they are there to suggest their purpose as outdoor-only footwear or the beginning of the practice of using mules at home, we may never know. Yet just as in Van Eyck’s work, a discarded pair of shoes—the shoes that the subject isn’t wearing at home—may be the indication of a new custom taking hold: a custom of separating footwear into indoor and outdoor.

Around this time, the conquests of the Ottoman Empire brought Eastern habits into the European continent. “[Most Ottoman people] were wearing outdoor shoes over the indoor shoes like galoshes,” explains Lale Gorunur, the curator of the Sadberk Hanim Museum in Istanbul. “But they’d never go indoors with outdoor shoes. They’d always take off the outdoor shoes at the gate of the house.” Territories under the empire’s rule seemed to adopt this habit, and slippers remain common in countries like Serbia and Hungary.

“We have the tradition of indoor shoes because we were under the Ottoman rule,” confirms Draginja Maskareli, a curator at the Textile and Costume Department of the Museum of Applied Art in Belgrade. When she was a student in the early 1990s, Maskareli visited cousins of Serbian origin in Paris, to which she traveled with slippers in tow. “They were shocked that we had indoor shoes.”

Although the late 20th-century Parisians seemed amused at the idea, their predecessors were enamored with indoor shoes. “By the 17th century, an increasing number of men are having portraits done of themselves in a kind of casual, domestic setting in their mules, their slippers,” explains Elizabeth Semmelhack, a curator of the Bata Shoe Museum in Toronto. “By the 18th century, where intimacy and intimate gatherings become very much a part of social culture, you begin to see more pictures of women and their mules.”

The Victorian era added its own twist to the infatuation with the indoor shoe. Women used Berlin wool work, a needlepoint style popular at the time, to make the uppers of their husbands’ home slippers. “[They] would take those uppers to a shoemaker who would then add a sole. And they would be gifted to the husband to wear while he is smoking his pipe by the fire in the evening,” says Semmelhack.

Portraits of the Russian upper classes of the 18th and 19th centuries frequently feature subjects either in Ottoman-style mules or in thin slipper-shoes intended for indoor use. The same couldn’t be said for the poor. Peasants and laborers are either shown barefoot, wearing boots meant for outdoor work, or donning valenki, the traditional Russian felt boot. Perhaps because of this link between indoor footwear and the leisure of the rich, tapochki were snubbed immediately following the 1917 Russian Revolution. Remnants of the maligned, old world had no place in the new Soviet paradigm. But the sentiment didn’t stick. Although never as extravagant or ornate as before, soon tapochki were back in most Soviet homes, offering their owners comfort after a long day of building the Communist paradise.

Today, attitudes towards taking off shoes indoors vary, often by national culture. An Italian friend told me it was considered rude to go barefoot in the house in Italy, and a Spanish friend raised her eyebrows when I offered a pair of slippers. “Spaniards don’t take their shoes off.”

In Japan, where slippers are a Western introduction, most people take off their outdoor shoes before going indoors. Jordan Sand, a professor of Japanese History at Georgetown University, notes that architecture accommodates the practice. “The Japanese live in dwellings with raised floors. It’s basic, even in modern apartment buildings, that every private dwelling has space at the entry,” he explains. “As you enter the door there is a little space and step up and the rest of the house is higher than the outside. You shed your footwear there. In a traditional house, most of the interior space is covered with tatami mats. No footwear is worn on tatami mats.” While the Japanese generally go either barefoot or wear socks on the mats, there are exceptions. In those parts of the house that aren’t covered by tatami—the kitchen, the hallway, and the toilet—people wear slippers. A singular pair of slippers is reserved specifically for the toilet, where it stays.

* * *

When I moved to the U.S. in 1989, slippers disappeared from my life. Americans never took off their shoes and their wall-to-wall carpeting bore traces of the outside tracked indoors on the soles of their footwear. I could never get used to it. My shoes came off immediately whenever I entered my house and I’ve asked my guests to take off theirs. The panoply of terry mules I have hoarded from hotels is always on hand to help.

As for me—my personal slippers wait for me by the door. When I slip them on my feet are freer, my floors stay cleaner, and I always feel as if I’ve truly come home.


This article appears courtesy of Object Lessons.

Ivan Sekretarev / AP. A member of the Russian Orthodox Church takes an icy plunge in Moscow on Three Kings' Day.

Trump's Cyber Skepticism Hasn't Stopped Charges Against Foreign Hackers


President Donald Trump doesn’t put a lot of stock in security researchers’ ability to track down cyberattackers. When the Democratic National Committee’s systems were breached during the presidential campaign, he shrugged and said just about anyone could have been behind the hacks—even though the intelligence community pointed fingers straight at Russian President Vladimir Putin. “Unless you catch ‘hackers’ in the act, it is very hard to determine who was doing the hacking,” he tweeted in December.

Just before Trump was inaugurated, I wondered if his unwillingness to endorse the practice of cyber-attribution would derail the Justice Department’s pattern of bringing indictments and charges against foreign hackers—and even embolden hackers to launch more cyberattacks, without fear of repercussions.

But while the president cast doubt on the worth of attribution, the Justice Department appears to have pressed on with its campaign to slap foreign hackers with public criminal charges. On Wednesday, the department announced charges against four Russians—two intelligence agents and two hired hackers—for the 2014 data breach at Yahoo that compromised 500 million user accounts.

The two agents worked for a branch of the Russian Federal Security Service, or FSB, called the Center for Information Security. That agency is the FBI’s point of contact within the Russian government for fighting cybercrime—but instead of investigating cyberattacks, the FBI alleges that the two officers participated in one.

The indictment accuses the officers, 33-year-old Dmitry Aleksandrovich Dokuchaev and 43-year-old Igor Anatolyevich Sushchin, of hiring a pair of hackers to help them break into Yahoo’s systems. Mary McCord, the acting assistant attorney general in the Justice Department’s national-security division, said the agents are suspected of orchestrating the cyberattack in their official capacity as members of the FSB.

One of the hackers was already notorious. Alexsey Alexseyevich Belan has already been indicted in the U.S. twice—once in 2012 and once in 2013—and was added to the FBI’s list of most-wanted cybercriminals in 2013. The other hacker, Karim Baratov, was brought on to help hack into 80 non-Yahoo accounts, using information gleaned from the accounts that were already compromised. Baratov, who lives in Canada, was arrested on Tuesday. The other three defendants remain at large in Russia, which doesn’t have an extradition agreement with the United States.

According to the indictment, the hackers had access to Yahoo’s networks all the way until September 2016, two years after they first got in.

When the data breach was announced that month, it was one of the largest single breaches ever made public. But it was eclipsed in December, when the company announced that another breach, this one from 2013, had compromised one billion user accounts. Yahoo said in December that the two hacks were separate—but that it suspected the “same state-sponsored actor” was behind both hacks.

One of the tricks the Russian hackers used to steal information was to forge cookies—small packages of data that track users and tell browsers which accounts a user is signed into, among other things—in order to access at least 6,500 user accounts, the Justice Department alleges. (The 2013 hack also used forged cookies, according to Yahoo.)

The hackers targeted a wide range of people: government officials, intelligence and law enforcement agents, and employees of an unnamed “prominent Russian cybersecurity company.” They also accessed accounts that belonged to private companies in the U.S. and elsewhere, the indictment claims.

Some of the information was probably useful for the intelligence officers, but Belan, the hired hacker, appears to have used the opportunity presented by the enormous trove of stolen Yahoo accounts to make a little money. He searched emails for credit-card and gift-card numbers, and scraped the contact lists from at least 30 million accounts for use in a large-scale spam campaign.

The FBI is also investigating Russian cyberattacks on the Democratic National Committee, but Wednesday’s indictment doesn’t draw a connection between that event and the Yahoo hack.

As she announced the charges, McCord, the acting assistant attorney general, said additional options for punishing Russia for the hack are still on the table. An executive order that former president Barack Obama signed in March, for example, gave the Treasury Department the power to set up economic sanctions in response to cyberattacks or espionage.

FBI and Justice Department officials have said in the past that bringing public charges against foreign hackers for state-sponsored cyberattacks can deter others from hacking American people and organizations. Belan clearly wasn’t deterred by the charges brought against him in 2012 and 2013, but it’s possible that the prospect of joining the cyber most-wanted list has convinced other, lower-profile hackers not to participate.

Paul Abbate, the executive assistant director in the FBI’s cybercrime branch, said the government has formally requested that Russia send the defendants to be tried in the U.S. But without an extradition treaty, and given that Russia’s own intelligence service is implicated in the indictment, working together will be, as Abbate delicately put it, a challenge. “We can now gauge the level of cooperation we’ll see from them,” he said.

Susan Walsh / AP. Acting Assistant Attorney General Mary McCord announces charges against four Russians in connection to an enormous data breach at Yahoo.

Scientists Brace for a Lost Generation in American Research


The work of a scientist is often unglamorous. Behind every headline-making, cork-popping, blockbuster discovery, there are many lifetimes of work. And that work is often mundane. We’re talking drips-of-solution-into-a-Petri-dish mundane, maintaining-a-database mundane. Usually, nothing happens.

Scientific discovery costs money—quite a lot of it over time—and requires dogged commitment from the people devoted to advancing their fields. Now, the funding uncertainty that has chipped away at the nation’s scientific efforts for more than a decade is poised to get worse.

The budget proposal President Donald Trump released on Thursday calls for major cuts to funding for medical and science research; he wants to slash funding to the National Institutes of Health by $6 billion, which represents about one-fifth of its budget. Given that the NIH says it uses more than 80 percent of its budget on grant money to universities and other research centers, thousands of institutions and many more scientists would suffer from the proposed cuts.

“One of our most valuable natural resources is our science infrastructure and culture of discovery,” said Joy Hirsch, a professor of psychiatry and neurobiology at the Yale School of Medicine. “It takes only one savage blow to halt our dreams of curing diseases such as cancer, dementia, heart failure, developmental disorders, blindness, deafness, addictions—this list goes on and on.”

For decades, scientists have been rattled by the erosion of public funding for their research. In 1965, the federal government financed more than 60 percent of research and development in the United States. “By 2006, the balance had flipped,” wrote Jennifer Washburn a decade ago, in a feature for Discover, “with 65 percent of R&D in this country being funded by private interests.”

This can’t be all bad, can it? Given the culture of competition in Silicon Valley, where world-changing ideas attract billions upon billions of dollars from eager investors, and where many of the brightest minds congregate, we may well be entering a golden era of private funding for science and medicine.

Along with the business side of science, the world’s tech leaders have built a robust philanthropic network for research advancement. The Bill & Melinda Gates Foundation is a major force in the prevention of infectious diseases, for example. Last year, the Facebook founder Mark Zuckerberg launched his own foundation—with his wife, Priscilla Chan, who is a pediatrician—aiming to help “cure, prevent or manage all diseases in our children’s lifetime.” Between those two initiatives alone, billions of dollars will be funneled to a variety of crucial research efforts in the next decade.

But that amount still doesn’t approach the $26 billion in NIH research grants doled out to scientists every year. For about a decade, stagnant funding at the NIH was considered a serious impediment to scientific progress. Now, scientists say they are facing something much worse.

I asked more than a dozen scientists—across a wide range of disciplines, with affiliations to private schools, public schools, and private foundations—and their concern about the proposed budget was resounding. The consequences of such a dramatic reduction in public spending on science and medicine would be deadly, they told me. More than one person said that losing public funding on this scale would dramatically lower the country’s global scientific standing. One doctor said he believed Trump’s proposal, if passed, would set off a lost generation in American science.

“Where do I start?” said Hana El-Samad, a biochemist at the University of California, San Francisco, School of Medicine and an investigator in the Chan Zuckerberg Biohub program, one of the prestigious new privately funded science initiatives in Silicon Valley. In her research, El-Samad analyzes biological feedback loops, studying how they work so that she can predict their failure in diseases.

“First, we most certainly lose diversity in science—ranging from diversity of topics researched to diversity of people doing the research,” she told me. “Since we don’t know where real future progress will come from, and since history tells us that it can and almost certainly will come from anywhere—both scientifically and geographically—public funding that precisely diversifies our nation’s portfolio is crucial.”

Private funding, on the other hand, is often narrowly focused. Consider, for example, Elon Musk’s obsession with transporting humans to Mars. The astronaut Buzz Aldrin told CNBC this week that Musk’s plan may be well-funded, but it’s not very well thought out—and that cheaper technology isn’t necessarily better. “We went to the moon on government-designed rockets,” Aldrin said.

Even if Musk’s investment in SpaceX does represent a world-changing scientific effort—it’s not enough by itself.

“Funding like the Chan-Zuckerberg Initiative is fantastic and will be transformative for Bay Area Science,” said Katherine Pollard, a professor at the UCSF School of Medicine and a Chan Zuckerberg Biohub researcher who studies the human microbiome. “But the scope and size of even a large gift like this one cannot come close to replacing publicly supported science.” The unrestricted research funding she’s getting as part of her work with the Chan Zuckerberg program is “still only 10 percent of what it costs to run my lab,” she said.

And what happens to all the crucial basic science without billionaire backing—the kind of research with wide-ranging applications that can dramatically enhance human understanding of the world?  NIH funding is spread across all disciplines, several scientists reminded me, whereas private funding tends to be driven by the personal preferences of investors.

Plus, scientific work is rarely profitable on a timescale that delights investors. The tension between making money and making research strides can result in projects being abandoned altogether or pushed forward before they’re ready. Just look at Theranos, the blood-testing company that was once a Silicon Valley darling. As The Wall Street Journal reported last year, even when the company’s technology hadn’t progressed beyond lab research, its CEO was downplaying the severity of her company’s myriad problems—both internally and to investors. Its fall from grace—and from a $9 billion valuation—is a stunning and instructive illustration of where private and public interests in scientific research can clash.

But also, in a privately-funded system, investor interest dictates the kind of science that’s pursued in the first place.

“Put simply, privatization will mean that more ‘sexy,’ ‘hot’ science will be funded, and we will miss important discoveries since most breakthroughs are based on years and decades of baby steps,” said Kelly Cosgrove, an associate professor of psychiatry at Yale University. “The hare will win, the tortoise will lose, and America will not be scientifically great.”

America’s enduring scientific greatness rests largely on the scientists of the future. And relying on private funding poses an additional problem for supporting people early in their careers. The squeeze on public funding in recent years has posed a similar concern, as young scientists are getting a smaller share of key publicly-funded research grants, according to a 2014 study published in the Proceedings of the National Academy of Sciences. In 1983, about 18 percent of scientists who received the NIH’s leading research grant were 36 years old or younger. In 2010, just 3 percent of them were. Today, more than twice as many such grants go to scientists who are over 65 years old compared with people under 36—a reversal from just 15 years ago, according to the report.

The proposed NIH cuts “would bring American biomedical science to a halt and forever shut out a generation of young scientists,” said Peter Hotez, the dean of the National School of Tropical Medicine at Baylor College of Medicine. “It would take a decade for us to recover and move the world's center of science to the U.S. from China, Germany, and Singapore, where investments are now robust."

The cuts are not a done deal, of course. “Congress holds the purse strings, not the president,” said Senator Brian Schatz, a Democrat from Hawaii and a member of the Appropriations Committee, in a statement.

In the meantime, there’s a deep cultural question bubbling beneath the surface of the debate over science funding, one that seems to reflect a widening gap in trust between the public and a variety of American institutions. A Pew survey in 2015 found that more than one-third of people said they believed private investments were enough to ensure scientific progress. And while most people said they believed government investment in basic scientific research “usually” paid off in the long run, other research has shown a sharp decline in public trust in science—notably among conservatives. This erosion of trust means that the politicization around specific areas of scientific inquiry, like climate change and stem-cell research, may have deep consequences for scientific advancement more broadly.

El-Samad, the biochemist, describes this dynamic as the weakening of a social contract that once made the United States the scientific beacon of the world. In her view, there is something almost sacred about using taxpayer dollars to fund research.

“Using the hard-earned cash of the citizens, all of them, has constituted an enduring bond between the scientist and the public,” she told me. “It was clear that we were, as scientists, bound by the necessity to pay them back not in kind, but in knowledge and technology and health. And they, the citizens, took pride and well deserved ownership of our progress. I truly believe that this mutual investment and trust is what made science in the United States of America a model to follow for the rest of the world, and also gave us the tremendous progress of the last decades. Huge setbacks will ensue if this erodes.”

Jon Nazca / Reuters. A microscopic view of algae colonies

The Lifesaving Potential of Underwater Earthquake Monitors


The seconds between the warning of an impending earthquake and the moment the quake hits can be the difference between life and death. In that time, automatic brakes can halt trains; people can duck for cover or rush for safety. But current warning systems aren’t always where they are needed, and scientists don’t fully understand what determines the size and location of earthquakes. Nearly 10,000 people were killed in earthquakes in 2015, the majority from the devastating Nepal quake. The federal government estimates that earthquakes cause $5.3 billion in damage per year to buildings in the U.S.

Ground-based sensors help warn of quakes, but they have their limits. Now, a group of researchers at Columbia University are taking measurements somewhere new: underwater. They’re designing a system that could lead to faster warnings for people living near areas affected by underwater earthquakes and tsunamis. If they succeed, they could help reduce the damage caused by these natural disasters and save many lives.

I recently visited a laboratory at Columbia’s Lamont-Doherty Earth Observatory, in Rockland County, New York, where a technician was testing pieces of the boxy, three-foot-long underwater seismometers under a microscope. The lab’s floor-to-ceiling shelves were stacked with bright yellow and orange parts that will have to endure crushing pressures on the ocean floor at depths of thousands of feet for years at a time.

The networks of land-based earthquake monitors around the world warn of quakes by watching for changes in pressure and seismic signals. Underwater sensors could more accurately locate underwater earthquakes than ground-based networks, says Spahr Webb, the Lamont-Doherty researcher leading the project, because “the system is designed to be deployed over the top of a large earthquake and faithfully record the size and location of both the earthquake and the tsunami. … By installing pressure and seismic sensors offshore you get a much more accurate determination of location and depth of a nearby earthquake.”

Webb pointed out the crab-like shape of a thick steel shell that is designed to prevent the seismometers from being pried from the sea floor by fishing trawl nets. “Keeping these things where they belong is the key,” he told me.

When they are launched about a year from now, 10 to 15 seismometers will be carefully lowered by a crane from a ship to the seabed. Similar to the land-based monitors, they will contain sensitive pressure sensors and accelerometers to measure and separate out seismic and oceanic signals. These sensors will monitor subduction zones, the areas where one plate of the earth’s crust slides under another. An earthquake produces a tsunami at a subduction  zone when an underwater plate snaps back like a giant spring after it is forced out of position by the collision of an adjacent plate.

According to Webb, the land-based seismometers monitoring the regions that produce the largest tsunamis are sometimes more than 100 miles away, which hinders speed and accuracy. “A big motivation for the offshore observations is the size of the tsunami from any given earthquake has a large uncertainty based on land observations alone,” says Webb. In Japan, after the devastating 2011 earthquake, an expensive cable with numerous sensors was installed offshore to speed up warnings and boost accuracy. Now the Columbia seabed-based seismometers will obtain data in regions of the globe with similar tsunami hazards as Japan to augment land-based early warning systems.

The project is not alone. Columbia’s seismometer system is just one of a wide array of new earthquake-monitoring technologies that are being developed. “There are many exciting techniques coming online,” says Elizabeth Cochran, a geophysicist with the U.S. Geological Survey.

While the ocean depths offer opportunities to monitor quakes close to their source, for instance, watching from space could provide a wider view. Scientists at University College London have proposed launching several small satellites to look for signs of earthquakes using electromagnetic and infrared sensors. So far, experiments have proven that the concept works, but a problem has kept the project from getting off the ground: Electromagnetic and infrared  signals are emitted by all sorts of things, natural as well as man-made.

Dhiren Kataria, one of the leaders of the proposed project, which has been dubbed TwinSat, hopes that using a large enough number of satellites should allow researchers to separate out the seismic from the non-seismic events. Multiple satellites would also provide extensive global coverage, because each would orbit the earth every 90 minutes, he adds.

The TwinSat team has previously failed to get funding from the U.K. Space Agency, but it plans to resubmit its proposal in the next few months. If approved, the team could launch its satellites within three years, Kataria claims. To keep costs low, the satellites are designed to be small and use some off-the-shelf commercial components.

Another approach researchers are using is turning cell phones into science instruments. The app MyShake constantly monitors a phone’s motion sensors to analyze how the device is shaking. If the movement fits the vibrational profile of an earthquake, the app relays this information along with the phone’s GPS coordinates to the app’s creators, the seismological laboratory at the University of California, Berkeley, for analysis.

While the app’s not intended to replace traditional seismic sensor networks like those run by the U.S. Geological Survey, says Richard Allen, the seismological laboratory’s director, it could provide faster and more accurate warnings through vast amounts of crowd-sourced data. More than 250,000 people have downloaded the app since it debuted a year ago.
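MyShake's classifier isn't described in detail here, so the following is only a toy stand-in for the general idea: check a window of accelerometer readings against a shaking threshold before reporting anything to the server. The threshold and sample values are invented:

```python
# A toy stand-in for the on-phone check; MyShake's real classifier is
# far more sophisticated. The threshold and samples are invented.
SHAKE_THRESHOLD = 1.5  # hypothetical acceleration magnitude, in g

def looks_like_earthquake(samples):
    """Flag a window of accelerometer magnitudes as possible shaking."""
    strong = [s for s in samples if s > SHAKE_THRESHOLD]
    return len(strong) >= 3  # require sustained motion, not a single bump

window = [0.1, 0.2, 1.8, 2.1, 1.9, 0.3]
if looks_like_earthquake(window):
    print("send timestamp and GPS fix to the analysis server")
```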

Quicker warnings like these can be used to improve safety by being incorporated right into existing infrastructure. San Francisco’s Bay Area Rapid Transit has integrated Allen’s earthquake warnings into its system so that trains automatically slow when they receive a signal that an earthquake will hit. The system relies on the fact that the electronic signals from monitoring stations travel faster than seismic waves, giving the brakes time to act. “I can push out the warning before many people can feel the tremors,” Allen says.
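The head start comes from simple arithmetic: the alert moves at network speed, effectively instantaneous over these distances, while damaging seismic waves travel only a few kilometers per second. A rough sketch, using an approximate shear-wave speed of 3.5 kilometers per second:

```python
# Rough arithmetic behind the head start: the alert travels at network
# speed (effectively instant over these distances), while damaging
# seismic waves move at only a few kilometers per second. The 3.5 km/s
# figure is an approximate shear-wave speed, used here for illustration.
def warning_seconds(distance_km, wave_speed_km_s=3.5):
    """Seconds between the alert arriving and the shaking arriving."""
    return distance_km / wave_speed_km_s

for distance in (20, 50, 100):
    print(f"{distance} km from the epicenter: about {warning_seconds(distance):.0f} seconds of warning")
```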

Even better than faster earthquake warnings would be a way to predict quakes. Researchers at Los Alamos National Laboratory are using artificial intelligence to simulate earthquakes so that they can forecast when they will occur. But Cochran of the USGS doubts it will ever be possible to reliably predict quakes. “Earthquakes are very complex,” she says. “It’s hard to predict such chaotic systems.”

Yuri Maltsev / Reuters. A tsunami warning sign on the Island of Shikotan, part of the disputed territory between Russia and Japan

Tech Start-Ups Have Become Conceptual Art


Let’s catalog a few important moments in the history of conceptual art:

In 1917, Marcel Duchamp signed and dated a porcelain urinal, installed it on a plinth, and entered it into the first exhibition for the Society of Independent Artists.

In 1961, Robert Rauschenberg submitted a telegram reading “This is a portrait of Iris Clert if I say so” as his contribution to an exhibition of portraits hosted at Clert’s eponymous Paris gallery.

That same year, Piero Manzoni exhibited tin cans labeled “Artist’s Shit.” The cans purportedly contained the feces of the artist, but opening them to verify the claim would destroy the work.

In 2007, Damien Hirst commissioned a diamond-encrusted, platinum cast of a human skull. It cost £14 million to produce, and Hirst attempted to sell it for £50 million—mostly so that it would become the most valuable work sold by a living artist.

And in 2017, Nigel Gifford designed an edible, unmanned drone meant to deliver humanitarian aid to disaster zones.

Okay, I lied. The last one is a technology start-up. But it might as well be a work of conceptual art. In fact, it makes one wonder if there’s still any difference between the two.

* * *

Conceptualism has taken many forms since the early 20th century. At its heart, the name suggests that the concept or idea behind a work of art eclipses or replaces that work’s aesthetic properties. Some conceptual works deemphasize form entirely. Yoko Ono’s Grapefruit, for example, is a book with instructions on how to recast ordinary life as performance art. Others, like Hirst’s diamond-encrusted skull, lean heavily on the material object to produce effects beyond it. And others, like the pseudonymous graffiti-artist Banksy’s documentary film, Exit Through the Gift Shop, about a street artist who becomes a commercial sensation, deliberately refuse to reveal whether they are elaborate put-ons or earnest portrayals.

In each case, the circulation of the idea becomes as important—if not more so—than the nature of the work itself. And circulation implies markets. And markets mean money, and wealth—matters with which art has had a long and troubled relationship. By holding business at a distance in order to critique it, the arts may have accidentally ceded those critiques to commerce anyway.

Before art was culture it was ritual, and the ritual practice of art was tied to institutions—the church, in particular. Later, the Renaissance masters were bound to wealthy patrons. By the time the 20th-century avant-garde rose to prominence, the art world—all of the institutions and infrastructure for creating, exhibiting, selling, and consuming art—had established a predictable pattern of embrace and rejection of wealth. On the one hand, artists sought formal and political ends that questioned the supposed progress associated with industrial capitalism. But on the other hand, exhibition and collection of those works were reliant on the personal and philanthropic wealth of the very industrialists artists often questioned.

One solution some artists adopted: to use art to question the art world itself. Such is what Duchamp and Rauschenberg and Manzoni and Hirst all did, albeit obliquely. Others were more direct. Hans Haacke, for example, used artwork to expose the connections between the art and corporate worlds; his exhibitions looked more like investigative reports than installations.

Despite attempts to hold capital at arm’s length, money always wins. Artists low and high, from Thomas Kinkade to Picasso, have made the commercialization of their person and their works a deliberate part of their craft.

By the 1990s, when Hirst rose to prominence, high-art creators began embracing entrepreneurship rather than lamenting it. Early in his career, Hirst collaborated with the former advertising executive and art collector Charles Saatchi, who funded The Physical Impossibility of Death in the Mind of Someone Living, a sculpture of a severed tiger shark in three vats of formaldehyde. That work eventually sold for $12 million. Hirst’s relationship with Saatchi was less like that of a Renaissance master to a patron, and more like that of a founder to a venture capitalist. The money and the art became deliberately inextricable, rather than accidentally so.

Banksy, for his part, has often mocked the wealthy buyers who shelled out six-figure sums for his stenciled art, and even for his screenprints. It’s a move that can’t fail, for the artist can always claim the moral high ground of supposed resistance while cashing the checks of complicity.                                                           

Hirst and Banksy have a point: Cashing in on art might have become a necessary feature of art. The problem with scoffing at money is that money drives so much of the world that art occupies and comments on. After the avant-garde, art largely became a practice of pushing the formal extremes of specific media. Abstract artists like Mark Rothko and Jackson Pollock pressed the formal space of canvas, pigment, and medium to its breaking point, well beyond representation. Duchamp and Manzoni did the same with sculpture. And yet, artists have resisted manipulating capitalism directly, in the way that Hirst does. In retrospect, that might have been a tactical error.

* * *

If markets themselves have become the predominant form of everyday life, then it stands to reason that artists should make use of those materials as the formal basis of their works. The implications from this are disturbing. Taken to an extreme, the most formally interesting contemporary conceptual art sits behind Bloomberg terminals instead of plexiglass vitrines. Just think of the collateralized debt obligations and credit default swaps that helped catalyze the 2008 global financial crisis. These are the Artist’s Shit of capitalism, daring someone to open them and look. The result, catastrophic though it was, was formally remarkable as a work made of securities speculation, especially for those who ultimately profited from collapsing the world economy. What true artist wouldn’t dream of such a result?

Even so, finance is too abstract, too extreme, and too poorly aestheticized to operate as human culture. But Silicon Valley start-ups offer just the right blend of boundary-pushing, human intrigue, ordinary life, and perverse financialization to become the heirs to the avant-garde.

Take Nigel Gifford’s drone start-up, Windhorse Aerospace, which makes an edible humanitarian-relief drone. In the event of disasters and conflict, the start-up reasons, getting food and shelter to victims is difficult due to lost infrastructure. The drone, known as Pouncer, would be loaded with food and autonomously flown into affected areas. Whether out of hope or naivety, Windhorse claims that Pouncer will “avoid all infrastructure problems, corruption or hostile groups,” although one might wonder how bright green airplanes would escape the notice of the corrupt and the hostile.

The product epitomizes the conceit of contemporary Silicon Valley. It adopts and commercializes a familiar technology for social and political benefit, but in such a simplistic way that it’s impossible to tell if the solution is proposed in earnest or in parody. Pouncer can be seen either as a legitimate, if unexpected, way to solve a difficult problem, or as the perfect example of the technology industry’s inability to take seriously the problems it claims to solve. How to feed the hungry after civil unrest or natural disaster? Fly in edible drones from the comfort of your co-working space. Problem solved!

It’s not Gifford’s first trip up where the air gets light, either. His last company, Ascenta, was acquired by Facebook in 2014 for $20 million. Once under Facebook’s wing, Gifford and his team built Aquila, the drone meant to deliver internet connectivity to all people around the globe. Here too, an idea—global connectivity as a human right and a human good—is wedded to both formal boundary-pushing and commercial profit-seeking. Compared with Mark Zuckerberg’s desire to extract data (and thereby latent market value) from every human being on earth, it’s hard to be impressed by a wealthy British artist trying to flip a diamond-encrusted skull at 300 percent profit.

Conceptualism has one gimmick—that the idea behind the work has more value than the work itself. As it happens, that’s not a bad definition of securitization, the process of transforming illiquid assets into financial instruments. Whether Windhorse’s edible drones really work, or whether they could effectively triage humanitarian crises is far less important in the short term than the apparent value of the concept or the technology. If humanitarian aid doesn’t work out, the company can always “pivot” into another use, to use that favorite term of start-ups. What a company does is ultimately unimportant; what matters is the materials with which it does things, and the intensity with which it pitches those uses as revolutionary.

This routine is now so common that it’s hard to get through the day without being subjected to technological conceptualism. On Facebook, an advertisement for a Kickstarter-funded “smart parka” that hopes to “re-invent winter coats” and thereby to “hack winter.” A service called Happify makes the foreboding promise, “Happiness. It’s winnable.” Daphne Koller, who co-founded Coursera, the online-learning start-up that promised to reinvent education in the developing world much as Windhorse hopes to reinvent the airdrop, quits to join Google’s anti-aging group Calico. Perhaps she concluded that invincibility would be a more viable business prospect than education.

Me-too tech gizmos and start-ups have less of an edge than conceptual art ever did. Hirst’s works, including the diamond skull and the formaldehyde shark, are memento mori—symbols of human frailty and mortality. Even Rauschenberg’s telegram says something about the arbitrariness of form and the accidents of convention. By contrast, when technology pushes boundaries, it does so largely rhetorically—by laying claim to innovation and disruption rather than embodying it. But in so doing, it has transformed technological innovation into the ultimate idea worthy of pursuit. And if the point of conceptual art is to advance concepts, then the tech sector is winning at the art game.

* * *

Today, the arts in America are at risk. President Donald Trump’s new federal budget proposes eliminating the National Endowment for the Arts (along with the National Endowment for the Humanities and the Corporation for Public Broadcasting). The NEA is especially cheap, making its proposed elimination more symbolic than fiscal. It’s a dream some Republicans have had for decades, thanks in part to the perception that NEA-funded programs are extravagances that serve liberalism.

The potential gutting of the NEA is worthy of concern and lamentation. But equally important, and no less disturbing, is the fact that the role of art, in part, has already shifted from the art world to the business world anyway. In particular, the formal boundary-pushing central to experimental and conceptual artists might have been superseded by the conceptual efforts of entrepreneurship. The much better-funded efforts, at that. As ever, money is the problem for art, rather than a problem within it.

Elsewhere in the art world, successful works have become more imbricated with their financial conditions. Earlier this year, Banksy opened the Walled Off Hotel, an “art hotel” installation in Bethlehem. It’s an idea that demands reassurance; the first entry on the project’s FAQ asks, “Is this a joke?” (“Nope—it's a genuine art hotel,” the page answers.) Despite the possible moral odiousness of Palestinian-occupation tourism, local critics have billed it as a powerful anti-colonialist lampoon. A high-art theme park.

It’s an imperfect solution. But what is the alternative? In the tech industry, the wealthy don’t tend to become arts collectors or philanthropists. Unlike Charles Saatchi, they don’t take on young artists as patrons, even if just to fuel their own egos. Instead they start more companies, or fund venture firms, or launch quasi non-profits. Meanwhile, traditional arts education and funding have become increasingly coupled to technology anyway, partly out of desperation. STEAM adds “art” to STEM’s science, technology, engineering, and math, reframing art as a synonym for creativity and innovation—the conceptual fuel that technology already advances as its own end anyway.

Looking at Duchamp’s urinal and Rauschenberg’s telegram, the contemporary viewer would be forgiven for seeing them as banal. Today, everyone transforms toilets into artworks on Instagram. Everyone makes quips on Twitter that seem less clever as time passes. What remains are already-wealthy artists funding projects just barely more interesting than the products funded by other, already-wealthy entrepreneurs.

From that vantage point, the conceptual art avant-garde becomes a mere dead branch on the evolutionary tree that leads to technological entrepreneurship. Everyone knows that ideas are cheap. But ideas that get executed—those are expensive. Even if that implementation adds precious little to the idea beyond making it material. The concept, it turns out, was never enough. It always needed implementation—and the money to do so.

Windhorse Aerospace

How Monopoly’s New Tokens Betray Its History

This week, Hasbro announced the results of an online vote on the future of tokens in the board game Monopoly. The results are startling: the boot, wheelbarrow, and thimble have been expunged from the iconic game, replaced by a Tyrannosaurus rex, rubber ducky, and penguin. Voters passed up over 60 other contenders, among them an emoji and a hashtag. It’s the latest in a series of efforts to update the game, whose onerous play sessions, old-fashioned iconography, and manual cash-counting have turned some players away.

When today’s players play games, digital or tabletop, they identify with their token or avatar. It becomes “them,” representing their agency in the game. So it’s not surprising that players would want pieces with which they feel affinity. But ironically, affinity and choice in Monopoly token selection undermine part of the history of that game, which juxtaposed capitalist excess with an era of destitution.

Monopoly went through many evolutions. It was first invented as The Landlord’s Game, an educational tool published by Lizzie Magie in 1906 to explain and advocate for the Georgist single tax—the opposite of the take on property ownership that eventually became synonymous with the game (whose design Charles Darrow derived from Magie’s original).

By the 1930s, when Monopoly became popular, economic conditions were very different. To reduce costs of production, early sets included only the paper board, money, and cards needed to play. The tokens were provided by players themselves. As Philip E. Orbanes explains in his book Monopoly: The World's Most Famous Game and How It Got That Way, Darrow’s niece and her friends used bracelet charms and Cracker Jack treats as markers in the game. The sense of choice and identification was still present, to an extent, but the feeling of making do and using things already at hand was more salient. It was the Depression, after all.

When Parker Brothers marketed the complete game that we know today, in the mid-1930s, the company elected to include four of the metal charms directly from the manufacturer that supplied the popular bracelet charms Darrow’s niece had adopted, along with another four of new design. Those original tokens—car, iron, lantern, thimble, shoe, top hat, and rocking horse—were joined by the battleship and cannon soon after.

Despite Hasbro’s attempts to modernize Monopoly, the game is really a period piece. It hides the victory of personal property ownership and rentier capitalism over the philosophy of shared land value in Georgism. And it juxtaposes the economic calamity of the Great Depression with the rising tide of industrialism and monopolism that allowed the few to influence the fates of the many. To play the game with a thimble—that symbol of domesticity and humility—instead of a T-rex connects players to that history, both in leisure and in economics. Reinventing the game might appear to make it more “relevant” to younger players. But perhaps what today’s Monopoly players really need isn’t easy familiarity and identification, but an invitation to connect to a time when the same game bore different meaning, and embraced a different experience.

Wayne Parry / AP

How Aristotle Created the Computer

The history of computers is often told as a history of objects, from the abacus to the Babbage engine up through the code-breaking machines of World War II. In fact, it is better understood as a history of ideas, mainly ideas that emerged from mathematical logic, an obscure and cult-like discipline that first developed in the 19th century. Mathematical logic was pioneered by philosopher-mathematicians, most notably George Boole and Gottlob Frege, who were themselves inspired by Leibniz’s dream of a universal “concept language,” and the ancient logical system of Aristotle.

Mathematical logic was initially considered a hopelessly abstract subject with no conceivable applications. As one computer scientist commented: “If, in 1901, a talented and sympathetic outsider had been called upon to survey the sciences and name the branch which would be least fruitful in [the] century ahead, his choice might well have settled upon mathematical logic.” And yet, it would provide the foundation for a field that would have more impact on the modern world than any other.

The evolution of computer science from mathematical logic culminated in the 1930s, with two landmark papers: Claude Shannon’s “A Symbolic Analysis of Switching and Relay Circuits,” and Alan Turing’s “On Computable Numbers, With an Application to the Entscheidungsproblem.” In the history of computer science, Shannon and Turing are towering figures, but the importance of the philosophers and logicians who preceded them is frequently overlooked.

A well-known history of computer science describes Shannon’s paper as “possibly the most important, and also the most noted, master’s thesis of the century.” Shannon wrote it as an electrical engineering student at MIT. His adviser, Vannevar Bush, built a prototype computer known as the Differential Analyzer that could rapidly calculate differential equations. The device was mostly mechanical, with subsystems controlled by electrical relays, which were organized in an ad hoc manner as there was not yet a systematic theory underlying circuit design. Shannon’s thesis topic came about when Bush recommended he try to discover such a theory.

Shannon’s paper is in many ways a typical electrical-engineering paper, filled with equations and diagrams of electrical circuits. What is unusual is that the primary reference was a 90-year-old work of mathematical philosophy, George Boole’s The Laws of Thought.

Today, Boole’s name is well known to computer scientists (many programming languages have a basic data type called a Boolean), but in 1938 he was rarely read outside of philosophy departments. Shannon himself encountered Boole’s work in an undergraduate philosophy class. “It just happened that no one else was familiar with both fields at the same time,” he commented later.

Boole is often described as a mathematician, but he saw himself as a philosopher, following in the footsteps of Aristotle. The Laws of Thought begins with a description of his goals, to investigate the fundamental laws of the operation of the human mind:

The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus, and upon this foundation to establish the science of Logic ... and, finally, to collect ... some probable intimations concerning the nature and constitution of the human mind.

He then pays tribute to Aristotle, the inventor of logic, and the primary influence on his own work:

In its ancient and scholastic form, indeed, the subject of Logic stands almost exclusively associated with the great name of Aristotle. As it was presented to ancient Greece in the partly technical, partly metaphysical disquisitions of The Organon, such, with scarcely any essential change, it has continued to the present day.

Trying to improve on the logical work of Aristotle was an intellectually daring move. Aristotle’s logic, presented in his six-part book The Organon, occupied a central place in the scholarly canon for more than 2,000 years. It was widely believed that Aristotle had written almost all there was to say on the topic. The great philosopher Immanuel Kant commented that Aristotle’s logic had been “unable to take a single step forward, and therefore seems to all appearance to be finished and complete.”

Aristotle’s central observation was that arguments were valid or not based on their logical structure, independent of the non-logical words involved. The most famous argument schema he discussed is known as the syllogism:

  • All men are mortal.
  • Socrates is a man.
  • Therefore, Socrates is mortal.

You can replace “Socrates” with any other object, and “mortal” with any other predicate, and the argument remains valid. The validity of the argument is determined solely by the logical structure. The logical words — “all,” “is,” “are,” and “therefore” — are doing all the work.

Aristotle also defined a set of basic axioms from which he derived the rest of his logical system:

  • An object is what it is (Law of Identity)
  • No statement can be both true and false (Law of Non-contradiction)
  • Every statement is either true or false (Law of the Excluded Middle)

These axioms weren’t meant to describe how people actually think (that would be the realm of psychology), but how an idealized, perfectly rational person ought to think.

Aristotle’s axiomatic method influenced an even more famous book, Euclid’s Elements, which is estimated to be second only to the Bible in the number of editions printed.

A fragment of the Elements (Wikimedia Commons)

Although ostensibly about geometry, the Elements became a standard textbook for teaching rigorous deductive reasoning. (Abraham Lincoln once said that he learned sound legal argumentation from studying Euclid.) In Euclid’s system, geometric ideas were represented as spatial diagrams. Geometry continued to be practiced this way until René Descartes, in the 1630s, showed that geometry could instead be represented as formulas. His Discourse on Method was the first mathematics text in the West to popularize what is now standard algebraic notation — x, y, z for variables, a, b, c for known quantities, and so on.

Descartes’s algebra allowed mathematicians to move beyond spatial intuitions to manipulate symbols using precisely defined formal rules. This shifted the dominant mode of mathematics from diagrams to formulas, leading to, among other things, the development of calculus, invented roughly 30 years after Descartes by, independently, Isaac Newton and Gottfried Leibniz.

Boole’s goal was to do for Aristotelean logic what Descartes had done for Euclidean geometry: free it from the limits of human intuition by giving it a precise algebraic notation. To give a simple example, when Aristotle wrote:

All men are mortal.

Boole replaced the words “men” and “mortal” with variables, and the logical words “all” and “are” with arithmetical operators:

x = x * y

Which could be interpreted as “Everything in the set x is also in the set y.”
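
Boole’s arithmetic reading of logic is easy to check mechanically. Here is a minimal sketch in Python (an illustration of the idea, not anything Boole or Shannon actually wrote) that uses 1 for “true,” or “in the set,” and 0 for “false,” and treats the equation x = x * y as the test for “all x are y”:

```python
# A minimal sketch of Boole's encoding: with values restricted to 0 and 1,
# multiplication behaves like logical AND, so x = x * y holds exactly when
# membership in x guarantees membership in y.

def all_x_are_y(x: int, y: int) -> bool:
    """Check Boole's equation x = x * y for truth values 0 and 1."""
    return x == x * y

for x in (0, 1):
    for y in (0, 1):
        print(x, y, all_x_are_y(x, y))

# The only failing case is x=1, y=0 -- a "man" who is not "mortal."
```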

The Laws of Thought created a new scholarly field—mathematical logic—which in the following years became one of the most active areas of research for mathematicians and philosophers. Bertrand Russell called The Laws of Thought “the work in which pure mathematics was discovered.”

Shannon’s insight was that Boole’s system could be mapped directly onto electrical circuits. At the time, electrical circuits had no systematic theory governing their design. Shannon realized that the right theory would be “exactly analogous to the calculus of propositions used in the symbolic study of logic.”

He showed the correspondence between electrical circuits and Boolean operations in a simple chart:

Shannon’s mapping from electrical circuits to symbolic logic (University of Virginia)

This correspondence allowed computer scientists to import decades of work in logic and mathematics by Boole and subsequent logicians. In the second half of his paper, Shannon showed how Boolean logic could be used to create a circuit for adding two binary digits.

Shannon’s adder circuit (University of Virginia)

By stringing these adder circuits together, arbitrarily complex arithmetical operations could be constructed. These circuits would become the basic building blocks of what are now known as arithmetical logic units, a key component in modern computers.
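
The logic of that adder can be sketched in a few lines of Python. This is an illustration of the general idea rather than Shannon’s own relay circuit: the sum of two binary digits is an XOR, the carry is an AND, and chaining the resulting “full adders” adds numbers of arbitrary length, which is the structure of an arithmetical logic unit in miniature.

```python
# A sketch of binary addition built from nothing but Boolean operations.
# Not Shannon's original relay circuit, just the same logic in software.

def half_adder(a: bool, b: bool):
    return a ^ b, a & b                  # (sum bit, carry bit)

def full_adder(a: bool, b: bool, carry_in: bool):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2                   # (sum bit, carry out)

def add_bits(x_bits, y_bits):
    """Add two equal-length, little-endian lists of bits by chaining full adders."""
    carry, result = False, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 3 + 1: [True, True] is binary 11, [True, False] is 01 (least significant bit first).
print(add_bits([True, True], [True, False]))   # [False, False, True], i.e. binary 100 = 4
```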

Another way to characterize Shannon’s achievement is that he was first to distinguish between the logical and the physical layer of computers. (This distinction has become so fundamental to computer science that it might seem surprising to modern readers how insightful it was at the time—a reminder of the adage that “the philosophy of one century is the common sense of the next.”)

Since Shannon’s paper, a vast amount of progress has been made on the physical layer of computers, including the invention of the transistor in 1947 by William Shockley and his colleagues at Bell Labs. Transistors are dramatically improved versions of Shannon’s electrical relays — the best known way to physically encode Boolean operations. Over the next 70 years, the semiconductor industry packed more and more transistors into smaller spaces. A 2016 iPhone has about 3.3 billion transistors, each one a “relay switch” like those pictured in Shannon’s diagrams.

While Shannon showed how to map logic onto the physical world, Turing showed how to design computers in the language of mathematical logic. When Turing wrote his paper, in 1936, he was trying to solve “the decision problem,” first identified by the mathematician David Hilbert, who asked whether there was an algorithm that could determine whether an arbitrary mathematical statement is true or false. In contrast to Shannon’s paper, Turing’s paper is highly technical. Its primary historical significance lies not in its answer to the decision problem,  but in the template for computer design it provided along the way.

Turing was working in a tradition stretching back to Gottfried Leibniz, the philosophical giant who developed calculus independently of Newton. Among Leibniz’s many contributions to modern thought, one of the most intriguing was the idea of a new language he called the “universal characteristic” that, he imagined, could represent all possible mathematical and scientific knowledge. Inspired in part by the 13th-century religious philosopher Ramon Llull, Leibniz postulated that the language would be ideographic like Egyptian hieroglyphics, except characters would correspond to “atomic” concepts of math and science. He argued this language would give humankind an “instrument” that could enhance human reason “to a far greater extent than optical instruments” like the microscope and telescope.

He also imagined a machine that could process the language, which he called the calculus ratiocinator.

If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in their hands, and say to each other: Calculemus—Let us calculate.

Leibniz didn’t get the opportunity to develop his universal language or the corresponding machine (although he did invent a relatively simple calculating machine, the stepped reckoner). The first credible attempt to realize Leibniz’s dream came in 1879, when the German philosopher Gottlob Frege published his landmark logic treatise Begriffsschrift. Inspired by Boole’s attempt to improve Aristotle’s logic, Frege developed a much more advanced logical system. The logic taught in philosophy and computer-science classes today—first-order or predicate logic—is only a slight modification of Frege’s system.

Frege is generally considered one of the most important philosophers of the 19th century. Among other things, he is credited with catalyzing what noted philosopher Richard Rorty called the “linguistic turn” in philosophy. As Enlightenment philosophy was obsessed with questions of knowledge, philosophy after Frege became obsessed with questions of language. His disciples included two of the most important philosophers of the 20th century—Bertrand Russell and Ludwig Wittgenstein.

The major innovation of Frege’s logic is that it much more accurately represented the logical structure of ordinary language. Among other things, Frege was the first to use quantifiers (“for every,” “there exists”) and to separate objects from predicates. He was also the first to develop what today are fundamental concepts in computer science like recursive functions and variables with scope and binding.

Frege’s formal language — what he called his “concept-script” — is made up of meaningless symbols that are manipulated by well-defined rules. The language is only given meaning by an interpretation, which is specified separately (this distinction would later come to be called syntax versus semantics). This turned logic into what the eminent computer scientists Allen Newell and Herbert Simon called “the symbol game,” “played with meaningless tokens according to certain purely syntactic rules.”

All meaning had been purged. One had a mechanical system about which various things could be proved. Thus progress was first made by walking away from all that seemed relevant to meaning and human symbols.

As Bertrand Russell famously quipped: “Mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.”

An unexpected consequence of Frege’s work was the discovery of weaknesses in the foundations of mathematics. For example, Euclid’s Elements — considered the gold standard of logical rigor for thousands of years — turned out to be full of logical mistakes. Because Euclid used ordinary words like “line” and “point,” he — and centuries of readers — deceived themselves into making assumptions about sentences that contained those words. To give one relatively simple example, in ordinary usage, the word “line” implies that if you are given three distinct points on a line, one point must be between the other two. But when you define “line” using formal logic, it turns out “between-ness” also needs to be defined—something Euclid overlooked. Formal logic makes gaps like this easy to spot.

This realization created a crisis in the foundation of mathematics. If the Elements — the bible of mathematics — contained logical mistakes, what other fields of mathematics did too? What about sciences like physics that were built on top of mathematics?

The good news is that the same logical methods used to uncover these errors could also be used to correct them. Mathematicians started rebuilding the foundations of mathematics from the bottom up. In 1889, Giuseppe Peano developed axioms for arithmetic, and in 1899, David Hilbert did the same for geometry. Hilbert also outlined a program to formalize the remainder of mathematics, with specific requirements that any such attempt should satisfy, including:

  • Completeness: There should be a proof that all true mathematical statements can be proved in the formal system.
  • Decidability: There should be an algorithm for deciding the truth or falsity of any mathematical statement. (This is the “Entscheidungsproblem” or “decision problem” referenced in Turing’s paper.)

Rebuilding mathematics in a way that satisfied these requirements became known as Hilbert’s program. Up through the 1930s, this was the focus of a core group of logicians including Hilbert, Russell, Kurt Gödel, John Von Neumann, Alonzo Church, and, of course, Alan Turing.

Hilbert’s program proceeded on at least two fronts. On the first front, logicians created logical systems that tried to prove Hilbert’s requirements either satisfiable or not.

On the second front, mathematicians used logical concepts to rebuild classical mathematics. For example, Peano’s system for arithmetic starts with a simple function called the successor function which increases any number by one. He uses the successor function to recursively define addition, uses addition to recursively define multiplication, and so on, until all the operations of number theory are defined. He then uses those definitions, along with formal logic, to prove theorems about arithmetic.
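
A rough sketch of that bottom-up construction, in Python rather than Peano’s notation (the function names here are mine): start from a successor operation, define addition by recursion on the successor, then define multiplication by recursion on addition.

```python
# A minimal sketch of Peano-style arithmetic: every operation is reduced to
# "add one," applied recursively. Illustrative only -- Peano worked axiomatically,
# not in a programming language.

def successor(n: int) -> int:
    return n + 1                      # stands in for Peano's primitive operation

def add(a: int, b: int) -> int:
    # a + 0 = a;  a + S(b) = S(a + b)
    return a if b == 0 else successor(add(a, b - 1))

def multiply(a: int, b: int) -> int:
    # a * 0 = 0;  a * S(b) = (a * b) + a
    return 0 if b == 0 else add(multiply(a, b - 1), a)

print(add(2, 3), multiply(2, 3))      # 5 6
```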

The historian Thomas Kuhn once observed that “in science, novelty emerges only with difficulty.” Logic in the era of Hilbert’s program was a tumultuous process of creation and destruction. One logician would build up an elaborate system and another would tear it down.

The favored tool of destruction was the construction of self-referential, paradoxical statements that showed the axioms from which they were derived to be inconsistent. A simple form of this, the “liar’s paradox,” is the sentence:

This sentence is false.

If it is true then it is false, and if it is false then it is true, leading to an endless loop of self-contradiction.

Russell made the first notable use of the liar’s paradox in mathematical logic. He showed that Frege’s system allowed self-contradicting sets to be derived:

Let R be the set of all sets that are not members of themselves. If R is not a member of itself, then its definition dictates that it must contain itself, and if it contains itself, then it contradicts its own definition as the set of all sets that are not members of themselves.

This became known as Russell’s paradox and was seen as a serious flaw in Frege’s achievement. (Frege himself was shocked by this discovery. He replied to Russell: “Your discovery of the contradiction caused me the greatest surprise and, I would almost say, consternation, since it has shaken the basis on which I intended to build my arithmetic.”)
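
The shape of the paradox can be felt in code. The sketch below is only an analogy (Russell worked in formal set theory, not Python), but if a “set” is modeled as a predicate, a function that says whether something belongs to it, then the Russell set is the predicate that holds of exactly those sets that do not contain themselves, and asking whether it contains itself never terminates:

```python
# An analogy, not Russell's formalism: model a "set" as a predicate, i.e. a
# function from candidate members to True/False. The Russell "set" contains
# exactly those sets that do not contain themselves.

def russell(s):
    return not s(s)

# Does the Russell set contain itself? Evaluating russell(russell) expands to
# not russell(russell), which expands again, forever -- Python raises a
# RecursionError, the computational shadow of the logical contradiction.
# russell(russell)   # uncomment to watch it exhaust the stack
```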

Russell and his colleague Alfred North Whitehead put forth the most ambitious attempt to complete Hilbert’s program with the Principia Mathematica, published in three volumes between 1910 and 1913. The Principia’s method was so detailed that it took over 300 pages to get to the proof that 1+1=2.

Russell and Whitehead tried to resolve the paradox Russell had found in Frege’s system by introducing what they called type theory. The idea was to partition formal languages into multiple levels or types. Each level could make reference to levels below, but not to their own or higher levels. This resolved self-referential paradoxes by, in effect, banning self-reference. (This solution was not popular with logicians, but it did influence computer science — most modern computer languages have features inspired by type theory.)

Self-referential paradoxes ultimately showed that Hilbert’s program could never be successful. The first blow came in 1931, when Gödel published his now famous incompleteness theorem, which proved that any consistent logical system powerful enough to encompass arithmetic must also contain statements that are true but cannot be proven to be true. (Gödel’s incompleteness theorem is one of the few logical results that has been broadly popularized, thanks to books like Gödel, Escher, Bach and The Emperor’s New Mind).

The final blow came when Turing and Alonzo Church independently proved that no algorithm could exist that determined whether an arbitrary mathematical statement was true or false. (Church did this by inventing an entirely different system called the lambda calculus, which would later inspire computer languages like Lisp.) The answer to the decision problem was negative.

Turing’s key insight came in the first section of his famous 1936 paper, “On Computable Numbers, With an Application to the Entscheidungsproblem.” In order to rigorously formulate the decision problem (the “Entscheidungsproblem”), Turing first created a mathematical model of what it means to be a computer (today, machines that fit this model are known as “universal Turing machines”). As the logician Martin Davis describes it:

Turing knew that an algorithm is typically specified by a list of rules that a person can follow in a precise mechanical manner, like a recipe in a cookbook. He was able to show that such a person could be limited to a few extremely simple basic actions without changing the final outcome of the computation.

Then, by proving that no machine performing only those basic actions could determine whether or not a given proposed conclusion follows from given premises using Frege’s rules, he was able to conclude that no algorithm for the Entscheidungsproblem exists.

As a byproduct, he found a mathematical model of an all-purpose computing machine.
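
The machine Davis describes is simple enough to sketch in a few lines of Python. This is an illustration of the flavor of Turing’s model, not his 1936 formulation: a tape of symbols, a head that reads and writes one cell at a time, and a finite table of rules. The particular rule table below, which merely appends a 1 to a run of 1s, is an invented example.

```python
# A toy Turing-style machine: rules map (state, symbol) -> (write, move, next state).
from collections import defaultdict

def run(rules, tape, state="start", halt="halt", max_steps=1000):
    cells = defaultdict(lambda: "_", enumerate(tape))   # blank cells read as "_"
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Invented example: scan right over the 1s, write one more 1 on the first blank, halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run(rules, "111"))   # 1111
```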

Next, Turing showed how a program could be stored inside a computer alongside the data upon which it operates. In today’s vocabulary, we’d say that he invented the “stored-program” architecture that underlies most modern computers:

Before Turing, the general supposition was that in dealing with such machines the three categories — machine, program, and data — were entirely separate entities. The machine was a physical object; today we would call it hardware. The program was the plan for doing a computation, perhaps embodied in punched cards or connections of cables in a plugboard. Finally, the data was the numerical input. Turing’s universal machine showed that the distinctness of these three categories is an illusion.

This was the first rigorous demonstration that any computing logic that could be encoded in hardware could also be encoded in software. The architecture Turing described was later dubbed the “Von Neumann architecture” — but modern historians generally agree it came from Turing, as, apparently, did Von Neumann himself.

Although, on a technical level, Hilbert’s program was a failure, the efforts along the way demonstrated that large swaths of mathematics could be constructed from logic. And after Shannon and Turing’s insights—showing the connections between electronics, logic and computing—it was now possible to export this new conceptual machinery over to computer design.

During World War II, this theoretical work was put into practice, when government labs conscripted a number of elite logicians. Von Neumann joined the atomic bomb project at Los Alamos, where he worked on computer design to support physics research. In 1945, he wrote the specification of the EDVAC—the first stored-program, logic-based computer—which is generally considered the definitive source guide for modern computer design.

Turing joined a secret unit at Bletchley Park, northwest of London, where he helped design computers that were instrumental in breaking German codes. His most enduring contribution to practical computer design was his specification of the ACE, or Automatic Computing Engine.

As the first computers to be based on Boolean logic and stored-program architectures, the ACE and the EDVAC were similar in many ways. But they also had interesting differences, some of which foreshadowed modern debates in computer design. Von Neumann’s favored designs were similar to modern CISC (“complex”) processors, baking rich functionality into hardware. Turing’s design was more like modern RISC (“reduced”) processors, minimizing hardware complexity and pushing more work to software.

Von Neumann thought computer programming would be a tedious, clerical job. Turing, by contrast, said computer programming “should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself.”

Since the 1940s, computer programming has become significantly more sophisticated. One thing that hasn’t changed is that it still primarily consists of programmers specifying rules for computers to follow. In philosophical terms, we’d say that computer programming has followed in the tradition of deductive logic, the branch of logic discussed above, which deals with the manipulation of symbols according to formal rules.

In the past decade or so, programming has started to change with the growing popularity of machine learning, which involves creating frameworks for machines to learn via statistical inference. This has brought programming closer to the other main branch of logic, inductive logic, which deals with inferring rules from specific instances.

Today’s most promising machine-learning techniques use neural networks, which were first invented in the 1940s by Warren McCulloch and Walter Pitts, whose idea was to develop a calculus for neurons that could, like Boolean logic, be used to construct computer circuits. Neural networks remained esoteric until decades later, when they were combined with statistical techniques that allowed them to improve as they were fed more data. Recently, as computers have become increasingly adept at handling large data sets, these techniques have produced remarkable results. Programming in the future will likely mean exposing neural networks to the world and letting them learn.
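
The original McCulloch-Pitts idea can be sketched in a few lines (again an illustration, not their notation): a “neuron” fires when the weighted sum of its inputs reaches a threshold, and with hand-picked weights a single neuron reproduces the Boolean operations of a circuit. What modern machine learning adds is that the weights are no longer set by hand but learned from data.

```python
# A minimal McCulloch-Pitts-style neuron: output 1 if the weighted sum of the
# inputs reaches the threshold, else 0. With fixed weights it acts as a logic gate.

def neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    neuron([a],    [-1],   threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))   # 1 1 0
```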

This would be a fitting second act to the story of computers. Logic began as a way to understand the laws of thought. It then helped create machines that could reason according to the rules of deductive logic. Today, deductive and inductive logic are being combined to create machines that both reason and learn. What began, in Boole’s words, with an investigation “concerning the nature and constitution of the human mind,” could result in the creation of new minds—artificial minds—that might someday match or even exceed our own.

Wikimedia / donatas1205 / Billion Photos / Evgeny Karandaev / The Atlantic

Hacking Tools Get Peer Reviewed, Too

In September 2002, less than a year after Zacarias Moussaoui was indicted by a grand jury for his role in the 9/11 attacks, Moussaoui’s lawyers lodged an official complaint about how the government was handling digital evidence. They questioned the quality of the tools the government had used to extract data from some of the more than 200 hard drives that were submitted as evidence in the case—including one from Moussaoui’s own laptop.

When the government fired back, it leaned on a pair of official documents for backup: two reports produced by the National Institute of Standards and Technology (NIST) that described the workings of the software tools in detail. The documents showed that the tools were the right ones for extracting information from those devices, the government lawyers argued, and that they had a track record of doing so accurately.

It was the first time a NIST report on a digital-forensics tool had been cited in a court of law. That its first appearance was in such a high-profile case was a promising start for NIST’s Computer Forensics Tool Testing (CFTT) project, which had begun about three years prior. Its mission for nearly two decades has been to build a standardized, scientific foundation for evaluating the hardware and software regularly used in digital investigations.

Some of the tools investigators use are commercially available for download online, for relatively cheap or even free; others are a little harder for a regular person to get their hands on. They’re essentially hacking tools: computer programs and gadgets that hook up to a target device and extract its contents for searching and analysis.

“The digital evidence community wanted to make sure that they were doing forensics right,” said Barbara Guttman, who oversees the Software Quality Group at NIST. That community is made up of government agencies—like the Department of Homeland Security or the National Institute of Justice, the Justice Department’s research arm—as well as state and local law enforcement agencies, prosecuting and defense attorneys, and private cybersecurity companies.

In addition to setting standards for digital evidence-gathering, the reports help users decide which tool they should use, based on the electronic device they’re looking at and the data they want to extract. They also help software vendors correct bugs in their products.

Today, the CFTT’s decidedly retro webpage—emblazoned with a quote from an episode of Star Trek: The Next Generation—hosts dozens of detailed reports about various forensics tools. Some reports focus on tools that recover deleted files, while others cover “file carving,” a technique that can reassemble files that are missing crucial metadata.

The largest group of reports focuses on acquiring data from mobile devices. Smartphones have become an increasingly valuable source of evidence for law enforcement and prosecutors, because they’re now vast stores of private communication and information—but the sensitive nature of that data has made the government’s attempts to access it increasingly controversial.

“It’s a very fast-moving space, and it’s really important,” Guttman said. “Any case could potentially involve a mobile phone.”

It’s an odd feeling to flip through these public, unredacted government reports, which lay bare the frightful capabilities of commercially available mobile-extraction software. A report published just two weeks ago, for example, describes a tool called MOBILedit Forensic Express, which is made by San Francisco-based Compelson Labs. The tool works on Apple iPhones 6, 6S, and 6S Plus, two versions of Apple’s iPads, as well as several Samsung Galaxy smartphones and tablets. It can extract the following types of information from a mobile device:

… deleted data, call history, contacts, text messages, multimedia messages, files, events, notes, passwords for wifi networks, reminders and application data from apps such as Skype, Dropbox, Evernote, Facebook, WhatsApp, Viber, etc.

The product page for MOBILedit Forensic Express claims the software is capable of cracking passwords and PINs to get into locked phones, but it’s not clear how effective that feature is. Getting into a locked, encrypted smartphone—especially an iPhone—is difficult, and it’s unlikely MOBILedit can bypass every modern smartphone’s security system.

When the FBI tried to break into an iPhone 5C it found at the scene of the 2015 San Bernardino shooting, it initially wasn’t able to access the phone’s data, and asked Apple for help. (Presumably, the FBI would have had access to MOBILedit and other commercial tools.) Apple refused, and the FBI brought a lawsuit against the company—but withdrew it when agents finally found a way in.

Guttman says NIST doesn’t address phone encryption in its testing. “Encryption is certainly an issue for law enforcement access to phones and other digital media, but that issue is outside of our expertise and the type of work we do, which is focused on software quality and software understanding,” she said.

The NIST report on MOBILedit describes how the tool fared against different combinations of smartphones and mobile operating systems. It found, for example, that the tool only obtained the first 69 characters in particularly long iOS notes. Besides that issue and five others, though, the tool largely behaved as it promised it would on iOS devices, the report says.

“None of the tools are perfect,” Guttman said. “You really need to understand the strengths and limitations of the tools you’re using.”

Unlike some more complex tools, MOBILedit doesn’t require an investigator to open up a smartphone and manipulate its internals directly—the software connects to the target phone with a cord, just like a user might to update his or her device. But law enforcement doesn’t necessarily need to force its way into a phone that it’s interested in searching, either by cracking open its case or by brute-forcing its passcode.

In certain cases, officers can just ask—or pressure—the phone’s owner to open it. That’s what happened when Sidd Bikkannavar, a NASA engineer, was stopped by a customs agent on his way back to his native United States from a vacation: The officer just asked Bikkannavar to turn over his PIN, wrote it down, and took his smartphone to another room for about half an hour. When the agent returned the phone, he said he’d run “algorithms” to search for threats. It’s possible Bikkannavar’s phone was searched with one of the mobile acquisition tools that DHS has tested.

The government’s growing library of forensic tool reports is supplemented by other testers. Graduate students at the Forensic Science Center at Marshall University in West Virginia, for example, do some of the same sorts of testing that NIST does. They often work with West Virginia State Police, which runs its own digital forensics lab on campus, to test extraction tools before they’re deployed. They post their results online, just like NIST does, to grow the body of shared knowledge about these tools.

“If we weren’t validating our software and hardware systems, that would come up in court,” said Terry Fenger, the director of Marshall’s Forensic Science Center. “Part of the validation process is to show the courts that the i’s were dotted and t’s crossed.”

A new NIST project called “federated testing” will make it easier for others to pitch in with their own test reports. It’s a free, downloadable disk image that contains all the tools needed to test certain types of forensic software and automatically generate a report. The first report from the project came in recently—from a public defender’s office in Missouri, an indication that digital forensics isn’t just the realm of law enforcement.

I asked Fenger if the technical information being made public in these validation reports could help hackers or criminals circumvent them, but he said the validation data probably wouldn’t be of much value to a malicious hacker. “It’s more or less just the nuts and bolts of how things work,” Fenger said. “Most of the hackers out there are way beyond the level of these validations.”

A hard drive taken in as evidence at the Silicon Valley Regional Computer Forensics Lab. (Kim Kulish / Corbis / Getty)

Abu Dhabi to Los Angeles: 17 Hours Without a Laptop

Updated at 8:45 a.m.

The Department of Homeland Security will no longer allow passengers to carry electronics onto flights to the U.S. from 10 major airports in the Middle East and North Africa. Devices larger than a mobile phone—including laptops, tablets, and cameras—will need to be placed in checked baggage.

The airports are located in eight countries: Egypt, Jordan, Kuwait, Morocco, Qatar, Saudi Arabia, Turkey, and the United Arab Emirates. (Two airports were designated in Saudi Arabia and in the UAE.) Nine airlines—none of them American or European—will be responsible for enforcing the rules. The Department of Homeland Security said about 50 flights a day will be affected by the rules.

The ban was communicated to the relevant airlines and airports at 3 a.m. Eastern on Tuesday, in the form of an emergency amendment to a security directive. From that point, the airlines and airports will have 96 hours to comply. If they fail to, a senior administration official told reporters, “we will work with the Federal Aviation Administration to pull their certificate, and they will not be allowed to fly to the United States.”

The ban on larger electronics was developed in response to a “continuing threat to civil aviation,” according to another official, who would not say whether the threat had developed recently, or when the ban might be lifted. DHS is concerned about a trend of bombs being disguised as consumer items, like shoes, a printer, and even a laptop. The official said the data on checked electronics would not be searched.

Items in checked baggage are generally subjected to intense screening, and security officers sometimes open bags to look through them by hand. Requiring electronics to be checked could allow the Transportation Security Administration to scan them more closely than they would otherwise. In 2014, TSA officers began asking passengers to power on their devices to prove that they’re real—and not just a clever disguise for an explosive.

But relegating most electronics to a plane’s cargo hold comes with potential dangers. In the past, the Federal Aviation Administration has expressed concerns about checking too many lithium-ion batteries—the sort that power laptops—because they can catch fire. A senior administration official said the FAA is sharing information about “best practices” for transporting electronics with the affected airlines.

It will be up to airlines to differentiate between smartphones, which will be allowed in airplane cabins, and tablets, which will need to be checked. Some large smartphones, often called “phablets,” blur this boundary.

Royal Jordanian, the state airline of Jordan, was the first to notify its passengers of the new rules on Monday afternoon. The carrier, which operates flights to New York City, Detroit, and Chicago multiple times a week, announced the change in a tweet—which it went on to delete several hours later.

The tweet sparked hours of confusion, during which U.S. officials were tight-lipped. The Jordanian airline’s statement said only that the new policy “follow[ed] instructions from the concerned U.S. departments.”

A spokesperson for the Jordanian Embassy in Washington, D.C. said the airline’s policy was requested by the State Department. A senior administration official told reporters that the State Department had begun notifying governments of the upcoming ban on Sunday.

Edward Hasbrouck, a travel expert and consultant to The Identity Project, said the government has a history of announcing big policy changes with little notice. “This reminds me of the chaos when the DHS started restricting liquids, which occurred with no warning and people found out only at the airport,” he said.

Arnd Wiegmann / Reuters

The Like Button Ruined the Internet

Here’s a little parable. A friend of mine was so enamored of Google Reader that he built a clone when it died. It was just like the original, except that you could add pictures to your posts, and you could Like comments. The original Reader was dominated by conversation, much of it thoughtful and earnest. The clone was dominated by GIFs and people trying to be funny.

I actually built my own Google Reader clone. (That’s part of the reason this friend and I became friends—we both loved Reader that much.) But my version was more conservative: I never added any Like buttons, and I made it difficult to add pictures to comments. In fact, it’s so hard that I don’t think there has ever been a GIF on the site.

I thought about building new social features into my clone until I heard my friend’s story. The first rule of social software design is that more engagement is better, and that the way you get engagement is by adding stuff like Like buttons and notifications. But the last thing I wanted was to somehow hurt the conversation that was happening, because the conversation was the whole reason for the thing.

Google Reader was engaging, but it had few of the features we associate with engagement. It did a bad job of giving you feedback. You could, eventually, Like articles that people shared, but the Likes went into an abyss; if you wanted to see new Likes come in, you had to scroll back through your share history, keeping track in your head of how many Likes each share had the last time you looked. The way you found out about new comments was similar: You navigated to reader.google.com and clicked the “Comments” link; the comments page was poorly designed and it was hard to know exactly how many new comments there had been. When you posted a comment it was never clear that anyone liked it, let alone that they read it.

When you are writing in the absence of feedback you have to rely on your own judgment. You want to please your audience, of course. But to do that you have to imagine what your audience will like, and since that’s hard, you end up leaning on what you like.

Once other people start telling you what they like via Like buttons, you inevitably start hewing to their idea of what’s good. And since “people tend to be extremely similar in their vulgar and prurient and dumb interests and wildly different in their refined and aesthetic and noble interests,” the stuff you publish will start looking a lot like the stuff that everybody else publishes, because everybody sort of likes the same thing and everybody is fishing for Likes.

What I liked about Reader was that not knowing what people liked gave you a peculiar kind of freedom. Maybe it’s better described as plausible deniability: You couldn’t be sure that your friends didn’t like your latest post, so your next post wasn’t constrained by what had previously done well or poorly in terms of a metric like Likes or Views. Your only guide was taste and a rather coarse model of your audience.

Newspapers and magazines used to have a rather coarse model of their audience. It used to be that they couldn’t be sure how many people read each of their articles; they couldn’t see on a dashboard how much social traction one piece got as against the others. They were more free to experiment, because it was never clear ex-ante what kind of article was likely to fail. This could, of course, lead to deeply indulgent work that no one would read; but it could also lead to unexpected magic.

Is it any coincidence that the race to the bottom in media—toward clickbait headlines, toward the vulgar and prurient and dumb, toward provocative but often exaggerated takes—has accelerated in lock-step with the development of new technologies for measuring engagement?

You don’t have to spend more than 10 minutes talking to a purveyor of content on the web to realize that the question keeping them up at night is how to improve the performance of their stories against some engagement metric. And it’s easy enough to see the logical consequence of this incentive: At the bottom of article pages on nearly every major content site is an “Around the Web” widget powered either by Outbrain or Taboola. These widgets are aggressively optimized for clicks. (People do, in fact, click on that stuff. I click on that stuff.) And you can see that it’s mostly sexy, sexist, and sensationalist garbage. The more you let engagement metrics drive editorial, the more your site will look like a Taboola widget. That’s the drain it all circles toward.

And yet we keep designing software to give publishers better feedback about how their content is performing so that they can give people exactly what they want. This is true not just for regular media but for social media too—so that even an 11-year-old gets to develop a sophisticated sense of exactly what kind of post is going to net the most Likes.

In the Google Reader days, when RSS ruled the web, online publications—including blogs, which thrived because of it—kept an eye on how many subscribers they had. That was the key metric. They paid less attention to individual posts. In that sense their content was bundled: It was like a magazine, where a collection of articles is literally bound together and it’s the collection that you’re paying for, and that you’re consuming. But, as the journalist Alexis Madrigal pointed out to me, media on the web has come increasingly un-bundled—and we haven’t yet fully appreciated the consequences.

When content is bundled, the burden is taken off of any one piece to make a splash; the idea is for the bundle—in an accretive way—to make the splash. I think this has real consequences. I think creators of content bundles don’t have as much pressure on them to sex up individual stories. They can let stories be somewhat unattractive on their face, knowing that readers will find them anyway because they’re part of the bundle. There is room for narrative messiness, and for variety—for stuff, for instance, that’s not always of the moment. Like an essay about how oranges are made, one so long that it has to be serialized in two parts.

Conversely, when media is unbundled, which means each article has to justify its own existence in the content-o-sphere, more pressure than most individual stories can bear is put on those individual stories. That’s why so much of what you read today online has an irresistible claim or question in the title that the body never manages to cash in. Articles have to be their own advertisements—they can’t rely on the bundle to bring in readers—and the best advertising is salacious and exaggerated.

Madrigal suggested that the newest successful media bundle is the podcast. Perhaps that’s why podcasts have surged in popularity and why you find such a refreshing mixture of breadth and depth in that form: Individual episodes don’t matter; what matters is getting subscribers. You can occasionally whiff, or do something weird, and still be successful.

Imagine if podcasts were Twitterized in the sense that people cut up and reacted to individual segments, say a few minutes long. The content marketplace might shift away from the bundle—shows that you subscribe to—and toward individual fragments. The incentives would evolve toward producing fragments that get Likes. If that model came to dominate, such that the default was no longer to subscribe to any podcast in particular, it seems obvious that long-running shows devoted to niches would starve.

* * *

People aren’t using my Reader clone as much anymore. Part of it is that it’s just my friends on there, and my friends all have jobs now, and some of them have families, but part of it, I think, is that every other piece of software is so much more engaging, in the now-standard dopaminergic way. The loping pace of a Reader conversation—a few responses per day, from a few people, at the very best—isn’t much match for what happens on Twitter or Facebook, where you start getting likes in the first few minutes after you post.

But the conversations on Reader were very, very good.

Beck Diefenbach / Reuters

What Happens When the President Is a Publisher, Too?

It had to be Twitter. What other platform could a member of Congress use during a high-profile congressional hearing to keep tabs on the president’s reaction to that very hearing?

Not TV. Not radio. Certainly not a crinkly newspaper full of yesterday’s news.

But on Twitter, it’s possible to be sitting in a room full of your colleagues, surreptitiously scrolling on your mobile phone, and notice that, hey, whaddya know, President Donald Trump is tweeting again.

At a House Intelligence Committee hearing on Monday, Jim Himes decided to share some of those tweets with the men who were there being questioned—the FBI director James Comey and the NSA director Mike Rogers—along with the rest of the room, and the public. Here’s how it went down:

Himes: Gentlemen, in my original questions to you, I asked you whether the intelligence community had undertaken any sort of study to determine whether Russian interference had had any influence on the electoral process, and I think you told me the answer was no.

Rogers: Correct. We said the U.S. intelligence community does not do analysis or reporting on U.S. political process or U.S. public opinion...

Himes: So, thanks to the modern technology that’s in front of me right here, I’ve got a tweet from the president an hour ago saying, “The NSA and FBI tell Congress that Russia did not influence the electoral process.” So that’s not quite accurate, that tweet?

Comey: I’m sorry, I haven’t been following anybody on Twitter while I’ve been sitting here.

Himes: I can read it to you. It says, “The NSA and FBI tell Congress that Russia did not influence the electoral process.” This tweet has gone out to millions of Americans—16.1 million to be exact. Is the tweet as I read it to you—“The NSA and FBI tell Congress that Russia did not influence the electoral process”—is that accurate?

Comey: Well. It’s hard for me to react. Let me just tell you what we understand... What we’ve said is: We’ve offered no opinion, have no view, have no information on potential impact because it’s never something that we looked at.

Himes: Okay. So it’s not too far of a logical leap to conclude that the assertion—that you have told the Congress that there was no influence on the electoral process—is not quite right.

Comey: It certainly wasn’t our intention to say that today because we don’t have any information on that subject. That’s not something that was looked at.

The most telling aspect of this exchange is the nearly three seconds it takes for Comey and Rogers to react to Himes. They seem dumbfounded at first. Rogers does a little shake of his head and smirks. And, for once, it seems the moment of disbelief wasn’t—or at least wasn’t only—directed at the substance of the president’s tweet, but at the very fact of it.

In 2017, the president’s habit of spreading misinformation on Twitter is being fact-checked, nearly in real time, by members of Congress. Surely, a president has never before inserted himself into a congressional hearing this way?

It’s really worth watching the video.

As my colleague McKay Coppins wrote, we don’t actually know whether the president personally authored these tweets. “According to the @POTUS Twitter bio, they are mostly written by Trump’s social media director Dan Scavino. But if nothing else, the aide was taking his cues from the boss,” Coppins wrote.

Though Trump’s bombastic Twitter presence is a well-worn part of his shtick—or, um, personal brand—Monday’s episode shows he’s increasingly leveraging it for a new kind of punditry. (Possibly also a new kind of propaganda.) True to his reality-television instincts, Trump appeared primed for a fight Monday morning, before the hearing even began, when he used his personal Twitter account to deride coverage of the Russia scandal as “FAKE NEWS and everyone knows it!”

What everyone actually knows, or should by now, is that while Trump claims to hate “the media,” he is himself an active publisher. And when the Trump administration talks about the press as “the opposition,” that may be because Trump is himself competing with traditional outlets in the same media environment, using the same publishing tools. It’s no wonder there was so much speculation about Trump possibly launching his own TV network to rival Fox. It’s also no wonder that Trump recently suggested he owes his presidency to Twitter, which he has used to blast critics and spout conspiracy theories since at least 2011.

“I think that maybe I wouldn’t be here if it wasn’t for Twitter,” he told the Fox News host Tucker Carlson during an interview that aired last week, “because I get such a fake press, such a dishonest press.”

“So the news is not honest,” Trump went on. “Much of the news. It’s not honest. And when I have close to 100 million people watching me on Twitter, including Facebook, including all of the Instagram, including POTUS, including lots of things—but we have—I guess pretty close to 100 million people. I have my own form of media.”

Trump’s right. He does have his own form of media. But he should also know this: Some Americans may be ambivalent about the truth. Politicians lie all the time and get away with it. But nobody likes the dishonest media.

Screenshot from C-SPAN: FBI Director James Comey and NSA Director Mike Rogers reacting to Congressman Jim Himes telling them, "I’ve got a tweet from the president."

How the Rise of Electronics Has Made Smuggling Bombs Easier


Last February, a Somali man boarded a Daallo Airlines flight in Mogadishu, Somalia’s capital. Twenty minutes after the flight took off, the unassuming laptop in his carry-on bag detonated, blowing a hole in the side of the plane. The bomber was killed, and two others were injured. But if the aircraft had reached cruising altitude, an expert told CNN, the bomb would have ignited the plane’s fuel tank and caused a second, potentially catastrophic blast.

The Daallo explosion was one of a handful of terrorist attacks that the Department of Homeland Security cited to help explain why it introduced new rules for some passengers flying to the U.S. with electronics. Starting this week, travelers on U.S.-bound flights from 10 airports in the Middle East and North Africa will be required to check all electronic items larger than a smartphone.

A senior administration official told reporters Monday night that the indefinite electronics ban was a response to continuing threats against civil aviation, but wouldn’t elaborate on the specific nature or the timing of the threat. Adam Schiff, the ranking member of the House Intelligence Committee, said in a statement that the ban was “necessary and proportional to the threat,” and that terrorists continue to come up with “creative ways to try and outsmart detection methods.”

The specificity of the new rules could hint at the nature of the intelligence they’re based on, says Justin Kelley, the vice president for operations at MSA Security, a large private firm that offers explosive-screening services. The ban could simply be aimed at separating items like laptop bombs from the passengers who would need to access them in order to set them off, Kelley says. A transcript of our conversation, lightly edited for concision and clarity, follows.


Kaveh Waddell: How much has bomb-smuggling technology changed since Richard Reid tried to hide explosives in a shoe in 2001?

Justin Kelley: It’s pretty common, and it was common even before Reid. But now, everything we have on our person has some sort of power source to it, and that’s what they’re looking for. Everything from a laptop to a phone to an iPad—most of those restricted items they want now in checked baggage—they all have a power source, which is what bombers are generally looking for: something to kick off their device.

Waddell: What’s most difficult about designing a bomb that’s hard to detect, and small enough to fit into something like a shoe or underwear?

Kelley: The bombs in underwear were pretty rudimentary—they needed a human element. But if we’re looking at an electronic device, they can be done a whole host of different ways. They don’t necessarily need an actor to set off the device.

The electronic version has been around since we’ve had cellphones. Even before cellphones, you could use a greeting card that sings a holiday tune or a birthday wish—those use power as well. There’s a whole host of things that can be used to initiate a device. But now that we travel with all these electronics on our person, we need to look even harder.

Waddell: DHS said the new ban was created in response to a threat. How do authorities monitor the state of adversaries’ bomb-making skills to have a sense of what to watch for in airports?

Kelley: That intelligence could have been gathered through social media, or people they’re monitoring. Terrorist groups are always changing and adapting to what we put forth as security principles, so this is just another step. When liquids were banned from planes, that was also a product of intelligence, and I’m not surprised they don’t want to disclose the source.

Waddell: What kind of extra screening might electronics be subject to in checked bags that they wouldn’t be if they were carried on?

Kelley: Anything on those planes is going to be screened, whether it’s passenger-carried or cargo. This may have been driven by intelligence that someone would use power sources during a flight, that there would need to be some human interaction.

If they were interested in banning electronics entirely, there would be a stronger restriction.

Waddell: So you’re saying that it’s not necessarily that it’s easier to detect a bomb in checked baggage—it could be that separating a person from their electronics breaks a link that’s necessary to use them as a bomb?

Kelley: Yeah, it tells me they want to separate the human element from the device. That’s what jumped out at me. That could be part of the reason.

Waddell: Would it be that difficult to check a laptop that’s set to detonate at a certain time or altitude?

Kelley: No, in fact, we have seen that in the past. That’s why I think this specific ban was driven by specific intelligence that they’ve gathered. The Reid-type device was human driven. If their concern was about a device in the belly of the plane, I think they’d have imposed other restrictions, but this just says, “You can fly with it; you just can’t fly with it on your person.”

Waddell: The scope of the ban is pretty limited right now: It only covers direct flights to the U.S. from 10 airports in the Middle East and North Africa. Could this be broadened at some point? Might this be a pilot program that’ll end up being implemented elsewhere?

Kelley: I think that comes down to how comfortable DHS is with security in these host countries. Our hope is that other countries follow TSA-like guidelines for screening—but some don’t. Those that are at or near our standard wouldn’t be part of the ban.

Waddell: Earlier today, the U.K. introduced a similar ban. If enough other countries jump on board, could this become standard practice?

Kelley: No doubt. And once someone steps up with a concern, with real-time information sharing, I think you might see quite a few more countries jump on as well.

Dado Ruvic / Reuters

What Happens If Uber Fails?


The thing about a market bubble is that you don’t really know how big it is until it pops. So it doesn’t pop, and doesn’t pop, and doesn’t pop, until one day it finally pops. And by then it’s too late.

The dot-com collapse nearly two decades ago erased $5 trillion in investments. Ever since, people in Silicon Valley have tried to guess exactly when the next tech bubble will burst, and whether the latest wave of investment in tech startups will lead to an economic crash. “A lot of people who are smarter than me have come to the conclusion that we’re in a bubble,” said Rita McGrath, a professor of management at Columbia Business School. “What we’re starting to see is the early signals.”

Those signals include businesses closing or being acquired, venture capitalists making fewer investments, fewer companies going public, stocks that appear vastly overpriced, and startup valuations falling.

Then you have a company like Uber, valued at $70 billion despite massive losses, and beleaguered by one scandal after another. In 2017 alone Uber has experienced a widely publicized boycott that led to an estimated half-a-million canceled accounts, high-profile allegations of sexual harassment and intellectual property theft, a leaked video showing its CEO cursing at an Uber driver, a blockbuster New York Times scoop detailing the company’s secret program to trick law enforcement, and multiple senior leaders either resigning or being forced out.

“As someone trying to raise [venture capital] right now, I am very concerned that this is going to implode the entire industry,” one person wrote in a forum on the technology-focused website Hacker News earlier this week. It’s understandable that investors and entrepreneurs would be “watching this Uber situation unfold closely,” as Mike Isaac, the New York Times reporter, put it in a tweet about the Hacker News post. Especially at a time when rising interest rates give investors more options, and ostensibly make highly valued pre-IPO companies like Uber less attractive.

But how much is the tech industry’s fate actually wrapped up in Uber’s? If Uber implodes, will the bubble finally pop? It’s a question that’s full of assumptions: Uber’s fate is uncertain, and nobody really knows what kind of bubble we’re in right now. Yet it’s a question still worth teasing apart. Trillions of dollars, thousands of jobs, and the future of technology all hang in the balance.

“These bubbles swing back and forth in fear and greed,” McGrath told me, “and when Uber stumbles, it triggers fear. Part of this bubble is created basically in a low-interest-rate environment. Money from all over the world is pouring into this sector because it has nowhere else to go.”

This is a key point—perhaps the key point that will determine whether Uber lives or dies. Uber isn’t valued at $70 billion because it is actually worth $70 billion. Its valuation is that high despite the fact that it’s not profitable, and despite the fact that it has little protection from competitors baked into what it is and does. Uber’s valuation, in other words, is a reflection of the global marketplace, not of Uber’s own durability as a company.

“To me, it’s a big question of whether they are going to be able to sustain the business model,” McGrath told me. “They have been very disruptive to incumbents, but there are no significant barriers to entry to their model. If you switch [services], you maybe have to re-enter your credit-card information and download a new app, but from there you’re good to go. There are pundits who say it’s only a matter of time.”

And then what? If Uber goes kablooey, what happens to all the other unicorns—the 187 startups valued at $1 billion or more apiece, according to the latest count by the venture capital database CB Insights?

Despite Uber’s influence, it’s unlikely that the company’s potential failure would set off too terrible of a chain reaction in Silicon Valley, several economists told me. “You need to make a distinction,” McGrath said, “between the startups that are really creating value and have something that will protect them in the event of imitation—versus the ones that are built on a lot of assumptions that really haven’t been tested yet, and money has been pouring into them because [it] has nowhere else to go.”

One instructive example is Theranos, the company known for its needle-free blood-testing technology. A few short years ago, it was roundly considered a Silicon Valley success story, valued at some $9 billion. Then, The Wall Street Journal revealed in a deeply investigated series of stories that the technology didn’t actually work as claimed—information that led to federal sanctions, lab closures, and ultimately Theranos’s announcement that it would leave the medical-testing business altogether. Theranos failed spectacularly, but it didn’t pop the bubble. So perhaps that’s a sign that the bubble isn’t going to pop all at once the way it did last time. The key is whether investors see a significant failure—like Theranos, and maybe Uber—as a one-off, or as a reflection of a systemic problem bubbling under the surface.

“One hypothesis could be that if a large pre-IPO tech company fails, then the source of capital for the others will start to shrink,” said Arun Sundararajan, a professor at New York University’s Stern School of Business. “That’s part of, I am sure, what happened during the dot-com bubble. But we are in a very different investment environment now.”

There are two big changes to consider. For one, practically every company is now a technology company. Silicon Valley used to make technology that mainstream consumers didn’t care about—or didn’t know that they even used. Not so, today. Technology is pervasive throughout the economy and throughout culture, which creates a potential protective effect for investors. “The investments into these companies are creating new business models in massive swaths of the economy, as opposed to being insulated,” Sundararajan said. “Also, a bulk of the money going into these companies is coming from players who are not dependent on the success of tech alone for their future financing.”

This is the second change to consider: Whereas tech investments were once made by a relatively small group of venture capitalists who funded companies that then went public, that’s no longer the case. “Even if you put Uber aside and look at some of the larger recipients of pre-IPO investment over the last few years—it’s a very different cast of characters,” Sundararajan told me. “There are large private equity firms that are much more diversified than, say, Kleiner Perkins was 20 years ago.”

Sundararajan’s referring to Kleiner Perkins Caufield & Byers, the venture capital firm that “all but minted money” in the 1990s, as the writer Randall Smith put it. Back in the day, the company made its investors enormous sums of money with early investments in Google and Amazon, but has stumbled in recent years.

All of this means that the investment infrastructure supporting technology companies has changed, and that’s largely because of how technology’s place in culture has changed. “If Uber fails—and there’s no guarantee that it will—all of Uber’s investors won’t say, ‘Were we wrong to invest in tech?’” Sundararajan said. “They will say, ‘Did we misread the capabilities of this one company?’”

If anything, Sundararajan says, Uber is getting a tough, public lesson in how not to run a business. The company, for its part, is doubling down on attempts to rebuild its image. In a conference call with reporters on Tuesday, executives for the ride-sharing service expressed support for Travis Kalanick, Uber’s embattled co-founder and CEO.

“By now, it’s becoming increasingly apparent that the issues that are putting Uber in the news frequently don’t have much to do with either its business model or its identity as a tech company,” Sundararajan said. “If there is a serious reduction in Uber’s value over the next year, the lesson that people will take away is one of better corporate governance for early-stage tech companies—so that, as they get into a later stage, they are not in a position where the tradeoffs they made early on ended up being more harmful than good.”

Meanwhile, many investors are shifting their focus away from platforms and to the underlying technologies that, if they succeed, will outlast any given brand—for example, sensors for self-driving cars, autonomous medical technologies, myriad robotics, and so on. This, too, has an insulating effect against any single company’s failure.

Uber, which may or may not fail, may or may not bring down the rest of the economy with it. But the bubble is still likely to burst sooner or later. “There was this fog hanging over Silicon Valley in 2001,” Roelof Botha, a partner with VC firm Sequoia Capital, told Bloomberg Businessweek last fall. “And there’s a fog hanging over it now. There’s no underlying wave of growth.”

Since 2015, CB Insights has counted 117 down rounds in tech—instances in which a company raises new funding at a lower valuation than it commanded in a previous round (selling new shares at, say, $6 apiece after once selling them at $10). A down round doesn’t mean a company will fail, but it does signal a warning about the market it’s operating in.

The lesson here is that people trying to raise venture capital shouldn’t be worried about what Uber, specifically, might do to the economy if the company fails. There are plenty of other hints that a market correction is already well under way. The question now is whether the bubble will pop as dramatically as it has before, or simply go right on deflating the way it seems to be.

Reuters

Becoming ‘Everyone’s Little Sister’ to Deal With Sexism


A reader with a Ph.D. in physics has been working in the tech industry for many years, but she’s struggled to cope with the huge gender imbalance at the start-ups she’s worked for. She feels she can’t fully be herself—or a mother:

When I entered the office for my interview, I saw every head in the glass-enclosed conference room pop up and look over at me. I’ve trained myself to have a sort of small, permanent smile plastered on my face, and I hoped, as the room was looking me over, that my smile looked natural, approachable, and genuine.

That is the persona I’ve settled on: Approachable and genuine. Everyone’s little sister.

In that way, I can inhabit a special place, still allowed to be feminine, someone everyone roots for but no one is sexually attracted to, or intellectually threatened by. Everyone wants his kid sister to win. Everyone will defend his little sister from bullies.

Sure, you may forget she is a girl; you may leave her out of some things because you forget about her; but you are not going to forget her altogether. And you certainly aren’t going to want your friends to sleep with her.


The Video Game That Claims Everything Is Connected


I am Rocky Mountain elk. I somersault forward through the grass, toward a tower of some sort. Now I am that: Industrial Smoke Stack. I press another button and move a cursor to become Giant Sequoia. I zoom out again, and I am Rock Planet, small and gray. Soon I am Sun, and then I am Lenticular Galaxy. Things seem a little too ordinary, so I pull up a menu and transform my galaxy into a Woolly Mammoth. With another button I multiply them. I am mammoths, in the vacuum of space.

There are others, too. Hydrogen atom. Taco truck. Palomino horse, spruce, fast-food restaurant, hot-air balloon. Camel, planetary system, Higgs boson, orca. Bacteriophage, poppy, match, pagoda, dirt chunk, oil rig. These are some of the things I got to be in Everything, a new video game by the animator and game designer David OReilly.

It may sound strange. What does it mean to be a fast-food restaurant or a Higgs boson? That’s the question the game poses and, to some extent, answers. In the process, it tumbles the player through galaxies, planets, continents, brush, subatomic abstractions, and a whole lot of Buddhist mysticism. The result turns a video-game console into an unlikely platform for metaphysical experimentation.

* * *

In retrospect, OReilly’s last game was a warm-up for this one. Called Mountain, the game depicted a mountain, disembodied in space, at which worldly miscellany hurtled and sometimes stuck. Eventually, after 50 hours or more, the mountain, rather than the human, quit playing and departed.

When I wrote about Mountain upon its release in 2014, it was easy to find a hook. OReilly had produced several esoteric, animated short films, but he was best-known for designing the animations for the “alien child” video-game sequence in Spike Jonze’s Her, a film about a man’s relationship with an artificial intelligence that eventually reaches transcendence and leaves him. Her, I argued, was Hollywood taking the easy way out with alien love. Scarlett Johansson’s Samantha was just a human left unseen. Mountain offered a bolder invitation: to commune with a representation of an inanimate, aggregate object rather than a living, individual one.

Against all odds, Mountain was a commercial success. It cost $1, and it did well enough to allow OReilly to self-fund the development of Everything. That’s a big bet, but OReilly feels palpable glee from having taken it. “Money lacks the ability to look forward,” he tells me, reflecting on his difficulty parlaying short-film festival success into paying work. He sells the risk as a moral imperative. “I could have done a commercial thing or gotten a mortgage,” he explains, “but I felt a responsibility to go deeper.”

Three years later, Everything certainly goes deeper. The game sports thousands of unique, playable things, promising players that anything they can see, they can be. To “be” something in Everything means binding to and taking control of it. Once that’s accomplished, the player can pilot the object around Everything’s vast, multi-level 3-D world. Rolling a boulder over to a Montgomery palm, say, allows the player to ascend further, as I did up into the galaxy from the Rocky Mountain elk. The game also allows downward progress: descending from planet, to continent, to kelp forest, and then orca, then plankton, then fungus, then atom. And further, too, until discovering that, according to Everything, the tiniest of things in one dimension might just be the biggest ones in another.

Everything’s tagline promises that everything you can see, you can be, which has led some to conclude that the game is a “universe simulator,” along the lines of No Man’s Sky. But Everything isn’t a universe simulator. You can’t be anything in Everything, and anyone with that aspiration will leave disappointed. After bonding with a fast-food restaurant, players can’t descend into it to discover booths, ceramic floor tiles, low-wage workers, hamburger patties, or the fragments of spent straw sheaths like they can with galaxies and continents and shrubs.

Only a fool would try to make a game that contains everything—or think that it would be possible to play one. A game containing everything in the universe would be coextensive with the universe. We’re already playing that game, it turns out. But that fact is hard to see. Everything helps a little, by reminding people of the things that coexist both alongside and very far away from them.

* * *

Everything’s take on the matter comes at a cost. In Mountain, the nature of a mountain was easier to imply, even if as a caricature, in a video game. Mountains are massive structures of rock and earth, formed and destroyed by tectonics and erosion. The timescale of a mountain makes a human being’s encounter with it evanescent. Fifty hours into Mountain, staring at the same mountain, the slowness of geological time feels palpable.

In Everything, everything feels more familiar, more human. The player moves things around, side to side and up and down, in the manner familiar to video games. This makes sense for beetles and cargo ships, but less so for redwood trees and office buildings—which disconnect from the ground and lumber along as if they were giraffes. The game tries to undermine its anthropomorphism by animating living creatures in a deliberately unfamiliar, awkward way. Mammals, their limbs fixed, execute somersaults rather than walking upright. The things in Everything also express existential angst, and with language, no less. “So many times I could have asked him out,” a lime wedge says, as I, VHS cassette, tumble past it. What would it mean for citrus to date, or to Tinder?

Such questions make more sense when considered alongside audio clips that the player can find throughout the game. They are excerpts from the lectures of Alan Watts, the British-American philosopher-mystic who popularized Eastern philosophy in the West during the mid-20th century. He’s largely responsible for importing Buddhism to California in particular, where its rise had an important influence on the counter-culture movements of the 1960s—which together partly shaped the rise of the microcomputer in Silicon Valley.

Watts’s monologues are insidiously seductive. His voice imposes involuntary serenity, even among listeners (like me) who disagree with the ideas it conveys. The cadence and quality of the recordings also telegraph the period of their production: the cradle of mid-century prosperity, when certainty sounded more certain. Blending Zen and Vedanta with Freud and Heisenberg, Watts argued against the Western notion of the alienated self, separate from and at odds with its surroundings. Instead, he advocated a holistic conception of being, in which all entities in the cosmos are fundamentally interconnected, reliant, and compatible.

The recordings are extensively excerpted in the game, to “bind the ideas of the game to its structure,” as OReilly puts it. To do so required extensive negotiation with Watts’s estate, which has turned Watts’s lectures into a cottage industry for corporate licensing. OReilly appealed to Mark Watts, Alan’s son. Many owners of PlayStations are probably unfamiliar with Watts’s ideas, OReilly reasoned, despite their influence on the contemporary mindfulness trend they partly enabled. The two struck a deal: The video game and the library would partake in a delightful, unexpected cross promotion.

OReilly’s own view of the result is broad and unassuming. He does see a central claim in the game—“the world as subtracted from the idea of the self.” But OReilly also knows from Mountain—used as an object of mockery as much as a relaxation aid—that people use media for their own purposes, even if those purposes amount to making GIFs for their friends.

* * *

Even so, Everything yokes its horse too tightly to Watts’s cart. The concreteness of the philosopher’s voice and ideas risk overwhelming all other interpretations. And even without them, Everything’s narrative structure (yes, there is one) is textbook Watts. The player enters the universe with an anxious certainty about the role of the self. Over time, with practice, that player can let go of those attachments, free the mind, and reach enlightenment. At which point the real work of living—or playing—can commence.

For players prepared to adopt Watts’s take on existence, that’s not a problem. But for others, including me, it’s hard to shake off Everything’s unwelcome claim that everything in the universe is connected, accessible, and familiar. To be a thing in Everything feels so much like being a person, or an avatar of one, that it undermines the separation OReilly so adeptly achieved in Mountain.

When I eat bacon, or view zebras, or feel the breeze from a desktop fan, or ingest the hydrogen atoms bound to oxygen in a glass of water, I partake of those things only in part. Their fundamental nature remains utterly separate and different from me, and from one another, too. I might be made of carbon and oxygen and hydrogen, but I can never really grasp what it is to be carbon. I might enter a fast-food restaurant, and I might even leave with bits of it inside me, but I can never fathom what it means to be a restaurant. The best I can do is to tousle the hair of that question, and establish the terms on which approximations might be possible.

I tried to play Everything with that attitude in mind, rather than Watts’s holism. And it obliged surprisingly well. For one thing, the game puts man-made entities on the same footing as natural ones. Bacon and street lamps are no less or more valid avatars in Everything than are spruce trees or ice planets. This idea alone is enough to recommend the game, and to break the yoke of Alan Watts, whose version of Western Buddhism remains too tightly bound to environmental naturalism.

And Everything offers a paradoxical salve to the anthropomorphism on which it also relies. When the rocks and the amoeba all have and express the same anxiety of death as people, as they do in Everything, they also draw attention to the fact that rocks and amoeba can’t possibly have that anxiety—at least not in the same way as you and me. In her book Vibrant Matter, the political scientist Jane Bennett has a tidy summary of this unexpected escape route from human self-centeredness:

Maybe it’s worth running the risks associated with anthropomorphizing … because it, oddly enough, works against anthropocentrism: a chord is struck between person and thing, and I am no longer above or outside a nonhuman “environment.”

Counterintuitively, by allowing things unlike people to pretend they are like us, the game helps drive home the fact that they are not.

For another, Everything embraces an aesthetic of messiness rather than order. Things are in their place, to an extent: Descending into a continent unveils animals, fences, and farmhouses; rising into a solar system reveals planets and spacecraft. But the range and specificity of things in Everything spotlight the delightful and improbable diversity of existence. The universe contains bowling pins no less than quasars, articulated buses no less than cumulus clouds. The aesthetics of being isn’t a smooth flow of interconnectedness, as Alan Watts would have it. It’s a depraved bestiary whose pages share the ordinary with the preposterous with the divine.

There’s a lovely moment in Everything, just before the player reaches its version of awakening. A new thing appears in a curious murk. It’s a PlayStation, wired up to a television. The game displayed upon it is Everything, and the scene is the very one the player currently occupies. In a humble whisper, Everything admits that it is not everything, but only a video game by that name, full of things made from polygons, just pretending.

People play games—and read books, and listen to lectures—not to mistake their ideas for the world, but in order to find new ways to approach that world. This fact is so obvious that it seems stupid to observe it. And yet video games—that medium of prurient adolescent fantasy at worst, and numbing, compulsive distraction at best—rarely try to do so, let alone succeed. Especially at the level of ideas as abstract as ontology, the study of being.

Perhaps this is Everything’s greatest accomplishment: a video game with a metaphysical position strong and coherent enough to warrant objection as much as embrace.

Everything Game / Ian Bogost: A herd of woolly mammoths floats among galaxies in Everything, a new video game.

How the Diving Bell Opened the Ocean's Depths


Imagine sitting on a narrow bench inside a dark room. Your feet are dangling into a floor of water. You’re vaguely aware of the room moving. Your ears start ringing. If you move too much, you feel the room sway, which could bring the floor rushing in to fill it. You take a breath and dive down, swim outside the room, groping the water, looking for its bottom, reaching for something valuable enough to take back with you.

If you’ve ever pushed an upside-down cup into water, reached inside, and found it still empty, you’ve encountered a diving bell. It’s a simple concept: The water’s pressure forces the air, which has nowhere else to go, inside the “bell.” Once people realized that trapped air contains breathable oxygen, they took large pots, stuck their heads inside, and jumped into the nearest body of water. In the 2,500 years since, the device has been refined and expanded to allow better access to the ocean’s depths. But that access has not come without human cost.
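To put rough numbers on that trapped pocket of air (a back-of-the-envelope sketch of my own, assuming seawater with a density of about 1,025 kg/m³, not a figure from the historical record): the air in an open-bottomed bell roughly obeys Boyle’s law, so the deeper the bell sinks, the more the surrounding pressure squeezes the pocket—and the higher the floor of water creeps up inside it.

\[ p_1 V_1 \approx p_2 V_2, \qquad p(h) \approx p_{\mathrm{atm}} + \rho g h \]

\[ h = 10\ \mathrm{m}: \quad p \approx 1\ \mathrm{atm} + \frac{(1025\ \mathrm{kg/m^3})(9.8\ \mathrm{m/s^2})(10\ \mathrm{m})}{101{,}325\ \mathrm{Pa/atm}} \approx 2\ \mathrm{atm} \quad\Rightarrow\quad V \approx \tfrac{1}{2}\,V_{\mathrm{surface}} \]

In other words, just 30-odd feet down, the air pocket has already been squeezed to about half its surface volume; at the 60 feet where Edmund Halley later worked, the ambient pressure approaches three atmospheres—which squares with the complaint about his ears recounted below.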

* * *

The first account of diving bells comes from Aristotle in the 4th century B.C.E. Legend has it Aristotle’s pupil Alexander the Great went on to build “a very fine barrel made entirely of white glass” and used it in the Siege of Tyre in 332 B.C.E. However, the facts of Alexander the Great’s adventures come mostly from depictions in fragments of ancient art and literature, which render him as a demigod who conquered the darkness and returned to the dry realm of historians and poets.

Prior to the diving bell, the wet parts of the earth were places people could move atop but not transit within. Shallow diving was possible: Duck hunters in ancient Egypt and swimmers in Rome and Greece used hollowed-out reeds or plant stems as snorkels. But they were still surface-bound, barely deeper than the reflections of the sky on the water above.

The diving bell changed that. Figuring out how to stay underwater was a turning point not only in naval technology but also in science and adaptation. The diving bell acted as a portable atmosphere, allowing divers to descend a dozen feet or so, briefly leave the bell, return to it for air, and then return to the surface and start all over once they had filled their home base with too much carbon dioxide.

Staying submerged began as a simple trick, a novelty meant mostly for spectacle. But as with most human exploration, the underwater landscape soon became appealing for its latent revenue opportunities. At first, diving bells appear to have been most heavily used in the pearl and sponge industries. Then, in 1531, the Italian inventor Guglielmo de Lorena came up with a new application. Using slings to attach a bell to his body, he could collect treasure from capsized Roman ships. After the defeat of the Spanish Armada in 1588, according to Francis Bacon, Spanish prisoners spread the word that the Armada’s riches had sunk off the coast of Scotland; industrious divers used bells to pick up the scraps.

Seeing the technology as a business opportunity, scientists and inventors made improvements to a concept that had shown virtually no change in two millennia. A renaissance era for the diving apparatus commenced. The German painter and alchemist Franz Kessler, the Swedish colonel Hans Albrecht von Treileben, the Massachusetts Bay Colony governor Sir William Phips (best known today for the Salem witch trials), the French priest Abbe Jean de Hautefeuille, the French physicist Denis Papin, and the British super-scientist Edmund Halley (of Halley’s Comet fame) all made contributions to diving bell technology in the 17th century—all in the interest of collecting valuables no one else could reach. Phips went as far as modern-day Haiti to chase sunken treasure.

The most important of these contributions may have come from de Hautefeuille, who in 1681 wrote that diving deeper changes the pressure of the air available to a diver. Pressure was the key to more sustainable expeditions, it turned out. Halley then developed a complex system of weighted air barrels, hoses, and valves to keep a relatively stable level of oxygen and pressure inside his lead-reinforced wooden bell design.

But increasing the pressure inside the bell posed a problem. While it kept the water from rising as the bell descended, it also pressurized the bell’s inhabitants, occasionally bursting divers’ eardrums. Using faucets to adjust the pressure inside the bell and sending barrels back and forth to the surface to replenish his air supply, Halley was able to spend well over an hour 60 feet below the surface, though he did complain that his ears felt “as if a quill had been thrust into them.”

Casualties became a theme. In 1775, the Scottish confectioner Charles Spalding improved diving-bell safety with better balance weights, a pulley system that increased dive control, signal ropes leading to the surface, and even windows. Spalding and his nephew, Ebenezer Watson, used such diving bells for salvage work—until they both suffocated inside one off the coast of Ireland.

The final contribution to the Halley-style “wet” (partially enclosed) bell came from the Englishman John Smeaton more than a decade later. Smeaton’s bell maintained the air supply by connecting a hose to a pump above the surface. The design enabled laborers to fix the foundation of England’s Hexham Bridge, and as Smeaton’s bell became ubiquitous in harbors throughout the world, lower-class caisson workers began coming down with what they called “caisson sickness.” Now known as the bends or decompression sickness, it sometimes caused surfaced divers to have strokes, leading to paralysis and even death. Workers would come back to the dry world and pray they didn’t mysteriously take ill.

It wouldn’t be until nearly 1900 that scientists began to master the effects of pressure on the human body. Eventually, the wet bell gave way to modern, completely enclosed “dry” bells, which were really just pressurized diving chambers. By the mid-20th century, these sophisticated diving bells aided the booming offshore oil industry—fuel awaited human discovery in the deep, next to shipwrecks, sponges, and pearls.

* * *

Alexander the Great reportedly claimed that while submerged, he saw a monster so massive it took it three full days to swim past him. It would have been physically impossible to survive in his bell that long, of course, but the story makes good legend: The ocean offers a void big enough to contain human metaphor and myth, an emptiness vast enough to consume a three-day-long behemoth—or to swallow the continent of Atlantis (as Plato claimed), or the whole Earth itself (as Noah’s God commanded), or to hide the Missing Link on the lost island of Lemuria, or to conceal countless missing vessels in the Bermuda Triangle.

It’s no coincidence that the psychoanalyst Carl Jung chose Proteus, the shape-shifting Greek god of water who tells the future to whoever can catch him, as a manifestation of the unconscious, that great dark sea in the mind. The ocean represents the unknown. For thousands of years, it marked the portion of Earth people could never access. It was a place conquerable only by God, whom Isaiah addressed, “Art thou not it that dried up the sea, the waters of the great deep; that made the depths of the sea a way for the redeemed to pass over?”

Imagine again: Your arms are full of something heavy you imagine to be precious, and you spend the last of your breath kicking, ascending back to the small room where you hope there’s still enough air to breathe, and then enough to make it to the surface, where you’ll wait to find out whether you’ll become one of the sick ones. Someone will pay you, and maybe it’ll be enough. To go underwater always challenges humanity’s natural place; to strive to stay there is to defy our given position on the earth. But humans will persist, because still so little is known about what lurks deep in the ocean, and because discovering it is worth the trial of pursuit.


This article appears courtesy of Object Lessons.

AP: Otis Barton, a deep-sea diver, prepares for a practice dive in 1952.