Channel: Technology | The Atlantic

The 'Curious' Robots Searching for the Ocean's Secrets


People have been exploring the Earth since ancient times—traversing deserts, climbing mountains, and trekking through forests. But there is one ecological realm that hasn’t yet been well explored: the oceans. To date, just 5 percent of Earth’s oceans have been seen by human eyes or by human-controlled robots.

That’s quickly changing thanks to advancements in robotic technologies. In particular, a new class of self-controlled robots that continually adapt to their surroundings is opening the door to undersea discovery. These autonomous, “curious” machines can efficiently search for specific undersea features such as marine organisms and landscapes, but they are also programmed to keep an eye out for other interesting things that may unexpectedly pop up.

Curious robots—which can be virtually any size or shape—use sensors and cameras to guide their movements. The sensors take sonar, depth, temperature, salinity, and other readings, while the cameras constantly send pictures of what they’re seeing in compressed, low-resolution form to human operators. If an image shows something different than the feature a robot was programmed to explore, the operator can give the robot the okay to go over and check it out in greater detail.

The field of autonomous underwater robots is relatively young, but the curious-robot exploration method has already led to some pretty interesting discoveries, says Hanumant Singh, an ocean physicist and engineer at Woods Hole Oceanographic Institution in Massachusetts. In 2015, he and a team of researchers went on an expedition to study creatures living on Hannibal Seamount, an undersea mountain chain off Panama’s coast. They sent a curious robot down to the seabed from their “manned submersible”—a modern version of the classic Jacques Cousteau yellow submarine—to take photos and videos and collect living organisms on several dives over the course of 21 days.

On the expedition’s final dive, the robot detected an anomaly on the seafloor, and sent back several low-resolution photos of what looked like red fuzz in a very low oxygen zone. “The robot’s operators thought what was in the image might be interesting, so they sent it over to the feature to take more photos,” says Singh. “Thanks to the curious robot, we were able to tell that these were crabs—a whole swarming herd of them.”

The team used submarines to scoop up several live crabs, which were later identified through DNA sequencing as Pleuroncodes planipes, commonly known as pelagic red crabs, a species native to Baja California. Singh says it was extremely unusual to find the crabs so far south of their normal range and in such a high abundance, gathered together like a swarm of insects. Because the crabs serve as an important food source for open-ocean predators in the eastern Pacific, the researchers hypothesize the crabs may be an undetected food source for predators at the Hannibal Seamount, too.

When autonomous robot technology was first being developed 15 years ago, Singh says, he and other scientists were building robots and robotics software from scratch. Today a variety of programming interfaces—some of which are open-source—exist, making scientists’ jobs a little easier. Now they just have to build the robot itself, install some software, and fine-tune some algorithms to fit their research goals.

While curious-robot software systems vary, some of the basics remain the same, says Yogesh Girdhar, a Woods Hole researcher who builds such robots. All curious robots need to collect data, and they do this with their ability to understand different undersea scenes without supervision. This involves “teaching” robots to detect a given class of oceanic features, such as different types of fish, coral, or sediment. The robots must also be able to detect anomalies in context, following a path that balances their programmed mission with their own curiosity.
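The balance between mission and curiosity can be captured in a toy scoring rule: each candidate waypoint gets a mission-relevance score plus a bonus for how unlike anything the robot has already seen it is. This is an illustrative sketch only; the weights, feature vectors, and function names are invented here, not taken from the actual Woods Hole software.

```python
# Toy sketch of a "curious" waypoint chooser: each candidate waypoint
# is scored by mission relevance plus a novelty bonus, so an anomalous
# scene can win even when it matches the programmed mission poorly.
import math

def surprise(observation, seen):
    """Novelty = distance to the nearest previously seen observation."""
    if not seen:
        return float("inf")
    return min(math.dist(observation, s) for s in seen)

def choose_waypoint(candidates, seen, curiosity_weight=0.5):
    """candidates: list of (mission_value, observation) pairs."""
    def score(candidate):
        mission_value, obs = candidate
        return mission_value + curiosity_weight * surprise(obs, seen)
    return max(candidates, key=score)

seen = [(0.0, 0.0), (1.0, 0.0)]   # feature vectors already observed
candidates = [
    (0.9, (0.5, 0.0)),   # high mission value, familiar-looking scene
    (0.4, (4.0, 3.0)),   # low mission value, anomalous scene
]

# With some curiosity, the anomalous scene wins; with none, the
# robot sticks to its programmed mission.
best = choose_waypoint(candidates, seen, curiosity_weight=0.3)
```

Setting `curiosity_weight` to zero reduces the rule to a traditional preprogrammed survey; raising it makes the robot increasingly willing to chase anomalies like the red-crab swarm Singh describes.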

This detection method is different from traditional undersea robots, which are preprogrammed to follow just one exploration path and look for one feature or a set of features, ignoring anomalies or changing oceanic conditions. One example of a traditional robot is Jason, a human-controlled “ROV,” or remotely operated vehicle, used by scientists at Woods Hole to study the seafloor.

Marine scientists see curious robots as a clear path forward. “To efficiently explore and map our oceans, intelligent robots with abilities to deliberate sensor data and make smart decisions are a necessity,” says Øyvind Ødegård, a marine archaeologist and Ph.D. candidate at the Centre for Autonomous Marine Operations and Systems at Norwegian University of Science and Technology.

Ødegård uses robots to detect and investigate shipwrecks, often in places too dangerous for human divers to explore—like the Arctic. Other undersea scientists in fields like biology and chemistry are starting to use curious robots to do things like monitor oil spills and search for invasive species.

Compared to other undersea robots, Ødegård says, autonomous curious robots are best suited to long-term exploration. For shorter missions in already explored marine environments, it’s possible to preprogram robots to cope with predictable situations, says Ødegård. Yet, “for longer missions, with limited prior knowledge of the environment, such predictions become increasingly harder to make. The robot must have deliberative abilities or ‘intelligence’ that is robust enough for coping with unforeseen events in a manner that ensures its own safety and also the goals of the mission.”

One big challenge is sending larger amounts of data to human operators in real time. Water inhibits the movement of electromagnetic signals, such as those used by GPS, so curious robots can communicate only in small bits of data. Ødegård says that to overcome this challenge, scientists are looking for ways to optimize data processing.

According to Singh, one next step in curious robot technology is teaching the robots to work in tandem with drones to give scientists pictures of sea ice from both above and below. Another is teaching the robots to deal with different species biases. For example, the robots frighten some fish and attract others—and this could cause data anomalies, making some species appear less or more abundant than they actually are.

Ødegård adds that new developments in robotics programs could give even scientists without a background in robotics the opportunity to reap the benefits of robotics research. “I hope we will see more affordable robots that lower the threshold for playing with them and taking risks,” he says. “That way it will be easier to find new and innovative ways to use them.”

Yogesh Girdhar, a researcher at the Woods Hole Oceanographic Institution, swims with a “curious” robot he helped build. (Ioannis Rekleitis / McGill University)

A Bot That Identifies 'Toxic' Comments Online


Civil conversation in the comment sections of news sites can be hard to come by these days. Whatever intelligent observations do lurk there are often drowned out by obscenities, ad-hominem attacks, and off-topic rants. Some sites, like the one you’re reading, hide the comments section behind a link at the bottom of each article; many others have abolished theirs completely.

One of the few beacons of hope in the morass of bad comments shines at The New York Times, where some articles are accompanied by a stream of respectful, largely thoughtful ideas from readers. But the Times powers its comment section with an engine few other news organizations can afford: a team of human moderators that checks nearly every single comment before it gets published.

For outlets that can’t hire 14 full-time moderators to comb through roughly 11,000 comments a day, help is on the way. Jigsaw, the Google-owned technology incubator, released a tool Thursday that uses machine-learning algorithms to separate out the worst comments that people leave online.

The tool, called Perspective, learned from the best: It analyzed the Times moderators’ decisions as they triaged reader comments, and used that data to train itself to identify harmful speech. The training materials also included hundreds of thousands of comments on Wikipedia, evaluated by thousands of different moderators.

Perspective’s current focus is on “toxicity,” defined by the likelihood that a comment will drive other participants to leave a conversation, most likely because it’s rude or disrespectful. Developers that adopt the platform can use it as they choose: It can automatically suppress toxic comments outright, or group them to help human moderators choose what to do with them. It could even show a commenter the toxicity rating of his or her comment as it’s being written, in order to encourage the commenter to tone down the language. (That could work a little bit like Nextdoor’s prompts aimed at tamping down on racist posts.)

Perspective’s website lets you test the system by typing in your own phrase. The system then spits out a toxicity rating on a 100-point scale. For example, “You’re tacky and I hate you,” is rated 90 percent toxic. Fair enough. But there are discrepancies—“You’re a butt” is apparently 84 percent toxic, while “You’re a butthead” is only at 36 percent. (When I tried more aggressive insults and abuse—your usual angry comments-section fodder—each scored over 90 percent.)
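For developers, a request to Perspective follows its published AnalyzeComment shape: send the comment text and the attributes you want scored, and read back a probability that the tool converts to a percentage. The sketch below assumes the public v1alpha1 endpoint and uses a placeholder API key; error handling and quota concerns are omitted.

```python
# Sketch of a call to Jigsaw's Perspective API (AnalyzeComment).
# The request/response shapes follow the published v1alpha1 API;
# the API key is a placeholder you would obtain from Google.
import json
import urllib.request

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key={key}")

def build_request(text):
    """Build the AnalyzeComment payload asking for a TOXICITY score."""
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity(text, api_key):
    """POST the comment and return its toxicity on a 100-point scale."""
    payload = json.dumps(build_request(text)).encode("utf-8")
    req = urllib.request.Request(
        API_URL.format(key=api_key), data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # summaryScore.value is a probability between 0 and 1.
    prob = body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return round(prob * 100)
```

A site could suppress comments above some threshold, queue mid-range scores for human review, or simply echo the score back to the commenter as they type.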

The Times has been using the system since September, and now runs every single incoming comment through Perspective before putting it in front of a human moderator. Perspective will help the newspaper expand the number of articles that include comments—currently, only about one in ten have comments enabled.

Future versions of Perspective will approach other aspects of online commenting. It may one day be able to tell when a comment is off-topic, for example, by comparing it to the themes contained in the news story it’s referring to.

The platform could help make more comment sections enjoyable and informative—and it might help draw out voices that are often silenced by harassment. A study published in November found that nearly half of Americans have been harassed or abused online, and that women, racial minorities, and LGBT people are more likely to be attacked than others.

The abuse drove people to change their contact information or retreat from family and friends. Worryingly, it also led one in four people to censor themselves in order to avoid further harassment. Most harmful abuse happens on social networks, not news-site comment sections, of course—Twitter is often a loud crossfire of vitriol—but barbs exist on every social platform. Tamping down on abuse on news sites can help make them a safer space for commenters.

Perspective’s developers hope that opening the tool to every publisher will bring comment moderating within reach for more, and perhaps stave off the demise of comment sections. As more news organizations adopt the system, it will continue to learn and improve its accuracy. And if automated moderating proves useful for news sites, it may have a future on larger social media networks, which are most in need of a gatekeeper to stop abuse.

Algorithms developed by Google’s Jigsaw help publishers like The New York Times moderate comments. (Karen Bleier / AFP / Getty)

How New Orleans's Favorite Mardi Gras Cocktail Was Saved From Extinction


“Wait one second, let me go see if I have any Peychaud’s.” Tommy Westfeldt leaves the room and returns wagging a small bottle of reddish-pink bitters. I’m sitting in Westfeldt’s Gravier Street office in New Orleans, where he runs one of the oldest coffee-importing businesses in the United States. Westfeldt, a Crescent City native, has been drinking Ojen since he was 18 years old. “You know, my father drank Ojen Cocktails, and my grandfather drank them too. It’s been around for as long as I can remember. We usually had it on Mardi Gras day and during Christmas, and sometimes when we went duck hunting,” he reminisces.

Ojen (pronounced oh-hen) is an anise-based liqueur that came into production in 1830 near the small town of Ojén, Spain, in southern Andalusia. It’s sweeter and far less alcoholic than absinthe, and always mixed over ice with seltzer and a few dashes of Peychaud’s bitters (“not Angostura,” Westfeldt emphasizes). In the mid-20th century, Ojen found a thriving market in New Orleans because of its popularity during Mardi Gras, though nobody knows exactly why. “You used to drink absinthe on Mardi Gras day for good luck, but that’s about all I know,” the renowned absinthe collector Ray Bordelon tells me.

What New Orleanians do know is that Ojen has been hard to find—until recently, when it was resurrected by the Sazerac Company.

* * *

Ojen is an obscure ingredient that most people have never heard of, even in New Orleans. Hailed as the “preferred cocktail of the Rex ruling class,” the Ojen Cocktail is, or was, a bougie drink, one for the crustiest of the upper crust of New Orleans.

An 1883 ad for Ojen

It became widely popular in the late 1940s, when Brennan’s, a well-to-do Royal Street restaurant, opened in the French Quarter. They featured the Ojen Frappé—a fancy term for a cocktail over shaved ice—as a brunch menu item, marketed as “the absinthe preference of the Spanish aristocracy.”

But by the time the Ojen Frappé was introduced at Brennan’s, Ojen had already been a New Orleans staple for decades. In 1874, a young businessman named Paul Gelpi and his brother Oscar opened a liquor distribution company specializing in imported wines and spirits. Between May and June of 1883, the Gelpi brothers ran a series of advertisements for Ojen. The ads read, in part, “Superior to ABSINTHE as an Appetizer and Tonic.”

Describing Ojen as “superior to absinthe” was a savvy move. Like absinthe, Ojen was first used as a tonic for medicinal purposes. Southerners, Northerners, and foreign tourists alike made pilgrimages to the former French colony of New Orleans to imbibe the Absinthe House’s “Parisian style” of dripped absinthe at the corner of Bourbon and Bienville. Gelpi not only advertised during the height of tourist season; his claim that Ojen was superior to absinthe also invoked New Orleans’s sophisticated European past, connoting the authenticity and bourgeois refinement for which the city was renowned.

As a prominent and successful citizen, Paul Gelpi served as a board member in a number of the city’s powerful and influential organizations, including an elite gentlemen’s club, the Boston Club. The Boston, as it was known, was influential in the founding of the Krewe of Rex, one of the oldest Carnival organizations in New Orleans. On an uncharacteristically warm February day in 1886, Gelpi was inaugurated as a member, and it was at the Boston Club that the Ojen Cocktail was first mixed by adding two dashes of Peychaud’s bitters and soda water over cracked ice. By the early 1900s, the Ojen Cocktail had become the Boston’s most popular drink.

* * *

In the early 20th century, the Ojen Cocktail made its way out of the Boston Club and onto banquet and cocktail menus, recipe snippets, and printed advertisements. It was served at elite cocktail parties and events through the 1910s, including the annual banquets of the Louisiana Bar Association and the Louisiana Engineering Society. Absinthe was banned in 1912 after the Temperance movement successfully lobbied for a number of legislative measures designed to curb alcohol consumption, so New Orleanians looked for other anise-based alternatives like Pernod, L.E. Jung’s Greenopal, and Herbsaint, which rose to popularity during this period. New Orleans generally preferred Ojen, though, and during the pre-Prohibition period extravagant stories teeming with inebriated tourists drinking Ojen circulated around the country. In a 1919 issue of The Photo-Engraver’s Bulletin, one visitor recounted waiting for his colleague, E.C. Miller, at the train station. When Miller didn’t show, the visitor went to his hotel to find that he had already checked in:

“On arriving at the hotel they discovered that his High Nibs had registered. After locating his room this Committee was received [...] in Miller's room. Miller was in the bath-tub with a Sazerac cocktail in one hand, a Ramos gin fizz in the other, and a[n] Ojen cocktail in his shaving cup. He seemed to become acclimated almost immediately.”

A few months later, the 18th Amendment and the Volstead Act brought a prompt end to such tales. Anticipating financial ruin when it became clear that Prohibition would not be repealed immediately, the Gelpis left the liquor industry and went into candy-making. Less than a year later in 1920, Paul Gelpi passed away at the age of 72. In an unsuccessful attempt to revitalize her father’s liquor dynasty after the repeal of Prohibition in the mid 1930s, Paul’s daughter, Vivian, would wax romantic about her grandfather sailing heroically into the port of New Orleans on a clipper ship from Barcelona, a hull filled to the brim with rare and coveted wines.

Others in the liquor industry reinvented their businesses to profit within the confines of Prohibition, trying their hands at creating non-alcoholic versions of New Orleans cocktails. One of these companies was the New Orleans-based manufacturer L.E. Jung & Wulff Company, which manufactured absinthe substitutes. Jung, also a prominent liquor salesman, was an associate of Gelpi, and both were members of the Boston Club.

In fact, the Boston Club is the hub through which four generations of Ojen drinkers can be traced. It’s how the Ojen Cocktail became the Mardi Gras drink. As Westfeldt explains, the King of Carnival used to toast his queen at the Boston Club, a practice that was abandoned due to time constraints. “The Krewe of Rex—and a lot of other Krewes, too—used to drink Ojen every Mardi Gras, usually before a parade.” A krewe refers to a Carnival organization that puts on parades and balls during Carnival. “I remember when I was younger, there was a secret stash of Ojen that only the lieutenants of Rex could drink. The rest of us had to drink champagne. It’s not like that anymore, though.” All members of Rex are now allowed to drink Ojen, if they can get their hands on it.

* * *

The Ojen distillery in Spain ceased operations nearly 30 years ago. However, efforts to recreate Ojen began even before Manuel Fernandez, its last proprietor, shut the place down. Cedric Martin of Martin Wine Cellar, Ojen’s sole distributor in New Orleans, had initially tried to obtain the recipe from the distillery in southern Spain. “It just wasn’t available,” Martin told me by phone. “In the late ’80s, all the Fernandezes were gone, and the distillery was going to shut down. We panicked.” Ojen devotees imported 6,000 bottles into the city, hoarding them away in private liquor cabinets. The last bottle officially sold in 2009. A select few restaurants continue to serve Ojen, if you know to ask for it. Chris Hannah of Arnaud’s French 75 Bar showed off his collection of vintage Ojen bottles to the Times-Picayune in 2012, and Ralph Brennan served Ojen to Jack Maxwell of the Travel Channel’s Booze Traveler in 2015.

New Orleanians are famous for their boozy traditions: lax open-container laws, the Go-Cup, and drive-thru daiquiri shops (to name but a few). They often despair at the disappearance of even the most minor culinary tradition. In Louisiana, to lose a tradition such as Ojen often means losing entire communities and ways of life.

* * *

Westfeldt sets down the fetched Peychaud’s and picks up an aged bottle of Ojen he keeps in his office. Its label, off-white and peeling, reads “MANUEL FERNANDEZ.” Behind it, a slightly yellowed liquid fills the bottle halfway. Westfeldt sniffs the bottle and makes a face, wondering aloud if it’s gone bad. He calls for three small cups with ice, and then makes three Ojen Cocktails, a recipe that’s remained unchanged for 130 years: a shot of Ojen, a couple of drops of Peychaud’s bitters, and a splash of seltzer served over ice.

We taste it. It has indeed turned slightly, like a corked bottle of wine, with a cardboard aftertaste. “That’s a shame,” sighs Westfeldt, and we pour them out. “I’ve gotta get a bottle of that new stuff.”

The nostalgic cultural appeal of Ojen made its revival possible. After that last bottle sold in 2009, work began again to revive the liqueur. “We’d been tossing around recreating Ojen for a few years,” Amy Preske of the Sazerac Company explains. In January 2016, they released a small run of the drink, just in time for Carnival.

Before the re-release of Ojen in January 2016, Sazerac had spent a few years working on an authentic reproduction of Fernandez’s original recipe. They secured a bottle from his last run and sent it to their lab. “The chemists in our lab basically reverse engineered it to recreate the flavors,” Preske tells me. Sazerac then worked with a few Ojen devotees to tweak the taste profile until it was as close as possible to the original. “If you tasted [the old and new versions] blind together,” said Martin, “you wouldn’t be able to tell the difference.” The Sazerac Company, which also produces Peychaud’s Bitters, the other key ingredient in an Ojen Cocktail, named the revived liqueur Legendre Ojen, after New Orleanian J. Marion Legendre, the inventor of Herbsaint, a brand Sazerac now also produces. Both Herbsaint and Ojen are part of the company’s New Orleans Specialty Brands, which includes Sazerac Rye, Southern Comfort, Peychaud’s Bitters, and Peychaud’s Aperitivo.

Sazerac maintains that it didn’t bring back Ojen for commercial profit: New Orleans is the only real market for Ojen, though Preske said that they do take orders from elsewhere.

Through the new production of the liqueur, Westfeldt hopes that the Mardi Gras tradition will continue for the next generation. Fortunately, 2017 will be the second year in a row that tourists and locals alike will enjoy clinking their pink Ojen Cocktails as the parades rumble past. He smiles: “It’s a hard liquor to acquire a taste for. But when you do, you just sit there and it’s just beautiful.”


This article appears courtesy of Object Lessons.

A bartender pours a cocktail at the Sazerac Bar in New Orleans. (Judi Bottoni / AP)

Why Nothing Works Anymore


“No… it’s a magic potty,” my daughter used to lament, age 3 or so, before refusing to use a public restroom stall with an automatic-flush toilet. As a small person, she was accustomed to the infrared sensor detecting erratic motion at the top of her head and violently flushing beneath her. Better, in her mind, just to delay relief than to subject herself to the magic potty’s dark dealings.

It’s hardly just a problem for small people. What adult hasn’t suffered the pneumatic public toilet’s whirlwind underneath them? Or again when attempting to exit the stall? So many ordinary objects and experiences have become technologized—made dependent on computers, sensors, and other apparatuses meant to improve them—that they have also ceased to work in their usual manner. It’s common to think of such defects as matters of bad design. That’s true, in part. But technology is also more precarious than it once was. Unstable, and unpredictable. At least from the perspective of human users. From the vantage point of technology, if it can be said to have a vantage point, it's evolving separately from human use.

* * *

“Precarity” has become a popular way to refer to economic and labor conditions that force people—and particularly low-income service workers—into uncertainty. Temporary labor and flexwork offer examples. That includes hourly service work in which schedules are adjusted ad-hoc and just-in-time, so that workers don’t know when or how often they might be working. For low-wage food service and retail workers, for instance, that uncertainty makes budgeting and time-management difficult. Arranging for transit and childcare is difficult, and even more costly, for people who don’t know when—or if—they’ll be working.

Such conditions are not new. As union-supported blue-collar labor declined in the 20th century, the service economy took over its mantle absent its benefits. But the information economy further accelerated precarity. For one part, it consolidated existing businesses and made efficiency its primary concern. For another, economic downturns like the 2008 global recession facilitated austerity measures both deliberate and accidental. Immaterial labor also rose—everything from the unpaid, unseen work of women in and out of the workplace, to creative work done on-spec or for exposure, to the invisible work everyone does to construct the data infrastructure that technology companies like Google and Facebook sell to advertisers.

But as it has expanded, economic precarity has birthed other forms of instability and unpredictability—among them the dubious utility of ordinary objects and equipment.

The contemporary public restroom offers an example. Infrared-sensor flush toilets, fixtures, and towel dispensers are sometimes endorsed on ecological grounds—they are said to save resources by regulating them. But thanks to their overzealous sensors, these apparatuses can increase water or paper consumption substantially. Toilets flush three times instead of once. Faucets open at full blast. Towel dispensers mete out paper so stingily that people take more than they need. Instead of saving resources, these apparatuses mostly save labor and management costs. When a toilet flushes incessantly, or when a faucet shuts off on its own, or when a towel dispenser discharges only six inches of paper when a hand waves under it, it reduces the need for human workers to oversee, clean, and supply the restroom.

Given its connection to the hollowing-out of labor in the name of efficiency, automation is most often lamented for its inhumanity, a common grievance of bureaucracy. Take the interactive voice response (IVR) telephone system. When calling a bank or a retailer or a utility for service, the IVR robot offers recordings and automated service options to reduce the need for customer service agents—or to discourage customers from seeking them in the first place.

Once decoupled from their economic motivations, devices like automatic-flush toilets acclimate their users to apparatuses that don’t serve users well in order that they might serve other actors, among them corporations and the sphere of technology itself. In so doing, they make that uncertainty feel normal.

It’s a fact most easily noticed when using old-world gadgets. To flush a toilet or open a faucet by hand offers almost wanton pleasure given how rare it has become. A local eatery near me whose interior design invokes the 1930s features a bathroom with a white steel crank-roll paper towel dispenser. When spun on its ungeared mechanism, an analog, glorious measure of towel appears directly and immediately, as if sent from heaven.

* * *

Rolling out a proper portion of towel feels remarkable largely because that victory also seems so rare, even despite constant celebrations of technological accomplishment. The frequency with which technology works precariously has been obscured by culture’s obsession with technological progress, its religious belief in computation, and its confidence in the mastery of design. In truth, hardly anything works very well anymore.

The other day I attempted to congratulate my colleague Ed Yong for becoming a Los Angeles Times Book Prize finalist. I was tapping “Awesome, Ed!” into my iPhone, but it came out as “Aeromexico, Ed!” What happened? The iPhone’s touchscreen keyboard works, in part, by trying to predict what the user is going to type next. It does this invisibly, by increasing and decreasing the tappable area of certain keys based on the previous keys pressed. This method—perhaps necessary to make the software keyboard work at all—amplifies a mistype that autocorrect then completes. And so goes the weird accident of typing on today’s devices, when you hardly ever say what you mean the first time.
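A toy model of that invisible resizing: keep a small table of how likely each letter is to follow the previous one, and let likely letters claim a larger effective tap area, so an ambiguous tap snaps to them. The key layout and probabilities below are invented for illustration; Apple’s actual implementation is not public.

```python
# Toy sketch of probability-weighted key targets. Likely next letters
# get larger effective tap areas, so an ambiguous tap resolves to them.
# Key positions and the tiny language model are made up for illustration.

KEY_POS = {"w": (0.0, 0.0), "e": (1.0, 0.0), "r": (2.0, 0.0)}

# P(next letter | previous letter) -- an invented two-letter model.
NEXT_PROB = {"w": {"e": 0.6, "r": 0.3, "w": 0.1}}

def resolve_tap(tap, prev_letter):
    """Pick the key minimizing distance shrunk by the letter's likelihood."""
    probs = NEXT_PROB.get(prev_letter, {})
    def weighted_distance(key):
        x, y = KEY_POS[key]
        d = ((tap[0] - x) ** 2 + (tap[1] - y) ** 2) ** 0.5
        # Dividing by likelihood effectively enlarges a likely key's target.
        return d / (0.1 + probs.get(key, 0.1))
    return min(KEY_POS, key=weighted_distance)

# A tap exactly halfway between "e" and "r" resolves to "e" after "w",
# because "we" is far more likely than "wr" in this toy model.
```

The same mechanism explains the “Aeromexico” failure mode: once an early tap is resolved to the wrong letter, the model’s predictions shift toward the wrong continuation, and autocorrect finishes the job.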

The effects of business consolidation and just-in-time logistics offer another example. Go to Amazon.com and search for an ordinary product like a pair of shoes or a toaster. Amazon wants to show its users as many options as possible, so it displays anything it can fulfill directly or whose fulfillment it can facilitate via one of many catalog partnerships. In some cases, one size or color of a particular shoe might be available direct from Amazon, shipped free or fast or via its Prime two-day delivery service, while another size or color might come from a third party, shipped later or at increased cost. There is no easy way to discern what’s truly in stock.

Digital distribution has also made media access more precarious. Try explaining to a toddler that the episodes of “Mickey Mouse Clubhouse” that were freely available to watch yesterday via subscription are suddenly available only via on-demand purchase. Why? Some change in digital licensing, probably, or the expiration of a specific clause in a distribution agreement. Then try explaining that when the shows are right there on the screen, just the same as they always have been.

Or, try looking for some information online. Google’s software displays results based on a combination of factors, including the popularity of a web page, its proximity in time, and the common searches made by other people in a geographic area. This makes some searches easy and others difficult. Looking for historical materials almost always brings up Wikipedia, thanks to that site’s popularity, but it doesn’t necessarily fetch results based on other factors, like the domain expertise of its author. As often as not, Googling obscures more than it reveals.

Most of these failures don’t seem like failures, because users have so internalized their methods that they apologize for them in advance. The best defense against instability is to rationalize uncertainty as intentional—and even desirable.

* * *

The common response to precarious technology is to add even more technology to solve the problems caused by earlier technology. Are the toilets flushing too often? Revise the sensor hardware. Is online news full of falsehoods? Add machine-learning AI to separate the wheat from the chaff. Are retail product catalogs overwhelming and confusing? Add content filtering to show only the most relevant or applicable results.

But why would new technology reduce rather than increase the feeling of precarity? The more technology multiplies, the more it amplifies instability. Things already don’t quite do what they claim. The fixes just make things worse. And so, ordinary devices aren’t likely to feel more workable and functional as technology marches forward. If anything, they are likely to become even less so.

Technology’s role has begun to shift, from serving human users to pushing them out of the way so that the technologized world can service its own ends. And so, with increasing frequency, technology will exist not to serve human goals, but to facilitate its own expansion.

This might seem like a crazy thing to say. What other purpose do toilets serve than to speed away human waste? No matter its ostensible function, precarious technology separates human actors from the accomplishment of their actions. It acclimates people to the idea that devices are not really there for them, but are means to accomplish those devices’ own, secret goals.

This truth has been obvious for some time. Facebook and Google, so the saying goes, make their users into their products—the real customer is the advertiser or data speculator preying on the information generated by the companies’ free services. But things are bound to get even weirder than that. When automobiles drive themselves, for example, their human passengers will not become masters of a new form of urban freedom, but rather a fuel to drive the expansion of connected cities, in order to spread further the gospel of computerized automation. If artificial intelligence ends up running the news, it will not do so in order to improve citizens’ access to information necessary to make choices in a democracy, but to further cement the supremacy of machine automation over human editorial in establishing what is relevant.

There is a dream of computer technology’s end, in which machines become powerful enough that human consciousness can be uploaded into them, facilitating immortality. And there is a corresponding nightmare in which the evil robot of a forthcoming, computerized mesh overpowers and destroys human civilization. But there is also a weirder, more ordinary, and more likely future—and it is the one most similar to the present. In that future, technology’s and humanity’s goals split from one another, even as the latter seems ever more yoked to the former. Like people ignorant of the plight of ants, and like ants incapable of understanding the goals of the humans who loom over them, so technology is becoming a force that surrounds humans, that intersects with humans, that makes use of humans—but not necessarily in the service of human ends. It won’t take a computational singularity for humans to cede their lives to the world of machines. They’ve already been doing so, for years, without even noticing.

A Matter Of How You See It (Photography by Kala / Getty)

Thomas Edison and the Origins of Surf Filmography


Searching for surfing videos will lead you into some of the gnarliest, most awe-inspiring rabbit holes on the internet—in part because there are so, so many of them out there.

You could spend days watching clips of surfers bumping across the monster wintertime waves at Waimea Bay, catching crisp lines at Bells, dropping into bomb swells at Jaws, and wiping out into the coral reef at Teahupoʻo. (And then there’s the endless footage of wave riding at secret and lesser-known breaks.)

Advances in camera technology—including waterproof lenses, GoPros, and drones—have made it easy (and relatively affordable) to capture high-quality footage of modern surfing. And the internet, with platforms like YouTube, has made it possible for people to share their videos far and wide.

But the art of surf filmography goes back to the very beginnings of motion pictures.

Overhead views and shore-break perspectives notwithstanding, the classic vantage point in surf films hasn’t changed that much in more than a century. No matter how much technology has advanced, “the ultra-simple tripod arrangement from shore has remained the bread-and-butter shot,” writes John Engle in his book, Surfing in the Movies. Which is part of why I was skeptical when I happened across a YouTube video claiming to depict surfers in Honolulu 111 years ago, in footage of “Thomas Edison’s Hawaii.” The clip includes a fixed shot, about a minute long, of surfers on longboards, viewed from about 150 feet offshore. In other words, it looks a lot like surf filmography today—despite its black-and-white graininess.

Did Edison really travel all the way to Honolulu to capture the sport for a silent film more than a century ago? Well, not exactly.

“We have no evidence Edison ever visited Hawaii,” said Leonard DeGraaf, an archivist at Thomas Edison National Historical Park. “There’s tons of stuff about Edison out there that’s bogus.”

The thing is, the surfing footage actually is legitimate—only Edison himself didn’t capture it. It was the work of Robert Bonine, a legendary cameraman for a production company owned by Edison. Bonine traveled to Hawaii in 1906 at the invitation of the territory’s Promotion Committee, according to newspaper reports at the time. He stayed in the Hawaiian Islands for nearly three months and gathered a series of film actualities—little documentary vignettes—of various outdoor scenes. That included the surfers at Waikiki and the first-known film of Kilauea, the volcano on the Big Island, newspapers said.

This wasn’t the first time an Edison crew visited Hawaii, however. DeGraaf directed me to Charles Musser’s book, “Before the Nickelodeon,” which describes an Edison camera operator stopping in Honolulu en route back to the United States from Japan in 1898. One of the short films he made, titled “Kanakas Diving for Money,” shows boys splashing in a harbor. (Kanaka means person in Hawaiian.) Despite the film’s brevity—it’s under a minute long—it conveys a powerful metaphor about the cultural upheaval wrought by colonialism at the time it was shot: At one point, an outrigger canoe passes in front of a huge cargo ship in the background. “I think this is the earliest record of an Edison film crew in Hawaii,” DeGraaf told me.

Bonine’s film was shot about eight years later, and we know exactly when: August 12, 1906. According to an item in The Honolulu Advertiser that day: “Moving pictures of canoes and surfboard riding are to be taken off the Moana and Seaside hotels, Waikiki, this afternoon. ... Those who can ride surfboards standing up are wanted to be there in force.”

The next day, The Daily Pacific Commercial Advertiser described a cheery spectacle at the famous beach: “Everybody that could get in focus was ‘Bonined’ at Waikiki beach yesterday afternoon. That is, they were included in some rare pictures taken by Robert Bonine, the moving-picture man of the Edison company of Orange, N.J.”

“[T]he water was fairly live with people, and all were in a merry mood and that, of course, was best for the moving picture,” the newspaper account continues. “Hawaiian canoes, birch canoes, surf-boards and water wings were greatly in evidence. There were big rollers yesterday and it is believed that some good pictures were taken of surf riders standing erect on their boards as they were shot on the crest of waves toward the shore. These were taken from the end of the Moana pier. ... Mr. Bonine was satisfied and so was the crowd.”

Bonine’s surfing footage is arguably the most “historically significant” of early surf films, Engle writes in his book, in part because of how surprisingly modern it appears: “With this rudimentary minute of film, Bonine may be unwittingly illustrating a truth about the surf movie to come. When the subject being filmed—human beings standing on collapsing walls of water, after all!—is so inherently riveting, perhaps little other technique is needed.”

Back in 1907, when Bonine’s Hawaii films were distributed on the mainland, many people had never seen surfers or motion pictures—let alone a film depicting surfing. Humans have been surfing for hundreds of years, but the sport didn’t spread beyond Polynesia until the twilight of the 19th century. In the early 20th century, the Edison Company’s surfing reels played in nickelodeons “as far east as New Jersey,” according to Matt Warshaw, author of The History of Surfing.

By that point, the tentative art of surfing filmography was nearly a decade old.

“Presumably the oldest surfing footage was shot in 1897 or 1898 by Burton Holmes, a professional travel writer and lecturer,” DeSoto Brown, a historian and archivist at the Bishop Museum in Honolulu, told me in an email. “However—and this is typical of old films—the footage is not known to exist anymore, so nobody can see what was shot.”

There are records of other lost surfing films. The Pathé brothers, a pair of French filmmakers, gathered nearly two hours of footage for “Surfing, Le Sport National des Illes Hawaii,” which has long since disappeared, according to Engle’s book. It’s remarkable that Bonine’s footage survived—it predated the first book about surfing, which is itself exceedingly rare—but perhaps more astonishing that it’s still so accessible today. The overwhelming majority of the movies made in the United States between 1912 and 1929—some 70 percent of them—are lost to history, the Library of Congress found in a 2014 study. Bonine’s short clip, though, is in the Library of Congress’s film collection, and you can watch it on YouTube. But the film is accessible in the abstract, as well.

In this era of livestreams and nearly real-time surf cams, if you go online and tune into Waikiki’s gentlest surf breaks—Queens or Canoes, where Bonine took his film—you may well see the same thing Bonine did: sky meeting ocean, a fresh set rolling in, and a human figure standing there on the water before diving into the froth of a distant wave.

The legendary waterman Duke Kahanamoku, fourth from left, photographed in 1921 (Library of Congress)

A Doozy of a Lawsuit Over Self-Driving Cars


A stunning claim of stolen trade secrets may be the first big intellectual property battle of the self-driving car era.

Waymo, the self-driving car company that began at Google, is suing Uber and the self-driving truck company Otto, which Uber acquired last year. Waymo said in a federal lawsuit filed on Thursday that one of Google’s former software engineers, Anthony Levandowski, installed special software on his laptop so he could download more than 14,000 secret documents—totaling nearly 10 gigabytes of “highly confidential data”—from the company’s server when he still worked at Google. Waymo claims in the court filing that Levandowski then reformatted the laptop in an attempt to wipe it of evidence, and never used the laptop again.

Levandowski then took those secrets with him when he left Google to found Otto and used them in his role leading Uber’s self-driving car effort, Waymo claims. A call to Levandowski’s cellphone went straight to voicemail, which was full. “We take the allegations made against Otto and Uber employees seriously and we will review this matter carefully,” an Uber spokesperson said in an email.

In another astonishing detail from the court filing, Waymo says it was tipped off to the alleged theft when Waymo was “apparently inadvertently” copied on an email from a vendor of Uber’s. The email included an attachment of an Uber circuit board that “bears a striking resemblance to Waymo’s own highly confidential and proprietary design and reflects Waymo trade secrets,” Waymo said in its lawsuit.

In addition to the allegations against Levandowski, Waymo’s complaint claims a supply-chain manager and a hardware engineer stole additional trade secrets before leaving their jobs at Waymo to join Otto, which was later acquired by Uber.

The technology at stake involves a laser-based sensing system known as LiDAR, which helps self-driving cars position themselves on the roads, figure out where they’re going, and essentially see what’s around them. (If you’ve seen a self-driving car, you may have noticed the LiDAR sensor: It’s the spinning cylinder sitting in a box on the car’s roof.)

Indeed, Waymo’s leadership in the self-driving car arena has much to do with its LiDAR system. (That and Waymo’s longevity: The company has been at it since 2009, after all.) They’re “so far ahead of everyone else because the maps they use are so detailed and the LiDAR they’re using gives so much rich information,” John Leonard, an engineering professor at the Massachusetts Institute of Technology, told me in 2015. Their sensor “gives 1 million data points a second.”

Waymo said in its suit that other stolen trade secrets included “confidential supplier lists, manufacturing details and statements of work with highly technical information,” all related to its specialized LiDAR set-up.

A U.S. Patent illustration, included in Waymo’s federal complaint, shows Google’s laser diode firing circuit positioned in a LiDAR component on top of a self-driving car.

But for the self-driving car companies vying to lead the future of driverlessness, there’s far more hanging in the balance: an enormous amount of money, yes, but also the power to shape what many people believe will be the most profound technological and cultural shift of the 21st century. Self-driving cars could save hundreds of thousands of lives per decade. The mass adoption of such technology could become the most profound public-health improvement since the success of anti-smoking efforts. Self-driving cars promise to change the way cities are designed, creating and destroying entire industries along the way.

Patent disputes are typical in Silicon Valley, but given the outsized significance of self-driving cars—and the power of the players involved—the fight between Waymo and Uber is likely to be one of the biggest legal battles since the war between Apple and Samsung over smartphones.

Elements of that dispute are somewhat instructive here. In claiming that Samsung copied its designs, for example, Apple repeatedly argued that it took substantial effort and risk to develop the first iPhone and iPad—work that Samsung never had to do. That argument was convincing, if complex. Apple won nearly $400 million in damages from Samsung—a sum that remains in dispute after the Supreme Court overturned the damages judgment, sending it back to a lower court.

Waymo is now making a similar argument: “Waymo developed its patented inventions and trade secrets at great expense, and through years of painstaking research, experimentation, and trial and error.” A key difference, though, is that Waymo says its secrets were stolen, whereas Apple alleged Samsung copied its design. “Design patents, which address what products look like, are far less common than utility patents, which cover how products work,” The New York Times reported last year.

The Waymo lawsuit comes at a time of rapidly intensifying competition in the self-driving car space, an area that involves the biggest names in tech. Though Uber didn’t signal an interest in driverless cars until 2015, when it poached an entire department of engineers from Carnegie Mellon, the ridesharing giant has since rushed to prove its commitment to the technology. Uber’s efforts have included its $680 million acquisition of Otto, the development of its own mapping system to rival Waymo’s, plus PR efforts to win hearts and minds—like the demo of self-driving cars in Pittsburgh last summer.

Extraordinary as its claims are, Waymo’s lawsuit is likely a signal of litigation to come. After all, there are several other major players in the race to build self-driving cars, including longtime auto manufacturers, newer car companies like Tesla, and Apple.

“Fair competition spurs new technical innovation,” Waymo wrote in its complaint against Uber, “but what has happened here is not fair competition.”

In the end, as is often the case in Silicon Valley, that will be for the courts to decide. This may be the first major lawsuit of the self-driving car era. But it won’t be the last.

Cars are parked in the Car Towers at the Autostadt visitor attraction in Wolfsburg, Germany. (Christian Charisius / Reuters)

Does the Internet Breed Creativity or Destroy It?


What the internet does to the mind is something of an eternal question. Here at The Atlantic, in fact, we pondered that question before the internet even existed. Back in 1945, in his prophetic essay “As We May Think,” Vannevar Bush outlined how technology that mimics human logic and memory could transform “the ways in which man produces, stores, and consults the record of the race”:

Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory. His excursions may be more enjoyable if he can reacquire the privilege of forgetting the manifold things he does not need to have immediately at hand, with some assurance that he can find them again if they prove important.

Bush didn’t think machines could ever replace human creativity, but he did hope they could make the process of having ideas more efficient. “Whenever logical processes of thought are employed,” he wrote, “there is opportunity for the machine.”

Fast-forward six decades, and search engines had claimed that opportunity, acting as a stand-in for memory and even for association. In his October 2006 piece “Artificial Intelligentsia,” James Fallows confronted the new reality:

If omnipresent retrieval of spot data means there’s less we have to remember, and if categorization systems do some of the first-stage thinking for us, what will happen to our brains?


The Future of Shopping Is More Discrimination


Two years ago, at a retail-marketing conference called “The Internet of Things: Shopping,” a consultant took the stage and predicted that by 2028, half of Americans will have implants that communicate with retailers as they walk down stores’ aisles and inspect various items. By 2054, he added, this would be true of nearly all Americans. The rest of the vision went like this: Based on how long shoppers hold an item, the retailer’s computers would be able to determine whether or not they like it. Other signals from the implant would indicate whether consumers are nervous or cautious when they look at the price of the product they’re holding—an analysis that may prompt the retailer to try to put them at ease with a personalized discount.

After hearing these prognostications, no one in the audience voiced any doubts that consumers would want such an implant. The attendees knew the retailing business to be changing so drastically and confusingly that such statements seemed plausible. By now it is industry consensus that brick-and-mortar merchants—the department stores, supermarkets, specialty stores, and chain stores that still sit at the center of the retailing universe—will succeed only if they turn their stores into facilities that track shoppers using wifi, Bluetooth, light beams, undetectable sounds, facial recognition, and more, even implants. Further, the people in charge of these retailers see it as a top priority that coming generations of customers learn to think of this surveillance as natural, even welcome—who doesn’t like a discount?

This push pertains to a topic that in other realms is far more controversial: Policy experts, privacy advocates, corporate executives, and academics are arguing fiercely about the legality and ethics of data mining by online advertisers and the government. Meanwhile, retailers are doing the same thing and attracting comparatively little attention. As they continue, they are quietly sending consumers the message that offering up information about themselves is simply a prerequisite in a new era of shopping.

Even if retailers frame their increasing reliance on analytics as the natural next step of a competitive industry, there’s no law of shopping stating that sellers will treat customers better and better the more they learn about them. In fact, the fallacy of expecting that to happen becomes clear when examining how the act of buying things has changed in the past 250-plus years. Shoppers are entering a third stage of American retailing, one that has more in common with the 18th and 19th centuries than with the one that just passed.

The first stage was that of the peddler and small merchant. European sellers of the 1700s, for example, followed well-worn strategies to maximize their returns on goods. To remember what they paid their suppliers, peddlers marked the back or bottom of their products with symbols known only to them or close relations. In addition to keeping track of the prices and loans they negotiated, they kept track of their customers: They recorded people’s occupations, their spouses’ names, their family connections, and their social standing in their village. These records allowed peddlers to customize their sales pitches. Their “preferred” customers were getting especially good deals, the merchants could say, while keeping secret that those buyers were actually paying more than other groups.

As many European immigrants poured into North America during the 18th and 19th centuries, the peddling business migrated with them. When these salesmen were able to amass a bit of cash, some established small general stores or food markets. Settling down allowed merchants to develop more-personal relationships with their customers than they could going door-to-door or marketplace-to-marketplace. Yet personalized deals increasingly caused angst for shopkeepers, perhaps more than when they were itinerants. Customers suspected that grocers of ethnicities different from their own overcharged them or supplied them with lower-quality products. Many black people who frequented stores owned by whites were especially suspicious about this opacity of price and quality.

During the mid- and late 19th century, these strains helped produce America’s second stage of retailing: the era of posted prices. Although Quaker merchants had long believed it morally abhorrent to charge different people different amounts for the same items, a growing number of non-Quaker merchants began to adopt fixed prices because doing so saved them the trouble of teaching their clerks how to bargain—an important consideration during the growth, beginning in the 1840s, of multi-story, multi-department emporia with many employees (such as A.T. Stewart, Lord and Taylor, and Wanamaker).

It was a transformative time for shopping in other ways, too. The rise of department stores with posted prices fed into an entirely new philosophy of consumerism’s societal importance. Although the real incomes of many 19th-century Americans were growing, the distribution of wealth was lopsided; the captains of industry, a small group, controlled much of the nation’s assets. Some of those in power at the time seem to have wanted to draw public attention away from the criticism that the resources of a relative few were diminishing the democratic political power of the many. As the historian William Leach argued in his 1993 book Land of Desire: Merchants, Power, and the Rise of a New American Culture, democracy was reimagined as material instead of political—as the equal right of each American to want the same goods and to pursue them in the same environment of comfort and luxury.

Leach’s phrase “the democratization of desire” encapsulates the ideology retailers promoted in the 20th century. Not only was everyone in the store presented with the same price, but they could all see the same goods, in beautiful surroundings ostensibly open to all; women especially were welcomed. Competing to win over customers, merchants tried to outdo one another, hiring designers and architects to appoint stores’ edifices with large display windows, carved wood, polished stone, imposing mirrors, fancy elevators, streamlined escalators, and central heating. (Macy’s, in Manhattan, and John Wanamaker and Gimbels, in Philadelphia, are some of the most well-known now, but several other cities were home to similar enterprises.)

Grocers followed suit, though at a slower pace. By the 1950s, Collier’s magazine could enthuse that America’s supermarkets

are the world’s most beautiful. They’ve gone into color therapy to rest the shopper’s eyes; installed benches to rest her feet; put up playgrounds and nurseries to care for her children; invented basket carts with fingertip control; revolutionized a packaging industry to make her mouth water; put on grand openings worthy of Hollywood premieres.

The reality, and it was very much a gendered, class-based reality, didn’t always match up with the ideology. Early on, department stores divided their clientele into two broad groups, the more affluent “carriage trade” and the poorer “mass” or “shawl” trade—welcoming the former on the upper levels and ushering the latter toward the basement. Store managers selected experienced and native-born women as salespeople for the higher-price departments, and placed neophyte and immigrant women into areas that sold less-expensive goods. As for the large grocery chains, store owners often avoided low-income neighborhoods, especially ones that were predominantly black. The supermarkets that did open up in those areas tended to be dirtier and have lower-quality food and less variety than those in more affluent districts. These disparities rarely made the headlines. When they did, notably with supermarkets in the 1960s, retailing executives offered excuses or promised to do better. But these businesspeople didn’t challenge the basic proposition that shopping should provide Americans with equal access to a wide range of consumer products.

Today, in the third stage of retailing, many executives are challenging this fundamental dictum. They are celebrating the routine profiling and discrimination that characterized the peddler era, and scaling up this discrimination with the help of data analytics. This third stage began with the commercialization of the internet in the 1990s, though a number of earlier inventions, especially the barcode, led up to it. The new-era merchants’ dogma is that differentiating individual shoppers is the best way to maximize profits. A retailing requirement is therefore to learn as much as possible about people and their shopping habits so the merchant can show them the right goods with the right messages at the right moments. Merchants can offer different people different prices for the same products—not only online but also in the aisles, via smartphones—based on what they know about them.

To those who get the best deals and service in such a system, this probably sounds perfectly acceptable. But for every person who feels that way, there will be plenty who don’t. People whose buying history shows they are mostly bargain shoppers who bring the retailer small or no profit margins will be shown few discounts, or maybe none at all. If shoppers are cherished regulars, special mirrors with cameras may remember their shape and help them match clothes without trying them on. Others may see a different side of recognition: Store cameras that identify people with criminal records might alert the store’s security team.

Even those who think they will end up better off under this new system may not be accounting for some possible outcomes they may not like. Retailers might hire statistical consultants to generate reports about people’s eating habits based on the food they buy, about their weight based on the clothes they look at online and in the store. They might make predictions about people’s health based on the groceries and over-the-counter drugs they purchase. The resulting portrait of each shopper may result in some personalized coupons to redeem now, or even ads from insurance companies that have determined someone to be a likely target for specific policies. But this picture may turn sour as one ages, when statistical formulas start to make unflattering inferences about one and one’s family. Consider, too, that some retailers sell or trade the information they compile about their customers in possibly unwanted ways; some even assign “attractiveness” scores to shoppers based on the data. And in the not-too-distant future, the knowledge that companies have developed about shoppers may lead news organizations to highlight, and even modify, certain stories for them, and advertisers to provide them free access to certain premium television programs but not to others.

Much of this will be happening—or is already happening—without Americans’ consent or knowledge. Yet this new stage of retailing—a stage that harks back to 18th-century strategies of price and product discrimination—is only beginning. Merchants, left to their own interests and in response to hypercompetition, will create a world where what individuals experience when they shop will be based on data-driven profiling. And at present, shoppers have little or no insight into the profiles and how they are used. The common connotations of the word surveillance have yet to encompass the world of retail.

Harris & Ewing / Library of Congress / PureSolution / Shutterstock / Zak Bickel / The Atlantic

How Long Can Border Agents Keep Your Email Password?


When you cross into or out of the United States, whether in a car or at an airport, you enter a special zone where federal agents have unusual powers to search your belongings—powers they don’t have elsewhere in the country. The high standard set by the Fourth Amendment, which protects people against unreasonable searches, is lowered, and the Fifth Amendment, which guards against self-incrimination and prevents the government from demanding computer passwords or smartphone PINs, is rendered less effective.

These special rules allowed a customs officer at the Houston airport to ask a NASA engineer to give up the passcode to his smartphone last month. The engineer, Sidd Bikkannavar, was reentering the U.S. after a two-week vacation in Chile, but the device he had on him belonged to his employer, NASA’s Jet Propulsion Laboratory. He routinely used the smartphone for sensitive work, so losing sight of it for a half hour was a “huge, huge violation of work policy,” Bikkannavar told me.

After he was released, Bikkannavar immediately got a new phone from his employer and changed his PIN. But what if he hadn’t, and then traveled internationally again? If he were selected a second time for questioning at the border, the officer interviewing him would check the record from Bikkannavar’s last run-in with Customs and Border Protection—which may include the passcode that he revealed to an agent.

• • •

The rules around what information can be retained after CBP inspections—and for how long—aren’t entirely clear-cut.

A notice published in 2008 in the Federal Register, the government’s official journal, describes the main database CBP uses for traveler information. Here’s a non-exhaustive list of the kinds of data that the records system known as TECS—that’s not an acronym—hosts:

… full name, alias, date of birth, address, physical description, various identification numbers (e.g., social security number, alien number, I-94 number, seizure number), details and circumstances of a search, arrest, or seizure, case information such as merchandise and values, methods of theft.

In addition, if officials search an electronic device like a laptop or smartphone, they may create a copy of the device’s contents. Without probable cause, CBP can’t keep that data on record for longer than a week (although some circumstances allow the window to be extended to a month), except for information “relating to immigration, customs, and other enforcement matters,” according to an official privacy impact assessment released in 2009. A CBP spokesperson confirmed that this policy is still in place.

The list of data that CBP can keep doesn’t include “passwords” or “credentials,” but that doesn’t mean they aren’t gathered and stored. Hugh Handeyside, a staff attorney at the American Civil Liberties Union, says that customs officials can enter miscellaneous information into records submitted to the TECS system.

The CBP spokesperson said that the agency can hang on to a password to facilitate digital searches once a device has been detained. The spokesperson did not say whether the password would be deleted from a traveler’s record after the search is over.

Generally, once a piece of information has been entered into the system, it can stay there for a very long time. According to the Federal Register notice, data in TECS can be kept for 75 years—or for the duration of a “law enforcement matter” or any “other enforcement activities that may become related.”

One of the few laws that would constrain how CBP would collect, keep, and disseminate personal information is the Privacy Act of 1974, which regulates how federal agencies treat sensitive personal data. But the Department of Homeland Security, CBP’s parent agency, has exempted TECS from that law since at least 2009. Instead, CBP considers requests from individuals who ask to access records about them—a right guaranteed under the Privacy Act—on a case-by-case basis.

“Any limits would have to be derived directly from the Constitution or international treaties, not from statutes or regulations,” said Edward Hasbrouck, a travel expert and consultant to The Identity Project. “I am not aware of any case law limiting retention of this sort of data.”

To better understand how CBP collects and retains data, Hasbrouck requested his own travel records from the agency in 2007. He received incomplete responses and eventually stopped hearing back, so in 2010, Hasbrouck sued DHS to compel it to turn over the records he requested.

The documents he won in the lawsuit—some of which went as far back as 1992—show the detailed notes that CBP officials keep on travelers. Two records in particular showed the results of a pair of inspections Hasbrouck submitted to in 2007 and 2009. In one instance, Hasbrouck was interviewed at Boston Logan airport on the way back from London, because he “verbally declared” that he was carrying food. In the “inspection remarks” section, an official noted that “1 APPLE WAS SEIZED. BREAD WAS INSPECTED AND RELEASED.” (The “remarks” section is likely where a seized password might be entered.)

That information probably won’t come back to haunt Hasbrouck the next time he flies internationally. But once it’s saved, it’s fair game for use in future border encounters. Hasbrouck says he’s been questioned about the findings of previous inspections even years after the initial incident. If his records had contained any more sensitive information, they could easily have caused him trouble every time he traveled.

• • •

In October, a Canadian man traveling from British Columbia to New Orleans was taken aside for questioning at the Vancouver Airport. There, a CBP officer demanded the password to his phone and computer, according to a recent report in the DailyXtra, a Canadian LGBT news site. The man, identified only by his first name, André, turned over his credentials, and waited for “an hour or two” as officers searched through his digital life.

When the officer returned, he began “grilling” André about his emails, apps, and browsing history. The officer asked about André’s accounts on Scruff, a gay hookup app, and BBRT, a gay hookup website, in an episode the traveler described as “humiliating.” Ultimately, he chose not to enter the U.S. that day, and gave up his seat on the flight.

A month later, André tried to fly to New Orleans again. He was again singled out for secondary inspection at the Vancouver airport, where officers asked for his devices. But this time, DailyXtra reported, the officers didn’t ask for his passwords; they still had them saved from the previous inspection. They rifled through his devices again, and even though André had wiped them of most of his personal information, he said he was not let through and was told he was a “suspected escort.”

I asked several experts who study digital border searches whether they’d ever heard of a similar case. None were aware of a specific instance of a device or online password being retained—and reused—but each said he or she wouldn’t be surprised to learn that CBP has a policy of retaining passwords.

“Based on the policy and reported incidents, my best guess is that CBP agents have broad discretion to keep login credentials if they think they will have a reason to use them in the future,” said Catherine Crump, a law professor at the University of California, Berkeley, who has brought multiple cases against the government’s digital border search policy. “Bottom line: Change your passwords, people!”

For now, CBP is likely interested mostly in passwords to physical devices, not passwords to online accounts. Since the agency’s expanded authority to conduct searches is restricted to the border, lawyers say it wouldn’t cover a search of a traveler’s Facebook or Twitter profile. Doing so would mean requesting information from data centers located around the country or overseas—outside the border zone—and would require a traditional subpoena, or some other type of court order.

But DHS Secretary John Kelly has proposed making social-media searches routine. At a hearing earlier this month, Kelly said he’d like to make it mandatory for visitors to the U.S. to turn over their browsing history and passwords to their social-media accounts. The proposal was met with intense opposition from human-rights groups and security experts, who say it would violate fundamental privacy rights, and could set a worrying precedent for other countries.

If such a policy were put into place, CBP could begin to compile the keys to travelers’ digital kingdoms, simply by asking for passwords at the border, jotting them down, and keeping them. Unless travelers change their passwords after a search, they may find that their input isn’t needed next time they’re stopped at the border.

A U.S. Customs and Border Protection agent checks an overseas visitor's fingerprints and image in a database. (Stephen Chernin / Getty)

How Does Donald Trump Think His War on the Press Will End?


American presidents have often clashed with the press. But for a long time, the chief executive had little choice but to interact with journalists anyway.

This was as much a logistical matter as it was a begrudging commitment to the underpinnings of democracy: News organizations were the nation’s watchdogs, yes, but also stewards of the complex editorial and technological infrastructure necessary to reach the rest of the people. They had the printing presses, then the steel-latticed radio towers, and, eventually, the satellite TV trucks. The internet changed everything. Now, when Donald Trump wants to say something to the masses, he types a few lines onto his pocket-sized computer-phone and broadcasts it to an audience of 26 million people (and bots) with the tap of a button.

It may be banal to point out how dramatically the world wide web democratized publishing. But to understand Donald Trump’s war on the press, you have to consider what has happened to American journalism since August 6, 1991, the day the first website launched. With that first website, the thick layer of mediation that once existed between the president and the masses began to evaporate. The influence of all those former intermediaries would undergo a profound cultural shift as a result.

Before, you couldn’t get the news without publishers, producers, editors, reporters, camera operators, technicians, truck drivers, and kids with paper routes. Today, any president can bypass all that. And he can say whatever the hell he wants.

Incidentally, 1991 wasn’t a great year for Donald Trump. It was the year of his first major bankruptcy. The Trump Taj Mahal casino was $3 billion in debt. Trump faced a staggering $900 million in personal liabilities. His spectacular financial woes made countless front pages. The bankruptcy was legitimate news. But also: Schadenfreude sells. He was eviscerated in the tabloids and trashed on late-night television. Newspaper columnists described him as a “poor little rich boy,” and a clueless confidence man responsible for the tailspin that brought him down.

That same year, the World Book Encyclopedia promoted the fact that Trump had been axed from its latest edition, “beaten out by former Panamanian dictator Manuel Noriega,” according to newspaper reports at the time. Trump “makes interesting newspaper copy, but so far he lacks lasting significance for a World Book article,” World Book’s executive editor told The Chicago Tribune. The encyclopedia was promoting its product based on the fact that Donald Trump wasn’t in it. And for the first time since he’d become famous, Trump shunned publicity. His silence, as much as anything, seemed to signify how serious his troubles were. But it didn’t last.

There would be three additional bankruptcies, but none prevented Trump’s famous (then really famous) comeback. Once mocked by the New York City tabloids for exploiting his 15 minutes of fame in the 1980s, Trump is now the most newsworthy figure on the planet. His reputation for attention seeking hasn’t waned, but now that he is the president of the United States, he doesn’t have to appeal to news organizations to get the spotlight.

So here we are. Trump has used the ease of modern publishing technology—and his influence as president—to lead a full-on anti-press crusade. Since December of last year, when Trump first started tweeting about “fake news,” he’s been using every platform within his reach to attack journalists and news organizations.

No one has ever accused Trump of being overly nuanced, but his vitriol for the media is brazen—even for him. This brazenness seems to be the point.

Trump has long been masterful at commanding attention from tabloids and television stations. Declaring a “running war” with the media, turning “fake news” into a catchphrase, doubling down on his characterization of journalists as the “enemy of the people”—all of this is part of a larger strategy. “I want you to quote this,” Stephen Bannon, one of Trump’s top advisers, said in an interview with The New York Times in January. “The media here is the opposition party.”

The “opposition party” bit made headlines, naturally, but Bannon’s insistence that it be quoted is just as telling. It’s clear the Trump administration wants people to focus on its disdain for the press. What’s less clear is how the president believes his war on journalism will end. But if you pick apart the strategy, it has all the earmarks of a classic Trump publicity blitz—the kind of campaign he has used in the past for financial, personal, and political gain.

Trump has tweeted about “fake news,” a term he uses for stories he doesn’t like, 20 times in February so far.
(Screenshot from the Trump Twitter archive)

First, there’s the appeal to emotion. Trump has picked an easy target by tapping into existing distrust for American journalism. Although it is shocking for a U.S. president to threaten a free press the way Trump has, his criticism may resonate with Americans—few of whom have a lot of confidence in information from professional news outlets, according to a Pew Research Center study last summer. One way to win people over: Tell them something they already believe. Trump doesn’t have many targets who are more unpopular than he is, but the media might be one of them.

Second, there’s the muscle flexing. Trump’s anti-press campaign is a way of simultaneously putting journalists on the defensive and exerting his own power—and there’s plenty of evidence that Trump relishes public demonstrations of might. (See also: The role he played on his popular game show The Apprentice, his taste for military parades, that intense handshake yank of his.)

Trump’s strategy operates on multiple levels: He attempts to undermine credible yet unflattering news reports by calling them “fake” or “dishonest.” Then, by provoking an alarmed response from journalists, he’s poised to brush them off further as hysterical. At the Conservative Political Action Conference, he chastised professional news organizations for not calling themselves fake. That, he explained, was proof that they were.

Similarly, calling the press “the opposition party” makes every unflattering story seem like confirmation that journalists are acting against him—rather than merely reporting the news and holding him accountable as an elected official. Trump is taking the naturally adversarial relationship between the press and the government, and attempting to recast it as a fight between political rivals. In this way, he is setting a stage so that any of his or his administration’s potential missteps can be recast as politically charged criticism or outright lies.

As a bonus to him, Trump’s hostility toward the press is a distraction from the actual work of the Trump administration. Which means Trump successfully leaves the impression that the press is busily focused on itself—rather than concerned primarily with the issues of the people. (Never mind that journalists continue to cover his administration doggedly, and will continue to do so.)

All of this is about making Trump appear strong and successful, no matter what. And that’s essential for a person who wants to stay in power. As my colleague Vann Newkirk wrote, “dogged by unprecedented public disapproval, confronting questions of legitimacy, relying on a base fueled by partisan conflict, and facing extensive grassroots opposition, Trump’s campaign will be indefinite.”

Trump is a master provocateur, perhaps because he has a reputation for being thin-skinned himself. He knows how to needle people. He knows which buttons to push. It’s why people adore him and despise him: because he knows how to get to them. He seems to have intuited that journalists, who believe deeply in the importance of their own work, will leap to defend the significance of what they do. Journalists writing about their own indispensability run the risk of underscoring the perception that they are elite, privileged, and somehow separate from “the people.”

It’s no mistake that Trump describes “the media” as a monolith, despite his recent insistence that only some news is fake news. This has a dehumanizing effect: These aren’t your fellow citizens questioning the people in power on your behalf, he suggests, they’re the media.

At the same time, Trump’s list of objectionable outlets appears to be expanding. With the exception of Fox, he has called every major TV news network “fake,” including ABC, NBC, CBS, and CNN. The New York Times, in particular, has been an obsessive target of his lately.

Three months ago, Trump complimented the paper. “I have great respect for The New York Times. Tremendous respect. It’s very special,” he said in a November meeting with the newspaper’s leadership. (Trump also complained that the Times was “the roughest of all” in what he saw as unfair media treatment toward him, but concluded that the Times was a “great, great American jewel. A world jewel.”)

Today he refers to the paper as “the failing @nytimes” on Twitter, evoking the nicknames he bestowed on political rivals like “crooked Hillary” Clinton and “lyin’ Ted” Cruz. News organizations that were blocked from attending an off-camera White House press briefing last week included The New York Times, The Los Angeles Times, BBC, CNN, Politico, and BuzzFeed News. “As you saw throughout the entire campaign, and even now, the fake news doesn’t tell the truth,” Trump had said earlier that day in his remarks at the Conservative Political Action Conference. “...it doesn’t represent the people. It never will represent the people. And we’re going to do something about it, because we have to go out and we have to speak our minds, and we have to be honest.”

The absurdity of using the First Amendment as justification for repeated attacks on a free press raises a lingering question in all of this about whether Trump himself is faking it. It’s not such a stretch to see bluster against journalism as the ultimate Donald Trump performance—the product of a cultural convergence that includes pro-wrestling, reality television, conspiracy theories, and Trump’s singular talent for making up sophomoric catchphrase-insults. The temptation to see things this way is dangerous.

Because when you’re the president of the United States, you can’t pretend to tear down an institution without risking its actual destruction. And you can’t speak like an authoritarian and expect to avoid the suggestion that you are one.

“I love the First Amendment,” Trump told the CPAC crowd. “Nobody loves it better than me. Nobody.”

“I mean, who uses it more than I do?” he added.

He uses it all right, but to what end? Freedom of the press is not an institutional right; it’s a constitutional one. It belongs to all American people—to you, and to me, and to Donald Trump. And no matter what the president says, no matter who he calls fake, the best journalists will be doing what they must. They’ll be reporting. Fearlessly, fairly, truthfully, and relentlessly. And nothing the president says will stop them.

Donald Trump addresses the Conservative Political Action Conference in February 2017. (Jonathan Ernst / Reuters)

Elon Musk’s Moon Mission Would Vault SpaceX Past NASA


SpaceX is planning to send two people on a trip around the moon in late 2018, Elon Musk, the company’s CEO, announced Monday.

Two passengers—private citizens, not astronauts—will launch inside a Dragon capsule atop a Falcon Heavy rocket for a weeklong, 400,000-mile loop around the moon. The space tourists paid SpaceX a “significant amount of money” for the trip and will begin training next year. Musk wouldn’t give their names or genders, nor did he say how much the journey would cost.

For the mystery passengers, the trip is a once-in-a-lifetime vacation. For Musk, the mission, if successful, could establish SpaceX as the state of the art in human spaceflight. NASA is still a few years away from testing its Space Launch System, which is supposed to carry astronauts beyond low-Earth orbit, and even further away from testing the system with humans on board.

If Musk meets his deadline, the moon trip will take place during the 50th anniversary of Americans’ first-ever orbit around the moon. “Like the Apollo astronauts before them, these individuals will travel into space carrying the hopes and dreams of all humankind, driven by the universal human spirit of exploration,” SpaceX said in a statement Monday.

Deadlines, of course, have not been kind to SpaceX. The company had to push several rocket tests from 2016 to this year following an explosion in September. The Falcon Heavy has never flown before, and is scheduled for a test launch this summer.

The news is bound to make the White House happy, even if it unnerves some inside NASA. President Donald Trump’s closest advisers want to return humans to the moon as soon as possible, and they’ve shown a preference for private space companies over traditional contractors with ties to the government. More and more, there’s talk of “New Space”—SpaceX, Blue Origin—gaining speed over “Old Space”—Boeing, Lockheed Martin. Musk’s announcement could earn him still more favor with Trump.

Mario Anzuoni / Reuters

The Wisdom of Nokia's Dumbphone


They weighed heavy in pockets and jackets and bags, for they were thick and bulky, not lithe and narrow. Harried professionals never clutched one ostentatiously to say silently, “I’ve got better things to do than listen to this pitch or order this coffee.” Fashionable youth never dangled one nonchalantly from their fingers as a flirtatious gesture. Nothing was less sexy or less useful than a cell phone.

How is it possible, then, that Nokia has announced an updated edition of one of its most popular phones of the early aughts, the 3310? In short, because nothing has become less sexy or less useful than a smartphone.

* * *

First released in 2000, the Nokia 3310 emerged during the Cambrian explosion of mobile devices. Fashionless black bricks crossed the paths of colorful, candy-bar handsets. Slim, black Ericsson flip-phones shared airport security bins alongside silvered Motorola clamshells. WAP-enabled “feature phones” offered rudimentary, useless access to the internet, while the fat fingers of government officials and corporate executives mashed the keys of BlackBerry 957s and Treo 180s. Teens thumb-typed too—but texts instead of emails, on Danger Hiptops. The Nokia N-Gage even tried, and failed, to merge the mobile handset with the portable game system. Over the first half-decade of mass-market mobile devices, everything was attempted and nothing was holy.

In retrospect, the story of all these devices’ downfall is an obvious one. BlackBerry and Palm had failed to guess that smartphones would appeal to a general audience, not just to business users. They’d also mistaken the physical keyboard for a requirement. And all the rest of them, flip or clamshell or candy-bar, had assumed that phone calls and text messages would define portable handheld computing.

Instead, in 2007, Apple made a general-purpose touch-interface computer in the form of a thin glass rectangle, which others have copied and adapted ever since. Ten years later, all anyone does, pretty much, is stroke and fondle one of these things, all day long. And for the privilege, consumers pay big bucks, continuously, to keep up with planned obsolescence reinvented as seasonal fashion.

The shame of a habit can only emerge once it has reached ubiquity. Even though the risk of compulsive obsession with smartphones was clear halfway into their ten-year life, the trauma of that obsession is only starting to dawn on people.

There are reasons. Years of odious abuse on services like Twitter and Reddit have finally metastasized into resigned admission. The logic of amplifying information based on popularity, as Google and Facebook do, has finally revealed its obvious downsides. The demand of constant, unceasing attention from apps like Snapchat and games like Candy Crush Saga has begun to feel like the unpaid labor it always was.

For years, internet-driven, mobile computing technology was heralded as either angel or devil. Only recently has it become possible to admit that it might be both. Cigarettes, after all, produce pleasure even as they slowly kill.

Given the rising angst of a society run by technology, Nokia might have picked the perfect time to introduce an antidote to the smartphone. But even under today’s conditions, it is tempting to see the new Nokia 3310 merely as another example of retro nostalgia. Ha-ha, what if you could get a dumbphone instead? It would pair perfectly with a milk crate full of vinyl albums. But it’s also possible that the 3310 marks the start of a new period of technological mobility. One that offers a sense of how even the most entrenched technological habits might yet turn out differently.

* * *

Phones like the Nokia 3310 never went fully extinct, despite the geological devastation of the iPhonecene. Thanks to their low cost, feature phones have remained popular in the developing world, where they have always been more common than traditional computers. Even in sub-Saharan Africa, smartphones only overtook feature-phone sales in the last two years. HMD Global, the Finnish company that licensed Nokia technology and branding back from Microsoft, sees more opportunity in these markets. Late last year, the company unveiled two new, $25 feature-phone models, marketed to the billions of feature-phone users in Europe, the Asia Pacific, India, the Middle East, and Africa.

Since the iPhone and Android rose to prominence, high-quality, reliable alternatives to smartphones have all but disappeared in the developed world. And where they remained, they have become so uncool as to make their adoption grounds for public shame.

But there are reasons to prefer a phone as a portable communications tool instead of a compulsive, general-purpose computer. Some of those reasons recall the original uses of the cell phone. When it first became generally popular in the late 1990s, mobile handsets often were bought as insurance against surprises or emergencies. People tossed them into automobile glove boxes, or carried them in pockets or purses only when the perception of risk or the need for coordination seemed to warrant it. Those concerns persist, especially now that people have become accustomed to being able to reach one another immediately and constantly. An inexpensive, reliable handset with a long-lasting battery might turn the Nokia 3310 into a second phone, or a backup phone.

Another possible use is domestic. Once mobile phones became ubiquitous, many households gave up on home phone lines. By 2012, more than half of American homes didn’t have a landline. But the decline of the home phone also had unexpected consequences. Smartphones are personal devices, while the home phone line is a shared resource. As mom, dad, and all of the kids began getting their own, private communication tools, the phone as a point of entry to the household disappeared. But there’s reason to imagine a future in which the telephonic hearth renews. A cheap, appealing device like the Nokia 3310 could be left on the kitchen counter, ready to cry its loud, familiar ring as a means of reaching the home in general, rather than one of its occupants. Anyone who has missed a service call because the iPhone was on silent on the nightstand, or awkwardly passed around a personal device with grandma and grandpa on the other end, might willingly add another line to avoid such discomforts.

Then there’s the matter of smartphones’ overall sameness. Last year, my colleague Robinson Meyer profiled the Caterpillar S60, a smartphone designed by the heavy-equipment company, built for construction work. Rugged and reinforced, it can be dropped from heights, immersed, and exposed to the elements. Rob rightly called the future that the Caterpillar S60 represents one in which “smartphones become unboring.” Boredom ends not with the addition of some newfangled feature that everyone needs, but with specific features useful in particular circumstances.

The simultaneous fragility and expense of smartphones also help explain why the Nokia 3310 might appeal even to consumers who can afford better. Speaking to The Guardian, the industry analyst Ben Wood cited the “festival phone” and the “backup phone” as possible contexts for the handset. When working in the yard, camping, or white-water rafting, the risk of damaging or destroying a $750 smartphone makes carrying one seem foolish. But what other choice is there? Likewise, when attending a concert or sporting event, thin smartphones might risk getting lost or stolen. In these instances, a $50 Nokia 3310 offers clear benefit.

Security and privacy offer more reasons to consider a dumbphone. Smartphones are sophisticated tracking devices: GPS and motion sensors, along with an always-on internet connection servicing dozens of apps doing unspecified processing in the background, make these devices encyclopedias of their users’ actions and behavior. While the cellular network can always be used to determine a handset’s location, that tracking is far less granular and accurate than GPS. And given recent examples of border agents demanding access to travelers’ smartphones, feature phones might become standard equipment for frequent travelers.

Upon arrival, the feature phone might even prove a more useful tool than the smartphone to the jetsetter. How much energy do business travelers expend managing the power and network needs of their smart devices at meetings or conferences? True, the Nokia 3310 can’t direct owners around an unfamiliar city on a map, or allow them to live-tweet an event. But perhaps the need to keep a smartphone powered and connected to the network—not to mention the compulsion to use all those apps—would be reduced given a second device meant solely for communication.

The Nokia 3310 might also offer an alternative to smartphones for younger kids. It’s much easier for a preteen to signal a pickup by phone or text than to try to guess when and where to arrive. Some parents, myself included, try to pawn off cheap, basic phones on children as they become old enough to pursue schedules outside the home. But these tools are devoid of style. They are embarrassments, without appeal, and as such they are more likely to go unused or be left at home. Should Nokia rekindle the cultural allure of the feature phone, perhaps its potential as a communications tool absent the urges of the smartphone will inspire parents to stop handing over these glass-and-metal temptations to their progeny without a second thought.

* * *

It might be premature to announce the end of humanity’s love affair with the smartphone. But the relationship’s cracks are surely showing. Some have immediate consequence. Apps have contributed to a huge spike in traffic accidents and deaths, as more and more people attempt to operate finicky handheld devices while driving. The partial-reinforcement techniques baked into today’s apps and games have become more apparent to users, who seem increasingly resigned to services they feel unable to quit. And the uniformity in design of devices has arrested their future potential: every year, another glass rectangle, affording no more or less than it promises, which is more of the same.

The smartphone’s conquest is definitive and complete. A decade after its form solidified, the contemporary citizen of the developed world has almost no choice but to own and operate one. And yet, the joy and the utility of doing so has declined, if not ceased entirely.

People called them cell phones, or mobile phones, depending on the home continent of their carriers. “Cell phone” highlighted the empty space between communication nodes—the cells—common to North America. In Europe, where the land is flat, the nations small, and the coverage ubiquitous, versatility took center stage: the capacity to remain connected while on the move thanks to a “mobile.” These names describe dreams as much as reality. Likewise, “smartphone” is wishful thinking as much as anything: a skeleton key, an aluminum genius, a singular device that deploys software, connectivity, portability, and design to realize, finally and definitively, the ideal of universal computation.

But the truly smart actor does not use one single tool, but the right tool for the job. That the right tool might always be one forged in software and activated via touch and gesture should have been as preposterous as it is now only starting to seem. The Nokia 3310 isn’t the answer to the future of mobile telecommunications, or computing, or digital fashion, or any of the other domains in which the iPhone has pulled the wool over collective eyes. Instead it can be just what it is: one specimen in an ecosystem of technical diversity. The promise of Nokia’s device isn’t this particular device, but the alternate, unthought future it represents.

HMD Global

The Scary State of Volcano Monitoring in the United States


Thirteen days before Christmas, somewhere in the frigid waters of the Bering Sea, a massive volcano unexpectedly rumbled back to life.  

Just like that, Bogoslof volcano began its first continuous eruption since 1992, belching great plumes of ash tens of thousands of feet into the cold sky over the Aleutian islands, generating volcanic lightning, and disrupting air travel—though not much else.

The volcano is on a tiny island about 60 miles west of Unalaska, the largest city in the Aleutians, which is home to about 5,000 people.

Bogoslof hasn’t quieted yet. One explosion, in early January, sent ash 33,000 feet into the air. Weeks later, another eruption lasted for hours, eventually sprinkling enough ash on the nearby city to collect on car windshields and dust the snow-white ground with a sulfurous layer of gray. Over the course of two months, Bogoslof’s intermittent eruptions have caused the island to triple in size so far, as fragments of rock and ash continue to pile atop one another.

Geologists don’t know how long the eruption will last. In 1992, the activity at Bogoslof began and ended within weeks. But more than a century ago, it erupted continuously for years. In the 1880s, volcano observers in the Aleutians had little but their own senses to track what was happening. Today, scientists use satellite data and thermal imagery to watch Bogoslof—signs of elevated temperatures in satellite data indicate that lava has bubbled to the surface, for example. But monitoring efforts are nowhere near what they could be. For the relatively remote Bogoslof, the absence of ground-level sensors is inconvenient, perhaps, but not necessarily alarming. Elsewhere, the dearth of volcano sensors poses a deadly problem.

There are at least 169 active volcanoes in the United States, 55 of which are believed to pose a high or very high threat to people, according to a 2005 U.S. Geological Survey report.

The flow of lava from Kilauea as photographed by a NASA satellite in 2014. (NASA / Reuters)

About one-third of the active volcanoes in the U.S. have erupted—some of them repeatedly—within the past two centuries. Volcanoes aren’t just dangerous because of their fiery lava. In 1986, volcanic gas killed more than 1,700 people in Cameroon. And one of the latest theories about the epic eruption at Pompeii, in 79 A.D., is that many people died from head injuries they sustained when boulders rained down on them.

Hawaii’s Kilauea, Washington’s Mt. St. Helens, and Wyoming’s Yellowstone all have extensive monitoring. But many volcanoes in the Cascades have only a couple of far-field sensors, several geologists told me. The Pacific Northwest, which includes high-population areas in close proximity to active volcanoes, is of particular concern for public safety.

“Most people in the U.S. perceive volcanic eruptions as rare, and [believe] that we’d be able to get advance notice because of the advance in science and instrumentation,” said Estelle Chaussard, an assistant professor of geophysics and volcanology at the State University of New York at Buffalo. “However, the massive eruption of Mount St. Helens, in Washington, was only 37 years ago, and it took until the volcano became active again in 2004 to start a truly comprehensive monitoring. ... This kind of assumption is therefore very dangerous, because most of our volcanoes are not as intensively monitored as we think they are or as they should be.”

Mount St. Helens spews steam and gray ash from a small explosive eruption in its crater on October 1, 2004.
(John Pallister / USGS / Reuters)

Almost half of the active volcanoes in the country don’t have adequate seismometers—tools used to track the earthquakes that often occur during volcanic eruptions. And even at the sites that do have seismometers, many instruments—selected because they are cheaper and consume less power—are unable to take a complete record of the ground shaking around an eruption, meaning “the full amplitude of a seismogram may be ‘clipped’ during recording, rendering the data less useful for in-depth analyses,” according to a 2009 report by the U.S. Geological Survey.
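What “clipping” does to a record can be seen in a few lines: once ground motion exceeds an instrument’s dynamic range, the trace flattens at full scale and the true peak amplitude is unrecoverable. A minimal sketch with made-up numbers (the ±1.0 full-scale value is an assumption, not any real instrument’s specification):

```python
import numpy as np

# Synthetic seismogram: sinusoidal "ground motion" recorded by an instrument
# whose digitizer saturates at a hypothetical full scale of +/-1.0.
true_motion = 1.8 * np.sin(np.linspace(0, 4 * np.pi, 400))
recorded = np.clip(true_motion, -1.0, 1.0)

# A clipped record sits flat at full scale; flag those samples.
FULL_SCALE = 1.0
clipped = np.abs(recorded) >= FULL_SCALE
fraction_clipped = clipped.mean()

print(f"{fraction_clipped:.1%} of samples clipped")
print(f"peak recorded: {recorded.max():.2f}, true peak: {true_motion.max():.2f}")
```

The recorded peak tops out at the instrument’s full scale even though the true motion was nearly twice as large—exactly the lost-amplitude problem the USGS report describes.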

“Using satellite radar and other systems, it should be possible to systematically keep a close eye on most all hazardous volcanoes around the world,” said Roland Bürgmann, a professor of planetary science at the University of California at Berkeley. “Currently, some volcanoes in the U.S. and globally are well-monitored, but most are not.”

Satellite radar helps fill in some of the gaps. As magma accumulates beneath the Earth’s surface, the ground bulges upward—and that bulge can be measured from space, using radar bounced off the ground. “That’s a big advance, because you don’t need sensors on the ground and, in theory, you could monitor all the Earth’s volcanoes,” said Paul Segall, a professor of geophysics at Stanford University. “The trouble is, there’s nothing up there that is designed to do that, and the orbital repeat times aren’t frequent enough to do a really good job.”

“In my view,” he added, “We haven’t even gotten up to bare bones, let alone more sophisticated monitoring.”
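The “bulge” those radar measurements pick up has a textbook idealization: the Mogi point-source model, which relates a volume change at depth to vertical uplift at the surface. A minimal sketch, assuming illustrative numbers (the depth and volume below are made up for the example, not taken from the article’s sources):

```python
import numpy as np

# Mogi point-source model (Poisson's ratio 0.25):
#   u_z(r) = 3 * dV * d / (4 * pi * (r**2 + d**2)**1.5)
# where dV is the volume change at depth d, and r is horizontal distance.

def mogi_uplift(r, depth, dV):
    """Vertical surface uplift (m) at horizontal distance r (m) from a
    point source at `depth` (m) that gained volume dV (m^3)."""
    return 3.0 * dV * depth / (4.0 * np.pi * (r**2 + depth**2) ** 1.5)

# Illustrative numbers: one million cubic meters of magma, 3 km down.
r = np.array([0.0, 1_000.0, 5_000.0])
uplift = mogi_uplift(r, depth=3_000.0, dV=1e6)
print(uplift * 100)  # in cm: largest directly above the source, decaying outward
```

Even this toy calculation shows why the signal is measurable from orbit: centimeter-scale uplift spread over kilometers is well within radar interferometry’s sensitivity.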

A plume from the Bogoslof eruption can be seen from Unalaska Island, 53 miles away from the volcano, on February 19, 2017.
(Janet Schaefer / AVO)

That’s part of why a trio of U.S. senators is reintroducing legislation aimed at improving the country’s volcano monitoring efforts. “For the past 34 years, we have experienced first-hand the threat of volcanic activity to our daily lives with the ongoing eruption at Kilauea,” Senator Mazie Hirono, a Democrat from Hawaii, said in a statement about the bill. “As recently as 2014, we had evacuations and damage to critical infrastructure and residences.”

The Hawaiian Volcano Observatory, on Hawaii’s Big Island, has been monitoring volcanoes since 1912—nearly four decades before Hawaii became a state. Today it’s considered one of the world’s leading observatories. Yet there’s little coordination between even the best observatories in the United States. The Senate bill calls for the creation of a Volcano Watch Office that will provide continuous “situational awareness of all active volcanoes in the U.S. and its territories,” and act as a clearinghouse for the reams of volcanic data that new sensor systems would collect.

“Long records of activity are especially important in volcano monitoring to successfully identify behaviors that differ from the ordinary,” Chaussard told me in an email, “and not all of our volcanoes have such records.”

“Essentially everything we do now is empirical,” Segall told me, “but most of the really dangerous volcanoes haven’t erupted in modern instrumental times.”

More data means a better opportunity to identify eruption warning signs, which Segall hopes could eventually make it possible to forecast volcanic activity the way we can predict severe weather like hurricanes. “I don’t know if it’s possible, but it seems a worthy goal,” he said. “We obviously have less ability to peer into the Earth as we do to peer into the sky.”

The lava flow from the Kilauea volcano moves over a fence on private property near the village of Pahoa, Hawaii, in 2014.
(USGS / Reuters)

As Uber Melts Down, Its CEO Says He 'Must Fundamentally Change'

It took eight years and at least as many back-to-back-to-back-to-back controversies to break Travis Kalanick.

After a stunning month of scandals at Uber, Kalanick, its founder and CEO, sent an emotional and uncharacteristically apologetic memo to his employees Tuesday night. “This is the first time I’ve been willing to admit that I need leadership help,” Kalanick wrote. “And I intend to get it.”

Uber has always been controversial, but never like this.

Kalanick’s message came hours after a video surfaced that showed dashboard-camera footage of him arguing with an Uber driver who had just given him a ride. In the video, Fawzi Kamel, who gave a recording of the conversation to Bloomberg, tells Kalanick that he and other drivers suffered as a result of lower fares for riders. “People are not trusting you anymore,” Kamel tells Kalanick. “I'm bankrupt because of you... You changed the whole business. You dropped the prices.”

“Bullshit,” Kalanick repeatedly says, raising his voice, criticizing the driver, and eventually exiting the car with the slam of a door.

“By now I’m sure you’ve seen the video where I treated an Uber driver disrespectfully,” Kalanick said in his message to Uber employees on Tuesday night. “To say that I am ashamed is an extreme understatement. My job as your leader is to lead … and that starts with behaving in a way that makes us all proud. That is not what I did, and it cannot be explained away.”

In the past, Uber explained away all kinds of transgressions. Its publicists are experts at managing the perception of mounting public backlash. And for a long time, Uber had the only two things that seem to matter in Silicon Valley: a product people kept using and an obscene amount of money.

Now, Uber’s future suddenly seems questionable.

One day before the dashcam video came out, Uber’s senior vice president of engineering resigned after having failed to tell Uber that he’d left his previous job at Google over a sexual-harassment complaint against him, according to Recode. Less than a week before that, Uber’s head of self-driving cars was accused in a federal lawsuit of having stolen a trove of secret documents from Google. That news came only days after an explosive blog post, written by the former Uber engineer Susan Fowler, describing a culture of pervasive and systematic sexism at the company.

“When I joined Uber, the organization I was part of was over 25 percent women. By the time I was trying to transfer ... this number had dropped down to less than 6 percent,” Fowler wrote. The clear reasons for this, she said, were “organizational chaos” and sexism. In one particularly memorable example, Fowler details an episode in which Uber’s female employees were told they couldn’t get the leather jackets that were being ordered for male staffers.

The director replied back, saying that if we women really wanted equality, then we should realize we were getting equality by not getting the leather jackets. He said that because there were so many men in the org, they had gotten a significant discount on the men’s jackets but not on the women’s jackets, and it wouldn't be equal or fair, he argued, to give the women leather jackets that cost a little more than the men’s jackets.


Uber even botched its response to the outrage over Fowler’s blog post, sending to some users who’d asked for more information a message that seemed to blame Fowler: “Everyone at Uber is deeply hurting after reading Susan Fowler’s blog post,” it said. Uber also felt compelled to come out and say explicitly that it’s not behind a smear campaign against Fowler, which she says started in the wake of her blog post about Uber, perhaps because Uber’s senior vice president of business once suggested spending $1 million on opposition research to dig up dirt on its critics’ personal lives. Uber declined to speak with me on the record for this story, and turned down requests for interviews with the company’s global head of diversity and its chief human resources officer.

The Fowler blog post dropped just weeks after a noisy anti-Uber campaign, which spread across the internet as #DeleteUber, launched by people who were incensed that Uber was giving rides to John F. Kennedy Airport, in New York City, during a strike by the union representing New York City taxi drivers. The New York Taxi Workers Alliance had halted service to and from the airport to show solidarity with people protesting President Donald Trump’s immigration ban.

Initially, Kalanick said in a statement he would raise concerns about the ban with Trump in a meeting with other business leaders as part of a White House economic advisory group he’d agreed to join. “I understand that many people internally and externally may not agree with that decision [to join the group], and that’s OK,” he wrote at the time. “It’s the magic of living in America that people are free to disagree... I’ve always believed in principled confrontation and just change; and have never shied away (maybe to my detriment) from fighting for what’s right.” After the backlash, Kalanick stepped down from the council.

The question now isn’t whether Uber has real problems—it clearly does. It’s whether one of Silicon Valley’s biggest success stories is actually a cautionary tale about hubris, sexism, and, ultimately, failure.

The conceptual simplicity behind Uber—press a button on your phone to hail a ride from anywhere—was always its strength. It’s why the company has been so frequently emulated by any startup that claims to be “like an Uber for” some other service. Uber’s growth since it launched in 2009 has been astounding by any measure. What began in San Francisco as a driving service with four staffers and two cars is now a global juggernaut with 11,000 employees operating in more than 500 cities across six continents. Uber is valued at an eye-popping $68 billion.

That growth isn’t unequivocally good. Uber remains “massively unprofitable,” as the Forbes writer Brian Solomon put it in an article about leaked financial documents from 2015. Uber has declined to comment on reports about its financial standing, but analysts point out that its widening losses are likely tied to longer-term initiatives like strategic acquisitions, mapping for self-driving cars—and self-driving cars themselves. In other words, Uber has been investing heavily in its own growth. Which is part of why, for a long time, the biggest mystery about Uber was when it would go public. The timing of an IPO is now very much secondary. It’s not clear that Uber can survive its own cratering reputation and uncertain business model. There are signs investors are worried. And if the reasons for Uber’s success were fundamentally about its simplicity, the reasons for its demise may be just as straightforward.

So maybe it’s too little, too late. But Kalanick is vowing to change things at Uber. He hired Eric Holder, the former U.S. attorney general, to investigate claims of sexual harassment and discrimination. In the months to come, the company will almost certainly showcase the work of two additional high-profile recent hires: Its new human resources chief, Liane Hornsey, who spent a decade at Google, and its global head of diversity and inclusion, Bernard Coleman III, who worked on Hillary Clinton’s presidential campaign.

It’s easy to see why. “There's strong evidence that shareholder activists do affect public company decisions,” Terra T. Terwilliger, a director at the Clayman Institute for Gender Research, told me. “Of course Uber is not yet a public company, but it hopes to be one soon. Regardless, activism of all kinds is a reality in today’s market, and companies must learn how to deal with activists both in the investor community and, increasingly, among their employees."

The memo Kalanick sent last night suggests that he may finally have seen himself the way others do. “It’s clear this video is a reflection of me,”  he wrote in his memo, “and the criticism we’ve received is a stark reminder that I must fundamentally change as a leader and grow up.”

This, from the 40-year-old CEO who, in 2014, jokingly characterized his desirability among women as “Boob-er,” and referred to his own success as “Holy shit ... hashtag winning.” The man whose quarrelsome demeanor has been described—by one unnamed venture capitalist to Vanity Fair, anyway—as “douche as a tactic, not a strategy.” The man who is frequently described as being on a warpath against taxi companies, against regulators, against Uber drivers, and against anyone else who might stand in his way.

In the past, Uber has approached controversy with swagger and a shrug that seemed to say: Tough shit. Things change and there’s nothing you can do about it.

That was certainly the stance Kalanick seemed to have adopted in his confrontation with Kamel, the Uber driver who recorded footage of their encounter. One can’t help but wonder whether, by watching himself from the distance of a dashboard camera, Kalanick realized that the insult he’d lobbed at Kamel was actually better directed at him: “Some people don't like to take responsibility for their own shit,” he’d growled. “They blame everything in their life on somebody else.”

The Uber CEO, Travis Kalanick, photographed in Beijing in 2014.
(Kim Kyung Hoon / Reuters)

How the Chili Dog Transcended America's Divisions

Forget about commercial feedlots and GMOs. Forget high cholesterol, expanding waistlines, and the merits of plant-based diets. Forget The Omnivore’s Dilemma and Fast Food Nation. Forget the trends of locavorism and clean eating.

Instead, consider the chili dog: the mass-produced frank, rolling down a gleaming conveyor belt. Consider the pillowy consistency of a bun pulled from a package of so many identical buns. Consider the ladle of brown chili draped over the top. Consider the sprinkle of cheddar cheese or the stripe of mustard, both the same artificial yellow.

The chili dog’s story is actually many stories: not only one about American fast food and appetites, but also about American industrialization, immigration, and regionalism. And each component—the hot dog, the chili, even where we eat chili dogs—adds another twist.

* * *

What is a hot dog? In his thoughtful and thorough book Hot Dog: A Global History, Bruce Kraig calls it a category of precooked sausage. Hot dogs can be skinless or stuffed into a casing. They are filled with emulsified red meats (beef, pork, veal). They are served in a bun. They are eaten out of hand.

Sausage has been a part of humans’ culinary repertoire for 15,000 years. Nobody knows who first thought to chop up one part of an animal, stuff that mixture into another part, and then cook it. But as long as humans have had access to fire and meat, they have been eating something that we could recognize, with only a bit of squinting, as a sausage. Ancient Rome and medieval Europe had sausages. They’re even mentioned in the Odyssey.

In the 17th and 18th centuries, British immigrants brought bangers to American shores, but the hot dog as it is known today is nearer to the German sausage. (The tradition of sausage-making is so established in Germany that Kraig cites a 1432 law regulating wurst.) While sausages as street food were common in American cities by the late 1700s, it was only after the Civil War that the sausage became, like so many other products of the age, machinated and industrialized. Meat moved from the butcher shop to the factory. And as it did, sausages homogenized. The hot dog was born.

This industrialization was possible for a few reasons. The growing American desire for meat and the ability to afford it, for one part. The construction and connection of railways, for another. New machinery had started to replace human butchers as well. By the 1870s, massive firms could swiftly slaughter, season, and process animals arriving by rail from the stockyards of the Midwest and turn them into hot dogs. Which was good, because the appetite for the encased meat product was growing. Americans wanted hot dogs, particularly the identical ones that came from name-brand companies such as Hormel or Armour.

Xenophobia played no small part in the demand for hot dogs. In the 1890s, America was experiencing its second wave of immigration, and many of the Eastern European arrivals were less than welcome. The handmade sausages hanging in an immigrant’s butcher shop were foreign and the man with the thick accent selling them suspicious.

But pleasingly uniform hot dogs sold from food carts seemed distinctly American, even if those carts were owned by immigrants. A decade later, as Upton Sinclair’s classic book The Jungle told horror stories about the labor conditions at meatpacking plants, some purveyors emphasized “pure” hot dogs as an alternative. Jewish-owned hot dog stands, with their kosher associations, made the all-beef hot dog number one in Chicago, even though many weren’t actually kosher. Nathan’s Famous on Coney Island dressed their countermen in clean white surgeon’s smocks to associate their brand with cleanliness.

By the early 20th century, the hot dog was fully American, and inextricably associated with another American pastime, baseball. Somehow, a mass-produced hot dog had become a symbol of American individualism. Never mind, of course, that both baseball and the Industrial Revolution have roots in Britain.

* * *

Meanwhile, in Texas, another immigration story was unfolding, featuring a group of women known as the chili queens.

Like the first sausage, the origins of the first chili are unknown. But as Gustavo Arellano explains in his book Taco USA: How Mexican Food Conquered America, by the 1870s a chili-and-meat dish appeared in San Antonio. As tourists streamed into the city’s plazas, they gawked not only at the spicy meat concoction but at who was selling it: women. The so-called “chili queens” played up the romantic exoticism of Old Mexico and decked out their booths with lanterns and musicians.

Arellano disputes an often-repeated story that the 1893 World’s Columbian Exposition whetted the nation’s appetite for chili. He notes that chili con carne appeared on Northern restaurant menus by the 1880s and was available to consumers in cans by the opening of the Exposition. Chili, it turns out, was a perfect product for canning: It was cheap, and the same railways that fed the stockyards of Chicago could transport chili and other canned goods by the case. So chili gained traction contemporaneously with the hot dog. It was another ethnic food, sanitized, homogenized, and made blandly American.

By the 1910s, the chili queens were being run out of the plazas and chased to progressively less visible corners of the city. They returned briefly in the 1930s, this time restricted by screened tents and the health department. By the end of World War II, the chili queens had disappeared entirely. Yet their signature dish, transformed into a gloppy canned product, had made its way to grocery store shelves across America.

* * *

The chili that adorns today’s chili dog is much closer to the meat sauces of Greece and Macedonia—marking the appearance of yet another marginalized ethnic group in the story of the modern chili dog.

If you are from Detroit, or Cincinnati, or if you have eaten hot dogs at roadside stands in Pennsylvania or upstate New York, you have had something like a coney. Several coney islands, a type of restaurant, in Michigan claim to have invented the coney—a hot dog dressed in a meat sauce, striped with yellow mustard and punctuated with diced onions. There’s Todoroff’s in Jackson, circa 1914. The brothers Bill and Gust Keros at American and Lafayette Coney Islands in Detroit say they had one by the 1910s. Down in Ohio, Thomas Kiradjieff claimed to have invented the Cincinnati cheese-covered coney in 1922. In each case, the meat sauce is laced with Greek seasonings—cinnamon, oregano, even chocolate. This is not Texas-style chili, but in most parts of the country, it is probably what blankets a chili dog.

There are countless coney-style hot dog variations that dot the map, especially around the Great Lakes, New England, and Atlantic regions. (For a helpful map, check out Hawk Krall’s on the website Serious Eats.) Even more convoluted is the nomenclature of each of these coney-esque hot dogs: the Michigans of Plattsburgh, New York; the “New York system” of Rhode Island; the universally misspelled “Texas weiners” of New Jersey and Pennsylvania.

The name coney comes from Coney Island, though it’s thought that few Greek or Balkan immigrants had seen Brooklyn’s Coney Island. Instead, they took the distinctly American name for their Midwestern coney restaurants, possibly to seem less foreign. After all, the first recipes for coney sauce called for beef hearts. In order to make the offal and seasonings of their homelands less exotic, they draped the sauce over the familiar hot dog.

Out in Los Angeles, Art Elkind claims to have invented the chili dog in 1939. Entire generations of Southern Californians will name Pink’s as the formative chili dog. Regardless of who invented the chili dog, it was here to stay. By the time the fast food drive-ins and car culture of the 1950s and 1960s took hold, the chili dog was a menu fixture at local stands and highway Dairy Queens. A whole generation of Americans could eat chili dogs in their cars—carefully, and with lots of extra napkins.

* * *

In West Virginia, chili dogs come topped with chopped coleslaw. They come chargrilled at Ted’s in Buffalo. At The Varsity in Atlanta, employees serve them up after barking what’ll ya have what’ll ya have. Chili dogs can be had at Tigers, Reds, or Astros games. Artisan versions are on offer in Portland and San Francisco. In the span of about a hundred years, America turned a German sausage into a hot dog and turned Mexican chili con carne and Greek saltsa kima into chili, and then repackaged the whole thing as a cheap, distinctly American dish.

In some places, the chili dog transcended socioeconomic or ethnic divisions, attracting Americans from across the population. The authors Maria Godoy and Ari Shapiro claim that coneys were the lunch of choice for harried Detroit autoworkers in the 1920s and 1930s, who had only 20 minutes for lunch. The same has been said for the aerospace workers that lined up at Art’s in L.A. Men could wolf down a few cheap dogs and get back to the line. Crammed into coney islands, or queued up at a hot dog cart, patrons were united not in race, language, or homeland, but in their desire for quick food.

The chili dog became a food laced with regional pride. It is one way that Americans identify themselves, a way to claim local citizenship. It’s ironic that a food descending directly from homogenization—a food that had to change itself to fit in—is now the same food regional fanatics hold up as uniquely local.

The next time you come across a coney island or a hot dog stand, order the chili dog. Notice the snap of the natural casing of the hot dog. Try to identify that hint of spice in the sauce. Look around at the regulars. And finally, consider the long and convoluted journey the chili dog took to get to your plate. From the ethnic enclaves of Germans and Greeks and Eastern Europeans, from the butcher shops of New York and the meatpacking plants of the Midwest, through industrialization and xenophobia and ingenuity, emerged something wholly new, messy, and distinctly American.


This article appears courtesy of Object Lessons.

Kamenko Pajic / Reuters

Machine Learning Is Bringing the Cosmos Into Focus

The telescope offers one of the most seductive suggestions a technological object can carry: the idea that humans might pick up a thing, peer into it, and finally solve the riddle of the heavens as a result.

Unraveling that mystery requires its own kind of refraction in perspective, collapsing the distance between near and far as a way to understand our planet and its place in the universe.

This is why early astronomers didn’t just gaze up each night to produce detailed sketches of celestial bodies. They also tracked the movement of those bodies across the sky over time. They developed an understanding of Earth’s movement as a result. But to do that, they had to collect loads of data.

It makes sense, then, that computers would be such useful tools in modern astronomy. Computers help us program rocket launches and develop models for flight missions, but they also analyze deep wells of information from distant worlds. Ever larger telescopes have illuminated more about the depths of the universe than the earliest astronomers ever could have dreamed.

Spain’s Gran Telescopio Canarias, in the Canary Islands, is the largest telescope on Earth. Its primary mirror is 34 feet across. The Thirty Meter Telescope planned for Hawaii, if it is built, will be nearly three times larger. With telescopes, the bigger the mirror, the farther you can see. But soon, artificial intelligence may help bypass size constraints and tell us what we’re looking at in outer space—even when it looks, to a telescope, like an indeterminate blob.
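Why mirror size matters can be put in numbers: a telescope’s finest resolvable angle is diffraction-limited, roughly 1.22λ/D for wavelength λ and aperture diameter D. A back-of-envelope sketch (the Rayleigh criterion and the mirror diameters are standard figures, but the comparison itself is illustrative, not the article’s):

```python
import math

def resolution_arcsec(wavelength_m, diameter_m):
    """Diffraction-limited angular resolution, Rayleigh criterion."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600  # radians -> arcseconds

wavelength = 550e-9  # visible light, ~550 nm

gtc = resolution_arcsec(wavelength, 10.4)  # Gran Telescopio Canarias, ~10.4 m mirror
tmt = resolution_arcsec(wavelength, 30.0)  # planned Thirty Meter Telescope

print(f"GTC: {gtc:.4f} arcsec, TMT: {tmt:.4f} arcsec")
# Nearly tripling the aperture shrinks the resolvable angle by the same factor.
```

Features smaller than that angle smear together into exactly the kind of indeterminate blob a neural net might help interpret.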

The idea is to train a neural network so that it can look at a blurry image of space, then accurately reconstruct features of distant galaxies that a telescope could not resolve on its own.

In a paper published in January by the Monthly Notices of the Royal Astronomical Society, a team of researchers led by Kevin Schawinski, an astrophysicist at ETH Zurich, described their successful attempts to do just that. The researchers say they were able to train a neural network to identify galaxies, then, based on what it had learned, sharpen a blurry image of a galaxy into a focused view. They used a machine-learning technique known as a “generative adversarial network,” which involves having two neural nets compete against one another.

The frames above show an example of an original galaxy image (far left), the same image deliberately degraded (middle), and the image after recovery with the neural net (right).
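A hedged illustration of how such training pairs can be manufactured (a generic sketch of the degrade-then-recover idea, not the paper’s actual pipeline): take a sharp image, convolve it with a telescope-like point-spread function, and add noise. The (degraded, sharp) pair then serves as one training example for the adversarial network.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(image, blur_sigma=2.0, noise_level=0.05):
    """Convolve with a Gaussian point-spread function and add noise."""
    size = int(6 * blur_sigma) | 1  # odd kernel size
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * blur_sigma**2))
    psf /= psf.sum()
    # FFT-based convolution (wrap-around edges are acceptable for a sketch)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, image.shape)))
    return blurred + rng.normal(0, noise_level, image.shape)

# A toy "galaxy": a bright two-dimensional Gaussian blob.
ax = np.arange(64) - 32
xx, yy = np.meshgrid(ax, ax)
sharp = np.exp(-(xx**2 + yy**2) / (2 * 4.0**2))
degraded = degrade(sharp)

# The pair (degraded input, sharp target) is one training example.
print(sharp.max(), degraded.max())  # blurring spreads flux, lowering the peak
```

The network never sees the physics directly; it only learns the statistical mapping from degraded images back to sharp ones, which is precisely why the caveat below matters.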

As computer scientists and physicists experiment with these techniques, increasingly powerful telescopes will offer more opportunities for neural nets to offer clarifying views of the universe. One example is the James Webb Space Telescope, or JWST, which is set to launch next year. If all goes well, the telescope will provide views of some of the oldest galaxies in the universe—ones that formed just a few hundred million years after the Big Bang. “Even JWST will have trouble to resolve these baby Milky Ways,” Schawinski told me. “A neural net might help us make sense of these images.”

“There’s a catch, though,” Schawinski told me in an email. The neural net is trained to recognize galaxies based on what we know them to be like today. Meaning, to train a neural network how to reconstruct a baby Milky Way, scientists have to be able to tell the machine what that galaxy looked like in the first place. “Now, we know that galaxies in the early universe were very different than the ones in our old, evolved universe,” Schawinski said. “So we might be training the neural net with the wrong galaxies. That’s why we have to be extremely careful in interpreting what a neural net recovers.”

It’s a crucial caveat that will continue to come up as machine learning expands across disciplines and attempts more complex applications. Elsewhere, for instance, academics have proposed using machine learning to identify the subtle signatures of a phase of matter, then reverse-engineer what it learned to generate glimpses of new materials or phases of matter, as the quantum physicist Roger Melko wrote in a recent essay.

“If we are careful enough and do this right, [using] a neural network might not be too different from what we are currently doing with more classic statistical approaches,” said Ce Zhang, a computer scientist at ETH Zurich and a co-author of the recent RAS study.

And though these efforts seem likely to expand the way humans think about our place in space, humankind isn’t fundamentally changing the way it examines the universe. “None of the images from space objects, from galaxies to planets, are any more or less ‘real’ than what you might see with a human eye,” Schawinski said. “Our biological eyes can see neither X-rays nor infrared radiation, let alone focus on light from just a single transition in a particular ion.”

Imaging data is always processed in some way, he added. Human vision is its own kind of filter, and the whole history of astronomy is the story of its augmentation. “I see neural nets as the latest in sophisticated methods in telling us what’s actually there in the universe and what it means.”

This galaxy, known as NGC 1448, contains an example of a supermassive black hole hidden by gas and dust.
(Carnegie-Irvine Galaxy Survey / NASA / JPL-Caltech)

Why Do the Big Stories Keep Breaking at Night?

It’s usually around 8 p.m. when the push notifications start rolling in. On Wednesday night, The New York Times kicked things off at 7:56 p.m. with a major story about how Obama administration officials had scrambled to preserve intelligence on Russia in the days before President Donald Trump’s inauguration.

An hour later, the next big story dropped. This time, from The Washington Post, with a bombshell revelation that Jeff Sessions did not disclose at least two encounters he had last year with Russia’s ambassador to the United States, despite having told lawmakers that “I did not have communications with the Russians” in his recent confirmation hearings to become the United States attorney general.

Both stories were stunning, but not wholly unexpected. In the chaotic early weeks of the Trump presidency, a drumbeat of late-night breaking news has become routine. News junkies have come to anticipate big scoops before bedtime.

“In a world where we had control of such things,” Tom Jolly, the associate masthead editor at The New York Times, told me in an email, “we’d break the big stories early in the day, when more people are online.”

This dynamic is, in a strange way, a throwback. As Matt Pearce, a national correspondent for The Los Angeles Times, pointed out in a string of tweets Wednesday night, “it's like we’ve bizarrely returned to the era of the evening edition.”

The news alert that The New York Times distributed to readers’ cellphones Wednesday night.
About one hour after Times readers received a news alert Wednesday night, The Washington Post notified readers of its latest scoop.

In the late 19th and 20th centuries, the evening edition was the newspaper you grabbed for your commute home from work. Because it was published in the afternoon, it was the best way to get the most up-to-date news in print. After all, by the time the work day ended, that day’s morning paper covered events that had taken place at least a full day before.

That’s why, in their earliest incarnations, newsiness was emphasized: The New York World’s evening edition, branded The Evening World, was a “bright, sparkling paper,” “bubbling over with all the news from everywhere,” according to newspaper descriptions in 1887, when The Evening World first launched.

Radio, television news, cable news, and the internet all chipped away at the need for an ultra-newsy nighttime print product. Evening editions were already becoming scarce 30-plus years ago. In many cities, when one of two big metro dailies folded, the evening paper was the one to go.

Today, in an age of nearly real-time news, the evening edition as we once knew it has been made obsolete. (“We print news as it breaks, and it has been that way for years,” a spokeswoman for The Washington Post told me.) But if you’re someone who feels a romance for print, there’s something especially nostalgic about the evening paper. And that’s part of why there have been so many attempts to revive it.

When the iPad first launched, several news organizations wondered whether tablet technology might create an opportunity for a new kind of evening paper. The idea was to captivate readers at a reflective moment of the day, with a high-gloss news product that had a calmer feel than the dizzying (and often junky) social-media news streams.

But as it turns out, social platforms—and notably Twitter, where journalists and news junkies tend to gather—have become a new kind of evening edition, one that’s an amalgam of breaking news and people’s reaction to it, driven by good old-fashioned print newspaper deadlines.

“It is true that print deadlines create a publishing target because if a story isn’t done in time for print, it obviously doesn’t get into the paper,” Jolly, the Times editor, said. So, instead of individual news organizations putting out their own evening editions, print deadlines across the industry mean different newspapers all put out their big stories for the next day’s print paper around the same time the night before.

At the Times, there are three major targets between the national and city editions of the paper: 7:30 p.m., 9:30 p.m., and 11 p.m. “In pre-Internet days, those stories often broke when the newspaper came out in the morning, but now we’re breaking them when we can on our digital sites,” Jolly told me. “And, when one publication breaks a big story, others tweet it out and try to match it, which means news blows up in the moment in a much bigger way than it did in the days of print only.”

Of course, the nightly news drops of late aren’t purely a technological phenomenon. You can’t have big scoops without serious reporting. And the biggest scoops tend to reflect the volatility of this particular moment in time.  

“While Trump and his administration have put us in an unusually lively news cycle, the rhythm of news out of Washington has always leaned later in the day,” Jolly said. “It just hasn't always been as momentous.”

Library of Congress / A newsboy holds up a copy of the Anchorage Daily Times in the early 20th century.

The Invisible Fence That Keeps Drones Away From the President

A drone flying through the air in southwest Baltimore might, if it wanders too far in the wrong direction, stop suddenly in midair, as if running into an invisible force field. The obstacle isn’t physical—it’s been programmed into the software that helps the drone fly. A ring with a 30-nautical-mile radius centered on Ronald Reagan National Airport delineates the D.C. Special Flight Rules Area, where drones aren’t allowed—and so many consumer drones obediently stay away.

Technology that keeps drones from entering restricted airspace is called geofencing. It’s a straightforward system: Drones that support geofencing regularly download databases from their manufacturers that delineate active no-go zones. If a drone flies toward a restricted area, its built-in GPS will sense the boundary, and the drone will stop mid-flight; if an operator tries to take off inside a restricted area, the drone won’t start up at all.
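In outline, the onboard check is just a point-in-circle test against a downloaded zone list. Here is a minimal, hypothetical sketch of that logic—illustrative only, not any manufacturer’s actual firmware, and the zone coordinates are approximate:

```python
import math

def distance_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles, via the haversine formula."""
    r_nm = 3440.065  # Earth's mean radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

# Each no-go zone: (center_lat, center_lon, radius_nm).
# The one entry here approximates the D.C. Special Flight Rules Area.
ZONES = [
    (38.8521, -77.0377, 30.0),
]

def flight_allowed(lat, lon):
    """Return False if the GPS fix falls inside any restricted zone."""
    return all(distance_nm(lat, lon, zlat, zlon) > radius
               for zlat, zlon, radius in ZONES)
```

A drone’s flight controller would run a check like this on every GPS update—refusing to arm when `flight_allowed` is false at the launch point, and braking when a flight path approaches a zone boundary.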

Most restricted areas are permanent. An area five miles in radius around airports, for example, is always off-limits to drone enthusiasts, without prior approval from the airport’s control tower. But the Federal Aviation Administration also announces temporary flight restrictions, or TFRs, to protect big public events like the Super Bowl, guide pilots away from hazards like wildfires or pipeline explosions, or shield the president when he travels.

Information about temporary restrictions bounces along a winding path on its way to many of the hundreds of thousands of drones in the United States. Here, for example, is how information about President Trump’s upcoming visit to Mar-a-Lago, his club in Palm Beach, Florida, will end up being disseminated to consumer drones.

The process for establishing a presidential TFR begins when the Secret Service reaches out to the FAA and requests that a protective zone be set up wherever the president is headed. The FAA then publishes a Notice to Airmen, or NOTAM, with details about the restriction. Every time a pilot is getting ready to start their engines—whether they’re flying a business jet, a prop plane, or a three-pound drone they got for Christmas—they are supposed to check for new NOTAMs to make sure their flight path doesn’t cross restricted airspace.

The FAA has a clunky website where pilots can browse current and scheduled NOTAMs, but there are easier ways to access that data, too. The agency has a free, simple smartphone app called B4UFLY that uses the phone’s GPS to show nearby restrictions—and it shares that data with a few other companies as well.

One of them, AirMap, gets data about travel restrictions from the FAA every few minutes, a spokesperson for the company said. In addition to plotting them on an interactive map, the company makes the data available to drone companies that subscribe to its service—including DJI, the largest manufacturer of consumer drones.

DJI, in turn, classifies the data in three categories: warning zones, authorization zones, and restricted zones. Finally, those areas, with their classifications, are sent to every internet-connected DJI drone that supports geofencing.

Warning zones tell drone operators they’re flying over a special area, but don’t prevent flight there—a protected wildlife area might be categorized this way.

Most temporary flight restrictions are designated as authorization zones, which require drone operators to confirm their intention to fly in them. When a drone noses up against one of these, its operator will be asked to “self-authorize,” acknowledging the fact that there might be extra rules in the area the drone is entering. “By doing that, you say in the popup window that you have authorized business in here,” said Adam Lisberg, a spokesperson for DJI. (The 30-mile zone around D.C., for example, is an authorization zone.)

To self-authorize, operators need to have connected their drone to their identity by submitting a mobile phone number or a credit-card number. That way, if law enforcement has questions about a DJI drone flying where it shouldn’t be, the company can help police track down its operator. Lisberg says the company complies with all legitimate law enforcement requests, but wouldn’t share any details about how often such requests come in.

Restricted zones come with the most limitations. Those areas, which protect sensitive locations like airport runways or nuclear power plants, are inaccessible to DJI drones, and aren’t eligible to be unlocked by authorized users. Within the 30-mile authorization zone surrounding D.C., a smaller 15-mile area is classified as restricted. (Since drones are sometimes used to inspect sensitive infrastructure like airports and power plants, DJI makes case-by-case exceptions as needed.)
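The three-tier policy described above can be sketched as a small decision function. This is a hypothetical illustration of the gating logic—the names and structure are mine, not DJI’s:

```python
from enum import Enum

class ZoneType(Enum):
    WARNING = "warning"              # notify the operator, but allow flight
    AUTHORIZATION = "authorization"  # allow only after self-authorization
    RESTRICTED = "restricted"        # never allow

def may_take_off(zone, operator_verified, self_authorized):
    """Decide whether a flight proceeds under a three-tier geofencing policy."""
    if zone is ZoneType.WARNING:
        return True  # the operator just sees a notice
    if zone is ZoneType.AUTHORIZATION:
        # Requires a verified identity (a phone or credit-card number on file)
        # plus an explicit acknowledgment in the self-authorization popup.
        return operator_verified and self_authorized
    return False  # restricted zones are locked outright
```

The point of the middle tier is accountability rather than prevention: flight is possible, but only with an identity attached that law enforcement could later trace.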

This weekend, the FAA will activate a flying restriction that’s 60 nautical miles in diameter around Palm Beach, to coincide with Trump’s visit. The agency has a standing protocol for protecting Mar-a-Lago, since it’s a usual haunt for the president.

But although presidential TFRs are designed to protect the leader of the free world from aerial attacks, they aren’t categorized as a restricted zone in DJI’s system. Like any other TFR, such as one established over a wildfire, it’s designated as an authorization zone. That means a verified DJI drone operator who’s willing to take a risk could self-authorize and fly their drone near the president this weekend, if he or she wanted to.

A spokesperson for the FAA says that there shouldn’t be any difference between TFRs for drone operators. “If you’re not allowed to fly there, you’re not allowed to fly there,” the spokesperson said.

DJI’s system is solely educational, Lisberg said; it’s not designed to enforce air-traffic laws or punish bad actors. “That’s not our job any more than a car manufacturer is responsible for making sure people adhere to the speed limit.” It remains the responsibility of each drone operator to make sure they’re not flying someplace they shouldn’t.

Several other drone manufacturers use the FAA’s or AirMap’s data for flight awareness, too, including Intel, Aeryon Labs, 3D Robotics, Yuneec, and senseFly, the commercial arm of a popular French drone maker called Parrot. (A Parrot representative said the company’s consumer drones don’t have automatic geofencing technology, and that its commercial drones only use AirMap data for flight planning.)

Ultimately, no matter how detailed the data is that AirMap feeds to drone manufacturers, the final authority on temporary flight restrictions is the FAA. If a drone operator really wants to circumvent restrictions built into his or her drone, there’s little standing in the way of liftoff, except for the legal consequences the government might impose if the drone is discovered in the air: The FAA says individuals can be fined more than $1,400 for violating TFRs.

Alex Brandon / AP / Some drones include geofencing technology that prevents them from starting up in restricted areas.

This Speck of DNA Contains a Movie, a Computer Virus, and an Amazon Gift Card

In 1895, the Lumiere brothers—among the first filmmakers in history—released a movie called The Arrival of a Train at La Ciotat Station. Just 50 seconds long, it consists of a silent, unbroken, monochrome shot of a train pulling into a platform full of people. It was a vivid example of the power of “animated photographs,” as one viewer described them. Now, 122 years later, The Arrival of a Train is breaking new ground again. It has just become one of the first movies to be stored in DNA.

In the famous double-helices of life’s fundamental molecule, Yaniv Erlich and Dina Zielinski from the New York Genome Center and Columbia University encoded the movie, along with a computer operating system, a photo, a scientific paper, a computer virus, and an Amazon gift card.

They used a new strategy, based on the codes that allow movies to stream reliably across the Internet. In this way, they managed to pack the digital files into record-breakingly small amounts of DNA. A one-terabyte hard drive currently weighs around 150 grams. Using their methods, Erlich and Zielinski can fit 215,000 times as much data in a single gram of DNA. You could fit all the data in the world in the back of a car.

Storing information in DNA isn’t new: life has been doing it for as long as life has existed. The molecule looks like a twisting ladder, whose rungs are made from four building blocks, denoted by the letters A, C, G, and T. The sequence of these letters encodes the instructions for building every living thing. And if you can convert the ones and zeroes of digital data into those four letters, you can use DNA to encode pretty much anything.
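The simplest possible conversion maps every two bits to one base. The sketch below shows that naive scheme for intuition only—real coding schemes, including Erlich and Zielinski’s, avoid it because it produces error-prone runs of repeated letters:

```python
# Map every 2 bits to one DNA base, and back.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode_dna(data: bytes) -> str:
    """Turn bytes into a string of A/C/G/T, 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode_dna(strand: str) -> bytes:
    """Invert the mapping: read bases back into bits, then bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

For example, `encode_dna(b"\x00")` yields `"AAAA"`—exactly the kind of homopolymer run that sequencers misread, which is why practical ciphers are more elaborate.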

Why bother? Because DNA has advantages that other storage media do not. It takes up much less space. It is very durable, as long as it is kept cold, dry, and dark—DNA from mammoths that died thousands of years ago can still be extracted and sequenced. And perhaps most importantly, it has a 3.7-billion-year track record. Floppy disks, VHS, zip disks, laser disks, cassette tapes… every media format eventually becomes obsolete, and every new format forces people to buy new reading devices and update their archives. But DNA will never become obsolete. It has such central importance that biologists will always want to study it. Sequencers will continue to improve, but there will always be sequencers.

George Church from Harvard University made the first forays into DNA storage in 2011, encoding his newly published book, some images, and a JavaScript program. A year later, Nick Goldman and Ewan Birney from the European Bioinformatics Institute improved on his efforts, with a more complex cipher. They encoded all of Shakespeare’s sonnets, a clip of Martin Luther King’s “I have a dream” speech, a PDF of the paper from James Watson and Francis Crick that detailed the structure of DNA, and a photo of their institute, into a speck of DNA so small that when it arrived in their lab, Goldman didn’t see it. He thought he was staring at an empty tube.

The big catch with DNA is that we can only create and sequence it in small stretches, a few hundred letters long. So if you want to encode a large piece of data, you need to break it down, and synthesize it as a messy soup of DNA fragments. It’s hard to ensure that all of these are evenly represented, so there’s a risk of losing bits of data.
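Because the soup of fragments has no inherent order, each piece has to carry its own address. Here is a byte-level toy of that idea—real schemes encode the index in bases within each few-hundred-letter strand, but the principle is the same (function names and sizes here are illustrative):

```python
def fragment(data: bytes, payload_len: int = 25):
    """Split a file into short, individually indexed pieces for synthesis."""
    pieces = []
    for offset in range(0, len(data), payload_len):
        chunk = data[offset:offset + payload_len]
        # A 4-byte index prefix lets the unordered soup be put back in order.
        pieces.append(offset.to_bytes(4, "big") + chunk)
    return pieces

def reassemble(pieces):
    """Sort recovered pieces by their index prefix, then strip the prefix."""
    ordered = sorted(pieces, key=lambda p: int.from_bytes(p[:4], "big"))
    return b"".join(p[4:] for p in ordered)
```

The catch the article describes is exactly what this toy ignores: if even one indexed piece fails to show up in the sequencing run, that stretch of the file is simply gone.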

Goldman and Birney coped with this by creating an overlapping code, so that each bit of data was represented by at least four fragments of DNA. If they lost one, the same information would still exist in three other places. It was a good strategy but also a slightly inefficient one. And it wasn’t perfect: the team still encountered a few errors when they tried to recover their files. “I thought we could do something more efficient and robust,” says Erlich.

Coincidentally, online streaming services like Netflix and Spotify face a similar problem. They send information across choppy channels, and they also need to recover that data perfectly, regardless of missing fragments. They solve the problem using fountain codes—a style of coding that partitions data into small packets (or “droplets”) in such a way that you can recover the whole thing even if you only snag a random subset. As long as you can catch enough droplets, regardless of which ones you miss, you can reconstruct the entire stream. Erlich compares it to doing a giant Sudoku puzzle: If some of the squares are filled in, you can deduce what the others are.
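A toy fountain code makes the Sudoku analogy concrete: each droplet carries a random seed plus the XOR of whichever chunks that seed selects, and the decoder “peels” droplets apart, solving the ones with a single unknown first. This is a bare-bones LT-style sketch over small integers—DNA Fountain uses a carefully tuned degree distribution and screens out strands that are hard to synthesize, and none of these names are from the paper:

```python
import random

def make_droplet(chunks, seed):
    """XOR a random subset of the source chunks, chosen by a shared seed."""
    rng = random.Random(seed)
    degree = rng.randint(1, len(chunks))
    picks = rng.sample(range(len(chunks)), degree)
    payload = 0
    for i in picks:
        payload ^= chunks[i]
    return seed, payload  # only the seed and the XOR need to travel

def recover(droplets, n_chunks):
    """Rebuild the chunks by peeling: solve degree-1 droplets, substitute, repeat."""
    pending = []
    for seed, payload in droplets:
        rng = random.Random(seed)  # replay the sender's random choices
        degree = rng.randint(1, n_chunks)
        picks = set(rng.sample(range(n_chunks), degree))
        pending.append((picks, payload))
    known = {}
    progress = True
    while progress and len(known) < n_chunks:
        progress = False
        for picks, payload in pending:
            unknown = picks - known.keys()
            if len(unknown) == 1:  # exactly one unsolved chunk in this droplet
                i = unknown.pop()
                for j in picks - {i}:
                    payload ^= known[j]
                known[i] = payload
                progress = True
    return [known[i] for i in range(n_chunks)] if len(known) == n_chunks else None
```

Because any sufficiently large random subset of droplets suffices, losing particular droplets—or particular DNA fragments—doesn’t matter, which is exactly the property Erlich and Zielinski needed.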

Using fountain codes, the duo developed a cipher that’s 60 percent more efficient than previous ones, and comes close to the limit of how densely information can be packed into DNA. “We get very close to an optimal configuration,” Erlich says.

They used this system, which they call DNA Fountain, to encode: the train movie; KolibriOS, the smallest computer operating system around; the image that was sent on the Pioneer 10 and 11 probes; a scientific paper that describes how much information can fit into a given medium; a virus called Zipbomb that fills your hard drive with junk (“We thought it would be fun,” says Erlich); and a $50 Amazon gift card. (The latter has already been deciphered and spent, by one of Erlich’s Twitter followers.)

They ended up with a library of 72,000 DNA fragments, which they then sequenced, decoded, and reassembled. In the process, they lost more than 2,000 of the fragments, but they still managed to recreate the files perfectly.

DNA storage has another weakness. The act of sequencing the strands also destroys them, so this is a storage medium that gradually disappears the more it is read. “My daughter loves Frozen,” says Erlich. “If we had encoded that damn Let It Go song, we would run out of DNA within a week.” Fortunately, DNA, by its nature, is also very easy to copy, so it is trivial to double up a cache of DNA-encoded data. Every time you do that, you risk introducing errors: copies of copies are rarely identical to the originals. But DNA Fountain is so resistant to errors that even when Zielinski copied the data cache ten times over, she could still recover the files perfectly.

“This work is great,” says Birney, and proves that DNA storage “is a really robust idea.” That being said, he and Goldman are working on their own updated coding scheme, which they hope to test and release in the near future. Microsoft is also getting in on the action. Last July, Microsoft researcher Karin Strauss and computer scientist Luis Henrique Ceze from the University of Washington stored a record 200 megabytes of data in DNA. “We are convinced of the density benefits of DNA as a storage medium and are working on improving the capacity and system design to make it practical for storage,” they say.

For DNA storage to become mainstream, it will have to be much cheaper. It is still expensive to sequence DNA, and really expensive to actually synthesize it. However, both costs are falling. When Birney and Goldman published their study in 2012, it cost $12,400 to encode a megabyte of data. Now, it costs just $3,500. But even if those costs fall further, synthesizing DNA is still a niche activity, done by a small number of facilities that support research labs. There’s currently not enough capacity around the world to encode a petabyte of data.

But Erlich predicts this will change as he and others prove that DNA is the format of the future. “The first hard drive needed four people to carry it,” he says. “After decades of extensive research and development, we have thumb drives. That’s a small fraction of the money that’s gone into DNA synthesis so far. My hope is that by focusing on better approaches, we can realize the potential of DNA storage.”

New York Genome Center / Yaniv Erlich and Dina Zielinski at work.

The Bounty Hunters Protecting Your Slack Account

One of the best ways to ward off hackers is to ask for their help. That, and promise to pay them for it.

That’s the thinking behind the bug bounty program at Slack, the popular group-chat platform, which offers a pay-out to people who find and report legitimate security flaws that could be exploited by hackers.

Frans Rosén, a researcher at the web security firm Detectify, described in a recent blog post how he identified a flaw that would have allowed him to steal an individual Slack user’s private token—thus enabling him to log in as that person.

Rosén submitted a report to Slack, detailing what he’d found, on a Friday evening. He heard back in 33 minutes. In that time, Slack had started the work of determining whether the bug was real (it was) so engineers could begin coordinating a patch. While one group worked on fixing the bug, another group of Slack engineers began investigating whether anyone had already exploited the security flaw (they found no evidence of this).

To recap:

Slack fixed the bug. (“The solution Slack made was a great one,” Rosén said.)

Users’ accounts remained secure.

And Rosén got $3,000 for his efforts.

This isn’t unusual. Of the thousands of tips Slack has received, more than 500 have been valid bugs. The company has paid more than $200,000 in bug bounties. “This bug is exactly why we invest in our public bug bounty program,” a spokesperson for Slack told me. “Once it was identified by the security researcher, we were able to fix it within five hours and confirm shortly after that it was not exploited in the wild.”

An earlier Slack vulnerability discovered by researchers at Detectify last June had involved the code Slack used for custom bots, which contained tokens—or private credentials tied to individual accounts—and which developers were then copying to GitHub, the collaborative programming site. “In the worst case scenario, these tokens can leak production database credentials, source code, files with passwords, and highly sensitive information,” Detectify wrote at the time. Slack closed that security gap, too.

Bug bounty programs have been around since the early days of the web, but they’ve become more popular in recent years as a way to keep web users safe “from criminals and jerks,” as Tumblr puts it in a description of its program.

In some cases these programs have resulted in massive payouts. Facebook has paid more than $5 million to some 900 researchers in the past five years. Twitter has paid more than $600,000, according to its page on Hackerone, a site where companies share information about their bug bounties. Google offers rewards of tens of thousands of dollars to hackers who identify vulnerabilities that could result in someone taking over a Google account. United Airlines pays hackers in miles instead of cash. (That program launched after a hacker claimed he’d assumed control of a United flight.)

It was Apple’s lack of a bug bounty program that may have prompted hackers to help the FBI unlock the iPhone that belonged to one of the attackers in the San Bernardino mass shooting in 2015. (Apple announced in August it would finally begin to offer cash bounties for valid bug reports.)

Back at Slack, there’s a sense of urgency about any report of vulnerabilities—whether from within the organization, or from outside researchers, or hobbyists. “Slack works very hard to ensure we don't ship known security flaws,” the Slack spokesperson said, “and the added brainpower of the developer and security communities is invaluable in keeping the service safe for everyone.”

In the meantime, if you’re a Slack user feeling mildly queasy about the thought of your messages being made public, here’s where you can change message-retention settings.
