
Welcome to the SXSW of Concrete


It’s a cold January afternoon outside the Las Vegas Convention Center. I’m leaning over the barriers of what appears to be a glossy black ice rink. Gliding across its shiny surface are the offspring of a La-Z-Boy recliner and a hovercraft. In the driving seats, men clad in heavy jeans and khaki sweaters effortlessly steer the humming machines around in smooth, swinging circles. Judging by the couple next to me who have been staring quietly at the display for several minutes, I’m not the only person who finds it hypnotic, even soothing.

I’m at World of Concrete, the concrete and masonry industry’s South by Southwest—a five-day show that has summoned more than 60,000 attendees. Concrete takes many forms here—thick liquid, solid blocks, even slender decorative ribbons. The rink in front of me is poured concrete, and the machines are riding trowels whose whirring blades smooth down concrete floors into a mirrored sheen. They look like a lot of fun to pilot, and I wonder how far I could get if I commandeered one out of the lot and onto Paradise Road, heroically riding it into the Mojave Desert.

This is also the week of the presidential inauguration. World of Concrete begins on Monday; on Friday, on the other side of the country, Donald John Trump will be sworn into office. In the distance beyond the lot, Trump’s concrete literally looms over the show, bound up in the golden tower of the Trump International Hotel Las Vegas, a slim brick shimmering in the winter sun.

Trump paved his pathway to the White House with pledges to build roads, hospitals, and, of course, a “great, great wall.” So now I’m staring at riding trowels in an effort to answer what I soon realize is not an easy question: How do Trump’s high-octane and often contentious campaign promises sit with the people who will actually be doing the building?

* * *

I arrive in the city over the weekend, touching down late at night. The Las Vegas Strip is visible from the air as glittering neon cubes and boxes on the glowing circuit-board of the city grid. In my ride from the airport, the boxes transform into enormous replicas of the New York skyline, Egyptian pyramids (complete with a full-schnoz Sphinx), and parts of the Roman Pantheon, variously flanked by hyperactive fountain displays.

Concrete isn’t the only show in town this week. The Shooting, Hunting, and Outdoor Trade Show, a.k.a. SHOT, is at the Sands Expo Center. Members of the pornography and sex-toy industries gather for the Adult Entertainment Expo over in the Hard Rock Hotel. Last week, all eyes in Vegas were on the Consumer Electronics Show, or CES, America’s largest tech event, filled with smart cars, fingerprint-enabled padlocks, and consumer drones. Even smart cars need roads to drive on, so I’d initially planned my visit to explore how the technologies and philosophies of concrete differ from CES’s disruptive widget-scape. Then Trump got elected.

Las Vegas feels like a natural home for World of Concrete. The casinos huddled together on the Strip’s curving four miles may be a grab-bag of international translation and appropriation, but underneath each façade lies structurally reinforced concrete. At the back of the Circus Circus casino, prefabricated concrete houses power facilities in delicate white curlicues.

Concrete is old, dating back to the actual Pantheon and Colosseum of the Roman Empire. Concrete at the industrial scale is newer. Reinforced concrete, strung through with steel and iron to make it stronger, emerged in Europe in the middle of the 19th century. Concrete pumps, machines that push liquid concrete through pipes and hoses, were patented in 1932. The pumps allowed builders to rapidly carry and lift large volumes of concrete from point to point, and concrete structures saturated the American landscape shortly afterward.

On this timeline, World of Concrete is a scrappy newcomer. It was founded in Houston, Texas, in 1975, and bounced around the country before settling down in Las Vegas. The Las Vegas Convention Center is a 10-minute walk from the main Strip, and with two million square feet of exhibition space, it’s one of the few places large enough to contain the show. A week ago, these beige hallways were CES’s “Tech East” section. Now the sightlines on the show floor are blocked by dinosaur-sized yellow and orange vehicles that could crush any leftover consumer gadget without leaving a smudge. Machines here actually do move fast and break things. The air smells of dust and plastics.

On Tuesday, I head to the convention center’s courtyards, where forklifts carry weights around their necks on yellow straps to show off how much each can carry—40,000 pounds for the strongest. Inside the show, men in polo shirts spritz and buff freestanding tires with treads that are deep enough to stick an arm into. The vehicles on display are glossy, candy-colored, and catwalk-pretty; one concrete mixer is decorated with massive gold and blue glittery decals that twinkle as its drum rotates.

Outside the convention center (Georgina Voss)

At the merchandise area, I rifle through instructional books on sustainable bridge structures and pavement performance. Racks of XXL t-shirts are on display, covered with phrases like “Concrete Is My Addiction.” Attendees can take in seminars (“When Bad Things Happen To Good Concrete”) or drive a Western Star severe duty truck around an obstacle course, backing it up over a ramp made of crushed gravel. You can talk pipes or paving at the thousands of vendors’ stalls, or have your photo taken alongside their machines.

In the convention center’s Central Hall, the 200-foot robotic arms of a gang of sugar-pink and lime-green concrete pumps are entwined in the rafters, like diplodocuses snuggling together. Squeezing them all into a photograph proves impossible, so I head to the booth for EarthCam, a company that specializes in image capture at a construction scale. EarthCam shoots time-lapse footage of building sites, filleting years of slow work into short balletic films where cranes and scaffolding delicately swoop around each other. One of its time-lapses, of a clinic in Abu Dhabi, ran so long that the cameras inadvertently captured the construction of the rest of the city behind it.

EarthCam’s staff are apologetic: Senior managers aren’t around to chat because they’re over in Washington, D.C., to set up for the inauguration. We don’t know it yet, but on Saturday, EarthCam’s cameras will capture the aerial footage that contradicts Trump’s claims about the inauguration’s robust attendance.

On the show floor (Georgina Voss)

As I speak to more exhibitors and attendees, it becomes clear that the scale and variety concrete must achieve in modern construction run face-first into the material’s complexity. “Concrete is often regarded as a dumb or stupid material,” writes Adrian Forty, a professor of architecture at University College London, in his book Concrete and Culture. Indeed, concrete seems simple enough: Mix grit, cement, and water, then pour it where you want something solid. But concrete is unwieldy, viscous, heavy, dangerous, “a real pain in the ass” (several attendees).

The architect Bryan Boyer describes these challenges as a “matter battle”—a conflict between human intention and the inescapable laws of the physical universe, where three-dimensional things don’t easily overlap. “The guys working with concrete are sculptors,” says Chris Jilka, of the navigation-systems company Topcon. “Moving dirt, moving stone, even asphalt can be fine. But for concrete you have to have a high level of understanding of how it’s being placed, how everything lines up. It’s like a ballet.”

* * *

These are interesting times for the concrete industry. After the misery of the 2008 financial crisis, construction in America is back in rude health, albeit patchily. Texas, California, and Colorado are all “very hot,” attendees say, as places where new hotels and homes and offices are being built. Demand is so high in these states that concrete-pump manufacturers are apparently having trouble filling orders. Employees worry that with baby boomers retiring, there isn’t the skilled labor force in place to do the work.

But America’s public infrastructure is still a mess—rusting rebars and cracked freeways stand as miserable testaments to a lack of net investment. It’s a complex and cross-party problem, as James Surowiecki has described in The New Yorker. Republicans have shied away from big-government investment, and the increasing need to get the nod from different government bodies makes it hard to pass policy. For politicians keen on publicity, grand plans for big new things are exciting. But the subsequent decades of maintenance are thankless and dull.

Concrete and construction formed a core part of the 2016 election manifestos, including a promise by Trump of a $1 trillion investment in infrastructure from private investors. But Trump’s loose wording on the campaign trail made it hard to define exactly what he was envisioning. His use of “infrastructure” covers the classic civic infrastructure of highways and bridges, but also the real estate of schools and hospitals. (Senate Democrats have since introduced their own $1 trillion infrastructure plan for repair and maintenance, but through federal spending.)

(Georgina Voss)

Then there’s the wall. The megastructure promised by Trump to run the length of the U.S.-Mexico border was given its own #FuckingWall hashtag on Twitter in January by the former Mexican president Vicente Fox Quesada. “Nobody builds better walls than me, believe me,” Trump declared when he launched his presidential campaign in June 2015. And shares of concrete suppliers leapt when he was elected, following expectations that the new administration would funnel resources toward the industry. But enthusiasm has since become muted. With few details on the materials needed or other components of the project, the industry has been kept in the dark about what its involvement might be.

This political focus on infrastructure feels like it’s been a long time coming, but slowness is built into the concrete industry. The big machines on the showroom floor are long-term investments; I get the impression that, like a Patek Philippe watch, one never truly owns a concrete pump, but merely looks after it for the next generation. And the industry takes its time in adopting new technologies. “It’s an uphill struggle,” says Kristy Wolfe, a professor in the Civil Engineering and Construction School at Bradley University. “No one wants to take the risk. They know what they’re getting if it’s the way they’ve always done it.”

There are at least some digital tchotchkes at the show—foremen’s smartphones and iPads, the proprietary software threaded into the vehicle controls. At the launch of its new delivery-tracking app WheresMyConcrete, I feel that Mack Trucks has missed a marketing opportunity by not adding “Dude.” Indeed, like the hunger for new technologies, lightheartedness is in short supply. I take a break to visit the Adult Entertainment Expo, which delights in double meaning and metaphors in a way that concrete does not. My friends create the game “Porn or Concrete?” from the photos I send from both events: billboards promising “Vibratory Screeds,” “Schwing Parts,” “Whatever It Takes”; tools, lubricants, harnesses, and “general purpose hard material.”

One of the show’s many exhibitions (Georgina Voss)

When I return to the convention center’s floor, I poll attendees on their feelings about Trump’s grand claims of infrastructure spending. “Dealers are saying they’re excited,” explain staff from Chicago Pneumatic. “They’re buying equipment in anticipation for what’s to come.” Meanwhile, some industry members respond with considerable side-eye. “I think the trillion dollars is ridiculous and it won’t happen,” says Bill Palmer, editor of the Concrete Construction industry website. Others look to legislative bodies to keep checks and balances on big claims about highways and infrastructure. “I’m not holding it all onto Mr. Trump as the guy to do that,” says Jilka, of Topcon.

Hiring shortages are an example of the complexities around infrastructure. This industry tends to the multigenerational, but that’s changing, as Wolfe, the Bradley University professor, explains. “In the past, we saw a lot of ‘If my dad was an electrician and my grandad was an electrician then I’ll be an electrician.’ We don’t see so much of that any more.” She hopes that a renewed focus by the government on vocational work will draw students to careers in construction.

Yet hiring is contentious. Five days after the election, Concrete Construction published a gentle column in which its editor acknowledged that, although he was not a fan of Trump, the new administration might support construction by easing regulations and adopting skilled-labor programs. But if the industry really wanted to have the workers to build all of the new civic infrastructure that had been promised? “Better build a big door in that wall.”

Palmer received several angry emails from readers cancelling their subscriptions. In December, the magazine published a counter-response from the president of a California concrete-cutting company, who was furious at the implication that the infrastructure plan would come at the expense of American workers. “I do understand this trade-off,” Palmer says. “They want foreign labor but they don’t. They don’t like illegal immigration, but on the other hand it keeps labor prices down. ‘If this guy uses illegal immigrants so he can do the job cheaper than I can, well, that’s just not fair.’”

Down in the merch stand are t-shirts that say Skilled Labor Isn’t Cheap; Cheap Labor Isn’t Skilled. It’s the only political slogan I see at the show.

* * *

On Wednesday, the SPEC MIX Bricklayer 500 World Championships are held in the convention center’s back lot. “WHO IS THE WORLD’S BEST BRICKLAYER?” demands a banner above a red Ford F-250 truck, which the world’s best bricklayer will take home. Lines of bricks and mortar are laid out for 25 competitors, and for an hour, each builds a wall as families look on from the stands, decked out in matching team t-shirts. Each wall must be 26 feet, 8 inches long and 8 inches wide, and as tall as possible. At the end, most only come up to the midriffs of the red-jacketed judges who evaluate each creation for sleek and structural perfection. Marks are docked for chipped or misaligned bricks.

The SPEC MIX Bricklayer 500 World Championships (Georgina Voss)

These considerations are not simple aesthetics. At grander scales, small mistakes with concrete can have enormous and terrible effects. Back in 1993, when Steve Bannon, Trump’s chief strategist, was still an investment banker, he was hired to jump-start the failing Biosphere 2 eco-project, a sealed manned greenhouse designed to play-test future space colonization. Oxygen levels had plummeted unexpectedly in the facility, threatening the health of the crew. It transpired that one of the culprits was the dome’s exposed concrete struts, which sucked up carbon dioxide before the plants had a chance to grab it for photosynthesis. Bannon was brought in to stop the project from hemorrhaging further money, though his appointment was highly controversial.

Trump is not at the bricklaying competition to defend his wall-building prowess. In fact, talk about the border wall—which, should it happen, is supposed to be 1,000 miles long and up to 55 feet high—is conspicuously absent at World of Concrete. Scattered around the show, copies of Masonry magazine carry an open letter to Trump from the Mason Contractors Association of America, whose stand sits in front of the wall-building challenge. The letter offers support for rebuilding existing infrastructure, but doesn’t say anything about the wall.

On Friday, the morning of the inauguration, heavy clouds roll in over the mountains and a gray drizzle falls on the streets. I arrive at the Trump Hotel 30 minutes after the new president has been sworn in, curious to see what the mood is like. In the gift shop, customers are picking through Ivanka leather jackets and wine (Sauvignon Blanc is the only choice). News comes in that the Make America Great Again caps are discontinued and marked down; one couple quickly starts stacking hats into a pile, 10 high.

A young man by the tills effusively thanks everyone around him. “My voice is hoarse from cheering! It was such an honor celebrating with you all,” he tells the people in line, who nod politely. “This was such a great event. You guys make America great again!” In the twinkling lobby, families pose for photos in front of the floor-to-ceiling chandeliers.

The Trump hotel is a loner on the Strip. It’s one of only two buildings on its block, awkwardly nudged against a much shorter corporate office and standing watch over an empty construction site. Skeleton structures in the netherworld of either being built up or torn down surround it—a distinctive view that has been noted by hotel guests on TripAdvisor. As massive golden buildings go, it’s pretty eye-catching; the tint of its exterior gives it an edge over the slightly duller bronze Wynn and Encore Casino the next block over. But in a family of idiosyncratic, even downright weird Las Vegas casinos, I find it unimaginative and unambitious. Down the road, the cobalt MGM Grand needed 60 different concrete mixes when it was built to support all its various weights and structures. At the time, it was the largest hotel complex in the world.

I recall a demonstration I’d seen earlier in the week in the outside arena, across from the riding trowels on their concrete ice rink. An exhibitor in hard hat and heavy gloves had carefully and gracefully transformed several inches of bulky wet gray mass into a smooth surface. There’s a lesson here, I think. It’s easy to promise magnificent, gleaming, trillion-dollar things made of concrete if you believe the substance is malleable to your will. But a career dedicated to taming concrete—trying to tame it—forces you to be patient. It demands locking in for long-term consequences, and understanding that materials matter more than metaphor.

“Anyone can mix up concrete,” says Palmer, of Concrete Construction. “But that doesn’t mean it’s going to last. It’s enormously complex and very easy to do wrong. You can put something up now and think it’s okay, but the problems won’t show up for several years. I can do a floor and think it’s great and it might be perfect the day I leave, but what’s it going to look like two years from now?”

Georgina Voss

The Government’s Secret Wiki for Intelligence


During the final weeks of the Obama administration, officials began to worry that the results of ongoing investigations into Russia’s election-related hacking might get swept under the rug once President Trump took office. They decided to leave a trail of breadcrumbs for congressional investigators to find later, according to a report from The New York Times.

In another age, the paper trail may have taken the form of notes stuffed into a box in a forgotten archive. But this being the 21st century, some of the breadcrumbs were submitted to an online wiki. According to the Times, intelligence officers in various agencies rushed to complete analyses of intelligence about Russian hacking and file the results, at low classification levels, in a secret Wikipedia-like site for intelligence analysts. There, the information would be widely accessible among the intelligence community.

That site, called Intellipedia, has been around for more than a decade. It’s made up of three different wikis, at different classification levels: one wiki for sensitive but unclassified information, another for secret information, and a third for top secret information. Each wiki can only be accessed by employees in the U.S. intelligence community’s 17 agencies who have the appropriate clearance level.
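The clearance gate described above amounts to a comparison of ranks. As a rough illustration only—Intellipedia’s real access controls are classified and surely more elaborate—the logic might look like this hypothetical Python sketch:

```python
# A toy sketch (not Intellipedia's actual system) of clearance-gated access:
# three wikis at rising classification levels, each readable only by users
# whose clearance meets or exceeds that level.
CLEARANCE_RANK = {"sensitive": 1, "secret": 2, "top_secret": 3}

def can_access(user_clearance: str, wiki_level: str) -> bool:
    """True if the user's clearance covers the wiki's classification."""
    return CLEARANCE_RANK[user_clearance] >= CLEARANCE_RANK[wiki_level]

print(can_access("secret", "sensitive"))   # True
print(can_access("secret", "top_secret"))  # False
```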

Intellipedia was formally launched in 2006, but grew slowly at first. “It was received skeptically by most,” said Carmen Medina, the former director of the CIA’s Center for the Study of Intelligence and one of the first officials to green-light the project. “Analysts were not really rewarded for contributing to Intellipedia.”

Since then, the wikis have grown steadily. According to a release celebrating the site’s second anniversary, the system housed nearly 50,000 articles by March 2008. In January 2014, the National Security Agency responded to a Freedom of Information Act request with the latest statistics: The three domains had just over 269,000 articles, more than 40 percent of which were found on the top secret wiki. (It’s not clear whether articles are duplicated across the wikis.)

Built on the same software platform as Wikipedia, Intellipedia's articles are often cribbed directly from the free encyclopedia, but with sensitive classified information added by analysts. “About everything that happens of significance, there’s an Intellipedia page on,” Sean Dennehy, one of the site’s founders, told The Washington Post in 2009. An article about the terrorist attack in Mumbai was filled with sensitive information before it was reported in the popular press, Dennehy said.

In 2009, Dennehy and Don Burke, both CIA analysts, won a Service to America Medal for their work on Intellipedia.

Their site is meant to help analysts collaborate across agencies: Improving cross-agency communication was one of the main recommendations set forth by the 9/11 Commission, which led to the establishment of the Office of the Director of National Intelligence in 2005. That’s the agency that now has custody of Intellipedia.

Unlike on Wikipedia, Intellipedia edits are tied to analysts’ identities. “We want people to establish a reputation,” said Thomas Fingar, the former Deputy Director of National Intelligence for Analysis at ODNI, at an event at the Council on Foreign Relations in 2008. “If you’re really good, we want people to know you’re good. If you’re making contributions, we want that known. If you’re an idiot, we want that known too.”

A number of Intellipedia pages have been declassified or made available to the public through FOIA requests. Some are silly, like the entry for Area 51, which, for some reason, briefly describes a cafeteria in Fort Bliss, Texas. Others, like the entry for the Nevada nuclear test site or for the Bay of Pigs invasion, are mostly filled with information from Wikipedia, but have short redacted passages that may contain classified details. A request for the page on Edward Snowden only turned up an empty placeholder. (A website called The Black Vault has compiled dozens of these FOIA requests and responses.)

Perhaps the best real-world example of how analysts use Intellipedia came in a recent story about Palantir, the secretive tech contractor co-founded by Peter Thiel, published last month in The Intercept. Documents leaked by Snowden included pages from an internal wiki maintained by GCHQ, the NSA’s British counterpart, which included a link to a page about Palantir on Intellipedia. Other Intellipedia pages in Snowden’s leaks included links to other Palantir programs that the NSA uses, suggesting that the wiki is sometimes used for sharing technical information about intelligence software.

Intellipedia may have seen its traffic spike this week: If nothing else, the shoutout to the wiki in the Times might have sent curious analysts combing through the pages to see what intelligence was scattered there for safekeeping.

Intellipedia allows analysts from 17 different intelligence agencies to share classified information. (AP)

Now You Can Use NASA's Volcano-Tracking Technology to Monitor Your Baby


The best way to monitor a baby, most parents will tell you, is with your eyes and ears.

Does the baby seem to need something? Okay. Go get that thing for the baby. Is the baby safe and calm? Excellent. Good monitoring, everyone.

The fact that babies have an extraordinarily effective built-in parental notification system (wailing loudly) has not quelled the market for high-tech baby monitors. If anything, the availability of cheap sensors—plus the appetite for anything Wi-Fi-connected and smartphone-controlled—has made this segment of the Baby Industrial Complex take off in recent years.

But who needs the latest ready-to-buy device when you can build your own sensor network using NASA’s volcano-tracking technology?

NASA is making public the code that powers its monitoring system atop Mount St. Helens as part of its latest Software Catalog, a deep portfolio of software with all kinds of technical applications—“all free of charge to the public, without any royalty or copyright fees,” according to NASA’s website. NASA has been sharing select pieces of its software this way since 2014.

“We’re pleased to transfer these tools to other sectors and excited at the prospect of seeing them implemented in new and creative ways,” Dan Lockney, the head of NASA’s Technology Transfer program, said in a statement.

The Mount St. Helens software is “generic enough to be deployed on different types of sensor networks—not just volcano monitoring—with minimal changes and time investments,” NASA says. In other words, it’s a program that could power a network of sensors in any sort of field environment. Most modern baby monitors are just that: a sensor or group of sensors that delivers data via Bluetooth back to a smartphone or other device. NASA’s volcano-monitoring software also lets you control and visualize the network of sensors through a web browser interface.

This includes, as NASA puts it, “monitoring for real-world events, and reacting to those events,” which, as any parent can verify, is as essential in a nursery as it is in a natural disaster.
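For the curious, the core pattern NASA is describing—poll a set of field sensors, react when a reading crosses a threshold—can be sketched in a few lines of Python. This is a hypothetical illustration, not NASA’s actual code; the sensor read and the notification are stand-ins for real hardware and a real alert channel:

```python
# A toy sketch of threshold-based sensor monitoring. All names and numbers
# here are invented; NASA's real software lives in its Software Catalog.
import random
import time

ALERT_THRESHOLD = 60.0  # e.g., decibels in a nursery, or tremor amplitude

def read_sensor(sensor_id: int) -> float:
    """Stand-in for a real sensor read (Bluetooth, radio, etc.)."""
    return random.uniform(0.0, 100.0)

def notify(sensor_id: int, value: float) -> None:
    """Stand-in for pushing an alert to a phone or browser dashboard."""
    print(f"sensor {sensor_id}: reading {value:.1f} exceeds threshold")

def monitor(sensor_ids, interval_seconds=5.0):
    """Poll every sensor forever, reacting when a reading crosses the threshold."""
    while True:
        for sid in sensor_ids:
            value = read_sensor(sid)
            if value > ALERT_THRESHOLD:
                notify(sid, value)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    monitor(sensor_ids=[1, 2, 3])
```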

Athit Perawongmetha / Reuters

The Internet’s Impact on Creativity: Your Thoughts


Is the internet helpful or hurtful to human creativity? I posed that question to the reader discussion group known as TAD, and the consensus seems to be: It’s both. It’s complicated. And naturally, it depends a lot on what form of creativity you’re talking about. Here’s how one reader sums it up:

Because of the Internet I write more and receive feedback from people I know (on Facebook) and online strangers (on TAD and other platforms that use Disqus). I use it as a jumping-off place and resource for planning lessons for my high-school students in science.

However, I don’t practice music as often as I used to.

On a similar note, another reader confesses, “I draw less because I’m always on TAD”:

As a sketch artist, I appreciate my ability to Google things I want to draw for a reference point, but that doesn’t make me more creative. I already had the image in my head and the ability to draw. I honed my skills drawing people the old fashioned way, looking at pictures in books or live subjects and practicing till my fingers were going to fall off.

In my opinion, the internet also encourages people to copy the work of others that goes “viral” rather than creating something truly original. The fact that you can monetize that viral quality also makes it more likely that people will try to copy rather than create.

That’s the same reason a third reader worries that “the internet has become stifling for creativity”:

Maybe I am not looking in the right place, but most platforms seem to be more about reblogging/retweeting/reposting other people’s creations. Then there is the issue of having work stolen and credits removed.

As another reader notes, “This is the central conflict of fan fiction”:

It’s obviously creative. On the other hand, it is all based on blatant copying of another writer’s work. How much is this a huge expansion of a creative outlet, and how much is this actually people choosing to limit their own creativity by colonizing somebody else’s world rather than creating a new one?

The fanfic debate is fascinating, and more readers expand on it here.

For my part, I tend to think the internet has encouraged and elevated some amazing new forms of creativity based on reaction and re-creation, collaboration and synthesis. Take this delightful example:

Those creative forms are a big part of my job too: When I go to work, I’m either distilling my colleagues’ articles for our Daily newsletter or piecing together reader emails for Notes, and those curatorial tasks have been exciting and challenging in ways that I never expected. But I’ve also missed writing fiction and poetry and literary criticism, and I worry sometimes that I’m letting those creative muscles atrophy. If you’re a fanfic reader or writer (or videographer, or meme-creator, or content-aggregator) and would like to share your experience, please let us know: hello@theatlantic.com.

This next reader speaks up for creativity as “the product of synthesis”:

It’s not so much a quest for pure “originality,” as it is a quest for original perspectives or original articulations. I’d say that my creativity has been fueled by letting myself fall into occasional rabbit holes. Whether that’s plodding through artists I don’t know well on Spotify or following hyperlinks in a Wiki piece until I have forgotten about what it was that I initially wondered, that access to knowledge in a semi-random form triggers the old noggin like little else.

On the other hand: So much knowledge! So many rabbit holes! Jim is paralyzed:


Women work together at an internet cafe in Kabul, Afghanistan, on March 8, 2012. (Mohammad Ismail / Reuters)

Scandal-Plagued Uber Is Still Dominating App Stores


It’s only March, but already 2017 seems like a year that will involve seismic shifts for the ride-sharing industry. A continuous stream of controversy for Uber, the leading ride-sharing company in the U.S., has caused negative press and public outcry, along with questions about the future of the industry and its companies.

First, there was the #DeleteUber campaign, spurred by customers angry at what they felt was the company’s role in breaking a strike by the New York Taxi Workers Alliance in protest against Trump’s immigration ban. Then came the backlash to Susan Fowler’s blog post, where the former Uber engineer detailed the systematic sexism she faced during her time at the company. A week later, Recode broke the story that an Uber executive left the company after not disclosing a sexual-harassment allegation at his previous job. And earlier this week, an unbecoming video leaked of Uber’s CEO arguing with a driver who criticized changes to the ways the company pays its drivers. On Friday, The New York Times reported allegations that the company had been using a secret tool to deceive enforcement authorities in cities where it was not yet legally able to operate.

The fallout from Uber’s PR crisis has people both inside and outside of the industry speculating about whether this means that Lyft, Uber’s main rival in the U.S., will finally get its shot at becoming the top ride-sharing app. According to The New York Times, 200,000 users really did delete Uber in early February during the #DeleteUber campaign.

But despite the barrage of negative news, Uber has largely maintained its lead as the most popular ride-sharing app. Data from App Annie, an analytics firm that tracks mobile apps, shows that, although Lyft briefly surpassed Uber in iPhone app downloads for the first time ever during a #DeleteUber surge in late January, Uber regained its top position after just two days and had more downloads than Lyft in the Apple App Store for all of February. Lyft has never managed to surpass Uber’s download volume on Google Play. “On the whole, we have not seen a significant shift in download trends for Uber in light of recent events,” said Christine Kim, a spokesperson for App Annie. An Uber spokesperson declined to comment on the record.

While those numbers stand in contrast to the articles predicting Uber’s demise, they probably wouldn’t surprise scholars who study consumer behavior. As my colleague Alana Semuels has written, changing consumer behavior en masse is rather difficult. Most of the time, consumers don’t follow through with what they might be saying (or tweeting) about their buying habits. Previous studies of how negative media portrayal of a product impacted sales—which, to be fair, looked at a completely different industry (beef)—found that the effects were often temporary, waning or disappearing a few weeks later.

Of course, there are two ostensibly more important metrics when it comes to measuring the health of a ride-sharing operation: ridership and the number of drivers. The last time the two companies disclosed ridership, Lyft lagged behind Uber with 52.6 million rides in the U.S. in the fourth quarter of 2016, while Uber logged 78 million rides in the U.S. in December alone. In March, Lyft reported over 100,000 drivers, while Uber hit 400,000 back in 2015. (Uber hasn’t updated its numbers since.)

While consumers on the whole have not stopped downloading Uber, its customers seem to be vocal about their displeasure in a different way: According to App Annie data, Uber’s star rating has been declining despite steady download volume. Since Fowler’s blog post, new ratings of Uber’s app in the Apple App Store have largely consisted of 1-star reviews. In contrast, Lyft has been averaging 4.5-star reviews for the same period.

Lyft, for its part, has been seizing the current moment, and the perceived weakness of its main competitor, by raising money. On Wednesday, The Wall Street Journal reported that the company started seeking more funding from its investors this week, with hopes of raising some $500 million. Given that customers seem pleased with the company right now, at least comparatively, that’s probably a smart move. Lyft still has significant ground to cover when it comes to catching up with Uber, and that progress will require a lot more than its competitor’s PR implosion.

Kai Pfaffenbach / Reuters

Uber’s Secret Program Raises Questions About Discrimination


When a technology company decides to block a person from using its service, it’s usually obvious to that person. There may be an email to notify users when their account has been suspended. Or, you know, you try to log in and you can’t.

With Uber, it isn’t so straightforward.

The ridesharing giant created a mirror of its own app: a map view intended to give some users the impression that they could hail a ride—complete with phantom cars gliding across the screen—when no car was ever going to show up.

Uber confirmed to The Atlantic the existence of the program, which was first reported by Mike Isaac of The New York Times, but emphasized in a statement that the fake version of the app was meant to protect drivers—not to mislead local investigators in states where Uber’s arrival had generated controversy, as the Times reported. The program launched internally as Greyball and was later renamed VTOS, short for Violations of Terms of Service—a reflection of Uber’s justification for creating it in the first place.

“This program denies ride requests to fraudulent users who are violating our terms of service,” an Uber spokesperson said in a statement, “whether that’s people aiming to physically harm drivers, competitors looking to disrupt our operations, or opponents who collude with officials on secret ‘stings’ meant to entrap drivers.”​

One such sting involved a code enforcement inspector in Portland, Ore., named Erich England, who posed as a rider and attempted to hail an Uber as part of an operation in 2014, when Uber launched its service in Portland without seeking approval from the city. As the Times reported: “But unknown to Mr. England and other authorities, some of the digital cars they saw in the app did not represent actual vehicles. And the Uber drivers they were able to hail also quickly canceled. That was because Uber had tagged Mr. England and his colleagues—essentially Greyballing them as city officials—based on data collected from the app and in other ways. The company then served up a fake version of the app populated with ghost cars, to evade capture.”
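Stripped of its scale, the mechanism the Times describes is a single server-side branch: tagged accounts get a fabricated map view instead of a live one. Here is a deliberately toy sketch in Python—every name is hypothetical, and none of this is drawn from Uber’s actual code:

```python
# A toy illustration of tag-and-serve logic: flagged accounts see a mirror
# of the app populated with cars that do not exist.
import random

flagged_users = {"inspector_42"}  # accounts tagged from app data and elsewhere

def real_nearby_cars(user_id: str) -> list:
    """Stand-in for a live query of actual drivers."""
    return [{"car_id": "A1", "eta_minutes": 3}]

def ghost_cars(n: int = 4) -> list:
    """Phantom cars that will never arrive."""
    return [{"car_id": f"ghost-{i}", "eta_minutes": random.randint(2, 9)}
            for i in range(n)]

def map_view(user_id: str) -> list:
    # The deception is one branch: flagged users get the fake view.
    if user_id in flagged_users:
        return ghost_cars()
    return real_nearby_cars(user_id)
```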

There’s a clear question of legality here, which Uber declined to discuss on the record. The attorneys general in five states—California, Hawaii, Massachusetts, Oregon, and Texas—either declined to speak on the record about whether they were investigating Uber over the program, or did not immediately respond to a request for comment on Friday afternoon.

But there’s something else at stake here, too. Uber’s ghost app raises a pressing question for anyone who lives or works, at least part of the time, on the internet—and, in the United States, that's nearly everyone: What does the right to refuse service look like in digital environments?

At least some of the time, as Uber proves, it’s invisible. And the implications of this invisibility are troubling.

It’s only natural that Uber would be the company to push this question forward, and not just because of its astonishing streak of recent controversies. Uber has dramatically reshaped the way people think about the integration of digital and physical worlds, and the possibilities for using digital interfaces to make something happen at the street level. Today, people fully expect to be able to touch a button on their phone to make a car pull up to the curb they’re standing on; a few short years ago the concept was magical. This shift in expectation happened so quickly that what it means for other cultural norms and laws—like the right to refuse service—is still uncertain.

In brick-and-mortar environments, being refused service happens face to face—meaning, the person being turned away from a bar, for instance, knows they're being denied entry, even if they don’t always agree with the rationale. But what happens when there’s an elaborate facade designed to keep a person from knowing that they were never allowed in?

In Uber’s view, this strategy offers an added layer of protection to drivers, preventing not only potential harm but also the retaliation that might come from someone who finds out they have been blocked. In principle, this makes sense—it’s like building in a layer of plausible deniability to de-escalate tension. As a disruptive force in an industry long dominated by taxis, Uber has met huge resistance in some markets, and the outcome has been messy, to say the least.

Such secrecy, however, also shields Uber from any public scrutiny over who gets denied its service and why. “It’s critically important for people to know they’re being refused service,” Ethan Zuckerman, the director of the Center for Civic Media at MIT, told me. “It’s what allows people to file claims for discrimination, allows them to gather evidence and demonstrate that a class of people is being excluded.”

“Greyballing police may primarily raise the concern that Uber is obstructing justice,” Zuckerman added, “but Greyballing for other reasons—a bias against Muslims, for instance—would be illegal and discriminatory, and it would be very difficult to make the case it was going on.”

The history of race-based discrimination in the United States is full of examples of this very dynamic playing out in non-digital environments, like the housing market. Even after the passage of the Fair Housing Act of 1968, real estate companies would tell black renters that they had no units available—then later rent those units to white people. President Donald Trump was sued for this practice in the 1970s, and ultimately reached an agreement with the Justice Department, but still denies having done anything wrong, as The Washington Post reported last year.

Just like prospective renters being told a neighborhood was full, “those discriminated against would simply have a poor experience with Uber and move on to another service,” Zuckerman said. “Uber likes to make the case that it can innovate because it’s largely unregulated, but it can also discriminate for the same reasons.”

In digital environments, as in physical ones, the refusal of service—and any tactics used to refuse it, however secretive—should prompt observers to ask: Who is actually being protected here? And from what?

Uber CEO Travis Kalanick addresses a gathering at an event in New Delhi, India, on December 16, 2016. (Adnan Abidi / Reuters)

‘Artificial Intelligence’ Has Become Meaningless


In science fiction, the promise or threat of artificial intelligence is tied to humans’ relationship to conscious machines. Whether it’s Terminators or Cylons or servants like the “Star Trek” computer or the Star Wars droids, machines warrant the name AI when they become sentient—or at least self-aware enough to act with expertise, not to mention volition and surprise.

What to make, then, of the explosion of supposed AI in media, industry, and technology? In some cases, the AI designation might be warranted, if only aspirationally. Autonomous vehicles, for example, don’t quite measure up to R2-D2 (or HAL), but they do deploy a combination of sensors, data, and computation to perform the complex work of driving. But in most cases, the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software.

* * *

Deflationary examples of AI are everywhere. Google funds a system to identify toxic comments online, a machine-learning algorithm called Perspective. But it turns out that simple typos can fool it. Artificial intelligence is cited as a barrier to strengthen an American border wall, but the “barrier” turns out to be little more than sensor networks and automated kiosks with potentially dubious built-in profiling. Similarly, a “Tennis Club AI” turns out to be just a better line sensor using off-the-shelf computer vision. Facebook announces an AI to detect suicidal thoughts posted to its platform, but closer inspection reveals that the “AI detection” in question is little more than a pattern-matching filter that flags posts for human community managers.
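The gap between the label and the software is easy to see in miniature. Perspective is a genuine machine-learning model, but the failure mode it shares with the crudest filters can be reproduced in a toy, entirely hypothetical keyword matcher, which a single typo slips past:

```python
# A toy pattern-matching "toxicity detector"—the kind of flagging filter
# described above, not Perspective's real model. The terms are hypothetical.
TOXIC_TERMS = {"idiot", "moron"}

def flag_for_review(comment: str) -> bool:
    """Flag a comment if any word matches a blocklisted term exactly."""
    tokens = comment.lower().split()
    return any(token.strip(".,!?") in TOXIC_TERMS for token in tokens)

print(flag_for_review("you are an idiot"))   # True
print(flag_for_review("you are an idiiot"))  # False: one typo slips through
```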

AI’s miracles are celebrated outside the tech sector, too. Coca-Cola reportedly wants to use “AI bots” to “crank out ads” instead of humans. What that means remains mysterious. Similar efforts to generate AI music or to compose AI news stories seem promising at first blush—but then, AI editors trawling Wikipedia to correct typos and links end up stuck in infinite loops with one another. And according to the human-bot interaction consultancy Botanalytics (no, really), 40 percent of interlocutors give up on conversational bots after one interaction. Maybe that’s because bots are mostly glorified phone trees, or else clever, automated Mad Libs.

AI has also become a fashion for corporate strategy. The Bloomberg Intelligence economist Michael McDonough tracked mentions of “artificial intelligence” in earnings call transcripts, noting a huge uptick in the last two years. Companies boast about undefined AI acquisitions. The 2017 Deloitte Global Human Capital Trends report claims that AI has “revolutionized” the way people work and live, but never cites specifics. Nevertheless, coverage of the report concludes that artificial intelligence is forcing corporate leaders to “reconsider some of their core structures.”

And both press and popular discourse sometimes inflate simple features into AI miracles. Last month, for example, Twitter announced service updates to help protect users from low-quality and abusive tweets. The changes amounted to simple refinements to hide posts from blocked, muted, and new accounts, along with other, undescribed content filters. Nevertheless, some takes on these changes—which amount to little more than additional clauses in database queries—conclude that Twitter is “constantly working on making its AI smarter.”
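To see how modest such a change really is, consider a hypothetical timeline filter: hiding posts from blocked, muted, and brand-new accounts comes down to a few extra conditions, the in-code equivalent of extra WHERE clauses. (This sketch invents its own data shapes and touches no real Twitter API.)

```python
# Hypothetical sketch: the "smarter AI" amounts to three more filter
# conditions on a timeline query.
from datetime import datetime, timedelta

def visible_tweets(tweets, blocked, muted, min_account_age_days=7):
    """Keep tweets whose authors aren't blocked or muted and whose
    accounts are older than the cutoff."""
    cutoff = datetime.now() - timedelta(days=min_account_age_days)
    return [t for t in tweets
            if t["author"] not in blocked
            and t["author"] not in muted
            and t["account_created"] < cutoff]

demo = [{"author": "a", "account_created": datetime(2016, 1, 1)},
        {"author": "b", "account_created": datetime.now()}]
print(visible_tweets(demo, blocked=set(), muted={"a"}))  # [] — one muted, one too new
```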

* * *

I asked my Georgia Tech colleague, the artificial intelligence researcher Charles Isbell, to weigh in on what “artificial intelligence” should mean. His first answer: “Making computers act like they do in the movies.” That might sound glib, but it underscores AI’s intrinsic relationship to theories of cognition and sentience. Commander Data poses questions about what qualities and capacities make a being conscious and moral—as do self-driving cars. A content filter that hides social media posts from accounts without profile pictures? Not so much. That’s just software.

Isbell suggests two features necessary before a system deserves the name AI. First, it must learn over time in response to changes in its environment. Fictional robots and cyborgs do this invisibly, by the magic of narrative abstraction. But even a simple machine-learning system like Netflix’s dynamic optimizer, which attempts to improve the quality of compressed video, takes data gathered initially from human viewers and uses it to train an algorithm to make future choices about video transmission.
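Isbell’s first criterion can be made concrete with a miniature example. The toy below—entirely hypothetical, and no substitute for Netflix’s actual optimizer—nudges a target bitrate in response to viewer feedback, learning over time in the minimal sense he describes:

```python
# A toy system that adjusts itself in response to its environment: it
# lowers the bitrate while viewers stay happy and backs off when they don't.
class BitrateChooser:
    def __init__(self, bitrate_kbps=1500.0, step=0.1):
        self.bitrate_kbps = bitrate_kbps
        self.step = step

    def update(self, viewer_satisfied: bool) -> None:
        """Learn from one observation of the environment."""
        if viewer_satisfied:
            self.bitrate_kbps *= 1 - self.step      # try saving bandwidth
        else:
            self.bitrate_kbps *= 1 + 2 * self.step  # restore quality fast

chooser = BitrateChooser()
for satisfied in [True, True, False, True]:
    chooser.update(satisfied)
print(f"current target: {chooser.bitrate_kbps:.0f} kbps")
```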

Isbell’s second feature of true AI: what it learns to do should be interesting enough that it takes humans some effort to learn. It’s a distinction that separates artificial intelligence from mere computational automation. A robot that replaces human workers to assemble automobiles isn’t an artificial intelligence so much as a machine programmed to automate repetitive work. For Isbell, “true” AI requires that the computer program or machine exhibit self-governance, surprise, and novelty.

Griping about AI’s deflated aspirations might seem unimportant. If sensor-driven, data-backed machine-learning systems are poised to grow, perhaps people would do well to track the evolution of those technologies. But previous experience suggests that computation’s ascendancy demands scrutiny. I’ve previously argued that the word “algorithm” has become a cultural fetish, the secular, technical equivalent of invoking God. To use the term indiscriminately exalts ordinary—and flawed—software services as false idols. AI is no different. As the bot author Allison Parrish puts it, “whenever someone says ‘AI’ what they're really talking about is ‘a computer program someone wrote.’”

Writing at the MIT Technology Review, the Stanford computer scientist Jerry Kaplan makes a similar argument: AI is a fable “cobbled together from a grab bag of disparate tools and techniques.” The AI research community seems to agree, calling their discipline “fragmented and largely uncoordinated.” Given the incoherence of AI in practice, Kaplan suggests “anthropic computing” as an alternative—programs meant to behave like or interact with human beings. For Kaplan, the mythical nature of AI, including the baggage of its adoption in novels, film, and television, makes the term a bogeyman to abandon more than a future to desire.

* * *

Kaplan keeps good company—when the mathematician Alan Turing accidentally invented the idea of machine intelligence almost 70 years ago, he proposed that machines would be intelligent when they could trick people into thinking they were human. At the time, in 1950, the idea seemed unlikely; even though Turing’s thought experiment wasn’t limited to computers, the machines still took up entire rooms just to perform relatively simple calculations.

But today, computers trick people all the time. Not by successfully posing as humans, but by convincing them that they are sufficient alternatives to other tools of human effort. Twitter and Facebook and Google aren’t “better” town halls, neighborhood centers, libraries, or newspapers—they are different ones, run by computers, for better and for worse. The implications of these and other services must be addressed by understanding them as particular implementations of software in corporations, not as totems of otherworldly AI.

On that front, Kaplan could be right: abandoning the term might be the best way to exorcise its demonic grip on contemporary culture. But Isbell’s more traditional take—that AI is machinery that learns and then acts on that learning—also has merit. By protecting the exalted status of its science-fictional orthodoxy, AI can remind creators and users of an essential truth: today’s computer systems are nothing special. They are apparatuses made by people, running software made by people, full of the feats and flaws of both.

Reuters/China Stringer Network

Is It Wise to Foil North Korea’s Nuclear Tests With Cyberattacks?


Last year, North Korea’s missile tests started having major problems: Tests of the Musudan, a medium-range missile, failed nearly nine times out of ten, surprising some experts. The country had pushed its nuclear program forward relatively quickly, and avoided some key errors. What had changed?

According to a detailed new report from The New York Times, creaky parts and bad engineering probably played a role—but those problems may have been compounded by an American campaign of cyberattacks on the missile launches, ramped up under President Obama.

Attacking another country’s military arsenal, whether by bomb or by malicious code, always comes with the potential of escalation. Targeting North Korea’s nuclear program—the pride and joy of the country’s volatile supreme leader, Kim Jong Un—is especially dicey.

For one, it could prompt North Korea to retaliate. The pariah state showed its willingness to launch cyberattacks on the U.S. when its state-sponsored hackers obtained and published private emails and information from Sony Pictures Entertainment in 2014. Leaking information from a movie studio is a far cry from a cyberattack on, say, a piece of critical infrastructure like the U.S. electrical grid—a feat the U.S. military fears North Korea may one day be capable of—but the Sony hack may have been something of a warning shot.

Attacking another country’s nuclear arsenal risks disrupting the delicate balance of deterrence that generally keeps powerful militaries from lobbing nukes at one another. The prospect of mutually assured destruction that has thus far staved off nuclear war could be thrown into jeopardy.

If a nation expects its valuable warheads to be destroyed at any moment, it could develop a “use it or lose it” mentality, said Vince Houghton, the historian and curator at the International Spy Museum in Washington, D.C. That could encourage an unpredictable leader like Kim to launch a working missile before it’s too late and it’s remotely disabled. What’s more, a country that thought it had disabled an adversary’s nuclear arsenal “might be more tempted to take the risk of launching a preemptive attack,” wrote David Sanger and William Broad in the Times.

The last known time the U.S. military trained a cyberweapon on another country’s nuclear program was when it infected nuclear control systems in Iran with Stuxnet, a sophisticated piece of malware that was co-developed with Israeli forces, nearly a decade ago.

But that strike, which set back Iran’s nuclear program by years, isn’t the same as meddling in North Korea’s nuclear program, Houghton said. “Stuxnet was an attack on a country that didn’t yet have weapons, so the idea was to avoid a shooting war while still slowing down progress toward a deliverable warhead,” he said. “When you have a country that is already a nuclear power, the dynamic is somewhat different.”

Launching targeted cyberattacks is just one in a host of options available to the president for disrupting North Korea’s nuclear development. Houghton said China, North Korea’s friendly neighbor and political benefactor, may be more willing to tolerate a cyberattack on its client state than a conventional attack. But if it’s all the same to Beijing, Houghton said, a straightforward military strike could be just as effective—if, of course, it were able to hit all of the sensitive targets at once, including some that may be hidden underground or in caves.

“In my opinion, if you are taking the risk of a country responding to a cyberattack with a nuclear response, why use something as touch and go as cyber?” he said. “Just drop a JDAM”—a computer-guided bomb—“on their nuke sites, or command and control facilities. Why get cute?”

A conventional attack could, of course, provoke a dangerous, immediate response from North Korea. But a cyberattack might set off dangerous longer-term consequences, said James Acton, the co-director of the Nuclear Policy Program at the Carnegie Endowment for International Peace.

China and Russia have long been worried that the U.S. would use its cyberweapons to disrupt their nuclear arsenals, Acton said, even though the U.S. has assured the countries that it’s not developing technology to undermine their nuclear deterrence. “To actually see the U.S. exercise its capability will be pretty concerning to them,” Acton said. “In the longer term, this could set off very serious alarm bells in Beijing and Moscow.”

The fear that the next Stuxnet could be aimed at them might prod Russia and China into taking drastic steps to protect their nuclear programs. China could decentralize its nuclear arsenal, for example, to prevent the U.S. from being able to eavesdrop on or scramble launch orders sent to commanders in the field. Or Russia could press on with risky ideas like its nuclear-armed unmanned underwater drone, which could carry out a mission with little human intervention once it’s been set along its path.

It’s hard to know just how successful the cyberattacks on North Korea were. Missile tests have high failure rates, and the recent ones could have just been the result of human error, a faulty batch of parts, or just bad luck. The cyberattacks could be justified if they really made a difference, Acton said. “I could make an argument, in principle, that if it were effective at slowing down the program, it might be worth doing—even at the risk of pissing off the Russians and the Chinese.”

Korean Central News Agency / handout via Reuters

An Acoustic Throw-Back for the New York Knickerbockers


There was something decidedly unnerving about the atmosphere at Madison Square Garden during the Knicks-Warriors matchup on Sunday night.

“It was weird. It was really weird,” said Steve Kerr, the head coach for the Golden State Warriors, in a postgame interview. “It felt like church.”

Madison Square Garden had announced an unusual experiment, via a message on the arena’s jumbo screen, before the game: “The first half of today’s game will be presented without music, video, or in-game entertainment so you can experience the game in its purest form. Enjoy the sounds of the game.”

Courtesy of Madison Square Garden

It’s not that it was quiet, exactly. You could still hear a roiling crowd in the stands, the bleat of a referee’s whistle, the reliable thud of the ball, and the squeaking of sneakers on polished maple. But to anyone who is accustomed to the pace and sound of a professional game, things were definitely different. One sportswriter described the atmosphere as “incredibly uncomfortable.”

The players certainly noticed. “I didn’t like it,” said Kristaps Porzingis, a center for the Knicks, in a postgame interview. The Knicks lost the game, 112 to 105.

“That was pathetic,” said Draymond Green, a power forward for the Warriors. “You don’t go back to what was bad. It’s like, computers can do anything for us. It’s like going back to paper. Why would you do that?”

Spokespeople for Madison Square Garden and the Knicks declined to discuss the genesis for the experiment, or comment further.

The professionalization of basketball has led to, among other things, an unusual convergence of sport and sound. More precisely, though, it is a convergence of basketball and broadcast technology. Broadcast television, in particular, has forever changed the game, such that even the smallest technical details shape people’s expectations about what a basketball game should be.


Technicians have come up with all kinds of workarounds to give TV viewers the sense of atmosphere at the actual game. One of the best, and simplest, is the addition of floor-level microphones aimed at capturing the sweet echo of a ball being dribbled down the court. Backboards are also outfitted with contact mics, to amplify bank shots and swishes. (Television networks have been strategic about their use of this technology. They want viewers at home to hear the ball, but not the trash talk among players.) Today, people take these kinds of auditory enhancements for granted—but they’re part of why revisiting classic basketball footage can make historic games seem even older than they actually are. (Also, those shorts.)

Then there are the ways that broadcasting bleeds back onto the court.

The rhythms of broadcast television are now built into the very game, most famously in the form of TV timeouts—those mandatory breaks in play so that viewers watching a televised game don’t miss any action during commercials. TV timeouts have changed coaching strategies, and changed the way basketball fans experience the game in person, too. Major arenas have created new traditions around the necessity of these media pauses—features like kiss-cams, interactive guessing games, Jock Jams-style music, and other team-specific rituals that are their own forms of broadcast.

People accept the game as it is now because it’s what they’re used to. But it wasn’t always so. In 1967, in the early days of TV timeouts, the crowd at an NFL game loudly booed in response to a television-prompted pause in play. The episode was described in newspaper accounts the next day.

Basketball games, too, were different back then. Listen to what a basketball game (a very, very exciting one) sounded like in 1962, for example:



If anything, the experiment at Madison Square Garden Sunday night was like a return to an earlier era of broadcast, when fans could listen to games on the radio, but before most games were televised. But the experiment also raises a deeper question: What is basketball in its purest form? Is the sport somehow more authentic when it is not broadcast, or when it is not being played professionally, or when there are no spectators at all? People often describe the purity of college hoops as something tied to the fact that the players are not paid. But there’s a strong argument that the lack of compensation for college players is instead what makes college basketball impure.

As it turns out, the first basketball game ever to be televised in the United States was a college game—and it was played at Madison Square Garden. The year was 1940. The matchup: Fordham vs. Pittsburgh. (Pittsburgh won.) That game didn’t even begin to approach the highly produced spectacle that pro basketball games have become. It simply couldn’t have, given the technologies available at the time. Besides, it’s unlikely very many people even watched it on television. TV sets in New York City numbered in the hundreds at the time, according to Fordham’s sports information office. It would be another seven years before the World Series was televised. Ditto the first televised State of the Union address. But when television finally took off, it happened quickly and irreversibly. TV has left as deep an imprint on the sporting world as it has on the rest of American culture.

In 1945, nearly one-fifth of people in the United States still didn’t know what a television was, according to a Gallup poll that year. And even among those who’d heard of TV, most people had never seen one in person. Between 1947 and 1957, American television ownership jumped from 10 percent to about 85 percent. By the 1960s, with the emergence of color television, there was no turning back. “For color television, basketball is unusually ideal,” The San Mateo Times wrote in 1967. “First of all, the lighting in the usual arena setup is more than adequate, and the short distances involved keep color distortion to a minimum.”

“Just about every sports producer and director we talked with agreed that basketball couldn’t be better for television if it had been created by a television producer,” the Times concluded.

And now, a return to a pre-television aesthetic doesn’t feel authentic or pure, but rather unnatural.

“It was ridiculous,” said Green. But he could have just as easily been talking about the history of broadcast as he went on: “It changed the flow of the game. It changed everything.”

Derrick Rose (25), a point guard for the New York Knicks, drives against Stephen Curry (30) and Draymond Green (23) of the Golden State Warriors on March 5. (Brad Penner / USA TODAY Sports / Reuters)

The Cyberwar Information Gap


U.S. government hackers began developing destructive malware meant to disrupt Iran’s nascent nuclear program as early as 2006, and deployed an early version of the worm in Iran the following year. But it wasn’t until 2010 that the first public reports about the cyberattack—dubbed Stuxnet—began to surface.

At around the same time as the U.S. was working on Stuxnet, it attempted a similar attack on North Korea’s nuclear program. That effort failed: The malware never reached the computers that controlled the country’s nuclear centrifuges. But it wasn’t reported until 2015, years after it happened. Just this weekend, The New York Times described a series of cyberattacks on North Korea’s missile launches that took place in 2016, during Barack Obama’s final year as president.

The timing of these landmark reports emphasizes the yawning gap that often opens between a high-profile state-on-state cyberattack and the moment it’s revealed to the public.

For one, the effects of a military cyberattack often aren’t observable to civilians or journalists. Unlike a conventional strike—which might feature planes streaking across the sky or troops deploying on the ground—a cyberattack can be launched remotely and silently, and inflict damage only on a very limited target. (It’s also a lot easier to experiment with destructive malware in secret than it is to quietly test a nuclear bomb.)

When a cyberattack has been carried out, at least one party, the attacker, knows about it immediately. Sometimes, the attack’s target quickly becomes aware of what happened, but often, because of the confusing and covert nature of cyberwar, the victim remains in the dark for months or even years. When Chinese hackers stole personal data on more than 22 million Americans from the Office of Personnel Management, they gained access to two database systems in May and October of 2014—but OPM didn’t discover the intrusions until May and April of 2015, respectively.

Once aware of a cyberattack, the governments involved have to decide whether or not to publicize it. Sometimes, it’s in the best interest of both the attacker and the attacked to keep a hacking incident quiet. The reputation of the target country might suffer if it acknowledges that a successful attack was carried out against it, and it could even feel pressured to strike back if the attack became public. Meanwhile, the aggressor may benefit from keeping its cyber capabilities secret from other adversaries.

As the Times worked on the story about last year’s cyberattacks on North Korea, it was in contact with the Office of the Director of National Intelligence, and agreed to withhold certain details from the final story “to keep North Korea from learning how to defeat [the attacks].” James Lewis, a security-policy expert at the Center for Strategic and International Studies, said one of the Times reporters reached out to him several months ago. Lewis recommended the reporters check in with the DNI before publishing, which they did.

“It would have been better unpublished (unless the North Koreans finally woke up, and there was then no harm to going public),” Lewis wrote in an email. Now that they’re widely known, the cyberattacks may prompt Russia and China to take risky new moves to protect their own nuclear arsenals from American malware, James Acton, a nuclear-policy expert at the Carnegie Endowment for International Peace, told me this weekend.

When neither side is willing to go public, it takes dogged reporting to uncover a cyberattack. The Reuters story about the failed Stuxnet-style cyberattack on North Korea was sourced to several anonymous high-level intelligence officials, and came about five years after the initial incident. The Times story was a year in the making, and was assembled through interviews and a thorough review of public records and information.

But sometimes, it is in the best interest of the government that’s been hit by hackers to publicly attribute the strike to its perpetrator. The U.S. has shown a willingness to do this: On three separate occasions, the intelligence community has publicly assigned blame for a cyberattack, either through official statements or more subtly through the press.

After sensitive emails and documents from Sony Pictures Entertainment officials were leaked in 2014, the FBI said it had determined that North Korea was behind the hack. The OPM hack took place that same year; after it was made public in 2015, top members of Congress consistently blamed China for the incursion, although the government never released a formal statement. And when WikiLeaks began to publish private emails from top Democrats, all 17 agencies in the intelligence community put out a joint statement singling out Russia as the aggressor.

State-on-state cyberattacks are a new enough phenomenon that international norms for dealing with them are still developing. Part of the U.S. government’s willingness to call out foreign state-sponsored hackers comes from a belief that doing so—and imposing consequences—will act as a deterrent against future cyberattacks.

But under President Trump, the U.S. government may be less willing to attribute cyberattacks than it was under Obama. As I wrote in December, Trump’s hostility toward investigations focused on Russia’s election-related hacking, and his repeated public skepticism that hacking can be attributed accurately at all, suggest he won’t put a premium on tracking down the origin of a cyberattack—or might avoid making such a determination public, if one is ever reached.

This weekend, Trump made the unfounded claim that Obama ordered surveillance on his presidential campaign in the leadup to the election, and demanded that congressional investigators fold that question into their ongoing inquiry into Russian electoral interference. In the past, Trump has also called for investigations into leaks to media about Russia-related intelligence reports—a move that was seen as designed to distract from questions about Russia’s role in cyberattacks on Democrats.

If the U.S. becomes unwilling to come forward with details about cyberattacks that target American government agencies, businesses, or individuals, those details may not come out for years—surfacing only when journalists connect the dots and publish them.

News of cyberattacks aimed at North Korea's missile program emerged this weekend, about a year after they began. (Korean Central News Agency / Handout via Reuters)

Should Journalists Be More Cautious of WikiLeaks?


Since around the time of the presidential election in November, the U.S. media has taken a hard look at its tumultuous love affair with WikiLeaks. News organizations had lapped up the documents that the site was churning out: first, thousands of emails from the Democratic National Committee, then thousands more from the personal Gmail account of Hillary Clinton’s campaign manager, John Podesta.

The U.S. intelligence community now says the emails were stolen by Russian hackers and passed along to WikiLeaks for publication, an allegation WikiLeaks’ founder, Julian Assange, continues to deny. As the source of the leaked information came into focus, some news organizations began to rethink their eager participation in amplifying it. “Every major publication, including The [New York] Times, published multiple stories citing the DNC and Podesta emails posted by WikiLeaks, becoming a de facto instrument of Russian intelligence,” the Times wrote in a detailed postmortem of Russia’s meddling around the U.S. election.

So when WikiLeaks dumped thousands of electronic documents stolen from the CIA on its website on Tuesday—a leak it called “the largest intelligence publication in history”—the media got its first chance since the election to try out its new skeptical approach.

At first blush, the new WikiLeaks reporting didn’t look much different than the old WikiLeaks reporting.

The Times published a piece written by three journalists that repackaged the contents of a WikiLeaks press release announcing the CIA document dump. The story’s breathless second paragraph read: “If the documents are authentic, as appeared likely at first review, the release would be the latest coup for the anti-secrecy organization and a serious blow to the CIA, which maintains its own hacking capabilities to be used for espionage.”

The Times article didn’t mention the possibility of a connection between the Russian government and WikiLeaks, which was the focus of a report published by the Director of National Intelligence in January. The Washington Post included a paragraph about WikiLeaks’ track record in its story about the CIA documents, and quoted Nicholas Weaver, a security expert at the University of California, Berkeley, speculating that the data was “probably legitimate or contains a lot of legitimate stuff,” in part because of the sheer size of the leak.

To be fair, the WikiLeaks dump is momentous, and the Times and the Post published stories about it before it was more than a few hours old. They attempted to check whether the leak was genuine, and made it clear that their determinations of the leak’s authenticity were only preliminary. It is, after all, easy to slip a few fabricated documents into a trove of thousands.

The question of how to approach WikiLeaks remains unsettled. Should journalists absolve the site of its apparent participation in a Russian campaign to tip the results of the U.S. election? Does the gravity of the documents contained in the CIA leak necessitate reporting on them, even before they’re thoroughly vetted? If these documents appear genuine, how much should news articles question why WikiLeaks published them?

For its part, WikiLeaks appears to be shifting its strategy with its latest document dump. In the past, it has let the public loose on its leaked documents with little more than a few paragraphs of introduction, occasionally building search functions to let users sift through the largest dumps. The CIA leak, on the other hand, came with a detailed press release and an analysis of some key findings from the documents, written in a journalistic style.

Uncharacteristically, WikiLeaks appears to have gone out of its way to redact sensitive information and withhold malicious code from the CIA documents it made public. That’s a departure from previous leaks, which were wholly unfiltered. In an opinion piece published in the Times in November, Zeynep Tufekci, a scholar of technology and society, wrote about the difference between whistleblowing and document dumping:

Whistle-blowing … is a time-honored means for exposing the secret machinations of the powerful. But the release of huge amounts of hacked data, with no apparent oversight or curation, does the opposite. Such leaks threaten our ability to dissent by destroying privacy and unleashing a glut of questionable information that functions, somewhat unexpectedly, as its own form of censorship, rather than as a way to illuminate the maneuverings of the powerful.

The analyses in the WikiLeaks release appear to be nudging reporters toward a few storylines in particular: bureaucratic infighting between the CIA and the National Security Agency, and the dangers of “cyberweapons proliferation,” to name two. But a section of the release with answers to frequently asked questions includes an odd passage that speaks directly to journalists.

One answer reassures reporters who might fear that others will “find all the best stories before me.” WikiLeaks responds: “Unlikely. There are very considerably more stories than there are journalists or academics who are in a position to write them.”

Stranger still is another answer that suggests WikiLeaks left some of the juiciest documents out of its initial summary. “WikiLeaks has intentionally not written up hundreds of impactful stories to encourage others to find them and so create expertise in the area for subsequent parts in the series. They’re there. Look.” The answer goes on to say, “Those who demonstrate journalistic excellence may be considered for early access to future parts.”

Here, WikiLeaks sounds less like a purveyor of newsworthy documents and more like an exclusive club that will only accept reporters who complete a scavenger hunt to the organization’s satisfaction. And the race has already begun.

WikiLeaks founder Julian Assange makes a speech from the balcony of the Ecuadorian Embassy in London. (Peter Nicholls / Reuters)

When Algorithms Don’t Account for Civil Rights


As people live more of their lives online, figuring out how to extend offline protections to online practices becomes ever more important. One area where this problem is evident is bullying. Schools have long had punitive systems in place that, though far from perfect, sought to make their classrooms and hallways safe environments. Extending those same systems to the online world has been a significant challenge—how can schools monitor what happens online and in private? And what’s the appropriate punishment for bad behavior that happens on the internet?

Another area that has proven difficult for this act of offline-to-online translation is the rules that protect Americans from discriminatory advertising. The internet is chock-full of ads, many of which are uncomplicated efforts to get people to buy more home goods and see more movies and so on. But things get much trickier when the goods being advertised—such as housing, jobs, and credit—are those that have histories of being off-limits to women, black people, and other minorities. For these industries, the federal government has sought to make sure that advertisers do not help further historical oppression, via laws such as the Fair Housing Act and the Equal Credit Opportunity Act.

By design, many social-media companies and other websites have a ton of data about who users are and what they’re interested in. For advertisers, that makes promoting goods on those sites particularly appealing, since they can aim their ads at the narrow slice of people who might be interested in their products. But targeting, taken too far, can be the same as discrimination. And while it’s perfectly legal to advertise men’s clothing only to men, it’s completely illegal to advertise most jobs exclusively to that same group.

For businesses like Facebook and other platforms where companies advertise, that can create a challenge. They must figure out how to avoid discriminatory ads while remaining attractive to advertisers.

The weaknesses in virtual protections have become quite apparent. In the fall of 2016, a ProPublica investigation concluded that Facebook’s advertising platform had some serious deficiencies. The option for advertisers to target users based on their assigned “ethnic affinity,” the piece said, made it possible for companies to exclude entire groups of people from viewing their ads in a way that was not only ethically dubious, but also may have run afoul of civil-rights laws. While Facebook has denied any legal wrongdoing, the company announced several changes to its advertising platform in February—including renaming the ethnic-affinity designation (to “multicultural affinity”) and preventing the use of the category for ads related to housing, credit, and jobs.

For Facebook and some other platforms, advertising revenues were incorporated into their business plans without, they claim, compromising their egalitarian mission statements or crossing any legal lines. They have done so by posting generic advertising agreements and requiring advertisers to affirm that they will abide by anti-discrimination clauses. Some agreements are also intended to prevent more generic forms of scamming and false advertising. But it’s difficult to monitor whether advertisers actually comply—ads are generally coordinated by algorithms. Thus, as sites grow and bring in ever more money, these platforms must choose to what extent greater profits are worth the risk of discrimination, since the value of the advertising hinges in part on how precise the targeting can be.

Steve Satterfield, the manager of privacy and public policy at Facebook, told me that the site currently has around 4 million advertisers. When it comes to addressing targeted ads that might impinge on civil rights, Satterfield says, “it is a hard thing to identify those ads and to be able to take action on them.” That’s because not every ad that targets users based on race or ethnicity is exclusionary, and not every type of ad falls within the purview of federal civil-rights law.  

By and large, Americans have gotten used to the idea that ads are crafted to reach specific groups in specific ways: Ads for beer appear during sports games, while ads for toy stores pop up during children’s programs. Sites that cull data from users’ behavior and content offer advertisers even more customization. Aaron Rieke, a principal at the technology consulting firm Upturn, says that it’s pretty common practice for marketers to use information such as geography and census data to piece together information about racial groups—which means that platforms can enable discrimination even if they don’t give advertisers the sort of explicit “ethnic affinity” option that Facebook once did.

Doc Searls, the founder of ProjectVRM at Harvard, which works on issues of standards and protocols for technology, says that the world Facebook and some of its social-media brethren inhabit—one built on mining users’ every interaction on a platform for data about who they are and what they are interested in—is an increasingly appealing option for advertisers, but a potentially problematic one when it comes to protecting users’ rights.

The advertising these platforms offer is a significant departure from how marketing worked for a long time, Searls says. “An important thing about advertising of the traditional kind, the kind that Madison Avenue practiced for more than 100 years, is that it's not personal. It's aimed at large populations. If you want to reach black people, you go to Ebony back in the day. And if you wanted to reach camera people, you went to a camera magazine,” he told me. “The profiling was pretty minimal, and it was never personal.”

Prior to civil-rights laws, advertisers could be blatant about who they were trying to attract or reject. They could, for instance, say that minorities weren’t allowed to move into a neighborhood, or that women weren’t invited to apply for jobs. That meant that minorities and women endured less-favorable options when it came to housing, loans, and jobs. The Fair Housing Act, enacted in 1968, and the Equal Credit Opportunity Act, enacted in 1974, made it illegal to withhold offers of housing or credit, or to differentiate those offers, based on characteristics such as race, ethnicity, or sex.

These laws, along with the fact that many ads are never actually vetted by human eyes, but rather run through an algorithm before posting, make the culpability of Facebook and other social-media platforms hard to determine, in a legal sense. “The question of when, if ever, Facebook as the platform that carries those advertisements becomes legally complicit is complex,” says Rieke.

When it comes to assessing culpability in the realm of online discrimination, the Communications Decency Act is often used to determine whether or not internet platforms are at fault for illegal content that appears on their sites. The law, passed in 1996, essentially says that platforms that host a ton of user-uploaded content, such as Facebook, YouTube, or Craigslist, can’t generally be held responsible for a user posting something that is discriminatory, according to Olivier Sylvain, a professor at Fordham Law School.

But posting paid advertising that violates anti-discrimination laws is different, Sylvain says: “They are on the hook when they contribute one way or another in their design and the way in which the information is elicited.” One example that helps to illustrate the limits of the protections offered to companies by the Communications Decency Act (CDA) involved a website called Roommates.com. The platform, a forum to help individuals find roommates, was sued for violating the Fair Housing Act by allegedly allowing for gender discrimination in housing. A court ruled that because the site’s design required users to fill in fields about gender in order to post, it couldn’t rely on the immunity offered by the CDA as a defense. Roommates.com ultimately won its lawsuit, but the platform now makes adding information about gender optional. (Roommates.com did not respond to a request for comment.)

But a lot of times the role of the platform is more subtle. Often sites don’t require advertisers to perform a discriminatory act—they just don’t successfully ensure that they can’t. And whether that makes them liable is far from settled.

One solution is for the industry to ease up on targeting. This is not as profit-unfriendly as it sounds: Searls is of the mind that increasingly specific tracking isn’t the most enduringly profitable path for advertisers anyway. “Targeting doesn't work,” he said, before adding some nuance. “I should put it this way: The more targeted an ad is, the creepier it is and the more likely people are to resist it and block it.” That creepiness factor could lead to a shift in the supply and demand dynamics of advertising, as users ramp up their use of ad-blocking software. He thinks that bad publicity about racially targeted ads is a sign of more general pushback against targeting to come.

This may well come true someday, but it seems unlikely that it will happen anytime soon. In the meantime, advertisers’ activities remain relatively unchecked. Perhaps one way to reduce discrimination is for users to be given some say. Google, for instance, has created an ad-settings page that aims to let users have some control over the profiles the company builds about them, and thus the ads that they are served. In theory, this could be a neat solution.

In practice, though, at least early iterations of the tool proved inefficient in some ways. A 2015 study from Carnegie Mellon University investigated how the tool performed, how transparent the practices of advertisers were, and whether or not the opportunity for discrimination in advertising would persist, despite users’ greater ability to control the ads they were seeing. What the researchers found was cause for concern. The study indicated a statistically significant difference in ads shown to men and women whose profiles suggested they were looking for jobs, with men being much more frequently targeted for ads offering high-paying jobs than women were.

Since 2015, Google’s Ad Settings page has gotten some additional updates, and a spokesperson for the company wrote in an email, “Advertisers can choose to target the audience they want to reach, and we have policies that guide the type of interest-based ads that are allowed. We provide transparency to users with ‘Why This Ad’ notices and Ad Settings, as well as the ability to opt out of interest-based ads.” Even with the best intentions, though, much work remains to be done when it comes to giving users more control as the antidote to bad ads.

This shifts attention back onto the sites that host advertisements. Cynthia Dwork, a computer scientist who does research at Microsoft and at Harvard University, is trying to take a systems-based approach to studying fairness in algorithms—starting with those used for placing ads.

The initial question of her work centered on how to run a fair advertising platform. That question is difficult to answer since advertisers often aren’t targeting ads based on explicitly discriminatory information, which makes nailing down intent slippery, Dwork told me. One possibility would be for social-media companies to place more restrictions on what information can be used in targeting an ad. The trouble there is that they don’t want to expressly tell advertisers (their customers) what to do, or limit their ability to target audiences based on market research, so long as they don’t appear to be engaging in unfair practices.

“Even defining fairness is complex,” Dwork said. She gave an example about choosing a set of applicants for a job interview. To make the selection of that group fair, she said, one might say that the group must reflect the demographics of the country at large. But if the company’s search process, not fully attuned to the diversity of talent, selected only weak applicants from certain minority groups, it would ensure that those applicants don’t get the position. In that instance, the fairness exists in appearance only. That’s why culturally aware systems are necessary, she says—with them, better understandings of actual, fair similarities can be deduced. She gives another example to illustrate this point: Smart minority children might be steered toward studying math, while smart white kids might be steered specifically toward finance. If an algorithm looking for promising students doesn’t know that the two groups are similar in aptitude but differ in culture, and thus in field of study, it might miss an entire group of students. A smarter algorithm would take this into consideration and view both groups of students similarly.
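Dwork’s first example can be made concrete with a few lines of code. The sketch below computes per-group selection rates, a common formalization known as demographic parity; the decision data here is invented, and the point, as Dwork argues, is that such a statistic can look fair while masking exactly the harm she describes.

```python
from collections import defaultdict

# Hypothetical interview decisions: (group, selected) pairs.
decisions = [
    ("A", True), ("A", False), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", True), ("B", False),
]

def selection_rates(decisions):
    """Selection rate per group: one common (and incomplete) fairness check."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in decisions:
        totals[group] += 1
        selected[group] += chosen
    return {g: selected[g] / totals[g] for g in totals}

print(selection_rates(decisions))  # {'A': 0.5, 'B': 0.5}: equal rates "look" fair...
# ...but say nothing about *which* members were chosen. If only weak
# applicants from group B are selected, the parity above masks the harm,
# which is Dwork's case for similarity-based, culturally aware metrics.
```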

“Without a mathematics to capture values of interest to society, such as fairness, we quite literally do not know what we are building,” she told me. Dwork says that’s why she’s worried about getting it right, but there’s also a need to move quickly. “I’m concerned that the theory will be too late to influence practice, and that ‘values’ will too often be viewed as ‘getting in the way’ of science or profit,” she said.

It is hard to imagine social-media companies, which derive so much of their revenues from highly targeted advertising, doing anything that gives their customers less information to act on. Indeed, Rieke doesn’t think that the coming years will involve collecting, or selling, less data. “I don't see sites making less use of their users' data in the future for marketing purposes,” he says. That means the work of researchers such as Dwork and those at companies like Facebook will become all the more important in shaping and implementing policies that can create a more equitable internet, even as they create a more profitable one, too.

This is about more than just advertising. In 2016, the rental platform Airbnb faced accusations that hosts on its site were discriminating by refusing reservations for black users. To address this, the company has said it will put new antidiscrimination clauses in place, change booking policies, and punish hosts who improperly reject potential guests. Ride-hailing companies have faced similar accusations of discrimination by those using their platforms. On the whole, it seems that many technology-based companies have failed to consider the diversity of users when designing and building their platforms. In order to keep growing and retaining the many different people who use their sites, they’ll have to come up with a solution—quickly.

Dado Ruvic / Reuters

A Visual Search Engine for the Entire Planet


At this moment in history, there are more satellites photographing Earth from orbit than just about anyone knows what to do with. Planet, Inc., has more than 150 orbiting cameras, each the size of a shoebox. DigitalGlobe has five dump-truck-sized sensors. And more startups are planning to launch their own.

What should we do with all that imagery? How can we search it and process it? Descartes Labs, a startup that uses machine learning to identify crop health and other economic indicators in satellite imagery, has created a tool to better index and surf through it. They call it Geovisual Search.

Geovisual Search allows users to find similar-looking objects in aerial maps of China, the United States, and the world. It’s free and available online right now. Click on a visible feature—like an oil tank, an empty swimming pool, or a stack of shipping containers—and Geovisual Search will find other objects like it on the map.
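Descartes Labs hasn’t published the internals of Geovisual Search here, but the click-to-find-similar interaction generally reduces to nearest-neighbor search over image embeddings. Below is a minimal, hypothetical sketch: the tile IDs and random vectors stand in for real neural-network features of map tiles.

```python
import numpy as np

# Stand-ins for real data: each map tile gets a feature vector.
# (In a production system, a neural network would produce these.)
rng = np.random.default_rng(0)
tile_ids = [f"tile_{i}" for i in range(10_000)]
embeddings = rng.normal(size=(10_000, 128))

# Normalize so cosine similarity becomes a plain dot product.
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def most_similar(query_index: int, k: int = 5) -> list:
    """Return the k tiles whose embeddings are closest to the query tile."""
    scores = embeddings @ embeddings[query_index]
    scores[query_index] = -np.inf  # exclude the query itself
    top = np.argsort(scores)[-k:][::-1]
    return [tile_ids[i] for i in top]

# Clicking an oil tank amounts to looking up its tile and ranking the rest.
print(most_similar(42))
```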

Here’s a search, for instance, for solar farm-looking features in China:

(Courtesy of Descartes Labs)

“Imagine these big data sets coming along from Planet. Suddenly you’re getting daily pictures of the globe. You kind of want to count these things, every single day, and watch how they change through time,” says Mark Johnson, the CEO of Descartes Labs.  

“The neural nets that we trained here are the beginning of counting oil tanks, or buildings, or windmills. Imagine we wanted to look at sustainable energy infrastructure—solar farms, solar panels on roof—you could start to think about counting their growth through time. You start to get really interesting data streams,” he told me.

It’s a legitimately cool way to search satellite imagery, and it’s great to be able to surf through the terrain of China and the United States as a whole. It reminded me of Terrapattern, an art project created by artists and geographers at Carnegie Mellon University last summer. Terrapattern had a near-identical interface and near-identical capabilities to Descartes’ Geovisual Search, but it only covered certain urban areas in the U.S., including Pittsburgh, New York, and the Bay Area.

The Descartes team tips its hat to Terrapattern in its announcement blog post, calling the earlier project a “ground-breaking demonstration of visual search over satellite imagery.”

“We loved it. The demo aligned with many ideas we had been kicking around at Descartes Labs, and it was great to see somebody just go out and do it,” the blog post says.

Despite this admiration, Descartes ran its implementation past the Terrapattern team only 12 hours before its release. “Their approach is virtually the same as what we did a year ago, with some tweaks to deal with scale,” said Golan Levin, who led the Carnegie Mellon team, in an email.

“It’s quite typical for new-media artworks to, er, ‘inspire’ commercial projects—this is unfortunately quite common,” he said. “Since our team is artists and students and academics, the chance or option to have collaborated would have been much more fun.”

In fact, Levin has written about how Google Street View, the Sony EyeToy, and a Nike product called the “Chalkbot” were all inspired by new-media artistic experiments. He added that Terrapattern is now working with a major satellite-imagery provider and a design firm to create a similarly scaled-up version of its product.

Perhaps this method of searching a geographic environment will eventually have the same renown as Google Street View. If the sheer amount of new daily satellite imagery continues to expand, it seems like a possible fate. For its part, Descartes plans to keep expanding the use of machine-learning algorithms on satellite imagery. It will also continue producing its corn-health forecasts.

Andrew Winning / Reuters

Social Media’s Silent Filter


A few months ago, in the wake of the fake-news debacle surrounding the election, Facebook announced partnerships with four independent fact-checking organizations to stomp out the spread of misinformation on its site. If investigators from at least two of these organizations—Snopes, PolitiFact, ABC News, and FactCheck.org, all members of the Poynter International Fact Checking Network—flag an article as bogus, that article now shows up in people’s News Feeds with a banner marking it as disputed.
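As described, the disputed banner follows a simple threshold rule. Here is a toy sketch of that rule; the organization names are real, but the function and data structure are invented for illustration.

```python
# The four partner organizations named above.
FACT_CHECKERS = {"Snopes", "PolitiFact", "ABC News", "FactCheck.org"}

def is_disputed(flags: set) -> bool:
    """True once at least two recognized fact-checkers flag the article."""
    return len(flags & FACT_CHECKERS) >= 2

print(is_disputed({"Snopes"}))                # False: one flag isn't enough
print(is_disputed({"Snopes", "PolitiFact"}))  # True: the banner appears
```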

Facebook has said its employees have a hand in this process by separating personal posts from links that present themselves as news, but maintains that they play no role in judging the actual content of the flagged articles themselves. “We believe in giving people a voice and that we cannot become arbiters of truth ourselves,” wrote Adam Mosseri, the vice president of Facebook’s News Feed team, in introducing the change.

The announcement was an early step in Facebook’s ongoing revision of how it defines its role as a platform on which people consume news. Through the tumult of the election and under heavy public pressure, the company has gone from firmly denying any status as a media company to now acknowledging (albeit vaguely) some degree of responsibility for the information people take in. “The things that are happening in our world now are all about the social world not being what people need,” Mark Zuckerberg told Recode after he published a sweeping, 6,000-word manifesto on the company’s future last month. “I felt like I had to address that.”   

Missing from this evolving self-portrayal, however, has been significant mention of a distinct kind of editorial practice that Facebook and most other prominent social-media platforms are involved in. Thus far, much of the post-election discussion of social-media companies has focused on algorithms and automated mechanisms that are often assumed to undergird most content-dissemination processes online. But algorithms are not the whole story. In fact, there is a profound human aspect to this work. I call it commercial content moderation, or CCM.

* * *

CCM is the large-scale screening by humans of content uploaded to social-media sites—Facebook, Instagram, Twitter, YouTube, and others. As a researcher, I have studied this process in detail: In a matter of seconds, following pre-determined company policy, CCM workers make decisions about the appropriateness of images, video, or postings that appear on a given site—material already posted and live on the site, then flagged as inappropriate in some way by members of the user community. CCM workers engage in this vetting over and over again, sometimes thousands of times a day.

While some low-level tasks can be automated (imperfectly) by processes such as matching against known databases of unwanted content, facial recognition, and “skin filters,” which screen photos or videos for flesh tones and then flag them as pornography, much content (particularly user-generated video) is too complex for the field of “computer vision”—the ability for machines to recognize and identify images. Such sense-making processes are better left to the high-powered computer of the human mind, and the processes are less expensive to platforms when undertaken by humans, although not without other costs.
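To see why such low-level automation is both cheap and crude, consider a toy version of a “skin filter.” The RGB thresholds below are a well-known rule of thumb, not any platform’s actual rule; the gap between this sort of heuristic and real judgment is exactly where human moderators come in.

```python
import numpy as np

def skin_fraction(image: np.ndarray) -> float:
    """Estimate the fraction of pixels in an RGB image that fall in a
    crude flesh-tone range (R > 95, G > 40, B > 20, with R dominant)."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)
    return float(skin.mean())

def flag_for_human_review(image: np.ndarray, threshold: float = 0.4) -> bool:
    """Route an image to a moderator if it is mostly flesh tones. The
    threshold is arbitrary; deciding what a flag actually means is the
    judgment call the automated step cannot make."""
    return skin_fraction(image) > threshold

# A random stand-in "image" for demonstration.
fake_image = np.random.randint(0, 256, size=(64, 64, 3))
print(flag_for_human_review(fake_image))
```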

Increasingly, CCM work is done globally, in places such as the Philippines, India, and elsewhere, although CCM workers can also be found in call centers in rural Iowa, online on Amazon Mechanical Turk, or at the headquarters of major Silicon Valley firms—typically without full-time employment and all it entails, such as access to quality health care. CCM workers are almost always contractors, in many cases limited in term due to their high rate of burnout.

Over the past six years, I have spoken with and interviewed dozens of CCM workers who have labored in locales as diverse as Mountain View, Scotland, and the Philippines. Despite cultural, ethnic, and linguistic challenges, they share similarities in work-life and working parameters. They labor under the cloak of NDAs, or non-disclosure agreements, which bar them from speaking about their work to friends, family, the press, or academics, despite often needing to: As a precondition of their work, they are exposed to heinous examples of abuse, violence, and material that may sicken others, and such images are difficult for most to ingest and digest. One example of this difficulty is a recent, first-of-its-kind lawsuit filed by two Microsoft CCM workers who are now on permanent disability, they claim, due to their exposure to disturbing content as a central part of their work.

CCM workers are often insightful about the key role their work plays in protecting social-media platforms from risks that run the gamut from bad PR to legal liability. The workers take pride in their efforts to help law enforcement in cases of child abuse. They have intervened when people have posted suicidal ideation or threats, saving lives in some cases—and doing it all anonymously.

CCM workers tend to be acutely aware, too, of the outsized role social-media platforms have in determining public sentiment. One CCM worker, a contractor for a global social-media firm headquartered in Silicon Valley, described his discomfort in the way he had to deal with war-zone videos. He brought up the case of Syria, which he and others cited to me as an example of the worst material they had to see on the job in terms of its level of violence and horror. He explained that much of the material being uploaded to his company’s video-distribution platform seemed to come from civilians’ cell phones; the uploaders, it was supposed, wanted to disseminate footage of the civil war’s nightmare for advocacy purposes.

This CCM worker, whose identity I am ethically bound to protect as a condition of my research, pointed out that such content violated the company’s own community codes of conduct by showing egregious violence, violence against children, blood and gore, and so on. Yet a decision came down from the group above him—a policy-setting team made up of the firm’s full-timers—to allow the videos to stand. It was important to show the world, they decided, what was going on with Syria and to raise awareness about the situation there.

Meanwhile, the employee explained to me, other videos flooded the platform on a daily basis from other parts of the world where people were engaged in bloody battle. Juárez was one pointed example he gave me. Although the motives of the uploaders in those cases were not always clear, no leeway was given for videos that showcased violence toward civilians—beheadings, hangings, and other murders. Whether or not the policy group realized it, the worker told me, its decisions were in line with U.S. foreign policy: to support various factions in Syria, and to disavow any connection to or responsibility for the drug wars of Northern Mexico. These complex, politically charged decisions to keep or remove content happened without the public’s knowledge. Some videos appeared on the platform as if they were always supposed to be there; others disappeared without a trace.

* * *

Is an editorial practice by any other name, such as CCM, one that the public ought to know more about? Social-media firms treat their CCM practices as business or trade secrets and typically refuse to divulge their internal mechanisms for decision-making or to provide access to the workers who undertake CCM. There is no public editorial board to speak of, no letters to the editor published to take issue with practices. For this story, I contacted four social-media platforms for comment: Facebook, Instagram, Snapchat, and YouTube. Instagram declined to speak on the record; the others haven’t responded.

Meanwhile, platforms have frequently released new media-generation tools without a demonstrated understanding of their potential social impact. Such has been the case of Facebook Live and Periscope, both of which were introduced as fun ways to livestream aspects of one’s life to others, but have served key roles in serious and violent situations; the cases of Philando Castile and Keith Lamont Scott are two recent examples. At the behest of law enforcement, Facebook turned off the Facebook Live feed of Korryn Gaines as she was using the platform to document an attempted arrest. After her feed went dark, she was shot and killed and her son was shot and wounded by police.

Such cases starkly illustrate the incentive for social-media companies like Facebook to shy away from the “media company” label. Media companies are held to public criticism and account when they violate ethics and norms. It’s a great position for a firm to be able to reap profits from circulating information, and yet take no responsibility for its veracity or consumption.

Social-media companies often appear eager to completely and cost-effectively mechanize CCM processes, turning to computer vision and AI or machine learning to eliminate the human element in the production chain. While greater reliance upon algorithmic decision-making is typically touted as a leap forward, one that could potentially streamline CCM’s decision-making, it would also eliminate the human reflection that leads to pushback, questioning, and dissent. Machines do not need to sign NDAs, and they are not able to violate them in order to talk to academics, the press, or anyone else.

What key information and critiques are now missing from view, and to the benefit of whom? In the absence of openness about these firms’ internal policies and practices, the consequences for democracy are unclear. CCM is a factor in the current environment of fake-news proliferation and its harmful results, particularly when social-media platforms are relied upon as frontline, credible information sources. The public debate around fake news is a good start to a critical conversation, but the absence of a full understanding of social-media firms’ agendas and processes, including CCM, makes the conversation incomplete.

Mark Zuckerberg at the Mobile World Congress in Barcelona, Spain. (Albert Gea / Reuters)

Lady Liberty’s Cloak of Darkness


The metaphors have become inescapable, it seems.

Consider the temporary dimming of the Statue of Liberty Tuesday night, which occurred on the eve of a widely publicized women’s strike, shortly after a new executive order curbing immigration to the United States, and at a time of deep uncertainty and partisanship in the country.

It had to mean something, right? In a word: Nope.

A portion of the lighting system that illuminates the statue had experienced a “temporary, unplanned outage,” Jerry Willis, a spokesman for the monument, told me in a statement emailed shortly before midnight. The outage, he explained, was “most likely” due to renovation work, including a project involving a new emergency generator, that began after Hurricane Sandy in 2012.

But the possibility of deeper meaning was too delicious for some to resist—especially because the official explanation came from an employee of the National Park Service, which has become its own cultural symbol for resistance to the Trump administration. “Somebody’s trying to tell us something,” one person said in response to the NPS statement, which I posted on Twitter. Many others sent winky-face gifs.

EarthCam footage shows the partial outage at the Statue of Liberty Tuesday night. (EarthCam)

The Statue of Liberty has a storied history of lighting snafus. When it was first unveiled in 1886, the lights didn’t work at all. Then, for a period of weeks shortly thereafter, a lighting-design error caused Lady Liberty to appear headless, illuminated only from the shoulders down. (Her torch could be seen, but it appeared to be floating in midair.)

Technological failures like these are often mined for metaphor. That’s because they’re an easy target. When the Statue of Liberty was in headless mode, in the 1880s, it was at the height of a fierce battle over which government agency should pay for the lighting scheme. Go figure.

Similarly, the Titanic wasn’t just a ship that sank; it was seen as a catastrophic failure of hubris. The lesson was this: Put too much faith in technology, and you will be let down. Not only was the Titanic not unsinkable, as its creators had claimed; it sank on its maiden voyage. It was a failure as spectacular as it was tragic.

One of the reasons people were so obsessed with Y2K in late 1999 was because it represented more than an isolated technological problem. It was also an expression of uncertainty about the dawn of a new millennium, at a time when computers and the internet were beginning to dramatically reshape society.

“This can be conceptualized as a special kind of ripple effect in which a strong metaphor for technological failure enters the cultural lexicon and becomes a defining feature for how technology is perceived,” wrote the authors of The Social Amplification of Risk in 2003. “Thus, technological failures and crashes may become collectively viewed through a single, over-arching concept that provides a convenient explanatory mechanism for why such failures occur.”

A military salute for the president's arrival at Liberty Island during the inauguration of the Statue of Liberty, then more commonly referred to as “the Bartholdi Statue,” in 1886. (LoC)

How a person perceives technological failure is also deeply tied to that person’s level of trust in institutions like government. This is also why the Statue of Liberty lighting debacle is particularly fertile for metaphor: because it involved the technological failure of a deeply symbolic national icon—an icon which is managed by an agency that has itself become an emblem of the fight against the symbolic extinguishing of liberty’s light.

Here’s how Emma Lazarus describes what that light represents in her 1883 poem, “The New Colossus,” which was engraved in bronze and affixed to the base of the statue in 1903:

Not like the brazen giant of Greek fame,
With conquering limbs astride from land to land;
Here at our sea-washed, sunset gates shall stand
A mighty woman with a torch, whose flame
Is the imprisoned lightning, and her name
Mother of Exiles. From her beacon-hand
Glows world-wide welcome; her mild eyes command
The air-bridged harbor that twin cities frame.
"Keep ancient lands, your storied pomp!" cries she
With silent lips. "Give me your tired, your poor,
Your huddled masses yearning to breathe free,
The wretched refuse of your teeming shore.
Send these, the homeless, tempest-tost to me,
I lift my lamp beside the golden door!"

It wasn’t just Lazarus’s poem that made the statue an icon of immigration, it was the actual experience of the 12 million immigrants who entered the United States by way of Ellis Island, many of whom describe laying eyes on the statue as a defining moment in their lives. On the decks of boats entering New York Harbor, crowds of newcomers to the United States would dance and weep for joy. This happened even in terrible weather. Seymour Rexsite, who came to the United States from Poland when he was 8, described approaching Ellis Island in a miserable, driving rainstorm. “Everybody was on deck, no matter, they didn't mind the rain at all,” Rexsite told the Associated Press in 1986 at the time of the statue’s centennial. “Just to cheer that they came, they came to America.”

The sight of a “tremendous lady in the mist,” as the historian Virginia Yans-McLaughlin put it to the AP that year, had almost a “fantasy element to it.” So, you can see why it might be tempting to interpret the darkening of the statue, hours after President Donald Trump signed a new immigration ban, as a political act.

Metaphors are usually an attempt to clarify or understand the world—often either using technology to explain greater forces in the world, or vice versa—but they don’t always prove useful. The writer William Gass, in a 1976 conversation with The Paris Review, compared his inclination to think, feel, and perceive the world metaphorically to an overindulgence in junk food—one that, when used incorrectly, could lead to “paradoxes and confusions of every kind.”

Even when a metaphor is effective, however—perhaps especially then—it doesn’t always overlap cleanly with reality. It’s easy to find emotional resonance when you go looking for it, but it’s still crucial to differentiate between the most powerful explanation for something and the most likely one. There’s a relatively short distance between total subjectivism and conspiracy theories or propaganda.

Sometimes, in other words, the lights just go out.

A stereographic image of the torch and part of the arm of the Statue of Liberty, on display at the 1876 Centennial Exhibition in Philadelphia, 10 years before its completion. (Library of Congress)

A Behavioral Economist Tries to Fix Email


Can anything be done to make people happier with their jobs? What can prevent people from overeating? Will people like beer with balsamic vinegar in it just because they’ve been told it contains a “secret ingredient”?

These are some of the questions that Dan Ariely, a behavioral economist at Duke University, has studied in his research over the years, which spans in scope from the weighty to the quotidian. He has attempted to puzzle out the intricacies of human motivation and decision making, and two threads that come up often in his research are why people so often make choices that leave them worse off, and how tweaking small things might ward off needless, irrational suffering.

So when he realized that reading and sending emails was consuming an ever-expanding portion of his time—Ariely regularly receives hundreds of emails a day, excluding spam—he wondered if there were something he and others could be doing differently in managing their online correspondence. What small behavioral tricks could he deploy to make the whole ordeal less stressful?

“Everybody recognizes how much we are destroying productivity with the current [way we] email,” Ariely told me. The consulting firm McKinsey estimates that knowledge workers spend a little over a quarter of their workdays managing email, and Ariely was drawn to thinking critically about something that so many people spend so much of their days on. “I care about the ways in which we mismanage our health, mismanage our money, and mismanage our time,” he says. “From all of those, time is probably the easiest place, not to optimize perfectly, but to make some substantial improvements.”

One feature of email that seemed to leave room for improvement is the fact that every new message, whether vitally urgent or completely worthless, warrants an immediate ping. This, despite the fact that most people have a hard time ignoring messages as they arrive. In a 2002 paper, for instance, researchers found that for 70 percent of messages, it took workers an average of only six seconds to react to a notification indicating an email was unread. (Perhaps, 15 years later, that average time has even gotten shorter.) And these interruptions come at a steep cost (in terms of both productivity and stress), as it takes a while for people to return to the task they were originally doing when they were made aware of the message’s arrival.

“The first thing we should question is this idea that all emails are created equal,” Ariely says. “Should each email be able to interrupt people? Is the email from someone’s boss as important as the weekly industry newsletter he’s signed up for?” To get a sense of the answer to these questions, he posted a survey on his blog asking respondents to review the last 40 emails they’d received, saying for each one how long they could’ve waited to read it: Right away? Two hours later? A week later? Never?

What he found was that roughly a third of messages did not clear the bar of needing to be seen at all, and only about a tenth of emails were considered important enough to need to be read within five minutes of receipt.


[Chart: How Soon Does Any Given Email Need to Be Seen? When asked to sort 40 recent emails by time-sensitivity, Ariely’s respondents said that the majority were not urgent. Data: Dan Ariely]

In this data, Ariely saw an inefficiency to target. Even though every email, by default, triggers a notification, it’s unlikely that any given email deserves one. So, with the help of some software developers he’d worked with in the past, he tried to come up with a way to tweak the default settings that are baked into email. The result of that work is Filtr, an app that lets users make simple rules, based on the sender, about when an email will show up in their inbox. For example, users can set it up so that emails from family members show up immediately, but non-time-sensitive newsletters show up all at once, at the end of the day.
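Filtr’s actual implementation isn’t public; the sketch below simply illustrates the sender-based rule idea the paragraph describes, with invented addresses, rule names, and functions.

```python
# Hypothetical sender-based delivery rules in the spirit of Filtr.
RULES = {
    "family@example.com":     "immediate",
    "boss@example.com":       "immediate",
    "newsletter@example.com": "end_of_day",
}
DEFAULT_RULE = "end_of_day"
held_messages = []

def deliver(message: str):
    print("Delivered:", message)

def on_message(sender: str, message: str):
    """Notify right away only if the sender's rule says so; otherwise hold."""
    if RULES.get(sender, DEFAULT_RULE) == "immediate":
        deliver(message)
    else:
        held_messages.append(message)

def flush_end_of_day():
    """Release everything held, once a day, in a single batch."""
    for message in held_messages:
        deliver(message)
    held_messages.clear()

on_message("family@example.com", "Dinner Sunday?")     # pings immediately
on_message("newsletter@example.com", "Weekly digest")  # held until...
flush_end_of_day()                                     # ...the evening batch
```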

Ariely hasn’t done any widespread user-satisfaction surveys, but he says that so far Filtr is at the very least making his own digital life less exasperating. “Being able to filter it, and say, ‘Okay, those are the small subset that are actually important to get to, and the rest can wait,’ is a huge improvement to my well-being,” he says.

It’s possible that his anecdotal relief could have wider implications for other distressed emailers. “I think Dan’s idea is really intriguing,” says Gloria Mark, a professor of informatics at University of California, Irvine. Mark, who has published several studies demonstrating the toll that email-related stress takes on workers, would want to study Filtr’s effectiveness before arriving at any conclusions. But she finds the idea promising because it restores a degree of control, the loss of which she says is one of the main reasons people get stressed about email. “I think if people felt they had more control over their information, over their time, they would be a lot less stressed,” she says.

Ariely’s curiosity about email has not just led him to help create Filtr, but also Shortwhale, a web app that makes emails arrive in a form that’s easier for recipients to process. Sending a message to someone who uses Shortwhale means being directed to a webpage with a few questions such as “How urgent is this?” and “Do you need a reply?” In the form where senders are to enter their messages, they’re prompted to “Go straight to the point” and are encouraged to “Make this a multiple choice question” if possible.
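Shortwhale itself is a web form rather than a library, but the extra context it collects amounts to a structured message. A hypothetical sketch of that structure (all field names invented):

```python
from dataclasses import dataclass
from typing import List, Optional

# Invented schema for the context Shortwhale's form collects.
@dataclass
class StructuredEmail:
    sender: str
    body: str
    urgency: str                         # e.g., "today", "this week", "whenever"
    needs_reply: bool                    # "just for your information" -> False
    choices: Optional[List[str]] = None  # let the recipient simply pick one

msg = StructuredEmail(
    sender="student@example.edu",
    body="Can we move Thursday's meeting?",
    urgency="this week",
    needs_reply=True,
    choices=["Keep Thursday", "Move to Friday", "Skip this week"],
)
print(msg.urgency, msg.needs_reply)
```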

Ariely, who is a happy user of Shortwhale, says that this additional information has been a boon because he normally would receive a high volume of email but little in the way of useful context. “What I’ve learned through Shortwhale is there are lots of ... people who just want to send me things and say, ‘just for your information.’ And it’s so good to know what people have in mind.”

The system, which Ariely says has a very small user base, does have its downsides. “There’s something a little bit obnoxious about Shortwhale, in the sense that you say, ‘When you write me, please write me this way,’” he says. “That puts more burden on you as the sender... I have to say that when I started with this, the arrogance involved in it really bothered me.” And besides, for the concept of Shortwhale to be truly powerful, it would need to be integrated into the email clients that everyone uses. “Just imagine how much sense it would make if the person who sent you an email had a way to very quickly tell you that for this email, they need a response within a week, or ‘no response necessary,’” he says. From his surveys, Ariely had learned of an expectations gap between senders who didn’t need responses and recipients who thought they might.

For now, Filtr and Shortwhale both remain niche widgets that dutifully serve the needs of their creator and a small handful of high-volume emailers who, like him, have gone out of their way to adopt them. But in Filtr, Gloria Mark, the informatics professor, sees the seed of a powerful idea—“batching,” or checking email at set times, in batches—that, if fleshed out and taken up more widely, could make countless people much less stressed out about email.

There are two different types of batching, the first better known than the second. Productivity bloggers extol the virtues of this first type, which consists of checking email only at set times and handling the messages all at once, as opposed to addressing each one as it comes in. It is an appealing idea, and many are no doubt drawn to it because it appears to eliminate distractions. Whether it actually leaves people better off is hard to determine: When Mark studied this behavior, she found that batchers who received a lot of email rated themselves as more productive, but were no less stressed.

There is a second type of batching, one that Mark believes would get to the root of email-related stress. “I’ve argued for years, and I’ll explain why, that the organization should batch emails,” Mark says. What she means is that a company could set it up so that when emails are sent, their delivery is postponed until the next batch-delivery time. Mark imagines a situation where emails arrive in three waves: at the beginning of the day, after lunch, and at the end of the day. “The logic for doing that is because then everybody’s expectation is that emails only come three times a day,” she says, adding, “Everybody knows that everybody’s getting emails at a certain time.” The insight here is that what really makes email so stressful is the social expectation of a quick response—something individual batchers don’t have control over, but something an employer could.
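
A minimal sketch of that server-side scheduling, assuming waves at 9 a.m., 1 p.m., and 5 p.m. (the specific times are illustrative; Mark specifies only the beginning of the day, after lunch, and the end of the day):

```python
# Hold outgoing mail on the server and release it in three daily waves.
from datetime import datetime, time, timedelta

WAVES = [time(9, 0), time(13, 0), time(17, 0)]  # morning, after lunch, end of day

def next_wave(sent: datetime) -> datetime:
    """Return the batch time at which a message sent at `sent` is delivered."""
    for wave in WAVES:
        candidate = datetime.combine(sent.date(), wave)
        if sent <= candidate:
            return candidate
    # Sent after the last wave: hold until tomorrow morning's batch.
    return datetime.combine(sent.date() + timedelta(days=1), WAVES[0])
```

Because the delivery times are organization-wide, nobody expects an answer between waves, which is exactly the expectation shift Mark is after.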

Mark’s ideal system does allow for urgent communications, just not through email. If workers need to contact one another with time-sensitive requests, she says, they’d pick up the phone, send an instant message, or talk in person.

Her solution, though elegant, is not likely to catch on anytime soon, because it’d require rewriting the digital-communication social contract. Lately, though, Mark says that in her life, email hasn’t been the prominent stressor it once was. These days, it’s something else. “I find myself checking the news every break that I get,” she says. “It’s replaced email by a long shot.”



The Still-Misunderstood Shape of the Clitoris


The clitoris really isn’t that confusing. Or it shouldn’t be, anyway. Nonetheless, acknowledging the shape, size, or even existence of this essential body part has not always been par for the course—even in the medical profession. As a 2005 report from the American Urological Association puts it, “the anatomy of the clitoris has not been stable with time as would be expected. To a major extent its study has been dominated by social factors.”

However, heralded by some as a sexual and physiological revolution, a new 3-D printed model of the clitoris is being used to change the public’s view of female sexuality. Free to download, the life-size model was designed by the French engineer, sociologist, and independent researcher Odile Fillod and released early last year.

At 10 centimeters in length, from the tip of the glans to the end of one “crus” (or leg), the model clitoris is bigger than expected. That’s tactical: It was created to dispel misinformation. Many dictionaries and even medical texts describe the clitoris as “pea-sized.”

* * *

Historical accounts of the clitoris are plagued by disparagement or ignorance. Though Albertus Magnus, a renowned scholar of the Middle Ages, considered the clitoris homologous to the penis, not all who succeeded him agreed. In the 16th century, Vesalius argued that the clitoris did not appear in “healthy women.” The Malleus Maleficarum, a 1486 guide for finding witches, suggested the clitoris was the “devil’s teat”; if the tissue were found on a woman, it would prove her status as a witch. And in the 1800s, women seen as suffering from “hysteria” were sometimes subjected to clitoridectomies.

It wasn’t until 1981 that the Federation of Feminist Women's Health Clinics created anatomically correct images of the clitoris. Published in A New View of a Woman’s Body, the images were part of a wider attempt to provide thorough, accurate information to women to support their health. Decades later, in 2009, the first 3-D sonography of the stimulated clitoris was completed by French researchers.

Ignorance persists today. As the University of Western Sydney clinician and physiotherapy researcher Jane Chalmers explains, the subject of the clitoris is still avoided or ignored. “Several major medical textbooks omit the clitoris, or label it on diagrams but have no description of it as an organ,” she says. “This is in great contrast to the penis that is always covered in-depth in these texts.”

As a researcher who focuses on the vulva and pelvis as well, Chalmers says she is often harassed online. “I frequently face questions of ‘Why would you want to study that?’ and snide comments along the lines of, ‘She must be a lesbian.’”

The problem, many suggest, starts early. A recent research paper examined 55 qualitative studies in more than 10 countries. Its authors found that young people tend to have negative views of the sex education they received in school. The researchers noted that many students reported that very little was ever said about sexual pleasure, female pleasure in particular.

In France, where the model clitoris originates, sex education often teaches outdated attitudes, according to Fillod. Official guidelines for sex ed are “terribly sexist, heteronormative, even homophobic,” she says. In particular, social norms are often inaccurately linked to biological information. For example, Fillod explains that children are taught “that boys are more focused on genital sexuality, whereas girls care more about love and the quality of relationships, in part because of their ‘specific anatomical-physiological characteristics.’” She is not alone in her concern about this curriculum. In 2015 the Haut Conseil à l’Egalité, a government body which monitors gender equality, reported that school-based sex education in France was riddled with sexism.

Determined to do something about the problem, Fillod partnered with a Toulouse-based documentary-film production company to prepare a series of videos with alternative materials. In the process, Fillod realized that a life-size 3-D model of the clitoris would be a useful visual aid. “In French biology textbooks,” she explains, “the clitoris is never correctly pictured in the drawings showing the female genital apparatus, and even quite often not pictured at all.”

An engineer at the École Centrale Paris who has been independently researching sex and gender issues in biomedical science since 2013, Fillod was prepared for the task. “Providing a free and open-access model that could be 3-D printed by anyone appeared as an ideal solution,” she says. “It would not be just for me and this video, but for anyone wanting to use such a 3-D model for educative purposes.”

* * *

Before she could create a model, Fillod had to understand what was known about the clitoris. A review of the available scientific literature gave her a well-defined shape and a realistic average size for the clitoris and its bulbs. With the dimensions established, Fillod collaborated with the fab lab at the Cité des Sciences et de l’Industrie, a science museum in Paris, which helped her transform the data into a stylized, printable model.

The model can be downloaded and printed by anyone with access to a 3-D printer. It is “anatomically correct, life-size, and in 3-D, which is far superior to the drawings that are generally available,” Fillod says. That a life-size clitoris is 10 centimeters long may be the first shock; the wishbone-like shape of the organ is certainly the second.

Fillod says she hopes the model will help spread better knowledge of women’s genital anatomy. It might be used for sex ed in schools, for one. But everyone else who might encounter a clitoris has something to learn from the model, too. Chalmers explains that understanding this “neat little organ” is important because medical professionals are now beginning to appreciate that it has a role in immune health. Being able to identify and understand the clitoris means knowing when something is wrong—both for women and for their doctors.

Considering that clitoral pain (along with infections, inflammation, and disease) is quite common, Chalmers contends that a better understanding of the clitoris is essential. She adds that because the clitoris is closely tied to female sexual pleasure, the lack of knowledge about it amplifies inequality for women.

That may be about to change. According to Fillod, sex therapists, sex educators, school nurses, biology teachers, and sex-information institutions have all shown interest in using the model. And some French schools have already adopted it, although Fillod expects sex-educational use will not be widespread unless the Minister for Education supports the idea. Nonetheless, the model is sure to start conversations, even just among those who read about it. The size and shape present an anatomical reality that is more difficult to ignore than a small drawing or written description. As the American Urological Association concluded in 2005, “It is impossible to convey clitoral anatomy in a single diagram.”

* * *

Further afield from France, global media coverage of the model reveals an interest in knowing more about the clitoris. But the same cultural preconceptions that inspired the model have attached themselves to its reception. “Some of the journalists who interviewed me and published a paper about it clearly wanted to send some kind of message,” Fillod says, mentioning myriad examples of different motives attached to what should be a scientific model. “My concerns about this misinformation are mainly about the creation of new urban legends which reinforce gender stereotypes and the sex dichotomy.”

Though misconceptions will inevitably continue, Fillod’s model is not alone in battling the lack of “cultural cliteracy,” as it has become known. Cliteracy, a project led by the artist Sophia Wallace, explores the history of the clitoris and the myths around it, and introduces the artists who have challenged misconceptions of the past.

As for the model, there is more to be done. The 3-D printing lab where it was generated hopes that a future model might depict the pelvic bones, the clitoris, and the vagina. A more complex model, they believe, would reveal more about how the clitoris fits into the whole of the female genitalia, whether for stimulation or general health. They hope that future models could be printed in “a flexible material to approximate the anatomical reality.”

But it’s possible that further realism won’t help matters. While a model clitoris can clarify the form of the organ in the abstract, it might smack of scientism to believe that an accurate model of a body part would magically clarify how it fits into the lived experience of the human beings who possess it. A more complex model could be useful, in other words, but it is unlikely to help the clitoris escape its troubled history. As Fillod’s model has shown, anatomically correct information can help those willing to learn. But for those predisposed to misconception, an anatomically correct, scientifically rendered 3-D model might reinforce that bias rather than rectify it.


This article appears courtesy of Object Lessons.

A life-size, 3-D printed model of the clitoris developed by Odile Fillod (Marie Docher)

Your Hot Hands Can Give Away Your Smartphone PIN


If you were protecting your smartphone passcode from someone lurking over your shoulder, or from unseen security cameras, you might cover the screen as you tap in the PIN’s four or six digits. But once you’ve unlocked the phone, perhaps you’d let down your guard, and leave the screen in full view—especially if it’s off.

That would be unwise, according to researchers at two German universities. At an upcoming conference on human-computer interactions, they will present a new study that explains how someone armed with a thermal-imaging camera would have little trouble extracting your passcode from the heat signature left on your smartphone’s screen. It even works 30 seconds after you last touched it.

In a short video, two of the researchers demonstrate how easy the attack is. A guy enters a PIN to unlock his phone, then turns off the screen and puts it down on a table. He gets up to grab a cup of coffee, as an attacker quietly strides in, points a small handheld thermal camera at the phone for a moment, and walks back out.

What happens next is a little like a higher-tech version of a smudge attack, in which a snooper examines the oily residue left on a screen by a user’s finger to reconstruct the phone’s login passcode or pattern. In a 2010 paper that introduced that method, researchers from the University of Pennsylvania called smudges a form of “information leakage” that can be collected and analyzed with nothing more than a regular camera and photo-editing software.

The smudge attack was surprisingly good at decoding Android passcode patterns, those shapes that users trace on their lock screens to get into their phones. The streaking in the residue left behind after an unlock can even show the direction the user dragged his or her finger, making imitating the pattern trivial. But for strings of numbers like an iPhone PIN, the smudge attack isn’t quite as useful: It can reveal which numbers are included in the PIN, but not the order in which they were tapped. That cuts down drastically on the set of possible passcodes, but finding the real one still takes some guesswork.

This is where the thermal attack excels. Because heat decays at a known rate, a person typing in a PIN with four different digits would leave behind four heat traces of slightly different temperatures: The first digit entered would be coolest, and the last digit would be warmest. If a thermal image contains only two or three heat traces, the attacker can infer that the PIN uses at least one digit more than once. The phone’s exact PIN isn’t immediately clear in these cases, but it can be guessed in three or fewer tries. And if there’s only one heat trace, the attacker knows the PIN is just one digit repeated four times. (In 2011, researchers at the University of California, San Diego used a similar approach to guess ATM PINs.)
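
To make the ordering logic concrete, here is a minimal sketch, not the researchers’ code, of how warmth ordering recovers a PIN once the per-digit temperatures have been measured; the paper’s handling of duplicate digits is more involved than what’s shown.

```python
def pin_from_traces(traces):
    """Recover entry order from heat traces.

    `traces` maps each digit visible in the thermal image to its measured
    temperature (degrees F). Because heat decays at a known rate, the
    coolest trace was pressed first and the warmest pressed last.
    """
    ordered = sorted(traces, key=traces.get)  # coolest first
    if len(ordered) == 4:
        return "".join(ordered)  # four distinct traces: the order is unique
    # Fewer traces mean at least one repeated digit: the digits are known,
    # but a handful of orderings remain to be guessed.
    return ordered

# Four distinct traces yield the PIN directly:
print(pin_from_traces({"3": 75.2, "7": 76.8, "1": 74.1, "9": 78.0}))  # "1379"
```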

Deriving a PIN from a thermal image takes more than just eyeballing it. The researchers behind the technique, who are affiliated with the University of Stuttgart and Ludwig Maximilian University of Munich, developed a six-step process for extracting PINs from images.

First, a thermal camera set to capture temperatures between about 66 and 90 degrees Fahrenheit snaps a photo of the target smartphone screen. Then, software converts the color image to grayscale and applies a filter to reduce noise. Next, a two-step operation removes the background entirely, leaving only the heat traces. The main features of the heat traces are then detected and extracted: For a PIN, this will result in one to four circles. From there, the final step analyzes the relative heat of each trace to determine the most likely order of the passcode’s digits.
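
The paper’s parameters aren’t reproduced here, but as a rough illustration, the pipeline might look something like this in Python with OpenCV; every threshold and radius below is a guess, not the researchers’ value:

```python
# An illustrative sketch of the six-step PIN-trace extraction pipeline.
import cv2
import numpy as np

def extract_press_order(thermal_image_path):
    img = cv2.imread(thermal_image_path)              # step 1: thermal photo
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # step 2: grayscale
    gray = cv2.GaussianBlur(gray, (5, 5), 0)          # step 3: noise filter
    # Step 4 (two parts): remove the background, keeping only warm regions.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    # Step 5: detect the roughly circular traces left by fingertip presses.
    circles = cv2.HoughCircles(cv2.bitwise_and(gray, mask), cv2.HOUGH_GRADIENT,
                               dp=1, minDist=40, param1=50, param2=20,
                               minRadius=8, maxRadius=40)
    if circles is None:
        return []
    # Step 6: rank traces by mean brightness (warmth); coolest was pressed first.
    def warmth(c):
        x, y, r = (int(v) for v in c)
        return float(gray[max(y - r, 0):y + r, max(x - r, 0):x + r].mean())
    return sorted(circles[0].tolist(), key=warmth)
```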

The thermal attack also works on Android patterns: It can trace the finger’s path across the screen, and figure out the pattern’s direction based on the relative temperature of the beginning and end.

The technique is shockingly successful. If the thermal image is taken within 15 seconds of a PIN being entered, it’s accurate nearly 90 percent of the time. At 30 seconds, it’s about 80 percent accurate. But at 45 seconds or more, the accuracy drops to 35 percent and below.

For swiped patterns, thermal attacks can guess the right shape 100 percent of the time even 30 seconds after it’s entered—but only if the shape has no overlaps. Introducing one overlap brings the accuracy at 30 seconds down to about 17 percent, and two overlaps reduce it all the way to zero. Overlapping patterns had the opposite effect of PINs with duplicate numbers, the researchers found: Duplicate numbers made the attacker’s job easier, while complex, overlapping patterns made it nearly impossible.


For Android users, then, choosing a pattern that crosses over itself would be the most obvious way to defend against thermal attacks. More generally, any sort of tapping and swiping around that happens after unlocking a phone is enough to foil a thermal camera, since doing so adds spots of heat to the screen that can confuse the attacker.

But Yomna Abdelrahman, a Ph.D. candidate at the University of Stuttgart and one of the primary authors of the research paper, said a more complex system might be able to tell different actions apart, based only on their heat signatures. “If we have a learning algorithm, we can actually differentiate between PIN entry and usage,” she said.

The researchers proposed a few ways smartphone hardware can defend against thermal attacks. Briefly increasing screen brightness to its maximum, or triggering a short burst of CPU activity, could heat up the entire phone and make PIN detection difficult.

Some people may be predisposed to a natural defense: Cool hands make it harder to detect heat traces from PIN entry, the researchers found, because the difference in temperature between the screen’s glass and the finger is less pronounced. Hot hands, on the other hand, may prolong the window of attack.

I usually use a fingerprint reader to log into my iPhone, but when I can’t, I type in a long password that has letters, numbers, and symbols. Since it takes a bit longer to type it in than a four-digit PIN, an attacker would have less time to capture the heat traces after I finish typing—but what if I typed it quickly?

“I would guess that if you are a fast typist that means the contact time is reduced, which will influence the amount of heat transferred,” Abdelrahman told me. “Hence, the heat traces left behind will be less, so still it might be hard to infer the long PINs.”

But if I’m typing quickly, I may be exerting more pressure with each stroke, she said, which could end up increasing the intensity of the heat traces I leave behind.

Maybe I’ll just keep my phone in my pocket.

An unattended smartphone still carries telltale heat signatures for up to a minute after it's used. (Itziar Aio / Getty)

Car Wars


The stakes are impossibly high. Self-driving cars are arguably the great technological promise of the 21st century.

They are in that rare class of technology that might actually change the world. And not just in the way everyone in Silicon Valley talks about changing the world—but really, fundamentally change it. So much so that their dramatic life-saving potential is almost secondary to the other economic, cultural, and aesthetic transformations they would cause.

Those who aren’t able to drive themselves today—people who are blind, for example—would be granted a new level of transportation freedom. Mass adoption of self-driving cars would create and destroy entire industries, alter the way people work and move through cities, and change the way those cities are designed and connected.

To build the technology that prompts all this change is to be in an enormous position of power.

That’s why the race to bring self-driving cars to the masses is so intense. It’s also what makes this particular competition echo other transformative moments in technological history—going all the way back to the Railroad Wars, at least. (Incidentally, there was a different kind of driverless car back then.) “The Wright brothers jump into my brain immediately,” John Leonard, an engineering professor at M.I.T., told me in 2015. “But maybe it’s kind of like a decentralized space race. Like Sputnik, but between the traditional car companies and their suppliers versus tech companies and their startups.”

There’s a lot of money at stake. A lot a lot. We’re talking billions of dollars per year in potential profits, maybe more. All of the major players know this. For some companies, it is a fight to the death. Each one intends to come out on top.

* * *

Waymo (formerly Google)

When Google (now Alphabet) launched its self-driving car program in 2009, it had no competition to speak of. Culturally, the idea of a self-driving car was novel. Even the flying cars in 20th-century science fiction tended to have human drivers. So when Google began to go public with information about the project, in 2010, its level of seriousness about the effort wasn’t yet clear. “Some of these things will turn out to be wildly successful, and others will just fade away,” one investor told The Los Angeles Times at the time, referring to Google’s suite of unusual projects.

Wild success still isn’t a guarantee, but it’s now obvious that Google—which has since spun off its self-driving-car unit into a company called Waymo—is deeply invested in the work it’s doing. Its test fleet is now on public roads in four states: California (since 2009), Texas (2015), Arizona (2016), and Washington state (2016). “We’ve self-driven more than 2 million miles mostly on city streets,” Waymo says on its website. “That’s the equivalent of over 300 years of human driving experience, considering the hours we’ve spent on the road. This builds on the 1 billion miles we’ve driven in simulation in just 2016.”

All that driving has come with a near-perfect safety record, a reputation that has undoubtedly helped buoy the public’s perception of self-driving vehicles.

Uber

Uber catapulted itself into the self-driving car space in truly Uberesque fashion: with a scandal. In 2015, the ride-sharing giant hired an entire department away from Carnegie Mellon—some 40 robotics experts and engineers, including several top experts in autonomous-driving systems.

Since then, Uber’s commitment to the future of self-driving cars has only intensified. (Consider the business incentive of eliminating human drivers, who get a cut of every ride they give.) In 2016, Uber began testing its self-driving vehicles on public roads in Pittsburgh, doubled down on its own proprietary street-mapping system—ostensibly to reduce reliance on competitors like Google and Apple—and poached Google’s top mapping expert to do so. Uber also acquired a fledgling self-driving truck company, Otto, for $680 million in 2016—but more on that in a minute.

Given the talents of its employees and how much venture capital the company has on hand, Uber has emerged as a formidable player in the nascent self-driving car industry. Yet Uber continues to be plagued by controversies.

After a dustup over Uber’s refusal to seek permits for its self-driving cars in California in late 2016, the company changed course and applied for a state testing permit. In February 2017, Waymo filed a federal lawsuit claiming a former Google engineer had stolen self-driving car secrets before leaving the company to found Otto. Waymo says that when Uber acquired Otto, the former Google engineer used the information he allegedly stole to help build a circuit board for Uber’s self-driving car systems. The legal battle is poised to be the first major intellectual-property fight of the driverless car era.

Apple

Apple remains one of the more mysterious and intriguing players in the self-driving car game. On one hand, Apple can’t afford not to pursue this emerging technology if many of its major competitors are. On the other, Apple? A car company? To be fair, though, that’s what people said of Google in 2010. And not all self-driving car companies will manufacture vehicles themselves; some will just license the self-driving software to auto manufacturers.

For years it was rumored that Apple had a secret self-driving car project in the works. But there were also reports that the project—which, according to The Wall Street Journal, had hundreds of dedicated employees as of 2016—was plagued by organizational and managerial problems. It wasn’t until December 2016 that Apple officially made it known that it is working in some capacity on self-driving cars, via a letter to the National Highway Traffic Safety Administration.

“The company is investing heavily in the study of machine learning and automation,” wrote Steve Kenner, Apple’s director of product integrity, “and is excited about the potential of automated systems in many areas, including transportation.”

Other than that, however, Apple has remained characteristically secretive about its work.

Tesla

Tesla wants to bring driverless cars to the market, but it has a markedly different approach from that of Waymo, which may be its biggest competitor. While Google wants to build fully self-driving systems from the ground up, its critics say this will take too long. In the interest of making everybody safer sooner, Tesla is adding increasingly autonomous systems, bit by bit, to its existing high-end vehicles. But there’s a big debate over which method—fully autonomous versus incrementally autonomous—is actually better for public safety.

Tesla’s CEO, Elon Musk, has said it’s “morally reprehensible” to wait until the technology is advanced enough for complete autonomy. Yet critics of the Tesla approach say that here-and-there semi-autonomous features may present too much of a gray area for today’s drivers to safely navigate. The marketing around Autopilot, the current Tesla system, has arguably left people with the impression that Tesla’s cars are more autonomous than they really are. The very name, Autopilot, certainly suggests it might be okay for human drivers to stop paying attention.

This concern came up again in the spring of 2016, when a Tesla driver who was using the Autopilot feature died in a car accident. At the time, Tesla’s Autopilot feature was in beta mode, meaning the drivers who tested it on public roads were required to acknowledge any risks involved. Federal investigators eventually concluded Autopilot was not to blame in the fatal crash.

Tesla already claims on its website that all of its vehicles “have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.” This is, at best, slightly misleading. Tesla’s hardware may eventually allow for a “full self-driving” system, but it’s definitely not there yet.

Whichever approach to building a truly autonomous car is the right one, Tesla’s sense of urgency is helping to quicken the pace of competition in the driverless-car space.

Legacy automakers

Like Tesla, several legacy automakers are announcing their entry into the driverless-car space with incremental assisted-driving systems. This approach makes sense for them: After all, they already manufacture cars that people can go and buy—something that isn’t true of Apple, Google, or Uber—which means one of the best hopes for legacy carmakers to stay in business is to evolve now rather than attempting to play catch-up later (which they may still have to do).

But some legacy companies have gone further than others. While nearly every major automaker pays lip service to the importance of developing autonomous vehicles, only some have backed up their talk with action. Volvo stands out among the more committed, for instance. In a project it calls Drive Me, the automaker will put a fleet of 100 driverless cars on highways in Sweden. (As with tests by Google and Uber on public roads in the United States, humans will sit behind the wheel, ready to take control of the vehicles if needed.) In March 2017, Toyota unveiled its first self-driving car prototype. The car came out of Toyota’s artificial-intelligence research institute, which the company launched with a $1 billion investment in 2015.

There are also some partnerships between tech firms and automakers. Chrysler and Google announced in May 2016 that they would team up to make a driverless minivan, while Volvo and Uber announced a partnership in August 2016.

Newcomers

We should expect to see more startups in the self-driving car space in the years to come. One example is Drive.ai, which launched in August 2016 and is creating deep-learning software for driverless cars.

There will be others. Chris Urmson, the longtime head of Google’s driverless car initiative, left the company in August 2016, at a time when the project seemed to be shedding several key players. In December 2016, the technology-focused news website Recode reported that Urmson is starting his own self-driving-car venture.

Technology history tells us that the first company to build a technology is not always the company that ends up making a windfall. That may well be the case in the realm of autonomous vehicles.

There are many uncertainties in all this, but one thing is clear: The cultural space occupied by the automobile is undergoing rapid, radical transformation. There are sure to be big winners and losers along the way.

A driverless pod is tested in Milton Keynes, England, on October 11, 2016. (Darren Staples / Reuters)

The 9-1-1 Paradox


The warnings began trickling in around supper time on Wednesday night.

AT&T was experiencing what officials in several states described as a nationwide outage affecting 9-1-1 services. The problem was that some AT&T customers who dialed 9-1-1 in at least 18 states couldn’t reach emergency dispatchers—and instead would hear either a busy signal or a phone that kept ringing and ringing and ringing and ringing and ringing.

It was nearly three hours before public officials gave the all clear that the issue had been resolved.

AT&T declined repeated requests for information about the scope and cause of the outage, but local officials acknowledged potential problems in Alabama, Arkansas, California, Colorado, Florida, Indiana, Kansas, Kentucky, Louisiana, Maryland, Massachusetts, North Carolina, Nevada, Pennsylvania, Tennessee, Texas, Virginia, West Virginia, and Washington, D.C.

Now, the Federal Communications Commission says it is investigating what happened.

The outage was stunning in part because of its scope, but also because of how reliable 9-1-1 services usually are. One massive failure, however, is all it takes to raise a host of serious questions about the integrity of a critical public-safety system in the United States.

“The reliability of the system is really astonishing given the age of some of the components,” said Trey Forgety, a cybersecurity expert and the director of government affairs for the National Emergency Number Association, a professional organization focused on policy, technology, and operations related to emergency communications in the United States.

The patchwork of switches, routers, and cables that are now involved in completing a 9-1-1 call includes bits of technology that are no longer manufactured. One widely used component was just discontinued last year, Forgety says.

“The switches that we call selective routers—those devices in some cases date back to the early 1980s,” Forgety told me. “These are things you can’t get parts for anymore. Some of these switches, if you ever lost one in its entirety—if you had a fire, or a flood, or a terrorist attack—they’re not actually replaceable.”

Outdated switches are unlikely to have played a role in the AT&T outage, however. The scuttlebutt among leaders in the emergency-response industry was that a scheduled or automatic network-configuration change by one of AT&T’s vendors was to blame. AT&T did not respond to a request to confirm this characterization of the outage.  

Today, just routing a 9-1-1 call involves multiple companies, which are often geographically remote from where the call is placed. Yet placing an emergency call is still “pretty darn reliable,” says Roger Hixson, NENA’s director of technical issues. “Practically everything is duplicated so that if one piece fails, the rest of the system can handle 9-1-1 calls pretty seamlessly,” he told me. “Very seldom do we have a case where there’s a failure that actually affects the ability of calls to get to 911 centers.”

Very seldom isn’t never. In 2006, a computer glitch caused a 9-1-1 outage that lasted nearly seven hours in Pittsburgh. The death of at least one child was tied to the outage, during which a father tried for 19 minutes to call for help after finding his infant son not breathing. More than 200 other calls to 9-1-1 went unanswered during the outage, according to news reports at the time.

The FCC said in 2014 it would prioritize reversing a growing “trend of large-scale ‘sunny day’ 911 outages,” a reference to system failures caused by software and database errors rather than bad weather, which was more of a problem in the days of analog telephones. “It’s more complex today than it used to be,” Hixson said.

The United States began implementing its universal emergency telephone line nearly 50 years ago. Incidentally, it was AT&T that first announced, in 1968, that it would use the digits 9-1-1 as the nation’s emergency code. 9-1-1 was selected because it was short, easy to remember, and not a sequence already in use.

But it was several decades before 9-1-1 was a near-nationwide service. Not even one-quarter of the U.S. population had access to the service by the mid-1970s. By the late 1980s, about half of Americans could dial 9-1-1 in an emergency. Today, about 96 percent of the United States is covered by 9-1-1, according to NENA.

While many more people can use 9-1-1 today, a system failure like the one that occurred on Wednesday night would have been practically unheard of even two decades ago. “It would have been difficult for anything to happen nationwide 20 years ago, because these systems are basically localized in parts of states,” Hixson said. “Now we have nationwide carriers. If they’re not careful about how they do certain things, or how much duplication redundancy they have, it becomes more difficult to manage.”

More difficult still because up to three-quarters of 9-1-1 calls now come from mobile devices, Hixson told me. New technology has introduced several new potential vulnerabilities. The paradox is that while 9-1-1 systems are likely more reliable than ever, there are more points at which things can go wrong.

Along with wireless network outages, there are distributed denial-of-service attacks to worry about—the kind of attack against a call center’s computer system that would flood dispatchers with fake calls, making all the phones ring at once. There’s also the possibility of an attack that would manipulate key information that goes out to emergency responders—a way of making them show up at the wrong house, for example.

“That’s a significant threat,” Forgety told me. “The worst one we can imagine is if some malicious actor wants to undertake an act of terrorism and hamper the local response to that [attack]—disrupting 9-1-1 communications entirely.”

Ironically enough, the aging systems first responders use to communicate with one another may offer a layer of protection in such a scenario. “Those radio systems typically use old-school technology,” Forgety said. Which makes them safe from computer-age attacks. Then again, he added, “nothing is impossible.”
