Channel: Technology | The Atlantic

Of Course America Fell for Liquid Death


When you think about it, the business of bottled water is pretty odd. What other industry produces billions in revenue selling something that almost everyone in America—with some notable and appalling exceptions—can get basically for free? Almost every brand claims in one way or another to be the purest or best-tasting or most luxurious, but very little distinguishes Poland Spring from Aquafina or Dasani or Evian. And then there is Liquid Death. The company sells its water in tallboy cans branded with its over-the-top name, more over-the-top melting-skull logo, and even more over-the-top slogan: “Murder your thirst.”

Liquid Death feels more like an absurd stunt than a real company, but it’s no joke. You can find its products on the shelves at Target, 7-Eleven, Walmart, and Whole Foods. After the great success of its plain canned water, it has branched out into iced tea and seltzer, with flavors such as Mango Chainsaw, Berry It Alive, and Dead Billionaire (its take on an Arnold Palmer). On Monday, Bloomberg reported that the company is now valued at $1.4 billion, double the valuation it received in late 2022. That would make it more than one-tenth the size of the entire no- and low-alcohol-beverage industry. All of this for canned water (and some edgily named teas).

But not really. Liquid Death is not a water company so much as a brand that happens to sell water. To the extent the company is selling anything, it’s selling metal, in both senses of the word: its literal aluminum cans, which it frames as part of its environmentally motivated “Death to Plastic” campaign, and its heavy-metal, punk-rock style. Idiosyncratic as all of this might seem, the company’s strategy is not a departure from modern branding. If anything, it is the perfect distillation.

Liquid Death isn’t just an excuse for marketing. Metal cans probably do beat plastic bottles, environmentally speaking, but both are much worse than just drinking tap water. You can nurse a can of Liquid Death at a party, and most people will probably mistake it for a beer. But there are lots of canned nonalcoholic drink options. Even the company’s CEO, Mike Cessario, has acknowledged that the water is mostly beside the point: He worked in advertising for years before realizing that if he was ever going to get to make the kinds of ads he wanted to make, he’d have to create his own product first. “If you have a valuable brand,” he told Bloomberg this week, “it means that people have a reason to care about you beyond the small functional difference” between Liquid Death’s water and any other company’s.

That’s how you end up with a company that makes double-entendre-laced videos featuring porn stars and that partners with Fortnite, Zack Snyder’s Rebel Moon, and Steve-O, of MTV’s Jackass. On Instagram and TikTok, it is the third-most-followed beverage brand, behind only Red Bull and Monster; Liquid Death takes social-media comments trashing the product and turns them into songs with names such as “Rather Cut My Own D**k Off” and absurd taste-test commercials in which contestants are made, in one instance, to lick sweat off a man’s back.

All of this, in one way or another, is about building the brand, because the brand is what’s important; the brand is all there is. Plenty of companies sell branded T-shirts or hoodies, but Liquid Death has gone all in. It offers dozens of different T-shirt and hoodie designs, plus beach chairs and watches and neon signs and trading cards and casket-shaped flasks and boxer briefs.

Liquid Death, Cessario likes to say, is by no means unique in its focus on marketing. “Like every truly large valuable brand,” he told The Washington Post last year, “it is all marketing and brand because the reason people choose things 98 percent of the time is not rational. It’s emotional.” He has a point. And in recent years, marketing has become ever more untethered from the underlying products. As I previously wrote, many companies have begun deploying meta-advertisements: advertisements that are about advertisements or refer explicitly to the fact that they’re advertisements.

Think of CeraVe’s Super Bowl commercial in which Michael Cera pitches an ad featuring him at his awkward, creepy best to a boardroom full of horrified executives. Or the State Farm commercial that also aired during the Super Bowl, in which Arnold Schwarzenegger struggles to enunciate the word neighbor while playing “Agent State Farm” in an ad within the ad. Think of the Wayfair commercials in which characters say things like “Are we in a Wayfair commercial?” or the Mountain Dew commercials in which celebrities decked out in biohazard-green Mountain Dew gear discuss “how obvious product placement is.”

The appeal of these ads is that they make no appeal at all—at least no traditional appeal, no appeal having to do with the product they’re ostensibly selling. They wink at the viewer. They say: We know that you know what we’re trying to do here, so we’re just gonna cut the crap and be straight with you. They flatter the viewer, make them feel like they’re in on the joke. The marketing strategy is to renounce marketing strategies. As with most advertising, it’s hard to know for sure whether this actually works, but companies seem to think it does; after all, more and more of them are sinking millions into meta-ads.

You can think of Liquid Death as the apotheosis of meta-advertising. It doesn’t just say Forget the product for a moment while you watch this ad. It dispenses with the product entirely. The advertisement is the product. What Liquid Death is selling is not so much purified water as purified marketing, marketing that has shed its product—the soul without the body. The company writes the principle straight into its manifesto: “We’re just a funny beverage company who hates corporate marketing as much as you do,” it reads. “Our evil mission is to make people laugh and get more of them to drink more healthy beverages more often, all while helping to kill plastic pollution.”

It’s easy to dismiss Liquid Death as a silly one-off gimmick, but the truth is that many of us routinely fall for just this sort of appeal. The same thing is happening when we respond to the Super Bowl commercial for the phone service Visible, in which Jason Alexander rehashes his “Yada yada” bit from Seinfeld and declares, “I’m in an ad right now.” And how could it not? Marketing is virtually inescapable. Brands are clamoring for our attention at every moment. It’s nice to feel, for a moment, like we’re not being advertised to—like Liquid Death is just a good bit and not, as it now is, a billion-dollar business.

Illustration by The Atlantic. Source: Shutterstock.

The Drama Kings of Tech


One Tuesday last month, Mark Zuckerberg uploaded a video to Instagram, but not to his Stories, where it would quickly disappear. This one was a keeper. He put it right on his permanent grid. It shows Zuckerberg sitting on his living-room couch in comfy pants and a dark T-shirt, while his friend Kenny records him through Meta’s mixed-reality headset. Zuckerberg proceeds to rattle off a three-and-a-half-minute critique of Apple’s new mixed-reality headset, the Vision Pro. His tone is surprisingly combative. At certain points, he sounds like a forum post come to life. “Some fanboys get upset” when people question Apple, he says, but his company’s much cheaper headset is not only a better value; it is a better product, “period.” CEOs of the world’s most valuable companies don’t often star in this kind of video. It reminded me of a commercial that a car-dealership owner might make about a rival.

I don’t mean to moralize. Marketing is a matter of taste, and Zuckerberg is entitled to his. I mention this video only because it’s part of a larger atmosphere of chippiness in the world of Big Tech. Just last year, a slow-burn feud between Zuckerberg and Elon Musk flared into threats of violence—albeit refereed—when Musk suggested that the two face off in a cage match. “Send Me Location,” Zuckerberg replied on Instagram. In the weeks that followed, Musk, who habitually lobs sexual taunts at his rivals, called Zuckerberg a “cuck” and challenged him to “a literal dick measuring contest.” But amid the tough talk, Musk also seemed to be playing for time. He said that he’d contacted Italy’s prime minister and minister of culture, and that they had agreed to host the fight in an “epic location” among the ruins of ancient Rome. Zuckerberg implied that this was all news to him. Within days, Musk said he would ask his Tesla to drive him to Zuckerberg’s house to fight him in his backyard. He even said that he would livestream it. Alas, Zuckerberg was out of town. Eventually, both men got injured—they are, after all, middle-aged—and the whole idea was abandoned.

Captains of industry have been known to mix it up on occasion. Collis Huntington, the American industrialist and railway magnate, once called Leland Stanford a “damned old fool.” Michael Ovitz said that David Geffen was part of a “Gay Mafia,” determined to bring him down. But, to my knowledge, none of them ever proposed a cage match. Even in their histrionics, the drama kings of tech aim to disrupt.

Their schoolyard feuding cuts an odd contrast with the earnestness that so often emanates from Silicon Valley. We have long known, for instance, that very serious conflicting views about AI safety played a role in November’s boardroom drama at OpenAI, but it was also driven by interpersonal resentments. Last week, The New York Times reported that before Sam Altman’s ouster, Mira Murati, the company’s chief technology officer, sent Altman a private memo “outlining some of her concerns with his behavior.” According to the Times, she told OpenAI’s board that when Altman went to sell some new strategic direction, he would put on a charming mask, but when people dissented or even just delayed, he would freeze them out. In a statement posted to X, Murati described these anonymous claims as misleading, and said that the previous board members were scapegoating her to save face. Altman reposted Murati’s post with a heart emoji, the lingua franca of reconciliation at OpenAI. Now that he’s back with a new board in place, the company line is: It’s time to move on.

Musk, who cannot seem to stand the idea that there might be tech drama somewhere that does not involve him, has been trolling OpenAI relentlessly on X. Last week, he posted a doctored image of Altman holding up a visitor’s pass that read “ClosedAI,” and followed up this past Tuesday with a word-cloud image of the company’s logo in which every word was “lie.” (Not his best work.) He also filed a lawsuit against the company. It alleges that by pursuing material gain instead of the good of all humanity, OpenAI’s executives have breached their “founding agreement.” The company responded with a blog post that soberly refuted some of Musk’s claims, but Altman also went to X to respond personally. He tracked down an old Musk post from 2019, in which Musk had thanked Altman for criticizing Tesla’s naysayers. Altman replied with “anytime” and a salute emoji, implying that Musk is now the one bitterly rooting against the high cause of innovation.

Moguls in other sectors rarely put one another on blast like this in public. (They have the decency to call a reporter and do it on background.) It’s hard to know whether this performative strain in tech culture reflects something essential about the industry. Maybe its leaders are just unusually visible, because the legacy media are more interested in them, or because they figure so prominently on the social-media platforms that they operate. Or maybe a few outlier personalities—Musk in particular—are responsible for most of the soap-opera vibes. It could also be the general cultural atmosphere. Over the past 20 years, a fashion for aggrieved and confrontational behavior has migrated out of reality television into the wider entertainment and business worlds, and also into politics, in the person of Donald Trump.

If the tech titans weren’t so self-serious, their bad behavior might simply blend into this broader coarsening. Folk wisdom and life experience tell us that rivalries and infighting will emerge, organically, anywhere that there is money and power. That’s why we direct our scrutiny wherever those things accumulate. But the leaders of the tech world want to wave us off, on the grounds that they are playing for higher stakes than just money and power. They tell us that yesterday’s technologists were the framers of our very civilization and that today’s are ushering in a benevolent future. They assure us that they have thought through that future’s risks and know exactly which ones to worry about, up to and including those that may be existential. They insist that they are the grown-ups. We will believe it when we see it.

Illustration by The Atlantic. Sources: Chris Ratcliffe / Odd Andersen / Chesnot / Getty.

Elon Musk Just Added a Wrinkle to the AI Race


Yesterday afternoon, Elon Musk fired the latest shot in his feud with OpenAI: His new AI venture, xAI, now allows anyone to download and use the computer code for its flagship software. No fees, no restrictions, just Grok, a large language model that Musk has positioned against OpenAI’s GPT-4, the model powering the most advanced version of ChatGPT.

Sharing Grok’s code is a thinly veiled provocation. Musk was one of OpenAI’s original backers. He left in 2018 and recently sued for breach of contract, arguing that the start-up and its CEO, Sam Altman, have betrayed the organization’s founding principles in pursuit of profit, transforming a utopian vision of technology that “benefits all of humanity” into yet another opaque corporation. Musk has spent the past few weeks calling the secretive firm “ClosedAI.”

It’s a mediocre zinger at best, but he does have a point. OpenAI does not share much about its inner workings, it added a “capped-profit” subsidiary in 2019 that expanded the company’s remit beyond the public interest, and it’s valued at $80 billion or more. Meanwhile, more and more AI competitors are freely distributing their products’ code. Meta, Google, Amazon, Microsoft, and Apple—all companies with fortunes built on proprietary software and gadgets—have either released the code for various open AI models or partnered with start-ups that have done so. Such “open source” releases, in theory, allow academics, regulators, the public, and start-ups to download, test, and adapt AI models for their own purposes. Grok’s release, then, marks not only a flash point in a battle between companies but also, perhaps, a turning point across the industry. OpenAI’s commitment to secrecy is starting to seem like an anachronism.

This tension between secrecy and transparency has animated much of the debate around generative AI since ChatGPT arrived, in late 2022. If the technology does genuinely represent an existential threat to humanity, as some believe, is the risk increased or decreased depending on how many people can access the relevant code? Doomsday scenarios aside, if AI agents and assistants become as commonly used as Google Search or Siri, who should have the power to steer and scrutinize that transformation? Open-sourcing advocates, a group that now seemingly includes Musk, argue that the public should be able to look under the hood to rigorously test AI for both civilization-ending threats and the less fantastical biases and flaws plaguing the technology today. Better that than leaving all the decision making to Big Tech.

OpenAI, for its part, has provided a consistent explanation for why it began raising enormous amounts of money and stopped sharing its code: Building AI became incredibly expensive, and the prospect of unleashing its underlying programming became incredibly dangerous. The company has said that releasing full products, such as ChatGPT, or even just demos, such as one for the video-generating Sora program, is enough to ensure that future AI will be safer and more useful. And in response to Musk’s lawsuit, OpenAI published snippets of old emails suggesting that Musk explicitly agreed with these justifications, going so far as to suggest a merger with Tesla in early 2018 as a way to meet the technology’s future costs.

Those costs represent a different argument for open-sourcing: Publicly available code can enable competition by allowing smaller companies or independent developers to build AI products without having to engineer their own models from scratch, which can be prohibitively expensive for anyone but a few ultra-wealthy companies and billionaires. But both approaches—getting investments from tech companies, as OpenAI has done, or having tech companies open up their baseline AI models—are in some sense two sides of the same coin: ways to overcome the technology’s tremendous capital requirements that will not, on their own, redistribute that capital.

[Read: There was never such a thing as “open” AI]

For the most part, when companies release AI code, they withhold certain crucial aspects; xAI has not shared Grok’s training data, for example. Without training data, it’s hard to investigate why an AI model exhibits certain biases or limitations, and it’s impossible to know if its creator violated copyright law. And without insight into a model’s production—technical details about how the final code came to be—it’s much harder to glean anything about the underlying science. Even with publicly available training data, AI systems are simply too massive and computationally demanding for most nonprofits and universities, let alone individuals, to download and run. (A standard laptop has too little storage to even download Grok.) xAI, Google, Amazon, and all the rest are not telling you how to build an industry-leading chatbot, much less giving you the resources to do so. Openness is as much about branding as it is about values. Indeed, in a recent earnings call, Mark Zuckerberg did not mince words about why openness is good business: It encourages researchers and developers to use, and improve, Meta products.
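
For a rough sense of the storage point above, here is a back-of-envelope sketch in Python. The roughly 314-billion-parameter figure is xAI’s reported size for Grok-1, not a number from this article, and the two-bytes-per-weight assumption is mine; treat the result as an order-of-magnitude illustration only.

```python
# Back-of-envelope estimate of how much disk space Grok-1's raw weights would need.
# Assumptions (not from the article): ~314 billion parameters, stored as 16-bit values.
params = 314e9                     # reported parameter count for Grok-1
bytes_per_param = 2                # 2 bytes per weight at 16-bit precision
size_gb = params * bytes_per_param / 1e9
print(f"~{size_gb:.0f} GB of weights")   # ~628 GB, versus the 256-512 GB SSDs typical of consumer laptops
```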

[Read: OpenAI’s Sora is a total mystery]

Numerous start-ups and academic collaborations are releasing open code, training data, and robust documentation alongside their AI products. But Big Tech companies tend to keep a tight lid on theirs. Meta’s flagship model, Llama 2, is free to download and use—but its policies forbid deploying it to improve another AI language model or to develop an application with more than 700 million monthly users. Such uses would, of course, represent actual competition with Meta. Google’s most advanced AI offerings are still proprietary; Microsoft has supported open-source projects, but OpenAI’s GPT-4 remains central to its products.

Regardless of the philosophical debate over safety, the fundamental reason for the closed approach of OpenAI, compared with the growing openness of the tech behemoths, might simply be its size. Trillion-dollar companies can afford to put AI code into the world, knowing that their existing products, and the integration of AI into those products—bringing AI to Gmail or Microsoft Outlook—are where the profits lie. xAI has the direct backing of one of the richest people in the world, and its software could be worked into X (formerly Twitter) features and Tesla cars. Other start-ups, meanwhile, have to keep their competitive advantage under wraps. Only when openness and profit come into conflict will we get a glimpse of these companies’ true motivations.

Sergei Gapon / AFP / Getty

Universities Have a Computer-Science Problem


Last year, 18 percent of Stanford University seniors graduated with a degree in computer science, more than double the proportion of just a decade earlier. Over the same period at MIT, that rate went up from 23 percent to 42 percent. These increases are common everywhere: The average number of undergraduate CS majors at universities in the U.S. and Canada tripled in the decade after 2005, and it keeps growing. Students’ interest in CS is intellectual—culture moves through computation these days—but it is also professional. Young people hope to access the wealth, power, and influence of the technology sector.

That ambition has created both enormous administrative strain and a competition for prestige. At Washington University in St. Louis, where I serve on the faculty of the Computer Science & Engineering department, each semester brings another set of waitlists for enrollment in CS classes. On many campuses, students may choose to study computer science at any of several different academic outposts, strewn throughout various departments. At MIT, for example, they might get a degree in “Urban Studies and Planning With Computer Science” from the School of Architecture, or one in “Mathematics With Computer Science” from the School of Science, or they might choose from among four CS-related fields within the School of Engineering. This seepage of computing throughout the university has helped address students’ booming interest, but it also serves to bolster that demand.

Another approach has gained in popularity. Universities are consolidating the formal study of CS into a new administrative structure: the college of computing. MIT opened one in 2019. Cornell set one up in 2020. And just last year, UC Berkeley announced that its own would be that university’s first new college in more than half a century. The importance of this trend—its significance for the practice of education, and also of technology—must not be overlooked. Universities are conservative institutions, steeped in tradition. When they elevate computing to the status of a college, with departments and a budget, they are declaring it a higher-order domain of knowledge and practice, akin to law or engineering. That decision will inform a fundamental question: whether computing ought to be seen as a superfield that lords over all others, or just a servant of other domains, subordinated to their interests and control. This is, not by happenstance, also the basic question about computing in our society writ large.


When I was an undergraduate at the University of Southern California in the 1990s, students interested in computer science could choose between two different majors: one offered by the College of Letters, Arts and Sciences, and one from the School of Engineering. The two degrees were similar, but many students picked the latter because it didn’t require three semesters’ worth of study of a (human) language, such as French. I chose the former, because I like French.

An American university is organized like this, into divisions that are sometimes called colleges, and sometimes schools. These typically enjoy a good deal of independence to define their courses of study and requirements as well as research practices for their constituent disciplines. Included in this purview: whether a CS student really needs to learn French.

The positioning of computer science at USC was not uncommon at the time. The first academic departments of CS had arisen in the early 1960s, and they typically evolved in one of two ways: as an offshoot of electrical engineering (where transistors got their start), housed in a college of engineering; or as an offshoot of mathematics (where formal logic lived), housed in a college of the arts and sciences. At some universities, including USC, CS found its way into both places at once.

The contexts in which CS matured had an impact on its nature, values, and aspirations. Engineering schools are traditionally the venue for a family of professional disciplines, regulated with licensure requirements for practice. Civil engineers, mechanical engineers, nuclear engineers, and others are tasked to build infrastructure that humankind relies on, and they are expected to solve problems. The liberal-arts field of mathematics, by contrast, is concerned with theory and abstraction. The relationship between the theoretical computer scientists in mathematics and the applied ones in engineering is a little like the relationship between biologists and doctors, or physicists and bridge builders. Keeping applied and pure versions of a discipline separate allows each to focus on its expertise, but limits the degree to which one can learn from the other.

[Read: Programmers, stop calling yourself engineers]

By the time I arrived at USC, some universities had already started down a different path. In 1988, Carnegie Mellon University created what it says was one of the first dedicated schools of computer science. Georgia Institute of Technology followed two years later. “Computing was going to be a big deal,” says Charles Isbell, a former dean of Georgia Tech’s college of computing and now the provost at the University of Wisconsin-Madison. Emancipating the field from its prior home within the college of engineering gave it room to grow, he told me. Within a decade, Georgia Tech had used this structure to establish new research and teaching efforts in computer graphics, human-computer interaction, and robotics. (I spent 17 years on the faculty there, working for Isbell and his predecessors, and teaching computational media.)

Kavita Bala, Cornell University’s dean of computing, told me that the autonomy and scale of a college allow her to avoid jockeying for influence and resources. MIT’s computing dean, Daniel Huttenlocher, says that computing’s breakneck pace of innovation makes independence necessary. It would be held back in an arts-and-sciences context, he told me, or even an engineering one.

But the computing industry isn’t just fast-moving. It’s also reckless. Technology tycoons say they need space for growth, and warn that too much oversight will stifle innovation. Yet we might all be better off, in certain ways, if their ambitions were held back even just a little. Instead of operating with a deep understanding or respect for law, policy, justice, health, or cohesion, tech firms tend to do whatever they want. Facebook sought growth at all costs, even if its take on connecting people tore society apart. If colleges of computing serve to isolate young, future tech professionals from any classrooms where they might imbibe another school’s culture and values—engineering’s studied prudence, for example, or the humanities’ focus on deliberation—this tendency might only worsen.

[Read: The moral failure of computer scientists]

When I raised this concern with Isbell, he said that the same reasoning could apply to any influential discipline, including medicine and business. He’s probably right, but that’s cold comfort. The mere fact that universities allow some other powerful fiefdoms to exist doesn’t make computing’s centralization less concerning. Isbell admitted that setting up colleges of computing “absolutely runs the risk” of empowering a generation of professionals who may already be disengaged from consequences to train the next one in their image. Inside a computing college, there may be fewer critics around who can slow down bad ideas. Disengagement might redouble. But he said that dedicated colleges could also have the opposite effect. A traditional CS department in a school of engineering would be populated entirely by computer scientists, while the faculty for a college of computing like the one he led at Georgia Tech might also house lawyers, ethnographers, psychologists, and even philosophers like me. Bala told me that her college was established not to teach CS on its own but to incorporate policy, law, sociology, and other fields into its practice. “I think there are no downsides,” she said.

Mark Guzdial is a former faculty member in Georgia Tech’s computing college, and he now teaches computer science in the University of Michigan’s College of Engineering. At Michigan, CS wasn’t always housed in engineering—Guzdial says it started out inside the philosophy department, as part of the College of Literature, Science and the Arts. Now that college “wants it back,” as one administrator told Guzdial. Having been asked to start a program that teaches computing to liberal-arts students, Guzdial has a new perspective on these administrative structures. He learned that Michigan’s Computer Science and Engineering program and its faculty are “despised” by their counterparts in the humanities and social sciences. “They’re seen as arrogant, narrowly focused on machines rather than people, and unwilling to meet other programs’ needs,” he told me. “I had faculty refuse to talk to me because I was from CSE.”

In other words, there may be downsides just to placing CS within an engineering school, let alone making it an independent college. Left entirely to themselves, computer scientists can forget that computers are supposed to be tools that help people. Georgia Tech’s College of Computing worked “because the culture was always outward-looking. We sought to use computing to solve others’ problems,” Guzdial said. But that may have been a momentary success. Now, at Michigan, he is trying to rebuild computing education from scratch, for students in fields such as French and sociology. He wants them to understand it as a means of self-expression or achieving justice—and not just a way of making software, or money.


Early in my undergraduate career, I decided to abandon CS as a major. By then, I already had a side job in what would become the internet industry, and computer science, as an academic field, felt theoretical and unnecessary. Reasoning that I could easily get a job as a computer professional no matter what it said on my degree, I decided to study other things while I had the chance.

I have a strong memory of processing the paperwork to drop my computer-science major in college, in favor of philosophy. I walked down a quiet, blue-tiled hallway of the engineering building. All the faculty doors were closed, although the click-click of mechanical keyboards could be heard behind many of them. I knocked on my adviser’s door; she opened it, silently signed my paperwork without inviting me in, and closed the door again. The keyboard tapping resumed.

The whole experience was a product of its time, when computer science was a field composed of oddball characters, working by themselves, and largely disconnected from what was happening in the world at large. Almost 30 years later, their projects have turned into the infrastructure of our daily lives. Want to find a job? That’s LinkedIn. Keep in touch? Gmail, or Instagram. Get news? A website like this one, we hope, but perhaps TikTok. My university uses a software service sold by a tech company to run its courses. Some things have been made easier with computing. Others have been changed to serve another end, like scaling up an online business.

[Read: So much for ‘learn to code’]

The struggle to figure out the best organizational structure for computing education is, in a way, a microcosm of the struggle under way in the computing sector at large. For decades, computers were tools used to accomplish tasks better and more efficiently. Then computing became the way we work and live. It became our culture, and we began doing what computers made possible, rather than using computers to solve problems defined outside their purview. Tech moguls became famous, wealthy, and powerful. So did CS academics (relatively speaking). The success of the latter—in terms of rising student enrollments, research output, and fundraising dollars—both sustains and justifies their growing influence on campus.

If computing colleges have erred, it may be in failing to exert their power with even greater zeal. For all their talk of growth and expansion within academia, the computing deans’ ambitions seem remarkably modest. Martial Hebert, the dean of Carnegie Mellon’s computing school, almost sounded like he was talking about the liberal arts when he told me that CS is “a rich tapestry of disciplines” that “goes far beyond computers and coding.” But the seven departments in his school correspond to the traditional, core aspects of computing plus computational biology. They do not include history, for example, or finance. Bala and Isbell talked about incorporating law, policy, and psychology into their programs of study, but only in the form of hiring individual professors into more traditional CS divisions. None of the deans I spoke with aspires to launch, say, a department of art within their college of computing, or one of politics, sociology, or film. Their vision does not reflect the idea that computing can or should be a superordinate realm of scholarship, on the order of the arts or engineering. Rather, they are proceeding as though it were a technical school for producing a certain variety of very well-paid professionals. A computing college deserving of the name wouldn’t just provide deeper coursework in CS and its closely adjacent fields; it would expand and reinvent other, seemingly remote disciplines for the age of computation.

Near the end of our conversation, Isbell mentioned the engineering fallacy, which he summarized like this: Someone asks you to solve a problem, and you solve it without asking if it’s a problem worth solving. I used to think computing education might be stuck in a nesting-doll version of that fallacy, in which CS departments have been asked to train more software engineers without considering whether more software engineers are really what the world needs. Now I worry that they have a bigger problem to address: how to make computer people care about everything else as much as they care about computers.

Max Whittaker / The New York Times / Redux

Flying Is Weird Right Now


Somewhere over Colorado this weekend, while I sat in seat 21F, my plane began to buck, jostle, and rattle. Within seconds, the seat-belt indicator dinged as the pilot asked flight attendants to return to their seats. We were experiencing what I, a frequent flier, might describe as “intermediate turbulence”—a sustained parade of midair bumps that can be uncomfortable but by no means terrifying.

Generally, I do not fear hurtling through the sky at 500 miles per hour, but at this moment I felt an unusual pang of uncertainty. The little informational card poking out of the seat-back pocket in front of me started to look ominous—the words Boeing 737-900 positively glared at me as the cabin shook. A few minutes later, once we’d found calm air, I realized that a steady drumbeat of unsettling aviation stories had so thoroughly permeated my news-consumption algorithms that I had developed a phobia of sorts.

More than 100,000 flights take off every day without issue, which means that incidents are treated as newsworthy anomalies. But it sure feels like there have been quite a few anomalies lately. In January, a Japanese coast-guard plane and a Japan Airlines plane collided on the runway, erupting in flames; a few days later, a door blew out on an Alaska Airlines Boeing 737 Max 9 jet shortly after takeoff. Then, in just the past few weeks:

  • A United Airlines flight in Houston heading to its gate rolled off the runway and into the grass.
  • Another United flight, en route from Houston to Fort Myers, Florida, made an emergency landing after flames started shooting out of one of its engines.
  • Yet another United flight was forced to make an emergency landing when a tire fell off the plane moments after takeoff.
  • Still another United flight, this one heading from San Francisco to Mexico, made an emergency landing due to a hydraulic-system failure.
  • The National Transportation Safety Board announced that it was investigating a February United flight that had potentially faulty rudder pedals.
  • Roughly 50 passengers were injured in New Zealand when pilots lost control of a Boeing plane and it plummeted suddenly.
  • A post-landing inspection revealed that an external panel was missing from a Boeing 737-800 plane that had landed in Oregon this past Friday.

United released a statement to passengers suggesting the incidents on its flights were unrelated but also “reminders of the importance of safety.” In that same statement, Scott Kirby, the company’s CEO, said that the incidents “have our attention and have sharpened our focus.”

This is only a partial list of the year’s aeronautic mishaps, which are prodigious: Consider investigations into Alaska Airlines that revealed numerous doors with loose bolts, the Airbus grounded for a faulty door light, or the Delta Boeing whose nose wheel popped off and “rolled down” a hill as the flight prepared to take off.  

[Read: The carry-on-baggage bubble is about to pop]

Many people are wondering: What is going on with airplanes? In January, the booking site Kayak reported that it had seen “a 15-fold increase” in the use of its aircraft filter for Boeing 737 Max planes, suggesting that anxious travelers booking flights were excluding them from their searches. In response to that palpable audience interest, there has been an uptick in media coverage of aviation stories.

Meanwhile, poking fun at Boeing—whose standards and corporate culture have understandably come under scrutiny in the past few years after it was charged with fraud and agreed to pay $2.5 billion in settlements—has become a meme, a way to nervously laugh at the cavalcade of bad news and to gesture at the frustration over corporate greed that seems to put overcharged air travelers at risk. (Boeing responded to the Alaska Airlines door incident by acknowledging that the company “is accountable for what happened,” and pledged to make internal changes. And last week, Executive Vice President Stan Deal sent a message to employees outlining steps the company is taking to improve its planes’ safety and quality, including adding new “layers” of inspection to its manufacturing processes.)

Despite all of this, flying has, in a historical sense at least, never been safer. A statistician at MIT has found that, globally, the odds of a passenger dying on a flight from 2018 to 2022 were 38 times lower than they were 50 years earlier. The National Safety Council found in 2021 that, over the course of a person’s life, the odds of dying as an aircraft passenger in the U.S. “were too small to even calculate.” One aviation-safety consultant recently told NBC News, “There’s not anything unusual about the recent spate of incidents—these kinds of things happen every day in the industry.” A separate industry analyst told Slate in February, “Flying is literally safer than sitting on the ground … I don’t know how I can stress that enough.” That we know so much about every little failure and close call in the skies is, in part, because the system is so thorough and so safe.

So what’s really going on? I suspect it’s a confluence of two distinct factors. The first is that although air safety is getting markedly better over time, the experience of flying is arguably worse than ever. The pandemic had a cascading effect on the business of air travel. One estimate suggests that in the past four years, roughly 10,000 pilots have left the commercial airline industry, as many airlines offered early retirement to employees during the shutdown and pre-vaccine periods, when fewer people were traveling. There are also shortages of mechanics and air traffic controllers.

All of that is now coupled with an increase in passenger volume: In 2023, flight demand crept back up to near pre-pandemic levels, and staffing has not caught up. It is also an especially expensive time to fly. Pile on unruly passengers, system outages, baggage fees, carry-on restrictions, meager drink and snack offerings, and the trials and tribulations of merely coexisting with other travelers who insist on lining up at the gate 72 hours before their zone boards and you have a perfectly combustible situation. Air travel is an impressive daily symphony of logistics, engineering, and physics. It’s also a total grind.

Trust in Boeing declined in recent months, according to consumer surveys, even if consumers still trust the airline industry as a whole. It makes sense that the distrust in Boeing would bleed outward. All conspiracy theories are rooted in some aspect of personal experience, and plenty of information exists out there to confirm one’s deepest suspicions: The New York Times described Boeing’s past safety issues as “capitalism gone awry” in 2020, and there is plenty of evidence that the company culture hasn’t changed enough since then. At least two aviation experts (one a former Boeing employee) have publicly stated their concerns about flying in certain Boeing planes. It doesn’t help that Boeing is the subject of an NTSB investigation and is struggling to present the requested evidence in the Alaska door case, or that earlier this month a Boeing whistleblower died by suicide.

[Read: What’s gone wrong at Boeing]

Then there is the second factor: vibes. Existing online means getting exposed to so much information that it has become quite easy to hear about individual problems, but incredibly difficult to determine their overall scale or relevance. On TikTok, you might be exposed to entire genres of ominous flight videos: “Flight Attendant Horror,” “Scary Sounding Planes,” “The Scariest Plane.” Even those who are not specifically mainlining these clips may suffer from an algorithmic selection bias: the more interest a person has in the recent plane malfunctions, the more likely that person might be to see more stories and commentary about planes in general. Meanwhile, an uptick in interest in stories about airline mishaps can lead to an increase in coverage of airline mishaps, which has the effect of making more routine issues feel like they’re piling up. Some of that reporting can be downright sensational, and news organizations are now also covering incidents they would have previously ignored.

This distortion—between public perception of an issue (planes are getting less safe!) and the more boring reality (they’re actually very safe)—is exacerbated by the intensity and density of information. It is a modern experience to stumble upon a meme, theory, or narrative and then see it in all of your feeds. Similarly, platforms make it easier for complex, disparate stories to collapse into simpler ways of seeing the world. Air safety slots nicely into this framework and, given the sterling record of the industry, a couple of loose or missing screws on a Boeing jet begins to feel both like a systemic failure and proof of something bigger: a kind of societal decay in the pursuit of ever-greater shareholder value.

These are feelings, vibes. They aren’t always accurate, but often that doesn’t matter because they’re so deeply felt. If that word—vibes—feels more prevalent in the lexicon in recent years, perhaps it is because more weird, hard-to-interpret information is available, pushing people toward trusting their gut feelings. Today’s air-travel anxiety sits at the intersection of these vibes, anecdotes, legitimate and troubling news reports, and the algorithmic distortion of the internet, creating a distinctly modern feeling of a large, looming problem, the exact contours of which are difficult to discern.

The vibes are off—this much we know for certain. Everything else is up for debate.

Dennis Stock / Magnum

The IRS Finally Has an Answer to TurboTax


During the torture ritual that was doing my taxes this year, I was surprised to find myself giddy after reading these words: “You are now chatting with IRS Representative-1004671045.” I had gotten stuck trying to parse my W-2, which, under “Box 14: Other,” contained a mysterious $389.70 deduction from my overall pay last year. No explanation. No clues. Nothing. I tapped the chat button on my tax software for help, expecting to be sucked into customer-service hell. Instead, a real IRS employee answered my question in less than two minutes.

The program is not TurboTax, or any one of its many competitors that will give you the white-glove treatment only after you pony up. It is Direct File, a new pilot program made by the IRS. It walks you through each step in mostly simple language (in English or Spanish, on your phone or laptop), automatically saves your progress, shows you a checklist of what you have left to do, flags potential errors, and calculates your return. These features are already part of TurboTax, but Direct File will not push you to an AI chatbot that flubs basic questions. And most crucial, it’s completely free.

That Direct File exists at all is shocking. That it’s pretty good is borderline miraculous. This is the same agency that processes your tax return in a 60-something-year-old programming language and uses software that is up to 15 versions out of date. The only sure thing in life, after death and taxes, is that the government is bad at technology. Remember the healthcare.gov debacle? Nearly 3 million people visited the site on the day it launched in 2013; only six people were actually able to register for insurance. As of the end of last year, about half of .gov websites are still not mobile friendly.

Direct File isn’t perfect—the program is available in only 12 states, and it isn’t able to handle anything beyond the simplest tax situations—but it’s a glimpse of a world where government tech benefits millions of Americans. In turn, it is also an agonizing realization of how far we are from that reality.

Right now, Direct File is sort of akin to when Facebook (or rather TheFacebook) was a site for Harvard students run out of Mark Zuckerberg’s dorm room: Most people can’t use it, and the product is still a work in progress. The IRS has strategically taken things slowly with Direct File. In part to avoid the risk of glitches, it officially launched just last week, well into tax season, and with many restrictions. Only midway through my own Direct File journey did I realize that I owed some taxes on a retirement account, and thus couldn’t actually file on the site. I then sheepishly logged in to TurboTax like a teenager crawling back to their ex; for now, it offers a more seamless experience than Direct File. Unlike on the IRS program, I could upload a picture of my W-2, and TurboTax immediately did the rest for me.

For many years, taxpayer advocates have dreamed of a free government tax portal, similar to websites where you pay parking tickets and renew your driver’s license. Computers and taxes are made for each other: Even as far back as 1991, when most Americans didn’t own a computer, you could have found at least 15 different kinds of private tax software. Lots of other countries, such as Japan, Germany, and New Zealand, already have their own government-run tax sites. According to a distressing New York Times report, Estonians can file online in less than three minutes.

Sure, America’s tax code—unlike Estonia’s!—is an alphabet soup of regulations, but the multibillion-dollar tax-prep industry has also gone to great lengths to stop Americans from filing their taxes for free. After all, why would anyone pay TurboTax upwards of $200 to file if they didn’t have to? (Intuit, the parent company of TurboTax, has an answer: “Filing taxes without someone advocating for your highest refund could be a recipe for overpaying the Internal Revenue Service and [state] departments of revenue, organizations with titles that clearly state their focus, generating revenue for the government,” Rick Heineman, an Intuit spokesperson, told me.)

[Read: The golden age of rich people not paying their taxes]

In 2022, the Inflation Reduction Act shook loose $15 million for the IRS to study the feasibility of creating its own program—and so began Direct File. The program could have been contracted out, as much of the government’s technology is. (The original, disastrous healthcare.gov was the end result of 60 contracts involving 33 outside vendors.) Instead it was made almost entirely by the government’s own programmers, product managers, and designers, Bridget Roberts, the head of the Direct File team, told me.

Engineers created a prototype by mapping out the tax code into a series of steps: The software has to know that a millionaire homeowner doesn’t need to see any of the questions that apply only to low-income renters, for example. Then designers tested language to make sure that taxpayers could easily understand it. “We were going through constant user research—putting pieces of Direct File in front of taxpayers and getting their feedback,” Roberts said. Early guinea pigs were asked to screen-share while they tested Direct File. “That way, if there were any bugs, we would fix them before we moved on,” she said. It all sounds more Sam Altman than Uncle Sam.
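
For a rough idea of what “mapping out the tax code into a series of steps” can look like in software, here is a minimal sketch; it is not Direct File’s actual code, and the field names, the renter’s-credit question, and the income threshold are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Taxpayer:
    # Hypothetical profile fields for illustration only.
    wages: float
    is_renter: bool
    owns_home: bool

@dataclass
class Question:
    prompt: str
    applies: Callable[[Taxpayer], bool]  # predicate deciding whether this step is shown

# A toy interview flow: each step declares who should see it.
FLOW: List[Question] = [
    Question("Enter the wages from Box 1 of your W-2.", lambda t: True),
    Question("Did you pay rent that might qualify for a renter's credit?",
             lambda t: t.is_renter and t.wages < 50_000),   # hypothetical threshold
    Question("Did you pay deductible mortgage interest?", lambda t: t.owns_home),
]

def interview(taxpayer: Taxpayer) -> List[str]:
    """Return only the prompts relevant to this taxpayer, in order."""
    return [q.prompt for q in FLOW if q.applies(taxpayer)]

# A millionaire homeowner never sees the renter's-credit question.
print(interview(Taxpayer(wages=1_000_000, is_renter=False, owns_home=True)))
```

The appeal of a structure like this is that the eligibility rules stay declarative: supporting a new credit means adding a question and its test, not rewriting the whole interview.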

The government could not have made something like this even 10 years ago. Unlike in the pre-healthcare.gov days, “now there is a generation of civic-tech innovators who want to go into government or want to work with the government,” Donald Moynihan, a public-policy professor at Georgetown, told me. In the past decade, attention given to the government’s technological deficiencies has led to the creation of agencies such as the United States Digital Service and 18F—both of which hire tech workers for temporary stints in the public sector. Other agencies, such as Veterans Affairs, have hired more than 1,000 of their own tech workers. The salaries are nowhere near as good as in Silicon Valley, but surely a government gig can be more fulfilling than tinkering with the user experience for Instagram share buttons all day. Amid the tech layoffs in 2023, the government launched a tech-jobs board and endeavored to hire 22,000 tech workers. Last month, the federal government began pushing to hire AI talent by boosting salaries and introducing incentives such as student-loan repayment.    

[Read: Why is there financing for everything now?]

That is how you get something like Direct File. Both the USDS and 18F, Roberts said, were brought in to help create the product, working alongside IRS engineers. There have been other successes from these groups too. Consider COVIDtests.gov, where until recently you could order free tests in basically a minute. Or my personal favorite, analytics.usa.gov, where you can monitor how much traffic government sites are getting. (In the past week, it shows, Direct File has gotten nearly 450,000 clicks.) Many .gov websites, although not necessarily wonderful, no longer feel like they’re a time portal to 1999.

But the work has been halting, at best. The more I played around with Direct File, the more frustrated I grew that there isn’t more government technology like it. Certain websites have gotten a facelift, but most of the government’s digital services lag behind: Some state unemployment systems still run on outdated, buggy portals and mainframe computers that crashed during the pandemic, delaying much-needed checks. Last year, a glitch in the Federal Aviation Administration’s 30-year-old computer system grounded thousands of flights and caused the first nationwide stop on air travel since 9/11. “Another healthcare.gov could happen today,” Mikey Dickerson, a former administrator of the United States Digital Service, told me. In fact, a similar debacle is happening right now: The Department of Education’s attempt to revamp its financial-aid form led to dire glitches that have upended the entire college-admissions cycle.

Ultimately, the fundamental reasons the government is bad at tech haven’t changed much. Bureaucracy is bureaucracy, Dickerson told me: Too often, the government operates under a model of collecting a list of everything it wants in a tech product—a months-long endeavor in itself—enlisting a company that can check them all off, and then testing it only when basically all the code has been written. The government is “not capable of keeping up with the crushing wave of complex systems that are becoming more and more obsolete,” he said. Hiring processes remain a problem too. Because the government doesn’t have a good way to evaluate a candidate’s technical skills, it can take nine months or longer to wade through the applicant pool and make a hire, Jen Pahlka, the author of Recoding America, told me. “There’s more people who want to work in government than we can absorb,” she said.

Everything had to go right to unleash Direct File. Congress set aside money. Programmers created something from scratch instead of revamping an online service built on outdated code. All to build the government’s own TurboTax—a long-heralded dream for some of the Leslie Knope types who work in civic tech. But even now, after all this work, the future of Direct File is in doubt. The IRS has not committed to anything beyond this year, and that Americans will clamor for Direct File next spring is not a given: By one measure, Direct File’s total employees are outnumbered by just the lobbyists working for Intuit.

And so, Direct File is the essence of government tech right now—a work in progress. “Increasingly, the face of government is digital,” Moynihan said. “We mostly see government on our phones and laptops, as opposed to going to an office somewhere or calling someone on a phone.” The dream of tapping a button on my iPhone and chatting with the DMV, or the VA, or Medicare, is just that: a dream. But hey, at least until April 15, I still have IRS Representative-1004671045.

Illustration by The Atlantic. Source: Viktoriia Oleinichenko / Getty.

It’s Time to Give Up on Ending Social Media’s Misinformation Problem


If you don’t trust social media, you should know you’re not alone. Most people surveyed around the world feel the same—in fact, they’ve been saying so for a decade. There is clearly a problem with misinformation and hazardous speech on platforms such as Facebook and X. And before the end of its term this year, the Supreme Court may redefine how that problem is treated.

Over the past few weeks, the Court has heard arguments in three cases that deal with controlling political speech and misinformation online. In the first two, heard last month, lawmakers in Texas and Florida claim that platforms such as Facebook are selectively removing political content that their moderators deem harmful or otherwise against their terms of service; tech companies have argued that they have the right to curate what their users see. Meanwhile, some policy makers believe that content moderation hasn’t gone far enough, and that misinformation still flows too easily through social networks; whether (and how) government officials can directly communicate with tech platforms about removing such content is at issue in the third case, which was put before the Court this week.

We’re Harvard economists who study social media and platform design. (One of us, Scott Duke Kominers, is also a research partner at the crypto arm of a16z, a venture-capital firm with investments in social platforms, and an adviser to Quora.) Our research offers a perhaps counterintuitive solution to disagreements about moderation: Platforms should give up on trying to prevent the spread of information that is simply false, and focus instead on preventing the spread of information that can be used to cause harm. These are related issues, but they’re not the same.

As the presidential election approaches, tech platforms are gearing up for a deluge of misinformation. Civil-society organizations say that platforms need a better plan to combat election misinformation, which some academics expect to reach new heights this year. Platforms say they have plans for keeping sites secure, yet despite the resources devoted to content moderation, fact-checking, and the like, it’s hard to escape the feeling that the tech titans are losing the fight.

[Read: I asked 13 tech companies about their plans for election violence]

Here is the issue: Platforms have the power to block, flag, or mute content that they judge to be false. But blocking or flagging something as false doesn’t necessarily stop users from believing it. Indeed, because many of the most pernicious lies are believed by those inclined to distrust the “establishment,” blocking or flagging false claims can even make things worse.

On December 19, 2020, then-President Donald Trump posted a now-infamous message about election fraud, telling readers to “be there,” in Washington, D.C., on January 6. If you visit that post on Facebook today, you’ll see a sober annotation from the platform itself that “the US has laws, procedures, and established institutions to ensure the integrity of our elections.” That disclaimer is sourced from the Bipartisan Policy Center. But does anyone seriously believe that the people storming the Capitol on January 6, and the many others who cheered them on, would be convinced that Joe Biden won just because the Bipartisan Policy Center told Facebook that everything was okay?

Our research shows that this problem is intrinsic: Unless a platform’s users trust the platform’s motivations and its process, any action by the platform can look like evidence of something it is not. To reach this conclusion, we built a mathematical model. In the model, one user (a “sender”) tries to make a claim to another user (a “receiver”). The claim might be true or false, harmful or not. Between the two users is a platform—or maybe an algorithm acting on its behalf—that can block the sender’s content if it wants to.

We wanted to find out when blocking content can improve outcomes, without a risk of making them worse. Our model, like all models, is an abstraction—and thus imperfectly captures the complexity of actual interactions. But because we wanted to consider all possible policies, not just those that have been tried in practice, our question couldn’t be answered by data alone. So we instead approached it using mathematical logic, treating the model as a kind of wind tunnel to test the effectiveness of different policies.
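
The researchers’ actual formulation isn’t reproduced in this piece, but the flavor of the argument can be sketched with a small, purely illustrative calculation. Everything below—the probabilities, the “trust” weighting, the function names—is my own assumption, not their model: a receiver updates their belief in a claim that survived moderation only to the degree that they trust the platform, while blocking harm-enabling details stops the harm regardless of belief.

# Illustrative sketch only -- not the researchers' model. It shows two ideas
# from the argument above: (1) a claim that survives moderation shifts a
# receiver's beliefs only insofar as the receiver trusts the platform, and
# (2) blocking harm-enabling details stops the harm regardless of beliefs.

def belief_after_moderation(prior_true, trust, block_accuracy=0.9):
    """Receiver's belief that an un-blocked claim is true (toy Bayes update)."""
    p_pass_true = 1.0                     # true claims are never blocked
    p_pass_false = 1.0 - block_accuracy   # false claims sometimes slip through
    informed = (prior_true * p_pass_true) / (
        prior_true * p_pass_true + (1 - prior_true) * p_pass_false
    )
    # A distrustful receiver treats moderation as uninformative noise.
    return trust * informed + (1 - trust) * prior_true

def expected_harm(details_blocked, share_who_would_act=0.3):
    """Harm needs the enabling details (a time, a place) to get through."""
    return 0.0 if details_blocked else share_who_would_act

if __name__ == "__main__":
    prior = 0.3
    print(round(belief_after_moderation(prior, trust=0.9), 2))  # ~0.76
    print(round(belief_after_moderation(prior, trust=0.1), 2))  # ~0.35
    print(expected_harm(details_blocked=True))    # 0.0, even with zero trust
    print(expected_harm(details_blocked=False))   # 0.3

Even in this crude form, the low-trust case shows why flagging or blocking falsehoods moves beliefs so little, while the harm calculation does not depend on trust at all.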

Our analysis shows that if users trust the platform to both know what’s right and do what’s right (and the platform truly does know what’s true and what isn’t), then the platform can successfully eliminate misinformation. The logic is simple: If users believe the platform is benevolent and all-knowing, then if something is blocked or flagged, it must be false, and if it is let through, it must be true.

You can see the problem, though: Many users don’t trust Big Tech platforms, as those previously mentioned surveys demonstrate. When users don’t trust a platform, even well-meaning attempts to make things better can make things worse. And when the platforms seem to be taking sides, that can add fuel to the very fire they are trying to put out.

Does this mean that content moderation is always counterproductive? Far from it. Our analysis also shows that moderation can be very effective when it blocks information that can be used to do something harmful.

Going back to Trump’s December 2020 post about election fraud, imagine that, instead of alerting users to the sober conclusions of the Bipartisan Policy Center, the platform had simply made it much harder for Trump to communicate the date (January 6) and place (Washington, D.C.) for supporters to gather. Blocking that information wouldn’t have prevented users from believing that the election was stolen—to the contrary, it might have fed claims that tech-sector elites were trying to influence the outcome. Nevertheless, making it harder to coordinate where and when to go might have helped slow the momentum of the eventual insurrection, thus limiting the post’s real-world harms.

[Read: So maybe Facebook didn’t ruin politics]

Unlike removing misinformation per se, removing information that enables harm can work even if users don’t trust the platform’s motives at all. When it is the information itself that enables the harm, blocking that information blocks the harm as well. A similar logic extends to other kinds of harmful content, such as doxxing and hate speech. There, the content itself—not the beliefs it encourages—is the root of the harm, and platforms do indeed successfully moderate these types of content.

Do we want tech companies to decide what is and is not harmful? Maybe not; the challenges and downsides are clear. But platforms already routinely make judgments about harm—is a post calling for a gathering at a particular place and time that includes the word violent an incitement to violence, or an announcement of an outdoor concert? Clearly the latter if you’re planning to see the Violent Femmes. Often context and language make these judgments apparent enough that an algorithm can determine them. When that doesn’t happen, platforms can rely on internal experts or even independent bodies, such as Meta’s Oversight Board, which handles tricky cases related to the company’s content policies.

And if platforms accept our reasoning, they can divert resources from the misguided task of deciding what is true toward the still hard, but more pragmatic, task of determining what enables harm. Even though misinformation is a huge problem, it’s not one that platforms can solve. Platforms can help keep us safer by focusing on what content moderation can do, and giving up on what it can’t.

Illustration by Matteo Giuseppe Pani

We’re All Just Fodder

It was always going to end this way. The truth about Kate Middleton’s absence is far less funny, whimsical, or salacious than the endless memes and conspiracy theories suggested. In a video recorded and broadcast by the BBC, the princess says she has cancer, and that she had retreated from the public eye to deal with her condition while attempting to shield her children from the spotlight. Instead, she had to contend with the internet giggling about whether she’d had a Brazilian butt lift. My colleague Helen Lewis summed it up succinctly this afternoon: “I Hope You All Feel Terrible Now.”

What is there to learn from such a sad situation? The internet is made up of people, yet its architecture abstracts this basic truth. As I wrote a few weeks ago, at the center of this months-long story was essentially “a sea of people having fun online because it is unclear whether a famous person is well or not.” Underneath the memes was always something a little bit gross and indefensible.

[Read: Just asking questions about Kate Middleton]

Perhaps humans are just wired this way—to gawk and gossip. There’s nothing new about hounding a member of the royal family or invading the privacy of a celebrity to sell tabloids or go viral. You don’t even have to be a scold about it: Famous people are wealthy and beloved at least in part because they’re fun to talk about. Exactly what we do and don’t know about their internal lives is part of the allure—the discourse comes with the territory, to a degree.

But Kate Middleton, of course, is a human too. During this saga, I kept thinking about the reappraisal of Britney Spears in 2021, as well as the backlash toward past media and tabloid coverage of her rise. A New York Times documentary dredged up old coverage of Spears from the mid-aughts, showing a young woman clearly in distress, being picked apart by glossy magazines. Her suffering became entertainment. The response to this film was swift; some of the people and institutions that had shamelessly delighted in her pain backtracked: Glamour publicly apologized to the pop star on its Instagram account, noting, “We are all to blame for what happened to Britney Spears.”

Contrast the Spears reckoning with the Middleton drama and, if you’re being generous, you can see some of that newfound attitude in the media. I was struck by Lewis’s observation that “Britain’s tabloid papers have shown remarkable restraint” throughout this mess. Progress, perhaps, but what’s also telling is that they didn’t really need to do the dirty work: Random people on the internet were doing it for them. They recklessly speculated, memed, and used their amateur sleuthing and networked faux expertise to concoct elaborate, semi-plausible explanations for her absence. Was Kate’s face actually Photoshopped from a Vogue spread? It wasn’t, but the conspiratorial tweet got 51.1 million views anyhow. Missing from much of the discourse was the idea that its main character was a person who was likely struggling. In essence, the internet democratized the tabloid experience, turning the rest of us into paparazzi and addled editors workshopping headlines and cover images—not to sell magazines, but to amass some kind of fleeting online popularity.

[Read: Kate Middleton and the end of shared reality]

In my least charitable moments, I see this toxic dynamic as the lasting legacy of social media—a giant, metrics-infused experiment in connectivity that has had a flattening, pernicious effect. In 2021, I interviewed Elle Hunt, a journalist who’d tweeted an innocuous opinion about horror movies one evening and woke up to find she was trending on Twitter, her feeds choked with thousands of furious replies and threats. When I asked her to describe the experience of becoming Twitter’s main character for the day, she summed it up thusly: “You’re repurposed as fodder for content generation in a way that’s just so dehumanizing.” Three years later, these words resonate even more strongly. What Hunt described to me then as “a platform failure” feels to me now like a learned behavior of the internet, where people, famous and not, are repurposed as fodder for content generation.

The cycle repeats itself endlessly. This afternoon, the memes about Middleton shifted—from jokes about her whereabouts to jokes about how awful it was that everyone had been making fun of a cancer patient. Tweets about feeling bad about the memes immediately became a meme unto themselves. Despite the tone shift, the reason for these posts is the same: They’re a way to take a person and repurpose their life for entertainment and engagement. If this sounds exhausting and depressing, it’s because it is.

But the internet is also too big to be one thing. Clicking through social media this afternoon, I saw dozens of heartfelt testimonials, apologies, and well-wishes for the princess. For a moment, from my perspective, it felt like watching a collective of people come to their senses. A recognition, perhaps, of the humanity of the person at the center of the maelstrom.

Then, only a few seconds later, I saw a different post. It was a screenshot from the blockchain platform Solana, where users can create their own cryptographic tokens for others to invest in. The name of the token in the screenshot is “kate wif cancer,” and its logo is a still of the princess sitting on a bench, taken from this afternoon’s video. The coin’s market cap briefly surpassed $120,000. Only six minutes later, the price had cratered—the result of a standard memecoin sell-off. An awful thing happened. Some people made a joke about it. Other people made some money. And then everyone moved on.

Odd Andersen / AFP / Getty

Social Media Is Not What Killed the Web

“Was the internet really this bad?” I wondered to myself as I read the September 1995 issue of The Atlantic. I was reading the issue in digital form, displayed on Netscape Navigator 3 on a mid-’90s Macintosh. Or, at least, on a software version of the browser and Mac provided on the website OldWeb.Today. The site houses an emulator that connects to the Internet Archive’s record of websites, providing a full computing experience of the World Wide Web of three decades ago.

That experience was the badness I was pondering. Not the magazine itself—which began publishing online with this issue, whose cover story asked “How Lincoln Might Have Dealt With Abortion”—but the way I was reading it. The article page looked awful: The nameplate was strangely positioned, and the text was hard to read. Resizing the browser window fixed the layout, but my eyes and brain still struggled to process the words. I was alive and online that fall 29 years ago, but in my memory the web was magical, like a portal into a new way of life—not a clunky mess like this. Now, having had the chance to travel back in time, I wonder if the clunkiness wasn’t in its way a midwife to that wonder.

Sometime in late 1994, a friend of mine opened a program called Mosaic in the basement computer lab at the university library. “You’ve got to see this,” he said as he started typing in akebono.stanford.edu. A gray page loaded with “Yahoo” printed at the top of a bullet list of blue links. Nothing special, but I was impressed: The World Wide Web was still new, and finding anything of use was difficult. This new, playfully named website offered a directory of sites by category—computers, politics, entertainment, and so forth.

[Read: Yahoo, the destroyer]

Now, using the OldWeb emulator, I’d been transported back into this era: 1995 to 1996. I didn’t know where to go on the web back then—Google wouldn’t arrive for another few years—so I returned to primeval Yahoo for help. Poking through this directory anew, I visited a website on film and television careers, where I took in an interview with the prop master David Touster (the most exciting part of his job: “the pleasure of creating a vision with creative people”). I visited a webzine about gender equality, illustrated with loosely rendered, line-drawing figures that, I recalled, were a bit of an aesthetic at the time. I visited a site called WebEthics.com to see how the early internet thought about online dangers. The biggest one turned out to be money. Commercial websites should disclose their purpose, Web Ethics said. There was a list of websites that failed to do this, called the Dirty Dozen. The top entry, a site called All Business Network, was accused of being a stealth infomercial. No. 2 read “Coming soon,” and the other 10 slots were blank.

This is how the internet felt back then: promising but empty. Nobody says surfing the web anymore, but at the time the phrase made sense as a description of the lugubrious, often frustrating task of finding entertainment. A visitor online felt like a beach bum waiting to catch a wave. (Channel surfing described a similar vibe one got from watching television.) A lot would change in the years that followed. For one thing, much to the chagrin of the operators of WebEthics.com, the internet quickly commercialized. But even then, “content,” as we call it today, was rare. You might read an article or visit a brochure-ware website for a car or a vacuum, or even purchase a book at Amazon. What you wouldn’t do was spend your whole day online.

Connectivity was one reason. The library computer lab was connected via high-speed ethernet, but home use still monopolized the phone line as bits were eked out slowly from a modem. Wi-Fi wasn’t yet widely available, and a computer was a place you had to go in the house. Using the OldWeb emulator on my laptop, I recalled how much we used to rely on the status bar at the bottom of the window (now mostly retired) for updates on the process of loading a webpage, and on the little browsing animation—Netscape’s was a view of shooting stars—for distraction while we waited. Online life was mostly waiting.

Because every click brought more delay, one clicked more deliberately. Browsers displayed visited links in a different color (purple by default, instead of blue). They still do this, but nobody cares anymore; using the OldWeb browser reminded me that those purple links helped you navigate a strange and arduous terrain. Yes, that’s where I meant to go, or Nope, already been there.

Once you reached your destination, you’d be confronted with a series of distractions. Screens were small back then, with low-resolution text and graphics. On The Atlantic’s old website, the type was small and pixelated. Italics were not truly semicursive, with curved letterforms, but slanted versions of roman. Lines of text ran most of the way across the screen without a break. In order to read an entire article using Netscape in Macintosh System 7, I had to interrupt myself repeatedly to click the scroll-bar button. These minor glitches may have worn away our capacity to focus. But we had no idea how much worse that problem could become.

Much has been made of the ways in which social-media sites made internet life compulsive and all-consuming. Web search and shopping, too, have turned people’s data into ads, leading them to spend ever-greater quantities of time and money online. But my OldWeb visit revealed to me that the manufacturers of computer devices and their basic software made this transformation possible. Instagram or Google would have been compelling on the old internet, but they’re surely more so now, seen on bright displays with the pixel density of a printed magazine. Before the web was good—before PCs were good—one had trouble spending hours just in Word or Excel. That may have been a blessing.

It’s easy to portray the websites and browsers on OldWeb.Today as primitive, early steps along an evolutionary path. But at least some of what hadn’t yet been figured out about the web simply wasn’t worth pursuing. The World Wide Web of the 1990s was a place you went into for a little while until it spat you out. As an activity, it had an end—which came when someone needed the phone, when your eye strain overcame your interest, when the virtual ocean failed to spawn a wave worth surfing. Now the internet goes on forever.

Illustration by Ben Kothe / The Atlantic. Source: Getty.

The End of Foreign-Language Education

A few days ago, I watched a video of myself talking in perfect Chinese. I’ve been studying the language on and off for only a few years, and I’m far from fluent. But there I was, pronouncing each character flawlessly in the correct tone, just as a native speaker would. Gone were my grammar mistakes and awkward pauses, replaced by a smooth and slightly alien-sounding voice. “My favorite food is sushi,” I said—wo zui xihuan de shiwu shi shousi—with no hint of excitement or joy.

I’d created the video using software from a Los Angeles–based artificial-intelligence start-up called HeyGen. It allows users to generate deepfake videos of real people “saying” almost anything based on a single picture of their face and a script, which is paired with a synthetic voice and can be translated into more than 40 languages. By merely uploading a selfie taken on my iPhone, I was able to glimpse a level of Mandarin fluency that may elude me for the rest of my life.

HeyGen’s visuals are flawed—the way it animates selfies almost reminded me of the animatronics in Disney’s It’s a Small World ride—but its language technology is good enough to make me question whether learning Mandarin is a wasted effort. Neural networks, the machine-learning systems that power generative-AI programs such as ChatGPT, have rapidly improved the quality of automatic translation over the past several years, making even older tools like Google Translate far more accurate.

At the same time, the number of students studying foreign languages in the U.S. and other countries is shrinking. Total enrollment in language courses other than English at American colleges decreased 29.3 percent from 2009 to 2021, according to the latest data from the Modern Language Association, better known as the MLA. In Australia, only 8.6 percent of high-school seniors were studying a foreign language in 2021—a historic low. In South Korea and New Zealand, universities are closing their French, German, and Italian departments. One recent study from the education company EF Education First found that English proficiency is decreasing among young people in some places.

Many factors could help explain the downward trend, including pandemic-related school disruptions, growing isolationism, and funding cuts to humanities programs. But whether the cause of the shift is political, cultural, or some mix of things, it’s clear that people are turning away from language learning just as automatic translation becomes ubiquitous across the internet.

[Read: High-school English needed a makeover before ChatGPT]

Within a few years, AI translation may become so commonplace and frictionless that billions of people take for granted the fact that the emails they receive, videos they watch, and albums they listen to were originally produced in a language other than their native one. Something enormous will be lost in exchange for that convenience. Studies have suggested that language shapes the way people interpret reality. Learning a different way to speak, read, and write helps people discover new ways to see the world—experts I spoke with likened it to discovering a new way to think. No machine can replace such a profoundly human experience. Yet tech companies are weaving automatic translation into more and more products. As the technology becomes normalized, we may find that we’ve allowed deep human connections to be replaced by communication that’s technically proficient but ultimately hollow.

AI language tools are now in social-media apps, messaging platforms, and streaming sites. Spotify is experimenting with using a voice-generation tool from the ChatGPT maker OpenAI to translate podcasts in the host’s own voice, while Samsung is touting that its new Galaxy S24 smartphone can translate phone calls as they’re occurring. Roblox, meanwhile, claimed last month that its AI translation tool is so fast and accurate, its English-speaking users might not realize that their conversation partner “is actually in Korea.” The technology—which works especially well for “high-resource languages” such as English and Chinese, and less so for languages such as Swahili and Urdu—is being used in much more high-stakes situations as well, such as translating the testimony of asylum seekers and firsthand accounts from conflict zones. Musicians are already using it to translate songs, and at least one couple credited it with helping them to fall in love.

One of the most telling use cases comes from a start-up called Jumpspeak, which makes a language-learning app similar to Duolingo and Babbel. Instead of hiring actual bilingual actors, Jumpspeak appears to have used AI-generated “people” reading AI-translated scripts in at least four ads on Instagram and Facebook. At least some of the personas shown in the ads appear to be default characters available on HeyGen’s platform. “I struggled to learn languages my whole life. Then I learned Spanish in six months, I got a job opportunity in France, and I learned French. I learned Mandarin before visiting China,” a synthetic avatar says in one of the ads, while switching between all three languages. Even a language-learning app is surrendering to the allure of AI, at least in its marketing.

Alexandru Voica, a communications professional who works for another video-generating AI service, told me he came across Jumpspeak’s ads while looking for a program to teach his children Romanian, the language spoken by their grandparents. He argued that the ads demonstrated how deepfakes and automated-translation software could be used to mislead or deceive people. “I’m worried that some in the industry are currently in a race to the bottom on AI safety,” he told me in an email. (The ads were taken down after I started reporting this story, but it’s not clear if Meta or Jumpspeak removed them; neither company returned requests for comment. HeyGen also did not immediately respond to a request for comment about its product being used in Jumpspeak’s marketing.)

The world is already seeing how all of this can go wrong. Earlier this month, a far-right conspiracy theorist shared several AI-generated clips on X of Adolf Hitler giving a 1939 speech in English instead of the original German. The videos, which were purportedly produced using software from a company called ElevenLabs, featured a re-creation of Hitler’s own voice. It was a strange experience, hearing Hitler speak in English, and some people left comments suggesting that they found him easy to empathize with: “It sounds like these people cared about their country above all else,” one X user reportedly wrote in response to the videos. ElevenLabs did not immediately respond to a request for comment. (The Atlantic uses ElevenLabs’ AI voice generator to narrate some articles.)

[Read: The last frontier of machine translation]

Gabriel Nicholas, a research fellow at the nonprofit Center for Democracy and Technology, told me that part of the problem with machine-translation programs is that they’re often falsely perceived as being neutral, rather than “bringing their own perspective upon how to move text from one language to another.” The truth is that there is no single right or correct way to transpose a sentence from French to Russian or any other language—it’s an art rather than a science. “Students will ask, ‘How do you say this in Spanish?’ and I’ll say, ‘You just don’t say it the same way in Spanish; the way you would approach it is different,’” Deborah Cohn, a Spanish- and Portuguese-language professor at Indiana University Bloomington who has written about the importance of language learning for bolstering U.S. national security, told me.

I recently came across a beautiful and particularly illustrative example of this fact in an article written by a translator in China named Anne. “Building a ladder between widely different languages, such as Chinese and English, is sometimes as difficult as a doctor building a bridge in a patient’s heart,” she wrote. The metaphor initially struck me as slightly odd, but thankfully I wasn’t relying on ChatGPT to translate Anne’s words from their original Mandarin. I was reading a human translation by a professor named Jeffrey Ding, who helpfully noted that Anne may have been referring to a type of heart surgery that has recently become common in China. It’s a small detail, but understanding that context brought me much closer to the true meaning of what Anne was trying to say.

[Read: The college essay is dead]

But most students will likely never achieve anything close to the fluency required to tell whether a translation rings close enough to the original or not. If professors accept that automated technology will far outpace the technical skills of the average Russian or Arabic major, their focus would ideally shift from grammar drills to developing cultural competency, or understanding the beliefs and practices of people from different backgrounds. Instead of cutting language courses in response to AI, schools should “stress more than ever the intercultural components of language learning that tremendously benefit the students taking these classes,” Jen William, the head of the School of Languages and Cultures at Purdue University and a member of the executive committee of the Association of Language Departments, told me.

Paula Krebs, the executive director of the MLA, referenced a beloved 1991 episode of Star Trek: The Next Generation to make a similar point. In “Darmok,” the crew aboard the starship Enterprise struggles to communicate with aliens living on a planet called El-Adrel IV. They have access to a “universal translator” that allows them to understand the basic syntax and semantics of what the Tamarians are saying, but the greater meaning of their utterances remains a mystery.

It later becomes clear that their language revolves around allegories rooted in the Tamarians’ unique history and practices. Even though Captain Picard was translating all the words they were saying, he “couldn’t understand the metaphors of their culture,” Krebs told me. More than 30 years later, something like a universal translator is now being developed on Earth. But it similarly doesn’t have the power to bridge cultural divides the way that humans can.

Illustration by Matteo Giuseppe Pani.

Baltimore Lost More Than a Bridge

You could see the Francis Scott Key Bridge from Fort McHenry, the pentagon-shaped keep that inspired the bridge’s namesake to write the verses that became our national anthem. You could see it from the pagoda in Patterson Park, another strangely geometric landmark from which I’ve cheered on teams at Baltimore’s annual kinetic-sculpture race. You could see it from the top of Johns Hopkins Hospital, the city’s biggest employer. This morning, my husband sent me a photo of the familiar view out his window at work—now dominated not by the soaring bridge, but by a hulking container ship, halted in the middle of the water with metal strewn over and around it.

Videos of the bridge’s collapse are stunning. At about 1:30 a.m., the ship, called the Dali, loses power and crashes into one of the bridge’s central pillars. Within 15 seconds, the straight line of the bridge’s span bends and breaks, and the entire structure tumbles into the harbor.

The bridge was one of only three roadways crossing Baltimore’s defining waterways, and until this morning, each of those routes served its own purpose. The I-95 tunnel, which cuts across the mouth of the harbor, was for people commuting between Baltimore and Washington, D.C. The famously congested Baltimore Harbor Tunnel—part of I-895—passes beneath the Patapsco River and was for people bypassing the city completely. The Key Bridge, farther down the river toward the Chesapeake Bay, handled the least traffic of the three. But it was part of the Baltimore Beltway, the circular highway that forms the unofficial boundary of the Baltimore metro area and shuttles suburbanites into the city to help make it run. Of the three routes, the Key Bridge was the most visible and beautiful, standing alone above the water in a long, graceful arch.

[David A. Graham: Why ships keep crashing]

Officials had enough notice of the Dali’s distress that they blocked cars from entering the bridge before its collapse, but Maryland’s transportation secretary told reporters this morning that the department was searching for six missing construction workers who may have fallen into the 48-degree water. The crew was working to fix potholes—to keep Baltimore’s beat-up roads in good enough shape to keep traffic flowing into the city. Two workers have already been pulled from the water, one of whom was in such bad shape that they couldn’t be asked what happened. As of about 10:08 a.m., no one but the construction crew was believed to have fallen into the water. But had the collapse happened a few hours later, hundreds of people might well be dead: On average, about 31,000 cars and trucks cross the bridge every day.

The cars, for now, can be rerouted. But the remnants of the bridge (not to mention the Dali) are blocking the city’s waterways for any other ships that are scheduled to enter. Baltimore is now America’s 17th-biggest port by tonnage—a respectable rank, if a far cry from the early days of the United States, when shipping made the city the third-most-populous in the country—and may well drop further down the list if the harbor remains inaccessible. (Maryland Governor Wes Moore has yet to comment on when the port might reopen for business.) But Baltimore is a city defined by water. The Gwynns Falls and the Jones Falls trickle through our parks. The Inner Harbor is our Times Square; our economy is tied up in trade and transportation. Ships are in the city’s bones. The brackish harbor is in its heart.

Baltimore is also a city that can’t catch a break, full of people who find joy in its absurdities. The Trash Wheel Family—a set of four solar- and hydro-powered, googly-eyed machines that keep litter in the city’s rivers from entering the harbor—are local celebrities. Every week, a group of magnet-fishers meets at the harbor to pluck benches, scooters, and other treasures from the water, proudly displaying their haul along the sidewalk. Every year, bicycle-powered moving sculptures shaped like dragons and dogs and fire trucks compete to paddle down a short stretch of the harbor without capsizing. But no one ever really forgets that the harbor itself is visibly polluted, that much of the city’s infrastructure is breaking and broken, that the state has held back funding to fix it, that Baltimore’s mayoral administrations have been riddled with corruption, that people are still getting by on too little, that the murder rate is still too high.

[Read: The aftermath of the Baltimore bridge collapse]

Baltimore Harbor is one of the city’s most important links to the rest of the world; to cut it off is to clog our blood supply. Moore has already said that the bridge will be rebuilt to honor this morning’s victims. We can still get out of the city with trains and cars. But this morning, Baltimore feels that much more claustrophobic. Looking out toward the Chesapeake used to be an exercise in optimism, in feeling all the possibilities of being connected to the wider world and the terrifyingly wide swell of the Atlantic. Today, it’s an exercise in mourning and resolve.

Tasos Katopodis / Getty

You Can’t Even Rescue a Dog Without Being Bullied Online

Lucchese is not the world’s cutest dog. Picked up as a stray somewhere in Texas, he is scruffy and, as one person aptly observed online, looks a little like Steve Buscemi. (It’s the eyes.)

Isabel Klee, a professional influencer in New York City, had agreed to keep Lucchese, or Luc, until he found a forever home. Fosters such as Klee help move dogs out of loud and stressful shelters so they can relax and socialize before moving into a forever home. (The foster can then take on a new dog, and the process restarts.) Klee began posting about Luc on TikTok, as many dog fosters do. “I fell in love with him, and the internet fell in love with him,” she told me over the phone earlier this month. “Every single video I posted of him went viral.” In one such video, which has attained nearly 4 million views since it was published in October, Klee’s boyfriend strokes Luc, who is curled up into his chest like a human infant. The caption reads, “When your foster dog feels safe with you 🥲🫶.”

Beneath this post are comments such as “this is so special 🥹🥹” and “Wow my heart 😩❤️❤️.” And then there are others: “If this story doesn’t end with you adopting him I’m going to SCREAM FOREVER,” and “If you don’t adopt him already, I will slice you into dozens of pieces.”

The idea behind Klee’s posts, as with any foster’s, is to generate attention to help a rescue dog find their forever home: More eyeballs means more possible adopters. But something strange also tends to happen when these videos are posted. Even when the comment sections are mostly positive, a subset of commenters will insist that the foster dog shouldn’t go anywhere—that people like Klee are doing something wrong by searching for the dog’s forever home. Sure, some of the comments are jokes. (Klee seemed generally unbothered by them in our conversation: “I don’t think people have any ill will toward me or the situation,” she said.) But others don’t seem to be. “We frequently get absurd comments like ‘these dogs are forming lifelong bonds with you, only to be abandoned again and have social anxiety and abandonment PTSD,’” April Butler, another dog foster and content creator, who runs a TikTok account with more than 2 million followers, told me over email.

[Read: Please get me out of dead-dog TikTok]

Becoming a dog foster effectively means signing up to be a pseudo–content creator, if you aren’t, like Klee and Butler, one already: You are actively working to interest your audience in adoption by taking photos and videos of your temporary pup looking as cute as possible. You could opt out of the circus entirely, but doesn’t that sweet, nervous dog deserve every bit of effort you can muster? The whole thing is a neat summary of the odd social-media economy: People post, and audiences feel entitled to weigh in on those posts, even when the conversation becomes completely unmoored from anything resembling reality. Even when the subject at hand is something as inoffensive and apolitical as animal fostering.  

Of course, people have long been unusually cruel on social media. Last year, my colleague Kaitlyn Tiffany reported on how strangers have unabashedly trolled the relatives of dead people, even children, over their vaccine status, suggesting that something about this brutality is endemic to the social web: “As much talk as there has been about whether or not social media has caused political polarization by steering people in certain directions and amplifying certain information with out-of-control algorithms (an assumption that recent scientific research calls into question), it’s useful to remember that even the most basic features of a social website are conducive to the behavior we’re talking about.” Psychologists note the “online disinhibition effect,” whereby people act with less restraint when they’re writing to others over the internet. Even the worst comments on dog-fostering videos pale in comparison with the harassment and even real-life violence that has resulted from other abuse on social media.

[Read: How telling people to die became normal]

Posting cute little videos of dogs in need—the internet’s bread and butter, really—can draw some low-grade cyberbullying. People who’d never accuse a dog foster to their face of being heartless apparently have no problem sending such messages on Instagram. Algorithms, optimizing for engagement, can encourage public pile-ons. What once might’ve been a conversation among family, friends, and neighbors suddenly reaches a new scale as feeds blast out local dog-foster posts around the country and the world (which is, of course, partly the point). People who have no connection to that particular region, or intention to adopt, suddenly have opinions about where the dog should end up, and can share them.

Users seem to be developing a parasocial relationship with these animals. “People can get very connected to these dogs they see online,” Jen Golbeck, who teaches information studies at the University of Maryland and fosters dogs herself, told me. She explained that followers on social media see “the selfless sacrifice, the care, the love that fosters give to the dogs,” only to feel betrayed when they hear that the dog is moving along in the system. Social media encourages these parasocial dynamics time and time again. Fans project onto the personal lives of beloved celebrities, bullying their enemies until the celeb has to release a statement telling people to back off. Average teenagers find themselves becoming a trending topic for millions; hordes of people speculate about a missing Kate Middleton, only to have her come forward and reveal a cancer diagnosis.

I started fostering last fall, and since then, I’ve been thinking a lot about influencer creep—a term coined by the media scholar Sophie Bishop to describe how so many types of work now involve constantly keeping up with social platforms. In an essay for Real Life magazine, Bishop writes about expectations to post and post and post, coupled with “the on-edge feeling that you have not done enough” to promote yourself online. This creep now touches even volunteer work. Though I’ve never been bullied, I find myself contemplating the same double bind that haunts so much of online life: post, and risk all the negative consequences of posting, or don’t post, and risk missing out on all the opportunities that come with reaching a larger audience.

[Read: I don’t like dogs]

Some commenters may be acting out of genuine concern for animal welfare, but their moral case is limited. Research suggests that even temporarily putting a shelter dog in foster care lowers their stress levels and improves their sleep. “I highly doubt moving from a foster home to an adoptive home is anywhere as stressful as returning to and living in the shelter,” Lisa Gunter, a professor at Virginia Tech and one of the study’s authors, told me over email. “Caregivers and their homes increase shelters’ capacity for care. To ask caregivers to adopt their animals reduces shelters’ ability to help dogs in their community.” Lashing out on behalf of a dog can have the effect of diminishing the human on the other side of the screen—dropping a foster dog off at their new home is difficult enough without a Greek chorus of internet strangers harassing you.

And explaining this, it turns out, is another content opportunity. Some creators have recently taken to making moving montages set to wistful music such as Phoebe Bridgers’s “Scott Street.” As the sad music swells, they flash clips of recent foster pets, pointing out that they had to say goodbye to each dog in order to meet the next one.

Butler’s version, which she posted after receiving “hundreds and hundreds” of comments and messages encouraging her to keep a foster named Addie, got nearly 5 million views. The comment section here is much friendlier. Perhaps social media can help educate and move the fostering conversation along. Or maybe the fostering conversation is just more fodder, content blocks that the algorithm gobbles up. The content economy cycles onward.

Illustration by The Atlantic. Sources: Brothers91 / Getty.

Sam Bankman-Fried’s Dream Came True

If there’s a single image that defines the crypto frenzy of 2021 and 2022, it’s that of the actor Matt Damon, calm and muscled, delivering the immortal proverb “Fortune favors the brave.” It was part of an ad for Crypto.com, yet it somehow captured the absurdity of what the crypto industry promised at the time: not just a digital asset, but a ludicrously magnified vision of the future.

Sam Bankman-Fried was the opposite of all that. The crypto mogul did not outwardly aspire to build futuristic crypto-powered cities or hype up ape-themed NFT video games. Even though he was a persistently disheveled Millennial who apparently slept on a bean bag, Bankman-Fried was the industry’s rule-following adult in the room. Regulating crypto was a good idea, he often said, even if it came at the expense of his business.

SBF, it turns out, was not a rule follower. In November 2022, FTX—his $32 billion crypto exchange—was suddenly unable to pay out customer deposits and collapsed soon after. Almost exactly a year later, SBF was convicted of seven counts of fraud and conspiracy after a trial that led his own lawyer to call him “the worst person [he’d] ever seen do a cross-examination.” This morning, Bankman-Fried was sentenced to 25 years in federal prison—a judgment that marks the end of a protracted legal saga, and one of the most striking downfalls in the history of American finance.

But in the interim, the crypto industry has ironically become more similar to the vision that SBF always said he had for it. SBF and many of his more explicitly anti-government competitors are out of the picture, the NFT-driven hype bubble has summarily popped, and more and more crypto-backed investment products are making their way into the mainstream. Maybe now crypto is finally ready to grow up.

SBF had always distinguished himself from other crypto CEOs with his relatively sober rhetoric around what these tokens could actually do for people. Crypto was invented at the height of the Great Recession as a decentralized alternative to the traditional financial system—a place explicitly beyond the purview of big banks and heedless regulators.

For executives like the Winklevoss twins, who run a crypto firm called Gemini, the appeal is at least partly ideological, a potential path to self-determination. “Bitcoin is your best defense against the Fed,” Tyler Winklevoss wrote on X in 2021. The eccentric software magnate and crypto influencer Michael Saylor once famously described Bitcoin as “a swarm of cyber hornets serving the goddess of wisdom, feeding on the fire of truth, exponentially growing ever smarter, faster, and stronger behind a wall of encrypted energy.” (Don’t think about it too hard.)

This sort of breathlessness is par for the course in crypto, but SBF signaled that he wanted to work within the established system, as opposed to building parallel rails. When he founded FTX, in 2019, Bitcoin was a decade old but still closely associated with fraud and bubbles. As a businessman and trader, he tried to fast-track the process of bringing crypto mainstream, guiding this world of notoriously lawless, scam-ridden financial instruments into the full light of regulatory clarity and cultural maturity. When FTX bought the naming rights to the Miami Heat’s arena, in 2021, and spent millions on a Super Bowl ad in a bid to make the company a household name, SBF claimed it was all part of a plan to build a legacy as a shepherd for the industry.

Of course, this was all downstream from SBF’s carefully cultivated image—part of what built his reputation outside the finance world. His obsession with giving his money away (he once said he would spend more than $100 million to stop Donald Trump in 2024) underlay a mentality that crypto is simply a pathway to money, rather than a statement in and of itself. During the trial, SBF’s lawyers quoted his father saying it explicitly: “Sam started FTX as a way to earn to give.” That image even made it to today’s sentencing: According to one reporter, SBF’s defense described him as a friend to animals and a charitable giver.

It’s hard to say how much of that image was real; in one memo to himself, revealed during the trial, SBF apparently considered “com[ing] out as Republican.” And where is crypto now? SBF is going away, and his onetime rival Changpeng Zhao was recently forced to resign from his position as CEO of the largest crypto exchange in the world after pleading guilty to violating money-laundering laws (the new venture he launched while awaiting sentencing, an education start-up called Giggle Academy, is decidedly not a crypto company). Do Kwon, who co-founded one of the projects responsible for the 2022 crypto crash, was arrested in Montenegro last year and is on trial for fraud.

Although it certainly helps that these rule breakers are out of the picture, crypto’s subdued demeanor in 2024 has a lot to do with the fact that government regulators have made a point of nailing crypto cowboys such as SBF and Zhao to the wall. It goes beyond specific vendettas against bad actors. SEC Chairman Gary Gensler—seen by many as the crypto industry’s biggest nemesis—recently described crypto as “a field that’s been rife with fraud and manipulation.” Last year, mostly a period of sobriety and recovery for crypto, was punctuated by the SEC’s near-constant announcements of new fines for misbehaving companies in this industry.

Crypto’s cultural profile remains low relative to the fever pitch of 2021, but the crypto industry is somehow on the road to recovery. Coins are up across the board. Bitcoin ETFs—long hailed as a kind of messianic vehicle for bringing the mainstream onboard the crypto train—are finally out in the world. And even blockchain-oriented venture-capital firms seem to be emerging from hibernation. Call it cautious optimism: Although crypto won’t ever be the kind of buttoned-up, totally law-abiding industry the U.S. government would probably like it to be (look no further than the recent meme-coin frenzy to see this puerility in action), it now appears far more integrated into the existing financial system than it did just a few years ago.

One need only glance at the many dozens of pages of victim-impact statements now filed with the Southern District of New York to get a sense of the real harm caused by the FTX fraud. As the gap between crypto the industry and crypto the cypherpunk paradigm continues to widen, today’s sentencing serves as a stark reminder of what crypto really is in practice. It turns out to be a lot like how SBF saw crypto in the first place. No more illusions, no more world-changing expressions of libertarian values. In a post-FTX world, maybe crypto is really just money.

Yuki Iwamura / Bloomberg / Getty

Why Rich Shoppers Get So Angry About Hermès

Should you want to own an Hermès Birkin handbag, there are two main reasons that’s probably not in the cards. The first limiting factor is that even in its smallest size and most basic format, the Birkin, which has been one of the luxury industry’s ultimate brass rings for decades, has a starting price tag of more than $11,000—roughly what you’d currently pay for a gently used 2013 Honda Accord. The second is that even if you have the money, you can’t simply waltz into one of the hundreds of Hermès boutiques worldwide and walk out with the bag of your choice, and certainly not a Birkin. There are too few of these bags—of most types of Hermès bags, at this point—to satisfy everyone willing to pay up, even at five-figure prices. And if they’re in stock, they probably still aren’t available to you.

For a chance at getting a Birkin, you have to play the “Hermès Game,” according to aspirants who gather online to discuss what they’ve gleaned about its vague rules. Most agree that in order to increase your odds of being offered a Birkin or Hermès’s similar (and similarly popular) Kelly bag, you need to build a purchase history at an Hermès store by buying products that are more readily available—shoes, home goods, silk scarves, jewelry. Nothing Hermès sells is affordable, so building such a history would cost, at a minimum, thousands of dollars. Prospective customers commonly report being told by sales associates, who are said to have broad authority over how coveted bags are meted out after arriving in stores, that priority for scarce products goes to loyal customers. How much one would have to buy in order to demonstrate loyalty, relative to your competition on any given boutique’s client list, is anyone’s guess. The overwhelming majority of people are going to be turned away when they ask for a Birkin, even if they pick up a pair of sandals and a few bangles here and there.

You can probably guess how this goes over. People with enough money for a frisky little Birkin purchase are generally not used to hearing the word no, and some of them react to it like their civil rights are being violated. According to a lawsuit filed in California last month by two people who recently had no luck buying a Birkin (though one of them already owned at least one of the bags), what has been violated is actually federal antitrust law. Hermès has a monopoly on Birkin bags, so the suit alleges, and the Hermès Game amounts to tying, a potentially anticompetitive practice in which buyers are required to purchase additional, unwanted goods as a precondition of receiving a desirable product.

[Read: Something odd is happening with handbags]

Hermès did not respond to a request for comment. So far, legal experts seem dubious on the suit’s merits. Hermès does not control the robust secondhand market for its bags, and people sometimes do just walk into a boutique off the street, ask nicely, and get lucky. But the lawsuit’s very existence is a glimpse inside the luxury industry’s most precarious balancing act: How do you sell putatively rare things at corporate scale?

The most important thing to understand about why Hermès bags drive so many people wild is that they’re actually pretty rare, compared with the products made by the brand’s closest competitors. Hermès is a huge company—it has a bigger market cap than Nike—but it has mostly resisted modern, high-capacity manufacturing methods. Instead, it has trained an army of traditional leatherworkers and other tradespeople to do things the old way at a large scale. Birkins and Kellys are assembled by hand, beginning to end, by a single craftsman. According to a 2019 story in T, The New York Times’ style magazine, the process for the Kelly takes 20 to 25 hours of work; for certain Birkins, some estimates put it as high as 40 hours. Those practices put a hard limit on supply that can’t be quickly or easily raised, and they also help Hermès and its fans spin a compelling tale about why its prices—high even among luxury brands, though not by as wide a margin as they once were—are justifiable. If anything, the lively resale market for Birkins, where pristine bags almost always sell for more than their retail price, suggests that the brand is undercharging relative to what customers will bear. Luxury is an industry built on hierarchy, and when it comes to handbags, Hermès is alone at the top among global brands.

All of this—the European workshops, the training academy for craftspeople, the plying of centuries-old trades—is the stuff that many of Hermès’s competitors encourage you to assume that they, too, must be doing, by virtue of being based in France or Italy and selling very expensive things under an extremely old name. The reality is a little different. It’s true that most of these brands do still operate ateliers and workshops where old-school craftsmen develop new designs or manufacture the company’s most expensive tier of products. Even so, modern luxury is a high-volume business that has been modernized, scaled up, and made far more efficient, most prominently by LVMH, the corporate conglomerate that owns brands including Louis Vuitton, Dior, and Fendi. Many of the changes LVMH implemented are now standard operating procedure for major brands at large. Scale and efficiency mean less handwork and greater speed on a larger number of products. The biggest players have at their disposal a staggering array of material resources and mammoth capacity to produce goods, many of which are created by methods and available in quantities that are not especially distinct from other types of consumer goods.

This glut is the paradox at the heart of the luxury industry. These goods derive cultural and monetary value from scarcity, but there are relatively few situations in which demand genuinely outstrips supply. Luxury brands, then, must manufacture the illusion of scarcity, which is one reason that limited editions and collaborative releases have become so popular—they impose brief bouts of lack on top of industrial abundance. The regular stuff, which is what most people buy anyway, is still there waiting for you, no matter where you are, if you want to pony up.

[Read: Make the collabs stop]

Over the decades of growth that have turned the executives of these luxury conglomerates into some of the richest men in the world, their wealthy customers have settled into a routine that flatters a sense of exceptionalism: What the customers want is almost always available to them, and that availability still feels special and enticing because those products are theoretically not available to some unseen other, even as sales of those exact same products tick ever upward worldwide. Buying luxury goods isn’t intoxicating to so many people because they all love fine craftsmanship, or even necessarily because they all want everyone to know exactly how much money they have. At least in part, it’s because arriving at a velvet rope and being let inside is a thrill, and modern luxury businesses have found ways to preserve that feeling while raising the velvet rope for as many paying customers as humanly possible.

When customers groomed into this kind of acquisitive ease encounter actual, material scarcity from which they are not exempted—when, say, a Birkin isn’t available to them even though they have $15,000 to spend—the effect can be combustible. The market’s paradox, which brands are adept at keeping out of sight and out of mind, becomes just a little too visible. Luxury goods, at least in the truest sense of the term, aren’t infinitely scalable—intense investments of resources, materials, skill, labor, and time are inherent to the enterprise, and their limitations cannot be fully mitigated.

But the luxury industry has changed since the days when handcraft dominated what was then a much smaller and far less efficient market. The term now refers less to the exceptional material attributes of an object and more to its price point, which was a trade-off made to turn the business into a fabulously profitable global juggernaut. Most of these goods aren’t rare; they’re merely expensive, which is sort of obvious if you think about it for too long. Which is why Hermès drives so many wealthy customers to distraction. Its velvet rope is one of the last that a single credit-card swipe isn’t guaranteed to lift.

Illustration by The Atlantic. Source: Neyya / Getty.

AI Has Lost Its Magic

I frequently ask ChatGPT to write poems in the style of the American modernist poet Hart Crane. It does an admirable job of delivering. But the other day, when I instructed the software to give the Crane treatment to a plate of ice-cream sandwiches, I felt bored before I even saw the answer. “The oozing cream, like time, escapes our grasp, / Each moment slipping with a silent gasp.” This was fine. It was competent. I read the poem, Slacked part of it to a colleague, and closed the window. Whatever.

A year and a half has passed since generative AI captured the public imagination and my own. For many months, the fees I paid to ChatGPT and Midjourney felt like money better spent than the cost of my Netflix subscription, even just for entertainment. I’d sit on the couch and generate cheeseburger kaiju while Bridgerton played, unwatched, before me. But now that time is over. The torpor that I felt in asking for Hart Crane’s ode to an ice-cream sandwich seemed to mark the end point of a brief, glorious phase in the history of technology. Generative AI appeared as if from nowhere, bringing magic, both light and dark. If the curtain on that show has now been drawn, it’s not because AI turned out to be a flop. Just the opposite: The tools that it enables have only slipped into the background, from where they will exert their greatest influence.

Looking back at my ChatGPT history, I used to ask for Hart Crane–ice-cream stuff all the time. An Emily Dickinson poem about Sizzler (“In Sizzler’s embrace, we find our space / Where simple joys and flavors interlace”). Edna St. Vincent Millay on Beverly Hills, 90210 (“In sun-kissed land where palm trees sway / Jeans of stone-wash in a bygone day”). Biz Markie and then Eazy-E verses about the (real!) Snoop Dogg cereal Frosted Drizzlerz. A blurb about Rainbow Brite in the style of the philosopher Jacques Derrida. I asked for these things, at first, just to see what each model was capable of doing, to explore how it worked. I found that AI had the uncanny ability to blend concepts both precisely and creatively.

[Read: The AI Mona Lisa explains everything]

Last autumn, I wrote in The Atlantic that, at its best, generative AI could be used as a tool to supercharge your imagination. I’d been using DALL-E to give a real-ish form to almost any notion that popped into my head. One weekend, I spent most of a family outing stealing moments to build out the fictional, 120-year history of a pear-flavored French soft drink called P’Poire. Then there was Trotter, a cigarette made by and for pigs. I’ve spent so many hours on these sideline pranks that the products now feel real to me. They are real, at least in the way that any fiction—Popeye, Harry Potter—can be real.

But slowly, invisibly, the work of really using AI took over. While researching a story about lemon-lime flavor, I asked ChatGPT to give me an overview of the U.S. market for beverages with this ingredient, but had to do my own research to confirm the facts. In the course of working out new programs of study for my university department, I had the software assess and devise possible names. Neither task produced a fraction of the delight that I’d once derived from just a single AI-generated phrase, “jeans of stone-wash.” But at least the latter gave me what I needed at the time: a workable mediocrity.

I still found some opportunities to supercharge my imagination, but those became less frequent over time. In their place, I assigned AI the mule-worthy burden of mere tasks. Faced with the question of which wait-listed students to admit into an overenrolled computer-science class, I used ChatGPT to apply the relevant and complicated criteria. (If a parent or my provost is reading this, I did not send any student’s actual name or personal data to OpenAI.) In need of a project website on short order, I had the service create one far more quickly than I could have by hand. When I wanted to analyze the full corpus of Wordle solutions for a recent story on the New York Times games library, I asked for help from OpenAI’s Data Analyst. Nobody had promised me any of this, so having something that kind of worked felt like a gift.

The more imaginative uses of AI were always bound to buckle under this actual utility. A year ago, university professors like me were already fretting over the technology’s practical consequences, and we spent many weeks debating whether and how universities could control the use of large language models in assignments. Indeed, for students, generative AI seemed obviously and immediately productive: Right away, it could help them write college essays and do homework. (Teachers found lots of ways to use it, too.) The applications seemed to grow and grow. In November, OpenAI CEO Sam Altman said the ChatGPT service had 100 million weekly users. In January, the job-ratings website Glassdoor put out a survey finding that 62 percent of professionals, including 77 percent of those in marketing, were using ChatGPT at work. And last month, Pew Research Center reported that almost half of American adults believe they interact with AI, in one form or another, several times a week at least.

[Read: Things get strange when AI starts training itself]

The rapid adoption was in part a function of AI’s novelty—without initial interest, nothing can catch on. But that user growth could be sustained only by the technology’s transition into something unexciting. Inventions become important not when they offer a glimpse of some possible future—as, say, the Apple Vision Pro does right now—but when they’re able to recede into the background, to become mundane. Of course you have a smartphone. Of course you have a refrigerator, a television, a microwave, an automobile. These technologies are not—which is to say, they are no longer—delightful.

Not all inventions lose their shimmer right away, but the ones that change the world won’t take long to seem humdrum. I already miss the feeling of enchantment that came from making new Hart Crane poems or pear-soft-drink ad campaigns. I miss the joy of seeing any imaginable idea brought instantly to life. But whatever nostalgia one might have for the early days of ChatGPT and DALL-E will be no less fleeting in the end. First the magic fades, then the nostalgia. This is what happens to a technology that’s taking over. This is a measure of its power.

Piyavachara Arunotai / Getty

Did You Feel That?


In the decade I have lived in California, I’ve learned to be on edge for “The Big One”—an earthquake so powerful, it can bring down houses. The 10 or so tremors I have actually experienced haven’t been like that. Mostly, the shakes are big enough to jolt me upright but small enough to leave me doubting: Was that what I thought it was?

Today, tens of millions of East Coasters got to experience that feeling firsthand when a magnitude 4.8 quake hit just outside Tewksbury, New Jersey, some 50 miles west of New York City. The rumbling was felt from Maine down to Philadelphia, sending books tumbling off shelves and cellphones blaring with emergency alerts warning about possible aftershocks. So far, the physical damage appears to be minimal. (“New Yorkers should go about their normal day,” New York City Mayor Eric Adams said in a press conference.)

By now, I’m fully accustomed to the specific pageantry that accompanies these tiny quakes: First you feel it, then you Google it, and then you post about it. The internet does not often work as well as most of us would like; it is riddled with all kinds of problems from the inconvenient (clunked-up search) to the outright dangerous (political disinformation). But the earthquake internet works tremendously well. Almost instantly, you can easily find information about whether that rattling was a quake and, if so, basic details such as the epicenter and magnitude.

The United States Geological Survey reported today’s quake within five minutes, a geophysicist for the organization told me. (On the West Coast, where earthquake-detection mechanisms are more common, a second system can send push alerts in mere seconds.) And within 20 minutes of the quake today, the USGS website already had a map of how intense the quake felt in 2,500 different locations, presumably culled in part from user submissions. Of course, most people probably aren’t checking a government website right after an earthquake. Google takes this info and puts it in its standard red alert box, so even a basic search like earthquake will probably tell you what you need to know. (Earthquake nj and nyc earthquake have been the top trending searches in the U.S. today, a Google spokesperson told me.)
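For the technically curious: the same data the USGS surfaces on its site are available through its public GeoJSON feeds, which is part of why Google and other services can pick them up so quickly. The snippet below is a minimal, illustrative sketch rather than any official client; the feed URL is the agency’s standard past-day summary for magnitude 2.5 and above, and the script simply prints what it finds.

    # Minimal sketch: query the USGS past-day earthquake feed and print recent events.
    # Assumes the third-party `requests` package is installed; the feed itself is public.
    import requests

    FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_day.geojson"

    def recent_quakes():
        data = requests.get(FEED, timeout=10).json()
        for feature in data["features"]:
            props = feature["properties"]   # magnitude, human-readable place, and so on
            if props["mag"] is not None:
                print(f"{props['mag']:.1f}  {props['place']}")

    if __name__ == "__main__":
        recent_quakes()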

That earthquakes have been efficiently optimized for the web is especially useful for managing bigger earthquakes that are real emergencies. But a tiny earthquake—when the damage is minimal, if not nonexistent—can also provide a rare communal touchpoint when any sort of shared reality is harder to come by. In moments like these, people can set aside their differences and instead focus on the important question: Did you feel that? Today’s earthquake set off a slew of chatter on social media, making X feel more like the Twitter of the old days. Workdays were interrupted as people paused to consider the ground beneath them—usually ignored, until it’s not. Many Americans took the opportunity to commiserate and come together after a stressful 30 seconds of rumbling.

Like the Earth, sometimes we all just need to blow off some steam.

Illustration by The Atlantic. Source: Prykhodov / Getty.

What Neuralink Is Missing


Until recently, in all of human history, the number of true cyborgs stood at about 70. Ian Burkhart has kept a count because he was one of them—a person whose brain has been connected directly to a computer.

Burkhart had become quadriplegic in a swimming accident after a wave ran him into a sandbar and injured his spine. He was later able to receive an implant through a research study, which allowed him to temporarily regain some movement in one hand. For seven and a half years, he lived with this device—an electrode array nestled into his motor cortex that transmitted signals to a computer, which then activated electrodes wrapped around his arm. Burkhart now heads the BCI Pioneers Coalition, an organization for the small cohort of other disabled people who have volunteered their brains to push the boundaries of brain-computer-interface technology, or BCI.

Last month, Burkhart, along with perhaps millions of other people, watched the debut of the newest cyborg. In a video posted on X, the first human subject for Elon Musk’s BCI company, Neuralink, appeared to control a laptop via brain implant. Neuralink has not published its research and did not respond to a request for comment, but the device presumably works this way: The subject, a paralyzed 29-year-old named Noland Arbaugh, generates a pattern of neural activity by thinking about something specific, like moving the cursor on his computer screen or moving his hand. The implant then transmits that pattern of neural signals to the computer, where an AI algorithm interprets it as a command that moves the cursor. Because the implant purportedly allows a user to control a computer with their thoughts, more or less, Musk named the device Telepathy.

[Read: Demon mode activated]

Burkhart watched Arbaugh play hands-free computer chess with a mix of approval and frustration at how clearly the demo was created for investors and Musk fans, not for disabled people like him. It’s no secret that Musk’s real goal is to create a BCI device for general consumers, and not just so we can move a cursor around; he envisions a future in which humans can access knowledge directly from computers to “achieve a symbiosis with artificial intelligence.” That dream is ethically fraught—privacy, for instance, is tricky when your thoughts are augmented by proprietary algorithms—but it is also a long way from being realized. Researchers have sort of managed two-way information transfer with rats, but no one is sure how the rats felt about it, or whether it’s an experience they’d be willing to pay for at a mall kiosk.

Yet a more modest vision for a safe, workable neuro-prosthesis that would allow disabled people to use a computer with ease is realizable. The question is whether our social structures are ready to keep pace with our advanced science.

It’s taken decades for BCI tech to get to this point—decades of scientists building prototypes by hand and of volunteers who could neither move nor speak struggling to control them. The most basic challenge in mating a brain and a computer is an incompatibility of materials. Though computers are made of silicon and copper, brains are not. They have a consistency not unlike tapioca pudding; they wobble. The brain also constantly changes as it learns, and it tends to build scar tissue around intrusions. You can’t just stick a wire into it.

Different developers have tried different solutions to this problem. Neuralink is working on flexible filaments that thread inconspicuously—they hope—through the brain tissue. Precision Neuroscience, founded in part by former Neuralink scientists, is trying out a kind of electrode-covered Saran Wrap that clings to the surface of the brain or slips into its folds. Then there’s the Utah Array, a widely used model that looks a little like a hairbrush with its bristly pad of silicon spikes. That’s what Burkhart had in his head until 2021, when the study he was part of lost funding and he decided to have the implant taken out. He was worried surgeons might have to “remove some chunks of brain” along with it. Luckily, he told me, it came out “without too much of a fight.”

Once an implant is in place, the tiny signals of individual neurons—measurable in microvolts—have to be amplified, digitized, and transmitted, preferably by a unit that’s both wireless and inconspicuous. That’s problem number two. Problem three is decoding those signals. We have no real idea of how the brain talks to itself, so a machine-learning algorithm has to use a brute-force approach, finding patterns in neural activity and learning to correlate them with whatever the person with the implant is trying to make the computer do.
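To make that brute-force idea concrete, here is a minimal, purely illustrative sketch of the kind of pattern-matching a decoder performs. It is not Neuralink’s or any lab’s actual pipeline: the “neural” data below are synthetic firing rates, and a simple ridge regression stands in for the far more elaborate models real systems use.

    # Illustrative only: map synthetic "firing rates" to 2-D cursor velocities.
    # Real BCI decoders use richer features and models; this just shows the shape of
    # the problem: learn a correlation between neural activity and intended movement.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_samples, n_channels = 2000, 96                 # e.g., a 96-electrode array
    true_weights = rng.normal(size=(n_channels, 2))  # hidden "intent" mapping

    firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_channels)).astype(float)
    cursor_velocity = firing_rates @ true_weights + rng.normal(scale=5.0, size=(n_samples, 2))

    # Fit on the first 1,500 samples, evaluate on the held-out remainder.
    decoder = Ridge(alpha=1.0).fit(firing_rates[:1500], cursor_velocity[:1500])
    print("held-out R^2:", decoder.score(firing_rates[1500:], cursor_velocity[1500:]))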

None of these problems is trivial, but they’ve been substantially tackled over the past 30 years of BCI research. At least six different companies are now testing applications such as desktop interfaces (like the one that helped Arbaugh play chess), drivers for robotic limbs and exoskeletons, and even speech prostheses that give voice to thought. Proof-of-concept devices exist for all of these by now.

But that only brings us to problem number four—which has nothing to do with engineering and might be harder to solve than all the others. This problem is what Ben Rapoport, the chief science officer at Precision, described to me as “the productization of science.” It’s where engineering successes run into political and economic obstacles. To roll out even a basic point-and-click medical BCI interface, developers would have to win approval not just from the FDA but also from “payers”: Medicare, Medicaid, and private insurance companies. This is make-or-break: Medical devices, even ingenious ones, won’t get to consumers if insurance won’t cover them. Few people can afford such expenses out of pocket, which means too small a pool of potential consumers to make production profitable.

[Read: I’m disabled. Please help me.]

Other devices have cleared this hurdle—cochlear implants, deep-brain stimulation devices, pacemakers—and it’s not unlikely that BCI implants could join that list if insurers decide they’re worth the expense. On the one hand, insurance companies might argue that BCI devices aren’t strictly medically necessary—they’re “life-enhancing,” not “life-sustaining,” as Burkhart put it—but on the other hand, insurers are likely to see them as cost-efficient if their implementation can save money on other, more expensive kinds of support.

Even so, there’s a limit to what brain implants can do and what they can replace. The people who would benefit most from BCI devices, people with major motor impairments like Arbaugh and Burkhart, would still depend on human labor for many things, such as getting in and out of bed, bathing, dressing, and eating. That labor can easily cost as much as six figures a year and isn’t typically reimbursed by private health-insurance companies. For most people, the only insurer that covers this kind of care is Medicaid, which in most states comes with stringent restrictions on recipients’ income and assets.  

In Ohio, where Burkhart lives, Medicaid recipients can’t keep more than $2,000 in assets or make more than $943 a month without losing coverage. (A waiver program raises the monthly income cap for some to $2,829.) The salary they’d have to make to cover both expenses and in-home care out of pocket, though, is much more than most jobs pay. “A lot of people don’t have the opportunity to make such a giant leap,” Burkhart said. “The system is set up to force you to live in poverty.”

In addition to his work with the BCI Pioneers Coalition, Burkhart also leads a nonprofit foundation that fundraises to help people with disabilities cover some of the expenses insurance won’t pay for. But these expenses would be “nowhere near the size that would pay to get a BCI or anything like that,” he told me. “We do a lot of shower chairs. Or hand controls for a vehicle.”

Starting in the late 20th century, simple switch devices began to enable people with severe motor disabilities to access computers. As a result, many people who would previously have been institutionalized—those who can’t speak, for example, or move most of their body—are able to communicate and use the internet. BCI has the potential to be much more powerful than switch access, which is slow and janky by comparison. Yet the people who receive the first generation of medical implants may find themselves in the same position as those who use switch technology now: functionally required to stay unemployed, poor, or even single as a condition of accessing the services keeping them alive.

Musk may be right that we’re quickly approaching a time when BCI tech is practical and even ubiquitous. But right now, we don’t have a social consensus on how to apportion resources such as health care, and many disabled people still lack the basic supports necessary to access society. Those are problems that technology alone will not—and cannot—solve.

Illustration by The Atlantic. Source: Arturo Holmes / Getty.

The Web Became a Strip Mall


One morning in 1999, while I sat at the office computer where I built corporate websites, a story popped up on Yahoo. An internet domain name, Business.com, had just sold for $7.5 million—a shocking sum that would be something like $14 million in today’s dollars.

The dot-com era, then nearing its end, had been literally named for addresses such as this one. By that time, it had become easy for anyone to register and own a domain name—typically at a cost of $70 for a two-year stretch—which encouraged “squatting,” wherein people would buy an address simply because they thought it would have some future value. Desirable URLs worked and traded like real estate, with actual domain-name agents, escrow, rental and sale deals, and commissions. The web was a place, and where you could be found mattered.

I pondered the web’s placeiness after receiving a notice to renew one of my personal domains recently. It seemed pointless now: I’d bought the address decades ago, and it has been years since it got any real traffic. Being found online has long ceased to involve acquiring a plot of digital real estate for ourselves. Instead, we submerge in Big Tech service platforms, hoping to find engagement: by gaming the YouTube algorithm, perhaps, or spam-replying to Elon Musk’s posts on X.

Much is made of the tendency toward “personal brands” in the current era of the web, but domain names arguably originated the phenomenon. Back in the day, the ease of remembering and correctly typing a domain name into a browser address bar was paramount. Dot com was most desirable because people thought of it first; “Business.net” would have seemed like a knockoff by comparison. A short name was ideal. Likewise, a distinctive one: Yahoo.com, say. But, counterintuitive to the rules of brands and trademarks more generally, generic domain names were also highly desirable. When your mom or accountant sat down at a blank browser in 1999, they might not have known what they were looking for. Thus the appeal of Business.com (for what? for business). An apartment-rental site called Viva.com ended up using Rent.com as its primary domain instead, because people were looking to rent a place, not to live abstractly in Spanish.

Those of us who had commercial and creative lives starting in the dot-com era developed a special relationship with the domain name. Before owning or leasing a server became easy and cheap, people had “home pages” instead, their URLs often occluded behind long, forgettable domains owned by your university or internet-service provider. To own a domain, by contrast, was to exist at a top level online, akin to Yahoo or Amazon, at least in name. A domain staked a claim in the internet’s Wild West. It was, well, a domain, a lair, a realm. Don’t find me at www-la1.my-webhost.net/~ianb; visit me at Bogost.com.

Many ordinary people would register a domain name as a way of affirming a commitment to a creative or commercial project, even if it never came to fruition. For years, I have paid to renew domains such as GelateriousEffects.com (a hypothetical brand for my gelato hobby) and Baudrillyard.com (a postmodernist lawn-mowing game I never built). To renew them was to keep those dreams alive.

Google changed all of this. The ability of a website to appear in search results—in response to a query such as apartment for rent in Kansas City—became more important than the ability to remember a URL. A practice that became known as search-engine optimization, SEO, supplanted domain-name speculation. To be discoverable online once meant putting up a shingle, having a place where your internet stuff happened. In the search age, controlling the route to that place became more important.

The social-media era further undermined domains. Home pages, to say nothing of personal websites, gave way to accounts on platforms. You didn’t have a blog anymore; you hosted a blog on WordPress or Blogger. You had not a home page but a profile on MySpace or Facebook; instead of a stand-alone web store, you might just start an Etsy shop. Search has spread from Google to everything: You find stuff by fishing with keywords, not by navigating to a location. If the internet feels different to you today than it once did, this may be a large reason—a convenience that has reoriented everything.

Today, value lies in the ability to link, not the name of the place linked. QR codes allow people to access a catalog, a menu, or an ordering form by pointing a camera at a sign rather than typing an address. Instagram users, free to put links on their profile pages but unable to place them in the captions of their images, began using and referencing “Link in bio” services. Often, links in bios point to other services, such as Linktree, which branch out to more profiles elsewhere—YouTube, TikTok, and so on, an endless slink between places that incidentally have names, rather than named places. Domain names were invented because people couldn’t be expected to remember numerical server addresses as the internet grew. Now one doesn’t even have to remember the names.
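Under the hood, that old bargain still holds: a domain name is just a memorable label that the Domain Name System resolves to a numerical address on demand. Here is a minimal sketch of that lookup, using nothing but Python’s standard library; the hostnames are simply ones mentioned in this piece.

    # A name is still just a pointer to a numerical address; DNS does the remembering.
    import socket

    for hostname in ["bogost.com", "theatlantic.com"]:
        print(hostname, "->", socket.gethostbyname(hostname))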

Domain names persist, of course, and they continue to bear value. In 2007, Business.com was sold at a 46-fold premium, for $345 million. You are probably reading this article on TheAtlantic.com, a corner of cyberspace from which this magazine would never decamp. But even so, you likely arrived here through a web search, or by clicking a link on LinkedIn or perhaps one shared via text message. A domain is necessary but no longer notable. A dot com is just a historical accident of the web’s structure.

After three decades collecting domains, the ones I own have started to feel burdensome. I will never make Baudrillyard—it should have been a tweet, not a project. I will never open a gelato shop—it’s enough to churn some ice cream for my friends. Bogo.st, a domain I registered during the heyday of URL-shortening tools (so I could personalize my short links on blogs or Twitter), costs me a modest $40 a year to maintain, but I never use it anymore. I’d rather spend that cash on cheeseburgers to put in my human mouth than on virtual plots on the internet.

So I’ve begun letting my domains lapse—the equivalent of finally junking an unfinished project in the closet or letting a yard grow feral. Business.com is just a website now. Viva.com is a European bank. Yahoo.com is a joke. A domain used to mark off the space for a dream online. Now most of those dreams have been realized, or abandoned.

Illustration by Ben Kothe / The Atlantic

You No Longer Have to Type Anymore


As a little girl, I often found myself in my family’s basement, doing battle with a dragon. I wasn’t gaming or playing pretend: My dragon was a piece of enterprise voice-dictation software called Dragon NaturallySpeaking, launched in 1997 (and purchased by my dad, an early adopter).

As a kid, I was enchanted by the idea of a computer that could type for you. The premise was simple: Wear a headset, pull up the software, and speak. Your words would fill a document on-screen without your hands having to bear the indignity of actually typing. But no matter how much I tried to enunciate, no matter how slowly I spoke, the program simply did not register my tiny, high-pitched voice. The page would stay mostly blank, occasionally transcribing the wrong words. Eventually, I’d get frustrated, give up, and go play with something else.

Much has changed in the intervening decades. Voice recognition—the computer-science term for the ability of a machine to accurately transcribe what is being said—is improving rapidly thanks in part to recent advances in AI. Today, I’m a voice-texting wizard, often dictating obnoxiously long paragraphs on my iPhone to friends and family while walking my dog or driving. I find myself speaking into my phone’s text box all the time now, simply because I feel like it. Apple updated its dictation software last year, and it’s great. So are many other programs. The dream of accurate speech-to-text—long held not just in my parents’ basement but by people all over the world—is coming together. The dragon has nearly been slain.

“All of these things that we’ve been working on are suddenly working,” Mark Hasegawa-Johnson, a professor of electrical and computer engineering at the University of Illinois at Urbana-Champaign, told me. Scientists have been researching speech-recognition tools since at least the mid-20th century; early examples include the IBM Shoebox, a rudimentary computer housed within a wooden box that could measure sounds from a microphone and associate them with 16 different preprogrammed words. By the end of the 1980s, voice-dictation models could process thousands of words. And by the late ’90s, as the personal-computing boom was in full swing, dictation software was beginning to reach consumers. These programs were joined in the 2010s by digital assistants such as Siri, but even these more advanced tools were far from perfect.

“For a long time, we were making gradual, incremental progress, and then suddenly things started to get better much faster,” Hasegawa-Johnson said. Experts pointed me to a few different factors that helped accelerate this technology over the past decade. First, researchers had more digitized speech to work with. Large open-source data sets were compiled, including LibriSpeech, which contains 1,000 hours of recorded speech from public-domain audiobooks. Consumers also started regularly using voice tools such as Alexa and Siri, which likely gave private companies more data to train on. Data are key to quality: The more speech data that a model has access to, the better it can recognize what’s being said—“water,” say, not “daughter” or “squatter.” Models were once trained on just a few thousand hours of speech; now they are trained on a lifetime’s worth.

The models themselves also got more sophisticated as part of larger, industry-wide advancements in machine learning and AI. The rise of end-to-end neural networks—networks that could directly pair audio with words rather than transcribing by first breaking speech into syllables—has also improved models’ accuracy. And hardware has improved, packing more processing power into our personal devices, which allows bigger and fancier models to run in the palm of your hand.
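To give a sense of how approachable these end-to-end models have become, here is a minimal, hedged sketch using OpenAI’s open-source whisper package (the same family of models discussed below). The audio file name is a placeholder, and the package plus its ffmpeg dependency are assumed to be installed; this is an illustration, not a recommendation of any particular tool.

    # Minimal sketch: transcribe a local audio file with the open-source whisper package.
    # "clip.wav" is a hypothetical file; install with `pip install openai-whisper` (plus ffmpeg).
    import whisper

    model = whisper.load_model("base")    # a small, CPU-friendly checkpoint
    result = model.transcribe("clip.wav")
    print(result["text"])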

Of course, the tools are not yet perfect. For starters, their quality can depend on who is speaking: Voice-recognition models have been found to have higher error rates for Black speakers compared with white speakers, and they also sometimes struggle to understand people with dysarthric, or irregular, speech, such as those with Parkinson’s disease. (Hasegawa-Johnson, who compiles stats related to these issues, is the principal researcher at the Speech Accessibility Project, which aims to train models on more dysarthric speech to improve their outputs.)

The future of voice dictation will also be further complicated by the rise of generative AI. Large language models of the sort that power ChatGPT can also be used with audio, which would allow a program to better predict which word should come next in a sequence. For example, when transcribing, such an audio tool might reason that, based on the context, a person is likely saying that their dog—not their frog—needs to go for its morning walk.

Yet like their text counterparts, voice-recognition tools that use large language models can “hallucinate,” transcribing words that were never actually spoken. A team of scholars recently documented violent and unsavory hallucinations, as well as those that perpetuate harmful stereotypes, coming from OpenAI’s new audio model, Whisper. (In response to a request for comment about this research, a spokesperson for OpenAI said, in part, “We continually conduct research on how we can improve the accuracy of our models, including how we can reduce hallucinations.”)

So goes the AI boom: The technology is both creating impressive new things and introducing new problems. In voice dictation, the chasm between two once-distinct mediums, audio and text, is closing, leaving us to appreciate the marvel available in our hands—and to proceed with caution.

Illustration by The Atlantic. Source: Getty.

America Is Sick of Swiping


Modern dating can be severed into two eras: before the swipe, and after. When Tinder and other dating apps took off in the early 2010s, they unleashed a way to more easily access potential love interests than ever before. By 2017, about five years after Tinder introduced the swipe, more than a quarter of different-sex couples were meeting on apps and dating websites, according to a study led by the Stanford sociologist Michael Rosenfeld. Suddenly, saying “We met on Hinge” was as normal as saying “We met in college” or “We met through a friend.”

The share of couples meeting on apps has remained pretty consistent in the years since his 2017 study, Rosenfeld told me. But these days, the mood around dating apps has soured. As the apps seek to woo a new generation of daters, TikTok abounds with complaints about how hard it is to find a date on Tinder, Hinge, Bumble, Grindr, and all the rest. The novelty of swiping has worn off, and there hasn’t been a major innovation beyond it. As they push more paid features, the platforms themselves are facing rocky finances and stalling growth. Dating apps once looked like the foundation of American romance. Now the cracks are starting to show.

In 2022, a Pew Research Center survey found that about half of online daters reported a positive experience, a decline from October 2019. With little success on the apps, a small but enthusiastic slice of singles are reaching for speed dating and matchmakers. Even the big dating apps seem aware that they are facing a crisis of public enthusiasm. A spokesperson for Hinge told me that Gen Z is its fastest-growing user segment, though the CEO of Match Group, the parent company of Tinder and Hinge, has gone on the defensive. Last week, he published an op-ed headlined “Dating Apps Are the Best Place to Find Love, No Matter What You See on TikTok.” A spokesperson for Bumble told me that the company is “actively looking at how we can make dating fun again.”

In part, what has changed is the world around the apps, Rosenfeld said. The massive disruptions of the pandemic meant that young people missed out on a key period to flirt and date, and “they’re still suffering from that,” he told me. Compared with previous generations, young people today also have “a greater comfort with singleness,” Kathryn Coduto, a professor of media science at Boston University, told me. But if the apps feel different lately, it’s because they are different. People got used to swiping their hearts out for free. Now the apps are further turning to subscriptions and other paid features.

Tinder, for example, launched a $499-a-month premium subscription in December. On Hinge, you can signal special interest in someone’s profile by sending them a “rose,” which then puts you at the top of their feed. Everyone gets one free rose a week, but you can pay for more. Hinge users have accused the app of gatekeeping attractive people in “rose jail,” but a spokesperson for the app defended the feature: Hinge’s top goal is to help people go on dates, she said, claiming that roses are twice as likely to lead to one.

It’s the same process that has afflicted Google, Amazon, Uber, and so many other platforms in recent years: First, an app achieves scale by providing a service lots of people want to use, and then it does whatever is needed to make money off you. This has worked for some companies—after 15 years, Uber is finally profitable—but monetization is especially tricky for dating apps. No matter how much you fork over, apps can’t guarantee that you will meet the love of your life—or even have a great first date. With dating apps, “you’re basically paying for a chance,” Coduto told me. Paying for a dating-app subscription can feel like entering a lottery: exciting but potentially a waste of money (with an added dose of worry that you look desperate). And there has always been a paradox at the core of the apps: They promise to help you meet people, but they make money if you keep swiping.

Over the past few years, the big dating companies have faltered as businesses. Tinder saw its paid users fall by nearly 10 percent in 2023, and the big apps have been beset by layoffs and leadership changes. Bumble and Match Group have seen their stock prices plummet as investors grow frustrated. Perhaps the biggest problem that the apps might face is not that people are abandoning them en masse—they aren’t—but that even a small dip could prove detrimental. The current big apps’ edge relies on lots of people using them. Apps such as Tinder and Grindr “have an enormous network advantage over newcomers,” Rosenfeld said, for the same reasons Facebook does: It’s not that they’re amazing; it’s that they’re giant. If you want to meet other single people, the apps are where other single people are.

So far, the big apps’ efforts to avoid this doom loop have involved the same basic feature that has been around since the beginning: swiping. “We’re essentially at a tipping point for at least this version of the technology,” Coduto said. Like so many other industries, dating apps swear they have the answer: AI. George Arison, the CEO of Grindr, told me that the app plans to use AI (with users’ permission) to suggest chat topics and power an “AI wingman” feature, and to scan for spam and illegal activity. Hinge’s CEO has suggested that AI will help the app coach users and enable people to find matches, and a product leader at Tinder said last month that the app has used AI to power safety features, adding that the technology can help users select their profile photos.

But AI also holds the potential to unleash chaos on the apps: Bot-written messages and bot-written profiles don’t exactly sound like a recipe for finding love. For Gen Z, the future may hold a grab bag of sliding into DMs, reluctant swiping, and generally doing what humans have always done—seek companionship and love through any means they can muster. With all the time spent online now, people are finding love on Strava, Discord, and Snapchat, among many other sites. In a sense, any app can be a dating app.

Traditional dating apps might be most useful not to young people but to those middle-aged and older, with money to spare. They are more likely to be part of “thin” dating markets, or segments of the population where the number of eligible partners is relatively small, Reuben Thomas, a professor at the University of New Mexico, told me. Online dating is “really useful for people who don’t have that rich dating environment in their offline lives,” Thomas said.

In this way, the future of dating apps may look more like their past: a place for older daters to go after exhausting other options. In the 2000s, the heyday of OkCupid, eHarmony, and desktop dating, middle-aged people were the power users, Thomas said. Millennials had their fun on Tinder in the 2010s; many found lasting relationships. But as a top choice for young people looking for love, dating apps may have been a blip.

Illustration by Matteo Giuseppe Pani. Source: Getty.

Welcome to the Golden Age of User Hostility


What happens when a smart TV becomes too smart for its own good? The answer, it seems, is more intrusive advertisements.

Last week, Janko Roettgers, a technology and entertainment reporter, uncovered a dystopian patent filed last August by Roku, the television- and streaming-device manufacturer whose platform is used by tens of millions of people worldwide. The filing details plans for an “HDMI customized ad insertion,” which would allow TVs made by Roku to monitor video signals through the HDMI port—where users might connect a game console, a Blu-ray player, a cable box, or even another streaming device—and then inject targeted advertisements when content is paused. This would be a drastic extension of Roku’s surveillance potential: The company currently has no ability to see what users might be doing when they switch away from its proprietary streaming platform. This is apparently a problem, in that Roku is missing monetization opportunities!

Although the patent may never come to fruition (a spokesperson for Roku told me that the company had no plans to put HDMI ad insertion into any products at this time), it speaks to a dispiriting recent trend in consumer hardware. Internet-connected products can transform after the point of purchase in ways that can feel intrusive or even hostile to users. Another example from Roku: Just last month, the company presented users with an update to its terms of service, asking them to enter a pre-arbitration process that would make it harder to sue the company. On one hand, this isn’t so unusual—apps frequently force users to accept terms-of-service updates before proceeding. But on the other, it feels galling to be locked out of using your television altogether over a legal agreement: “Until I press ‘Agree’ my tv is essentially being held hostage and rendered useless,” one Roku customer posted on Reddit. “I can’t even change the HDMI input.”

A Roku spokesperson confirmed that a user does have to agree to the latest terms in order to use the company’s services but said that customers have the option to opt out, by sending a letter, in the actual mail, to the company’s general counsel (though the window to do so closed on March 21). “Like many companies, Roku updates its terms of service from time to time,” the spokesperson told me. “When we do, we take steps to make sure customers are informed of the change.”

Back in the day, a TV was a TV, a commercial was a commercial, and a computer was a computer. They have now been mixed into an unholy brew by the internet and by opportunistic corporations, which have developed “automatic content recognition” systems. These collect granular data about individual watching habits and log them into databases, which are then used to serve ads or sold to interested parties, such as politicians. The slow surveillance colonization of everyday electronics was normalized by free internet services, which conditioned people to the mentality that our personal information is the actual cost of doing business: The TVs got cheaper, and now we pay with our data. Not only is this a bad deal; it fundamentally should not apply to hardware and software that people purchase with money. One Roku customer aptly summed up the frustration recently on X: “We gave up God’s light (cathode rays and phosphorus) for this.”

And this phenomenon has collided with another modern concern—what the writer and activist Cory Doctorow evocatively calls “enshittification.” The term speaks to a pervasive cultural sense that things are getting worse, that the digital products we use are effectively being turned against us. For example: Apart from its ad-stuffed streaming devices, Roku also offers a remote-control app for smartphones. In a Reddit post last month, a user attached a screenshot of a subtle ad module that the company inserted into the app well after launch—gently enshittifying the simple act of navigating your television screen. “Just wait until we have to sit through a 1 minute video ad before we can use the remote,” one commenter wrote. “Don’t give them any ideas,” another replied.

[Cory Doctorow: This is what Netflix thinks your family is]

Part of Doctorow’s enshittification thesis involves a business-model bait and switch, where platforms attract people with nice, free features and then turn on the ad faucet. Roku fits into this framework. The company lost $44 million on its physical devices last year but made almost $1.6 billion with its ads and services products. It turns out that Roku is actually an advertising company much like, say, Google and Meta. And marketing depends on captive audiences: commercial breaks, billboards that you can’t help but see on the highway, and so on.

Elsewhere, companies have infused their devices with “digital rights management” or DRM restrictions, which halt people’s attempts to modify devices they own. I wrote last year about my HP inkjet printer, which the company remotely bricked after the credit card I used to purchase an ink-cartridge subscription expired. My printer had ink (that I’d paid for), but I couldn’t use it. It felt like extortion. Restrictive rights usage happens everywhere—with songs, movies, and audiobooks that play only on specific platforms, and with big, expensive physical tech products, such as cars. The entire concept of ownership now feels muddied. If HP can disable my printer, if Roku can shut off my television, if Tesla can change the life of my car battery remotely, are the devices I own really mine?

[Read: My printer is extorting me]

The answer is: not really. Or not like they used to be. The loss of meaningful ownership over our devices, combined with the general degradation of products we use every day, creates a generally bad mood for consumers, one that has started to radiate beyond the digital realm. The mass production and Amazon-ification of cheap consumer goods is different from, say, Boeing’s alleged decline in airplane-manufacturing quality in service of shareholder profits, which is different from televisions that blitz your eyeballs with jarring ads; yet these disparate things have started to feel linked—a problem that could be defined in general by mounting shamelessness from corporate entities. It is a feeling of decay, of disrespect.

In some areas, it means that quality goes down in service of higher margins; in others, it feels like being forced to expect and accept that whatever can be monetized will be, regardless of whether the consumer experience suffers. People feel this everywhere. They feel it in Hollywood, where, as the reporter Richard Rushfield recently put it, the entertainment industry is full of executives “who believe the deal is more important than the audience”—and that consumers ultimately “have no choice but to buy tickets for the latest Mission Impossible or Fast and Furious—because they always have and we own them so they’ll see what we tell them to see.” People feel it in unexpected places such as professional golf: Recently, I was surprised to read an issue of the Fried Egg Golf newsletter that compared NBC Sports’ weak PGA Tour broadcasts to the ongoing debacle at Boeing. “Is there a general lack of morale amongst people right now?” the author wrote. “Does anyone take pride in their work? Or are we just letting quality suffer across all domains for the sake of cutting costs?”

These last two examples aren’t Doctorowian per se: They are merely things that people feel have gotten worse because companies assume that consumers will accept inferior products, or that they have nowhere else to go. In this sense, Doctorow’s enshittification may transcend its original, digital meaning. Like doomscrolling, it gives language to an epochal ethos. “The problem is that all of this is getting worse, not better,” Doctorow told me last year when I interviewed him about my printer-extortion debacle. He was talking about companies locking consumers into frustrating ecosystems but also about consumer dismay at large. “The last thing we want is everything to be inkjet-ified,” he said.

Doctorow’s observation, I realize, is the actual reason I and so many others online are so worked up over a theoretical patent that might not come to fruition. Needing to do a hostage negotiation with your television is annoying—enraging, even—but it is only a small indignity. Much greater is the creeping sensation that it has become standard practice for the things we buy to fail us through subtle, technological betrayals. A little surveillance here, a little forced arbitration there. Add it up, and the real problem becomes existential. It sure feels like the inkjets are winning.

Illustration by The Atlantic. Sources: Getty; Shutterstock.

Tupperware Is in Trouble


For the first several decades of my life, most of the meals I ate involved at least one piece of Tupperware. My mom’s pieces were mostly the greens and yellows of a 1970s kitchen, purchased from co-workers or neighbors who circulated catalogs around the office or slipped them into mailboxes in our suburban subdivision. Many of her containers were acquired before my brother and I were born and remained in regular use well after I flew the nest for college in the mid-2000s. To this day, the birthday cake that my mom makes for my visits gets stored on her kitchen counter in a classic Tupperware cake saver—a flat gold base with a tall, milky-white lid made of semi-rigid plastic. Somewhere deep in her cabinets, the matching gold carrying strap is probably still hiding, in case a cake is on the go.

If you’re over 30 and were raised in the American suburbs, you can probably tell a similar story, though your mom’s color choices might have been a little different. As more and more middle-class women joined the workforce, new products that promised convenience in domestic work, which largely still fell to them, became indispensable tools of the homemaking trade. Reusable food-storage containers kept leftovers fresher for longer, made packing lunches easier, and, when combined with the ascendantly popular microwave, sped the process of getting last night’s dinner back onto the table. Tupperware became such a dominant domestic force that its brand name, like Band-Aid and Kleenex, is often still used as a stand-in for plastic food-storage containers of any type or brand.

In theory, Tupperware should be even more popular now than it was decades ago. The market for storage containers, on the whole, is thriving. Practices such as meal-prepping and buying in bulk have further centered reusable food containers in America’s eating habits. Obsessive kitchen organization is among social media’s favorite pastimes, and plastic storage containers in every conceivable size and shape play an outsize role in the super-popular videos depicting spotless, abundant refrigerators and pantries on TikTok, Instagram, and YouTube. But Tupperware has fallen on hard times. At the end of last month, for a second year in a row, the company warned financial regulators that it would be unable to file its annual report on time and raised doubts about its ability to continue as a business, citing a “challenging financial condition.” Sales are in decline. These should be boom times for Tupperware. What happened?

[Read: Home influencers will not rest until everything has been put in a clear plastic storage bin]

The Tupperware origin story is a near-perfect fable of 20th-century American ingenuity. Earl Tupper, the product’s namesake, was a serial inventor who used mid-century advances in plastics technology to develop the first range of airtight food containers affordable for middle-class households. Tupperware debuted in 1946, but it didn’t really take off until a few years later, when Brownie Wise, a divorced mom from Michigan, began selling the stuff to friends and neighbors she invited to her home. Her success caught the attention of Tupperware, and then of women across America who had gotten a taste of the working life during World War II but had been displaced from their wartime jobs by men returning from military service. Many of them began selling the food containers to their friends and neighbors in the living-room showcases that became Tupperware parties.

In the following decades, the range of plastic tubs, matte and mostly opaque, expanded in color, shape, and size—1960s pastels gave way to the citrus oranges, goldenrod yellows, and avocado greens of the ’70s and ’80s, and then to the rich reds and hunter greens of the ’90s. Modestly sized lidded bowls were joined by dry-goods canisters, pie keepers, pitchers, and measuring cups. A kitchen full of Tupperware products became a symbol of social and domestic success: Practical but a little pricey, the storage containers were trafficked through women’s community bonds, and owning them telegraphed a commitment to order, cleanliness, and sensible stewardship of the family’s time and food budget. Particularly stylish moms matched their Tupperware collections to their kitchen decor.

Tupperware did not respond to an interview request on the company’s current woes, but the problems it faces are not difficult to see. The first is that, in a lot of ways, its products are still those products. Much of Tupperware’s range still looks at least a little bit like it did decades ago—textured, pliant plastic that obscures what’s inside. Some of these products are a clear nostalgia play to tempt younger shoppers with the retro, rainbow-colored bowls and tubs their mom used, but many of the products just look dingy, clunky, old. And nostalgia is not necessarily something buyers want in plastic kitchenware. Since Tupperware’s heyday, what we know about the safety of plastics that were commonly used in food storage has changed, and the public’s buying preferences along with it. Like most older plastic containers, Tupperware made before 2010 contains a type of chemical called BPA, which is associated with a host of health problems including infertility, fetal abnormalities, and heart disease. Tupperware has since removed BPA from its products, but on a visual level, many of them still appear to be the bowls you might now wish your mom hadn’t microwaved so frequently when you were a kid.

Tupperware’s competitors have multiplied in recent decades, and most of them have been more adept at signaling newness and cleanliness to customers. OXO, Pyrex, and Rubbermaid, for example, all sell popular lines of containers that use crystal-clear hard plastic or glass and have mechanical latches or seals to prevent spills and keep food airtight. They look neat and orderly—even expensive—in the bright, cool-toned LED lighting of modern refrigerators. At $50 to $80 for a modestly sized set, they actually tend to cost less than their Tupperware equivalents, which can top $90 for a set of basic plastic bowls. For buyers more concerned about price than beauty, Ziploc and Glad make sets of cheap, thin plastic containers that can be bought for $10 or $12 at most grocery stores.

Where exactly one buys Tupperware has also become an issue over time. The bulk of the company’s dwindling sales volume still comes from women selling to their social circles, which is now more of a liability in the United States than it was a generation or two ago. This type of “direct sales” model has proliferated widely in the social-media era, with distant Facebook friends pestering people to buy things like essential oils and cheap leggings. It has prompted a significant backlash, creating a consumer base that’s tired of sales pitches from acquaintances and suspicious of products sold in that format. Tupperware parties still work for the brand in some parts of the world, but mainly for the same reason that they worked in mid-century America: In 2013, Indonesia became Tupperware’s biggest market, thanks in large part to a growing population of workforce-curious women who embraced the opportunity to make some extra cash for their family by doing business with other women in tight-knit social communities. Sales in North America have continued to decline, and they now make up only a little over a quarter of the company’s total volume, according to its most recent annual report.

Instead of changing with the times, Tupperware has clung to its old ways for decades longer than it probably should have. The company brought in a new CEO late last year, and she has said she will modernize both the products and the company’s sales structure. But those efforts now seem likely to be too little, too late. Other than a brief dalliance with what were then called SuperTarget stores in the early 2000s, the brand didn’t make a serious push into traditional retailing until 2022, and its products are now stocked alongside more modern-looking and frequently less expensive competitors at Target and Macy’s. You can also now buy Tupperware online directly from the brand, but when you arrive at its website, a bright-orange banner across the top alerts you that you’re not shopping with a representative, in case you should want to remedy that and give a co-worker or neighbor the credit for whatever you buy.

None of these issues takes a keen retail or product-development mind to detect. Tupperware’s woes don’t seem to be the result of unpredictable market changes or fickle consumers. Instead, like many once-prosperous 20th-century American companies, Tupperware’s downfall appears to land squarely at the feet of its management. As far back as the 1980s, according to The Wall Street Journal, it was clear to executives at Kraft, then the company’s owner, that high workforce participation among American women was making the direct-sales model less viable; if women had full-time jobs, they mostly didn’t need side hustles or want to go to buying parties, even if they still wanted storage containers and kitchen gadgets. At the same time, Tupperware’s patents began to expire, which created new competition for a brand that had long had very little. At that time, the company still had decades of goodwill in front of it, and it had a direct line to an army of women who could have helped guide the company’s development of newer, better products that people would have been excited to continue stocking their kitchens with. Instead, those products are now made by its competitors and available virtually anywhere that food or home goods are sold. Tupperware, meanwhile, is still waiting for the return of a glorious past that is never coming back.

Alamy

The Homepage of the Black Internet


Illustrations by Frank Dorrey

A few years ago, Stephanie Williams and her husband fielded a question from their son: How had they met?

So they told him. They’d first encountered each other on a website called BlackPlanet.

To the 5-year-old, the answer seemed fantastical. “He clearly didn’t hear ‘website,’ ” Williams, a writer and comic creator, told me. “He was like, ‘Wait, you all met on Black Planet? Like, there’s a planet that’s full of Black people? Why did you leave?!’ ”

Williams had to explain that they’d actually been right here on “regular Earth.” But in some ways, their son’s wide-eyed response wasn’t so off base: From the perspective of the 2020s, there is something otherworldly about the mid-aughts internet that brought his parents together. In a social-media era dominated by the provocation and vitriol of billionaire-owned mega-platforms, it can be hard to imagine a time when the concept of using the internet to connect with people felt novel, full of possibility—and when a site billed as the homepage of the Black internet had millions of active users.

BlackPlanet went live in 1999, nearly three years before Friendster, four years before MySpace, five years before Facebook, and seven years before Twitter. In those early years, the internet was still seen by many as a giant library—a place where you went to find things out. Sure, the web had chat rooms, bulletin boards, and listservs. But BlackPlanet expanded what it meant to commune—and express oneself—online.

The site offered its users the opportunity to create profiles, join large group conversations about topics such as politics and pop culture, apply to jobs, send instant messages, and, yes, even date. It provided a space for them to hone their voice and find their people. A visit to someone’s customizable BlackPlanet page would probably tell you where they grew up, which musicians they idolized, and what they looked like. “That now seems like the most obvious thing in the world,” Omar Wasow, one of the site’s co-founders, told me, “but at the time reflected a real break from the dominant ideas about how this technology was meant to be used.”

BlackPlanet is often overlooked in mainstream coverage of social-media history. But at its peak, it wasn’t just some niche forum. Despite skepticism within the tech industry that a social-networking site geared toward African Americans could be successful, about 1 million users joined BlackPlanet within a year of its launch. By 2008, it had about 15 million members. The site’s cultural reach extended beyond what numbers can capture: BlackPlanet amplified the work of emerging artists, served as a powerful voter-outreach hub for Barack Obama’s first presidential campaign, and fostered now-prominent voices in contemporary media. Gene Demby, a co-host of NPR’s Code Switch podcast, told me he joined BlackPlanet while attending a predominantly white college as a way to make connections beyond his campus. “It was sort of like, ‘Give me all the Black people I can find!’ ”

The site and its users helped establish visual-grammar and technical frameworks—such as streaming songs on personal pages and live, one-on-one chatting—that were later widely imitated. BlackPlanet arguably laid the foundation for social media as we know it, including, of course, Black Twitter.

Now, nearly 25 years after its launch, looking back at BlackPlanet’s glory days can be more than just an exercise in nostalgia. Today’s social-media platforms often seem designed to reward the worst in humanity, subjecting their users to rampant hate speech and misinformation. Perhaps by revisiting BlackPlanet and the story of its rise, we can start to envision a different future for the social web—this time, one with the potential to be kinder, less dangerous, and more fun than what the past two decades have given us.

Omar Wasow met Benjamin Sun in the late 1990s, when they were among the few people of color working in New York City’s tech scene. After graduating from Stanford University in 1992, Wasow had moved back to his hometown and started a hyperlocal community hub and internet-service provider, New York Online, which he operated out of his Brooklyn apartment. The service had only about 1,000 users; Wasow made his actual living by building websites for magazines. So he was excited when he met Sun, then the president and CEO of the social-networking firm Community Connect, which in 1997 launched an online forum for Asian Americans called AsianAvenue.

Wasow, the son of a Jewish economist and a Black American educator, had been thinking about how to build community on the internet for years. Like many early tech enthusiasts, he frequented the bulletin-board systems (BBSes) that proliferated in the late ’80s and early ’90s. Spending time on those primarily text-based, hobbyist-run dial-up services helped him anticipate how popular social technologies could be. Many of the BBSes were standard tech-nerd fare—chats where users would discuss pirating software or gossip about buzzy new product releases. But two sites in particular, ECHO (East Coast Hang Out) and the WELL (Whole Earth ’Lectronic Link), modeled a more salonlike online experience that piqued Wasow’s interest. He realized that people didn’t necessarily want the internet to be just an information superhighway. They wanted connection; they wanted to socialize.

[From the October 2021 issue: Hannah Giorgis on the unwritten rules of Black TV]

Wasow admired the cultural cachet that AsianAvenue had already amassed—enough, by 1999, to compel Skyy Spirits to discontinue a print ad for vodka that featured a racist image of an Asian woman after the site’s users protested. Sun, for his part, wanted to expand Community Connect to new forums for other people of color. They decided to work together to build a new site that would allow users to participate in forum-style group discussions, create personal profile pages, and communicate one-on-one.

But Wasow, Sun, and the rest of the Community Connect team faced a major challenge in launching BlackPlanet: the perception that Black people simply didn’t use the internet. It was true, around the turn of the millennium, that white households were significantly more likely to have internet access than Black ones. At the same time, reports of this “digital divide” had helped foster a myth of what the media historian Anna Everett has termed “Black technophobia.” Well into the aughts, much of the coverage of Black American tech usage had a tone of incredulity or outright condescension. As a result, advertisers and investors were hesitant to back Wasow and Sun’s site. Would it really attract enough users to be viable?

Wasow felt confident that it would. The very first week it went live, in September 1999, a friend teased Wasow about the ticker on BlackPlanet’s homepage, which showed how many people were logged on at any given moment: “I logged in, and it said there were, like, 15 people online,” Wasow remembered him saying. “You sure you want to leave that up? Because it sort of feels like an empty dance floor.” By the next week, the ticker showed closer to 150 people. Every day, the number climbed higher.

Within a few months, BlackPlanet had so many users that they couldn’t possibly have squeezed onto any dance floor in New York City. Wasow began to spend much of his time speaking at marketing conferences and advertising events. Still, he and Sun struggled to attract significant capital. “Even as the site was showing real evidence of just incredible numbers, people had this story that was like, in some ways, ‘That couldn’t be!’ ” Wasow recalled. “Because the digital divide was the narrative in their heads … It wasn’t enough just to show success. We had to be insanely successful.”

By May 2001, less than two years into its run, BlackPlanet had more than 2.5 million registered users. Wasow himself had taught Oprah Winfrey and Gayle King how to surf the Net on national television (after learning how to use a mouse, the women responded on air to emails from Diane Sawyer, Hillary Clinton, and Bill Gates). BlackPlanet had secured advertising deals with the likes of Hewlett-Packard, Time magazine, and Microsoft. In the last quarter of 2002, BlackPlanet recorded its first profit. (Facebook, by contrast, did not turn a profit until 2009, five years after its launch, and Twitter didn’t until 2017, 11 years after its founding.) By then, it was the most popular Black-oriented website in America.

Wasow never forgot one seemingly trivial detail from BlackPlanet’s fledgling days. When the site went live, “the first person who logged in was ‘TastyTanya,’ ” he said, laughing. “For whatever reason, it’s now more than 20 years later and I still remember that screen name.”

I tracked down the woman once known as TastyTanya, who was 20 when she joined the site. Today, she’s a married mother of two young children who works in accounting; she prefers not to have her real name attached to her old handle. When we spoke, she recounted how strangers on the site would strike up conversations with her because someone called TastyTanya just seemed approachable. One man she met on the site even emblazoned her BlackPlanet profile picture onto a CD he burned for her and sent her in the mail, which didn’t seem creepy at the time. As quaint as that might sound now, TastyTanya’s experience perfectly illustrates what made BlackPlanet so fun. In its heyday, the site was largely populated by users just like her, people in their teens and 20s who were doing online what people in their teens and 20s have always done: figuring out who they want to be, expressing their feelings, and, of course, flirting.

[From the May 2024 issue: Hannah Giorgis on LaToya Ruby Frazier’s intimate, intergenerational portraits]

Like many early users, Shanita Hubbard came to BlackPlanet in the early 2000s as a college student, eager to take advantage of the dial-up internet in her dorm room. A member of the Zeta Phi Beta sorority at a historically Black college in South Carolina, Hubbard had heard about a cool-sounding site that would help her meet Zetas on other campuses. She chose the screen name NaturalBeauty79 and peppered her profile with references to her sorority, natural hair, and the music she loved. BlackPlanet soon became a fixture of her undergraduate experience.

Hubbard is now a freelance journalist and the author of Ride or Die: A Feminist Manifesto for the Well-Being of Black Women. When I asked her how she’d describe those days on BlackPlanet to a hypothetical Gen Zer, she laughed: “I feel like I’m trying to explain a rotary phone.”

In retrospect, she told me, it was her first experience understanding how technology could broaden her universe not just intellectually, but socially. On BlackPlanet, Hubbard befriended Black people from all walks of life, including Zetas as far away as California. “What we think Black Twitter is today is actually what BlackPlanet was eons ago in terms of connecting and building authentic community,” Hubbard said. “Except there was levels of protection within BlackPlanet that we never got on Twitter.”


Some of the insulation was a product of the site’s scale and user makeup: BlackPlanet was both smaller and more racially homogeneous than today’s major social-media networks. Its infrastructure played a role too. Users could see who else was online or recently active, send private messages, and sign one another’s digital “guest book,” but group discussions of contentious topics tended to happen within specific forums dedicated to those issues, not on a centralized feed where bad-faith actors would be likely to jockey for the public’s attention. There was no obvious equivalent to the “Retweet” button, no feature that encouraged users to chase virality over dialogue.

BlackPlanet users talked candidly about politics, debated sports, and engaged in conversations about what it meant to be Black across the diaspora. A 2008 study found that the “Heritage and Identity” forum on BlackPlanet (as well as its equivalents on AsianAvenue and another sister site, MiGente), where users started threads such as “I’m Black and I Voted for Bush,” consistently attracted the highest engagement rate. The conversation wasn’t always friendly, but it was rarely hostile in the ways that many Black social-media users now take for granted as part of our digital lives. “There was never a time … where racists found us on BlackPlanet and infiltrated our sorority parties or flooded our little BlackPlanet pages with racist nonsense,” Hubbard said. “It’s almost like the white gaze was just not even a factor for us.”

Eventually, Hubbard began using the site for more than friendly banter. “Everyone likes to pretend it was all about formulating a digital family reunion,” she said. “That’s true. But that doesn’t tell the full story.”

In 2001, when online-dating services such as eHarmony were still in their infancy, BlackPlanet launched a dating service that cost $19.99 a month and helped members screen their would-be love interests. The site offered its members something that is still rare in online romance: Everyone who signed up for BlackPlanet’s dating service wanted to be paired with other Black people.

Soon enough, BlackPlanet romances were referenced in hip-hop lyrics and on other message boards, becoming a kind of shorthand for casual dating among young people. As Hubbard put it, BlackPlanet was “Tinder before there was swiping right, honey.”

If you wanted your BlackPlanet page to look fly—and of course you did—you had to learn how to change the background colors, add music, and incorporate flashing GIFs. At the height of the site’s popularity, the competition led some users to protect their pages by disabling the right-click function that allowed others to access their HTML codes. Giving users the opportunity to digitally render themselves made the site feel less like a staid old-school forum and more like a video game. That’s how BlackPlanet sneakily taught a generation of Black internet users basic coding skills, an accomplishment that remains among Wasow’s proudest.

Every former BlackPlanet user I spoke with for this story recalled doing at least a little coding, though most didn’t know to call it that at the time. Some told me they continued building those skills and went on to work in tech or media, at companies such as Meta and Slate. For others, though, learning HTML was just a way to express personal style. “We were our own webmaster, our own designer, our own developer,” Hubbard said. “We were maintaining it and then we would switch it up every couple of weeks to keep it fresh and poppin’.”

It wasn’t just BlackPlanet users who took note of how much fun customizing one’s own webpage could be. In late 2002, a man named Tom Anderson decided that he and his business partner should start a new social network.

When MySpace launched in 2003, the site included several features that were similar to the ones BlackPlanet had offered for years. But where BlackPlanet and the other Community Connect sites emphasized the value of shared heritage and experiences, MySpace billed itself as the universal social network. “I had looked at dating sites and niche communities like BlackPlanet, AsianAvenue, and MiGente, as well as Friendster,” Anderson told Fortune in 2006 (by then, he was better known as “MySpace Tom”). “And I thought, ‘They’re thinking way too small.’ ”

MySpace didn’t immediately cut into BlackPlanet’s user base. It would take at least five years and the advent of three more major social networks before BlackPlanet saw a significant downturn in its numbers. Even as late as October 2007, when then–presidential candidate Obama joined BlackPlanet, he quickly acquired a large following.

Still, as time went on, some BlackPlanet users found themselves visiting the site less frequently. Mikki Kendall, a cultural commentator and the author of Hood Feminism: Notes From the Women That a Movement Forgot, told me she didn’t spend as much time on BlackPlanet as some of her friends did in part because she thought of it primarily as a meeting space for singles. Also, its interface didn’t appeal to her. “BlackPlanet was both ahead of its time and unfortunately not far enough ahead of its time,” she said. The site was full of delays, and the mobile option seemed all but unusable. “I always felt like it was the bootleg social-media network, even though it wasn’t,” she added. “But it was run like somebody was in the back with a hammer just knocking things together and hoping it came through.”

Some observers I spoke with attributed BlackPlanet’s decline partly to the difficulty its founders had attracting capital. Wasow remembered Community Connect bringing in a total of $22 million by 2004. In 2007, Facebook received $240 million in investment funds just from Microsoft. “What does it take financially to get Facebook to where it is? How much money?” Charlton McIlwain, a professor at NYU and the author of Black Software: The Internet & Racial Justice, From the AfroNet to Black Lives Matter, told me. How far into “the millions and into the billions of dollars has it taken for a Google to experiment and succeed at some things and fail at a lot of things, but then be a dominant player in that ecosystem?” Black American culture has always been a powerful engine of innovation, but this has too rarely translated into actual financial rewards for Black people.

In 2008, three years after Wasow left BlackPlanet to attend graduate school at Harvard, the Maryland-based urban-media network Radio One (now Urban One) purchased Community Connect for $38 million. At the time, BlackPlanet still had about 15 million users. But with Twitter slowly gaining attention outside Silicon Valley and Facebook beginning to overshadow MySpace, BlackPlanet simply didn’t have the resources to continue attracting the same mass of users that it once had. The rise of these social-media giants—and the industry-wide shift to prioritizing mobile experiences—decimated BlackPlanet’s numbers in the years after it was acquired.

Still, the site held on. In February 2019, BlackPlanet got a notable boost. That month, Solange Knowles released the visuals for When I Get Home, her fourth studio album, exclusively on the site. The collaboration arose after Solange tweeted about wanting to release a project on BlackPlanet, and the tweet caught the attention of Lula Dualeh, a political and digital strategist who had just started in a new role there.

“A lot of people were asking themselves the question What’s next outside of Facebook and Twitter and Instagram?” Dualeh told me. Maybe the answer could be a return to BlackPlanet. In the days following the rollout of the When I Get Home visuals—a collection of art and music videos—BlackPlanet saw more traffic than it had in about a decade, as old and new visitors alike flocked to the site. Black Twitter was abuzz. “What I didn’t realize is that there was just this underbelly of nostalgia around BlackPlanet,” Dualeh said.

Despite the success of the Solange rollout, BlackPlanet hasn’t seen a significant, lasting bump in numbers. Nostalgia alone won’t be enough to keep users engaged—no matter how much worse Twitter (now X) has gotten. The BlackPlanet interface feels dated, with an early-2010s-Facebook quality to it, even as the posts crawling across the main feed reference music or events from 2024. Alfred Liggins, Urban One’s CEO, acknowledges that there’s work to be done on the technical side. But he argues that the site is still relevant. And although today’s BlackPlanet does often seem like a repository for WhatsApp memes, YouTube links, and conversation prompts copied over from other platforms, some users do continue to use it to share photos and reflections from their real life.

In the current internet landscape, talk of eliminating hostility from large, multiracial platforms feels idealistic at best—particularly when those platforms are owned by egotistical billionaires such as Elon Musk, who has used Twitter to endorse racist claims and alienate parts of its user base. Still, there’s reason to hope that we may be entering a new era of social networking that prioritizes real connection over conflict-fueled engagement. Several new microblogging platforms have launched in recent years. Spill, a Black-owned Twitter alternative co-founded by two of the app’s former employees, joins networks such as Mastodon and Bluesky in offering users a space that isn’t subject to the whims of provocateurs like Musk.

Wasow, for his part, is cautiously optimistic. The emergence of smaller, more dedicated digital spaces, he said, could “take us back to some of that thriving, ‘Let a thousand flowers bloom’ version of online community.” It’s not that he expects people to stop using the huge social networks, Wasow said, just that he can see a world where they log on to Facebook and Snapchat and Instagram less.

The emergence of these new outlets also serves as a useful reminder: The social web can take many forms, and bigger is not always better. The thrill of the early internet derived, in part, from the specificity of its meeting places and the possibility they offered of finding like-minded people even across great distances (or of learning from people whose differing perspectives might broaden your own). Not everyone is lucky enough to meet a future spouse on their web planet of choice. But the rest of us still have the capacity to be transformed for the better by the online worlds we inhabit.


This article appears in the May 2024 print edition with the headline “Before Facebook, There Was BlackPlanet.” When you buy a book using a link on this page, we receive a commission. Thank you for supporting The Atlantic.


The AI Revolution Is Crushing Thousands of Languages


Recently, Bonaventure Dossou learned of an alarming tendency in a popular AI model. The program described Fon—a language spoken by Dossou’s mother and millions of others in Benin and neighboring countries—as “a fictional language.”

This result, which I replicated, is not unusual. Dossou is accustomed to the feeling that his culture is unseen by technology that so easily serves other people. He grew up with no Wikipedia pages in Fon and no translation programs to help him communicate with his mother across Fon and French, the language in which he is more fluent. “When we have a technology that treats something as simple and fundamental as our name as an error, it robs us of our personhood,” Dossou told me.

The rise of the internet, alongside decades of American hegemony, made English into a common tongue for business, politics, science, and entertainment. More than half of all websites are in English, yet more than 80 percent of people in the world don’t speak the language. Even basic aspects of digital life—searching with Google, talking to Siri, relying on autocorrect, simply typing on a smartphone—have long been closed off to much of the world. And now the generative-AI boom, despite promises to bridge languages and cultures, may only further entrench the dominance of English in life on and off the web.

Scale is central to this technology. Compared with previous generations, today’s AI requires orders of magnitude more computing power and training data, all to create the humanlike language that has bedazzled so many users of ChatGPT and other programs. Much of the information that generative AI “learns” from is simply scraped from the open web. For that reason, the preponderance of English-language text online could mean that generative AI works best in English, cementing a cultural bias in a technology that has been marketed for its potential to “benefit humanity as a whole.” Some other languages are also well positioned for the generative-AI age, but only a handful: Nearly 90 percent of websites are written in just 10 languages (English, Russian, Spanish, German, French, Japanese, Turkish, Portuguese, Italian, and Persian).

Some 7,000 languages are spoken in the world. Google Translate supports 133 of them. Chatbots from OpenAI, Google, and Anthropic are still more constrained. “There’s a sharp cliff in performance,” Sara Hooker, a computer scientist and the head of Cohere for AI, a nonprofit research arm of the tech company Cohere, told me. “Most of the highest-performance [language] models serve eight to 10 languages. After that, there’s almost a vacuum.” As chatbots, translation devices, and voice assistants become a crucial way to navigate the web, that rising tide of generative AI could wash out thousands of Indigenous and low-resource languages such as Fon—languages that lack sufficient text with which to train AI models.  

“Many people ignore those languages, both from a linguistic standpoint and from a computational standpoint,” Ife Adebara, an AI researcher and a computational linguist at the University of British Columbia, told me. Younger generations will have less and less incentive to learn their forebears’ tongues. And this is not just a matter of replicating existing issues with the web: If generative AI indeed becomes the portal through which the internet is accessed, then billions of people may in fact be worse off than they are today.


Adebara and Dossou, who is now a computer scientist at Canada’s McGill University, work with Masakhane, a collective of researchers building AI tools for African languages. Masakhane, in turn, is part of a growing, global effort racing against the clock to create software for, and hopefully save, languages that are poorly represented on the web. In recent decades, “there has been enormous progress in modeling low-resource languages,” Alexandra Birch, a machine-translation researcher at the University of Edinburgh, told me.

In a promising development that speaks to generative AI’s capacity to surprise, computer scientists have discovered that some AI programs can pinpoint aspects of communication that transcend a specific language. Perhaps the technology could be used to make the web more aware of less common tongues. A program trained on languages for which a decent amount of data are available—English, French, or Russian, say—will then perform better in a lower-resourced language, such as Fon or Punjabi. “Every language is going to have something like a subject or a verb,” Antonios Anastasopoulos, a computer scientist at George Mason University, told me. “So even if these manifest themselves in very different ways, you can learn something from all of the other languages.” Birch likened this to how a child who grows up speaking English and German can move seamlessly between the two, even if they haven’t studied direct translations between the languages—not moving from word to word, but grasping something more fundamental about communication.
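To make the mechanism of cross-lingual transfer concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library. The model name, example sentences, and labels are my own illustrative assumptions, not anything built by the researchers quoted in this story; the point is only that a classifier fine-tuned largely on high-resource data can often still label text in other languages it encountered during pretraining, with reliability dropping as a language’s training data thins out.

```python
# A minimal, illustrative sketch of zero-shot cross-lingual transfer.
# Assumption: "joeddav/xlm-roberta-large-xnli", a publicly available
# XLM-RoBERTa model fine-tuned for natural-language inference, stands in
# for the kind of multilingual system described above.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",
)

labels = ["sports", "politics", "cooking"]

# Roughly the same sentence ("The soccer match starts tonight") in a
# high-resource language (French) and a lower-resource one (Swahili).
examples = {
    "French": "Le match de football commence ce soir.",
    "Swahili": "Mechi ya mpira wa miguu inaanza usiku wa leo.",
}

for language, text in examples.items():
    result = classifier(text, candidate_labels=labels)
    # The top label is usually "sports" for both, but confidence tends to
    # fall, and errors grow more common, as a language's data shrinks.
    print(language, result["labels"][0], round(result["scores"][0], 2))
```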

[Read: The end of foreign-language education]

But this discovery alone may not be enough to turn the tide. Building AI models for low-resource languages is painstaking and time-intensive. Cohere recently released a large language model that has state-of-the-art performance for 101 languages, of which more than half are low-resource. That leaves about 6,900 languages to go, and this effort alone required 3,000 people working across 119 countries. To create training data, researchers frequently work with native speakers who answer questions, transcribe recordings, or annotate existing text, which can be slow and expensive. Adebara spent years curating a 42-gigabyte training data set for 517 African languages, the largest and most comprehensive to date. Her data set is 0.4 percent of the size of the largest publicly available English training data set. OpenAI’s proprietary databases—the ones used to train products such as ChatGPT—are likely far larger.

Much of the limited text readily available in low-resource languages is of poor quality—itself badly translated—or of limited use. For years, the main sources of text for many such languages in Africa were translations of the Bible or missionary websites, such as those from Jehovah’s Witnesses. And the crucial examples needed for fine-tuning AI—data that have to be intentionally created and curated to make a chatbot helpful, human-sounding, not racist, and so on—are even rarer. Funding, computing resources, and language-specific expertise are frequently just as hard to come by. Language models can struggle to comprehend non-Latin scripts or, because of limited training examples, to properly separate words in low-resource-language sentences—not to mention languages without a writing system.


The trouble is that, while developing tools for these languages is slow going, generative AI is rapidly overtaking the web. Synthetic content is flooding search engines and social media like a kind of gray goo, all in hopes of making a quick buck.

Most websites make money through advertisements and subscriptions, which rely on attracting clicks and attention. Already, an enormous portion of the web consists of content with limited literary or informational merit—an endless ocean of junk that exists only because it might be clicked on. What better way to expand one’s audience than to translate content into another language with whatever AI program comes up on a Google search?

[Read: Prepare for the textpocalypse]

Those translation programs, already of sometimes questionable accuracy, are especially bad with low-resourced languages. Sure enough, researchers published preliminary findings earlier this year that online content in such languages was more likely to have been (poorly) translated from another source, and that the original material was itself more likely to be geared toward maximizing clicks, compared with websites in English or other higher-resource languages. Training on large amounts of this flawed material will make products such as ChatGPT, Gemini, and Claude even worse for low-resource languages, akin to asking someone to prepare a fresh salad with nothing more than a pound of ground beef. “You are already training the model on incorrect data, and the model itself tends to produce even more incorrect data,” Mehak Dhaliwal, a computer scientist at UC Santa Barbara and one of the study’s authors, told me—potentially exposing speakers of low-resource languages to misinformation. And those outputs, spewed across the web and likely used to train future language models, could create a feedback loop of degrading performance for thousands of languages.

Imagine “you want to do a task, and you want a machine to do it for you,” David Adelani, a DeepMind research fellow at University College London, told me. “If you express this in your own language and the technology doesn’t understand, you will not be able to do this. A lot of things that simplify lives for people in economically rich countries, you will not be able to do.” All of the web’s existing linguistic barriers will rise: You won’t be able to use AI to tutor your child, draft work memos, summarize books, conduct research, manage a calendar, book a vacation, fill out tax forms, surf the web, and so on. Even when AI models are able to process low-resource languages, the programs require more memory and computational power to do so, and thus become significantly more expensive to run—meaning worse results at higher costs.
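One concrete, measurable source of that extra cost sits in front of every model: the subword tokenizer. A vocabulary built mostly from English text splits unfamiliar languages and scripts into many more pieces, so the same sentence eats up more of a model’s context window and more compute. The sketch below is a rough illustration under my own assumptions; it uses the GPT-2 tokenizer simply as an example of an English-centric vocabulary, not as a claim about any product named in this article, and the Amharic greeting is approximate.

```python
# Token "fertility": how many subword tokens an English-centric tokenizer
# spends on roughly equivalent greetings. GPT-2's byte-level BPE is used
# here only as an example of a vocabulary trained overwhelmingly on English.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

samples = {
    "English": "Good morning, how are you today?",
    "French": "Bonjour, comment allez-vous aujourd'hui ?",
    # An approximate Amharic greeting in the Ge'ez script (illustrative text).
    # Non-Latin scripts tend to fragment into byte-level tokens, multiplying
    # the count several times over.
    "Amharic": "እንደምን አደርክ? ዛሬ እንዴት ነህ?",
}

for language, text in samples.items():
    n_tokens = len(tokenizer.encode(text))
    print(f"{language}: {n_tokens} tokens")

# Longer token sequences mean more memory and more compute per request,
# which is one reason low-resource languages get worse results at higher cost.
```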

AI models might also be devoid of cultural nuance and context, no matter how grammatically adept they become. Such programs long translated “good morning” to a variation of “someone has died” in Yoruba, Adelani said, because the same Yoruba phrase can convey either meaning. Text translated from English has been used to generate training data for Indonesian, Vietnamese, and other languages spoken by hundreds of millions of people in Southeast Asia. As Holy Lovenia, a researcher at AI Singapore, the country’s program for AI research, told me, the resulting models know much more about hamburgers and Big Ben than local cuisines and landmarks.


It may already be too late to save some languages. As AI and the internet make English and other higher-resource languages more and more convenient for young people, Indigenous and less widely spoken tongues could vanish. If you are reading this, there is a good chance that much of your life is already lived online; that will become true for more people around the world as time goes on and technology spreads. For the machine to function, the user must speak its language.

By default, less common languages may simply seem irrelevant to AI, the web, and, in turn, everyday people—eventually leading to abandonment. “If nothing is done about this, it could take a couple of years before many languages go into extinction,” Adebara said. She is already witnessing languages she studied as an undergraduate dwindle in their usage. “When people see that their languages have no orthography, no books, no technology, it gives them the impression that their languages are not valuable.”

[Read: AI is exposing who really has power in Silicon Valley]

Her own work, including a language model that can read and write in hundreds of African languages, aims to change that. When she shows speakers of African languages her software, they tell her, “‘I saw my language in the technology you built; I wasn’t expecting to see it there,’” Adebara said. “‘I didn’t know that some technology would be able to understand some part of my language,’ and they feel really excited. That makes me also feel excited.”

Several experts told me that the path forward for AI and low-resource languages lies not only in technical innovation, but in just these sorts of conversations: not indiscriminately telling the world it needs ChatGPT, but asking native speakers what the technology can do for them. They might benefit from better voice recognition in a local dialect, or a program that can read and digitize non-Roman script, rather than the all-powerful chatbots being sold by tech titans. Rather than relying on Meta or OpenAI, Dossou told me, he hopes to build “a platform that is appropriate and proper to African languages and Africans, not trying to generalize as Big Tech does.” Such efforts could help give low-resource languages a presence on the internet where there was almost none before, for future generations to use and learn from.


Today, there is a Fon Wikipedia, although its 1,300 or so articles amount to roughly 0.02 percent of the total on its English counterpart. Dossou has worked on AI software that does recognize names in African languages. He translated hundreds of proverbs between French and Fon manually, then created a survey for people to tell him common Fon sentences and phrases. The resulting French-Fon translator he built has helped him better communicate with his mother—and his mother’s feedback on those translations has helped improve the AI program. “I would have needed a machine-translation tool to be able to communicate with her,” he said. Now he is beginning to understand her without machine assistance. A person and their community, rather than the internet or a piece of software, should decide their native language—and Dossou is realizing that his is Fon, rather than French.

Illustration by Matteo Giuseppe Pani. Source: Getty.

The Most Hated Sound on Television


When American viewers flipped open the July 2, 1966, edition of TV Guide, they were treated to a bombshell story. This was the first installment of a two-part series on “the most taboo topic in TV,” the industry’s “best-known and least-talked-about secret,” the “put-on of all time”: the laugh track.

At the time, almost every comedy on air was filmed live in front of a studio audience—or at least pretended to be. Pretty much all of the biggest shows used a laugh track—The Andy Griffith Show, The Beverly Hillbillies, Green Acres. Savvy viewers might have figured out that not all of the giggles and guffaws were real, but few people outside the industry understood the extent of the artifice. Even shows filmed live added some artificial laughs, sometimes to supplement the audience and sometimes because the laugh track sounded more authentic than the real thing. Behind the scenes, “Laff Boys” played their “Laff Boxes” like magic instruments, calling forth rounds of applause or squeals of delight with the press of a button.

Viewers scorned the laugh track—prerecorded and live chortles alike—first for its deceptiveness and then for its condescension. They came to see it as artificial, cheesy, even insulting: You think we need you to tell us when to laugh? Larry Gelbart said he “always thought it cheapened” M*A*S*H. Larry David reportedly didn’t want it on Seinfeld but lost out to studio execs who did. The actor David Niven once called it “the single greatest affront to public intelligence I know of.” In 1999, Time judged the laugh track to be “one of the hundred worst ideas of the twentieth century.” And yet, it persisted. Until the early 2000s, nearly every TV comedy relied on one. Friends, Two and a Half Men, Everybody Loves Raymond, Drake & Josh—they all had laugh tracks.

Now the laugh track is as close to death as it’s ever been. The Big Bang Theory, the last major laugh-track show, ended in 2019, and nothing has taken its place. Half of the live comedies on the big-four American TV networks still use laugh tracks, but half of those appear to be ending this year. More tellingly: Can you name a single one? The laugh-track haters had to wait more than 50 years, but finally, they can rejoice.


In a sense, TV episodes are just short movies beamed into your living room. But movies never used laugh tracks, not even in the early, silent days, when it would’ve been easy to layer the sounds of a delighted audience over Charlie Chaplin’s buffoonery. There was simply no need: Every movie had its own live audience right there in the theater, so why bother simulating one? Early TV shows were not so much short movies as radio shows acted out onstage. And because radio shows were recorded in front of a live studio audience for people tuning in at home, TV shows were too. The point of the laugh track was to re-create the communal experience you would have in person, Ron Simon, a curator of television and radio at the Paley Center for Media, told me. It was necessary, one production executive thought, “because TV viewers expect an audience to be there.”

Live-audience laughter had long been sweetened for radio and TV broadcasts, but around 1950, Bing Crosby’s radio show took things a step further, dispensing with the live audience altogether and adding in the laughs later. TV executives soon took a page out of Crosby’s book. With the creation of the Laff Box, in the early ’50s, canned laughs proliferated to the point that even shows without the slightest pretense of having been performed for a live studio audience used laugh tracks. Even The Flintstones and The Jetsons did. Some shows were still filmed in front of a real audience, but even they sometimes relied on canned laughs.

Not that the viewers warmed up to the laugh track. There remained a dissonance between viewers’ stated and demonstrated preferences: People railed against the laugh track, but they adored shows that used it. Every so often, the networks would try a show without a laugh track, but none of them lasted long. It’s nice to think that we’re above laugh tracks, that we don’t need them to know what’s funny, but “those social cues help you understand the meaning of comedy,” Sophie Scott, a neuroscientist at University College London who has studied laugh tracks, told me.

By the late 1980s, though, the dominance of the laugh track was starting to erode. Dramedies such as Hooperman and The Days and Nights of Molly Dodd got people accustomed to laughing without any cue, Simon told me, and in the early ’90s, shows such as Dream On and The Larry Sanders Show demonstrated the viability of the unsweetened sitcom. In 1998, a not-yet-famous Aaron Sorkin insisted to ABC executives that adding a laugh track would ruin his first-ever TV show, Sports Night. If he were forced to add one, he said, he’d “feel as if I’d put on an Armani tuxedo, tied my tie, snapped on my cufflinks, and the last thing I do before I leave the house is spray Cheez Whiz all over myself.” The show started out with a laugh track but scrapped it for Season 2.

The laugh track remained a force, though, even as the tides turned against it. In 2003, The New York Times wrote that “pretty much nobody likes laugh tracks, perhaps because they’re such obvious fig leafs for the embarrassment of weak punchlines, perhaps because they make us feel bossed and condescended to, perhaps because they dehumanize one of the most human actions imaginable.” At the time, Friends was the most popular comedy on TV.

Within a few years, though, a new breed of sitcoms was supplanting the old, first with the arrival of Arrested Development, then with The Office and 30 Rock, and a few years later with Parks and Recreation and Modern Family. Laugh-track shows were coming to seem not just condescending but also stiff and fusty. People began making videos in which they removed the laugh tracks from classic sitcoms to show that they weren’t actually funny. “Living in L.A., you sometimes hear coyotes eating cats, and to me, that’s the sound of a multi-cam laugh track,” Steve Levitan, one of the creators of Modern Family, said a few years into the show’s run. “I just can’t take it anymore.”


Last month, CBS green-lighted a new comedy about two young parents in Texas. It’s a spin-off of The Big Bang Theory and, like the original, will have a laugh track. In short, despite the repeated proclamations of its demise, the laugh track remains. You can still find shows that have it, both on TV and on streaming services, but there is an undead quality to it now. Bob Hearts Abishola, (probably) The Conners, and (probably) Extended Family are ending this year, likely to be replaced by more laugh-track-less shows. And many of those that remain are clear nostalgia plays, such as Netflix’s That ’90s Show, Paramount+’s Frasier revival, and CBS’s The Big Bang Theory spin-off.

Networks and streamers are going to keep swinging, and as long as they do, the laugh track will live on. The older audiences who grew up and spent most of their adult life watching classic laugh-track comedies are still around, and they watch more TV than any other age group. Plus, conventional sitcoms, when they really connect, are more lucrative than any other type of show. But the laugh track simply is not at the center of culture anymore. A laugh-track show hasn’t won the best-comedy Emmy in almost 20 years. If you could once flip through channels and hear laugh track after laugh track, now you can power up your smart TV; toggle among the top shows on Netflix, Hulu, Max, and Amazon Prime; and not hear a single audience reaction.

Robert Thompson, a professor of television and popular culture at Syracuse University, compares the state of the laugh-track sitcom to that of a much older medium: the fresco. “You could still get people to respond to beautiful paintings like Michelangelo painted on the ceiling,” he told me. “It’s just that people aren’t painting that way anymore.” Tourists still come from across the world to see the Sistine Chapel, and millions of people still watch Seinfeld and Friends on streaming services. But they may never lay eyes on a new fresco—or get into a new laugh-track comedy.

That might seem like reason to rejoice. But the death of the laugh track is not—or at least not just—something to celebrate. For all the ire it incurred, for all the bad jokes it disguised, the laugh track was fundamentally about reproducing the experience of being part of an audience, and its decline is also the decline of communal viewership. The era of the family gathering around the living-room TV is over. We don’t all watch the same shows on the same networks, and whatever we watch, we watch on our own personal devices. We don’t go to theaters as often. The laugh track was never more than the illusion of community, but now even the illusion has lost its luster.

There was always something a little dark about the illusion. But there’s arguably something even darker about its loss of appeal. Whether they realized it or not, viewers found comfort in the pretense that they were part of an audience. Now we are content to laugh alone.




