Periscope Depth

Blog

what they gonna do with a lightning rider?

You can tell someone’s trying to bullshit you when they’re selling you on possibilities. Lots of things are possible. But we live in the actual and we’re heading toward the likely. If someone’s sales pitch focuses more on the possible than the actual, they are trying to distract you. They are dazzling you with the money you might win while they’re shifting three cards around on a cardboard box. Find the lady, find the lady.

strategy + business, PwC’s newsletter on business trends, had a gem last week titled “A brief history of tech skepticism”. In trying to gin up enthusiasm for “the metaverse”, James Clive-Matthews drops a list of premature tech skepticism from years past. If you’ve followed this genre for a while, you’ll recognize some of the classics:

Satellite communications (1961): “There is practically no chance communications space satellites will be used to provide better telephone, telegraph, television, or radio service inside the United States.”
—T.A.M. Craven, US Federal Communications Commission

Home computers (1977): “There is no reason anyone would want a computer in their home.”
—Ken Olsen, Founder of Digital Equipment Corporation (DEC)

Laptop computers (1985): “For the most part, the portable computer is a dream machine for the few…the real future of the laptop computer will remain in the specialized niche markets.”
—New York Times

The internet (1998): “By 2005 or so, it will become clear that the internet’s impact on the economy has been no greater than the fax machine’s.”
—Paul Krugman, Winner of the 2008 Nobel Memorial Prize in Economic Sciences

The iPhone (2006): “Everyone’s always asking me when Apple will come out with a cell phone. My answer is, ‘Probably never.’”
—David Pogue, Technology Editor of the New York Times

Hilarious, right? There’s nothing wrong with mocking failed predictions. And I will never rap someone’s knuckles for calling Paul Krugman a moron. But does the fact that prior skeptics were wrong mean current skeptics aren’t right?

That seems to be the extent of Clive-Matthews’s argument. He has no track record of “metaverse” success to point to. That’s because the company spearheading it – Meta, née Facebook – has fumbled every attempt at turning it into a product and is turning its attentions elsewhere. This is not doomsaying; these are observable facts:

  • Meta’s CTO sent out a panicked memo in November 2024, pegging 2025 as a make-or-break year for the company’s VR products. This came before a substantial reorganization of the Reality Labs division in January of this year. (Insider)
  • Per that same article, Meta has lost $60 billion on mixed-reality and VR products since 2020. Those losses have only grown – Q4 2024 was its worst quarter yet – not diminished.
  • Zuckerberg and Meta have largely shifted their attention to AI and to letting their users slur queer people. Horizon Worlds is no longer the priority it was in 2020.

And this is not because of the newness of the space. Virtual worlds are decades old. The regular users of Second Life (still around!), or longtime World of Warcraft players, or the developers keeping Neverwinter Nights afloat know what makes virtual worlds engaging. Wagner James Au has done some excellent writing in this space.

The problem is that, WoW aside, this is not a multi-billion dollar market. It is an industry powered by fans, hobbyists, and committed developers. But nothing short of a multi-billion dollar market will do. Meta, like all companies, must keep growing to justify its valuation. But when you’re one of the richest companies in the world, how can you grow at the same rate as when you were a scrappy startup? When you claim 3 billion monthly active users, where do you find the next billion?

Clive-Matthews writes:

The above assembly of prognosticating whiffs by often-celebrated minds should fill us with equal doses of schadenfreude and humility. The truth is, of course, no one can predict the future. All we can do when it comes to emerging tech is be inquisitive and actively seek out our blind spots; dig into the use cases, the business cases, and the constraints; and then come to our own conclusions about the likely utility of these unfamiliar things.

But he offers no use cases or business cases for Horizon Worlds. Nor can he. Nor can Meta. They infamously released an online ad for Horizon Worlds in February 2025 that was so awful, so uniformly mocked, that Meta took it down within days. More people use Meta’s Quest VR headset to access VRChat and RecRoom than Meta’s own virtual world. The market is not there. In the absence of a market, there is only, for Clive-Matthews, the shame of potentially getting a prediction wrong.

At this point, dismissing Meta’s promises for VR is not skepticism – it’s knowing the difference between marketing and cash flow. It’s reading a press release vs reading a shareholder report. It’s creative writing vs eighth-grade math.

This, we can laugh off. Going to bat for Meta’s vision of VR in 2025 is like asking when the Dodgers are coming back to Ebbets Field. But when glittering generalities get a bigger platform than s+b, I keep one hand on my wallet and one eye on the exit.

Last Friday, Kevin Roose, the New York Times’s legendarily gullible tech columnist, told us “Powerful A.I. Is Coming. We’re Not Ready.”

I believe that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more A.I. companies will claim they’ve created an artificial general intelligence, or A.G.I., which is usually defined as something like “a general-purpose A.I. system that can do almost all cognitive tasks a human can do.” […] I believe that hardened A.I. skeptics — who insist that the progress is all smoke and mirrors, and who dismiss A.G.I. as a delusional fantasy — not only are wrong on the merits, but are giving people a false sense of security.

Extraordinary claims require extraordinary evidence. To avoid quoting Roose’s entire column, I’ll sum up the grounds for his belief in two contentions:

  1. Industry insiders, such as Sam Altman and Dario Amodei, think artificial general intelligence is only a few years away. Altman committed to “a few thousand days” in September 2024, which could mean anything from 8 to 19 years; Amodei is committing to “a year or two away.”

  2. Existing A.I. models keep getting better. They can now do advanced math problems, play Go at a competitive level, code software, and summarize research papers.

I’m not going to do better at debunking or contextualizing these points than Ed Zitron or David Gerard or Gary Marcus. The short version:

  1. OpenAI (Sam Altman’s company) and Anthropic (Dario Amodei’s) burned at least $5 billion apiece last year. Their business model is based on using massive amounts of computing power to improve the results of their LLMs, meaning that their costs can only increase. Their continued existence as firms depends on attracting ever-increasing sums of outside investment or on institutional customers who are blind to cost. Altman and Amodei are not just slightly biased in favor of fantastic AI narratives; their jobs depend on being able to tell Masayoshi Son a story.

  2. Even the most trumpeted successes of generative AI “hallucinate” regularly. LLM hallucinations are different from the mistakes that a human might make. Humans form ideas by stringing concepts together in different relationships. LLMs form sentences by determining what sequence of words is likely to follow from a starting word. An LLM may guess what comes next when it reaches an unfamiliar prompt, just like a human might guess when asked something they don’t quite remember. But a human can tell you they were guessing; an LLM cannot.
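That second point can be shown in miniature. The sketch below is a toy bigram model — word counts from a tiny made-up corpus, nothing like the learned neural representations real LLMs use — but it illustrates the mechanism: the “model” always produces *some* next word, whether it has good evidence or is just guessing, and nothing in its output distinguishes the two cases.

```python
from collections import Counter, defaultdict
import random

# Toy "language model": count which word follows which in a tiny corpus,
# then generate the next word by picking the most frequent follower.
corpus = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word):
    options = follows.get(word)
    if not options:
        # Unseen word: the model still answers, with a pure guess --
        # and nothing in the output marks it as a guess.
        return random.choice(corpus)
    return options.most_common(1)[0][0]

print(next_word("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
print(next_word("ran"))  # "ran" never appears mid-corpus: a confident-looking guess
```

Both calls return a plausible word; only by inspecting the counts yourself can you tell which answer was grounded and which was invented.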

But let’s give Roose the benefit of the doubt. Let’s presume that current in-market genAI products were just as impressive as he says. Let’s pretend that Altman and Amodei were dispassionate commentators rather than white-knuckled gamblers. Even in the rosiest reading of the column, Roose has leapt to a conclusion he cannot justify.

Roose is claiming that there is a perceivable path between present-day knowable, fallible chatbots and “artificial general intelligence” – a digital thinking entity that can reason and intuit like a human. This is a form of life that has never been seen on Earth. What Roose is talking about is alchemy. Reliable, sophisticated chatbots, as neat as they are, do not portend the dawn of AGI any more than an extra life in a video game portends the existence of the immortal soul.

But of course Roose, a water carrier for the most ludicrous Silicon Valley claims, is not trying to sell you on AGI. He’s trying to sell you on “AI” as it exists today. He’s trying to lower the barriers to AI adoption. He’s selling you a reality (let Meta plagiarize all your books) by promising you a hypothetical (Rosey the Jetsons’ robot). If he only had the reality to sell you on, he wouldn’t have a pitch.

And the reality has not been promising for AI:

AI, a technology that should be able to sell itself, still struggles to find buyers.

It doesn’t irk me to be called a skeptic; I am that. But it irks me when my motives get questioned. I come back to Clive-Matthews’ s+b piece, where he cheekily quotes Douglas Adams:

  1. Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.

  2. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

  3. Anything invented after you’re thirty-five is against the natural order of things.

My objection to LLMs and the metaverse is not that they’re against the natural order of things. My objection is that I have seen them before. Chatbots and virtual worlds were “in the world” when I was born. I’ve been saying “Payment” into a phone for longer than I’ve been talking to customer service reps. Shared virtual spaces have been around since I was a teenager.

I’m not digging my heels in because they’re too new for me. I’m digging my heels in because they’re not new. I am not fresh off the bus; I have been to the city before. I know I am not turning $20 into $50 by picking the red queen. And in 2025, when there’s a guy shuffling three cards in every office and restaurant and store and hotel and TV show, I am just looking for space to breathe.

you can stop dreaming, I’ll dream for you

originally posted on Substack on October 3rd, 2023

I

I’m in no position to give labor organizing advice. I’ve been active in my workplace union since I started this job ten years ago. I’ve been a steward since 2016. There are people with longer and better track records than me; you should go listen to them. But I can say with pride that, if I were hit by a bus tomorrow, I’d be leaving my union in a stronger place than I found it: more engaged and informed members, more committed and active organizers, and a roadmap to develop more of both. That’s the sort of patient, relentless work that puts a union in a position to seize bigger gains when the opportunity presents itself.

Okay, I will say this with confidence: I am more qualified to talk about labor strategy than former investment banker and Obama official Steven Rattner or WSJ ghoul-in-residence Greg Ip. But that’s not saying much. You’re more qualified! Two dogs tangled on each other’s leashes are more qualified! Anyone who considers the working class with anything greater than nauseated contempt is more qualified.

What insight I have, I’ve gained from:

  1. A few years of painful experience;
  2. Living through an ahistorical surge in union popularity;
  3. Talking with organizers from other unions;
  4. Trainings and books from Labor Notes;
  5. Reading Jane McAlevey.

I return to McAlevey’s book, No Shortcuts: Organizing for Power in the New Gilded Age, from time to time. Even if you’re not building or strengthening a labor union in your own workplace, it’s an immensely valuable book. She introduced me to Marshall Ganz’s way of conceiving of strategy—”turning what you have into what you need to get what you want”—which I’ve used in all aspects of my life, not just organizing. But, most important, it grounds me on where my focus should lie.

Because, as the title itself tells us, there are no shortcuts:

The core argument of this book is that for movements to build maximum power—the power required in the hardest campaigns—there is no substitute for a real, bottom-up organizing model. […] The mobilizing model places primary agency on staff and is only capable of winning under certain restrictive conditions: those that do not require high levels of power. An organizing model places the primary agency for success on an ever-expanding base of ordinary people, and it can win in much more difficult circumstances, those requiring high levels of power. In each model, staff plays a very significant but radically different role. The key difference is where and with whom the agency for change lies.

The working class has two advantages which the ownership class cannot take away: the owners’ absolute dependence on our output, and our numbers. Without our work, the shelves of their stores, the machines in their factories, and the imagery of their intellectual property are just so much dead weight. And we—the ones who do the work—outnumber those who collect the rents, royalties, and profits one hundredfold. But neither our critical position nor our advantage in numbers is useful without organization. What form that organization takes, and who leads it, has been contested ground since the First International.

II

Move over Dark Brandon, this group wants to make Joe Cool a new meme (Politico):

The organization, ProgressNow, is launching a $70 million project to help the president and down-ballot Democrats win the war for voters’ digital attention. The idea is to create, in their own words, an “echo chamber” on the left. At its heart, it is an effort to compete with one they say already operates on all cylinders on the right.

[…]

The center of ProgressNow’s plan is an app that the group has developed called Megaphone. Users who download the app can scroll through a series of liberal memes, videos and graphics created by the organization, add their own captions, and then quickly share them on social media platforms [emphasis mine].

“ProgressNow has become an important partner to the groups supporting the Biden-Harris agenda,” said Anita Dunn, a senior White House adviser and top 2020 Biden strategist who added she was providing her comment in a personal capacity. Ramping up “ensures they will be in an even stronger position to deliver compelling digital communications to people in their communities through a grassroots network that effectively complements efforts in 2024 and beyond.”

[…]

ProgressNow is hiring more than 65 new digital organizers across 10 battleground states to manage a band of volunteers who will be tasked with sharing the organization’s online content far and wide through Megaphone. By the fall of 2024, the group is looking to expand its grassroots army to 13,000 volunteers nationwide.

[…]

The Strategic Victory Fund, an initiative of the leading donor group Democracy Alliance, is planning to raise the majority of the $70 million for the effort.

At the risk of cavilling, every person with a smartphone already has an “app” where they can access “memes, videos, and graphics […] add their own captions, and then quickly share them on social media platforms”. It’s called Facebook. It’s called Twitter X. It’s called Instagram. It’s called TikTok or Reddit or WhatsApp or Discord or Bluesky or Tumblr or Cohost. It’s the Safari or Chrome browser that came installed on your phone. It’s email, if your relatives are old enough.

In the year 2023, nobody who wants to share more pro-Biden memes online is at a loss for where to find them. Nobody is wandering the desert, garments rent, children wailing in their arms, desperate for content.

I do not know anyone engaged in the work of textbanking, doorknocking, or flyering for a political candidate who has ever wished aloud for a new app. In my (limited!) experience, you more frequently hear the opposite: complaints about too many apps. “Are we using Dropbox or Google Drive? How do we invite someone into our Slack? We need to update our flyers; who has the Canva login? Why are we using WhatsApp instead of Signal?” And so on.

$7 million apiece in 10 battleground states could fund campaign offices and a small legion of volunteers. Tell the beleaguered Arizona Democratic Party chair that a super PAC is airdropping $7 million in 2024 and they’d name their next child after you. But giving staffers and volunteers what they want isn’t sexy and cutting edge. It doesn’t excite the big donors who write checks for Democracy Alliance.

All of this is taking at face value that Megaphone is actually an app, not just a website wrapped in a Chromium shell and published on the Play Store.

And the donors, not the staffers, are the target audience. They’re the ones who need to be convinced that this is what will turn things around for the Democrats. They vaguely recall that the Obama campaign’s success in getting out the youth vote had something to do with “online”. And even if they do want to put a sincere effort into influencing critical elections, they don’t have the time or expertise to do so. It’s easier to be told that one donation will assuage their concerns.

These efforts also survive because they’re resume padding for professional nonprofit consultants. Anyone who worked on building or deploying Megaphone has a bullet point they can talk about for the next ten years, whether it moves the needle or not. Further, “Progressive Advocacy Group Develops App for Cool Biden Memes” gets headlines on Politico; “Progressive Advocacy Group Hires More Staff in Battleground States” wouldn’t even merit an email digest.

Donors drive both parties in the United States. Donors are the target audience. The parties need voters to get elected; it’s not like the voting is a fiction. But they rely on donors, and the professionals the donors pay for, to tell them how to reach the voters. This is why the interests of the parties and the interests of their donors seem so closely aligned.

III

Following layoffs, Boston University announces ‘inquiry’ into Ibram Kendi’s Antiracist Center (Boston Globe)

The assessment comes the week after Kendi, a celebrity author, scholar of race, and antiracism advocate laid off more than half the center’s staff.

[…]

Since its announced launch in June 2020, just days after the murder of George Floyd in Minneapolis, the center has raised tens of millions of dollars from tech entrepreneurs, Boston-area corporations, and thousands of small donors.

[…]

[A Boston University spokesperson] also provided a list of the center’s achievements, including: funding for numerous research projects, collaboration in a project launched by journalists at the Atlantic magazine (where Kendi is a contributing writer) to track racial disparities in COVID data, and organizing two “policy convenings” on antibigotry and data collection related to race and ethnicity.

In 2020, following the nationwide demonstrations and protests over the murder of George Floyd (among others), my employer started a reading and discussion group on diversity, equity, and inclusion. The first book we were assigned was Ibram X. Kendi’s How to Be an Antiracist. Since we were still working remotely, we all received a hardcover copy—free!—with plenty of time to read it. We were told this would be the first in a series of readings and discussions that my employer would hold. If there was a second, I never heard about it.

In How to Be an Antiracist, Dr. Kendi makes a compelling case that race, and racism, are the result of policies meant to serve the powerful, rather than bad ideas passed from individual to individual. He makes this case through good reporting on data, an examination of the historical record, and memoir. The memoir is the shakiest portion. He tries to marry each chapter in his study—on culture, on sexuality, on language—with a vignette from his own life. Not all of them match or resonate.

Dr Ibram X. Kendi

I believe Dr. Kendi is sincere in his commitment to change the material structures that prop up racism in this country. I do not believe he is a grifter. I couldn’t say this of everyone who came to prominence in the antiracist movement between 2013 and 2020. I believe he thinks he is doing the right thing.

I do not believe the Boston University Center for Antiracist Research is merely a form of 21st century diversity indulgence. I do not believe it is a box for billionaires and major corporations to deposit coins and show a commitment to ending racism: a sizable check, a well-crafted press release, a values statement on an unindexed page of their website. I do not believe that.

But, if it were, would it look any different?

The best work that BU’s own spokesperson could cite, when challenged on what the Center has done so far, was “funding” and “policy convenings”; i.e., passing along the money that they received and holding meetings.

If one believes, as Dr. Kendi writes in How to Be …, that “locat[ing] the roots of problems in power and policies” is the fundamental work of anti-racism, then the most obvious target is those institutions that have the power and that can write, or lobby for, the policies. These would include companies like Vertex Pharmaceuticals and Twitter, the company Jack Dorsey founded. But Dr. Kendi’s Center depends on them for philanthropy. There’s no way the Center could do the work it does without the donations that Vertex, Jack Dorsey, and others provided. No one else is funding that sort of work.

As with a progressive meme app, the target audience for the Center’s work is its donors. The donors are the ones Dr. Kendi and the other staff need to impress: not the BU student body, not the residents of Boston, and certainly not the Black community in America. 

Any center—any institution—that relies on massive capital contributions is not going to challenge capital. Capital guards its power jealously. It defends the right to accumulate as much as possible. It justifies that power in part through philanthropy. This is how the Rockefeller Foundation is still writing million-dollar checks to causes like the Center, even though a Rockefeller hasn’t been relevant since before I was born.

The complaint surfaced in the Globe article (and echoed elsewhere, such as by the former head of the center) is that Dr. Kendi had trouble delegating decisions, and that this slowed the center’s work. Management isn’t an innate talent. It’s not surprising that Dr. Kendi may have had friction with his staff. But at least some of this friction may have come from having to generate concrete solutions to a problem caused by capitalism while being dependent on capital.

“Americans have long been trained to see the deficiencies of people rather than policy,” wrote Dr. Kendi in How to Be an Antiracist. “It’s a pretty easy mistake to make: People are in our faces. Policies are distant.” Identifying the Center for Antiracist Research so strongly with Dr. Kendi gave Boston University a way to make headlines in a time when all of America was glued to the news. It also gave them a scapegoat for when an impossible task started to fall apart.

IV

Many years ago, HR brought a bunch of staff members into a meeting to freely discuss the results of a survey they’d done on job satisfaction. My colleagues in another department—far enough away that I never saw them on a given day, but still connected to my work—spoke at length about their challenges. They were burned out; there was no clear path to job advancement; their supervisors were inconsistent and inconsiderate. HR nodded and said they’d see what they could do.

Outside the conference room, I motioned the colleagues over to a private corner. “I heard what you were saying,” I told them. “I’m a union rep. Could we meet more regularly to talk about conditions in your department?”

Many of my most productive conversations as a union rep have come as fast follow-ups to all-staff emails or HR updates. I swim in the wake of policy like a remora.

Our union sends out periodic emails and holds regular meetings to update our members on workplace conditions. They have an active social media presence and all the buttons and posters you could want. But there is no substitute for the personal touch of organizing: the one-on-one conversation, the discussion with a small group, the little asks, the recruiting of fellow organizers, power mapping, strategizing, and so forth.

The work is not glamorous. I’ve led relatively few pickets in my time. I’ve drafted no petitions and I’ve never started a walkout. But I’ve turned bystanders into activists and activists into organizers. I have helped turn our labor union from an idea into a presence around the office: a tangible force that management has to acknowledge. I’ve done that not through my incredible genius or personal charisma, but through the slow, relentless work of putting power into other people’s hands. The power does not flow from me.

There’s a trillion-dollar non-profit sector in this country, a vast edifice promising to do good. It’s staffed by thousands upon thousands of people. Many of them—probably most of them—come to their work with the sincere intent to do good. But we must always return to one question: does the work put power in the hands of people, or concentrate power in the hands of the office? Are institutions being attacked or placated? Afflicted people know what they want: peace, land, bread. Does this work get them closer to that or no?

Capital subsumes and appropriates all critiques made against it: all cynicism, all anger, all despair. It’s the edgy billboard encouraging you to join the gig economy; it’s a commercial with Kendall Jenner giving a Pepsi to a cop; it’s the science-fiction rebellion. But it still requires our hand on the tiller. Alienated, we have nothing; organized, we could have everything.

Ideas are cheap; labor is dear.

someday my body will be just a body

Keep a home long enough and you’ll come up with “laundry TV.” It’s the kind of show you put on in the background while doing a simple chore that doesn’t require much problem-solving: folding laundry, washing dishes, scrubbing the bathroom. It doesn’t have to be TV—it could be a podcast—but TV fills the role very well.

FreeVee (Amazon’s rebranding of imdbTV) has been a treasure trove in this regard. Broadcast TV is ideal laundry TV. The plots are formulaic, no matter how good the writing is. The characters all revert to their mean by the end of each episode. And there were a lot of good broadcast primetime shows in the recent Golden Age. Person of Interest turned progressive/libertarian concerns about government surveillance into fast-paced suspense drama. The Good Wife made a soapy law office drama relevant by peppering in every liberal blogger’s favorite talking points. And because they were both CBS shows, you could count on a character recapping any major plot developments before and after every commercial break. You didn’t even have to look up from separating your linens!

Having a TV show on as background noise isn’t unusual. It’s how most people consume TV and it has been for years. But until the streaming era, you were limited to what your cable package provided. Netflix’s big push into streaming meant you could rewatch all your favorites, or even binge something you’d always heard about but never tried. Then tablets grew cheaper and phones grew more powerful, and suddenly your TV could travel with you. You never needed to spend a second alone with the sound of your own thoughts again.

The promise of streaming was that we could all “cut the cord”—sever our relationship with the old cable monopolies. We did a fearless and searching inventory of just how many of our dozens of channels we watched in a month, and decided we could do without most of them. I was a cordcutter and I loved it. The only time I’ve paid for a full cable package in the last 15 years was when I needed reliable HBO access to review True Detective (S2) for Decider. Xfinity sent me a cable box which I returned, unopened, a year later.

Of course, as soon as the big networks realized they were losing viewers to Netflix and only getting licensing fees in return, they got into the game. Now the three major broadcast networks each own their own streaming platform: ABC (Hulu), NBC (Peacock), CBS (Paramount+). Amazon, Discovery/HBO, and Apple have also gotten into the mix. Many of us have had to reconstitute our old cable subscriptions by subscribing to half a dozen different streaming platforms, netting out to the same monthly payment.

But the expansion of offerings has not led to an expansion of material. The back catalogs are still under the control of giant media companies. $100/month for an infinite library of a century of television and film might be worth it. But nobody’s offering that. In fact, media companies and rights holders are going out of their way to restrict content.

One example: M was home sick last week and watched more Paw Patrol than I’d normally be comfortable with. Paw Patrol has now been around long enough to generate spin-offs. The latest is Paw Patrol: Rubble and Crew, centered on the backhoe-driving bulldog and his extended family of construction workers. You can watch it on Nickelodeon. But you can’t watch it on Paramount+, the streaming platform owned by Paramount Global, parent company of Nickelodeon. Paramount+ only has 7 of Paw Patrol‘s 10 seasons, and if you think 182 episodes about a coastal tourist trap that’s outsourced its emergency services to dogs is sufficient, please explain that to my child.

It’s not that they haven’t gotten around to transferring the newest episodes off of video cassette or ripping them off of DVD. It’s all going over the same fiber. But our household only pays for internet, not for cable TV. I have no reason to pay for Nickelodeon. Paramount knows this, and teases episodes of Rubble & Crew, or its other shows, before each Paw Patrol episode we stream on Paramount+.

That explains why networks would restrict media to one platform when they could otherwise make it available elsewhere. But it doesn’t explain why networks would pull a show mere weeks after it airs, or even shelve it before it aired at all:

Two years ago, Nautilus was big news. A vast, expensive Disney+ prequel to Jules Verne’s 20,000 Leagues Under the Sea, Nautilus promised to tell the early story of Captain Nemo as he embarked on an epic submarine adventure, seeking revenge on his former captors the East India Company. A colossal replica submarine was built. Several soundstages on Australia’s Gold Coast were given over to it. Hundreds of crew members were hired alongside hundreds of extras. Filming took almost a year. The Queensland government claimed that the series would inject A$96m into the local economy. It looked certain to be a hit; an exciting new big-budget spectacle, underpinned with contemporary themes, based on a legendary piece of intellectual property. Nautilus couldn’t go wrong.

Except nobody is going to see Nautilus because, even though it has already been made, Disney+ has decided not to stream it. […] Disney+ has a different reason for getting rid of Nautilus: it was axed as a cost-cutting exercise.

In May, Disney+ announced a content removal plan designed to cut US$1.5bn worth of content, meaning it substantially reduces the company’s value, giving it a lot less tax to pay. Nautilus is not the only victim: a live-action TV adaptation of The Spiderwick Chronicles was also completed and then axed. Disney isn’t the only network to abandon shows that have largely been made, with HBO Max cancelling the second season of feminist porn comedy Minx just as it was finishing production (only for Starz to buy it, saving it from never seeing the light of day).

AMC has also deleted shows with completed and unaired seasons, such as the animated drama Pantheon and the legal drama 61st Street, for similar tax purposes. It pulled the plug on its adaptation of Adrienne Celt’s Invitation to a Bonfire partway through production. The sitcom Chad was pulled just hours before it was due to air on TBS.

Shows are also steadily vanishing from streaming platforms. Earlier this year, Disney+ removed a range of high-profile titles such as Willow, The One and Only Ivan, Big Shots and The Mysterious Benedict Society. Nor is it alone. In a similar move, Warner Bros Discovery has also removed dozens of shows like Westworld, Raised by Wolves, Gordita Chronicles, Run and Love Life from its platforms to save money, as has Paramount+ with its Grease spin-off The Pink Ladies and Jordan Peele’s Twilight Zone remake.

Westworld was a huge deal when it first aired: a show whose scope and spectacle made it a talking point in the entertainment press, even if the writing itself was middling. You’d think HBO would’ve wanted to keep it available for years. And Willow was on Disney+ for less than a year before Iger pulled the plug. If you didn’t catch it within a few months of launch, you never will (through legal means, anyway). And the 96m dollarbucks Disney invested in Nautilus is apparently better for them as a write-off than as the cost of a finished work of art.

The streaming business model relies on a constant flow of new content. Viewers have to have something to anticipate, a reason to keep their account active for another month. Investors need a splashy visual for the pitch deck or the earnings call: a $100MM action flick starring The Rock? a modern fantasy police thriller starring Will Smith? a high-stakes political drama starring Kevin Spacey? That certainly sounds like a successful media studio. Keep the money coming.

But the actual work produced—the “content”—isn’t meant to linger. You’re not meant to return to it and prize it highly. Netflix’s first original film, Beasts of No Nation (2015), was nominated for a BAFTA and a Golden Globe, and netted Idris Elba a SAG Award. But Netflix doesn’t care if you watch it today. Netflix released 17 original feature films the next year; 40 the year after that. The only one that’s had any staying power in the popular consciousness is A Christmas Prince.

(I’m being unfair; the documentaries 13th and Get Me Roger Stone, released in 2016 and 2017 respectively, each received a good measure of critical attention. That’s an average of one hot film every twelve months.)

In 2022—the same year Columbia Pictures released 10 films—Netflix released over 170 features. I challenge you to peruse the list and recall the first detail about more than a dozen of them. And that’s not even touching on their original television series! It’s a pretty remarkable list of shows. Wikipedia orders them by category (Drama, Comedy, Animation), and then by airdate. You start each list with a name you remember (House of Cards; Orange is the New Black; Bojack Horseman) and then scroll down through an avalanche of series you’ve never heard of (Soundtrack; Spinning Out; Hollywood; White Lines; Grand Army; Midnight Mass; Justice Served; Transatlantic; Brews Brothers; Sneakerheads; The Chair; Decoupled; Boo, Bitch; and I haven’t even gotten to the kids’ stuff yet).

There was a brief window of time where the new era of home media promised a deep dive into your favorites. We binged 24 or The Wire or Sports Night on DVD. We queued up reruns of Friends or The Office on Netflix while we mopped our kitchens. The major media companies had learned the lesson of the heyday of digital piracy: consumers just wanted an infinite library of convenient media. They would torrent it if no one could provide it in a user-friendly format, but they would gladly pay.

Then Netflix, Amazon, Hulu and the others decided licensing an infinite library was too high a price to pay. Then the free money faucet from the ZIRP era shut off. Now we’re hunting from platform to platform again, wondering who’s hosting our favorites.

This is because art is not a calling or a form of entertainment or a shared cultural experience to the owners of these platforms. Art is an asset. “Intellectual property” is a piece of capital, like land or an assembly line or a pizza oven. Of course, it’s a much more financializable piece of capital than any traditional asset. A tract of land can only be subdivided so many times, but a piece of “content” can generate cash any time someone is willing to pay for a license to “consume” it.

This is, as ever, not cynicism or an exaggeration. This is literally how executives talk about it:

Warner Bros. Discovery CEO David Zaslav sees a major challenge in his media empire: the underutilization of “Harry Potter,” “Lord of the Rings” and DC.

“One of the other real strengths of Warner Bros. is we talk about the great IP that Warner Bros. owns,” Zaslav said at the Goldman Sachs Communacopia and Technology Conference. “But, for us, the challenge is that our content, our great IP — “Harry Potter,” DC, “Lord of the Rings” — that content has been underused.”

Zaslav went on to note that the company hasn’t done “long-form Superman” in over 10 years. Zack Snyder’s “Man of Steel” premiered back in 2013. And though WBD has a stake in The CW, the network behind the currently running “Superman and Lois,” it is not the majority shareholder of the network.

“We haven’t done anything with ‘Harry Potter’ for more than a decade. We haven’t done anything with ‘Lord of the Rings,’” Zaslav said.

Zaslav makes a few basic errors there—Batman v Superman and Justice League are both less than a decade old; the Fantastic Beasts series has put out three movies—but those errors only make Zaslav’s viewpoint more galling. A nearly constant presence in the cinemas for the last ten years, global brand recognition, theme parks that bring in thousands of tourists a year: not enough. In the year that Barbie became Warner Bros’ highest-grossing film of all time, the CEO of Warner Bros Discovery thinks it’s time to commit even harder to old favorites.

Zaslav can say this with a straight face less than a year after cancelling the release date of a nearly completed Batgirl film because he’s talking to the finance community. He’s not talking to the dozens of actors or hundreds of crew members whose work will never see the light of day. He’s not talking to the writers and actors who are still out of work because Warner Bros Discovery would rather throw away $500 million than share the streaming data necessary to pay out fair residuals.

So long as capital has had a hand on the scale, art has never belonged to the artists or to the audience. Every advance that has given a little bit of control back to creators or to viewers—residuals, fair use, home taping, moral rights, any of it—was hard fought before it was won and has been under constant attack since. Were it not for the tightening financial markets, the writers and actors picketing the major studios might have already lost. But that’s how progress is made: when the opportunity arises, in the face of unreasoning opposition, one inch at a time.

we can’t go out but we still wanna dance

I

Invite someone who lived in Boston ten years ago to go to karaoke in Allston and you will conjure a specific vibe.

The neighborhood of Allston sits between Boston College and Boston University, and it isn’t far from Northeastern or Harvard either. It was known until recently for a high concentration of college students, recent graduates, immigrants, and anyone else looking for cheap, dense housing. So it acquired the usual businesses that service that population: a wide variety of takeout restaurants, laundromats, liquor stores, comic book stores, and more than its share of karaoke studios. You could rent a private room at Jin Karaoke or Do Re Mi for a floating rate—never clearly stated; frequently subject to change—and page through a sticky binder of mostly Korean songs in search of a 90s hit you recognized. What the venues lacked in service, they made up for with a blind eye to your concealed fifth of Jack Daniels.

When some friends invited me to a karaoke bar in Allston this past Saturday, that was the context I had in mind. But when I showed up, a young man at the door checked my ID. “Wand him,” said an older man leaning on the wall nearby. “All males get wanded.” I was wearing my travel vest with over a dozen pockets that evening, so I had to take quite a few things out—sunglasses, pen, flashlight—as the bouncer scanned me. 

“It’s his first night,” said the older man. “Got to show him how it’s done.”

Once inside, I gave my name and the name of my party to a host at a podium console. “You’re a few minutes early,” he said, “but you’re welcome to wait at the bar.” When I assented, he keyed a walkie talkie on the host stand. “One coming up.”

At the top of the stairs, a man in a suit touched his earpiece. “I got him.”

I would expect this level of security for a club in Miami, not a karaoke bar in Allston. But the vibe has changed in ten years. None of the bars or restaurants I remember are open anymore. The karaoke place I went to on Saturday used to be Sunset Bar and Grille, an Allston staple. But even Sunset wouldn’t have asked for more than an ID check at the front. And the idea of any bar on Harvard Ave charging for bottle service—$450 for a handle of Hennessy VSOP; $800 for Henny XO—would’ve been a punchline when I lived there.

I recounted all this to S. when I got home, much to her amusement. “We always joked about Do Re Mi being a front for something,” she said. “But … maybe you actually found one?”

That’s a common joke when you find the fourth mattress store in a three-block stretch, or a furniture retailer that’s been “going out of business” for the last six years, or a Boston karaoke bar with South Beach prices: it’s a front; it’s money laundering; it’s organized crime. You see the prices, you consult your own sense of what constitutes a “successful” business, and you can’t reconcile the two. You can’t prove it, but you can’t shake the sense that no one could afford to keep their doors open with this little foot traffic. Who is this for?

You’re struck by this comical disbelief because you can’t imagine this venue doing enough business to cover its overhead. And if that were the purpose of a business, you’d be right.

II

In the U.S., the median wage worker aged 25 or older has been at their job for 4.9 years (BLS data for 2022). That’s a 10% decline over the last decade (5.4 years in January 2012). A wide variety of factors may have contributed to shorter tenures: rapid economic growth post-pandemic leading people to change jobs, and an older, longer-serving cohort of workers retiring or being retired by COVID, among others.

Four to five years, by some coincidence, is the average longevity of a small business in the U.S. (JPMorgan Chase, also citing BLS data). That’s pre-pandemic data, but I doubt 2020 improved those numbers.

So owning a small business and working for an employer (large or small) seem to be equally risky. I call attention to this because entrepreneurs are often cited as “risk takers” whose “gamble” on a small venture entitles them to the profits it generates. Based solely on recent data, the “risk” seems about the same as punching a clock.

(I’ll be the first to admit that BLS data on employee tenure aggregates workers who voluntarily separated from their job as well as workers who got laid off. BLS data on small business lifespan also aggregates establishments that closed involuntarily with establishments that closed because their owner found a better opportunity. But the people defending entrepreneurs as “risk takers” aren’t coming at this from a more robust view of the data than I am, so I’m in good company.)

So if owning your own business is no more stable than collecting a paycheck, why do it? One advantage: tax law favors an entrepreneur in ways that it doesn’t favor a wage worker. I can’t deduct the cost of my commute from my wage income; I could account for mileage if I were self-employed. The ability to deduct business expenses, write off losses, and shelter oneself from liability are all advantages that an entrepreneur has over a wage worker.

Moreover, running a small business allows you to spend someone else’s money. Whether you take out a bank loan, solicit private equity, or sell shares on a publicly traded exchange, you can take a gamble with a stranger’s chips rather than your own. And if the gamble completely fails, you have legal protections available to you as well.

I’m not suggesting that small business owners have it all worked out for them. I know too many freelancers to say that with a straight face. For one thing, it’s not easy to get your hands on someone else’s money. Venture capitalists won’t take a meeting with just anyone. Loan officers at banks are notoriously picky. For all the talk of “free” enterprise, credit—the fuel that powers business expansion—still hinges on who you know and what you look like.

But the market for credit was a lot freer when interest rates were lower! When capital is free, or nearly free, firms can pursue riskier business models. Rather than trying to build a workable product, the goal is to Get Big Fast. Find an opportunity, flood the market with ads and sign-up bonuses to attract subscribers, then become the monopoly price-setter. You can afford to run your business at a loss for years if the money spigot never shuts off.

There are, broadly generalizing, two ways to make money in business: either create a product or service that customers are willing to exchange money for, or tell a story about how valuable your company will be when market conditions are right and let investors give you money.

When interest rates—the cost of credit—are at or near zero, why bother with the first option? If you’re a good salesperson, you can try selling to a thousand different customers, or you can try selling yourself to one loan officer.

And on the other side of the equation, if you oversee a pool of funds that must be invested for growth, you can afford to wager on riskier enterprises. If you make a hundred bets, ten bets that pay off tenfold will cover ninety bets that lose everything. And everything looks like a good bet in a growing economy.
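That portfolio arithmetic is worth spelling out. A minimal sketch with hypothetical round numbers (the $1M stake and the hit rate are illustrative, not from any real fund):

```python
# Illustrative venture-portfolio arithmetic: 100 bets of $1M each.
# Assume 10 winners return 10x their stake; the other 90 lose everything.
stake = 1_000_000
bets = 100
winners, multiple = 10, 10

invested = bets * stake                # $100M out the door
returned = winners * multiple * stake  # $100M back from the ten hits
print(returned - invested)             # → 0: the hits exactly cover the losses
```

At a 10% hit rate and a 10x payoff the fund only breaks even, which is why investors chase bets that might return a hundredfold, and why cheap credit, by lowering the return a fund has to beat, makes so many more long shots look rational.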

III

But as of late, I’ve wondered if some larger, structural issues might have been more to blame for the decline of organized crime, starting in the late Seventies and early Eighties. What if a better explanation is that crime was legalized, allowing corporations to seize their lucrative markets?

Put simply: the massive economic “reforms” initiated by neoliberal politicians in that era, now responsible for the massive inequality in the world today, also essentially legalized most forms of fraud and vice. Having been bolstered at the expense of everyone else, Big Business found that it could out-compete the mob, purchasing the kind of no-consequence, laissez faire attitude from politicians and police which organized crime had once had—only at a scale of which the wiseguys could only dream.

IV

When you think of a business as a “front” for a criminal enterprise, you think of an establishment that does not deliver on its ostensible purpose. The mattresses, or the imported furs, or the bottles of liquor behind the bar, are a facade. The real purpose is to legitimize a flow of money from an illicit source.

But, in my lifetime, the dividing line between a self-sufficient business and a bootstrapped front has grown harder to see. We’ve lived through a housing crisis brought on by “NINJA loans” and ended with a bank bailout. We’ve lived through WeWork, Theranos, and Juicero. The fever dream of cryptocurrency and NFTs is about to break, and if the final chapter of the Elon Musk story hasn’t been written, we’ve at least seen the outline.

When I started this train of thought, I was debating whether a business that appeared to have overhead costs vastly exceeding its potential revenue was a legitimate establishment or a criminal front. Now, what I’m saying to you is this: in an era of easy credit, labor exploitation, and “job creator” mythology, what’s the difference?

Ideas are cheap; labor is dear.

I hope they never understand us

We’ve taken three domestic flights as a family so far this year, two within a span of four weeks. It’s about as easy as it’s ever been to book a flight these days. Whether that flight will actually take off on time—or at all—and reach its intended destination is another question. Airlines did not take advantage of the lull in air travel in the last three years to get their affairs in order, nor the recent resurgence in demand to staff up properly.

An added wrinkle, when flying with a toddler, is determining the proper seats to book. There are general rules of thumb that hold regardless of airline or plane body: short flights, direct flights when possible, seats near the restrooms, et cetera. But there is still a baffling array of options to choose from.

For instance, on our preferred airline, JetBlue, I can book:

• Blue Basic
• Blue
• Blue Extra

“Blue Extra” is distinct from Blue in that it offers early boarding and expedited security. However, we can also board early—though not as early as the Blue Extra bookers—because we fly with a young child who needs assistance getting settled. And I can still add “Priority Security” to a Blue fare. Fare class has no bearing on seat class, though: Blue Basics, Blues, and Blue Extras alike may choose an “Even More Space” seat for an additional fee. “Blue Basic” allows you to bring one personal bag but not a carry-on bag, though if you book with JetBlue’s preferred credit card, you are entitled to one free checked bag per passenger.

And if you come down with COVID you might be canceling all this anyway, so read the policy on refundable fares closely.

This proliferation of seats was sold to the American consumer as an increase in “choices.” Frank Lorenzo, the corporate raider who bought Continental Airlines in 1981, is credited as the pioneer of lowering airline fares by introducing new choices in seat classes. In air travel, as in every other consumer-facing industry I can think of, introducing “choice” meant reducing the quality of the existing offering and selling the cheaper, shabbier version as a “value.” Customers who preferred the original offering are always welcome to pay more. But reducing the price of the original offering, or continuing to provide it at a price consistent with inflation, is never part of the plan.

The division of airplane seats into “business” class vs “coach” class—and further subdivisions; JetBlue is downright austere compared to its competitors—has nothing to do with the function of the plane. Business-class passengers weigh as much per capita as coach passengers. Their luggage, food needs, and restroom usage are about the same. Replacing the more spacious business-class seats with coach-class seats would allow airlines to pack more passengers aboard. But creating this artificial distinction serves two functions:

1. It lets the airline practice price discrimination: charging more to the people who are willing to pay more. Business-class passengers are usually expensing their fare to an employer or client, who can account for the increased price as a business expense. Leisure travelers, who usually have a limited vacation budget and fewer options to write it off, don’t. Having flown first class a few times, I can assure you that it is indeed nicer. I can also assure you that, by any objective accounting of inches of seat width and pitch per dollar, it’s not worth it.
2. It serves as a subtle form of marketing. Coach passengers shuffle past the first-class flyers. They note their spacious, clean seats and the way flight attendants dote on them. Like a Patek Philippe watch ad in the back of Sports Illustrated, it gives the middle-income consumer something to aspire to someday. It puts a dream in their head. And if that dream only pays off once—splurging on a first-class seat for a special trip; picking one airline over another when you get to travel for work—that’s still added value that the airline didn’t have to pay for.
I had these functions in mind when I read professional sycophant Megan McArdle’s Washington Post op-ed the other week: “Want employees to return to the office? Then give each one an office.” As someone who’s written extensively about working conditions in the information economy during the pandemic, I was at first inclined to chalk this up as a broken clock giving the right time. But, like the man said, you do not, under any circumstances, gotta hand it to McArdle.

McArdle starts by noting some backlash she received for a prior column in which she discouraged young women from applying to jobs that could be done fully remote and with little direct supervision (“lazy jobs”). To prove her impartiality, McArdle turns to the employer side of the equation with some unsolicited advice:

Why not give those workers offices worth returning to?

And I mean this literally: offices. Not open floor plans where people can hear every word, sneeze and gum-chew that comes from a co-worker’s mouth. Nor dispiriting cubicles that make them feel like rats in a maze. But small rooms with desks and doors that close.

[…]

Bosses are presumably well aware of these drawbacks, since so many of them arrange to have, you know, offices. Surely, they can understand how employees might find offices enticing, too.

If only the leader knew!

Jack Lemmon at his desk in a massive open office in THE APARTMENT

The open office, of course, predates the post-Nixon expansion of the finance, insurance, and real estate (FIRE) sectors—the white-collar boom, the rise of the yuppie, America’s transition away from manufacturing and toward services. But it was the (first) tech bubble that made open offices seem hip rather than soulless. Startup firms flooded with venture capital needed to expand quickly from garages and college campuses into “real” offices. This often meant converting space from a prior, abandoned use—urban industry—into office space. Lease a warehouse, throw up some drywall, run some CAT5, and voila! We can’t give everyone an office; we don’t have time to build offices. We’re moving fast, we’re breaking things.

(Throughout this essay, as in McArdle’s column, I’m of course talking about private sector service firms. We don’t need to speculate whether workers on an assembly line—in 1920, in 2020, in any era—would prefer an office with a door.)

But what once seemed cutting-edge quickly becomes ubiquitous. What started as a cost-saving measure for bootstrapping firms becomes a cost-cutting measure for the entrenched firms they were disrupting. And what was once a hallmark of innovation is now everyone’s least favorite part of the office.

McArdle is not wrong that rank-and-file employees hate open offices. She’s not even wrong per se about why:

[T]he open office certainly doesn’t help make workers want to come back. It is, as aforementioned, noisy, which makes it difficult to do any work that requires sustained concentration. Yet somehow, an open office also makes one reluctant to chat with co-workers or make important telephone calls, knowing that every word will have an audience of dozens, some of them visibly annoyed that your chitchat is derailing their train of thought. Tightly spaced desks in open offices also often feel cramped; everyone has just enough room to type without jostling elbows.

But she’s not correct, or not entirely correct, about why the open office floorplan persists. You can’t be a libertarian and understand how markets work. Comprehending economics interferes with libertarian thought. If you become one late in life, rather than in youthful enthusiasm, they excise the relevant bits of cortical matter with a grapefruit spoon.

She acknowledges the cost savings: thirty people take up much less space in an open floorplan than in thirty distinct offices. She even nods to the coming commercial real estate apocalypse, as billions of dollars in office leases are due to expire in the next few years. But it’s not primarily about cost and it never has been.

If you want to produce more of anything—insurance claims, ad copy, financial reports, widgets—you need either more work going into the product or a more efficient means of production: more labor or more capital; longer hours and more staff, or better tools and practices. As work gets more efficient through advances in technology or practice, firms tend to downsize. A typing pool full of secretaries gets replaced by one assistant with Microsoft Office. But a firm can’t grow on efficiency alone. So they have to hire more and more such assistants until you once again have a floor full of entry-level employees, only now they’re called “knowledge workers” rather than “secretaries.”

An open floor plan from Season 1 or Season 2 of MAD MEN

So even the most efficient firm, if it wants to grow, will need to expand its headcount. And any firm subject to the whims of the market—either dependent on finance or competing with firms that are—must grow or die.

As a firm grows to a certain scale, it becomes harder for its owners to make sure the work is getting done. This is true of all enterprises. In a cottage industry, the householder can peek over his family members’ shoulders. In a small shop, the master loses sight of his apprentices every once in a while. On a factory floor, the owner has to hire foremen to keep the assembly line up to speed. A global enterprise requires directors, vice presidents, presidents, and chiefs.

(They call this the principal-agent problem in the academic literature.)

A manager may be responsible for the output of a team of ten subordinates, but they do not receive ten times their subordinates’ salary. An employee may know more about the day-to-day operations of the business than their director, but the employee must still defer to the director’s wishes. How can this be enforced? Salary and bonuses are the obvious levers, but these are tough to tie to specific goals. There’s always the threat of firing, but that arrow can only be loosed once. Besides, interviewing to refill a role is a pain.

But a series of modest perks can work wonders:

1. A private office lets employers reward staff who are willing to work harder in service of the firm. Being a manager means making the firm’s values your own. You can bond with your direct reports, but you also have to be able to fire them if they’re not hitting their numbers. That requires a certain degree of alienation from the people you see every day. What better place to retreat from the pleading eyes of your colleagues than a 9′ x 8′ room?
2. A private office also serves as a subtle form of marketing. It reminds employees that your manager, no matter what you think of them, has a value to the firm that you lack. It reminds managers that they are separate—physically distinct—from the people who produce the firm’s actual output. It’s a lure to the ambitious and a goad to the rebellious.

The chief task of any capitalist enterprise is not generating profits or reducing costs. It’s maintaining control. A worker is asked to give their employer seven or more hours of their day to produce a good or service that the worker will not directly benefit from. The wage they receive is excellent motivation, especially if there is no easy alternative to wage labor. But even that won’t last forever. And if an employer concedes control over their employees, they concede the ability to swap out laborers as easily as they might swap out laptops.

You can tell the workplace struggle is a question of control, not of revenue, when employers fight for control at the expense of lost revenue. Starbucks is letting its contract with Cornell University expire rather than ease up on union-busting. Entertainment studios and streaming platforms wait out the WGA and SAG-AFTRA strikes with bad faith offers, even in the face of a season with no new media. Facebook has pulled news links from Canadian users’ feeds, even in the face of historic wildfires. Uber was willing to pull its service from Minneapolis over a city ordinance guaranteeing a minimum wage to its drivers, and only backed down when the mayor vetoed the measure and negotiated a private compromise. Zoom is dragging staff back to the office in defiance of their product’s chief value proposition. And it took an astonishing level of organizing, agitation, and the credible threat of a strike for UPS to come back to the table with a contract that ended the two-tier wage system.

McArdle’s column lists every benefit of private working space within an office except one: a visible distinction that reinforces existing hierarchies. It’s an understandable oversight. The libertarian fantasy treats every commercial exchange—between customer and retailer, between employee and supervisor, between the hirer and the fired—as a free exchange between equals. It recognizes no coercion in the private sphere. It has no concept of power beyond naked force.

But this model of economic coercion, like the libertarian fantasy, is on its way out. Commercial leases are expiring and not being renewed. Workers are finding and exercising remarkable power over their own jobs, despite the best efforts of employers and the Federal Reserve. And all of us bear vivid memories of who is “essential”—and how that “essential” work will be rewarded—from the start of the pandemic.

We’ve been given the briefest glimpse of another way of ordering the economy: a fabulous party, overheard through a swinging door while we toil over the dishes. We know we can do better than McArdle’s vision of a slightly more equitable corporate future. We know a better world is possible. The next step is to build it.

Ideas are cheap; labor is dear.

always an angel, never a god

I

It’s a joke so old I don’t know how to properly source it: the horror of asking a coworker how they’re doing and getting a sincere response. “Well, my mom’s in the hospital; my kid broke his arm; I’m daydreaming about how my life would have been different if I’d taken that year off after college and backpacked across Asia …” Whoa! Easy there, Spalding Gray! I was just making mouth sounds to acknowledge your presence. I don’t want an actual human relationship with you.

When you get a wage job, you don’t get paid per unit of output. A cashier at McDonald’s doesn’t get paid for every value meal they sell; I don’t get paid each time I send an email that begins “Just circling back …” We both get paid by the hour. Our employer is not buying the goods or services we produce for them; they’re buying our time. They’re paying to keep us handy, whether it’s the busy season or the slow afternoon lull.

When you understand this, you realize that the value you bring to your employer is not the good or service you’re producing. It’s your expertise. It’s your ability to make those hours productive. It’s your ingenuity and initiative. It’s your capacity not just to bag value meals but also to answer open-ended questions from customers, to hang a sign on the McFlurry machine when it’s out of order, to mop up if there’s a slow period, to break up fights, to control access to the restroom, to stay a little late if your replacement hasn’t shown, et cetera. An exhaustive list of every single thing you might possibly have to do in an eight-hour shift, and what response the franchise owner and McDonald’s corporate would prefer, would fill a phone book.

For this infinite creativity you receive, in exchange, a finite wage disbursed over a limited period of time. That wage is supposed to keep you alive—fed, healthy, rested, socialized, entertained—until the start of your next shift. It has to account for all the time you’re not at the job. Your employer assumes (very little) responsibility for your basic needs while you’re on the clock; the wage is meant to handle everything else. You are expected to spend seven to eight hours a day, thirty-five to forty hours a week, thinking about your employer’s problems. Your employer is not going to spend forty hours thinking about yours.

There’s a quote on another subject that I’ll bowdlerize for this point: your employer isn’t paying you to work for them; they’re paying you to leave. They’re paying you to go home at the end of your shift. Normally, when we ask another human being to care about our problems—when we enter into a relationship with them—there’s an expectation of reciprocity. I care about you; I hope you care about me as well. When we encounter a relationship that clearly doesn’t work this way, we are naturally repulsed. A friend we haven’t spoken to in years reaches out, asks some questions, mouths some pleasantries, then immediately launches into a multi-level marketing pitch. We feel frustrated and perhaps a little hurt! We wanted connection; we’re receiving a transaction instead.

Our compensation is partly a function of the value of what we produce, but it’s partly a function of how expensive our life outside of work is, too. So our employer has an incentive to charge as much as they can when they resell what we produce, but they also have an incentive for our life outside of work to cost as little as possible. This is why employers have a regrettable tendency to overlook or force out employees with care obligations (children, elderly relatives), health issues (pregnancy, chronic illness), or rising costs of living (rents). They would love a reserve army of young, healthy college grads whose living conditions are being subsidized by family and who’ve never heard the word “collective bargaining” in their lives.

    When a machine breaks down, the manager would like to replace the faulty part with an interchangeable, standardized part. When an employee breaks down, the employer would like to replace their creativity and ingenuity with someone equally creative and ingenious. That tension between the need to have creativity on tap and the need for seamless replacement governs most of labor relations in a capitalist economy. They would prefer robots; they are settling for humans.

    II

You could read thousands of words on the newest examples and use cases of AI before finding someone willing to admit that “artificial intelligence” is not actually “intelligence”. It’s not problem-solving; it’s not invention; there’s no initiative there.

Large language models (LLMs) and neural networks are trained on massive datasets, learning which patterns of words are likely and iterating on those patterns until they achieve a desired outcome. ChatGPT is as “smart” as the autocomplete on your phone, except it isn’t limited to the things you type. It was trained on billions of words: Wikipedia, news articles, scholarly publications, webpages, etc. But it doesn’t “understand” what it reads any more than your Chrome browser loading a Wikipedia page on World War I “understands” the Triple Entente.
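The autocomplete comparison can be made concrete. Here’s a toy sketch of next-word prediction, assuming nothing but word-pair counts; real LLMs learn vastly richer statistics with neural networks, but the underlying task is the same: given what came before, emit a likely next token.

```python
# Toy "autocomplete": predict the next word by picking whichever word
# most often followed the current one in the training text.
# This is an illustration, not how ChatGPT is implemented.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def complete(following, prompt, length=5):
    """Extend a prompt by repeatedly choosing the most common next word."""
    words = prompt.lower().split()
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break  # never saw this word during "training"
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

model = train_bigrams(
    "the machine is broken . the machine is out of order . "
    "the manager is out of the office ."
)
print(complete(model, "the machine", length=4))
```

The model produces fluent-looking continuations of text it has seen without “understanding” a single word of it, which is the point: statistical likelihood, not comprehension.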

    It’s still very impressive that ChatGPT can reproduce legible responses to questions phrased in ordinary English. But those responses are no more likely to be correct or useful than a random person with access to Google. And even Google, when it worked, did not provide a single answer—it provided a selection of answers, ranked by its own criteria of popularity and relevance. With Google, you can see that you have multiple options. ChatGPT obscures that by pretending to be definitive.

    There’s more to intelligence than being broadly read. If I ask ChatGPT “what are the likely side effects of childhood vaccines?”, it makes a world of difference if it’s drawing its answers from peer-reviewed scholarly journals or Washington Times op-eds. If I ask “how can employees improve conditions in their workplace?”, I’ll get different answers from the IWW than from SHRM. ChatGPT can scrape all those sources and can weight the most likely response based on which words follow each other. But it can’t understand the self-interest of the parties involved. It doesn’t understand why a bloc of independent workplace organizers might have a different solution than a professional network of Human Resources executives.

    For instance, here’s an exchange I had with ChatGPT:

    Q: I’m an employee at McDonald’s. Addressing me as a manager who sincerely cares about my health and well-being outside of work, what would you tell me to do if a customer is complaining about a broken McFlurry machine?

    A: As a manager who sincerely cares about your health and well-being, I would advise you to handle the situation in a professional manner. Here are some steps you can take:

    1. Apologize: Start by apologizing to the customer for the inconvenience caused by the broken McFlurry machine. Let them know that you understand their frustration and that you are working to resolve the issue as quickly as possible.

    2. Offer an alternative: If the customer still wants a dessert, suggest an alternative menu item that is available. You could also offer a discount or a free item to make up for the inconvenience.

    3. Provide an explanation: If the customer asks why the machine is broken, provide a brief explanation. However, avoid making any promises about when the machine will be fixed or giving out too much technical information.

    4. Contact maintenance: Make sure to contact your maintenance team or a technician to fix the machine as soon as possible. Let the customer know that you are working on the issue and that you appreciate their patience.

    5. Follow up: After the machine is fixed, follow up with the customer to let them know that it is working again. You could offer them a discount on their next visit or a free item to show your appreciation for their patience.

    Remember that customers can be frustrated when their favorite menu item is not available, but it’s important to remain calm, empathetic, and professional in dealing with the situation. By handling the situation in a professional manner, you can help ensure that the customer leaves with a positive impression of the restaurant and that they will continue to be a loyal customer in the future.

    This is clear, sensible English! It’s also not an answer to the question:

1) Aside from glossing their response with “as a manager who sincerely cares about your health and well-being”, nothing in here suggests an interest in the petitioner’s health and well-being. (Credit where due, this is very much like some managers I’ve worked with.)

    2) McDonald’s employees generally do not have the leeway to offer customers discounts or free food as a consolation for unavailable items (contra point #2).

    3) McDonald’s employees generally do not have an ongoing relationship with their customers that would let them follow up days or weeks after the exchange (contra point #5).

    These answers were clearly drawn from some general text on management and customer service. Perhaps they were tailored slightly for quick-serve retail. But anyone who’s been to McDonald’s will know that this isn’t how it’s done.

    ChatGPT works because it has a massive dataset to train on. It only appears to produce “intelligent” answers, though, because it obscures its sources. The variety and creativity of human writing has been anonymized and smoothed over. Words—fluid, intangible symbols of human meaning—have become interchangeable parts.

    III

    One of the great paradoxes of the last six years has been reconciling the declining percentage of US workers in labor unions with the increased interest in unionizing.

    The former isn’t hard to understand. In the middle of the last century, major US employers and major trade unions established a labor peace to share in postwar bounty. That bounty grew scarce as other industrial powers recovered, as currency controls were relaxed to ease investment across borders, and as cyberneticization made global conglomerates feasible. US employers broke union power by relocating to states—or countries—with softer worker protections. The purchasing power of an American wage stagnated, while the return to capital investment climbed.

[Chart from the Economic Policy Institute showing the divergence between productivity and compensation starting in 1979. Headline: “The gap between productivity and a typical worker’s compensation has increased dramatically since 1979”.]

    Given how weak unions have become, though, why the sudden interest?

    The recession of 2008 taught a generation of workers that anything employers granted workers—generous benefits, a salary sufficient to cover a mortgage, a foosball table in the break room—could be snatched back at a whim. The 2020 pandemic taught another generation just how much of our work was truly “essential”. The notion of loyalty to an employer has never been less popular in my lifetime. Workers are realizing how little they have in common with the people paying them and how much they have in common with strangers in the same boat.

    Historically, the most reliable way for workers in an industrial economy to build worker power has been through a union. So it’s little wonder that a generation of workers seeking power would turn to this historic institution.

It’s a long road, fraught with booby traps and chokepoints. Union-busting is so easy that the outgoing CEO of Starbucks could confidently tell a Senate committee that his company had “not broken the law” in firing union organizers, despite the NLRB having found exactly that three weeks earlier. The unprecedented success of an independent union in organizing Amazon’s warehouse on Staten Island has not been matched by equal success elsewhere. The major TV and film studios are stonewalling writers—with directors and technicians soon to join them—rather than share their obvious gains.

It’s easy to single out particular villains in each of these narratives: Howard Schultz, Jeff Bezos, David Zaslav. It’s easy to ascribe these holdouts to greed. But greed alone would not explain everything these companies have done to stave off collective action. Starbucks is closing down every store in Ithaca, NY rather than allowing their newly formed unions to gain a foothold. Studios are eating costs that likely exceed the cost of settling with the WGA. Ask any labor reporter or long-time organizer and they’ll tell you: employers would rather burn money than share it.

    What’s at stake is not a salary or a payout. What’s at stake is control. 

Once an employer grants that their workforce can dictate, or even shift, the terms of production, they have conceded sacred ground. The myth of employers as “job creators” depends on a job being something an employer may grant or withhold out of noblesse oblige, not something that exists in partnership with the worker doing it. The ability of a board to make promises to investors depends on the board being the only voice that matters. This is why employers fight corps-à-corps, flag-nailed-to-the-mast, against making concessions to a union.

    And this is why all reporting on AI must be viewed not as a question of “language” or “intelligence” or “creativity”, but as a question of control. Who dictates where LLMs will be used: the rank-and-file employee or the manager supervising them? Whose labor is being replaced by a neural network: an executive or a frontline worker? Where do the benefits of increased productivity or reduced costs go: into paychecks or into buybacks?

    This, also, is why debates over the efficacy of AI output are secondary, if not irrelevant. Tech industry boosters love to make bold predictions about what AI might do: IBM claims it can replace 5,000 empty roles; consultants claim AI can “easily spit out” screenplays. Every attempt to replicate or demonstrate these claims using existing models fails. “So much for AI,” think the critics. But the question is not whether a machine learning (ML) replacement can improve upon or substitute human input. The question is whether consumers and shareholders will settle for it. 

    Self-checkout is slower, less reliable, and more prone to theft than human checkout, but you can fit three self-serve scanners in the space that one checkout belt used to take. Self-driving cars are no better and often worse than human drivers, but auto makers are still rolling them out. Search engine optimization has made writing worse, but every publisher makes it a priority. Reaching a human in customer service—especially at America’s largest companies—has become an exception rather than a rule.

    AI may never be able to replicate a human’s efforts at a sitcom script, a coffee order, or a customer service email. It doesn’t need to. If it is cheaper, and if the companies providing those services have the final say, you will accept whatever AI gives you.

    The alternative is a model of the economy where employers and investors do not have sole power over what a firm does. Americans have been conditioned for generations to view any loosening of the investor grip over the firm as barbed-wire prison-camp socialism. But that bogeyman has lost its shadow over the last fifteen years.

    What we call AI is merely an exhaustive but idiosyncratic look at the past: billions of words, millions of documents, everything already published. But there’s nothing “intelligent” about that. Intelligence means prediction: faulty prediction, biased prediction, but prediction nonetheless. Intelligence looks toward the future. And if the American economy, to say nothing of the global system of industry, has a future, it will look nothing like the past.