He Studied Cognitive Science at Stanford. Then He Wrote a Startling Play About A.I. Authoritarianism.
When I saw “Data,” a zippy Off Broadway play about the ethical crises of employees at a Palantir-like A.I. company, last month, I was struck by its prescience. It’s about a brilliant, conflicted computer programmer pulled into a secret project — stop reading here if you want to avoid spoilers — to win a Department of Homeland Security contract for a database tracking immigrants. A brisk theatrical thriller, the play perfectly captures the slick, grandiose language with which tech titans justify their potentially totalitarian projects to the public and perhaps to themselves.
“Data is the language of our time,” says a data analytics manager named Alex, sounding a lot like the Palantir chief Alex Karp. “And like all languages, its narratives will be written by the victors. So if those fluent in the language don’t help democracy flourish, we hurt it. And if we don’t win this contract, someone else less fluent will.”
I’m always on the lookout for art that tries to make sense of our careening, crisis-ridden political moment, and found the play invigorating. But over the last two weeks, as events in the real world have come to echo some of the plot points in “Data,” it’s started to seem almost prophetic.
Its protagonist, Maneesh, has created an algorithm with frighteningly accurate predictive powers. When I saw the play, I had no idea whether such technology was really on the horizon. But this week, The Atlantic reported on Mantic, a start-up whose A.I. engine outperforms many of the best human forecasters across domains from politics to sports to entertainment.
I also wondered how many of the people unleashing A.I. tools on us really share the angst of Maneesh and his co-worker, Riley, who laments, “I come here every day and I make the world a worse place.” That’s what I think most people who work on A.I. are doing, but it was hard to imagine that many of them think that, immersed as they are in a culture that lauds them as heroic explorers on the cusp of awe-inspiring breakthroughs in human — or maybe post-human — possibility. As a New York magazine review of “Data” put it, “Who gets so far at work without thinking through — and long since justifying — the consequences?”
But last week, Mrinank Sharma, a safety researcher at Anthropic, quit with the sort of open letter that would have seemed wildly overwrought in a theatrical script. “The world is in peril,” he wrote, describing constant pressure at work “to set aside what matters most.” Henceforth, Sharma said, he would devote himself to “community building” and poetry. Two days later, Zoë Hitzig, a researcher at OpenAI, announced her resignation in The New York Times, warning of the ways the company’s tools could use people’s intimate data to target them with ads.
Michelle Goldberg has been an Opinion columnist since 2017. She is the author of several books about politics, religion and women’s rights and was part of a team that won a Pulitzer Prize for public service in 2018 for reporting on workplace sexual harassment.
