The Latest AI Documentary Asks: Just How Scared Should We Be?
Still from The AI Doc: Or How I Became an Apocaloptimist.
Courtesy of Focus Features It’s not easy to get an interview with Sam Altman—just ask Adam Bhala Lough, the filmmaker behind the recent documentary Deepfaking Sam Altman.
Lough originally planned a feature exploring the potential and perils of AI that would center on a conversation with the OpenAI CEO.
But, after having his inquiries ignored for months, he opted instead to commission a chatbot that mimicked Altman’s speech patterns and approximated his facial expressions by way of a digital avatar.
The real Altman did sit down, however, for the new feature The AI Doc: Or How I Became an Apocaloptimist, which hits theaters March 27.
So did Dario Amodei, the CEO of Anthropic, and Demis Hassabis, a cofounder and CEO of Google’s DeepMind Technologies.
(Though the filmmakers say they requested interviews with Meta’s Mark Zuckerberg and X’s Elon Musk, neither made an appearance.) It’s an impressive level of access for codirector and documentary protagonist Daniel Roher, whose 2022 documentary Navalny, about the Russian opposition leader Alexei Navalny, won an Academy Award.
The problem is that once they’re on camera, Altman et al. say little we haven’t heard before—and they skate by on glib answers about their responsibilities to the rest of the species.
When Roher asks Altman why anyone should trust him to guide the rapid acceleration of AI, given its extreme ramifications, Altman replies: “You shouldn’t.” The line of interrogation ends there.
The AI Doc is framed by Roher’s anxiety over the impending arrival of his son and first child with his wife, filmmaker Caroline Lindy.
He wonders what kind of a world his boy will inherit and whether the rise of artificial intelligence will preclude the experiences that develop us into self-sufficient adults.
In Roher’s first several interviews, all his worst fears seem to be confirmed.
Tristan Harris, cofounder of the nonprofit Center for Humane Technology, delivers one of the film’s hardest gut punches: “I know people who work on AI risk who don’t expect their children to make it to high school,” he says, invoking a scenario in which the technology demolishes the very infrastructure of traditional education.
Despite the sense of mounting panic, Roher and codirector Charlie Tyrell present an admirably robust crash course in AI and the biggest questions it poses, helped along by Roher’s insistence on defining terms in plain language rather than startup buzzwords.
