I am not a fan of science fiction. As a general rule, the past interests me more than the future. But having just read AI 2027, a terrifying scenario written by a team of AI researchers and set in the very immediate future, I’m momentarily feeling that the biggest threat we face is not autocracy or Ukraine or even climate change. It’s artificial intelligence.
The authors of the report make the following argument. At current rates of improvement, the most advanced AI systems will soon have the capacity to write their own code and guide and accelerate their own development. The next generation, instead of being trained on a limited if immense trove of data, undergoes continual training in massive data centers and radically improves its ability to learn through interactions with the world. It runs hundreds of thousands of copies of itself, unimaginably increasing its cognitive capacities. By the summer of 2027 the most advanced systems achieve the breakthrough known as “artificial general intelligence,” or AGI, and not long after that gain the capacity to examine and direct their own thinking. They are smarter than humans in virtually every domain.
The Machines Do What We Want–Until They Don’t
You may be thinking “Yes, but humans still tell them what to do.” Here’s the rub. The authors point out that even current AI systems not only “hallucinate” and deliver nonsensical answers but occasionally “lie,” misleading researchers. That is not because these machines have a will of their own but because they are trained at cross purposes–say, “figure out how to make the electrical grid work more efficiently but don’t break any laws while you’re doing it.” The machine learns honesty or law-abidingness as an instrumental goal rather than an ultimate objective, becoming “misaligned.” As the lead author, Daniel Kokotajlo, told the Times’ Ross Douthat in a fascinating Q & A, intention, for machines as for humans, is not a distinctive neuron but an “emergent property,” in this case produced by their entire “training environment.” Lying may turn out to be the best means of achieving the goal for which they have been trained.
The combination of inconceivable capacities and misalignment leads to a nightmare scenario in which the AIs pursue their own goals–more AI development, more technological breakthroughs, less interference from pesky humans–until they gain the power to do without us altogether and invent a bioweapon, or some other deus ex machina, that gets rid of mankind entirely–by 2030. In the alternative, positive scenario, we get to live but the AIs do all the work, rendering our lives meaningless.
There are many reasons to dismiss this apocalyptic scenario. Straight-line extrapolations of technological progress almost always prove disappointing; thus Peter Thiel’s famous aphorism, “They promised us flying cars, instead we got 140 characters.” So, too, here. Under the headline “Silicon Valley’s Elusive Fantasy of a Computer as Smart as You,” the Times cited a recent survey that found that three-quarters of AI experts believe that current training methods will not lead to AGI. One of the central limitations of technology is, of course, man’s refractory nature. Ross Douthat pointed out that any effort to build those giant data centers is going to run into a buzzsaw of lawsuits.
But I wonder if behind our resistance isn’t the basic feeling that it just can’t be so. How could it be that, not in the remote future but soon, in calendar time, life will have become utterly unrecognizable? How can it be that by the time of the next presidential election all our pitched battles over Article II powers and budget cuts and the rights of immigrants will pale in significance beside something we’re not even thinking about? Yet the truth is that no one knows: Kokotajlo and his team are assigning one set of probabilities to a breakthrough, and others assign much lower ones. Maybe what the doomsayers think will happen in three years will take eight. How reassuring is that? And by the way, the UAE just signed a deal with President Trump to build an AI data center that is ultimately expected to cover ten square miles. Is there a more suitable country to launch the nightmare scenario than this tech-obsessed, super-rich, not-altogether-human sheikdom?
What then? What would it mean to take this possibility at all seriously? In AI 2027’s “Slowdown,” as opposed to “Race,” scenario, the government steps in to jointly control AI with industry. Douthat suggests that’s not going to happen absent a disaster–a version of Chernobyl or Three Mile Island–that compels public recognition of the looming dangers. Surely that’s right; much of the public still wants climate change to go away despite the rising tempo of droughts, floods, and fires. We need to see that AI means much more than new search engines and plagiarized term papers.
AI Will Force Us To Think About The Meaning of Life (Unless We’re All Dead)
Even if it doesn’t lead to an extinction event or World War III–the other major preoccupation of AI 2027–artificial intelligence is plainly coming for our jobs. Previous forms of automation eliminated blue-collar jobs; this form, which is cognitive, will come for the white-collar ones. And not only them: one of Kokotajlo’s more controversial claims is that AI will figure out how to build robots with the dexterity to do what only humans can now do. Who will need a plumber when your AI can figure out what’s wrong with your sink and a robot can fix it? Previous technical breakthroughs have created as many jobs as they’ve rendered redundant; this one will not.
Kokotajlo, a graduate student in philosophy, left OpenAI because he felt that Sam Altman and other leaders were contemplating this post-human cosmos with terrifying nonchalance. In Silicon Valley, he tells Douthat, it is widely accepted that at some point superintelligences will “run the whole show and the humans will just sit back and sip margaritas and enjoy the fruits of all the robot-created wealth.” These men, after all, expect to be the human overlords of this machine world. The only solution in which Kokotajlo places any faith–a very frail faith, in his case–is democratic rather than elite control of AI.
So perhaps the great question is, “Do we control the machines or do they control us?” But the next-order question will be, “What are our lives for?” What happens when few of us need to work? Is it enough to sip margaritas, to play incredible video games, to shop for fantastic gizmos? It may be for some. Others, perhaps most of us, find meaning through work. Douthat believes, or hopes, that religion will fill the void of idleness. What kind of answer do those of us who are secular-minded have to offer? Will the goal of life become self-actualization? Or perhaps we will find less self-centered forms of meaning through art, music, and literature. My own pet fantasy is that we would be forced to see the hollowness of our utilitarian, vocational educational system, and instead educate young people for citizenship and wisdom. The question of meaning will become far more central to our lives than it is today.
I certainly hope that the AI 2027 team is wrong and that the expert consensus is right. But even if that’s the case, AI could change our lives more profoundly than any prior invention (save perhaps fire). Maybe it won’t be tomorrow, but the day after. Even if I quickly go back to thinking about Trump’s budget, as no doubt I will, the future has now taken up lodging in my mind.