We Fear AI Because We Don’t Understand Ourselves

All the world’s a stage,
And all the men and women merely players;
They have their exits and their entrances,
And one man in his time plays many parts.
— Shakespeare, As You Like It, II.7

Worrying about the destructive power of artificial intelligence suggests that, for all our technical ability, we remain ignorant about our own nature.  In a time of unmitigated anxiety and safetyism, we define ourselves neither by technology nor by humanism, failing to see that these are not mutually exclusive modes of expression.  Our creations may be an expression of our essential humanity, but when we look at AI, we do not see ourselves.  We see catastrophe.

In a recent interview, Noam Chomsky bluntly summarized the current state of AI, noting “these systems are designed in such a way that, in principle, they can tell us nothing about language, about learning, about intelligence, about thought.  There’s a lot of sophisticated programming.  But basically what it comes down to is sophisticated, high-tech plagiarism.”  In other words, all AI can offer us are intricate repetitions of what we’ve already thought, said, and made.  This terrifies us because we do not understand it.

We no longer entertain the Vitruvian holism that seeks universals through human proportion.  In this view, artificial intelligence would, at best, be a filing system for our data, a fundamentally human metric.  But we no longer believe that by knowing ourselves, we can understand the constituent components of the universe around us.  We no longer put our faith in a philosophical cosmos defined by persistent human qualities and meanings.

Theories of postmodern fragmentation have instead convinced us that eternal, humanistic narratives are a lie and that we exist in an atomized power struggle of materialism and self-serving language.  So it’s ironic that Baudrillard’s Simulacra and Simulation supposedly inspired one of our most disturbing cautionary tales about AI, the Matrix movies, since those films are ultimately about finding meaning in grand narratives, something postmodern thinkers reject as an artifact of power-seeking rhetoric.

There is a scene in The Matrix Reloaded where Neo confronts the Architect, the AI that originally coded the Matrix and now serves as its monitoring program and curator.  While the first movie followed a fairly predictable “rebellion against authority” plot cliché (cf. Spartacus, One Flew Over the Cuckoo’s Nest, The Empire Strikes Back, 1984, The People vs. Larry Flynt, most of Game of Thrones, and many others), Neo’s conversation with the Architect in Reloaded offers a twist.  Neo considers himself a rebel leader, but the Architect reveals he’s just an embedded safety feature designed to reboot the system when it becomes unstable.  Moreover, there have been at least five previous iterations of Neo who’ve performed the same function.

This is very funny.  The grand revelation at the heart of the Matrix trilogy is the first thing you encounter when you manage to fight your way through all the chatbots and voicemail scripts to the flunky at an IT help desk: have you tried turning it off and on again?  The entire sci-fi epic is based on tech support for dummies.  When your computer doesn’t work, you become Neo, a hero in your own eyes perhaps, but really just a recurring function anticipated by the company’s customer relationship management protocols.  It’s the Wachowskis pranking the world with a Dilbert punchline.

And yet, Reloaded still manages to reach for something a little more profound than the Sunday comics.  It looks like the Buddhist doctrine of impermanence colliding with transactional analysis.  In his book on mindfulness meditation, Wherever You Go, There You Are, Jon Kabat-Zinn writes:

Notice, too, that the self is impermanent.  Whatever you try to hold on to that has to do with yourself eludes you.  It can’t be held because it is constantly changing, decaying, and being reconstructed again, always slightly differently, depending on the circumstances of the moment.  This makes the sense of self what is called in chaos theory a “strange attractor,” a pattern which embodies order, yet is also unpredictably disordered.  It never repeats itself.  Whenever you look, it is slightly different. . . . Since we are folded into the universe and participate in its unfolding, it will defer in the face of too much self-centered, self-indulgent, self-critical, self-insecure, self-anxious activity on our part, and arrange for the dream world of our self-oriented thinking to look and feel only too real.

This is an articulation of anitya or “impermanence.”  According to the Encyclopedia of Buddhism, “Anitya expresses the concept that all compounded phenomena (all things and experiences) arise due to causes and conditions and are subject to change, decline, and cessation.  Hence, all phenomena are unstable, unreliable, and constantly changing”—just like the Matrix when it needs a reboot.

Amid that continuous change, though, there are also recurring types in recurring situations, like Neo as “the One,” the other so-called rebels ignorantly convinced that they’re fighting the system, the more overt agents of the simulation, and even the Architect itself—all bodhisattvas and asuras coded into the Matrix’s wheel of Samsara and integral to its revolutions.

In this sense, all that is or will be experienced has happened before.  All of these entities have come into being and played their parts again and again.  And they will continue to recur as they tell the same stories and act in the same situations with the same people, not unlike characters in a video game.

Moreover, the Buddhist perspective is not at odds with psychologistic interpretations.  In Games People Play, his 1964 text on transactional analysis and one of the first pop-psychology books, Eric Berne describes such recurrence-amid-impermanence in terms of social programming.  He writes:

Social programming results in traditional ritualistic or semi-ritualistic interchanges. . . . As people become better acquainted, more and more individual programming creeps in, so that “incidents” begin to occur. These incidents superficially appear to be adventitious, and may be so described by the parties concerned, but careful scrutiny reveals that they tend to follow definite patterns which are amenable to sorting and classification, and that the sequence is circumscribed by unspoken rules and regulations. . . . Such sequences, which in contrast to pastimes are based more on individual than on social programming, may be called games.  Family life and married life, as well as life in organizations of various kinds, may year after year be based on variations of the same game.

It’s interesting that the language of computer science and that of psychoanalysis often overlap.  In terms of programming architecture (the Architect’s world), a “routine” is “a section of a program that performs a particular task.”  It is essentially a pre-set pattern or sequence of operations, not unlike karmic rebirths on the wheel of Samsara or Berne’s “games”: “A game is an ongoing series of complementary ulterior transactions progressing to a well-defined, predictable outcome.”  The games people play arise again and again with certain repeated personality types in certain repeated situations.
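The analogy can even be made literal in a few lines of code.  Here is a toy sketch (the function and variable names are invented for illustration, not drawn from any real system) of a “routine” in the programmer’s sense: a fixed sequence of transactions that, whatever the participants happen to be called, progresses to the same well-defined outcome every time it runs.

```python
# A toy "routine": a pre-set sequence of operations that plays out the
# same way on every run, regardless of who the participants are --
# much like one of Berne's "games."  All names here are hypothetical.

def play_game(player_a, player_b):
    """Run one iteration of a fixed transactional pattern."""
    return [
        f"{player_a} makes the opening move",
        f"{player_b} responds exactly as the pattern predicts",
        f"{player_a} reaches the well-defined, predictable outcome",
    ]

def pattern(transcript, *names):
    """Strip out the particular names to reveal the underlying structure."""
    for n in names:
        transcript = [line.replace(n, "<player>") for line in transcript]
    return transcript

# Different players, same game: only the names change.
first_run = play_game("Neo", "the Architect")
sixth_run = play_game("the One, version 6", "the Architect")

assert pattern(first_run, "Neo", "the Architect") == \
       pattern(sixth_run, "the One, version 6", "the Architect")
```

Swap in any two players and the transcript differs only in its names; the structure of the exchange, like the Matrix’s recurring Anomaly, is identical on every iteration.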

With this in mind, the question becomes: if artificial intelligence is a narrative extension (or mirror) of us and we transactionally construct ourselves in the same ways over and over, then what could there possibly be to fear?  In this grand act of plagiarism, what is being recapitulated if not human experience and identity?  And if that is the case, the potential catastrophe lies not with our creations but with our inability to know ourselves, to understand that our technology and our human nature are really one and the same.