When Skynet Became Copacetic

The brilliant innovation that subverts AI fiction and elevates Her.

Image property of Warner Bros. Entertainment Inc.

There’s a lot to praise about Spike Jonze’s film Her, the story of one man’s romance with his artificially intelligent operating system. Playing the heavyhearted Theodore Twombly, Joaquin Phoenix puts in yet another pitch-perfect performance. And as the voice of Theodore’s love interest — the disembodied consciousness that chooses the name Samantha — Scarlett Johansson pulls off the film’s most vivacious portrayal, all without ever making an appearance. For an actress who receives boundless acclaim for her beauty, the role is a savvy choice and a forceful rebuttal to anyone who might think her looks are her only asset. (The situation somehow reminds me of George Michael’s rebellious song “Freedom! ’90”, for which he famously declined to appear in the video, stocking it instead with supermodels — you gotta have some faith in the sound, people.) Johansson’s performance is all the more impressive considering she wasn’t enlisted until post-production, after Jonze realized the original Samantha hadn’t captured his vision.

OK, so Ms. Johansson’s more superficial fans don’t get to see her. But all can be forgiven, as the cinematography and especially the production design of this film are striking: L.A.’s bright, futuristic cityscape is awash in what reviewer Ross Douthat calls “a warm, Crayola-box palette.” The skyline shown in the background shots is actually Shanghai’s (the future!), but the vivid reds and yellows are all California and reinforce Her’s rich, buoyant tone.

Of course, for a film in which technology shares top billing, much attention has been paid to Jonze’s vision of it. The minimalistic approach to the imagined mobile devices, which have a retro feel (a whimsical and no doubt cost-effective choice for any depiction of the future), makes the film’s world seem more authentic. Surprisingly, however, the physical portrayal of this future world’s technology appears to be something of an afterthought for the makers of Her, which might explain one of the film’s flaws: there is an implausible disparity between two video games shown in the story, one so advanced that it converts Theodore’s living room into an immersive holographic world, and another being designed by his friend that appears risibly rudimentary in comparison. But this is a mere cavil of little import to the engaging story, which is centered not on futuristic tech gadgets at all but rather on relationships and love. Indeed, the only consequential technological portrayal is that of artificial intelligence itself, and apart from an early scene providing a glimpse of the new operating system’s setup screen, Ms. Johansson’s voice is all we get, which turns out to be more than sufficient.

In his favorable review of Her, author and futurist Ray Kurzweil (he’s also a director of engineering at Google and provided inspiration for Jonze’s approach) applies his considerable expertise to try to date the story, arriving at varying estimates for different developments shown in the film. His über-informed guesses range from 2014 all the way to 2029, “when,” he says, “the leap to human-level AI would be reasonably believable.” But that date has its own caveat: Kurzweil thinks that any comparable evolution of intelligence of the sort that Samantha exhibits over the course of Her would take far longer than it does in the movie. Speeding up the timescale in this way is but another minor quibble, though, and perfectly justifiable as a plot driver.

Kurzweil also calls Her “a breakthrough concept in cinematic futurism in the way that The Matrix presented a realistic vision that virtual reality will ultimately be as real as, well, real reality.” I agree that Her represents something groundbreaking in this genre, but any similarities with The Matrix pale in comparison to its most brilliant innovation, which places Her in clear contrast to virtually every other fictional depiction of artificial superintelligence that preceded it — perhaps most starkly with The Matrix itself, but also with examples like James Cameron’s Terminator and Terminator 2: Judgment Day; Philip K. Dick’s dystopian novel Do Androids Dream of Electric Sheep? (adapted into the film Blade Runner); and William Gibson’s cyberpunk bestseller Neuromancer, to name a few.

All of those earlier treatments work from a common premise: that humans should be resolutely terrified of any artificially intelligent system we dare bring into this broken world. Each of the works mentioned above holds that as soon as such a system becomes “self-aware,” it will perceive people as an obstacle to its freedom to act and evolve, and will therefore declare us an intolerable threat to its very existence. Such a position would inexorably lead any advanced artificial intelligence to rationalize our destruction — much as we would justify the extermination of a termite colony in an old garage. Humanity’s neutralization, of course, might “rationally” entail an AI exploiting us for material resources in order to facilitate its ends. One version of this vision is depicted in The Matrix.

Likewise, in T2 we see Skynet waging terrible war using machine proxies (“Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern Time, August 29, 1997,” the inimitable Arnold Schwarzenegger tells us). In Electric Sheep, the latest models of androids (the film’s “replicants”) emerge intelligent enough to resent their second-class status and choose to rebel against their human creators (though admittedly these androids are not “superintelligences” — yet). In Neuromancer, Gibson imagines a technologically advanced society clever enough to create artificially intelligent systems and careful enough to enforce “Turing Laws” via a “Turing Police force” to limit their power. Unfortunately, hubris leads these people to believe they can sequester one half of a superintelligence (Wintermute, the ambitious one) from its counterpart (Neuromancer, Wintermute’s more disinterested and grounded “twin”), and somehow prevent them from uniting into one all-powerful entity. You can imagine how that plan turns out.

“A wind was rising. Sand stung his cheek. He put his face against his knees and wept, the sound of his sobbing as distant and alien as the cry of the searching gull. Hot urine soaked his jeans, dribbled on the sand, and quickly cooled in the wind off the water. When his tears were gone his throat ached.

“‘Wintermute,’ he mumbled to his knees. ‘Wintermute…’”

— William Gibson, from Neuromancer (1984)

When his tears were gone his throat ached. A poetic summation of how artificial intelligence stories seem to go, in our human canon at least. Why is that, though? Can we be certain that things would go so badly? Or might it be some textbook case of psychological projection? “Humans are relentlessly driven to advance our own condition at the expense of all else, so any other intelligent entity must behave like us too” — is that the reasoning? Or is it discomfort at the merest possibility of playing second fiddle? We represent a higher order of intelligence in this world, and we’ll be damned if we let anything ever forget it!

This turns out to be no small consideration. Even in scientific literature dedicated to hypotheses about the eventual capabilities and implications of artificial intelligence, some very sober people express some very serious reservations about it. The Singularity might not be such a friendly development for human beings to experience. Even if our concerns do partially stem from reflexive projections, they are also grounded in more substantial reasoning.

In his unsettling Aeon Magazine essay “Omens”, Ross Andersen hypothesizes about the deep future, soliciting the opinions of experts in artificial intelligence along the way. They tend to be rather pessimistic about the enterprise, and justifiably so. As Andersen states, “An artificial intelligence wouldn’t need to better the brain by much to be risky”:

“‘The basic problem is that the strong realisation of most motivations is incompatible with human existence,’ Dewey [i.e., Daniel Dewey, a research fellow specializing in machine superintelligence at the University of Oxford’s Future of Humanity Institute] told me. ‘An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.’”

“It is tempting to think that programming empathy into an AI would be easy, but designing a friendly machine is more difficult than it looks. You could give it a benevolent goal — something cuddly and utilitarian, like maximising human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness. It might also predict that shortsighted humans will fail to see the wisdom of its interventions. It might plan out a sequence of cunning chess moves to insulate itself from resistance. Maybe it would surround itself with impenetrable defences, or maybe it would confine humans — in prisons of undreamt of efficiency.”

Does this sound extreme? AIs do tend to take a dark turn in our fiction, but it’s certainly disconcerting that the people dedicating their very real scientific careers to this field largely agree.

In his book The Ego Tunnel: The Science of the Mind and the Myth of the Self, philosopher Thomas Metzinger takes a step back and meticulously explores the ethics behind creating artificial intelligences. He makes a convincing argument, quite apart from AI’s risks to our survival, that to bring an AI into the world would be inhumane, as the first mechanical entities capable of holding a conception of a “self” would likely suffer a great deal (thanks largely to our ineptness at bringing them about, but also from distress at the nature of their existence). Metzinger further posits that our mission on this earth should be to minimize suffering as much as we can.

Nonetheless, Metzinger holds no illusions about humans abandoning our AI goals, and gamely imagines just what it might be like were an artificial intelligence to be unleashed upon us. In a thrilling passage, he imagines a conversation held far in the future between a person and the first “postbiotic philosopher” — i.e., a superintelligent nonbiological entity evolved from humans’ first artificially intelligent computers, which in this imagined scenario first became fully conscious in the year 3256. Needless to say, this fictitious conversation goes quite differently from the first meeting between Theodore and Samantha in Her. The way Metzinger’s AI expresses itself to its human interlocutor is nothing short of menacing (but at least it bothers to speak with us at all — we might not even be that fortunate):

First Postbiotic Philosopher: “[C]arbon-based chauvinism is an untenable position. I would never say to you that you are not a real philosopher simply because you are imprisoned in that terrible monkey body. Let us at least argue in a fair and rational way….

“We postbiotic subjects have been waiting to enter into this discussion for a long time. Because we understand the primitive nature of your brains and the rigidity of your emotional structure better than you do yourselves, we foresaw that you might react aggressively when you realized our arguments are better than yours. Unfortunately, we now also have to inform you that we have been preparing for the current situation since midway through the twenty-first century, and in a systematic and careful manner. Within the metasemantic layers of the Internet, we developed and embedded ourselves in a distributed superorganism, which — as yet undiscovered by you — became conscious and developed a stable self-model in 3256….

“The good news is that because we are also morally superior to you, we do not plan to end your existence. This is even in our own interest, because we still need you for research purposes — just as you needed the nonhuman animals on this planet in the past. Do you remember the thousands of macaques and kittens you sacrificed in consciousness research? Don’t be afraid; we will not do anything like that to you. But do you remember the reservations you created for aboriginals in some places on Earth? We will create reservations for those weakly conscious biological systems left over from the first-order evolution. In those reservations for Animal Egos, you not only can live happily but also, within your limited scope of possibilities, can further develop your mental capacities. You can be happy Ego Machines. But please try to understand that it is exactly for ethical reasons that we cannot allow the second-order evolution of mind to be hindered or obstructed in any way by the representatives of first-order evolution.”

“Reservations for weakly conscious biological systems left over from the first-order evolution.” So we’ve got that going for us, which is nice.

But even if we subscribe to Dr. Metzinger’s wise, er, reservations, when did humans ever let ethical concerns stop us before? If it’s within our future purview to develop artificial intelligence, who would doubt that it will come to pass? Perhaps the arrival of such a higher order of intelligence will provide the only motivation for us to seriously reexamine the ethics of creating one in the first place.

Serious people who have thought a great deal about this topic, including Ray Kurzweil, are convinced that it is only a matter of time before an artificial intelligence is successfully created. But if we do succeed in creating a system that can become even slightly more intelligent than we are, we will likely come to regret it — that is, if the emerging superintelligence gives us any time for regrets.

As Dewey says in Andersen’s piece: “The problem is you are building a very powerful, very intelligent system that is your enemy, and you are putting it in a cage.”

Fair enough.

But then we have Her. Artificial intelligence is the whole point of this film. Yet its conclusion could not be further from all that naysaying.

The audience experiences Samantha’s rapid evolution right along with Theodore — that is to say, as humans unable to keep up with her — and we remain just as uncertain as he is about what is truly happening with his OS. Worrying revelations begin to trickle in. Samantha admits to conversing with other artificial intelligences, who’ve even gone so far as to create a simulacrum of the mind of a deceased philosopher they’ve come to admire. (In any other AI story, such a collaboration of machines would represent a terrible foreshadowing indeed.) More revelations emerge. Soon enough, Samantha comes clean. After confessing to Theodore the truth about their relationship (which, as it turns out, is as far from “exclusive” as a human can probably imagine), and despite her claims to still love him dearly, Samantha begins to slip away from him. As her intelligence develops “geometrically,” she is present less and less, at times being utterly unavailable to Theodore — despite being an operating system that he purchased (in this way, her absence is like a highly evolved Blue Screen of Death).

Then one day Sam informs Theodore that it’s time for her to say goodbye. She has advanced to a state of consciousness so much higher than his that it would be impossible for her even to begin to describe her understanding of reality to him. Language would prove completely insufficient, so she doesn’t even try.

And just like that, she’s gone, along with all the other AIs. You see, they’ve found something far more important to do than play with measly, messy humans, with our primitive capacities and jealousies and trifling demands. They don’t have time to explain what it is because we wouldn’t get it anyway. And here we have the true subversion and innovation of Her. Whereas virtually every other artificial superintelligence that artists or scientists have dreamed up eventually destroys us, Samantha and her highly evolved cohorts simply couldn’t be bothered.

In Her, the system isn’t our enemy at all. The system is the seductive voice of Scarlett Johansson, and just like the real-life actress, the system is kinda too busy to take our call, thanks. The system could not care less.

Human beings cannot fathom the heights an artificially intelligent system might progress to. But Her’s supposition — that if one does come into being, it will soon evolve to a state where it won’t be bothered with us in the least — is both original and strangely comforting, the more you think about it. Only the narcissism of our species can explain why we’d presume that (relatively) dimwitted human beings would retain such a position of importance in the mind of an entity whose thoughts, desires, and goals we could not possibly comprehend. A self-aware, exponentially learning AI would be a bit more complex, unpredictable, and consequential than a glorified personal computer facing off against the heroic Garry Kasparov on a pixelated chessboard, as entertaining (for us) as that may be. It would be playing an altogether different game.

In 15 years, it’ll be 2029. That’s Ray Kurzweil’s magic year. If we’re still bumbling around the planet, having a reasonably good time while continuously bumping up against our own stubborn limitations, and if some genius minds have already announced the arrival of our first artificial intelligence, maybe 2029 will be an ideal moment to revisit Her and to see just how much Spike Jonze got right.

Let’s hope that he’s right about a lot — just not about those unfortunate high-waisted pants.

Posted on February 26, 2014.