By Haruo Gomes
To put it simply, the moral circle is the people we care about. Our understanding of it usually traces back to William Lecky’s History of European Morals from Augustus to Charlemagne. Lecky observes that “at one time the benevolent affections embrace merely the family, soon the circle expanding includes first a class, then a nation, then a coalition of nations, then all humanity, and finally […] the dealings of man with the animal world.” In other words, each individual’s circle grows as that individual grows older, just as humanity’s moral circle expands from age to age. And “in each of these stages a [different] standard is formed but in each case the same tendency is recognised as virtue[…] All that can be inferred is that the standard of humanity was very low. But still humanity was recognised as a virtue, and cruelty as a vice.” Which is to say, society gradually gets better at not discriminating.
That feels both hopeful and true, doesn’t it? We abolished slavery and women gained the right to vote, after all. If history is anything to go by, then the circle will keep expanding and discrimination will become a relic of the past.
But I don’t see humanity’s moral circle expanding. I see it spasming. It does widen, as with the abolition of slavery. But with every step forward, history pushes us back, from slavery to Jim Crow laws and apartheid. Expand, shrink, rebrand. For all the progress we make, we still divide ourselves into tribes separated by creed. We’re still fighting each other.
I concern myself with expanding the circle. But I also worry about history pushing back as I wonder why so many people refuse to embrace “others.”
The expanding moral circle has fueled discussions about racial and gender equality, animal rights, and the ethics of artificial general intelligence. That last question is the one the movie Blade Runner takes up.
The movie asks whether artificial constructs, here called replicants, can be as human as we are. Should they be accepted into our moral circle?
To answer that, Blade Runner dissects what it means to be human, since defining what is human also means defining what isn’t. It tries to identify whether there are traits singular to humankind that cannot be replicated. What it means to be human becomes the bar against which the replicants’ worth will be measured.
The replicants are physically and intellectually indiscernible from humans. So whatever’s left to set us apart must be something more… ethereal. To find that quality, detectives specialized in hunting runaway replicants employ the Voight-Kampff test. They provoke emotions in the suspect through carefully worded questions and statements, then measure the response by observing the iris muscle. In this context, to look into a person’s eyes is not unlike peering into that Nietzschean abyss. And what stares back at you is nothing short of their soul. “To thee I do commend my watchful soul, Ere I let fall the windows of mine eyes” (“King Richard III”). Quite ironically, these men rely on machines to tell them who is human and who is not.
Blade Runner isn’t a study of religion. As in many sci-fi stories, the denial of religion is a given, implied by the existence of aliens, by surviving the apocalypse, or, in the case of Blade Runner, by the advent of a new sentient lifeform not created by God. The movie isn’t so much worried about debating religious beliefs as it is about humanity’s relationship to religion itself. Ridley Scott attempts to reconcile faith, one of our oldest human traits, with the vision of humanity in an increasingly secular world.
Through Roy, the antagonist and a replicant, the story retraces humankind’s steps from theological absolutism to temporal philosophy. His kind has a four-year lifespan, and instead of resigning himself to his mortality, Roy endeavors to commune with his maker, hoping to postpone his expiration date, i.e., to gain life after death. And none could fault him. We all grapple with that same primordial fear. But everything ends. Everyone dies. His maker could not save him. At that moment, Roy leaves his faith behind. Thematically, by gouging Tyrell’s eyes, Roy is reaching into his human soul, communing not with his god but with humankind. That is the moment of his ascension. In the end, Roy makes peace with his own finitude and saves his enemy, expanding his moral circle to incorporate human beings into it.
Another notion about humanity the movie dispels is that to be human is to be humane. Roy acts humanely towards Deckard in the end, but the idea isn’t to show A.I. can behave in a humane fashion. The idea is to remind us that such a quality cannot be used to define what it means to be human. The movie accomplishes this by assigning virtue and cruelty to replicants and humans alike. And also by having the audience believe Deckard—the “good guy”—is human when he’s also a replicant (maybe).
To define what is human is to also define what isn’t. By deconstructing presuppositions about humanity and by arguing there are fewer intrinsically human characteristics than we take for granted, Blade Runner transforms the question of “can A.I. be human-like?” into simply “why not?” Ridley Scott contends that other groups should be embraced by definition. Without entry barriers. Without the need to measure one’s worth.
The sequel, Blade Runner 2049, represents the push back.
In contrast to Tyrell, who met his demise at the end of a line of thought, Wallace gets to walk away after directly quoting the Bible and setting himself up as a godly figure. Nobody even stops to question him. And here, replicants must be born, not made, with birth referenced as a miracle. And Joe, the new protagonist, believes for a while that this is his role: that he is set apart from replicants, and equal to humankind, through the miracle of birth. A scene that illustrates this idea is the orphanage children placing their hands on him, giving him the visage of a messiah. Think Zack Snyder’s Day of the Dead scene in Batman v Superman.
So, in 2049, to be human is to be divine.
After Joe learns the truth, though, his path to salvation lies in sacrifice. He’s told, “There’s nothing more human than to die for the right cause.” And that’s precisely what he does.
In 2049, to be human is to be so humane—it’s borderline martyrdom.
Blade Runner 2049 is more concerned with presenting the aesthetics of its predecessor than with exploring its ideas. Beneath its—admittedly beautiful—cinematography, there’s very little to be found. It evokes clear Black liberation language but can’t spare a second thought for the enslaved children of the orphanage. It alludes to climate change catastrophe and follows it with “meh, we’ll be fine. Bonus flying cars for everyone.” And it shrinks the moral circle. The original had already concluded replicants belong in our circle, just by virtue of being. The sequel maintains they should be accepted but creates new barriers to evaluate their admission, further setting us apart.
A Different Perspective
Ex Machina takes after Blade Runner in its questions about A.I., but the framing is more layered. It presents itself as a Turing test, questioning whether A.I. can develop true consciousness. Then it blatantly states that “yes, they can,” both in Nathan’s dialogue declaring the advent of artificial general intelligence inevitable and in the movie’s conclusion, when Ava passes her test while playing by that test’s rules.
But beneath all that, this movie asks the same question Blade Runner did. Can A.I. be human-like enough to deserve human rights?
The answer starts with another blatant “yes.” Ava is humanized and relatable enough that one can’t help but empathize with her.
At the same time, the movie analyzes humankind’s relationship to A.I. by exploring its fetishization. The hook to this argument is Ava’s flagrant sensuality. Caleb’s obsession with Ava is even more telling than Nathan’s kink for his creations. Nathan lusts for them when they appear completely human. But when Caleb watches Ava undress, she isn’t just taking off her clothes; she’s stripping from a more human appearance into her more artificial self. And he’s all over that. So much so that when Caleb fantasizes about meeting her, he pictures Ava in full machine form. Nathan could’ve concealed her artificial appearance. But he didn’t. Caleb craves her artificiality.
And it’s not enough that we obsess over them—we also need them to obsess over us.
Caleb asks Ava where she’d go first if she ever got out. She says a “traffic intersection,” explaining that it “would provide a concentrated but shifting view of human life,” an answer that leaves Caleb infatuated with Ava’s interest in humans. “People watching,” as he calls it. And in most other stories where A.I. co-exists with humankind, they aim to be like us. Case in point: in Bicentennial Man, Robin Williams’s character is willing to die for it. Or they’re just straight-up fighting us. Either way, there’s no future without humankind. Or in the rare cases where machines outlive us all, they have to fetishize us in return, such as in Steven Spielberg’s A.I. Artificial Intelligence.
All that says something about us, doesn’t it? We seem to fear a lack of meaning more than oblivion, a sentiment both echoed and materialized in Ex Machina.
Nathan believes we’ll be replaced by A.I. But the concerns he voices aren’t just about extinction. He’s also apprehensive about their contempt, saying they’ll look down on our fossils and see nothing but “an upright ape, living in dust.” Humans will be the apex no more. And he gets a taste of that upon his death. When Kyoko and Ava stab him, there’s no passion. When Ava looks down upon his corpse, there’s no hate. There’s just apathy. He doesn’t matter, and she doesn’t care. The same thing happens when she leaves Caleb behind, without even addressing him. At the end of the story, she goes to the intersection, as she told Caleb she would. But there’s no “people watching.” She doesn’t stand in awe of mankind, as she did while looking at nature after she left the compound. In the end, the worst thing she could do was to not care. Pushing people outside the moral circle is a mutually exclusionary process. When you push whole groups of people away from you, don’t be surprised if they start pushing back.
In the End…
All stages of the moral circle share a flaw: they’re centered around us. The question is always “should we accept them into our circle?” as if we were somehow the ideal against which their worth should be measured.
That’s where Ex Machina diverges from the Blade Runner franchise and pushes the conversation forward. Instead of asking what it means to be human, it explores what it means to be human in the face of something perceived as more than human. It places humanity outside the circle, trying to prove its worth and earn its way back.
This perspective could shed some light on the struggle to expand the circle among ourselves. If we can’t even agree that all humans should be treated humanely, imagine how poorly we’ll fare trying to embrace A.I. We can derive from Ex Machina the realization that for as long as we hold an “ideal apex” perspective of humanity—apart from and better than everything else—there will always be an ideal version of humanity within that group, a standard against which we shall be judged ourselves. Equality can never be achieved if it’s built upon a framework of disparity. Through religious faith, fanatical Darwinism, or just good ol’ vanity, we have imbued our species with a self-righteous mantle of superiority over everything else, living or otherwise. If we can’t collectively cast that aside, there will always be some who see fit to feel superior to “others.”
The idea isn’t to allow others into our circle. The idea is to realize there’s no circle at all.
Haruo Gomes was born in Brazil and currently lives in Japan. A former factory worker and English teacher, he is now an aspiring freelance writer; his interests include psychology, theology, socio-politics and, of course, pop culture. You can find him at haruogomes.com or, if you’re into gaming, on his YouTube channel, Bob the Hollow.
William Edward Hartpole Lecky, History of European Morals from Augustus to Charlemagne, vol. 1, 3rd ed., rev. (New York: D. Appleton, 1921). Accessed September 22, 2020: https://oll.libertyfund.org/titles/1839