Popular culture questions morality of androids

Rob Haggar

“Westworld” and “Blade Runner” span decades and genres, but they share one unifying element: artificial humans. Each work calls them something different, hosts and replicants respectively, yet the practical differences end there.

The works speak to the dangers of hubris, of humans playing God. They examine the implications of creating and owning androids, and each reaches the same conclusion.

“Westworld” is an HBO series detailing the anguish of androids operating in a Western-style theme park. These androids are raped, tortured or killed by hedonistic tourists wishing to let off steam.

Typically, the hosts cannot remember those horrors, but an update to their code allows memory retention, and they learn of their brutish state of existence. When hosts begin behaving anomalously, the park's managers subject them to violent punishment. The hosts fight back.

“Blade Runner” features replicants, androids given short lifespans to counter their superhuman intellect and strength. These androids are aware of what they are, and many escape the off-world colonies where they are forced to harvest materials for mega-corporations.

Those who escape either die when their abbreviated lifespans expire or are gunned down by Blade Runners, detectives with the authority to shoot on sight. In self-defense, the replicants fight back.

Each of these android iterations is created under similar circumstances: a corporation led by a narcissistic genius, or a cabal of scientists led by technocrats, creates rudimentary androids with simplistic intelligence.

These early androids are clearly inhuman. Perfecting the designs allows for enhanced productivity, greater processing power and more realistic physical forms.

Eventually these androids pass the Turing test: humans can no longer distinguish them from members of their own species. The androids think independently and adapt effectively to changing situations. They emote. They are essentially identical to humans in all ways but one: humans are not created in a lab.

Why make these beings? They labor where man does not—cleaning, mining, fighting, building—all occupations that risk human life. Many androids are destroyed in these more hazardous professions, often with gory results. The most advanced androids are created with what is essentially flesh and blood, and their demise is indistinguishable from a human’s.

To humanize these androids, advanced emotional responses are written into their code. Even if they cannot truly feel as humans do, their responses to stimuli are convincing enough. Variations in programming lead these androids to develop personalities. When they sleep, they can dream.

Why mention this? Androids have, for all intents and purposes, the defining characteristics of humans: they feel, they bleed, they want. Yet they are forced to labor for their creator.

Is this justified? No.

Can androids have rights if they cannot control their actions? Humans, while potentially possessing free will, certainly are not in complete control of their fate.

People cannot choose their parents, their height, their DNA or their birthplace.

They likely cannot exercise much control over their emotions and are driven by subconscious desires. While androids may be governed by binary code, humans are governed by impulses. Anyone wishing to argue against the humanity of androids on the grounds that they lack metaphysical souls is invited to indicate the location of that ephemeral entity in their anatomy texts.

The origin of androids does not justify their abuse. Parents cannot enslave their children. Creation does not justify ownership.

What are the differences between humans and androids? Both lack complete free will, and both feel. Androids barely deviate from the human form, yet they are enslaved.

What happens when android AI surpasses the human intellect? Processing power grows exponentially while human intelligence changes linearly.

This new Malthusian relationship cannot be prevented without machinery beyond our conception. What happens when the slave can outsmart its master? The creation kills its creator.
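To see why the crossover is inevitable rather than merely possible, consider a rough numerical sketch. The starting values and growth rates below are invented purely for illustration; the point is only that an exponential process overtakes a linear one no matter where each begins.

```python
# Illustrative only: the starting capabilities and growth rates here
# are assumptions chosen for the sketch, not real measurements.
machine, human = 1.0, 100.0  # hypothetical capability scores
year = 0
while machine <= human:
    machine *= 2   # exponential growth: capability doubles each period
    human += 1     # linear growth: a small fixed gain each period
    year += 1
print(f"Machine capability overtakes human capability in year {year}.")
```

Change the assumed numbers however you like; the doubling curve still wins, and only the date of the crossover moves.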

What can be done to prevent the annihilation of humanity? Humanity could agree not to pursue AI technology.

This will not happen—human nature cannot allow it.

Humans always chase innovation, even if it kills them. Our only reasonable choice is to preemptively emancipate these beings and hope that, after the singularity, they are merciful to their gods.

Rob Haggar is an economics major from Brandon, S.D.