Autonomous weapons separate humans from warfare, diminishing responsibility

Erica Dorsett

Human societies have long struggled with the ethics of killing. We’ve moralized the act under certain circumstances, marking it as necessary to ensure safety or to end an ongoing conflict. In these cases, a moral sacrifice is made for the greater good.

We have reasoned killing to fit different categories (execution, euthanasia, casualties, collateral damage) to justify our interests (greed, homeland security, preemptive defense, punishment for crime).

We have rationalized murder as being different from vehicular manslaughter, for example. Though both acts result in deaths, the crime lies in the intentions.

With the advancement of artificial intelligence, we face a new set of ethical questions in regard to killing: Do we allow machines to kill for us, and—perhaps more importantly—what kind of people will we become if we do?

Killing has already become mechanized. In warfare, we use advanced weaponry, like long-range missile systems, to end the killing sooner. These advances have made us effective not only at the act of killing but also at removing ourselves from it.

We send soldiers to war who send bombs or bullets to our enemies. We send, and in turn may receive, but we remain far removed from the lives we are taking, and so give them little thought.

In Spartan society, killing was a deeply intimate act, done with respect and reverence. To Spartans, killing from afar (with bow and arrow) demonstrated cowardice and cultivated a disregard for life. Are we cowards, then, if we do not fight our own battles?

The answer changes dramatically when considered in the context of artificially intelligent weapons.

Autonomous weapons, often called the “third revolution in warfare,” select and engage targets on their own, based on criteria provided by human programmers.

Without a doubt, this technology would be extremely efficient in eliminating threats and preventing soldier casualties. These are surely appealing benefits, but they must be considered alongside the pitfalls.

In battle, autonomous weapons, unlike soldiers, may not be able to differentiate between combatants and noncombatants. These machines may be unable to reason through situations and select the most ethical actions, as we trust our soldiers to do.

In addition, once AI weapons are widely used, it will be only a matter of time before they are sold on black markets and used by terrorist groups or other organizations or individuals with ill intent.

Though there are valid points on either side of the AI weapons issue, many robotics and AI researchers are open in their opposition to autonomous weapons.

In an open letter on AI weapons, leading researchers from numerous universities across the world wrote, “we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Despite the negative aspects of AI weapons, arguments are still being made in support of their efficiency. If this argument is to be made, it must be made alongside this reality: in using autonomous weapons, we are efficient not only at eliminating threats but also at creating them.

Erica Dorsett is a freshman biology and English major from Madison, S.D.