Should We Ban “Killer Robots”?

As the field of Artificial Intelligence experiences a boom, we face an onslaught of ethical quandaries about its use. One of them is the use of AI in weapons to create what are informally known as killer robots, or Lethal Autonomous Weapons (LAWs). These weapons, already under development in many countries, will be able to locate targets and attack them without human intervention. That troubles many people, who argue that a human must always be directly involved whenever the decision to kill is made. The debate is complicated, with valid points on both sides, but here are my two cents (and the two biggest reasons why I would say we should pursue the development and eventual use of LAWs).

The first reason is defence; the second is reducing war crimes and loss of life.

Many countries have committed to developing LAWs because they are powerful enough to turn the tide of battle. More weapons can be fielded without human supervisors, sidestepping manpower constraints, which matters especially in developed countries with fewer people available to draft. LAWs can also respond rapidly, fighting at a superhuman pace. Because they are so powerful, we have to defend ourselves by developing LAWs too. Critics may argue that this global arms race resembles nuclear proliferation and that LAWs should therefore be banned. However, LAWs are unlike nuclear weapons: their development and testing can be done in secret. They do not rely on monitorable physical substances like uranium, and their tests, unlike nuclear explosions, cannot be observed from miles away. Rogue states like North Korea, Russia, Iran or Syria could therefore develop LAWs covertly. A disarmament campaign would leave us at their mercy, so we must develop LAWs to defend ourselves.

Beyond being practically impossible to ban, LAWs are better tools of warfare: they can reduce both loss of life and war crimes. Most obviously, replacing human soldiers in the field with LAWs saves soldiers' lives. LAWs could also target more precisely, killing fewer non-targets. But LAWs would additionally be more moral soldiers than humans, saving civilian lives. It seems hard to reconcile the fact that robots are amoral with the claim that LAWs could make more ethical decisions than humans; after all, the decision to kill must be made by someone capable of distinguishing right from wrong, and ostensibly only humans have that capacity. Yet is there much difference between programming moral rules predetermined by humans into a robot and having a human make the same decision on the battlefield? In fact, having LAWs carry out these decisions is better precisely because they have no emotions. In the heat of war, emotions run high and pervert humans' moral compasses. Soldiers frequently suffer PTSD and mental breakdowns, and some go on to commit atrocities. Facing a dangerous cocktail of grief, rage, and fear, they may disregard the human rights protections laid out in the Geneva Conventions. During the Japanese Occupation of my country, Singapore, Japanese soldiers raped women and killed children, and history records countless other such war crimes. LAWs, by contrast, can be programmed to follow the Geneva Conventions, and by carrying out decisions we have carefully considered beforehand, they can help us maintain our humanity in war.
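
To make "programming moral rules in advance" a little more concrete, here is a minimal sketch in Python. The field names, rules, and threshold are hypothetical illustrations invented for this post, not any real targeting system; the point is only that constraints of the kind found in the Geneva Conventions can be encoded as hard checks that a machine applies before any engagement.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A detected entity, as reported by a (hypothetical) perception system."""
    is_combatant: bool         # classifier's best guess
    confidence: float          # classifier confidence, 0.0 to 1.0
    is_surrendering: bool      # e.g. hands raised or white flag detected
    near_protected_site: bool  # e.g. hospital or school nearby

def may_engage(contact: Contact, min_confidence: float = 0.99) -> bool:
    """Pre-programmed rules of engagement, checked before any action.

    Every rule here is fixed in advance by humans; the machine only
    applies them. Returns False (hold fire) unless all checks pass.
    """
    if not contact.is_combatant:
        return False  # never target non-combatants
    if contact.confidence < min_confidence:
        return False  # too uncertain: hold fire or defer to a human
    if contact.is_surrendering:
        return False  # those hors de combat are protected
    if contact.near_protected_site:
        return False  # avoid disproportionate collateral harm
    return True
```

Unlike a frightened soldier, the machine cannot decide in the moment that today the rules do not apply; whatever restraint we write down in peacetime is the restraint it shows in war.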

Critics may argue that all of this relies on accurately identifying targets: if a LAW mistakes innocents for targets, those innocents die, so it seems a human should give the final kill order. This concern has some merit. Today's AI relies on neural networks, which can make mistakes, and their complexity makes it difficult to trace the cause of a mistake in order to fix it. However, we are beginning to understand the “thinking” process of neural networks, and we have made substantial progress in improving accuracy. That promise justifies the investment in developing LAWs. Finally, even if LAWs can only approach rather than achieve 100% accuracy, it would still be ethical to use them, because they would still save lives on aggregate compared with human soldiers, while preserving the military advantage of response times that no human can match.
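
The "saves lives on aggregate" claim is really a comparison of error rates. Here is a back-of-the-envelope sketch, with every number invented purely for illustration, showing the shape of that argument:

```python
def expected_wrongful_deaths(engagements: int, misidentification_rate: float) -> float:
    """Expected number of innocents wrongly engaged, assuming independent errors."""
    return engagements * misidentification_rate

# Hypothetical figures, chosen only to illustrate the comparison.
engagements = 10_000
human_error_rate = 0.02   # assumed misidentification rate under combat stress
law_error_rate = 0.005    # assumed machine rate: imperfect, but lower

print(expected_wrongful_deaths(engagements, human_error_rate))  # 200.0
print(expected_wrongful_deaths(engagements, law_error_rate))    #  50.0
```

On this toy model, the machine does not need to be perfect to reduce harm; it only needs to err less often than a stressed human, which is the crux of the aggregate argument.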

Ultimately, LAWs are extensions of human will; humans remain behind their actions. For the reasons above, I believe that pursuing the development of robot weapons is necessary for defence and beneficial on both moral and strategic grounds. What do you think?
