ONE of the most sobering and ironic challenges for humanity in the 21st century is how to kill one another while retaining the essence of what it means to be human. That is why, over millennia of warfare, we have conjured a surplus of excuses for taking the lives of others.
Right up to the nuclear age and its potential for levelling entire nations in a single strike, a human brain, with all its cognitive biases and contradictions, has had the final say in how, when and, most importantly, whether weapons of mass destruction would be unleashed on enemies.
To be human has meant that even in the scorching heat of battle, it is possible, no matter how briefly, to step into the shoes of the adversary and devise a way to defuse the powder keg.
For example, Soviet naval officer Vasili Arkhipov’s actions in the Cuban Missile Crisis of 1962 most likely prevented nuclear Armageddon.
Yet, this may all go out the window in the next 20 years if the imminent evolution in military technology featuring “killer robots” takes centre stage in war theatres globally.
Formally known as Lethal Autonomous Weapons Systems (LAWS), this technology will offer affluent states the luxury of zeroing out their own war casualties while bringing hitherto unseen efficiency to mowing down enemy combatants, who, for at least the first decade of LAWS, will in all likelihood be gun-toting humans.
In early April, representatives from 80 countries met at the United Nations offices in Geneva for the fifth time since 2014 to debate the framework of regulating, if not outright banning, the development of killer robots. Their attempts at consensus have, thus far, been thwarted by the lack of a common definition for LAWS.
Additionally, a group of international NGOs called the “Campaign to Stop Killer Robots” has sustained pressure on the global community to outlaw this technology on grounds of immorality. It claims LAWS are “a step too far” as “they cross a moral line, because we would see machines taking human lives on the battlefield or in law enforcement”.
There are four key dimensions to this debate: geopolitical, moral, economic and sociological.
First, the geopolitical dimension. There is added urgency to ban LAWS before they turn into a tool of expansionist warfare for major powers. Needless to say, we are on the cusp — if not smack in the middle — of another “Thucydides Trap”. Since the Soviet Union’s collapse nearly three decades ago, the incumbent world order has been shaped, reshaped and enforced by the United States.
Today, however, with the US ceding space abroad to pursue economic self-strengthening, China and Russia are eagerly filling the vacuum, most prominently in the Middle East, South Asia and Africa. Consequently, the anxiety of diminished influence in world affairs has provoked Washington to threaten any perceived transgression with military force. This, regrettably, portends a new round of proxy, if not open war among the major powers.
As a result, a new arms race in LAWS seems nigh. Such a race may, unfortunately, panic less technologically developed nations that possess crude nuclear weapons, like North Korea and Iran. For, when threatened by mobile hunks of metal armed to the teeth with state-of-the-art weaponry, they may resort to desperate use of their arsenals, triggering mutually assured destruction.
The next dimension is morality. The case for banning LAWS thus far has focused on the wickedness of artificial intelligence (AI) deciding whom to execute, and in what manner. That said, critics have wilfully sidestepped the core contradiction in their case. Why is it wrong for AI to use sanctioned lethal force against humans on the battlefield, when it is perfectly acceptable for humans to use the same against one another?
Is the endgame, then, to save lives, or the selective, self-serving application of morality? This is unclear, since we know that even without LAWS, modern armies are perfectly capable of industrial-scale destruction. Syria is a textbook example.
Moving on, we have the economic dimension of swapping human soldiers for killer robots. It is important to recognise that federal governments are the largest employers in most modern economies. And armies are no exception, which raises a few questions. To what extent will standing armies shrink in the face of LAWS deployment on battlefields?
And, what will happen to the rank and file, considering most hold high-school diplomas and may find upskilling for advanced tech jobs near impossible? A cursory look at the woeful state of US military veterans is sufficient cause for concern. Moreover, this dilemma will be especially dire for developing countries where the military serves as a ladder to upward social mobility and stable employment.
Finally, it is important to consider the signalling effect to society from elevating LAWS to the vanguard of national security. If, for example, we sanitise the use of robots to protect us from external threats and solve inter-state conflicts, it will inevitably create demand among citizens for AI-driven androids to protect them from violent crime. Likewise, criminals will be compelled to purchase such machines to even the playing field against law enforcement.
Moreover, commercialised, mass-produced LAWS will require a complete overhaul of the legal system that governs culpability and its corresponding punishment. For example, do robots go to jail or their owners? How do we prove premeditated murder when the chance of an electronic malfunction exists? Will fried circuits join insanity as a defensible legal position?
The reality is that we cannot stop the march of military evolution. When AI reaches an appropriate level of sophistication in the next few decades, LAWS will inevitably become regular fixtures in local and border policing. It is basic economics, given the massive savings in cost and gains in efficiency they promise.
The bigger question is whether such an era will bring with it a flippancy to wage war by pitting metal against metal, or further cheapen the value of human life in countries that have become battlegrounds for major-power hubris.
S. Mubashir Nor is an Ipoh-based independent journalist.