Despite Growing Concern, Governments Fail to Act on Development of Autonomous Weapons
In a recent opinion piece published in Nature, leading AI experts have called for a ban on lethal autonomous weapons, also known as “killer robots”. The article highlights the lack of progress in negotiations around the development and deployment of these weapons, despite growing concern from human-rights and arms-control organizations, as well as the public.
The Call for a Ban
The Campaign to Stop Killer Robots, which includes dozens of human-rights and arms-control organizations, has been calling for a ban on lethal autonomous weapons. These weapons have the potential to target and attack individuals without human intervention, raising serious concerns about the loss of life and the potential for unintended consequences. A poll cited in the Nature article suggests that over 60% of adults support such a ban.
Despite these concerns and calls for action, politicians and governments have failed to act. Thousands of AI researchers and leaders have also joined the calls for a ban, yet no academic society has adopted a policy on autonomous weapons, citing reluctance to take positions on matters that are not purely scientific.
Confusion around Technical Issues
One reason for the lack of progress in negotiations is confusion, both real and feigned, around technical issues. Countries continue to argue about the meaning of the word “autonomous”, with some defining it in such a way that it would not apply to the weapons in question. The United Kingdom, for example, has pledged not to develop or use lethal autonomous weapons, but has redefined them in such a way that the pledge is effectively meaningless.
A Pragmatic Way Forward
The Nature article suggests that instead of blocking negotiations, countries should focus on devising practical measures to build confidence in adherence to a ban. These measures could include inspection agreements, design constraints that deter conversion to full autonomy, and rules requiring industrial suppliers to check the bona fides of customers. It would make sense to discuss the remit of an AI version of the Organization for the Prohibition of Chemical Weapons, which has devised similar technical measures to implement the Chemical Weapons Convention.
While progress in Geneva may be unlikely, there are glimmers of hope. The countries that have stated a position on the issue overwhelmingly favor a ban, and negotiations could advance in the UN General Assembly in New York City, where no country holds a veto, as well as at ministerial-level meetings.
Calls for Action
Professional societies in AI and robotics have a role to play in this issue. The Association for the Advancement of Artificial Intelligence, the Association for Computing Machinery, and the Institute of Electrical and Electronics Engineers should develop and enforce codes of conduct proscribing work on lethal autonomous weapons. The American Chemical Society and the American Physical Society provide examples of professional societies that have established strict policies on weapons of mass destruction.
As autonomous-weapons technology races ahead and the desire to use it grows, experts warn that the world cannot afford another decade of diplomatic posturing and confusion. Governments must deliver on what seems a simple request: to give their citizens some protection against being hunted down and killed by robots.
Conclusion
As AI and robotics continue to advance, it is imperative that governments and the international community take action to ensure the responsible development and use of autonomous weapons. The call for a ban on lethal autonomous weapons is growing louder, and it is time for governments to take this issue seriously and work towards a verifiable and enforceable ban.