Released in Stockholm at the 2018 International Joint Conference on Artificial Intelligence (IJCAI), the pledge, organised by the Future of Life Institute and signed by companies and individuals working in AI and robotics, challenges governments, academia and industry to follow suit and ‘create a future with strong international norms, regulations and laws against lethal autonomous weapons’.
Organisational signatories include Google DeepMind, University College London, the XPRIZE Foundation, Clearpath Robotics/OTTO Motors, the European Association for AI, and the Swedish AI Society. Individual signatories include Elon Musk; Jeff Dean, head of research at Google.ai; and AI pioneers Stuart Russell, Yoshua Bengio, Anca Dragan and Toby Walsh.
“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,” said Max Tegmark, a physics professor at MIT and president of the Future of Life Institute. “AI has huge potential to help the world – if we stigmatise and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilising as bioweapons, and should be dealt with in the same way.”
Lethal autonomous weapons systems (LAWS) can identify and attack a target without a human ‘in the loop’; the decision and authorisation to use lethal force rest with the weapon system itself.
Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Sydney, Australia, said: “We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so.”
Advocates of an international ban on LAWS are similarly concerned that such weapons will be easier to hack, more likely to end up on the black market, and easier for terrorists to obtain.
Ryan Gariepy, founder and CTO of Clearpath Robotics and OTTO Motors, said: “The proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world. No nation will be safe, no matter how powerful.
“We hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive instead of systems whose sole use is the deployment of lethal force.”
In December 2016, the United Nations’ Review Conference of the Convention on Certain Conventional Weapons (CCW) began formal discussion of LAWS. Twenty-six countries attending the conference have so far announced support for some form of ban, including China. The next UN meeting on LAWS will be held in August 2018.
The pledge
Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.
In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.
There is also a powerful pragmatic argument: lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilising for every country and individual. Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.
Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage. Stigmatising and preventing such an arms race should be a high priority for national and global security.
We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organisations, as well as leaders, policymakers, and other individuals, join us in this pledge.