An Iron Dome launcher fires an interceptor rocket near the southern city of Beersheba in 2012 – part of Israel’s defence system programmed to respond automatically to attack. Photograph: Nir Elias/Reuters

UK opposes international ban on developing 'killer robots'


Activists urge bar on weapons that launch attacks without human intervention as UN discusses future of autonomous weapons

The UK is opposing an international ban on so-called “killer robots” at a United Nations conference that is this week examining the future development of what are officially termed lethal autonomous weapons systems (Laws).

Experts from the Foreign Office and the Ministry of Defence are participating in the week-long session in Geneva which will consider whether increased computing power will eventually enable drones and other machines to select targets and carry out attacks without direct human intervention.

The meeting, chaired by a German diplomat, Michael Biontino, has also been asked to discuss such questions as: in what situations are distinctively human traits, such as fear, hate, a sense of honour and dignity, compassion and love, desirable in combat? And in what situations do machines lacking emotions offer distinct advantages over human combatants?

The Campaign to Stop Killer Robots, an alliance of human rights groups and concerned scientists, is calling for an international prohibition on fully autonomous weapons.

Last week Human Rights Watch released a report urging the creation of a new protocol specifically aimed at outlawing Laws. There are precedents: blinding laser weapons were pre-emptively outlawed in 1995, and since 2008 combatant nations have been required to remove unexploded cluster bombs.

Some states already deploy defence systems – such as Israel’s Iron Dome and the US Phalanx and C-Ram – that are programmed to respond automatically to threats from incoming munitions. Work is also progressing on what is known as “automatic target recognition”.

The Foreign Office told the Guardian: “At present, we do not see the need for a prohibition on the use of Laws, as international humanitarian law already provides sufficient regulation for this area.

“The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems.”

One of the problems of the debate is that there is no internationally agreed definition of what constitutes a lethal autonomous weapons system. Among those giving evidence to the UN meeting this week will be Dr William Boothby, a retired RAF air commodore and lawyer who used to run the unit responsible for ensuring that newly acquired weapons conform to the UK’s international humanitarian law obligations.

“International law already prohibits the use of currently available autonomous technology for offensive attack operations [unless they are tasked] in the most particular and narrowly defined circumstances,” Boothby said.

“It is current law that states must apply when assessing the lawfulness of new weapons.”

One scenario he suggested involved a young soldier ordered to clear a house of enemy troops. Entering a dark room from a bright, sunlit street, the soldier might detect movement and, in perceived self-defence, open fire, killing a mother and her young children.

“Who will say that a piece of machinery might not one day be developed capable of differentiating [between armed soldiers and non-combatants]?” Boothby said.

“I’m not saying it could be done, but I’m not prepared to say it couldn’t be done. And therefore is a ban on technology which has yet to be fully developed to maturity an appropriate course of action? I suggest not.”

Thomas Nash, director of Article 36, which campaigns to prevent “unnecessary or unacceptable harm” caused by new weapons, was at the Geneva conference on Monday.

He told the Guardian he was disappointed at the UK government’s opposition to a specific prohibition. “That is a position that will have to change,” he said. “More than two-thirds of those who spoke today said they favoured the principle of all weapons being subject to the principle of ‘meaningful human control’.”

Nash suggested that work on developing so-called “automatic target recognition” was already blurring the division of responsibility between the machine and its human controllers.

The technology may already be deployed in some systems, he added. It could be used to show an operator, on screen, targets identified at a distance, perhaps by their heat signatures or appearance.

Such prompting may influence the decisions of an officer. “We have concerns about it,” Nash said. “Any military attack should be through deliberate human reasoning. The next stage would be to allow the machine to initiate the attack itself.”
