The U.K. isn’t convinced that fully autonomous robots capable of killing without human intervention should be banned, despite pressure from activists and human rights groups pushing for strict regulations and an end to “killer robots.”
At a United Nations conference this week, the U.K. will oppose prohibiting the creation of robots that can carry out attacks entirely on their own, the Guardian reports. Lethal autonomous weapons systems (LAWS) will be the topic of debate: the meeting plans to discuss how computers can let drones and weapons think for themselves, and whether and when “human traits” like compassion are preferable in war.
Some groups have supported banning the weapons for years, including the Campaign to Stop Killer Robots, launched in 2013. The organization is seeking an international ban on autonomous combat drones before they can ever be built.
Human Rights Watch is among the groups supporting the coalition to end “killer robots,” and recently published a paper outlining some of the major accountability problems raised by autonomous weapons, urging countries to adopt restrictions.
U.K. officials are not convinced LAWS should be banned entirely, though they told the newspaper that development of the country’s own autonomous weapons isn’t in the works:
“The United Kingdom is not developing lethal autonomous weapons systems, and the operation of weapons systems by the U.K. armed forces will always be under human oversight and control. As an indication of our commitment to this, we are focusing development efforts on remotely piloted systems rather than highly automated systems.”
Fully autonomous drones would be able to find and destroy targets entirely on their own, without a human controller telling them which targets to hit. Understandably, such technology draws legal and ethical objections from opponents of LAWS. The U.K., however, remains unconvinced that restrictions or bans on the weapons of the future are necessary.
H/T The Guardian | Photo via MashleyMorgan/Flickr (CC BY-SA 2.0)