Lethal autonomous weapon systems under international humanitarian law

In this paper, funded by the Norwegian Ministry of Defence, ILPI advisor Kjølv Egeland lays out the laws of war as they apply to the employment of lethal autonomous weapon systems in armed conflicts.

Robots formerly belonged to the realm of fiction, but are now becoming a practical issue for the disarmament community. While some believe that military robots could act more ethically than human soldiers on the battlefield, others counter that such a scenario is highly unlikely and that the technology in question should be banned: autonomous weapon systems, they argue, will be unable to discriminate between soldiers and civilians, and their use will lower the threshold for resorting to force. In this article, Egeland takes a bird’s-eye view of the international humanitarian law (IHL) pertaining to autonomous weapon systems. His argument is twofold. First, it is difficult to imagine how IHL could be implemented by algorithm: the rules of distinction, proportionality, and precautions all call for what are arguably unquantifiable judgements. Second, existing humanitarian law in many ways presupposes responsible human agency.

The article is available from the Nordic Journal of International Law (for purchase or through online library login).

Egeland, K. (2016). Lethal Autonomous Weapon Systems under International Humanitarian Law. Nordic Journal of International Law, 85(2).

ILPI has closed down. The information on this page is kept for historical reasons.
