
Volume 44, Issue 3

The Duty to Take Precautions in Hostilities, and the Disobeying of Orders: Should Robots Refuse?

This Article not only questions whether an embodied artificial intelligence (“EAI”) could give an order to a human combatant but, more controversially, examines whether it should also refuse one. A future EAI may be capable of refusing to follow an order where, for example, the order appeared to be manifestly unlawful, was otherwise in breach of International Humanitarian Law (“IHL”) or national Rules of Engagement (“ROE”), or even where it appeared to be immoral or unethical. Such an argument has traction in the strategic realm in terms of the “system of systems”—the premise that more advanced technology can potentially help overcome Clausewitzian “friction,” or the “fog of war.” An aircraft’s anti-stall mechanism, which takes over and corrects human error, is seen as nothing less than positive. As part of opening this much-needed discussion, the Authors examine the legal parameters and, by way of a solution, provide a framework for overriding and disobeying. Central to this discussion are state-specific ROE within the concept of the “duty to take precautions.” At present, the guidelines governing a human combatant’s right to disobey orders are contained within such doctrine, but they vary widely. For example, in the United States, a soldier may disobey an order only when the act in question is clearly unlawful. In direct contrast, Germany’s state practice requires orders to be compatible with the much wider concept of human dignity and to be of “use for service.” By way of a solution, the Authors propose a test referred to as “robot rules of engagement” (“RROE”) with specific regard to the disobeying of orders. These RROE would ensure, via a multi-stage verification process, that an EAI can discount human “traits” and minimize the errors that lead to breaches of IHL. In the broader sense, the Authors question whether warfare should remain an utterly human preserve—where human error is an unintended but unfortunate consequence—or whether the duty to take all feasible precautions in attack in fact requires a human commander to utilize available AI systems to routinely question human decision-making and, where applicable, prevent mistakes. In short, the Article examines whether human error can be corrected and overridden for the better rather than for the worse.


Recommended Citation: Francis Grimal & Michael Pollard, The Duty to Take Precautions in Hostilities, and the Disobeying of Orders: Should Robots Refuse?, 44 Fordham Int'l L.J. 671 (2021).