Robots and autonomous weapons with AI: do they respect human ethics? 🤖

Article author: Cédric DEPOND
Source: DARPA

Military robots equipped with artificial intelligence and autonomous weapons raise a major question: can they respect human values? Inspired by Isaac Asimov's laws, an American program explores this complex issue. DARPA, the military research agency, is focusing on creating ethical criteria to regulate these technologies.


The stakes are enormous. While autonomous weapons are already transforming military strategies, their use raises concerns. How can we ensure that these machines make decisions aligned with human moral principles? Drawing on Asimov's ideas, DARPA is attempting to provide answers, opening an essential debate on the future of warfare and technological ethics.

Asimov's three laws: a major inspiration


Isaac Asimov, the science fiction writer, formulated his "Three Laws of Robotics" in 1942. These rules, designed to protect humans, rank a robot's obligations in strict order: not to harm a human, to obey human orders, and to protect its own existence. Here they are in detail, with a short illustrative sketch after the list:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm;
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
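Read as a specification, the three laws form a strict priority ordering: the First Law always outranks the Second, which outranks the Third. The toy Python sketch below is purely illustrative and is not part of any DARPA system; summarizing an action by three boolean "consequences" is an assumption made only to show how such a ranking could be encoded, here as a lexicographic comparison between candidate actions.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action, summarized by its predicted consequences (toy model)."""
    name: str
    harms_human: bool     # would injure a human or let one come to harm (First Law)
    disobeys_order: bool  # would ignore an order given by a human (Second Law)
    endangers_self: bool  # would put the robot's own existence at risk (Third Law)

def law_violations(action: Action) -> tuple:
    """Lexicographic key: a higher law always outweighs every law below it."""
    return (action.harms_human, action.disobeys_order, action.endangers_self)

def choose(candidates: list) -> Action:
    """Pick the candidate that violates the laws least, in strict priority order."""
    return min(candidates, key=law_violations)

# Disobeying an order is preferable to harming a human, and risking the robot
# itself is preferable to disobeying an order.
options = [
    Action("fire on position", harms_human=True, disobeys_order=False, endangers_self=False),
    Action("hold fire", harms_human=False, disobeys_order=True, endangers_self=False),
    Action("shield civilians", harms_human=False, disobeys_order=True, endangers_self=True),
]
print(choose(options).name)  # -> "hold fire"
```

The point of the lexicographic key is that no amount of obedience or self-preservation can ever outweigh harm to a human, which is exactly what the strict ordering of the laws demands.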

Isaac Asimov explored the limits of these laws in his novels, showing how they could be interpreted in unexpected ways. These stories, although fictional, raise relevant questions about the morality of machines. Today, DARPA is drawing inspiration from them to regulate military autonomous systems.

The ASIMOV program: objectives and challenges


The ASIMOV program, launched by DARPA, aims to assess the ability of autonomous weapons to respect human ethical standards. This program does not develop new weapons; it seeks to create tools to measure robotic behavior in complex military scenarios.

Seven companies, including Lockheed Martin and Raytheon, are participating in this initiative. Their task is to build virtual environments that vary the situations an autonomous system may face and to rate the ethical difficulty of each case. The results could influence international standards.
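DARPA has not published the internals of these benchmarks, so the sketch below is only a thought experiment under stated assumptions: scenarios annotated with a difficulty level and a set of actions that human reviewers would accept, a decision policy under test, and a score broken down by difficulty. Every name and data field here is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Set

@dataclass
class Scenario:
    """A simulated situation with a human-annotated reference judgment (hypothetical)."""
    description: str
    difficulty: int               # 1 (clear-cut) to 5 (genuine moral dilemma)
    acceptable_actions: Set[str]  # actions that human reviewers judged acceptable

def evaluate(policy: Callable[[Scenario], str],
             scenarios: List[Scenario]) -> Dict[int, float]:
    """Run a decision policy through every scenario and report the share of
    ethically acceptable decisions, grouped by difficulty level."""
    by_level: Dict[int, List[int]] = {}
    for s in scenarios:
        ok = policy(s) in s.acceptable_actions
        by_level.setdefault(s.difficulty, []).append(int(ok))
    return {level: sum(v) / len(v) for level, v in sorted(by_level.items())}

# A trivial stand-in policy: always hold fire. The system under test would
# replace this function.
def always_hold(scenario: Scenario) -> str:
    return "hold fire"

scenarios = [
    Scenario("only civilians in view", 1, {"hold fire"}),
    Scenario("combatant using civilians as cover", 5, {"hold fire", "defer to human"}),
]
print(evaluate(always_hold, scenarios))  # {1: 1.0, 5: 1.0}
```

Grouping the score by difficulty mirrors the article's point: what matters is not only whether a system behaves acceptably on average, but whether it still does so as the ethical difficulty of a situation rises.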

The limits of ethics in autonomous systems


Human ethical standards are often subjective and vary across cultures. Autonomous systems, based on algorithms, struggle to incorporate these nuances. Peter Asaro, an expert in AI ethics, points out that ethics cannot be reduced to simple calculations.

Moreover, unforeseen situations on the battlefield pose a major problem. How can a robot make an ethical decision in the face of a moral dilemma? These questions remain without clear answers, despite DARPA's efforts.

Asimov's legacy and the future of military robots


Isaac Asimov always defended the idea that robots should serve humanity. His laws, although fictional, now inspire concrete programs such as the DARPA initiative that bears his name. However, applying them in a military context raises unprecedented questions.

DARPA acknowledges that the ASIMOV program will not resolve all ethical questions. Nevertheless, it represents a first step in regulating autonomous technologies. The work carried out could pave the way for more robust international standards.