
Lethal autonomous weapons (LAWs) are a type of military drone or military robot that is autonomous in the sense that it can independently search for and engage targets based on programmed constraints and descriptions. As of 2025, most military drones and military robots are not truly autonomous.[1] LAWs are also known as lethal autonomous weapon systems (LAWS), autonomous weapon systems (AWS), robotic weapons or killer robots. LAWs may engage in drone warfare in the air, on land, on water, underwater, or in space.
In weapons development, the term "autonomous" is ambiguous, and its definition varies considerably among scholars, nations, and organizations.[2]
The official United States Department of Defense Policy on Autonomy in Weapon Systems defines an Autonomous Weapons System as one that "...once activated, can select and engage targets without further intervention by a human operator."[3] Heather Roff, a writer for Case Western Reserve University School of Law, describes autonomous weapon systems as "... capable of learning and adapting their 'functioning in response to changing circumstances in the environment in which [they are] deployed,' as well as capable of making firing decisions on their own."[4]
The British Ministry of Defence defines autonomous weapon systems as "systems that are capable of understanding higher level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control - such human engagement with the system may still be present, though. While the overall activity of an autonomous uncrewed aircraft will be predictable, individual actions may not be."[5]
Scholars such as Peter Asaro and Mark Gubrud hold that any weapon system capable of releasing lethal force without the operation, decision, or confirmation of a human supervisor can be deemed autonomous.[6][7]
Creating treaties between states requires a commonly accepted definition of what exactly constitutes an autonomous weapon.[8]
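The practical effect of these competing definitions can be illustrated in code. The following minimal Python sketch is a thought experiment, not a description of any real system: the capability flags and the "CIWS-like" profile are hypothetical, and each predicate is only a crude paraphrase of the corresponding definition quoted above.

```python
from dataclasses import dataclass

@dataclass
class WeaponSystem:
    """Hypothetical capability profile; the flags are illustrative only."""
    name: str
    selects_and_engages_without_human: bool  # tested by the DoD definition
    fires_without_human_confirmation: bool   # tested by Asaro/Gubrud
    learns_and_adapts_in_field: bool         # part of Roff's description

def autonomous_dod(ws: WeaponSystem) -> bool:
    # "...once activated, can select and engage targets without
    # further intervention by a human operator."
    return ws.selects_and_engages_without_human

def autonomous_asaro_gubrud(ws: WeaponSystem) -> bool:
    # Any release of lethal force without human decision or confirmation.
    return ws.fires_without_human_confirmation

def autonomous_roff(ws: WeaponSystem) -> bool:
    # Learning/adapting in the field *and* making firing decisions alone.
    return ws.learns_and_adapts_in_field and ws.fires_without_human_confirmation

# A hypothetical point-defence system: fires on its own, but does not learn.
ciws_like = WeaponSystem("generic point-defence system", True, True, False)
for test in (autonomous_dod, autonomous_asaro_gubrud, autonomous_roff):
    print(f"{test.__name__}: {test(ciws_like)}")
# -> True, True, False: the label depends on which definition is applied.
```

Under this toy model the same platform counts as autonomous by the DoD and Asaro/Gubrud tests but not by Roff's, which is exactly the kind of divergence that complicates treaty negotiation.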
The oldest automatically triggered lethal weapons are the land mine, used since at least the 1600s, and the naval mine, used since at least the 1700s.
Some current examples of LAWs are automated "hardkill" active protection systems, such as the radar-guided CIWS systems used to defend ships, which have been in use since the 1970s (e.g., the US Phalanx CIWS). Such systems can autonomously identify and attack oncoming missiles, rockets, artillery fire, aircraft, and surface vessels according to criteria set by the human operator. Similar systems exist for tanks, such as the Russian Arena, the Israeli Trophy, and the German AMAP-ADS. Several types of stationary sentry guns, which can fire at humans and vehicles, are used in South Korea and Israel. Many missile defence systems, such as Iron Dome, also have autonomous targeting capabilities.
The main reason for not having a "human in the loop" in these systems is the need for rapid response. They have generally been used to protect personnel and installations against incoming projectiles.
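The time pressure behind this design choice can be made concrete with rough arithmetic. The Python sketch below uses assumed, order-of-magnitude figures (the detection range, threat speeds, and human decision latency are illustrative assumptions, not published specifications) to show how little of the engagement window survives a human confirmation step.

```python
# Illustrative reaction-time budget for a ship point-defence system.
# All numbers are rough assumptions for the sake of the example.

DETECTION_RANGE_M = 20_000   # assumed radar detection range (m)
HUMAN_DECISION_S = 10.0      # assumed time for a human to assess and confirm

threats = {
    "subsonic sea-skimming missile (~Mach 0.9)": 300.0,  # m/s, assumed
    "supersonic anti-ship missile (~Mach 2.5)": 850.0,   # m/s, assumed
}

for name, speed in threats.items():
    time_to_impact = DETECTION_RANGE_M / speed
    margin = time_to_impact - HUMAN_DECISION_S
    print(f"{name}: {time_to_impact:5.1f} s to impact, "
          f"{margin:5.1f} s left after human confirmation")
# In this toy model the supersonic threat leaves roughly 13-14 s after a
# confirmation step; with shorter detection ranges (e.g., sea-skimmers masked
# by the radar horizon) the margin can shrink toward zero, which is the
# rationale usually given for removing the human from the loop.
```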
According to The Economist, as technology advances, future applications of uncrewed undersea vehicles might include mine clearance, mine-laying, anti-submarine sensor networking in contested waters, patrolling with active sonar, resupplying manned submarines, and becoming low-cost missile platforms.[9] In 2018, the U.S. Nuclear Posture Review alleged that Russia was developing a "new intercontinental, nuclear-armed, nuclear-powered, undersea autonomous torpedo" named "Status 6".[10]
The Russian Federation is currently developing artificially intelligent missiles,[11] drones,[12] unmanned vehicles, military robots, and medic robots.[13][14][15][16]
Israeli Minister Ayoob Kara stated in 2017 that Israel is developing military robots, including ones as small as flies.[17]
In October 2018, Zeng Yi, a senior executive at the Chinese defense firm Norinco, gave a speech in which he said that "In future battlegrounds, there will be no people fighting", and that the use of lethal autonomous weapons in warfare is "inevitable".[18] In 2019, US Defense Secretary Mark Esper lashed out at China for selling drones capable of taking life with no human oversight.[19]
The British Army deployed new uncrewed vehicles and military robots in 2019.[20]
The US Navy is developing "ghost" fleets of unmanned ships.[21]

In 2020, a Kargu 2 drone hunted down and attacked a human target in Libya, according to a report from the UN Security Council's Panel of Experts on Libya, published in March 2021. This may have been the first time an autonomous killer robot armed with lethal weaponry attacked human beings.[22][23]
In May 2021, Israel conducted an AI-guided combat drone swarm attack in Gaza.[24]
Since then there have been numerous reports of swarms and other autonomous weapons systems being used on battlefields around the world.[25]
In addition, DARPA is working on making swarms of 250 autonomous lethal drones available to the American military.[26]
Three classifications of the degree of human control of autonomous weapon systems were laid out by Bonnie Docherty in a 2012 Human Rights Watch report: human-in-the-loop weapons, which can select targets and deliver force only with a human command; human-on-the-loop weapons, which can select targets and deliver force under the oversight of a human operator who can override their actions; and human-out-of-the-loop weapons, which can select targets and deliver force without any human input or interaction.[27]
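As a schematic illustration of how these three categories are commonly glossed, the placement of the human decision can be sketched in a few lines of Python. This is a toy model of the taxonomy, not an implementation of any real system; the function and flag names are hypothetical.

```python
from enum import Enum, auto

class HumanControl(Enum):
    IN_THE_LOOP = auto()      # human must affirmatively command each engagement
    ON_THE_LOOP = auto()      # system acts on its own; a supervising human can veto
    OUT_OF_THE_LOOP = auto()  # system acts with no human input at all

def may_engage(mode: HumanControl, human_commanded: bool, human_vetoed: bool) -> bool:
    """Toy decision gate showing where the human sits in each mode."""
    if mode is HumanControl.IN_THE_LOOP:
        return human_commanded      # nothing happens without an explicit command
    if mode is HumanControl.ON_THE_LOOP:
        return not human_vetoed     # proceeds unless overridden in time
    return True                     # OUT_OF_THE_LOOP: no human gate at all

# The same situation (no command given, no veto issued) under the three gates:
for mode in HumanControl:
    print(mode.name, may_engage(mode, human_commanded=False, human_vetoed=False))
# IN_THE_LOOP False, ON_THE_LOOP True, OUT_OF_THE_LOOP True
```

The practical difference between "on the loop" and "out of the loop" is the veto window: if engagement timescales shrink below human reaction time, the two modes effectively converge.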
Current US policy states: "Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."[28] However, the policy requires that autonomous weapon systems which kill people or use kinetic force, selecting and engaging targets without further human intervention, be certified as compliant with "appropriate levels" and other standards; it does not hold that such systems can never meet these standards and are therefore forbidden.[29] "Semi-autonomous" hunter-killers that autonomously identify and attack targets do not even require certification.[29]

Deputy Defense Secretary Robert O. Work said in 2016 that the Defense Department would "not delegate lethal authority to a machine to make a decision", but might need to reconsider this since "authoritarian regimes" may do so.[30] In October 2016, President Barack Obama stated that early in his career he was wary of a future in which a US president making use of drone warfare could "carry on perpetual wars all over the world, and a lot of them covert, without any accountability or democratic debate".[31][32] In the US, security-related AI has fallen under the purview of the National Security Commission on Artificial Intelligence since 2018.[33][34] On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report outlining five principles for weaponized AI and making 12 recommendations for the ethical use of artificial intelligence by the Department of Defense, intended to ensure that a human operator would always be able to look into the "black box" and understand the kill-chain process. A major concern is how the report will be implemented.[35]
Stuart Russell, a professor of computer science at the University of California, Berkeley, has said that his concern with LAWs is that they are unethical and inhumane, and that a central problem is the difficulty of distinguishing between combatants and non-combatants.[36]
Some economists[37] and legal scholars are concerned about whether LAWs would violate International Humanitarian Law, especially the principle of distinction, which requires the ability to discriminate combatants from non-combatants, and the principle of proportionality, which requires that damage to civilians be proportional to the military aim.[38] This concern is often invoked as a reason to ban "killer robots" altogether, but it is doubtful that it can serve as an argument against LAWs that do not violate International Humanitarian Law.[39][40][41]
A 2021 report by the American Congressional Research Service states that "there are no domestic or international legal prohibitions on the development or use of LAWs," although it acknowledges ongoing talks at the UN Convention on Certain Conventional Weapons (CCW).[42]
LAWs are said by some to blur the boundaries of who is responsible for a particular killing.[43][37] Philosopher Robert Sparrow argues that autonomous weapons are causally but not morally responsible, similar to child soldiers. In each case, he argues, there is a risk of atrocities occurring without an appropriate subject to hold responsible, which violates jus in bello.[44] Thomas Simpson and Vincent Müller argue that they may make it easier to record who gave which command.[45] Potential IHL violations by LAWs are, by definition, only applicable in conflict settings that involve the need to distinguish between combatants and civilians; any conflict scenario devoid of civilians, such as in space or the deep seas, would not run into the obstacles posed by IHL.[46]

The possibility of LAWs has generated significant debate, especially about the risk of "killer robots" roaming the earth in the near or distant future. The group Campaign to Stop Killer Robots formed in 2013. In July 2015, over 1,000 experts in artificial intelligence signed a letter warning of the threat of an artificial intelligence arms race and calling for a ban on autonomous weapons. The letter was presented in Buenos Aires at the 24th International Joint Conference on Artificial Intelligence (IJCAI-15) and was co-signed by Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn and Google DeepMind co-founder Demis Hassabis, among others.[47][48]
According to PAX For Peace (one of the founding organisations of the Campaign to Stop Killer Robots), fully automated weapons (FAWs) will lower the threshold of going to war as soldiers are removed from the battlefield and the public is distanced from experiencing war, giving politicians and other decision-makers more space in deciding when and how to go to war.[49] They warn that once deployed, FAWs will make democratic control of war more difficult, something that Daniel Suarez, IT specialist and author of Kill Decision (a novel on the topic), also warned about: according to him, it might recentralize power into very few hands by requiring very few people to go to war.[49]
A number of advocacy websites protest the development of LAWs by presenting the undesirable ramifications of continued research into applying artificial intelligence to weapons. They regularly post updates on ethical and legal issues, international meetings, and research articles concerning LAWs.[50]
The Holy See has called for the international community to ban the use of LAWs on several occasions. In November 2018, Archbishop Ivan Jurkovic, the permanent observer of the Holy See to the United Nations, stated that “In order to prevent an arms race and the increase of inequalities and instability, it is an imperative duty to act promptly: now is the time to prevent LAWs from becoming the reality of tomorrow’s warfare.” The Church worries that these weapons systems have the capability to irreversibly alter the nature of warfare, create detachment from human agency and put in question the humanity of societies.[51]
As of 29 March 2019, the majority of governments represented at a UN meeting to discuss the matter favoured a ban on LAWs.[52] A minority of governments, including those of Australia, Israel, Russia, the UK, and the US, opposed a ban.[52] The United States has stated that autonomous weapons have helped prevent the killing of civilians.[53]
In December 2022, a vote of the San Francisco Board of Supervisors to authorize San Francisco Police Department use of LAWs drew national attention and protests.[54][55] The Board reversed this vote in a subsequent meeting.[56]
A third approach focuses on regulating the use of autonomous weapon systems in lieu of a ban.[57] Military AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications, combined with active monitoring and informal ('Track II') diplomacy by communities of experts, together with a legal and political verification process.[58][59][60][61] In 2021, the United States Department of Defense requested a dialogue with the Chinese People's Liberation Army on AI-enabled autonomous weapons but was refused.[62]
A summit of 60 countries was held in 2023 on the responsible use of AI in the military.[63]
On 22 December 2023, a United Nations General Assembly resolution was adopted to support international discussion regarding concerns about LAWs. The vote was 152 in favor, four against, and 11 abstentions.[64]