Autonomous Weapon Systems and the UK: The Urgent Need for Legal Measures
- Aisha Akram
- Aug 4, 2025
- 5 min read
The rapid transformation of industries by Artificial Intelligence (AI), whilst enhancing efficiency and providing innovative solutions, brings significant risks, such as ethical concerns and the potential for human rights violations. Many jurisdictions have established legal measures, in particular legislation, to regulate AI appropriately and mitigate the risks it creates [1].
However, the UK has yet to introduce an AI regulatory framework. The government’s March 2023 White Paper, ‘A Pro-innovation Approach to AI Regulation’, stated that a non-statutory approach would be taken, with existing laws relied upon to oversee the use of AI. This has been widely criticised as a laissez-faire approach, and there have been calls for more direct intervention.
This article focuses on the UK’s failure to adopt legal measures to reduce the risks of Autonomous Weapon Systems (AWS), despite recommendations from the House of Lords and the Human Rights Council to do so. AWS are AI-driven weapon systems that can identify, select and attack targets with lethal force without human intervention. Whilst they have the potential to revolutionise warfare, the ethics of these systems and their compliance with international human rights law remain of serious concern.
What risks do AWS pose?
The Special Rapporteur of the Human Rights Council has set out the key human rights challenges of AWS. In summary, these are threats to the right to life and to human dignity; threats associated with AWS proliferation, including the use of AWS outside of armed conflict, such as in policing; and, most significantly, the increased difficulty of attributing killings and holding individuals to account for violations of international law committed with AWS. He asserted that these challenges have received “inadequate attention” from states.
In December 2023, the House of Lords Artificial Intelligence in Weapon Systems Committee’s report on AI in weapon systems detailed the risks of AWS. There are concerns about whether AI technology can accurately identify and target threats, given that current AI systems struggle to adapt to conditions outside a narrow range of assumptions. Additionally, the proliferation of AWS could risk the escalation of conflicts and heighten crisis instability. Furthermore, there is a lack of clarity on who should be held accountable when the use of AWS proves unlawful.
More recently, facial recognition technology, which can facilitate the targeting of specific individuals, has raised concern for its potential to enable the use of AWS in extrajudicial killings. In 2024, the UK government announced a controversial £230 million budget for technology such as drones and facial recognition for police use. This stands in stark contrast to the EU AI Act, under which biometric identification and categorisation of people is classified as posing an unacceptable risk and is, with narrow exceptions, banned. Studies have shown that facial recognition systems are inherently prone to error, including bias: the 2019 Face Recognition Vendor Test found that false positive rates were highest for West African, East African and East Asian people and lowest for Eastern European individuals.
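To make that metric concrete, the short Python sketch below (using entirely hypothetical numbers, not FRVT data) shows how a per-group false positive rate, the measure NIST reported, is computed: the proportion of comparisons between different people that the system wrongly declares a match.

```python
# Illustrative sketch only (hypothetical data, not FRVT results): computing a
# per-demographic-group false positive rate for a face recognition system.

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): the share of impostor comparisons wrongly matched."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical outcomes of "impostor" comparisons (pairs of different people).
outcomes = {
    "Group A": {"fp": 48, "tn": 9952},
    "Group B": {"fp": 3, "tn": 9997},
}

for group, o in outcomes.items():
    print(f"{group}: FPR = {false_positive_rate(o['fp'], o['tn']):.2%}")
```

Even small absolute differences in these rates mean very different odds of being misidentified depending on demographic group, which is the disparity the FRVT study documented.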
How has the UK responded to recommendations?
In the Human Rights Council report, the Special Rapporteur recommended that the international community establish legal measures to ensure that attribution of, and accountability for, AWS use are possible, and to facilitate the investigation of violations of international law. Despite being a member, the UK failed to welcome these recommendations.
The UK also rejected recommendations put forward by the House of Lords Artificial Intelligence in Weapon Systems Committee. In its 2023 report, the Committee advised the government that it must enhance the role of Parliament in decision-making on AWS to ensure that the use of AI in AWS is consistent with ethics and the law.
In its response to the report four months later, the government stated that the UK is committed to ensuring ethical control of, and accountability for, AWS, yet offered no insight into whether legal measures would be adopted to achieve these aims. It is difficult to see how the UK can successfully mitigate the risks of AWS without the interventionist approach recommended by the Human Rights Council and the House of Lords.
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law - A potential solution?
The Council of Europe’s 2024 Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law is the first legally binding international treaty on AI. Its objective is to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law. Though the convention is not yet in force, the UK is a signatory.
The convention will undoubtedly act as a safeguard against risks posed by AI systems. Under Article 1(2) (the object and purpose), the UK will be required to adopt appropriate legislative, administrative and other measures to give effect to the provisions of the convention. However, a gap exists under Article 3(4) (the scope), which explicitly excludes matters relating to national defence. Since AWS are developed and deployed for defence and military purposes, they are treated as defence tools and therefore fall outside the convention’s scope. This limitation could leave the UK unaccountable for human rights violations caused by AWS.
The convention, though effective in mitigating the risks posed by AI systems generally, cannot adequately address the risks that AWS bring. To avoid falling behind, the UK must act immediately on the recommendations it has received and establish legal measures to respond to the rising risks of AWS, just as other jurisdictions have.
[1]: For example, the EU adopted the AI Act in 2024. It establishes a risk-based classification system, under which AI systems are analysed and classified according to the risk they pose; higher risk levels attract more stringent compliance requirements. Additionally, Japan has proposed its first AI bill, with the purpose of prioritising human rights and safeguarding against violations.
References:
Bianca Gonzalez, ‘UK’s £230 million plan to implement police facial recognition and drones’ Biometric News (London, 7 March 2024)
Council of Europe, Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225, 5 September 2024) <https://rm.coe.int/1680afae3c> accessed 4 April 2025
Department for Science, Innovation & Technology, A pro-innovation approach to AI regulation (CP 815, 2023)
House of Lords AI in Weapon Systems Committee, Proceed with Caution: Artificial Intelligence in Weapon Systems (HL Paper 16, 2023)
Human Rights Council, ‘Autonomous Weapons Systems: Special Rapporteur on extrajudicial, summary or arbitrary executions’ (2024) UN Doc A/HRC/56/CRP.5
IAPP Research, ‘Global AI Law and Policy Tracker’ (2024) <https://iapp.org/resources/article/global-ai-legislation-tracker/> accessed 3 April 2025
Ministry of Defence, The Government Response to the Report by the House of Lords AI in Weapon Systems Committee: ‘Proceed with Caution: Artificial Intelligence in Weapon Systems’ (CP 1023, 2024)
Patrick Grother, Mei Ngan and Kayee Hanaoka, ‘Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects’ (NISTIR 8280, 2019)
Robert Taylor, ‘Artificial intelligence: is the UK falling behind?’ (2025) 175 NLJ 19