24 June 2025
The Threat of Terrorist Use of Self-Driving Cars and Slaughterbots: A Growing Concern for Global Security
A recent report by the United Nations has sounded an alarm about the potential misuse of artificial intelligence (AI) by terrorist groups to carry out attacks using self-driving cars and swarms of autonomous drones. The report, “Algorithms and Terrorism: The Malicious Use of Artificial Intelligence for Terrorist Purposes,” highlights the risks posed by these technologies and emphasizes the need for governments and policymakers to take proactive measures to prevent their misuse.
The report delves into the dark side of AI, exploring its potential vulnerabilities and consequences. According to Antonia Maria de Meo, director of the United Nations Interregional Crime and Justice Research Institute (UNICRI), "the reality is that AI can be extremely dangerous if used with malicious intent." A primary focus of the report is the damage that autonomous vehicles, drones, and other forms of automated transportation could cause if they fall into the wrong hands.
Vehicles have long been used in terrorist attacks, including deliberate ramming attacks. With increased autonomy in cars, this threat could become even more pronounced. The report highlights several ways in which self-driving cars could be exploited by terrorists. For instance, these vehicles could be packed with explosive devices or used to block roads, causing chaos and destruction.
However, the report also acknowledges that built-in safety features could frustrate terrorist plots in this area. Terrorist interest in the technology has so far remained rudimentary: initiatives discussed by Islamic State supporters ultimately did not materialize.
Another concern identified in the report is the threat of “swarms of autonomous drones.” These drones, which use facial recognition and other technologies, have the potential to target specific individuals with unprecedented precision. The report cites a 2017 video produced by the U.S.-based Future of Life Institute called “Slaughterbots,” which depicts a swarm of micro drones loaded with explosives using facial recognition to identify and attack their targets in a kamikaze fashion.
While this technology is not currently available off the shelf, the report warns that such a scenario is no longer pure science fiction. The threat posed by autonomous drones is real and should be taken seriously. Beyond these dramatic scenarios, the report also highlights the potential for more mundane exploitation of AI.
The growing reliance on map apps for routing in urban areas could give terrorists an opportunity to inject fake traffic congestion data, delaying or diverting security forces from the scene of an attack. The report's authors acknowledge that the technical capability of terrorist groups and individuals to deploy technologies like AI may currently be low, but they stress that this does not diminish the risk.
Rather, it underscores the need for governments and policymakers to stay ahead of these threats and develop effective strategies to counter them. To that end, the report recommends several measures: comprehensive research into the potential misuse of AI by terrorist groups; improved cooperation among stakeholders worldwide to share intelligence and best practices; a better understanding among policymakers of AI's capabilities and risks; and greater use of AI within counter-terrorism strategy itself.
The potential for these technologies to be used as tools of terrorism highlights the need for a multifaceted approach to counter this threat. Investing in research and development of AI safety protocols, enhancing cybersecurity measures, and promoting international cooperation are essential steps towards addressing these concerns.
Recent years have seen several high-profile incidents involving autonomous vehicles and drones, including a 2016 incident in which a Tesla Model S operating on Autopilot collided with a tractor-trailer, and the 2019 drone attacks on Saudi oil-processing facilities. These incidents demonstrate that the risks surrounding self-driving cars and autonomous drones are real and should not be underestimated.
Governments and policymakers have a critical role to play in driving these efforts, from funding AI safety research and strengthening cybersecurity standards to coordinating responses across borders.
Companies involved in the development and deployment of AI-powered transportation systems must also take proactive steps to ensure that their products are safe and secure. This includes implementing robust security measures, conducting regular testing and evaluation, and engaging with regulators and policymakers to address any concerns or vulnerabilities.
Ultimately, the threat posed by self-driving cars and swarms of autonomous drones is a complex issue that demands a comprehensive response. By pairing robust security measures with innovative solutions that promote AI safety and responsible deployment, we can mitigate these risks and create a safer, more secure world for all.