Authors: Adya Madhavan and Satya Sahu

Document Type: Takshashila Discussion Document 2024-25, Version 1.0

Citation: Adya Madhavan and Satya Sahu, “A Survey of Military AI Technologies,” Takshashila Discussion Document No. 2024-25, December 2024, The Takshashila Institution.

Executive Summary

Artificial intelligence has been widely recognised for its transformative potential across sectors. In defence, there are many concerns about autonomy, the risks that come with it, and the ethics involved. Nevertheless, governments and militaries worldwide are increasing investments in military AI technologies to stay technologically advanced and capitalise on the benefits of AI. This document explores the motivations behind the military adoption of AI by examining areas where it has been deployed and analysing its transformative potential.

There are three key takeaways from this primer:

  • AI in the military performs three broad functions: automation, prediction, and analytics. Each function is further characterised by the degree of human control and oversight involved.

  • AI’s utility for military purposes spans various areas, broadly divided into autonomous and semi-autonomous vehicles, target identification systems and Lethal Autonomous Weapons Systems (LAWS), decision support systems, battlefield healthcare, logistics and maintenance, and data analysis. These applications have already been implemented globally to varying degrees of sophistication.

  • Although AI is widely touted as having the potential to transform economies, several technological and operational barriers stand in its way. The technological barriers may take years to address; the operational ones will require a combination of changing pre-existing systems and strengthening processes. As of now, the transformational impact of AI is hindered both by the way legacy military systems and processes operate and by the modifications the technology itself needs before the military can use it effectively.

Document Formatting Note: This document has been formatted to be read conveniently on screens with landscape aspect ratios. Please print only if absolutely necessary.

Author Information: - Adya Madhavan is a Junior Research Analyst with the High-Tech Geopolitics Programme at the Takshashila Institution, Bengaluru, India. She can be reached at adya@takshashila.org.in - Satya Sahu was a Research Analyst with the High-Tech Geopolitics Programme at the Takshashila Institution, Bengaluru, India. He can be reached at https://www.linkedin.com/in/satyashoovasahu/

Acknowledgements: The authors would like to thank their colleague Aditya Ramanathan for his valuable contributions and feedback.

Disclosure: The authors have utilised ChatGPT and NotebookLM in the course of developing the glossary of terms.

Introduction

In recent years, AI has been widely touted for its transformative potential in every sector, ranging from education to defence. In 2022, Defence Minister Rajnath Singh unveiled over 75 AI technologies1 conceived for military purposes for the Indian armed forces. From news outlets to academia, there has been a flurry of writing on concerns regarding AI and autonomous technologies. However, if AI is so risky, why are companies and governments ramping up investments in adapting AI for military purposes? This document explores why militaries seek to adopt AI capabilities. It also examines the potential of AI to transform existing military capabilities through an examination of current use cases.

Although we have seen some of generative artificial intelligence’s features at work with widely accessible LLMs like ChatGPT and Perplexity, many roadblocks still limit its applicability to other fields, including military applications. For instance, effective military planning is more than just target lists and requires a subtle understanding of an adversary’s intent as well as political contexts. Furthermore, military decisions often have to be made amid the fog of war, with little reliable information at hand.

Despite these limitations in the military field, AI has sometimes been likened to nuclear weapons2. However, nuclear weapons are not a useful analogy for AI, which is a set of technologies that diffuses into other platforms and technologies. A more instructive framework is the idea of AI as a general-purpose technology (GPT)3. The GPT framework explains how certain technologies can drive widespread transformation, both in civilian life and military applications.4 Jeffrey Ding, for instance, takes the example of steam, a technology whose general-purpose character derived from its capacity to provide energy to industrial processes and transportation, thus transforming both economies and militaries. While AI may be in a relatively nascent stage and has yet to cascade into different fields and processes, it has similar potential to change both civilian and military undertakings.

Other analogies include comparing AI to a brain, since neural networks take their structural inspiration from the human brain. However, that analogy oversimplifies AI’s capabilities and overstates its resemblance to human thought. In its current capacity, AI lacks consciousness, intuition, and emotional processing, which are integral to the human brain. The GPT analogy works better since it emphasises AI’s transformative power without attributing human qualities to it.

What is Military AI? Definitional Conundrums and Concepts

The history of modern AI development is perhaps inextricably linked with the military’s interest in the concept of “thinking machines” that could assist with complex military tasks. The US Air Force, for instance, funded the development of one of the first AI programs, the Logic Theorist, in 1956. Through the 1960s and 1970s, the US Defense Advanced Research Projects Agency (DARPA), along with the US military, played a central role in sponsoring projects such as the Speech Understanding Research program5 and other research that sought to codify human expertise into rules for decision-making on and off the battlefield.

Narrow AI: systems designed to perform specific tasks, such as image recognition or language translation. This type of AI is currently available and actively deployed in various military applications.

AGI: a hypothetical machine that exhibits intelligent behaviour across multiple domains, akin to human cognitive abilities. While the development of Artificial General Intelligence remains a long-term goal for researchers, it is not yet a reality and is not the dominant focus of current military AI efforts.

The military’s focus on being an early adopter of AI systems that could “reason” and learn from large knowledge repositories6 has only become more prominent over the years. This is not merely due to an increasing need for the direct deployment of automated and autonomous systems (such as drones and robots) in warfare, but also for predictive maintenance, targeting and guidance, air defence, intelligence analysis, and cyber-defence.7

To understand why militaries across the globe are pursuing the development of different AI technologies, it is useful to understand what features set AI apart from human abilities and other associated technologies such as those involved in computing. This chapter defines AI and the bundle of technologies that the term encapsulates, and classifies it based on the broad roles it can perform in the military domain.

This document does not weigh in on the debate surrounding the possibility of fielding Artificial General Intelligence (AGI)8 – AI systems with human-like intelligence and transferable knowledge, capable of solving a wide range of problems across multiple domains. At the time of writing, AGI remains a theoretical concept, and speculating about it is an endeavour best left to academic discussions.9 This paper therefore confines the discussion to “narrow” AI systems: systems designed to perform specific tasks, limited by their inability to apply training data and learned behaviour to new applications or domains without significant human intervention. All current and mature AI technologies fall into this category10.

The US Defense Advanced Research Projects Agency (DARPA) utilises two categories to differentiate various AI systems:

1. Handcrafted Knowledge AI Systems

An older approach to developing AI systems, these mainly comprise programmed rulesets that computing hardware uses to process information.11 The rulesets are abstract representations of domain-specific knowledge and instructions, provided in an “if-then” format. As long as such rules are provided to a computing system in the form of a program, the machine can apply them to a set of inputs and furnish an output. Given millions of such rules, such a system can appear to be very “intelligent” and excel at a highly specific task.12
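The “if-then” approach described above can be sketched in a few lines of code. The track attributes, thresholds, and labels in this minimal illustration are invented for the example, not drawn from any real system:

```python
# A toy handcrafted-knowledge "Expert System": every rule below was written in
# advance by a human, and the machine simply applies them in order.

def classify_track(track):
    """Apply hard-coded if-then rules to a radar track and return a label."""
    if track["speed_kts"] > 600 and track["altitude_ft"] < 500:
        return "possible sea-skimming missile"
    if track["iff_response"]:
        return "friendly"
    if track["speed_kts"] < 150:
        return "likely civilian aircraft"
    # No rule matched: handcrafted systems cannot generalise beyond their rules.
    return "unknown - flag for operator"

print(classify_track({"speed_kts": 700, "altitude_ft": 300, "iff_response": False}))
```

The final branch hints at the limitation discussed later in this section: any situation the rule authors did not anticipate falls through to a human.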

Almost ubiquitous today across software solutions, infrastructure, services and logistics automation, video games, and more, such systems are also known as “Expert Systems”, since the logical rules and parameters provided to the AI system are identified in advance by human experts.13 A kind of Expert System that combines such rulesets with information gleaned from sensors and other instruments is termed a “Feedback Control System”.14 In the military, these systems form the backbone of missile guidance and targeting technologies, autopilots, and digital signal processing systems.

Since it is virtually impossible to program a system with sufficient rulesets to choose an appropriate course of action in all possible situations, such AI systems are also limited in their ability to adapt their knowledge and “experience” to new problems. This inhibits the true automation of tasks.15

A prime example of an Expert System is IBM’s Deep Blue supercomputer,16 which used its high-speed computing prowess to beat the then reigning world champion, Garry Kasparov, at chess in 1997.17

2. Statistical or Machine Learning (ML) AI Systems

Easily the most popular subdomain of AI, Machine Learning Systems differ from Handcrafted Knowledge Systems in that they are not explicitly programmed with rigid rulesets. After undergoing a “training period” where a human-developed algorithm is run on some sample data (training datasets), an ML system “learns” to generate its own rulesets, which can process new information and return the correct output.18

The key advantage of ML systems is that they can, in effect, achieve true automation within a specific domain despite the limitations of human programming capabilities, because they effectively “program themselves”.19 This approach generates sets of rules from a specific dataset, which can later be applied to new input consisting of similar data. Expert Systems, with their hard-coded rules, may not accommodate a wide degree of variance in input datasets; the versatility and adaptability of ML systems is a major advantage over them. Facial recognition is an example of a task once handled primarily by Expert Systems that is now dominated by ML systems.20
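The contrast with hand-coded rules can be illustrated with a minimal “learning” step: instead of a programmer supplying a threshold, the system derives one from labelled training data. The decision-stump learner and the toy dataset below are illustrative assumptions, not any real military algorithm:

```python
# A minimal ML sketch: learn a classification rule (a single threshold) from
# labelled examples rather than hard-coding it.

def train_stump(samples):
    """Find the threshold on one feature that best separates the two classes."""
    best_thresh, best_acc = None, -1.0
    values = sorted(x for x, _ in samples)
    for lo, hi in zip(values, values[1:]):
        thresh = (lo + hi) / 2  # candidate boundary between adjacent values
        acc = sum((x > thresh) == label for x, label in samples) / len(samples)
        if acc > best_acc:
            best_thresh, best_acc = thresh, acc
    return best_thresh

# Toy training set: (radar blip size, is_aircraft)
training = [(0.2, False), (0.3, False), (0.4, False), (0.9, True), (1.1, True)]
print(train_stump(training))  # the learned boundary lands between the classes
```

The resulting rule (a blip larger than roughly 0.65 is an aircraft) was never written by a programmer; it was generated from the data, which is the essence of the difference described above.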

ML systems have also displayed the ability to incorporate into their programming certain aspects of human decision-making that are overlooked, unanticipated, or difficult to abstract into a set of rules manually. This has resulted in ML-based AI systems exceeding both human performance and that of Handcrafted Knowledge AI systems in areas such as real-time language translation, content generation, and image analysis.21 On the other hand, the ability of ML systems to program themselves to adapt to complex real-world environments and display unanticipated abilities is poorly understood,22 and such “emergent behaviour” can also undermine their predictability and reliability.23

That said, the ability of ML systems to “interpret”, “learn” and “reason” autonomously in a manner mimicking human intelligence makes them highly versatile and well-suited for deployment in the military context.

These definitions also indicate that Expert Systems are no longer considered to be “truly AI” in common parlance,24 as the tasks they perform are instead understood to be part and parcel of general modern computing systems, and not merely an imperfect machine’s translation of human intelligence and abilities. In fact, such systems do not merely mimic, but far exceed human capabilities in some areas (for example, in air defence).25

ML systems are therefore considered the default AI technology approach today, as modern discussions of AI are centred around “the capability of computer systems to perform tasks that normally require human intelligence.”26 While ML systems are also limited in their ability to adapt to contexts they are not trained for, advancements in dataset size and diversity, increased computational power, and algorithmic improvements continue to benefit their capacity for perception, reasoning, learning, and decision-making. However, as performance increases and complexity grows with it, ML systems become less explainable and harder to understand under human scrutiny. Alongside the aforementioned concerns surrounding reliability and predictability, the imperative to ensure explainable AI (XAI) systems has become a core concern for the military,27 not merely due to the ethics of fielding such systems but also the need to maintain their trustworthiness.28 Future research documents will examine this area.

In both military and civilian contexts, therefore, the term “AI” does not refer to a monolithic technology; it is an umbrella term encompassing new and mature technologies that usually interact with each other to produce the desired capabilities.29

A Typology for Military AI

Military AI is therefore most often used as a catch-all term for a broad suite of technologies and their applications. It is more useful to classify AI systems on the basis of their capabilities, or the roles they can fulfil in the military context.30 Broadly, these roles can be analytical, predictive, or operational, and can be understood at both the strategic and tactical levels.

Strategic: The overarching, long-term planning and execution of military objectives that align with national goals and policies. It focuses on achieving broader outcomes, such as securing peace, deterring adversaries, or winning wars.

Tactical: The immediate, short-term planning and execution of actions on the battlefield to achieve specific objectives. It involves deploying forces, manoeuvring units, and engaging adversaries in direct combat.

In an analytical role, AI can identify patterns, trends, and anomalies by processing vast amounts of unstructured data that may be difficult or time-consuming for humans to discern. AI algorithms excel at discerning subtle correlations and trends within complex information streams, such as high-resolution satellite imagery and real-time drone feeds. Automating the accurate interpretation of disparate data sources has been a moving target for military AI research since its early days, particularly since it has proven to be invaluable for timely situational and battlefield awareness.

At the strategic level, military planners and leadership gain more comprehensive views of the battlefield, granular threat assessment, and improved adaptability in their operations as AI-enabled intelligence analysis improves.

At the tactical level, rapid processing of real-time battlefield data from multiple sources is invaluable to military units and individual operators for maintaining enhanced situational awareness and better decision-making. AI systems onboard military aircraft are a good example, analysing sensor data and satellite imagery to constantly provide pilots with immediate, actionable intelligence for adapting to changing battlefield conditions.

In predictive roles, AI systems utilise historical and real-time input data to generate simulations for predicting potential outcomes in a range of military scenarios, both on and off the battlefield. Besides better-informed decision-making, forecasts like these can be transformative for a wide variety of activities, such as identifying latent vulnerabilities in critical military infrastructure or operations, simulating hyper-realistic scenarios for personnel training (often in conjunction with downstream technology stacks such as Virtual Reality), modelling complex adversary behaviour, and forecasting logistical requirements for military operations.31

At the strategic level, predictive analytics have obvious benefits for long-term planning, as historical data, geopolitical trends, and updated information about adversary capabilities can be synthesised to forecast potential conflict scenarios. Military planners can leverage these forecasts to identify optimal deterrence strategies, plan force deployment, and allocate resources more quickly and effectively.

At the tactical level, a multitude of sensor data, intelligence updates, weather patterns, and other inputs can be used by AI to simulate the likelihood and location of threats and opportunities in the field. Being able to take proactive measures to avoid and counter threats (like potential ambush sites) is invaluable for units and vehicles. Similarly, AI-powered predictive maintenance can analyse sensor data from equipment and vehicles to anticipate failures and optimise maintenance schedules, reducing downtime and ensuring operational readiness in the field.32
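The predictive-maintenance idea mentioned above can be sketched as a simple rolling check over a stream of sensor readings. The readings, window size, and alert threshold here are illustrative assumptions rather than parameters of any fielded system:

```python
# Flag a component for maintenance when the recent average of its sensor
# readings crosses a threshold - a toy stand-in for predictive maintenance.

from collections import deque

def maintenance_alerts(readings, window=3, threshold=0.8):
    """Return the indices at which the rolling average exceeds the threshold."""
    recent = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        recent.append(value)
        if len(recent) == window and sum(recent) / window > threshold:
            alerts.append(i)
    return alerts

# Vibration levels trending upward as a part wears out
vibration = [0.3, 0.4, 0.4, 0.5, 0.7, 0.9, 1.0]
print(maintenance_alerts(vibration))
```

A real system would learn the threshold and failure signatures from historical maintenance data rather than hard-coding them.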

In an operational role, AI systems are deployed directly to execute tasks or missions, either autonomously or in collaboration with human operators. This can take many forms, depending on the specific application and the level of autonomy granted to the AI system. Uncrewed Combat Aerial Vehicles (UCAVs) are a prominent example of the use of AI in military operations, where AI is used to control various aspects of navigation, target identification, and weapons deployment.33 In cyber-warfare, AI-powered defences are increasingly the only practical way to detect and respond to threats in real time.

Such systems can automate and take over repetitive, time-consuming, or complex tasks, freeing up human operators to focus on more critical aspects of their roles. In operational roles, AI can also reduce risk to humans. For instance, AI-enabled robotic systems can conduct tasks such as bomb disposal,34 or intelligence, surveillance, and reconnaissance (ISR) operations in hazardous and hostile environments, for extended periods.

To be sure, remotely operated drones have been deployed for many years, with some automated operations. However, this automation was primarily predicated on “deterministic” Handcrafted Knowledge AI systems and still requires human operators. Beyond reducing human error, AI that leverages ML algorithms and vast amounts of data also has the potential to be more efficient and accurate at these tasks.35

Autonomy vs Automation

A central thread that emerges from the discussion on AI systems is that human operators leverage such systems to delegate specific tasks. The act of delegation implies that the operator gives up a degree of control over how the task is done, and the AI system gains a corresponding degree of autonomy.36 When a system is programmed with hard-coded constraints and rules for a particular task, it is considered an “automated” system, with a low degree of autonomous control over how it performs the task. Conversely, when a system can perform the task without any constraints imposed by the delegating operator, it can be considered an “autonomous” system. All instances of “narrow AI” systems exist between these two extremes.

We can further classify military AI systems by the degree of human involvement, control and oversight.37 This is useful for ensuring the proper functioning of chain-of-command, and accountability mechanisms for different configurations of systems involving human control and AI autonomy.

  1. Human-in-the-Loop (HITL) / Semi-Autonomous: These are the systems with the most human oversight. While AI may provide suggestions and recommendations, all final decisions are made by human beings.

  2. Human-on-the-Loop (HOTL) / Supervised autonomous: HOTL systems operate more autonomously than HITL systems. Actions are performed by AI but under human supervision. Human supervisors can intervene if need be, but are not typically involved in real-time decision making.

  3. Human-out-of-the-Loop (HOOTL) / Fully Autonomous: These are fully autonomous systems that operate with no human intervention. HOOTL systems make independent decisions based on real-time data and previously set criteria, without any oversight.38
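The three configurations above can be summarised as a single decision function whose behaviour depends on the oversight mode. The mode names follow the typology above, while the signal values and the “hold” action are illustrative:

```python
# Sketch of HITL / HOTL / HOOTL oversight, following the typology above.
# human_signal may be "approve", "veto", or None (no input received).

def decide(mode, recommendation, human_signal=None):
    if mode == "HITL":    # semi-autonomous: nothing proceeds without approval
        return recommendation if human_signal == "approve" else "hold"
    if mode == "HOTL":    # supervised: proceeds unless the supervisor vetoes
        return "hold" if human_signal == "veto" else recommendation
    if mode == "HOOTL":   # fully autonomous: human input is never consulted
        return recommendation
    raise ValueError(f"unknown mode: {mode}")

print(decide("HITL", "engage"))           # no approval given, so the system holds
print(decide("HOTL", "engage"))           # supervisor silent, so the action proceeds
print(decide("HOOTL", "engage", "veto"))  # the signal is ignored by design
```

The asymmetry between the first two branches captures the key distinction: HITL defaults to inaction without a human, while HOTL defaults to action unless a human intervenes.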

As the scope of this paper is limited to being a primer on military AI technologies, we will not examine the complexity of these configurations,39 nor their legal and moral implications. However, these merit further study in AI policy research. It is clear that the convergence of various AI technologies is transforming the nature of warfare. With these conceptual foundations in mind, the following chapter explores how such capabilities are being deployed and utilised by militaries around the world.

Use Cases of Military AI

AI systems are being deployed across a wide range of military functions, from logistics to combat. Especially with the most recent AI wave of the last few years, countries are scrambling to harness AI to gain a technological edge over competitors and optimise processes. The following section provides a snapshot of the various ways in which different countries are innovating with and using AI in their militaries. It looks at countries that are major AI users on different fronts and highlights notable use cases across different kinds of AI applications.

1. Autonomous and Semi-Autonomous Vehicles

Drones and drone swarms are among the most common uses of AI in today’s military landscape. Countries across the board are rapidly developing AI-augmented UAVs. Currently, most drones are remotely operated and require continued network connectivity, which leaves them vulnerable to signal jamming. Fully autonomous drones are far less vulnerable to jamming, since they are guided by onboard artificial intelligence and do not require a link to a pilot. They can also function in large swarms. Human operators often struggle to coordinate operations using multiple drones, but AI has the potential to deploy hundreds of drones while effectively maintaining situational awareness.

Ukraine is currently deeply involved in developing drones for military use. Most recently, there have been reports that Ukraine is developing autonomous drone swarms that can identify and attack targets in coordination with each other. Ukraine is already using AI-equipped drones for terrain mapping and detecting landmines, in addition to using drones in combat. For combat purposes, Unmanned Aerial Vehicles (UAVs) such as the Saker Scout have been approved for battlefield use40.

The US, too, has been ramping up R&D on UAVs. In July 2023, the US tested and demonstrated vehicles like the uncrewed, AI-enabled XQ-58 drone. Additionally, the US has tested and utilised drones such as the ‘Global Hawk’ and ‘Reaper’ for intelligence-gathering purposes41. The US Air Force also flew an AI-controlled F-16 fighter jet, accompanied by two piloted F-16s42. An AI-piloted fighter jet also participated in a simulated dogfight with an experienced US Air Force pilot; despite the pilot’s experience, the two aircraft were reportedly fairly evenly matched.

The US Navy has also developed Unmanned Undersea Vehicles (UUVs) and Unmanned Surface Vehicles (USVs)43. UUVs are being explored for tasks that are too dangerous for humans, such as mine countermeasures and ISR in areas that are inaccessible to crewed vehicles. The USVs the US Navy is developing can be fitted with various weapons and sensors and adapted for high-risk missions. They also open up the possibility of cooperative missions, where a fleet of USVs can coordinate and be deployed together.

Additionally, the US, the UK, and Australia have tested AI-powered AUKUS uncrewed aerial vehicles44. The British Army is developing systems like the MUTT (Multi-Utility Tactical Transport)45, an unmanned ground vehicle (UGV) designed to serve as a force multiplier in combat.

China, too, has developed fleets of AI-equipped UAVs. Its Wing Loong drones are reportedly set to be equipped with AI. China also has AI-powered drones capable of autonomously navigating forests without relying on satellite navigation systems. While there is limited available data on similar technologies for military use, the same technology can likely be adapted for military UAVs as well.

2. Target Identification Systems and Lethal Autonomous Weapons Systems

The idea of a machine that identifies and zeroes in on targets seemed straight out of science fiction until very recently. Today, however, such systems are a reality and among AI’s most controversial applications. AI target identification systems were deployed in the early stages of the Israel-Hamas war in the past year. Israel developed systems known as ‘Lavender’46 and ‘Where’s Daddy’.47 Lavender was programmed to identify suspected Hamas operatives, who would then be vetted as potential targets of bombings. However, reports revealed that approval for these bombings was given without much discretion. Where’s Daddy was used to track identified targets, monitor their whereabouts, and attack them once they were in their homes. A third system, called Gospel, operated similarly to Lavender; the only marked difference was that Gospel identified structures for bombing while Lavender identified human targets.

The US also has an AI-driven system that operates similarly. The Advanced Targeting and Lethality Automated System, or ATLAS, was mounted on an M1 Abrams tank and tested for a variety of functions in 202248. According to reports, target identification and tracking abilities were tested in a realistic exercise environment. The US XVIII Airborne Corps also utilised AI to help Ukraine identify Russian targets in 202249.

During the Libyan Conflict in 2020, Turkey deployed a drone called Kargu 2, which was developed by the Turkish company STM. This is the first known deployment of a lethal autonomous weapons system in battle. Kargu 2 was used to identify and then track and attack targets50.

There have also been reports that the Indian army has utilised over 140 AI systems across its borders with China and Pakistan to identify and classify targets51.

3. Decision Support Systems

Using AI to identify targets is merely one form of decision support the technology can provide. Due to its ability to process large amounts of data rapidly, AI can also lend itself to decision-making at the strategic level. Training AI programs on large amounts of data about strategy also enables them to make better strategic decisions rapidly.

For instance, China has recently developed an ‘AI commander’52 that currently plays the role of commander in simulations and war games. While it is constrained by a cap on its memory, the AI commander can theoretically act as a decision-maker in actual conflict, not just simulations, providing rapid, data-driven decisions that allow battle strategy to be crafted quickly in response to real-time situations. While China maintains that this is only for training purposes, and that ‘the party commands the gun’ (or, in this case, the computer), it may in the future be used to make strategic decisions in actual conflict situations.

Earlier this year, Turkey developed a decision support system that won a United Nations award53. The system utilises real-time meteorological data to respond to forest fires as effectively as possible, and recommends appropriate evacuation and intervention strategies based on its evaluation of changing data. This is yet another instance of technology built for one function that can be adapted for wider use in different spheres.

4. Battlefield Healthcare

Another area where AI has the potential to be transformative is battlefield healthcare. AI-based automated medical care, triage, and diagnostics enable armed forces to deliver medical care efficiently in environments that may be too risky for human medical personnel.

Israel has developed a platform called Aidoc54, which rapidly evaluates pathology reports and sends alerts. Doctors get immediate alerts on patients with critical conditions and can attend to them with a sense of urgency. In war conditions, this enables limited medical personnel to prioritise the needs of the most critical patients. Israel has also developed robotic machinery that can perform functions such as removing bullets. While these technologies may not have been developed exclusively for military use, they lend themselves well to circumstances where time is of the essence and efficiency is paramount.

The US has also developed an AI-powered triage system that has been cleared by the FDA55. The Automated Processing of the Physiological Registry for Assessment of Injury Severity Hemorrhage Risk Index, or APPRAISE-HRI56, is an app developed by the Department of Defense that uses an AI algorithm trained on vital-sign data. It enables users to evaluate a casualty’s risk of life-threatening blood loss.

AI is already being commercially implemented in various aspects of healthcare, suggesting that its use in battlefield medicine isn’t far off. From diagnostics and lab report analysis to drug discovery, AI is being used to augment many functions and make processes more efficient. Beyond routine tasks, AI can perform complex diagnostics, such as evaluating the aggressiveness of cancers57. While there is limited publicly available information on AI tools developed specifically for military purposes, many of these medical developments can also be readily applied to military medicine.

5. Logistics and Maintenance

Aside from combat and strategy, there are less glamorous aspects of the military that operate behind the scenes, such as supply chain logistics, transport, and maintenance, which are imperative to the smooth functioning of any institution. AI systems are being used to make these tasks more efficient across civilian and military domains.

For instance, the US uses the Autonomic Logistics Information System (ALIS) for predictive maintenance of naval and air force vehicles. In the case of the F-35 fighter jet58, ALIS analyses data on performance, maintenance logs, and operational usage in order to optimise the maintenance needs of the aircraft. Similarly, the Japan Ground Self-Defense Force59 is exploring the application of AI to predict equipment failures, streamline supply chains, and optimise the maintenance of military resources. Additionally, British Airways has predictive aircraft health monitoring systems in place60 that assess aircraft health from data on different aircraft systems. While there is limited public data on the system, it could also be applicable to the Royal Air Force.

The UK has also tested UGV convoys along with the US Army61. Delivering supplies to front lines in dangerous circumstances claims many lives, and these unmanned convoys have been designed to serve this purpose. In the experiment, the vehicles operated both autonomously and semi-autonomously. If deployed, they would mitigate many of the risks of battlefield logistics, with far fewer human beings at risk when delivering supplies.

6. Data Analysis

Another area where AI surpasses human capabilities is data analysis: its capacity to process and recall data, and to perform calculations on it, far exceeds that of a human mind. For the military, this can mean using AI to process large amounts of data and provide insights that support informed choices. Depending on the kind of insights desired, different types of datasets can be used.

The UK, US, and Australia are set to test a new technology that uses AI to process large volumes of sonar data as part of their technology-sharing agreement, AUKUS Pillar II62. Analysing sonar data will help determine the position of Chinese submarines faster and more accurately than existing technologies do.

The US Air Force utilises AI to analyse cyber data. This allows it to comb through large volumes of network data and identify potential security threats without expending large amounts of manpower, capitalising on AI’s ability to process huge amounts of data in real time at a scale human analysts cannot match.

Ukraine, on the other hand, uses AI to process large amounts of satellite data, giving it better intelligence on the location of Russian targets. By processing geospatial data more efficiently and accurately than humans can, AI systems have allowed Ukraine to harness the extensive data available to it and put that data to work.

7. Learning from Civilian Use Cases

Considering that most militaries do not publicly disclose details of their most recent technological developments, looking at civilian applications of AI can help in understanding how far the technology has developed and how it may be repurposed for military users.

Amazon63 uses AI to manage its extensive supply chain. By analysing various data points, including but not limited to customer preferences, demand, and historical data, AI systems can manage inventory according to the needs of the market and optimise both delivery routes and warehouse operations. The same technologies could be applied to military supply chains to manage the distribution of fuel, weapons, and rations to troops in dangerous or contested areas. In addition, predictive analysis could be used to forecast supply shortages. Uber uses AI to optimise routes and implement dynamic pricing; this technology could likewise be adapted for military logistics, optimising transport routes for convoys based on real-time battlefield data.
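The shortage-forecasting idea can be sketched in a few lines. The moving-average model, item names, and figures below are invented assumptions, and commercial systems such as Amazon's are vastly more sophisticated; the sketch only shows the principle of projecting demand from history and comparing it against stock on hand.

```python
# Illustrative sketch of predictive supply forecasting (assumptions only):
# forecast next-period demand with a simple moving average and raise an
# alert when the forecast exceeds current stock.

def forecast_demand(history, window=3):
    """Moving-average forecast of next-period demand."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def shortage_alert(history, stock_on_hand):
    """True if forecast demand exceeds what is currently in stock."""
    return forecast_demand(history) > stock_on_hand

# Hypothetical daily ration issues at a forward depot
rations_issued = [120, 130, 125, 140, 150]
print(forecast_demand(rations_issued))      # average of the last 3 days
print(shortage_alert(rations_issued, 120))  # True: forecast exceeds stock
```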

Tesla’s widely acclaimed Autopilot system64, fitted in its cars, manages navigation, route optimisation, and object detection. The system uses real-time data from the car’s sensors and machine learning to enable the cars to drive themselves through traffic. The same technology could be used for military logistics and for unmanned vehicles of various kinds.

Microsoft uses AI to help balance supply and demand in energy grids65. This predictive analysis helps prevent blackouts by examining and predicting consumption patterns. The same principle could be used to manage energy usage at military bases and forward operating bases. In remote combat zones that rely on off-grid energy, it could help prevent shortages.

Boston Dynamics’ Spot is an AI-powered autonomous quadruped robot66. Spot is designed to make real-time, data-driven decisions, and its size and agility allow it to navigate a variety of conditions. The firm advertises it as capable of providing “valuable insights into routine operations, site health, or potentially hazardous situations”. A technology like Spot could be adapted for purposes such as military reconnaissance or defusing explosives, capitalising on its size, agility, and ability to perform fairly complex tasks.

Limitations and Shortcomings

While the adoption of AI in the military and defence has begun, several limitations can potentially constrain its degree of penetration. There are hurdles at almost every level that need to be addressed. Many of these challenges are primarily technological, but they are compounded by operational issues as well as a host of legal and regulatory challenges.

Example Scenario: Consider a hypothetical civilian traffic network whose input data is compromised through a cyberattack in which attackers inject false information into traffic sensor feeds. The network would then report heavy congestion in areas based on falsified data, disrupting decision-making and rerouting vehicles arbitrarily, which would create actual traffic jams. Attackers could also use ransomware to lock the predictive algorithms behind these traffic sensors and force authorities to pay a ransom to restore the network.
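The mechanics of this scenario can be shown with a toy sketch. It assumes, purely for illustration, a routing system that classifies a road segment as congested when the average reported speed falls below a threshold; the threshold and readings are invented. A handful of fabricated low-speed readings is enough to flip the classification even though genuine traffic is flowing freely.

```python
# Toy illustration of data poisoning in a traffic-routing system
# (threshold and readings are invented for this sketch).

def is_congested(speed_readings_kmh, threshold=30):
    """Classify a road segment as congested if the mean reported
    speed falls below `threshold` km/h."""
    return sum(speed_readings_kmh) / len(speed_readings_kmh) < threshold

genuine = [55, 60, 52, 58]          # real sensor readings: free-flowing
poisoned = genuine + [2, 3, 2, 1]   # attacker-injected false readings

print(is_congested(genuine))   # False: traffic is actually moving
print(is_congested(poisoned))  # True: the poisoned average triggers rerouting
```

A system with no provenance checks on its inputs treats the fabricated readings as legitimate, which is precisely why input integrity matters far more when the algorithm's outputs direct vehicles, or weapons, in the real world.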

While this example describes a relatively benign situation in which the outcome is not catastrophic, a comparable attack on an algorithm used for military purposes could result in loss of life or damage to infrastructure based on falsified data.

Some of the largest technological problems faced are as follows:

1. Data Dependency

For AI models to be effective and make informed decisions, they need to be trained on extensive datasets. Military applications such as image recognition and predictive analysis especially require massive amounts of data representative of diverse conditions, including varied terrains, weather conditions, and combat situations. Comprehensive data, especially for specific use cases, is hard to come by, which limits AI models’ ability to generalise across situations. Data is also scarce for several high-stakes events, such as large-scale cyberattacks, owing to the lack of historical precedent. In such situations, it is even more challenging to train AI systems.

Even in situations where data is available, it tends to be highly classified. The sharing of datasets such as intelligence reports or battlefield logs can raise security concerns. This is a major limiting factor in terms of collaborative AI development.

2. The Issue of Context

Currently, AI systems excel at specific tasks, such as solving complex calculations with large numbers. However, they are unable to adapt to situations that fall beyond the ambit of their training data. Narrow AI cannot autonomously interpret and adapt to real-time changes, such as weather shifts or equipment failures. This means that some dynamic battle conditions are too unpredictable for AI models to maintain an adaptable understanding of the context. Unlike human decision-makers, AI lacks any intuitive decision-making ability and relies solely on the data it is trained on to understand causality.

3. Computational Limitations

The availability of computational resources is an issue wherever AI systems are deployed. Most applications of AI require immense computational resources that are hard to provide in environments such as forward operating bases or mobile platforms. Deploying AI in such resource-constrained environments introduces latency in processing, decision-making, and response, which can be critical in time-sensitive operations. High-performance AI systems are also highly energy-intensive, a further resource concern given that many conflict zones are remote or off the grid.

In addition to these technological hurdles, certain operational challenges also need to be addressed when deploying AI:

1. Integration of Systems

By and large, military infrastructure tends to be a combination of legacy systems and more recent technologies. To successfully integrate new technologies, such as AI, with older technologies, such as analogue communication networks, new interfaces need to be created, or old systems need to be overhauled completely. This process is both expensive and time-consuming. There is also the issue of compatibility with older physical infrastructure. For example, predictive maintenance tools may not be easy to implement with older aircraft or tanks without sensors.

Military organisations also tend to have data silos with separate systems between divisions, such as the army, navy and air force. This further highlights the issues with integration, because in addition to legacy systems, newer systems are also not uniform and centralised.

2. Interoperability

Even within the AI realm, different nations and military branches often use different AI platforms built with differing data standards and programming languages. In larger coalitions such as NATO, this lack of standardisation can further hinder interoperability. Additionally, proprietary systems from commercial vendors can limit interoperability, since they are unlikely to be compatible with systems developed by competing vendors.

3. Supply Chain Dependencies

Emerging technologies, such as AI, often require components, such as microchips, that tend to have complex global supply chains. Therefore, there exists the potential for geopolitical tensions to disrupt supplies, hindering access to critical hardware. These cutting-edge components are also often manufactured in limited quantities, another factor that can impact production globally.

Global supply chains also carry security risks, since vulnerabilities in the chain can have direct repercussions for the military. As the pager attacks in Lebanon67 on the 17th and 18th of September 2024 made evident, adversarial access to the supply chain can pose a significant threat in addition to compromising system integrity.

Conclusion

In the coming years, militaries will need to address the challenges discussed above to use artificial intelligence effectively. For India, this is likely to mean a rapid increase in the adoption of AI technologies across military domains. Since China is aggressively pursuing the adoption of AI, the Indian armed forces will inevitably be pushed into employing it. Currently, little is known about the details of either the capabilities or the limitations of military AI; filling this information gap is the urgent first step towards better assessments of Chinese capacities and more effective deployment of AI in India. Once that gap has been filled, India will need to seek out trusted partners to help it develop and acquire military AI within reasonable timeframes to ensure that it does not fall behind. The strategic, operational, and tactical effects of military AI will need to be studied carefully for India to best achieve its ambitions.


References

Footnotes

  1. Pib.gov.in. “Raksha Mantri Launches 75 Artificial Intelligence Products/Technologies during First-Ever ‘AI in Defence’ Symposium & Exhibition in New Delhi; Terms AI as a Revolutionary Step in the Development of Humanity,” 2022.↩︎

  2. Matthews, Dylan. “AI Is Supposedly the New Nuclear Weapons — but How Similar Are They, Really?” Vox, June 29, 2023.↩︎

  3. Dafoe, Alan, and Jeffrey Ding. “Engines of Power: Electricity, AI, and General-Purpose Military Transformations | GovAI.” Governance.ai, 2021.↩︎

  4. Turn, R, A Hoffman, and T Lippiatt. “Military Applications of Speech Understanding Systems.” RAND, June 1974.↩︎

  5. Turn, R, A Hoffman, and T Lippiatt. “Military Applications of Speech Understanding Systems.” RAND, June 1974.↩︎

  6. Cohen, Paul R, Robert Schrag, Eric K Jones, Adam Pease, A.D Lin, Barbara Starr, Dave Gunning, and Murray Burke. “The DARPA High-Performance Knowledge Bases Project.” AI Magazine 19, no. 4 (December 15, 1998): 25–49.↩︎

  7. Gentile, Gian P, Michael Robert Shurkin, Alexandra T Evans, Michelle Grisé, Mark Hvizda, Rebecca Jensen, International Security And Defense Policy Center, and Rand Corporation. A History of the Third Offset, 2014-2018. Santa Monica, Calif.: Rand Corporation, 2021.↩︎

  8. Goertzel, Ben. “Artificial General Intelligence: Concept, State of the Art, and Future Prospects.” Journal of Artificial General Intelligence 5, no. 1 (December 1, 2014): 1–48.↩︎

  9. Gentile, Gian P, Michael Robert Shurkin, Alexandra T Evans, Michelle Grisé, Mark Hvizda, Rebecca Jensen, International Security And Defense Policy Center, and Rand Corporation. A History of the Third Offset, 2014-2018. Santa Monica, Calif.: Rand Corporation, 2021.↩︎

  10. Harris A., Laurie. “Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress.” CRS Reports (.gov), August 4, 2023.↩︎

  11. Prabhakar, Arati. “Powerful but Limited: A DARPA Perspective on AI,” n.d.↩︎

  12. Allen, Greg. “Understanding AI Technology.” apps.dtic.mil. JAIC, April 1, 2020.↩︎

  13. Curating the Future. “The Three AI Waves That Will Shape the Future,” February 28, 2017.↩︎

  14. Allen, Greg. “Understanding AI Technology.” apps.dtic.mil. JAIC, April 1, 2020.↩︎

  15. Allen, Greg. “Understanding AI Technology.” apps.dtic.mil. JAIC, April 1, 2020.↩︎

  16. IBM. “Deep Blue | IBM.” www.ibm.com, 2024.↩︎

  17. Curating the Future. “The Three AI Waves That Will Shape the Future,” February 28, 2017.↩︎

  18. Allen, Greg. “Understanding AI Technology.” apps.dtic.mil. JAIC, April 1, 2020.↩︎

  19. Allen, Greg. “Understanding AI Technology.” apps.dtic.mil. JAIC, April 1, 2020.↩︎

  20. Lydick, Neil. “A Brief Overview of Facial Recognition,” n.d.↩︎

  21. Allen, Greg. “Understanding AI Technology.” apps.dtic.mil. JAIC, April 1, 2020.↩︎

  22. Schaeffer, Rylan, Brando Miranda, and Sanmi Koyejo. “Are Emergent Abilities of Large Language Models a Mirage?” ArXiv:2304.15004 [Cs], April 28, 2023.↩︎

  23. Trusilo, Daniel. “Autonomous AI Systems in Conflict: Emergent Behavior and Its Impact on Predictability and Reliability.” Journal of Military Ethics 22, no. 1 (January 2, 2023): 2–17.↩︎

  24. Morgan, Forrest, Benjamin Boudreaux, Andrew Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman. “Military Applications of Artificial Intelligence Ethical Concerns in an Uncertain World,” 2020.↩︎

  25. Allen, Greg. “Understanding AI Technology.” apps.dtic.mil. JAIC, April 1, 2020.↩︎

  26. Morgan, Forrest, Benjamin Boudreaux, Andrew Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman. “Military Applications of Artificial Intelligence Ethical Concerns in an Uncertain World,” 2020.↩︎

  27. Holland Michel, Arthur. “The Black Box, Unlocked: Predictability and Understandability in Military AI.” UNIDIR, September 22, 2020.↩︎

  28. Nathan Gabriel Wood. “Explainable AI in the Military Domain.” Ethics and Information Technology 26, no. 2 (April 16, 2024).↩︎

  29. Corn, Gary. “Symposium on Military AI and the Law of Armed Conflict: De-Anthropomorphizing Artificial Intelligence – Grounding Notions of Accountability in Reality.” Opinio Juris, April 5, 2024.↩︎

  30. Allen, Greg. “Understanding AI Technology.” apps.dtic.mil. JAIC, April 1, 2020.↩︎

  31. Madhavan, Adya. “#92 the Beginning of China’s AI Command?” Substack.com. Technopolitik, July 7, 2024.↩︎

  32. Deloitte. “Using AI in Predictive Maintenance.” Deloitte United States, 2023.↩︎

  33. Macdonald, Norine, and George Howell. “Killing Me Softly Competition in Artificial Intelligence and Unmanned Aerial Vehicles,” n.d.↩︎

  34. Evans, Scarlett. “AI-Powered Robot Dogs Tested to Find Explosive Devices.” Urgentcomm.com, November 27, 2023.↩︎

  35. Lin-Greenberg, Erik. “Wrestling with Killer Robots: The Benefits and Challenges of Artificial Intelligence for National Security.” MIT Case Studies in Social and Ethical Responsibilities of Computing, August 10, 2021.↩︎

  36. Morgan, Forrest E., Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman. “Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World.” RAND Corporation, April 28, 2020.↩︎

  37. Boulanin, Vincent, Lora Saalman, Fei Su, Petr Topychkanov, and Moa Peldan Carlsson. “Artificial Intelligence, Strategic Stability and Nuclear Risk.” SIPRI, June 1, 2020.↩︎

  38. Morgan, Forrest, Benjamin Boudreaux, Andrew Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman. “Military Applications of Artificial Intelligence Ethical Concerns in an Uncertain World,” 2020.↩︎

  39. Cowen, Michael B., and Rick Williams. “Is Human-On-The-Loop the Best Answer for Rapid Relevant Responses? - Joint Air Power Competence Centre.” japcc, May 13, 2021.↩︎

  40. The Economist. “How Ukraine Uses Cheap AI-Guided Drones to Deadly Effect against Russia.” The Economist, December 2, 2024.↩︎

  41. DiNardo, Georgina. “Artificial Intelligence Flies XQ-58A Valkyrie Drone.” Defense News, August 3, 2023.↩︎

  42. Hadley, Greg. “In F-16 Dogfight, AI and Human Pilots Are ‘Roughly an Even Fight,’ Says Kendall.” Air & Space Forces Magazine, May 8, 2024.↩︎

  43. Martin, Bradley, Danielle C. Tarraf, Thomas C. Whitmore, Jacob DeWeese, Cedric Kenney, Jon Schmid, and Paul DeLuca. “Advancing Autonomous Systems: An Analysis of Current and Future Technology for Unmanned Maritime Vehicles.” www.rand.org, 2019.↩︎

  44. GOV.UK. “AUKUS Takes Another Step Forward with Real-Time AI Trials,” August 8, 2024.↩︎

  45. General Dynamics UK. “Multi-Utility Tactical Transport (MUTT),” n.d.↩︎

  46. Abraham, Yuval. “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza.” +972 Magazine, April 3, 2024.↩︎

  47. Abraham, Yuval. “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza.” +972 Magazine, April 3, 2024.↩︎

  48. Washington Post. “The next U.S. Battle Tank Could Use AI to Identify Targets.” October 12, 2022.↩︎

  49. King, Anthony. “Digital Targeting: Artificial Intelligence, Data, and Military Intelligence.” Journal of Global Security Studies 9, no. 2 (March 12, 2024).↩︎

  50. Kallenborn, Zachary. “Was a Flying Killer Robot Used in Libya? Quite Possibly.” Bulletin of the Atomic Scientists, May 20, 2021.↩︎

  51. Pandit, Rajat. “Army Steps up Deployment of AI-Powered Surveillance Systems on Borders with China & Pakistan.” The Times of India, August 7, 2022.↩︎

  52. Sharma, Ritu. “China Unveils World’s 1st Virtual Military Commander; Participates in Computer Wargames to Prepare for Future.” EURASIAN TIMES, June 18, 2024.↩︎

  53. hurriyetdailynews.com. “Türkiye’s AI Forest Fire System Wins UN Award.” Hürriyet Daily News, May 12, 2024.↩︎

  54. Ben David, Ricky. “Israel’s Aidoc Raises $110m for AI Tech That Reads Imaging Scans.” Timesofisrael.com, 2018.↩︎

  55. Tyler, Samantha, Matthew Olis, Nicole Aust, Love Patel, Leah Simon, Catherine Triantafyllidis, Vijay Patel, et al. “Use of Artificial Intelligence in Triage in Hospital Emergency Departments: A Scoping Review.” Cureus 16, no. 5 (May 8, 2024).↩︎

  56. Stallings, Jonathan D, Swamy Laxminarayan, Chenggang Yu, Adam Kapela, Andrew Frock, P Andrew, Andrew Reisner, and Jaques Reifman. “APPRAISE-HRI: An Artificial Intelligence Algorithm for Triage of Hemorrhage Casualties.” Shock Publish Ahead of Print (June 20, 2023).↩︎

  57. Bhinder, Bhavneet, Coryandar Gilvary, Neel S. Madhukar, and Olivier Elemento. “Artificial Intelligence in Cancer Research and Precision Medicine.” Cancer Discovery 11, no. 4 (April 2021): 900–915.↩︎

  58. Office, U. S. Government Accountability. “The F-35: ALIS in the Looking-Glass.” www.gao.gov, n.d.↩︎

  59. Kelly, Tim. “Japan’s Military to Spend on AI, Automation, Perks to Combat Recruitment Crisis.” Reuters, August 30, 2024.↩︎

  60. Pozzi, James. “British Airways Taps into Predictive Maintenance for Fleet Health | Aviation Week Network.” aviationweek.com, February 16, 2024.↩︎

  61. Media, OpenSystems. “British Semi-Autonomous Logistic Convoys Tested in U.S. - Military Embedded Systems.” Militaryembedded.com, 2019.↩︎

  62. Christianson, John, Sean Monaghan, and Di Cooke. “AUKUS Pillar Two: Advancing the Capabilities of the United States, United Kingdom, and Australia.” Www.csis.org, July 10, 2023.↩︎

  63. aws.amazon.com. “Artificial Intelligence | Amazon Supply Chain and Logistics,” n.d.↩︎

  64. The Economic Times. “Tesla Autopilot: What Is It and How Does It Work? Here’s Everything You May Want to Know.” The Economic Times, July 8, 2023.↩︎

  65. Hughes, Alyssa. “AI-Powered Microgrids Can Facilitate Energy Resilience and Equity.” Microsoft Research, November 2024.↩︎

  66. Boston Dynamics. “Spot.” Boston Dynamics, 2023.↩︎

  67. Scarr, Simon. “How Israel’s Bulky Pager Fooled Hezbollah.” Reuters, October 16, 2024.↩︎