AI and the decision to go to war: future risks and opportunities

ABSTRACT This short article introduces our Special Issue on ‘Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making'. We begin by stepping back and briefly commenting on the current military AI landscape. We then turn to the hitherto largely neglected prospect of AI-driven systems influencing state-level decision making on the resort to force. Although such systems already have a limited and indirect impact on decisions to initiate war, we contend that they will increasingly influence such deliberations in more direct ways – either in the context of automated self-defence or through decision-support systems that inform human deliberations. Citing the steady proliferation of AI-enabled systems in other realms of decision making, combined with the perceived need to match the capabilities of potential adversaries in what has aptly been described as an AI ‘global arms race', we argue that this development is inevitable, will likely occur in the near future, and promises to be highly consequential. After surveying four thematic ‘complications’ that we associate with this anticipated development, we preview the twelve diverse, multidisciplinary, and often provocative articles that constitute this Special Issue. Each engages with one of our four complications and addresses a significant risk or benefit of AI-driven technologies infiltrating the decision to wage war.

What if intelligent machines determined whether states engaged in war? In one sense, this is merely the stuff of science fiction, or long-term speculation about how future technologies will evolve, surpass our capabilities, and take control. In another, more nuanced sense, however, this is a highly plausible reality: compatible with the technologies that we have now, likely to be realised in some form in the near future (given observable developments in other spheres), and a prospect that we are willingly, incrementally bringing about.
This Special Issue addresses the risks and opportunities of the eminently conceivable prospect of AI intervening in decision making on the resort to force. Here we will step back and very briefly comment on the current military AI landscape before turning to this largely neglected domain of anticipated AI-enabled influence. We will then highlight four thematic 'complications' that we associate with the infiltration of AI-enabled technologies into the decision to wage war, before previewing the twelve diverse, multidisciplinary, and often provocative contributions that variously engage with them.

Current context
Artificial intelligence (AI) – the evolving capability of machines to imitate aspects of intelligent human behaviour – is already radically changing organised violence. AI has been, and is increasingly being, integrated into a wide range of military functions and capabilities. Official documentation released around the Australia, United Kingdom (UK), and United States (US) 'AUKUS' agreement, for example, has outlined a growing role for AI across advanced military capabilities, including a commitment to 'Resilient and Autonomous Artificial Intelligence Technologies (RAAIT)', under which '[t]he AUKUS partners are delivering artificial intelligence algorithms and machine learning to enhance force protection, precision targeting, and intelligence, surveillance, and reconnaissance' (AUKUS Defence Ministers 2023). Not only does this proliferation of AI across military capabilities have a profound impact on the utility, accuracy, lethality, and autonomy of weapon systems (see, for example, Scharre 2024), but the intersection of AI and advanced weapons systems is also thought to have serious implications for the military balance of power (see, as one example, Ackerman and Stavridis 2024). Indeed, AI is now seen as essential to the quest for military advantage and figures centrally in current thinking about the preservation of military superiority. As one analysis has explained, AI 'will enable the United States to field better weapons, make better decisions in battle, and unleash better tactics' (Buchanan and Miller 2017, 21). In the words of former US Undersecretary of Defense Michele Flournoy (2023), 'AI is beginning to reshape US national security'.
This growing importance and increasing exploitation of military AI has been accompanied by concern about the potential risks and adverse consequences that might result from its use. A world of AI-infused decision making and concomitant compressed timelines, coupled with intelligent automated – and even fully autonomous – weapons, brings dangers as well as military advantages. This has prompted attempts to develop rules and limits that could constrain at least some military applications of AI and minimise dangerous or undesirable outcomes. Such regulation is not merely an academic concern. There is ample evidence that governments too are growing concerned about the broad risks associated with the use of AI in the military sphere. An international summit on 'Responsible AI in the Military Domain', held in The Hague in February 2023, issued a 'Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy' that had been endorsed by more than 50 states as of February 2024 (US Department of State 2023). It comprises a list of desirable measures intended to promote the safe and prudent utilisation of military AI, including the proposition (relevant to a concern addressed throughout this Special Issue) that its use should take place 'within a responsible human chain of command and control' (US Department of State 2023).
In the context of announcing that Australia would join this Declaration in November 2023, Australian Defence Minister, the Hon Richard Marles MP, reiterated Australia's commitment to 'engage actively in the international agenda towards the responsible research, development, deployment and application of AI' (Australian Government 2023). Moreover, in a joint statement following their October 2023 meeting in Washington, DC, US President Joe Biden and Australian Prime Minister Anthony Albanese affirmed that 'States should take appropriate measures to ensure the responsible development, deployment, and use of their military AI capabilities, including those enabling autonomous functions and systems' (Prime Minister of Australia 2023). Of course, in order to take such measures meaningfully, it is essential to understand – and anticipate – the full range of ways in which AI-enabled systems will be employed in a military context. It is the purpose of this Special Issue to enhance this understanding by addressing a hitherto largely unexamined and (we suggest) emerging use of AI-driven systems.
Finally, we would be remiss not to acknowledge the recent attention – and concern – generated by the spectre of military AI alongside nuclear weapons. Indicative of the international apprehension prompted by this potential coupling, there was widely reported speculation leading up to the November 2023 bilateral summit in San Francisco between Presidents Joe Biden and Xi Jinping that the US and China were 'poised to pledge a ban on the use of artificial intelligence … in the control and deployment of nuclear warheads' (South China Morning Post 2023; see also Porter 2023; Saballa 2023).
No such pledge was made on the day, but, following the summit, both governments appeared to respond to the anticipation surrounding the predicted announcement with statements on the need for further talks between the US and China to discuss the risks of advanced AI systems (Lewis 2023; Ministry of Foreign Affairs of the People's Republic of China 2023; White House 2023a; White House 2023b). The utilisation of AI in the realm of nuclear weapons has prompted considerable analysis and apprehension for the obvious reason that the potential stakes are so enormous (see, for example, Kaur 2024; Shaw 2023; and Parke 2023). As Depp and Scharre (2024) starkly observe, '[i]mproperly used, AI in nuclear operations could have world-ending effects'. In its 2022 Nuclear Posture Review, the US proclaimed as policy that, without exception, humans will remain in the loop in any decisions involving the use of nuclear weapons (US Department of Defense 2022, 13). A basic worry is that AI could be integrated into nuclear command and control in ways that automate response capacities, possibly reinforcing deterrence but also raising the risks of unwanted escalation or loss of control. This concern that the introduction of AI can create new vulnerabilities for nuclear command and control has led to calls for norms and guidelines intended to limit the nuclear instability and threats to nuclear deterrence that could ensue (Avin and Amadae 2019).
In sum, the effect of AI on the performance of weapon systems, the conduct of military operations, and the vulnerabilities and strengths of military forces is of great importance. These developments have serious (if still uncertain) implications for the future of war, and have gripped the attention of academics, state leaders, and the general public alike. However, the intellectual ferment and policy deliberations inspired by the proliferation of AI-driven military tools have focused largely on the ways in which force will be employed (and transformed) as a result, rather than on the question of how this constellation of emerging technologies is likely to inform (and potentially transform) decision making on whether and when states engage in war. It is to this latter question that we turn.

A neglected prospect
The focus of academics and policy makers has been overwhelmingly directed towards the use of AI-enabled systems in the conduct of war. These include, prominently, the emerging reality of 'lethal autonomous weapons systems' ('LAWS' – or, more colloquially and provocatively, 'killer robots') and decision-support systems in the form of algorithms that rely on big data analytics and machine learning to recommend targets in the context of drone strikes and bombings (such as those by Israel in Gaza that have generated recent attention (Abraham 2024; Davies, McKernan, and Sabbagh 2023)). By contrast, we seek to address the relatively neglected prospect of employing AI-enabled tools at various stages and levels of deliberation over the resort to war.1 In other words, our focus in this Special Issue – and in the broader project from which it emerges – takes us from AI on the battlefield to AI in the war-room. We move from the decisions of soldiers involved in selecting and engaging targets (as well as authorising and overseeing the selection and engagement of targets by intelligent machines) to state-level decision making on the very initiation of war and military interventions; from jus in bello to jus ad bellum considerations (in the language of the just war tradition); and from actions adjudicated by international humanitarian law to actions constrained and condoned by the United Nations (UN) Charter's prohibition on the resort to force and its explicit exceptions.
This shift in focus, at this particular point in time, is crucial. It anticipates what we believe is an inevitable change in how states will arrive at the consequential decision to go to war. We base our prediction that AI will infiltrate resort-to-force decision making in part on the steady proliferation of AI-driven systems – including predictive, machine-learning algorithms – to aid decision making in a host of other realms. Such systems are relied upon for everything from recruitment, insurance decisions, medical diagnostics in hospitals, and the allocation of welfare, to policing practices, support in the cockpits of commercial airplanes, and judgements on the likelihood of recidivism. In short, human decision making is becoming more and more reliant on the assistance of AI. In addition, the need to match the capabilities of potential adversaries in the increasingly high-speed, always high-stakes context of war fuels what has aptly been called the latest 'global arms race' (Simonite 2017). Although AI-enabled systems currently have only a limited and indirect role in state-level decision making on the resort to force, we are convinced that they will progressively influence such deliberations in more direct ways. By examining the prospect of AI gradually intervening in resort-to-force decision making now, it is possible to identify the benefits and risks of using these technologies while there is still time to find ways, respectively, to enhance or to mitigate them.
The gravity of these considerations is difficult to overstate. As Ashley Deeks, Noam Lubell, and Daragh Murray (2019, 16) have provocatively posed: '[i]f the possibility that a machine might be given the power to "decide" to kill a single enemy soldier is fraught with ethical and legal debates, what are we to make of the possibility that a machine could ultimately determine whether a nation goes to war, and thus impact thousands or millions of lives?'
Of course, an intelligent machine 'determining' whether a state engages in war could mean different things. Bracketing science fiction scenarios and long-term futuristic speculation, there are two ways that current AI-driven systems could conceivably impact resort-to-force decision making. First, AI-enabled decision-support systems could be used to inform deliberations on whether to engage in war. In such a scenario, human decision makers would draw on algorithmic recommendations and predictions to reach decisions on the resort to force. This is already beginning to happen, at least indirectly, with respect to the AI-aided collection and analysis of intelligence, which makes its way up organisational hierarchies and chains of command. Alternatively, AI-driven systems could themselves calculate and implement decisions on the resort to force, such as, conceivably, in the context of defence against cyber attacks. Moreover, worrying suggestions of an AI-driven automated nuclear response to a first strike have also been mooted – and threatened – particularly in the case of a decapitation attack. (It has been reported that the Soviet Union bequeathed to Russia a 'dead hand' launch system, and that it is still in place, so this is not an unthinkable possibility. The Russian 'Perimeter' system is described in Depp and Scharre 2024; see also Andersen 2023, 12.) In such cases, a course of action would be determined and implemented by an AI-enabled autonomous system – with or without human oversight. Both types of scenario entail foreseeable (and likely near-future) developments that demand immediate attention.

Four complications
For all the potential benefits of these AI-driven systems – which are variously able to analyse vast quantities of data, make recommendations and predictions by uncovering patterns in data that human decision makers cannot perceive, and respond to potential attacks with a speed and efficiency that we could not hope to match – challenges abound. The workshop that led to this Special Issue, 'Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making', held at the Australian National University (ANU), 29-30 June 2023, set out to address four thematic 'complications' that we proposed would accompany the integration of AI-enabled systems in state-level decision making on the resort to force.
Complication 1 relates to the displacement of human judgement in AI-driven resort-to-force decision making and possible implications for deterrence theory and the unintended escalation of conflict. When programmed to recommend – or independently calculate and implement – a response to a particular set of circumstances, intelligent machines will behave differently than human agents. This difference could challenge our understandings of deterrence. Current perceptions of a state's willingness to resort to force are based on assumptions of human judgement, resolve, and forbearance rather than machine-generated outputs. Moreover, AI-enabled systems delegated the task of independently responding to aggression in certain contexts would make and implement decisions at speeds impossible for human actors, thereby accelerating decision cycles. They would also seem likely to misinterpret human signalling (on the desire to de-escalate a conflict, for example). Both factors could contribute to inadvertent, and potentially catastrophic, escalations in the resort to force in scenarios where human decision makers would have exercised restraint (see, for example, Wong et al. 2020, chapters 7, 8).
Complication 2 highlights the possible implications of automation bias. Empirical studies show that individuals and teams that rely on AI-driven systems often experience 'automation bias', or the tendency to accept computer-generated outputs without question (Cummings 2006; 2012; Mosier and Fischer 2010; Mosier and Manzey 2019; Skitka, Mosier, and Burdick 1999). This tendency can make human decision makers less likely to use their own expertise and judgement to test machine-generated recommendations. Detrimental consequences of automation bias include the acceptance of error; the de-skilling – including 'moral deskilling' – of human actors (Vallor 2013); and, as one of us has argued in this collection and elsewhere, the promotion (alongside other factors) of 'misplaced responsibility' in war, or the dangerous misperception that intelligent machines can bear moral responsibility for what are necessarily human decisions and their outcomes (Erskine 2024a, 551-554; 2024b).
Complication 3 confronts algorithmic opacity and its implications for democratic and international legitimacy. Machine learning processes are frequently opaque and unpredictable. Those who are guided by them often do not comprehend how predictions and recommendations are reached, and do not grasp their limitations. This lack of transparency in much AI-driven decision making has already led to negative consequences across a range of contexts (Knight 2017; Pasquale 2016; 2017; Vogel et al. 2021). As a government's democratic and international legitimacy requires a compelling and accessible justification for the decision to resort to war, this lack of transparency poses grave concerns when machines inform, or independently calculate and implement, such courses of action.
Complication 4 addresses the likelihood that AI-enabled systems would exacerbate organisational decision-making pathologies. Studies in both International Relations (IR) and organisational theory reveal the existing complexities and 'pathologies' of organisational decision making (within IR, see, for example, Barnett and Finnemore 1999). AI-driven decision-support and automated systems intervening in these complex structures risk magnifying such problems. Their contribution to decisions at the national – and even intergovernmental – level could distort and disrupt strategic and operational decision-making processes and chains of command.
These proposed complications are explored in the twelve articles that follow, in the context of either automated self-defence or the use of AI-driven decision-support systems to inform human resort-to-force deliberations. The articles explore how best to approach these complications, with each identifying a risk or opportunity of using AI-enabled systems in one of these contexts, asking how the risk can be mitigated or the opportunity promoted, and, sometimes, suggesting that an ostensible 'complication' is overstated and in no need of redress.2

Contributions
Significantly (and perhaps unusually), this collective attempt to grasp the potential hazards and benefits of employing AI-driven systems to contribute to the decision to wage war draws on a range of disciplines. These interventions are variously made from the perspectives of political science, IR, law, computer science, philosophy, sociology, psychology, and mathematics.
The volume begins with Ashley Deeks (2024) anticipating that states will be increasingly tempted (given the prospect of hypersonic attacks) to allow AI-driven systems to make autonomous judgements on the initiation of force in particular cases. Observing that this use of autonomy would entail effectively 'delegating' consequential resort-to-force decision making to machines, Deeks raises crucial legal and normative questions about such 'machine delegation', turning to the US legal system to interrogate whether and how it could be justified. Benjamin Zala (2024) continues to examine this weighty possibility of dismissing humans from the resort-to-force decision-making 'loop' in certain circumstances by addressing the specific high-stakes scenario of using AI and machine learning in nuclear command and control systems. Zala warns of two routes by which AI-enabled systems would increase a state's incentive to strike first (with either nuclear or strategic non-nuclear weapons): automation in military deployment; and the introduction of AI-informed human decision making in relation to early-warning threat assessment. In both cases, he argues that a loss of human caution and forbearance would be pivotal. Marcus Holmes and Nicholas J. Wheeler (2024) intervene in the discussion with a very different approach to the potential role of AI in nuclear crisis management. While Zala's main focus is on the risks of AI-enabled systems in this context, Holmes and Wheeler turn their attention optimistically to the opportunities. Although they explicitly reject any notion that AI-driven systems should be allowed to operate nuclear command and control, they maintain that these systems could valuably enhance human decision making in such scenarios. Acknowledging that AI lacks emotional intelligence, they nevertheless provocatively propose that AI-enabled technologies offer opportunities to foster empathy, trust, and what they call 'security dilemma sensibility'.
Also addressing these seemingly more innocuous cases where AI-enabled tools supplement rather than supplant human decision making on the resort to force, Toni Erskine (2024b) argues that our interaction with such decision-support systems threatens to undermine our adherence to international norms of restraint in war. Erskine identifies two sources of this detrimental effect: our tendency to mistake AI-enabled tools for responsible agents in their own right ('the risk of misplaced responsibility'); and our misperception of the outputs produced by these tools ('the risk of predicted permissibility'). Each misstep, she argues, not only makes the initiation of war seem more permissible in particular cases, but also collaterally chips away at the hard-won international norms themselves. In line with Erskine's first proposed risk, Mitja Sienknecht (2024) is also concerned with how human-machine decision making on the resort to force complicates attributions of responsibility. Yet she raises a different problem: 'responsibility gaps' that arise when decisions are informed or made by AI-enabled systems to which responsibility cannot coherently be apportioned. In response, Sienknecht introduces the intriguing concept of 'proxy responsibility', which acknowledges the political, military, and economic structures that surround AI-influenced decision making on the resort to force and seeks to provide a pragmatic way of attributing responsibility for machine actions to human agents.
Frequently accompanying such complex questions of responsibility attribution are discussions of how the human actor is – and should be – situated in relation to AI-enabled systems when it comes to decision making. Indeed, questions of where, whether, and why humans should be in the war-initiation decision-making 'loop' alongside intelligent machines are returned to and debated throughout this volume – often amidst claims to the irreplaceable virtues and capacities of human actors. Jenny L. Davis (2024) offers a novel take on these debates – and an added stipulation to the common call for 'meaningful human control' – by focusing on the type of human actor that should be tasked with interpreting and implementing AI-driven outputs in resort-to-force deliberations (and other high-stakes scenarios). It is not enough simply to demand 'humans in the loop'. Rather, she argues that we need 'experts-in-the-loop', a conclusion that implies imperatives to employ, support, and provide ongoing professional training to human practitioners. Maurice Chiodo, Dennis Müller, and Mitja Sienknecht (2024) concur with Davis on the importance of training and educating human actors, but turn their attention to the education of AI developers. They begin with the assumption that responsible military AI development is needed in order to mitigate the sorts of risks of integrating AI technology into resort-to-force decision making identified by the other contributors. Focusing on the need to provide developers with clear training on ethical issues as a way of mitigating detrimental path dependencies that lead to such risks, they propose an original educational framework ('10 pillars of responsible AI development') and emphasise the need for AI developers to be trained in how AI systems will actually be integrated into military processes.
Highlighting a point alluded to by a number of contributors, Sarah Logan (2024) addresses the vital role that intelligence analysis plays in decision making on the resort to force. As AI becomes increasingly important to such analyses, she anticipates the dangers that accompany our reliance on large language models (LLMs). Specifically, she cautions that generative AI (or algorithms that can be used to create new content and that draw on LLMs) will exacerbate informational 'pathologies' with which intelligence analyses are already afflicted: 'information scarcity' and 'epistemic scarcity'. Explaining that these pathologies are compounded by the limited data available to train LLMs, she notes that Western governments, like Australia, face particularly detrimental constraints in accessing such data compared to authoritarian regimes such as China and Russia. Pivoting from Logan's incisive account of AI-enabled tools as flawed providers of information and curators of selective knowledge to AI-enabled tools as means of directly augmenting human cognitive capacities, Karina Vold (2024) returns more optimistically to the opportunities afforded by AI systems in resort-to-force decision making. Specifically, she extols the strategic military advantages that accompany what she calls 'human-AI cognitive teaming'. While acknowledging that becoming too reliant on AI-enabled systems carries risks for both individual users and broader society (as explored by other contributors), Vold valuably highlights the role that algorithmic decision-support systems can play in enhancing the otherwise limited human capacities particularly important for state-level resort-to-force decision making: inter alia, memory, attention and search functions, planning, communication, comprehension, quantitative and logical reasoning, navigation, and even (to return to Holmes and Wheeler's provocation) emotion and self-control.
Osonde A. Osoba (2024) examines the integration of AI into what he calls 'military decision-making ecosystems' using two analytic frames: an artefact-level analysis focused on the technical properties of individual AI systems, and a systems-level perspective aimed at highlighting the broader institutional implications of AI use. Referring to Vold's conception of AI-enabled cognitive enhancements, Osoba's artefact-level analysis highlights the potential positive impacts that AI integration can have in terms of increasing what he intriguingly describes as 'cognitive diversity' in decision-making processes. Osoba then argues that both states and their national security institutions tasked with resort-to-force decision making qualify as complex adaptive systems. Based on this identification, he draws on dynamics observed in other stable complex systems to offer some sceptical assessments of concerns surrounding human deskilling and algorithmic transparency.
Reminiscent of Zala's observation that intelligent machines lack the crucial human capacity to be moved (or, more aptly, constrained) by glimmers of doubt that would promote the exercise of caution in decision making on the resort to force, Neil Renic (2024) forewarns of the dulling of our 'tragic imagination' if we continue to allow machines to infiltrate this process. Renic compellingly argues that the 'speed, inflexibility, and false confidence' of AI-assisted decision making would risk fostering an insensitivity to what he identifies as the tragic qualities of violence – namely, its limits and unpredictability – as well as a denial of our own fallibility. As such, he maintains that some aspects of decision making must never be forfeited to AI-driven systems. Concluding the collection on a similarly cautionary note, Bianca Baggiarini (2024) examines the potential dangers of 'algorithmic reason' in the context of decision making on the resort to force.3 She presents a powerful case that the very technologies that promise both certainty and decision-making efficiency actually obscure what we can see and know through practices of 'invisibility, anonymity, and fragmentation'. Sharing Osoba's scepticism of calls for algorithmic transparency, which she sees as woefully misguided, Baggiarini concludes by expressing concern that AI-supported decision making on the resort to force is not compatible with democratic legitimacy.
The contributors to this Special Issue do not always agree. They reach different conclusions on the benefits or risks that will accompany states' anticipated reliance on AI-enabled systems in resort-to-force decision making. They differ on the degree of optimism or pessimism with which this development should be approached. Moreover, they focus on divergent points at which AI will infiltrate these deliberative processes and address a range of contexts in which this is likely to have an impact. Nevertheless, the articles in this collection speak to each other and share a commitment to understanding a consequential and (we suggest) inevitable change in decisions to wage war. Each article represents a process of learning from the diverse perspectives brought together as part of this important, ongoing conversation. We hope that this collection prompts engagement, reflection, debate, and further research.