Before algorithmic Armageddon: anticipating immediate risks to restraint when AI infiltrates decisions to wage war

ABSTRACT AI-enabled systems will steadily infiltrate resort-to-force decision making. This will likely include decision-support systems recruited to assist with crucial deliberations over the permissibility of waging war. Potential benefits abound in terms of enhancing individual and institutional capacities for cognition, analysis, and foresight. Yet, I argue that we have reason to worry. Our interaction with these systems – as citizens, political and military leaders, states, and formal organisations of states – would also court significant risks. Specifically, reliance on decision-support systems that employ machine-learning techniques would threaten to undermine our adherence to international norms of restraint in two distinct ways: (i) by creating the reassuring illusion that these AI-driven tools are able to replace us as responsible agents; and (ii) by inserting unwarranted certainty and singularity into complex jus ad bellum judgements. I will refer to these challenges as the 'risk of misplaced responsibility' and the 'risk of predicted permissibility', respectively. If unaddressed, each proposed risk would make the initiation of war appear more permissible in particular cases and, collaterally, contribute to the erosion of hard-won international norms of restraint.


Introduction
The steady proliferation of artificial intelligence (AI) is changing the myriad practices that define international politics. This includes radical changes to the practice of war (Erskine and Miller 2024). While already profoundly affecting how battles are fought, AI-driven systems will also increasingly influence the prior, consequential step of determining if and when a state engages in organised violence. In short, AI will infiltrate the decision to wage war. It is essential to consider any risks that would accompany this development – including to the hard-won international norms and accompanying moral responsibilities that impose limits on the resort to force.
Unfortunately, our collective gaze shifts too easily to fanciful speculation about future scenarios and conceptions of AI with capacities yet unrealised (and perhaps unrealisable). The spectre of future iterations of AI surpassing human capacities (the hypothetical 'singularity') and constituting a proposed existential threat has received a surge in attention recently (e.g. Center for AI Safety 2023; Kleinman 2023; Roose 2023) – perhaps prompted by the startling emergence of human-like characteristics in language-generative models like ChatGPT. When it comes to war, such apocalyptic musings include warnings that future AI systems that are delegated decision-making powers on the resort to force might evolve to have their own intentions and purposes, with catastrophic consequences for the human entities that they may come to see as obstacles to their flourishing. There is nothing wrong with engaging in such speculation – unless it means that we overlook and neglect immediate risks, associated with current technologies, which warrant our attention now. Such immediate risks of AI in resort-to-force decision making will be my focus here.
Specifically, I will identify and briefly explore two distinct ways that our anticipated reliance on AI-enabled systems in the decision to go to war could undermine our collective commitment to exercise restraint. In contrast to some contributions to this special issue (e.g. Deeks 2024; Zala 2024), I will not consider circumstances in which AI-enabled systems would be granted the decision-making power to initiate war on behalf of the state (though I acknowledge that risks to restraint – in the unintended escalation of violence, for example – have been compellingly forewarned for such scenarios). Rather, I will consider those seemingly more innocuous instances in which human decision makers (acting individually and in formal organisations) would retain the role of arbiters of the resort to force and simply be supported by AI-driven tools.
I will begin by briefly introducing the ways that AI-enabled systems could contribute to – and in some limited respects are already contributing to – resort-to-force decision making, including in this strictly supplementary, supporting role. Next, I will highlight the international norms that variously license and limit the decision to wage war and say something about the actors that we reasonably expect to adhere to these norms and discharge corresponding responsibilities of restraint. I will then propose two ways that reliance on AI-enabled decision-support systems could conceivably chip away at this existing structure of restraint, argue that each risk warrants further attention, and raise questions for future study.
I will refer to these challenges as the 'risk of misplaced responsibility' and the 'risk of predicted permissibility', respectively. Importantly, neither proposed risk is grounded in an inherent danger of the emerging technology per se. Rather, each arises from our interaction with AI-enabled systems and the ways in which this interaction affects how we (as citizens, as political and military leaders, as states, and as formal organisations of states) both understand our roles and responsibilities and navigate adherence to international norms.
AI, machine learning, and their infiltration into resort-to-force decision making

AI is simply the evolving capability of machines to imitate aspects of intelligent human behaviour. As a general label, it also tends to be used for the various technologies that display this capability. A sub-set of AI technologies use a range of techniques that fall within a category called 'machine learning'. Machine learning has been pithily described as 'the art of making educated guesses based on data' (Marcus and Davis 2019, 45) and is particularly important in the context of this discussion. It relies on a specific type of algorithm 'developed via automated statistical inference procedures over large data sets' (Davis, Williams, and Yang 2021; see also Barocas, Hardt, and Narayanan 2023; Kearns and Roth 2019). Trained on data sets through various possible techniques, these algorithms create self-learning models that allow them to then perform tasks (such as classifying information and predicting outcomes) when presented with new data.1

When it comes to current technologies and resort-to-force decision making, AI-enabled systems could potentially be used in two distinct ways. They could conceivably be employed to independently calculate and carry out courses of action in specific contexts that would constitute the initiation of organised violence, such as defence against cyberattacks or a counter-strike in response to a nuclear attack. Alternatively, AI-enabled systems could be used to inform human decision making on the resort to force. In the former case, these systems would produce autonomous responses that would, at least temporarily, supplant human decision making on war initiation. In the latter case, human agents (acting either individually or as corporate deliberative bodies) would draw on algorithmic analyses, recommendations, and predictions to reach decisions on the resort to force. Their decision making would thereby be supplemented – and hopefully enhanced – by what are referred to as AI-enabled 'decision-support systems'.
My focus here is on the latter. In other words, I am not considering systems that would serve as 'AI generals',2 autonomous proxies for presidents who have relinquished the weighty task of contemplating a nuclear response, or algorithmic substitutes for executive decision-making bodies. Rather, I am interested in AI-driven systems that would metaphorically whisper in the ears of military and political leaders and make appeals and recommendations to executive bodies – systems that would, ideally, function as algorithmic advisors and data-driven deliberative partners.
The potential advantages of employing AI-enabled systems as tools to support resort-to-force decision making are multiple. Machine learning techniques can enhance our individual and institutional decision-making capacities by analysing huge quantities of data quickly, uncovering patterns of correlation in datasets that are beyond human cognition, estimating risks, and predicting the possible outcomes of actions (or inactions). Of course, as Ross Andersen (2023, 12) quips, '[n]o one is inviting AI to formulate grand strategy, or join a meeting of the Joint Chiefs of Staff'. Not yet anyway. Nevertheless, the use of machine learning algorithms to advise governments on resort-to-force decisions is unlikely to be far away (Deeks, Lubell, and Murray 2019, 2, 7; Nelson and Epstein 2022; U.S. Government Accountability Office 2022, 1). Moreover, and even more immediately, there is evidence that AI-enabled tools are already contributing incrementally, and at different levels, to states' decision making on the resort to force through intelligence collection and analyses (Deeks, Lubell, and Murray 2019, 2, 6; see also Logan 2024; Suchman 2023). According to the United Kingdom's 2022 Defence Artificial Intelligence Strategy (UK Ministry of Defence 2022), 'AI is increasingly being used by adversaries across the full spectrum of military capabilities, including for situational awareness, optimised logistics, operational analysis and wargaming and for decision support at tactical, operational and strategic levels.'3 Both this future prospect and existing influence will necessarily contribute to one significant aspect of resort-to-force deliberation: navigating appropriate action with respect to international norms of restraint.
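To make the 'train, then predict on new data' workflow described in this section concrete, the following minimal sketch (not drawn from the article; the data and variable names are invented purely for illustration) shows a model fitted by statistical inference over past cases and then asked to classify a new, unseen case.

```python
# A minimal, illustrative sketch of machine learning as 'educated guessing from data':
# all numbers below are hypothetical and carry no substantive meaning.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical 'training data': each row describes a past case with two numerical
# indicators; each label records the outcome that was observed (0 or 1).
X_train = np.array([[0.2, 1.0], [0.4, 0.8], [0.9, 0.3], [0.7, 0.1]])
y_train = np.array([0, 0, 1, 1])

model = LogisticRegression()
model.fit(X_train, y_train)            # automated statistical inference over the data set

# Presented with a new case, the model returns a classification and a probability,
# not a considered judgement of what should be done.
new_case = np.array([[0.6, 0.4]])
print(model.predict(new_case))         # predicted class for the new case
print(model.predict_proba(new_case))   # estimated class probabilities for the new case
```

The point of the sketch is simply that the output is an inference from patterns in prior data; everything that follows in this article concerns how such outputs are perceived and used.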
International jus ad bellum norms and 'moral agents of restraint'

International norms are widely accepted, internalised principles that embody established codes of what actors should do, or refrain from doing, in relation to particular practices.4 They express powerful expectations that both compel and constrain the behaviour of actors in world politics and are retrospectively invoked to commend or condemn conduct, including in support of sanctions when they have been violated. In short, international norms have prescriptive and evaluative force. A prominent and powerful set of international norms are represented by the 'just war tradition', an evolving consensus on principles that dictate what constitutes just and unjust behaviour in the context of organised violence. Refined over centuries and codified in international law, these principles outline what is understood to be morally permissible, prohibited, and required in war. Although they allow both engagement in war and conduct within it to be justified if particular conditions are met, they also place a heavy burden on the participants (and potential participants) in armed conflicts to exercise restraint. While not universally adhered to, these international norms of restraint nevertheless have a profound effect on how states act, justify their actions, and perceive themselves and others. Recognition of these norms is an important aspect of the relationship between states and informs whether states' actions are viewed to be legitimate.
Just war principles are conventionally organised into two categories: jus ad bellum principles, which relate to the resort to organised violence, and jus in bello principles, directed at conduct within it.5 The former, those principles invoked to assess the permissibility of the overall war, are crucial for the discussion here. They play a significant role in the initial decision of whether to engage in organised violence. Jus ad bellum principles dictate conditions that must be met for a war to be deemed permissible, including: that it be a last resort (meaning that its legitimate objectives cannot be realised by other less harmful means); that it meet the standards of proportionality (in that the good to be achieved by the war as a whole outweigh the harm that it will cause); that there be a reasonable chance of success (ensuring that a war is not knowingly fought in vain); and, most prominently, that there be a just cause for waging war (narrowly defined as individual or collective defence of the state against aggression, but increasingly extended to include the protection of vulnerable populations in other states from mass atrocity crimes).6 Implicit in these conditions are demanding responsibilities to exercise restraint – responsibilities to pause before the first missile is launched or troops are deployed, to interrogate, deliberate, and determine whether going to war is permissible, and to act and exercise forbearance accordingly.
Elsewhere, I have coined the label 'moral agents of restraint' for those actors in world politics that we can reasonably expect to discharge these responsibilities (Erskine 2024, 550-551). This label refers to actors that we not only recognise as moral agents, or duty bearers (because they possess the capacities for understanding and reflecting on such requirements and acting in such a way to conform to them), but also that have some role or influence in the decisions and actions related to either the resort to organised violence or its conduct. When it comes to jus ad bellum responsibilities, relevant moral agents of restraint include state leaders and those high-ranking political and military officials who contribute directly to decision making on the resort to force. Moral agents of restraint also include citizens within democracies who vote for – and censure – the leaders who wage war in their name. Moreover, as I will take as given that formal organisations are moral agents in their own right, with responsibilities not reducible to their members,7 jus ad bellum moral agents of restraint must also include states, like Australia, and intergovernmental organisations (IGOs) such as the North Atlantic Treaty Organization (NATO) and the United Nations (UN).
For the purposes of this discussion, there are two important points to be drawn from this brief overview of jus ad bellum norms, corresponding responsibilities of restraint, and the actors who can reasonably be expected to discharge them. First, AI-enabled decision-support systems would seem potential assets when it comes to deliberation over whether the resort to force is permissible. After all, determining whether waging war in particular circumstances complies with jus ad bellum norms involves not only complex analyses of current threats (in identifying whether there is a just cause) and possible courses of action in response (in ascertaining whether initiating war constitutes a last resort). These deliberations also invariably involve ex ante judgements about the consequences of initiating war (in terms of judging its overall proportionality, or how much harm would result compared to not acting; assessing the likelihood of meeting one's legitimate objectives; and evaluating the prospect of options short of war being successful). Deliberations over the permissibility of waging war can also involve ex ante appraisals of future aggressive actions by other states and non-state actors (in considering the more controversial category of anticipatory self-defence). With respect to these latter prospective assessments, predictive analyses of key strategic variables – such as the consequences of inaction, forecasted civilian casualties, estimated mission cost, and anticipated threats – are fundamental. In sum, the potential benefits of AI-driven systems, particularly those that employ predictive machine learning techniques, are eminently apparent when it comes to jus ad bellum deliberations.
Second, these AI-driven systems, for all their computational capabilities and predictive potential, lack the specific capacities that would allow them to qualify as moral agents of restraint. AI-enabled systems can aid the decision making of individual human and institutional actors (and potentially independently implement courses of action based on their own calculations), but they are not the sorts of entities that are capable of specific types of reasoning. They do not have capacities for understanding and reflecting on their actions and the probable outcomes of their actions, for evaluating their reasons for adopting a particular course of action, or for acting on the basis of this deliberation and self-reflection. In short, they lack 'reflexive autonomy' (Erskine 2024, 544-545; see also Davis 2024). We cannot reasonably expect them to discharge responsibilities of restraint. Such moral responsibilities – and blame when they are derogated from – remain with the individual and institutional agents that would rely on these systems to augment their own capacities.
As such, we not only need to make a clear distinction between our AI-enabled tools and the moral agents whose capacities they would hopefully enhance, but also pay close attention to how interaction with these tools might inadvertently affect the decision making of the individual and institutional actors that we rely on to discharge responsibilities of restraint. In what follows, I will suggest that we have reason to worry that AI-driven decision-support systems employed to aid deliberation over the resort to force would cause problems for realising our collective commitment to restraint in two distinct ways: (i) by creating the reassuring illusion that they are able to replace us as responsible agents; and (ii) by inserting unwarranted certainty and singularity into complex moral judgements.8 Both possibilities will be offered as calls to further consideration and research. I will address each in turn.

The risk of misplaced responsibility and the erosion of norms of restraint
We have a pernicious inclination to assume that our AI-enabled tools possess capacities – and a corresponding status – that they do not. The danger is that we may then see our responsibilities as somehow diminished, or forfeited altogether, to our AI-enabled tools in the context of some decisions and actions. I refer to this risk as 'misplaced responsibility'.9 Four factors contribute to this risk, which casts a shadow over the prospect of AI-enabled decision-support systems being employed to aid deliberation over the resort to force.
First, we have a tendency to wildly (and wishfully) misattribute characteristics to AI-enabled systems. We see what we want to see, and what we can best understand given our own experience, too often imagining sophisticated reproductions of ourselves in systems that crudely imitate aspects of our behaviour. This misattribution is readily apparent in the public response to new language-generative models (such as ChatGPT) that use predictive machine-learning techniques to create text that sounds plausible (and human). These algorithms simply use statistical inference to string symbols (words) together, producing a probabilistic output to which we then give meaning. Yet, users often assume that these algorithms are 'understanding' and 'reflecting' upon what they are relaying. In other words, we are inclined to attribute to machines that merely mimic intelligent human behaviour exactly those capacities that would define them as moral agents in their own right. Referring to just such tendencies, Mary L. Cummings (2006, 28) has observed that an automated decision-support system can be viewed 'as an independent agent capable of wilful action'. This is a dangerous misperception – particularly if such a system were to produce recommendations and predictions related to the initiation of war that were thereby granted (unwarranted) added weight as considered judgements of what should be done.
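A toy sketch may help to illustrate the point about stringing symbols together (it is not a description of any actual language model; the word-probability table is invented): each next word is sampled from learned probabilities, and any 'meaning' is supplied by the reader, not the procedure.

```python
# Invented, schematic illustration of probabilistic text generation:
# the procedure selects words by statistical inference alone, without
# understanding or reflecting on what the resulting sentence asserts.
import random

# Hypothetical next-word probabilities, as if estimated from a text corpus.
next_word_probs = {
    "the": {"threat": 0.5, "state": 0.3, "response": 0.2},
    "threat": {"is": 0.6, "remains": 0.4},
    "is": {"imminent": 0.7, "exaggerated": 0.3},
}

def continue_text(word, steps=3):
    words = [word]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # The next word is drawn at random, weighted by its estimated probability.
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(continue_text("the"))  # e.g. 'the threat is imminent' - plausible-sounding, but not 'understood'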
A second factor that contributes to misplaced responsibility when we rely on AI-enabled tools to inform decision making on the resort to force is 'automation bias'. This is, simply, the tendency 'to disregard or not search for contradictory information in light of a computer-generated solution' (Cummings 2006, 25; see also Mosier and Manzey 2019; Skitka, Mosier, and Burdick 1999). We unquestioningly accept that the computer is correct. This tendency undermines the purpose of AI-driven decision-support systems; namely, to supplement and enhance our decision making. The propounded value of having a 'human in the loop' is lost if the human decision maker effectively removes herself from the loop by taking for granted the accuracy and relevance of particular machine-generated outputs. Such deference could have a profound impact on resort-to-force decision making, where any output related to a particular strategic variable must be carefully considered and questioned in relation to a broader context of norms, values, and acknowledged uncertainty. Worryingly, research on automation bias has demonstrated that it increases in high-pressure, time-sensitive contexts (Cummings 2015) – precisely those conditions under which decisions are made on the initiation of war.
When considering the effects of automation bias, it is important to recall that jus ad bellum moral agents of restraint include institutional actors, such as states and IGOs. Some research has been done to identify automation bias in teams (Mosier et al. 2001; Mosier and Fischer 2010; Skitka et al. 2000), but what about the effects of automation bias on organisational decision making? Deference to machine-generated outputs will likely affect individuals (and teams) at multiple points and levels in organisational decision-making practices and procedures related to the resort to force. After all, the infiltration by AI-enabled tools that I am envisaging (and that has already commenced) does not occur at only one location, at one point in time, in the context of a single decision. This reality promises a compounding effect when it comes to automation bias as analyses and recommendations travel up hierarchical structures and chains of command. Moreover, it is theoretically conceivable that automation bias could also have an effect on genuinely corporate decision making at the institutional level (in a way not reducible to the biases that affect the individual constituents of the organisation) as AI-driven tools become integrated into the decision-making structures and procedures of the organisation itself.10 As automation bias would tempt moral agents of restraint to disengage from those challenging decisions on the resort to force for which they would nevertheless remain answerable, both possibilities could have far-reaching implications.
Third, the very machine learning processes that are likely to be drawn on to support the types of ex ante judgements outlined above are frequently opaque and unpredictable. This has the potential to exacerbate the two tendencies just cited. This opacity can bolster the misperceptions about the capacities of algorithmic systems to which we are prone. Moreover, it means that recommendations and predictions by AI-enabled decision-support systems can often be neither audited nor explained by those who are guided by them, thereby reinforcing our tendency toward automation bias.
Fourth and finally, we (imperfect human actors) have a tendency to disown responsibility, especially in challenging circumstances. The decision to go to war is fraught and consequential. For a moral agent of restraint tasked with jus ad bellum considerations, the fiction that decisions on the resort to force could actually be made elsewhere, and responsibility for their outcomes could likewise be outsourced, would be reassuring and likely readily embraced. Despite our AI-enabled military tools not qualifying as moral agents of restraint, and the individual human and institutional actors listed above retaining jus ad bellum responsibilities, synthetic scapegoats for high-stakes decisions would offer a welcome respite to those charged with the weighty task of contemplating war.
Through the combination of these four factors, there is a risk that individual and institutional actors supported by sophisticated AI-driven decision-support systems would believe that they were 'off the moral hook' when it comes to decision making on the resort to force. In short, they would adjust their understanding of their own roles and responsibilities. To be clear, the problem that I am highlighting is not the (unavoidable) human-machine 'teaming' that the prevalence of machine-learning tools will bring to complex deliberations on the resort to force, but, rather, the (avoidable) abdication of decision making that threatens to accompany it.
In her incisive contribution to this special issue, Ashley Deeks (2024) examines the legal basis for 'delegating' resort-to-force decision making to intelligent machines. Here I suggest the danger of abdicating decision making to such artefacts. The shift in terminology is intentional and important. Deeks accurately and provocatively describes the potential move of transferring decision-making powers to machines in certain contexts. I am offering a supplementary point. My proposed terminology seeks to reflect what I have argued are the highly problematic psychological and misperceived ethical consequences of this potential transfer given the factors outlined above. When one delegates a task, one nevertheless retains responsibility for both the decision and the outcome (if and when it is implemented). Simply, the actor doing the delegating remains answerable. However, if one 'abdicates' decision making to another agent or entity, something else is happening. One steps away from the problem and looks the other way. In the interest of 'clean hands' and a clear conscience, one ostensibly sheds responsibility for the outcome. The risk that I am anticipating, associated with individual or institutional agents succumbing (perhaps conveniently) to the myth of machine moral agency, is that any delegation to AI-enabled tools of either direct war initiation (Deeks' focus) or an aspect of resort-to-force decision making as part of a broader deliberative process (my focus here) risks being experienced as an abdication. Given that there is no moral agent of restraint to metaphorically step into the breach, this entails a potentially catastrophic perceived unburdening of responsibility.
To offer a final point on this risk of misplaced responsibility, it is important to consider that the detrimental effect of our deference to AI-enabled decision-support tools in such high-stakes decision-making scenarios may not be limited to the degradation of our agency and how we perceive and uphold our roles and responsibilities. The damage may also acquire geo-political reach by extending to the collateral erosion of hard-won international norms of restraint. This may occur if the individual and institutional agents to which particular expectations are legitimately directed see themselves displaced as the relevant decision makers. We may passionately pay lip-service to these international norms. We may genuinely hope that they are respected. Yet, if the bodies that should be discharging jus ad bellum responsibilities are effectively abdicating decision making to entities that cannot be expected to bear such burdens, then these norms of restraint will be eroded. Simply, nobody will perceive themselves as answerable for upholding the norms in particular contexts.

The risk of 'predicted permissibility' and the temptation of pre-emptive violence
A second potential risk to restraint when AI-enabled systems infiltrate resort-to-force decision making is what I will call the risk of 'predicted permissibility'. This is distinct from the dangers that follow from misperceptions about the capacities and moral status of these systems. It arises instead from potential misperceptions – or wilful misrepresentations – of the nature of the narratives, predictions, and recommendations that such systems would generate, and the possible repercussions of unquestioningly feeding these outputs into deliberations on the legitimacy of resorting to force.
My focus here is specifically on the sorts of predictive machine learning algorithms alluded to above. The domestic and international legitimacy of any resort to war depends on a state being able to justify this decision to its citizens and the international community, respectively, with reference to international norms of restraint. As already acknowledged, assessments of whether jus ad bellum conditions are met require complex risk analyses and ex ante judgements of the consequences of actions (and inactions), which would seem to welcome the assistance of machine learning techniques. Yet, my concern is that such statistical, data-driven methods could encourage states (and other resort-to-force decision makers) to take as 'given' what are no more than forecasts of possible future scenarios. By relying on AI-driven decision-support systems that use predictive analytics, might we lower the bar for justifying war initiation, particularly with respect to the nature and likelihood of future threats?
We already see predicted threats justifying consequential actions in domestic law enforcement. Inscrutable algorithms are relied upon to make assessments of recidivism, for example, on the basis of which an individual convicted of a crime may be sentenced (O'Neil 2016, 23-31) or denied parole (Deeks 2018, 1538-1547). Such assessments are calculated based on the statistical likelihood of someone in their position re-offending and constituting a danger to the community. Predictive algorithms guess what someone is likely to do in the future based on the patterns of behaviour of a restricted set of similarly situated actors already analysed. Decisions based on such data-driven estimates can be profoundly unjust (Eubanks 2019; O'Neil 2016). Correlation cannot establish culpability. Yet, notably, comparable methods are already applied to targeting in war – where the cost of injustice is even higher – when machine learning algorithms are used to support human decision making in the selection of targets. AI-enabled decision-support systems have been employed for drone strikes by the United States in Yemen and Pakistan (Gibson 2021; Naughton 2016), and, recently, for bombings by Israel in Gaza (Abraham 2024; Davies, McKernan, and Sabbagh 2023). In these cases, recommendations of 'legitimate' targets are also based on probabilities. Algorithms that rely on big data analytics and machine learning suggest targets by highlighting patterns and correlations in large amounts of data drawn from individuals' text messages, web browsing, email traffic, and location, for example. They speculate who is likely to represent a threat in a way that would (arguably) render them a permissible target.
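The correlational logic at issue can be sketched in the abstract (the features, weights, and numbers below are entirely hypothetical and stand in for no real system): a 'risk score' is a weighted sum of attributes associated with past cases, which is to say a statistical guess built from other people's patterns of behaviour, not evidence of anything the scored individual has done.

```python
# Invented sketch of a correlation-based risk score: the output summarises
# statistical association with previously analysed cases, not culpability.
def risk_score(features, weights):
    """Weighted sum of observed attributes, with weights learned from similar past cases."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical feature weights and one hypothetical individual's attributes.
weights = {"contacts_flagged": 0.4, "locations_visited": 0.3, "message_volume": 0.3}
person = {"contacts_flagged": 0.8, "locations_visited": 0.5, "message_volume": 0.9}

score = risk_score(person, weights)
print(f"risk score: {score:.2f}")  # 0.74 here - a correlation-based estimate, not a finding of fact
```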
In both law enforcement and targeting scenarios, if such predictions are reassuringly read as facts, particularly to support actions with high-stakes consequences and punitive aims, then we court the dual dissonance of justifying actions in response to transgressions that have yet to eventuate and potentially punishing the innocent (or, in the case of estimates of recidivism, the genuinely reformed). An assessment borrowed from the strikingly prescient science fiction author, Philip K. Dick, is apt here. Writing in the 1950s, Dick (2002, 19) envisaged a future system in which the state could analytically determine when someone would (probabilistically) commit a crime and would engage in their 'prophylactic pre-detection' and arrest.11 Dick's fictional protagonist, the creator of this system, noted that 'the basic drawback of the precrime methodology' is simply that '[w]e're taking in criminals who have broken no laws' (Dick 2002, 2-3). As the story unfolds, he not only concludes that such action is deeply unjust, but also realises that misguided assumptions about the singularity and certainty of these predicted futures encourage unnecessary punitive action (when other ways of responding to possible prospective threats would have been sufficient to avert them).12 Stumbling into the same pitfalls is eminently conceivable in AI-assisted resort-to-force decision making.
Predictive machine-learning algorithms could be employed by a state in an attempt to establish that the 'just cause' criterion has been met in instances where there is a fear (or purported fear) of prospective aggression by an adversary. For those who uphold the (controversial) possibility of justifying the anticipatory resort to force in the name of self-defence, demanding criteria are usually set in terms of the imminence and magnitude of the threat and the demonstrable danger of not acting first.13 Meeting these criteria requires evidence and certainty. Could an algorithmic output – with its underlying calculations opaque and possibly secret, but nevertheless perceived as providing rigorous data-driven evidence – be invoked to contribute to a case for anticipatory self-defence?
I see (at least) two dangers here. Both entail ways in which this use of machine learning algorithms could render the resort to force more permissive. One is simply that the statistical inferences invoked to lend legitimacy to the initiation of war could be impossible to interrogate and therefore open to being misused. We might arrive at a digital variation on the 'dodgy [intelligence] dossier' produced by the United Kingdom to justify the anticipatory initiation of war against Iraq in 2003,14 but one not susceptible to the same (however belated) scrutiny. In sum, algorithmic predictions and recommendations might tempt us to replace evidence that can be tested with black-box probabilities, making it easier to rationalise anticipatory violence even in circumstances that would not warrant it.
Yet, there is also another problem – more difficult to articulate, but perhaps more pernicious – which need not involve disingenuous motives and wilful manipulation. In the context of anticipating a state's vulnerability to aggression, machine learning analytics could allow us to imagine possible – even probable – futures based on circumscribed sets of existing data. Assuming an optimal quantity and quality of data with which to train these algorithms, they could provide valuable information to supplement state deliberation and planning. My concern is that one might mistake these inference-based predictions for sophisticated and somehow infallible computations of the situation at hand, which could be understood to claim certainty and point towards a singular future.15 Comprehensible or not in terms of how they were arrived at, the latter would be deemed beyond question and doubt. This matters profoundly if we insert this presumed certainty and singularity into our jus ad bellum deliberations on the legitimacy of resorting to war. Our practical (human) judgement in the context of such deliberations is meant to acknowledge uncertainty and thereby promote caution, forbearance, and prudence in making ex ante assessments against ad bellum criteria. Silencing this uncertainty thereby lowers the bar on what is deemed permissible.
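A schematic sketch (the scenarios and probabilities are invented for illustration only) makes the difference between a forecast and a certainty plain: a predictive output is a distribution over several possible futures, and reading only the most likely entry as 'the' future silences exactly the uncertainty that jus ad bellum deliberation is meant to preserve.

```python
# Invented illustration: a predictive output is a distribution over futures,
# not a single guaranteed outcome.
forecast = {
    "adversary attacks within 6 months": 0.35,
    "adversary escalates rhetoric only": 0.40,
    "adversary de-escalates": 0.25,
}

most_likely = max(forecast, key=forecast.get)
print(f"Most likely scenario: {most_likely} (p = {forecast[most_likely]:.2f})")

# Even the 'top' scenario here leaves a 0.60 probability that something else happens -
# uncertainty that disappears if the prediction is read as a singular future.
print(f"Probability that the top scenario does NOT occur: {1 - forecast[most_likely]:.2f}")
```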
It should be noted that this risk of 'predicted permissibility' thereby also threatens the integrity of international norms of restraint in the resort to force. However, unlike the risk of 'misplaced responsibility', this is not because the relevant moral agents of restraint see themselves displaced and no longer answerable to calls to adhere to these norms. Rather, misperceptions of the outputs produced by predictive machine-learning techniques tempt us to recalibrate what counts as adherence to jus ad bellum norms, effectively weakening them.

Conclusion
AI-enabled systems will continue to infiltrate resort-to-force decision making. This will likely involve their increasingly direct influence on such deliberations, including in the form of decision-support systems that employ machine learning techniques to address key strategic variables on the permissibility of engaging in war. I have argued that this evolution risks being accompanied by unintended, yet foreseeable, detrimental effects. Indeed, long before we conceivably face any prospect of a machine-initiated algorithmic Armageddon, these seemingly innocuous technologies could prompt multiple, incremental, and likely consequential changes to how we deliberate and see ourselves as responsible agents. These technologies could also affect how we interpret and apply the parameters that define the legitimate resort to force. I have highlighted two bundles of concerns that correspond to these effects under the labels 'misplaced responsibility' and 'predicted permissibility'.
Misplaced responsibility involves the potentially catastrophic error of assuming that AI-driven tools can be answerable for weighty decisions on the resort to force. A sceptic might respond that denying and redirecting responsibility is hardly unique to our interactions with AI-enabled decision-support systems. As Shannon Vallor (2013, 484) observes, 'the reduction of human decision-making to mechanistic, formulaic or quasi-algorithmic processes can happen by means other than technological automation.' She adds that '[w]e can easily conceive of military environments in which soldiers and officers are encouraged to eschew moral reasoning in favour of legalistic templates, decision-trees and other formal mechanisms of reducing the cognitive burdens (and freedoms) of human judgement.'16 Indeed, our sceptic might go on to charge just war principles themselves with constituting a primitive, algorithmic decision-support system that allows us to shrug off responsibility for consequential decisions by submitting to strict criteria and calculations. In response, it is important to note that any set of rules or superior orders – even when we are expected to use our judgement to critically interpret, evaluate, and apply them (as is certainly the case with just war principles) – can be invoked as an (illegitimate) excuse for morally checking out and 'doing what one is told'. Yet, something else is happening with the case at hand. The temptation to shrug off responsibility when it comes to AI-enabled decision-support systems appears more compelling and legitimate – and dangerously so. This is because we anthropomorphise these systems. We tend to see them as having minds and wills of their own. They masquerade as moral agents and we thus mistake them for loci of responsibility. It will be important to prevent this tendency from having a devastating effect on how we, as moral agents of restraint, deliberate and act in the context of crucial decisions on the initiation of war.
Separately, the risk of predicted permissibility arises from our misunderstanding of the knowledge produced by AI-enabled decision-support systems, particularly those that use machine learning techniques. We too easily mistake statistical probability for certainty and then interpret predictions as pointing towards (or warning of) a single future or outcome. Grave problems would result if forecasted scenarios were to become accepted variables in overall calculations of the justice of resorting to force – thereby impeding the practical judgement, prudence, and forbearance under acknowledged uncertainty that should define such deliberations. Even while we recognise that AI-driven recommendations and predictions could provide valuable supplementary counsel in resort-to-force considerations, it is imperative that we not imbue these outputs with certainty and causal claims that they cannot support.
Both potential risks could have a profound effect on how Australia, its allies, and its adversaries make decisions on the initiation of organised violence – and also indirectly affect the international norms that govern its permissibility. They deserve our immediate attention. As these risks are rooted in the way we interact with and perceive AI-enabled decision-support systems, future work on ways of mitigating these risks and reinforcing restraint must look to mediating these interactions and refining our perceptions. This might be pursued through, for example, the design of the systems themselves, the creation of guidelines for their effective and just use, and the education and training of those who would use them and interpret their outputs. These important themes are variously addressed in articles that follow (e.g. Chiodo, Müller, and Sienknecht 2024; Davis 2024; Vold 2024).
Notes

12. The nuances of Dick's 'precrime methodology' and the neglected lessons offered in the short story that describes it – particularly with respect to the potential to learn from predicted futures (acknowledging that multiple futures are possible) in order to dissuade crimes rather than punish them pre-emptively – are relevant to the consideration of using predictive algorithms in resort-to-force decision making and warrant further attention.
13. There is a rich literature on the (contested) moral and legal boundaries of permissible anticipatory self-defence, which is beyond the scope of this short article. According to the oft-cited statement of Daniel Webster in the Caroline case of 1842, there must be 'a necessity of self-defence … instant, overwhelming, leaving no choice of means, and no moment of deliberation' (quoted in Walzer 1977/1992, 74). Walzer (1977/1992, 81) amends what is required to 'sufficient threat', which would entail evidence of an adversary's 'manifest intent to injure, a degree of active preparation that makes that intent a positive danger, and a general situation in which waiting, or doing anything other than fighting, greatly magnifies the risk'.
14. The 'dodgy dossier' was a briefing prepared for then-Prime Minister Tony Blair's UK government, which purportedly drew on intelligence from a number of sources and was invoked to bolster the case for the existence of weapons of mass destruction in Iraq and help justify an anticipatory war. It was eventually discovered not only to have drawn directly on a range of unattributed, non-intelligence sources (including an unpublished PhD thesis), but also to have exaggerated claims made in these otherwise blatantly plagiarised documents to make its case.
15. On this illusion of a single anticipated future, compare with Baggiarini (2024), who draws on Amoore (2020, 80) to lament machine learning algorithms reducing 'the multiplicity of potential futures to a single output'.
16. Vallor offers this point in the separate context of a valuable discussion of 'moral deskilling' caused by automated decision making in war.