Ethical Aspects of Military Maritime and Aerial Autonomous Systems

ABSTRACT Two categories of ethical questions surrounding military autonomous systems are discussed in this article. The first category concerns ethical issues regarding the use of military autonomous systems in the air and in the water. These issues are systematized with the Laws of Armed Conflict (LOAC) as a backdrop. The second category concerns whether autonomous systems may affect the ethical interpretation of LOAC. It is argued that some terms in LOAC are vague and can be interpreted differently depending on which ethical normative theory is used, an ambiguity that may increase with autonomous systems. The impact of Unmanned Aerial Vehicles (UAVs) on the laws of war is discussed and compared to that of Maritime Autonomous Systems (MAS). The conclusion is that there is a need to revise LOAC with regard to autonomous systems, and that the greatest ethically relevant difference between UAVs and MAS concerns issues connected to jus ad bellum, particularly the lowering of the threshold for starting war, but also the sense of unfairness, violation of integrity, and the potential for secret wars.


Introduction
Autonomous systems are emerging in many areas of society. Tele-operated systems are becoming increasingly autonomous, and autonomy is indeed considered one of the "Eight Great Technologies" (HM Government 2013). 1 This raises ethical questions, particularly when autonomous systems are in possession of firepower (Healey et al. 2013). Several robotic automation systems, such as unmanned aerial vehicles (UAVs), are being used in combat today. UAVs operate at a level where humans are in charge of, and responsible for, the deployment of lethal force, but not always in a directly supervisory manner. This raises a number of questions, the most basic of which is the following: whether it can be considered morally right to use UAVs in war, and if so, in what manner and under which circumstances.
It has previously been argued that the potential ethical uniqueness of UAVs (that is, their uniqueness with regard to ethical aspects) can be clarified by looking at the laws of armed conflict (LOAC) in more detail (Johansson 2011). Discussing, inter alia, the normative theories underlying LOAC vis-à-vis the challenges of UAVs can help us discern whether there is a need for additional rules concerning the use of UAVs in war.
While there has been much research on the ethics of UAVs (Finn and Wright 2012; Kreps and Kaag 2012), also in the pages of this journal, there has not been as much research on the ethical aspects of unmanned or autonomous systems in the water. The purpose of this article is to give an overview of the ethical aspects of maritime autonomous systems (MAS) by comparing them with the ethical aspects of UAVs.
Examples of ethical questions regarding MAS include whether there are ethical problems with MAS looking for mines (Ireland 2010), and whether the use of weaponized autonomous systems to protect ships from pirates, or to monitor illegal immigrants or illegal fishing, can be considered ethical. 2 This article will, however, mainly focus on maritime or naval autonomous systems in the military.
It is not clear how we should refer to these unmanned or autonomous vehicles or vessels. The most publicly known unmanned vehicles are probably UAVs (also referred to as drones), a designation that indeed includes the term "unmanned" in its very name. The same is true in the so-called "master plans" from the US Navy, which refer to unmanned surface vehicles and unmanned underwater vehicles (see, for instance, Department of the Navy 2007). Other names used are autonomous underwater vehicles (AUVs), autonomous marine vehicles (AMVs) and remotely operated vehicles (ROVs), indicating different levels of independence. In contrast to an ROV, for instance, which always needs a human operator, an autonomous underwater vehicle has no remote control during its mission. The more autonomy, the less human interaction and interference.
Some notes on terminology: Throughout this article, the term "maritime" will be used instead of "naval". While naval might seem more appropriate, as the focus is on military autonomous systems, the boundaries between naval and maritime are not always clear, and maritime seems to cover the materials discussed here better and more widely. Furthermore, the term autonomous (rather than unmanned) will be used, except for UAVs, since this term is already commonly established. There are two reasons for this. First, the development is moving towards increasing autonomy, and most unmanned vehicles have some autonomy. Autonomy is a vague term, sometimes used for almost anything that can do at least some things, such as navigation, without someone remotely controlling it, which is the case for most UAVs. Second, there are many examples of humans voluntarily stepping out of the "decision loop", letting the autonomous system "make the decision" (perhaps even to kill), or trusting the computer's "judgment" more than one's own. 3 So, in effect, the difference between the remotely controlled (unmanned) vehicle and the autonomous one is not necessarily great, as autonomy comes in so many different forms and degrees. Examples range from autonomous lawnmowers to "fully" autonomous robots in the military. Both are, however, hardly autonomous if compared to human free will as discussed in philosophy seminars. I have chosen to use the overarching term maritime autonomous systems, MAS (with uMAS for underwater and sMAS for surface systems), unless particularly specified in some sections of this article. The decision to use system instead of vehicle is also in order to maintain as wide a perspective as possible. 4
Two categories of ethical questions surrounding military autonomous systems will be discussed in this article. The first category has to do with ethical issues regarding the use of such systems. These issues will be systematized with the laws of armed conflict (LOAC) as a backdrop.
The second category concerns whether military autonomous systems may affect the ethical interpretation of LOAC. It can be argued that some terms in LOAC are vague and can be interpreted differently depending on which ethical theory (or normative theory; I use the two terms interchangeably here) is employed to underpin them, an ambiguity which may increase with autonomous systems (Johansson 2011). I will use research on UAVs as a stepping stone by looking at some of the ethical issues regarding UAVs, and also explore whether there are relevant differences between UAVs and MAS. Then, I will look at whether UAVs affect the laws of war and compare UAVs to MAS in this respect. In order to do this, however, it is necessary to have some background information on MAS and the missions of the modern navy.

Maritime autonomous systems and the missions of the modern navy
"The modern navy" can of course refer to very different types of navies: a large one, such as the American, or a small one, such as the Swedish. 5 This section is mainly based on the larger navy, as the larger navy covers more issues than the smaller. Whether or not there are ethical differences between larger and smaller navies regarding autonomous systems will not be explored in this article.

Characteristics and purpose of MAS
Both uMAS and sMAS offer the potential for significant contributions to the conduct of naval warfare tasking, particularly when integrated with one another and with other manned and unmanned platforms, sensors and communications systems into a systems solution. As the development of adaptive and eventually intelligent autonomous control capabilities becomes more mature, the potential for these systems to engage in cooperative autonomous behavior will increase, allowing groups of these vehicles to operate together as networks. 6
While uMAS and sMAS have much in common, they also have many distinct characteristics. Surface vehicles can use radio frequency for virtually unlimited communications and navigation. This is much more difficult for underwater vehicles. Significant improvements have been made in underwater acoustic communications in the past decade, and many of these improvements are reaching operational status. Future developments in autonomous control will minimize the communication difficulties of remotely controlled vehicles.
Telepresence (i.e. the use of television cameras on uMAS) permits observations from the uMAS to be sent to the human controllers on the surface (or subsurface, if controlled by a submarine). In many circumstances, the high costs associated with the surface platform are justified, particularly when control, manipulation or specific complex tasks requiring human oversight are involved. In other situations, in which the task is routine and can be programmed, untethered systems (i.e. unmanned undersea vehicles, which are either partially or totally autonomous, without a need for what is often called "umbilical" cords or links) represent an attractive alternative. Elimination of the umbilical cord also reduces drag. Untethered uMAS were developed for various offshore industry, science and naval purposes, replacing many of the functions of similar survey vehicles. The opportunity to provide multiple simultaneous views by operating several untethered uMAS, or by using uMAS together with a surface platform, will enhance their capabilities. For example, untethered uMAS are beginning to replace towed vehicles for seafloor survey, and they are enabling new kinds of inspections that were previously impossible (such as in New York City water tunnels); they perform surveys from the shore, and they improve the utilization of ship resources by operating simultaneously with other uMAS or other types of operations. However, as noted earlier, due to the intrinsic limitation of bandwidth for communications in the ocean, untethered systems in the ocean require substantially more autonomy than systems on the surface or in the air.
Examples of purposes of MAS are:
- Surface MAS: mine countermeasures, 7 anti-submarine warfare, maritime security, 8 surface warfare, special operations support, electronic warfare and maritime interdiction operations support (see Department of the Navy 2007).
- Underwater MAS: intelligence, surveillance and reconnaissance, mine countermeasures, anti-submarine warfare, inspection/identification, oceanography, communication/navigation network node, payload delivery, information operations and time-critical strike (see Department of the Navy 2004).
During the 1990s, advances in sensors and computers produced a new concept: Network-Centric Warfare (NCW). The information-technology revolution enabled computer-based information gathering and processing, providing an accurate and real-time picture of enemy activity that could be shared between networked forces (Speller 2014, 109). Such forces could be geographically dispersed but still tied together by data links, enabling them to work in a network of systems, thereby covering far larger areas than individual platforms and thus achieving information superiority. This would be further enhanced by the use of unmanned, autonomous systems, in the air as well as in the water. Till (2013, 131) discusses navies and technology, and argues that the rate of exponential advance in computer power, which according to "Moore's law" has so far doubled every eighteen months, makes revolutionary change in military operations inevitable. The maritime prophets of NCW generally emphasize that for all this to work, substantial changes to existing maritime organizations, habits of thought, procedures and doctrine will be necessary. The traditional operational and tactical independence of the sea-based commander seems likely to weaken, according to Till (2013, 133).
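The scale of this doubling claim can be made concrete (a simple illustrative formulation of "Moore's law", not one given by Till): if computing power doubles every eighteen months, then power after $t$ years is

```latex
P(t) = P_0 \cdot 2^{t/1.5}
```

so that over a single decade computing power grows by a factor of roughly $2^{10/1.5} \approx 100$, which conveys the kind of growth rate behind the claim of inevitable revolutionary change in military operations.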
Unmanned underwater systems already play a significant role in naval warfare, the most obvious example being the torpedo. In recent years, several developmental systems have reached levels of maturity at which they can be used in direct support of combat operations. The principal mission of these systems is reconnaissance: to provide environmental or countermine data. Typically, MAS provide significant standoff and clandestine capability. They can operate autonomously to different extents, but when operating autonomously, they do not currently have adaptive or intelligent capabilities, which reminds us of the difficulties in defining autonomy mentioned in the introduction: Can something be considered fully autonomous without having intelligence? These systems can, however, carry out predetermined missions, providing optical or acoustic imagery and physical environmental data, such as information on temperature, salinity, depth and currents, as well as optical properties.
As the autonomy level of MAS increases, the number and complexity of the missions they can carry out will increase as well. There is also the added benefit of being able to perform missions not previously feasible due to the risk involved or a lack of available human operators. Highly autonomous systems can facilitate a military force that is mission-capable with fewer personnel, capable of more rapid deployment, and easier to integrate into the digital battlefield. The exact level of autonomy needed must be defined as appropriate for different missions. 9

Legal issues
There are several legal issues surrounding MAS, for instance, whether the uMAS is a sovereign extension of its state, and thus immune from seizure by other nations, and whether it can operate on the surface of another nation's territorial sea, or operate there at all (Henderson 2006). As already mentioned, defining "autonomous" is difficult. Another vague term is "vessel", and attempts to define it are as old as maritime commerce itself. 10 Henderson (2006) argues that some of the uMAS currently under development will likely fall outside current parameters of maritime jurisprudence. He believes that uMAS should be considered vessels under US law regardless of their size or mission. But he points out that submarines are warships, and it is a well-established tenet of international law that warships are extensions of their respective states, enjoying sovereign immunity from interference by the authorities of nations other than the flag nation. As such, warships may not be seized, boarded or searched without the permission of the commanding officer. A ship need not be armed, however, to be considered a warship.
A uMAS is unmanned and lacks a crew or commanding officer. Henderson (2006) points out that it cannot be a warship per se, even if deemed a vessel in its own right. But as components of a larger unit, uMAS might still be considered extensions of the launching/controlling warship. As such, they would enjoy the same level of sovereign immunity as the support vessel and be immune from seizure. If deemed vessels in their own right, but not warships, uMAS may still enjoy immunity as auxiliaries. Henderson argues that a strong case can be made under domestic law that uMAS are in fact vessels and, therefore, subject to all applicable rules for operation and navigation. This conclusion stems from the notion that most uMAS will either be considered components of their support ships or be construed as vessels outright. It is conceivable that a uMAS might fall through legal cracks, Henderson argues, such as a non-payload uMAS launched and operated from shore. Such a vehicle would have no support ship, nor would it technically be a means of transportation. But for the sake of uniformity and to avoid confusion, he believes that it is best to treat all uMAS alike.
This rationale holds particularly true in the international arena, according to Henderson, where there is far less regulatory or statutory guidance available, and legal guidelines applicable to uMAS are even more fluid than domestic law.

Ethical aspects of UAVs compared to MAS
The following discussion of ethical aspects of the use of autonomous systems will be conducted with the Laws of Armed Conflict (LOAC) as its backdrop. A closer analysis follows in the section on the ethical interpretation of LOAC, where specific terms in LOAC will be analyzed more closely in light of ethical theory.

The laws of war
A pacifist would, of course, argue that ethical evaluations of weapons used in war (such as UAVs, irrespective of their level of autonomy) are meaningless, as war is unethical in itself. The ethical evaluations in this article are, however, made against the backdrop of the laws of armed conflict (LOAC), as codified in, for instance, the Geneva and Hague conventions, thus assuming that war is not necessarily unethical.
The rules of war are often divided into jus ad bellum and jus in bello. Jus ad bellum specifies the criteria that must be fulfilled in order to start a war, of which "just cause" is arguably the most important. The rules of jus in bello establish criteria for ethical means of fighting once at war. 11 There are, in my view, mainly two criteria of jus ad bellum that are relevant here: just cause and proportionality. According to the criterion of just cause, the reason for going to war needs to be just. War should not be a means of revenge; innocent life or national security must be in imminent danger, and the use of armed force must thus serve to protect life and security. Examples include self-defense against external attack and punishment for a severe wrongdoing that remains uncorrected. According to the rule of proportionality, the anticipated benefits of waging war must be proportionate to its expected evils or harms. This is also known as the principle of macro proportionality, to separate it from the jus in bello criterion of proportionality.
According to the jus in bello rule of proportionality (or "excess"), an attack cannot be launched on a military objective if the civilian damage would be excessive in relation to the military advantage. That is, the value of an attack must be in proportion to what is gained. According to the rule of necessity, the attack must be necessary, meaning that war should be governed by the principle of minimum force. This principle is meant to limit excessive and unnecessary death and destruction.
There are also criteria of jus post bellum (the justice of the aftermath of war), such as regulations concerning war termination, intended to ease the transition from war to peace. 12 These are, however, not discussed in this article.
It might be argued that it would be sufficient to look solely at jus in bello, as autonomous systems are employed only once a war has already started. I have argued, however, that possession of autonomous systems might affect the interpretation of jus ad bellum as well, as it might increase the inclination to start a war (Johansson 2011). The reason is that such systems have advantages in terms of reducing casualties for the possessor, which may make war seem like more of a risk-free enterprise, thereby lowering the threshold for starting a war. The possession of autonomous systems may, more than that of other weapons, also affect the interpretation of LOAC, since it may determine which normative moral theory the interpretation of LOAC will be based on. I will come back to this below.
Let us start by looking at arguments connected to jus ad bellum.

Increased inclination to start war (lower threshold)
One argument against the use of military autonomous systems is that the threshold for starting war may be lowered, and some, such as Peter W. Singer (2009), argue that this type of technology even creates the illusion that wars can be "costless". If a country does not risk the lives of its soldiers, it might be easier to start or enter a war that might otherwise not have been started. Another issue concerns the horrors and brutality of war, which might be forgotten more easily with this type of technology than with more traditional weapons, where human soldiers are in harm's way.
There are examples to support the reasoning above, that is, cases where autonomous systems most likely have made it easier to undertake and support a new war effort: the 1991 Persian Gulf War, the 1999 war over Kosovo and the 2003 invasion of Iraq (Asaro 2008). Would the United States have entered Iraq, Kosovo or the Persian Gulf if the enemy had been able to use UAVs on US soil? This hypothetical question indicates the potential importance of UAV possession when interpreting the criteria of jus ad bellum, such as "just cause", in particular when a country that has UAVs chooses to attack a country without them.
We cannot say for certain, as a general statement, that autonomous systems actually lower the threshold for starting a war, but it seems intuitively plausible that this can be the case, since whether or not human lives are put at risk obviously makes a difference, one that becomes more pronounced the higher the percentage of autonomous systems employed.
So, what can be said about the difference between autonomous systems in the air, compared to autonomous systems in the water? If the threshold is lowered with UAVs, does the same go for MAS?
The risk of MAS lowering the threshold is probably not as high as with UAVs. While it is possible to win a war with a swift invasion using UAVs, that would arguably not be the case with MAS, since they cannot leave the water. They may, however, function as platforms for UAVs. MAS would most likely mainly be used for defensive measures, such as sea denial or fleet-in-being. Overall, the risk of autonomous systems lowering the threshold for starting war seems higher with UAVs than with MAS.

Unfairness
Another argument against the use of autonomous military systems is an increased risk of a sense of unfairness on the part of those who do not possess this technology. This may in turn affect what happens after the war. Michael Walzer (2006, 132) points out that it is important to make sure that victory is "in some sense and for some period of time a settlement among the belligerents". And if that is to be possible, the war must be fought, as the utilitarian moral philosopher Henry Sidgwick put it, so as to avoid "the danger of provoking reprisals and of causing bitterness that will long outlast" the fighting (ibid., with reference to Sidgwick [1891] 2010). The bitterness that Sidgwick and Walzer have in mind might be the consequence of an outcome thought to be unjust, but it may also result from military conduct thought to be unnecessary, brutal, unfair or simply "against the rules". It seems clear that UAVs, more than many other permitted weapons, might provoke a sense of unfairness, as they permit one side to avoid risking the lives of its soldiers. According to Jane Mayer (2009), the use of UAVs in Pakistan has stirred "anti-American sentiments", and perpetrators of terrorist bombings in Pakistan have begun presenting their attacks as "revenge for drone attacks" (see also Zenilman 2009).
Autonomous systems are not more brutal or unfair per se, but there might be a feeling that the party using them is cowardly, or that it, even if not doing anything strictly illegal, is using means that violate informal rules of honor or is displaying a lack of warrior virtues, such as courage or mercy. It might even be argued that the right to kill your opponent in war is based on an idea of a certain reciprocal imposition of risk: soldiers threaten what they themselves risk, that is, life and health, and are therefore allowed to take the life of the enemy without being considered murderers. This sense of fairness is arguably removed if one side uses robots and the other does not.
The unfairness resulting from the asymmetry when one party is in possession of autonomous technology and the other is not can, however, also be considered an argument in favor of autonomous systems: the asymmetry can have a deterrent effect on the weaker party, which might then be more likely to pursue a negotiated diplomatic settlement (Arkin 2009). If this is the case, it can be argued that a sense of unfairness is something the weaker party can accept, and that it is a better option than losing human lives. On the other hand, if the stronger party threatened the weaker party mainly because its possession of autonomous technology gave it the power to do so, and the provocation would not have been made otherwise, that would be an argument in the other direction.
There may not seem to be much of a difference between UAVs and MAS when it comes to the issue of unfairness. Unfairness is, however, often a sensation or feeling rather than something we can measure absolutely (unless regulated and measured by law), and here there may be a difference between UAVs and MAS. UAVs are more conspicuous (even though they operate in the air), because they can be seen and heard, so people are reminded of them when they perceive their presence. There is also more of an integrity issue (including privacy issues) with UAVs than with MAS, as UAVs fly over people's homes. There will most likely be less risk of civilian casualties with MAS than with UAVs, but integrity issues may arise if uMAS are used for surveillance in coastal areas.

Secret wars
The advantage of possessing autonomous systems might lead to what is sometimes termed "secret wars", due to diminishing transparency (Mayer 2009). When autonomous systems are used by national security or intelligence agencies, lack of transparency can indeed be a problem. According to LOAC, wars are fought between states, and it is unclear whether a national security agency can be considered a "state". By using a security agency, where decisions are taken behind a veil of secrecy, ethical and legal rules can be circumvented more easily, since breaches cannot be scrutinized by outside parties. This can turn into "unofficial wars", in which one country or party claims a legal and moral right to kill people in a country with which it is not at war. As a result, other countries might argue that they can do the same thing, and the laws of war would in some sense be nullified.
Another question has to do with UAV operators who operate far away from where the weapons are deployed. Should they be considered engaged in warfare, so that they could be retaliated against, even though they are, for instance, on American soil? In other words: Would it be permitted, according to the laws of war, to attack a UAV operator in Nevada, and if so, where? Would it be permitted to attack him or her on the way to work? What about a programmer who programs an autonomous robot? Such questions indicate that it is necessary to define the borders of the battleground more clearly.
When it comes to secrecy, the potential for uMAS to operate covertly is of course tremendous. However, it would probably be more difficult to achieve the same type of secret wars (that is, to actually win a "war" by taking out crucial people or assets) with uMAS than with UAVs, as UAVs operate over land. Although UAVs may be seen, which would affect the secrecy, there is probably greater potential for secret wars with UAVs than with MAS, since UAVs can accomplish more on land than MAS can in the water.

Numbing
The argument from numbing is related to the jus ad bellum argument that the threshold for starting war may be lowered by the use of autonomous systems. The reason for this is straightforward. When killing at maximum range, it is possible to be less conscious of (and thus to distance oneself from) the fact that one is killing human beings, and thus to experience less or no regret. With the increasing ability to kill from an extreme distance, in perfect safety, or by having a "fully" autonomous system in which kill commands are not directly issued by human beings, the awareness of killing and death is decreased (Grossman 1996). Physical distance, in short, may detach a fighter from the consequences of the use of weaponry. This is, however, a discussion that dates back to the introduction of bows and arrows, which increased the distance between fighters compared to killing a person with a sword, for instance. Also, because of the potential lack of transparency considered in the previous section, it might be easier for a UAV operator than for a conventional soldier to avoid responsibility, or at least the sense of responsibility.
However, it has been shown that UAV operators suffer from post-traumatic stress disorder (PTSD), and that they can actually suffer more severely from it, since they follow the target for days and may become familiar with the target in a way that a regular pilot does not (McCammon 2017). There is also a difference regarding the degree of autonomy. Human Rights Watch claims that robots that can decide whom to kill will become a reality within a decade. 13 This might of course lessen the burden for PTSD-afflicted operators (there might in fact be no need for them at all in the future), but then we have another problem: If an argument against unmanned systems has to do with numbed humans, what could be more "numb" than a machine?
There are many examples of positive human contact being helpful in reconciling differences between cultures, and the presence of robotic technology could create a vacuum here. Armstrong (2008) points out that there is a risk of an impact on the "hearts and minds" of people in conflict and post-conflict zones when and if autonomous robots are deployed. In the future, there will be discussions on the lack of mercy in machines, which also might lead to atrocities in war. One might program a robot to recognize signs of surrender, but it can be argued that it will not be able to show mercy, compassion and humanity in the same sense as a human being can. On the other hand, there is research on "synthetic emotions", such as "synthetic guilt", which will, in a sense, teach robots right from wrong and correct their future behavior (Arkin 2009). There is an ongoing debate in machine ethics on how to make robots behave "ethically": either through a top-down method, where rules or ethical theories are programmed into the robot, or through a bottom-up method, where the robot learns about ethics as a child does, from trial and error, and with the help of the already mentioned synthetic emotions.
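For readers unfamiliar with what a "top-down" method amounts to in practice, it can be illustrated with a deliberately simplified sketch. Everything below (the rule names, the data fields and the proportionality comparison) is hypothetical and invented purely for illustration; it is not Arkin's actual architecture, only a toy example of hard-coded rules vetoing an action.

```python
# Toy illustration of a "top-down" machine-ethics check: fixed rules,
# programmed in advance, must all pass before an action is permitted.
# All fields and thresholds are hypothetical placeholders for illustration.
from dataclasses import dataclass

@dataclass
class Engagement:
    target_is_combatant: bool         # input to a discrimination rule
    expected_civilian_harm: float     # input to a proportionality rule
    expected_military_advantage: float
    surrender_signal_detected: bool   # e.g. a recognized sign of surrender

def governor_permits(e: Engagement) -> bool:
    """Return True only if every hard-coded rule passes."""
    if not e.target_is_combatant:      # discrimination: no non-combatants
        return False
    if e.surrender_signal_detected:    # no attacking those who surrender
        return False
    # Crude proportionality test: civilian harm must not exceed the
    # anticipated military advantage (an arbitrary placeholder comparison).
    if e.expected_civilian_harm > e.expected_military_advantage:
        return False
    return True
```

A bottom-up method would instead adjust behavior from feedback over time, for instance treating something like "synthetic guilt" as a penalty signal after wrongful actions, rather than consulting fixed rules such as these.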
Even if one argues that a machine can never show "real" compassion, machines do have some clear advantages over human soldiers. They do not get tired or afraid, and they do not act from self-interest. Furthermore, there is an increased chance that a machine will not need to kill an opponent but can capture him or her instead, since the machine is not afraid of being killed. In other words, a soldier may use extra force in order to protect himself in situations of fear or stress, something that would not be an issue for a robot.
Here we may find a difference between autonomous systems in the air and in the water, at least when UAVs are used for killing individuals. MAS will not attack individual persons as directly as UAVs often do; that is, they will not be looking into the eyes of other human beings. If the robot has a high degree of autonomy, this may not matter, compared to the case where a human is looking through the robot's eyes, hesitating. The point is that we can speculate whether it might in some cases be easier to order a MAS to sink a vessel, even one with 100 soldiers on board, than to kill one concrete person up close.

Errors due to technology
One argument against UAVs is that there is a risk of striking the wrong targets, with less possibility of stopping the wrongful targeting (Mayer 2009). But there are claims to the contrary: that UAVs are more "surgical" in the way they kill, and that there is less collateral damage from a UAV strike than from a strike by a regular airplane. There is an even greater risk of mistakes or increased collateral damage if the pilot "hurries". The same would most likely apply to MAS. But again, there is the issue of the level of autonomy. It might seem frightening to entrust life-and-death decisions to technology, but at the same time, humans are often considered the weakest link in a decision loop. It can be argued that there would be fewer errors with fewer humans involved.
Michal Klincewicz (2015) argues that as artificial-intelligence programs become more sophisticated, they will at the same time be more vulnerable to being hacked, and therefore pose a security risk. Systems that have so-called "ethical governors", for instance, have problems with framing and representation. In order to manage such challenges, artificial-intelligence software would have to be enormously complex, and complex software tends to have more software bugs, leading to security vulnerabilities.

Autonomous systems and the ethical interpretation of LOAC: UAVs vs. MAS
Does the introduction of increasingly autonomous weapons have an impact on our understanding and use of LOAC? I will argue that it might, since the use of UAVs and MAS clearly influences the way in which we measure what is proportional, necessary and excessive, to mention three crucial LOAC terms. All of these are vague terms that need to be specified. The fact that UAVs and MAS can change calculations of losses and costs quite significantly, especially for the party that possesses such weapons, also means that LOAC will be interpreted differently depending on the weaponry used.
One ethical way of countering what some will see as a lowered threshold for initiating war, as intimated earlier in this article, may be to apply a utilitarian perspective inspired by R. M. Hare's utilitarianism, which is a form of rule-utilitarianism. On the level of the rules and principles of LOAC, this might help us formulate meaningful restraints on what might otherwise be an excessive risk of armed conflict being initiated much too easily, due to the simplicity and low cost of using autonomous weaponry.
Important terms in LOAC that can be interpreted differently depending on which normative theory one subscribes to are just, excessive and necessary. 14 As indicated, I will in the following use a rule-utilitarian framework, more specifically R. M. Hare's utilitarian universal principles from his article "Rules of War and Moral Reasoning" (Hare 1972), rather than comparing theories. The reason is that rule-utilitarianism represents a plausible middle ground between pure deontology (duty ethics) on the one hand and pure (act-)utilitarianism on the other.
Hare argues that we need rules, such as the ones in LOAC, but we also need overarching principles; that is, we need what he calls a "two-level approach". The idea is to move between what Hare refers to as the general and the specific level of rule-utilitarianism. According to general rule-utilitarianism, we ought to bring about principles and actions striving for the greatest good. According to specific rule-utilitarianism, we can have specific rules striving to maximize utility. The point is that the general level can work as a "safeguard".
An important characteristic of utilitarianism is its impartiality. Hare believes that the requirement of benevolence is secured by the reference to serving the interests of all. Impartiality shows, according to this outlook, how a utilitarian outlook for interpreting vague terms in LOAC can be kept on the right track.
A utilitarian interpretation will focus on maximizing utility, and it can argue for local rather than global maximization; that is, it can argue in favor of maximizing utility in a certain area rather than in a larger area or for humanity as a whole. For instance, suppose country A is suffering because of factors that may justify starting a war against country B (involving, for instance, imminent danger, the suffering of children or wrongs in the past, although it is up for debate whether these factors would justify A starting a war). Country A may argue that if it attacks country B, B will suffer, but if B surrenders quickly and gives A part of B's territory, then the total sum of suffering would be less than A's suffering today. Even if all the criteria of jus ad bellum ideally need to be fulfilled, the possession of autonomous technology might actually strengthen such a case for using armed force, in particular with regard to "reasonable chance of success". It can thus be argued that the term "just" may be "hijacked" by a possessor of autonomous technology. That is, a UAV possessor inclined to start a war, perhaps against a country that does not possess UAVs, might find a suitable interpretation of what is "just", motivating him to start a war with LOAC on his side. This point is also clearly linked to the question of proportionality.
Since UAVs, probably more than MAS, may lower the threshold for initiating war, as I have argued above, there might also be a difference in terms of interpreting LOAC to one's advantage if one is in possession of many UAVs, rather than many MAS, since the former probably can accomplish more.
Such a risk, whether it concerns UAVs or MAS, may, however, be prevented or softened by an overarching principle of impartiality. The basic idea is that one is not allowed to take any specific individual into account, and if we take that argument to its logical conclusion, not even individuals from a certain country. That is, the impartiality of utilitarianism may function as a safeguard when interpreting vague terms in LOAC.
The term "necessary" is even vaguer than "just" and lies at the heart of military ethics and laws: Do not make people suffer unless it is militarily necessary. But who and what will determine what is necessary? The implications for autonomous systems regarding interpretations of "necessary" are similar to the implications regarding interpretations of "just", in the sense that the term might be interpreted to the advantage of the UAV possessor. It might, for instance, be argued that a UAV attack is necessary in order to win a war faster and with fewer casualties. There would most likely not be any difference between UAVs and MAS in that respect. The two-level approachand a strong insistence on impartiality might help lessen the inclination to use armed force, but I have argued elsewhere (Johansson 2011) that military virtues of restraint, i.e. an argumentation inspired by virtue ethics, might be needed to avoid interpretations of necessity that are too liberal.
"Excessive" is a term closely connected to "necessary" in LOAC and might function as a moral restraint on a belligerent who is too focused on the end to care sufficiently about the means. In the conduct of hostilities, it is not permissible to do "any mischief which does not tend materially to the end [of victory], nor any mischief of which the conduciveness to the end is slight in comparison with the amount of the mischief" (Walzer 2006, 126; see also Sidgwick [1891Sidgwick [ ] 2010. What is being prohibited by that account is excessive harm, but the argument is somewhat circular. Walzer interprets this as involving two criteria. The first is that of victory itself, or what is usually called military necessity. The other depends upon some notion of proportionality: we are to weigh "the mischief done", which presumably means not only the immediate harm to individuals but also any injury to the permanent interests of mankind, against the contribution that this "mischief" makes to the end of victory. Walzer points out that the argument as stated sets the interest of the individuals and of mankind at a lesser value than the victory one is aiming at. Any act of force that would significantly contribute to victory would most likely be considered permissible. And any officer who would assert the "conduciveness to victory" of the attack he is planning is likely to have his way. So, we can see that proportionalityand the avoidance of excessive forceis an idea that is difficult to apply, since there is no "ready way to establish an independent or stable view of the values against which the destruction of war is to be measured" (Arkin 2009).
This has interesting implications for the use of autonomous technology. Regarding excessive harm, it might even be argued on utilitarian grounds that, since the use of UAVs alone meant that one country lost no lives compared to the many that would have been lost in a regular fight, the scales of "mischief done" tip so that the end did indeed justify the means. However, whereas a strict utilitarian would not distinguish between the nationalities of the lives lost, Hare's separation of levels might prevent that.

Conclusions
Autonomous systems have advantages in terms of reducing casualties for their possessor, but they may at the same time make war seem more like a risk-free enterprise, thereby lowering the threshold for starting a war, and more so the higher the proportion of autonomous systems involved. The risk of such a lowering of the threshold is probably smaller for MAS than for UAVs, however. While it is possible to win a war with a swift invasion by using UAVs, that would not be the case with MAS, since they cannot get out of the water, at least not far. They may, however, function as platforms for UAVs. As such, MAS would, I surmise, mainly be used for defensive measures.
The problem with several terms in LOAC being open to interpretation, and the fact that different normative theories might provide conflicting results, are challenges that need to be addressed, particularly when it comes to autonomous systems. This article looked at interpretations from a rule-utilitarian point of view, but it might also be useful to use other normative theories, such as deontological ethics or virtue ethics, when discussing autonomous military systems and the vague terms in LOAC, and compare the results.
It is clearly important to discuss these issues today, even though very advanced robots may seem hypothetical. Philosophical discussions help systematize thoughts on this matter, and ethical debates today can help form the laws of tomorrow. There are movements which aim to "ban killer robots", that is, to prohibit even the development of autonomous military systems. A complete prohibition would seem unlikely, not least since countries that do not abide by such legal restraints might get ahead technology-wise, but it is important to read and develop LOAC with these potentially highly autonomous weapons in mind.
This article has hopefully shown that there is reason to investigate these questions further and that there may be a need to revise LOAC, possibly adding rules that focus specifically on autonomous systems or rules for interpretation. We also need, from the point of view of just war theory, to examine the difficult cases where a country that possesses autonomous systems intends to start a war with a country that does not. In those cases, it is particularly important to determine that the cause truly is just.
Notes
moving target indicator (GMTI) capability, HAE (High-Altitude Endurance) UAVs or the future space-based radar could provide initial indications and warning of preparations for mining activities in coastal areas of interest to naval forces. Submarines of different types can employ uMAS in the littoral areas to execute bottom mapping as well as identification of the boundaries of mined areas. Identification of mine-free areas all the way to the beach would also be important, and the entire area would be kept under surveillance to prevent additional minelaying. In addition, small uMAS can conduct detailed reconnaissance of different types of areas, prior to any offensive operation. 8. Maritime security missions for sMAS are: (1) to collect intelligence data above the ocean surface (e.g. electromagnetic, optical, air sampling, weather) and below the ocean surface (e.g. acoustic signals, water samples, oceanographic or bathymetric information) and (2) to deter enemy attacks on established U.S. and allied positions and materiel, including ships, while (3) keeping manned platforms out of harm's way. (See Department of the Navy 2007.) 9. The Global Hawk Intelligence, Surveillance and Reconnaissance (ISR) UAV, for example, can choose between imaging and maneuvering when maneuvering would ruin an image. It can also choose an alternate airport when necessary, without operator input, if communications are lost. However, these are still programmed choices, and the decision hierarchy must be anticipated at the mission-planning stage, which is more similar to traditional expert-system programming than to the still-developmental neural network, genetic algorithm or more modern artificial intelligence techniques. The level that includes waypoint navigation (en route navigation changes) and manual command of the payload may be adequate for present-day missions, but it does not provide a truly transformational capability. More complex tasks require more decision-making capability. 10.
In Roman law, for instance, it was argued that "we must accept a vessel whether of the sea or of the river, or that sails on some other piece of standing water, or if it should be a raft" (Henderson 2006, 59). According to the International Maritime Dictionary (de Kerchove 1961, 822-823), "vessel" is a general term for all craft capable of floating on water and larger than a rowboat. The term vessel includes every description of water craft or other artificial contrivance used or capable of being used as a means of transportation on water.
The question is whether large uMAS fall under one rule while light-weight uMAS fall under another. 11. My discussion is based not least on Orend (2005). 12. There is little international law regulating jus post bellum, so one must turn to the moral resources of just war theory (Orend 2005). 13. "Självstyrande robotar under debatt" ["Self-navigating Robots under Debate"], radio debate, Ekot, September 4, 2009. Accessed November 1, 2018. http://sverigesradio.se/sida/artikel.aspx?programid=83&artikel=6136761. 14. The first three criteria of jus ad bellum (just cause, right intention and legitimate authority) might be considered mainly deontological (deontological ethics is based on duty, derived from either rationality (Kant), a social contract (contractualism) or natural rights), whereas the other main criteria (last resort, reasonable chance of success and proportionality) are arguably more utilitarian (Orend 2005). Utilitarianism represents a consequentialist type of ethics, where consequences (and maximizing utility) are crucial, rather than motive or duty per se. However, even if we thus distinguish between mainly deontological and mainly utilitarian criteria, all these terms can be interpreted with different normative theories as their base, and this makes a difference in how they are understood and utilized.

Acknowledgement
The author would like to thank the journal's referees and editors for their useful suggestions and help.

Disclosure statement
No potential conflict of interest was reported by the author.

Notes on contributor
Linda Johansson has a PhD in philosophy from the Royal Institute of Technology in Stockholm, with a dissertation entitled Autonomous Systems in Society and War – Philosophical Inquiries. She is currently a lecturer and researcher at the Swedish Defence University in Stockholm.