Average Utilitarianism Implies Solipsistic Egoism

ABSTRACT Average utilitarianism and several related axiologies, when paired with the standard expectational theory of decision-making under risk and with reasonable empirical credences, can find their practical prescriptions overwhelmingly determined by the minuscule probability that the agent assigns to solipsism—that is, to the hypothesis that there is only one welfare subject in the world, namely, herself. This either (i) constitutes a reductio of these axiologies, (ii) suggests that they require bespoke decision theories, or (iii) furnishes an unexpected argument for ethical egoism.


Average Utilitarianism
Average utilitarianism (AU) holds that the overall value of a world is equal to the average lifetime welfare of all welfare subjects in that world.[1] Among population axiologies, AU has some notable and distinctive virtues: for instance, it avoids Parfit's [1984] 'Repugnant Conclusion', and would be chosen by selfish agents from behind a particularly natural version of the veil of ignorance. It has plenty of vices to counterbalance these virtues (see Hurka [1982a], among many others), and has therefore never been an especially popular doctrine. But it has attracted its share of advocates over the years, including Hardin [1968], Harsanyi [1977], and Pressman [2015].[2] In the world that we appear to inhabit, AU has the slightly dispiriting consequence that we can make only a very small difference to the overall value of the world: since the number of welfare subjects is very large, even acts that produce enormous welfare improvements in absolute terms have only minuscule effects on average welfare.
To illustrate: there are currently about 7×10^9 human beings alive on Earth. There are also many billions of mammals, birds, and fish being raised by humans for meat and other agricultural products. And there are perhaps some 10^11 mammals living in the wild, along with similar or greater numbers of birds, reptiles, and amphibians, and a significantly larger number of fish (conservatively 10^13, and possibly far more).[3] This is despite a significant decline in wild animal populations in recent centuries and millennia as a result of human encroachment.
To determine the total number of welfare subjects (by which we can divide total welfare to find an overall average), we must consider past as well as present individuals. (Future individuals count too, of course, but their numbers are much harder to estimate, and might depend on our choices.) Estimates of the number of human beings who have ever lived are on the order of 10^11 [Kaneda and Haub 2018]. But this number is dwarfed by past populations of non-human animals. In wild animal populations, most individuals die young (with smaller animals being both more numerous and shorter-lived), and so birth and death rates in the wild animal population as a whole are unlikely to be less than 1 per individual per year (corresponding roughly to an average lifespan of 1 year). Being extremely conservative, then, we might suppose that all and only mammals are welfare subjects and that 10^11 mammals have been alive on Earth at any given time since the K-Pg boundary event (~66 million years ago), with a population birth/death rate of 1 per individual per year. This implies a 'timeless population' of at least ~6.6×10^18 welfare subjects. Being a bit less conservative (although perhaps still objectionably conservative), we might suppose that all and only vertebrates are welfare subjects and that 10^13 vertebrates have been alive on Earth at any time in the last 500 million years (since shortly after the Cambrian explosion), with a population birth/death rate of 10 per individual per year. This implies a timeless population of at least ~5×10^22 welfare subjects.
Even by the more conservative estimate, we find that providing one unit of welfare to one individual increases average welfare by at most 1/(6.6×10^18) ≈ 1.5×10^-19 units. By the less conservative estimate, increasing someone's welfare by one unit will increase average welfare by at most 1/(5×10^22) = 2×10^-23 units.
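Since these magnitudes matter for everything that follows, it may help to see the arithmetic spelled out. Here is a minimal sketch in Python (the helper function and its names are mine, purely illustrative):

```python
# Rough 'timeless population' estimates from the two scenarios above.

def timeless_population(standing_pop, years, deaths_per_individual_per_year):
    """Cumulative individuals over time, given a constant standing
    population and a constant population death (turnover) rate."""
    return standing_pop * years * deaths_per_individual_per_year

# Conservative: mammals only, since the K-Pg boundary (~66 million years).
conservative = timeless_population(1e11, 66e6, 1)         # ~6.6e18

# Less conservative: all vertebrates, over the last 500 million years.
less_conservative = timeless_population(1e13, 500e6, 10)  # ~5e22

# Effect on average welfare of one extra unit of welfare for one individual:
print(f"{1 / conservative:.1e}")       # ~1.5e-19
print(f"{1 / less_conservative:.1e}")  # 2.0e-23
```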

Solipsism
There is, however, one hypothesis according to which the total number of welfare subjects in the world is quite a bit smaller: solipsism. Solipsism is the proposition that only I exist (or, in your case, the proposition that only you exist), and that what appears to be an external world populated with other individuals is in fact just a figment of my (or, respectively, your) imagination. If solipsism is true, then the total number of welfare subjects, the size of the universal population, is 1.
Solipsism is surely improbable. But just how improbable is it? Like average utilitarianism, it has several notable virtues: it is simple and ontologically parsimonious. It is a natural conclusion to draw from various arguments for external world scepticism (of the kind found, for example, in Descartes [1641] and Berkeley [1710]), if one does not simultaneously accept some 'rescue' hypothesis like theism. (And it is, at the very least, contentious whether those arguments for external world scepticism have ever been satisfactorily refuted.) Solipsism also provides a powerful answer to the otherwise intractable question 'Why am I me?', namely: 'There isn't anyone else I could have been!' And, finally, it is a recurring and enduring idea in the history of philosophical thought, having been entertained (in various forms) by such thinkers as the Buddhist philosopher Ratnakirti (see Kajiyama [1965]), Wittgenstein [1922], and Hare [2009].[4] For our purposes, it will be necessary to go beyond these general observations and to say something about what probability one might reasonably assign to solipsism. A bit more specifically, the interest of the following arguments depends on the claim that one's credence in solipsism should not be absurdly small: not less than, say, 10^-9 (one in a billion).
Of course, assigning probabilities to philosophical hypotheses is at best a matter of rough guesswork; we do not, for instance, have objective chances to guide us. But we can do better than simply pulling plausible-seeming numbers out of thin air. We can, for instance, consider a more complete and fine-grained space of possibilities over which our probabilities should sum to 1, and aim for reflective equilibrium among the credences that we assign to the various possibilities in that space. Fully carrying out that exercise here would be a tedious experience for the reader. But we can do a first approximation, aiming simply to find a plausible lower bound.
It seems clear that my credence that my common-sense view of the world has 'got things basically right', metaphysically speaking, should not be greater than 0.9. That is, given how little we have to go on, the lack of expert consensus in basic metaphysics, and the need to correct for the general human tendency toward overconfidence, I should have at least 0.1 credence that the world is in some way fundamentally very different from how I take it to be. A good chunk of that credence should go to 'some possibility that nobody has ever thought of'. But it also seems overconfident, conditional on my ordinary view of the world being wrong, to have credence greater than 0.9 in that possibility. This leaves at least 1% of my credence (0.1 × (1 − 0.9) = 0.01) to distribute over known revisionary metaphysical hypotheses. And then, to get some sense of a lower bound on my credence in solipsism, I should ask, first, 'Are there any other known revisionary hypotheses that are many orders of magnitude more probable than solipsism?' (to which, it seems to me, the answer is 'no') and, second, 'Are there thousands or millions of known revisionary hypotheses that are at least roughly as plausible as solipsism?' (to which, again, the answer seems to be 'no'). Taken together, these observations suggest that my credence in solipsism should be at most a few orders of magnitude less than 1% (that is, 10^-2).
All in all, then, while it would strike me as somewhat unreasonable to assign solipsism a probability greater than 10^-2, it also seems unreasonable to assign it a probability less than 10^-9. To assign any more extreme probability would not display due modesty about our understanding of matters metaphysical.[5]

Solipsistic Egoism
Now, consider an average utilitarian, Ava, who assigns solipsism a subjective probability of 10^-9, and must choose between taking one unit of welfare for herself, or providing a thousand other welfare subjects with a thousand welfare units each. And let's suppose she believes that, if solipsism is false and the external world/other minds are real (hereafter, 'realism'), then the total number of welfare subjects in the world is 10^18. (For simplicity, I am rounding down our already conservative lower-bound estimate of 6.6×10^18, and ignoring the credence that Ava ought to have in larger population sizes, which would only strengthen our conclusions.) And let's assume (without any loss of generality) that, regardless of whether or not solipsism is true, average welfare prior to Ava's intervention is 0. This situation is summarized in Table 1.

Table 1. Solipsistic swamping for AU

                     Solipsism is false        Solipsism is true     EV
                     (probability 1 − 10^-9)   (probability 10^-9)
Altruistic option    10^6/10^18 = 10^-12       0                     ≈ 10^-12
Selfish option       1/10^18 = 10^-18          1                     ≈ 10^-9

Now, suppose that Ava responds to risk in the standard way, by maximizing expected value. Given the facts stipulated above, the expected value of the altruistic option is (1 − 10^-9) × 10^-12 + 0 × 10^-9 ≈ 10^-12, while the expected value of the selfish option is (1 − 10^-9) × 10^-18 + 1 × 10^-9 ≈ 10^-9. That is, even though the altruistic option almost certainly yields a million times more value than the selfish option, the selfish option has a thousand times greater expected value, because if solipsism is true, and only Ava exists, then she can have astronomically greater impact on average welfare than she could otherwise hope for. Despite the enormous disparity in stakes, we find that Ava ought to choose the selfish option as long as her credence in solipsism is greater than ~10^-12. Conversely, holding fixed her credence in solipsism at 10^-9, we find that she should give her own interests a billion times more practical weight than anyone else's: that is, her interests carry a billion times greater weight in expectation.[6,7]

[5] Among respondents to a survey who gave sharp credences in solipsism, responses spanned the range [0, 0.5], with a median of 0.01 and a mean of ~0.105. Arbitrarily excluding the answers that I take to be clearly irrational (those outside the interval (0, 0.1]) still gives a median of 0.01 but reduces the mean to ~0.048. On the other hand, including a few participants who gave interval credences, at the lower bound of their intervals, gives a median of 10^-6 and a mean of ~0.091. Of course, all of these numbers are quite a bit greater than 10^-9.

[6] As I discovered while revising this paper, the vulnerability of AU to this sort of 'solipsistic swamping' has been noticed once before, in a short blog post by Caspar Oesterheld [2017]. I am not aware of any other discussions of the phenomenon, or of any discussion of its generalization to the other axiologies besides AU discussed below.

[7] We will consider shortly whether maximizing expected value is the right decision rule for Ava to follow here. But I will take it for granted that ethical theories like AU must, in one way or another, include or be combined with some rule for decision-making under risk. This can be done in various ways. For instance, an average utilitarian might adopt a subjective criterion of rightness according to which an option is right iff it maximizes average welfare in expectation, or she might recognize both objective and subjective rightness. But even an average utilitarian who is a univocal objectivist about rightness, holding that an option is right iff it maximizes average welfare in fact, can and should provide a substantive theory of decision-making under risk. She might, for instance, hold that an option is right iff it maximizes average welfare in fact, but rational iff it maximizes average welfare in expectation. Or she might hold that her objective criterion of rightness applies not only to choices among acts but also to choices among decision procedures, and that, at least for an agent who faces many future choices, what will maximize average welfare in fact is to adopt the decision procedure of maximizing average welfare in expectation.
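To make the expected-value comparison in Table 1 easy to check, here is a minimal sketch of the calculation (the function and variable names are mine; it simply encodes the stipulations above):

```python
P_SOLIPSISM = 1e-9   # Ava's credence in solipsism
POP_REALISM = 1e18   # stipulated number of welfare subjects, given realism

def expected_av_change(welfare_to_self, total_welfare_to_others):
    """Expected change in average welfare, mixing the two hypotheses."""
    if_realism = (welfare_to_self + total_welfare_to_others) / POP_REALISM
    if_solipsism = welfare_to_self / 1  # Ava is the only welfare subject
    return (1 - P_SOLIPSISM) * if_realism + P_SOLIPSISM * if_solipsism

altruistic = expected_av_change(0, 1_000 * 1_000)  # 1,000 others, 1,000 units each
selfish = expected_av_change(1, 0)                 # one unit for Ava herself
print(f"altruistic: {altruistic:.1e}")  # ~1e-12
print(f"selfish:    {selfish:.1e}")     # ~1e-9
```

Setting the two expectations equal confirms the crossover point mentioned in the text: the selfish option wins whenever Ava's credence in solipsism exceeds roughly 10^-12.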

Generalizations
We assumed that Ava accepts a particular (very natural) version of average utilitarianism, which has been our exclusive focus so far. But, as Thomas Hurka has emphasized, there are many non-equivalent views that can be described as 'average utilitarian'. He describes [1982a, 1982b] a total of eleven such views, which he names A1-A11. (Ava's view, which I have called AU and which tells us to maximize average lifetime welfare in the timeless population, is Hurka's A1.) These theories are not all equally vulnerable to solipsistic swamping. For A2, which tells us to maximize the sum of momentary welfare averages (that is, averaging welfare at each time and then summing across times), the crucial number that determines how much solipsism magnifies one's efficacy is the size of the present population, rather than the timeless population. For A7, which tells us to maximize the average lifetime welfare of present and future people (ignoring the past), the crucial number is of course the size of the present and future population. So, either of these views somewhat dampens the swamping phenomenon. On several other views (Hurka's A3, A4, A6, A8, A9, and A11), which involve averaging across times, things depend on how long the agent believes that she will exist if solipsism is true, and how long the universe as a whole will contain welfare subjects if solipsism is false. A5 and A10 evade the solipsistic swamping problem entirely; but, as Hurka points out, these views are independently very implausible. Solipsistic swamping also threatens other axiologies that try to capture the intuitive attractions of AU in large-population contexts (like avoiding the Repugnant Conclusion). For instance, consider the view that Hurka [1983] calls 'Variable Value I' (VV1), according to which the value of a population X is given by

\[ V(X) = \bar{X} \cdot f(|X|), \]

where \(\bar{X}\) is the average welfare level in X, \(|X|\) is the number of welfare subjects in X, and f is a function that is strictly increasing, strictly concave, and has a horizontal asymptote.[8] 'Variable value' axiologies are meant to resemble total utilitarianism for small populations and average utilitarianism for large populations, reflecting the intuition that increasing the size of a (happy) population, without changing its average welfare, adds value when the population is small but has diminishing marginal value as population size increases.

[8] This view is also discussed by Ng [1989], under the name 'Theory X′'.
How vulnerable is VV1 to solipsistic swamping? Very roughly, the crucial factor is the ratio r between f(1) and the horizontal asymptote of f. If this ratio is much larger than the minimum population size conditional on realism, then VV1 might agree arbitrarily closely with total utilitarianism, and so be safe from solipsistic swamping. If r is much smaller than that minimum population size, then VV1 will reduce the extreme practical weight that AU gives to solipsism by approximately a factor of r. To illustrate the latter case, suppose that

\[ f(n) = \sum_{i=1}^{n} (1 - 10^{-9})^{\,i-1}. \]

Here r ≈ 10^9, meaning that the value of a large population with a given average welfare level converges to roughly one billion times the value of a singleton population at that same welfare level. Now the problem of solipsistic swamping persists, but is much less extreme. Since f(10^18)/f(1) ≈ 10^9, the relative weight of the solipsistic hypothesis is reduced by a factor of nearly 10^9, and so, using the numbers from our original example (in Table 1), we now find that the altruistic option has greater expected value than the selfish option.

But the problem is far from vanquished. Consider a new agent, Valerie, who (i) accepts VV1 with the f specified above, (ii) assigns solipsism a credence of 10^-9, (iii) accepts our slightly-less-conservative estimate of the minimum population size conditional on realism, which, for simplicity, we will round down to 10^22 (as compared to 10^18 in the case of Ava), and (iv) must choose between taking one welfare unit for herself or providing a thousand other welfare subjects with one welfare unit each (for a thousand units in total, as compared to a million in the case of Ava). For Valerie, like Ava, selfishness is the order of the day, even though she is nearly certain that the altruistic option will produce far more value (Table 2). A bit more generally, given a VV1 axiology with the f specified above, 10^-9 credence in solipsism, and a minimum population of 10^22 conditional on realism, Valerie should give her own interests at least 10,000 times as much weight as anyone else's, because of her credence in solipsism.[9]

Table 2. Solipsistic swamping for VV1

                     Solipsism is false               Solipsism is true     EV
                     (probability 1 − 10^-9)          (probability 10^-9)
Altruistic option    10^3 × f(10^22)/10^22 ≈ 10^-10   0                     ≈ 10^-10
Selfish option       f(10^22)/10^22 ≈ 10^-13          f(1) = 1              ≈ 10^-9

[9] Hurka also describes a view that he calls Variable Value II, which applies an increasing and concave transformation g to average welfare, so that the overall value of a population X is given by \( g(\bar{X}) f(|X|) \). This view behaves like VV1 for our purposes, with the additional caveat that, if g is sufficiently concave or the amount by which an agent can improve her own welfare is sufficiently great, then g will further moderate the difference in stakes between solipsism and realism.
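The figures in Table 2 can be reproduced in a few lines. The sketch below assumes the geometric f stipulated above (written in closed form) and my own illustrative function names:

```python
P_SOLIPSISM = 1e-9
POP_REALISM = 1e22  # Valerie's rounded-down minimum population, given realism

def f(n):
    # f(n) = sum_{i=1}^n (1 - 1e-9)^(i-1), in closed form: strictly
    # increasing, strictly concave, with horizontal asymptote 1e9.
    return (1 - (1 - 1e-9) ** n) / 1e-9

def expected_vv1_change(welfare_to_self, total_welfare_to_others):
    """Expected change in VV1 value, V(X) = avg(X) * f(|X|)."""
    if_realism = (welfare_to_self + total_welfare_to_others) / POP_REALISM * f(POP_REALISM)
    if_solipsism = welfare_to_self * f(1)  # a one-member population
    return (1 - P_SOLIPSISM) * if_realism + P_SOLIPSISM * if_solipsism

print(f"{expected_vv1_change(0, 1_000):.1e}")  # altruistic: ~1e-10
print(f"{expected_vv1_change(1, 0):.1e}")      # selfish:    ~1e-9
```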
On the other hand, the rank-discounted utilitarian (RDU) axiology defended by Asheim and Zuber [2014] faces a more extreme form of solipsistic swamping than even AU does. On this view, the value of a population X is given by

\[ V(X) = \sum_{r=1}^{|X|} \beta^{r} \, w(x_r), \]

where the members of X are indexed in order of increasing welfare by their rank r, \( w(x_r) \) gives the welfare of the rth-worst-off individual in X, and β ∈ (0,1) is a constant that determines the degree to which worse-off individuals are prioritized over better-off individuals. This view does not uniformly discount the interests of each individual in large-population scenarios, as average utilitarianism does: the worst-off individual, for instance, always gets exactly the same weight, regardless of population size. But because the weight given to the interests of better-off individuals diminishes geometrically with their welfare rank, the interests of all but the very worst off can be dramatically discounted in large-population scenarios. For instance, suppose that Ragnar (i) accepts RDU with β = 0.99999 (β closer to 1 implies less discounting of the better off), (ii) assigns solipsism a credence of 10^-9, (iii) believes that there are at least 10^18 welfare subjects, conditional on realism, and (iv) must choose between taking one welfare unit for himself or providing a million other welfare subjects with one welfare unit each. Further, suppose Ragnar knows that none of the million individuals he has the chance to benefit is among the 10^9 worst-off. (If the total number of welfare subjects is at least 10^18, then it is extremely unlikely, in any given choice situation, that one is in a position to help any of the 10^9 worst-off.) On the other hand, Ragnar recognizes that if solipsism is true, then he is very well positioned to improve the welfare of the very worst-off individual in the whole universe, namely, himself.
Because of the power of geometric discounting, Ragnar will find that there is a truly dramatic disparity between the solipsistic and non-solipsistic stakes (Table 3): if solipsism is true, then he can do at least 10^4337 times more good (by acting selfishly) than he could do if solipsism is false (by acting altruistically). RDU, then, produces a much stronger form of solipsistic swamping than even AU.[10]

Table 3. Solipsistic swamping for RDU (β = 0.99999)

                     Solipsism is false            Solipsism is true     EV
                     (probability 1 − 10^-9)       (probability 10^-9)
Altruistic option    10^6 × β^(10^9) ≈ 10^-4337    0                     ≈ 10^-4337
Selfish option       1 × β^(10^9) ≈ 10^-4343       β ≈ 1                 ≈ 10^-9

[10] Of course, the proponent of RDU could always hand-select a β close enough to 1 to avoid solipsistic swamping. But, apart from the ad hoc-ery of this tactic, it seems very likely that such a large β would make RDU practically indistinguishable from total utilitarianism. At the very least, RDU must thread a very tight needle to avoid collapsing into egoism, on the one hand, or totalism, on the other. (And, no matter what β we choose, as long as we hold it fixed in the face of new empirical information, we will be in constant danger of collapsing into solipsistic egoism if we come to believe that the total world population is significantly larger than we thought, or into de facto total utilitarianism if we come to believe that it is smaller.)
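Because β^(10^9) underflows ordinary floating-point arithmetic, Table 3's figures are easiest to verify in log space. Here is a minimal sketch (the setup and names are mine):

```python
import math

BETA = 0.99999      # Ragnar's rank-discount factor
P_SOLIPSISM = 1e-9

# Weight on a welfare unit at rank ~1e9 is beta^(1e9). Work in log10,
# since the value itself is far below floating-point range.
log10_rank_weight = 1e9 * math.log10(BETA)               # ~ -4343
log10_altruistic = math.log10(1e6) + log10_rank_weight   # ~ -4337
print(f"altruistic value, given realism: ~10^{log10_altruistic:.0f}")

# Given solipsism, Ragnar is the rank-1 (worst-off) individual, weight beta:
print(f"selfish EV: {P_SOLIPSISM * BETA:.1e}")           # ~1e-9
```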

Escape Routes
So, not just AU but several other prima facie plausible population axiologies as well are vulnerable (in different degrees) to solipsistic swamping. This suggests that we must either reject all of these axiologies or embrace de facto ethical egoism. But there are various ways in which we might try to avoid that forced choice. I will briefly consider five such escape routes.
First, perhaps we were too generous to solipsism in supposing that it deserves a credence of at least 10^-9. As we have seen, however, even much smaller credences (say, 10^-1000) are enough to make trouble for RDU. And if the minimum population size conditional on realism is significantly larger than we have so far supposed (for example, if the universe contains many other biospheres with welfare subjects, and perhaps many large interstellar civilizations), then AU and VV1 are also vulnerable to swamping from much smaller credences in solipsism. So, reducing our credence in solipsism (as long as it remains non-zero) is at best an insecure escape route.

The potential for solipsistic swamping to survive even many-order-of-magnitude reductions in the probability of solipsism highlights its resemblance to other problem cases in which expected value calculations are dominated by tiny probabilities of extreme scenarios (for example, Pascal's wager [Pascal 1669], Pascal's mugging [Bostrom 2009], or the St. Petersburg game [Bernoulli 1738]). Merely noting this similarity serves at best to categorize the solipsistic swamping problem, not to solve it. But it also suggests that we might avail ourselves of existing strategies for responding to problems of this general type. For instance, a second possible response is to adopt the perennial suggestion of simply ignoring very small probabilities [Buffon 1777; Smith 2014; Monton 2019]. But, without entering into the large existing debate over this proposal, suffice it to say that it carries quite serious drawbacks and is generally considered unsatisfactory [Hájek 2014; Isaacs 2016; Lundgren and Stefánsson forthcoming].
The comparison with Pascal's wager also suggests a third response, analogous to the famous 'many gods' objection: perhaps there are other very-small-population scenarios besides solipsism that are more probable and/or higher-stakes, and that favour some practical conclusion other than egoism. For instance (as one reviewer suggested), perhaps the external world, rather than being a figment of my imagination (à la solipsism), is an illusion created for me by some more powerful being G (for example, God), who will reward or punish me depending on my choices, perhaps rewarding me if I act altruistically and punishing me if I act selfishly. If we further suppose that G and I are the only welfare subjects in the timeless population, then AU et al. will similarly assign outsized importance even to very small credences in this hypothesis.
I am not convinced that this particular hypothesis will reverse or otherwise override the egoistic implications of the solipsistic hypothesis. Consider how it affects the implications of expectational AU, for instance. On the one hand, suppose that G is perfectly morally good. In that case, it seems that I should expect G to 'reward' me by maximizing my welfare whatever I do (since, all else being equal, that will maximize average welfare), and, at any rate, not to punish me for doing whatever would maximize expected average welfare before accounting for G's rewards and punishments. If G is not all-good, on the other hand, then, as far as I can see, all bets are off. There is no particular reason to think that G is more likely to reward me for one course of action rather than for another (for example, for acting altruistically rather than selfishly, or vice versa). In either case, the hypothesis that only G and I exist seems practically inert; or, rather, it reinforces the case for solipsistic egoism by providing another high-stakes (because small-population) scenario in which I have no clear reason to do anything other than to maximize my own welfare.
With that said, I do want to concede the general possibility of other low-probability scenarios involving very small populations whose practical implications contradict and supersede those of solipsism. But it would be remarkable, at least, if the practical implications of these ultra-swamping scenarios turned out to match closely the implications of our ordinary common-sense beliefs about the world. So, while the possibility of 'many gods'-style objections should make us less confident that averagist, variable value, and rank-discounted axiologies support egoism in the last analysis, it only reinforces the general point that the practical implications of these views can be hijacked, and taken in unexpected and counterintuitive directions, by very small credences in very-small-population scenarios.
The 'G' hypothesis, however, also serves to motivate a fourth response to solipsistic swamping: perhaps the problem comes from responding to metaphysical uncertainty by using a decision rule (expected value maximization) that is appropriate only for responding to empirical uncertainty. Metaphysical uncertainty seems to provide especially fertile ground for extreme low-probability hypotheses that, when handled with an expectational decision rule, generate fanatical 'swamping' effects. The 'G' hypothesis arguably illustrates this point. For another example (suggested by a different reviewer), consider a metaphysical hypothesis that endorses unrestricted mereological composition and allows that welfare subjects can overlap with and include one another, such that what we take to be single welfare subjects are in fact vast colonies of overlapping welfare subjects (for example, you-minus-one-neuron, you-minus-a-different-neuron, …). On this hypothesis, plausibly, the vast majority of all welfare subjects will be proper parts of the single largest welfare subject, because its greater number of parts permits a vastly greater number of mereological combinations. In this case, it is total utilitarianism and kindred theories that are in danger of swamping: even a tiny credence in this hypothesis might imply that it is overwhelmingly important, in expectation, to benefit the largest welfare subject, and thereby its proper parts.
Again, I don't want to deny that there might be other metaphysical hypotheses besides solipsism that can generate fanaticism problems for particular normative theories. But I doubt that drawing a decision-theoretic distinction between metaphysical and empirical uncertainty is the right way to deal with these problems. First, the line between empirical and metaphysical uncertainty is far from clear. For instance, although solipsism may be a topic for metaphysicians, I find it hard to say precisely what makes it a metaphysical hypothesis as opposed to a (deeply revisionary) empirical hypothesis about what concreta exist and how they interact with and explain one another. Second, if we don't respond to metaphysical uncertainty as we respond to empirical uncertainty, how should we respond to it? There is a clear analogy here with the recent literature on decision-making under normative uncertainty, where there has been much debate over whether the same decision rules should apply to normative and empirical uncertainty. The two most widely discussed views that distinguish normative from empirical uncertainty are (i) 'My Favourite Theory', according to which an agent should simply act on the single normative theory in which she has greatest credence [Gracely 1996; Gustafsson and Torpman 2014], and (ii) normative externalism, according to which an agent should simply act on the true normative theory, regardless of whether she believes that theory or her evidence supports it [Weatherson 2014; Harman 2015; Hedden 2016]. By analogy, we might avoid problems of metaphysical fanaticism by adopting a 'My Favourite Metaphysics' view, according to which an agent should simply act on the single metaphysical theory in which she has greatest credence, or 'metaphysical externalism', according to which an agent should simply base all her choices on the metaphysical truths, regardless of her beliefs or evidence. To my knowledge, nobody has defended either of these views, and they seem fairly ad hoc and undermotivated. In particular, the standard motivations for the corresponding views about normative uncertainty (for example, that hedging between rival normative theories requires problematic intertheoretic comparisons [Hedden 2016] or involves 'moral fetishism' [Weatherson 2014]) do not transfer to their metaphysical counterparts.[11] In any event, if the right conclusion to take from the solipsistic swamping problem is that we must draw a basic decision-theoretic distinction between empirical and metaphysical uncertainty, this itself would be an important and unexpected conclusion.

[11] On the other hand, several of the objections to My Favourite Theory and normative externalism do seem to transfer: for example, the problem of theory individuation for My Favourite Theory [MacAskill and Ord 2020: 334-5] and the 'dependence problem' for externalism [Podgorski 2020].
Perhaps most importantly, however, solipsistic swamping is just the limiting case of a more general problem for averagist, variable value, and rank-discounted axiologies, which has no intrinsic connection either with metaphysical uncertainty or with very small probabilities: namely, that, when combined with standard expectational decision rules, these views all seem to over-weight small-population scenarios. For instance, consider an average utilitarian who assigns 1% credence to the hypothesis that the universe will only ever contain 10^20 welfare subjects, and 99% credence to the more optimistic hypothesis that advanced future civilizations will eventually support 10^50 welfare subjects or more [Bostrom 2013]. The same absolute welfare improvement matters 10^30 times more in the former scenario and therefore, discounting for her credence, matters 10^28 times more in expectation. Thus, even though she is quite confident in the 'optimistic' hypothesis, she should base her choices almost entirely on the 'pessimistic' hypothesis.[12] More generally, expectational average utilitarians will generally give almost no practical weight to states that imply very large timeless populations, even when those states are very probable. Apart from optimism about the future of humanity, such states might correspond to (i) hypotheses that attribute sentience to more beings (for example, to insects, other invertebrates, or relatively simple artificial intelligences) or (ii) cosmological hypotheses that imply that the universe is very large and hence contains many non-Earth-originating welfare subjects (as well as exobiological hypotheses that imply a higher probability of welfare subjects emerging in a given star system). If we find this general phenomenon of 'small-population swamping' counterintuitive, then ignoring either metaphysical uncertainty or small probabilities won't help, since we cannot assume that small-population scenarios will always be metaphysical in character or deserve de minimis probability.

[12] Of course, this is complicated by the facts that (i) if the optimistic hypothesis is true then agents like us may be able to have much greater impact on total welfare, and so perhaps a similar level of impact on average welfare, and (ii) we might be in a position to significantly influence the population size of future civilization.
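The arithmetic behind the two-hypothesis example above is again easy to check; a minimal sketch (the encoding is mine):

```python
# Expected effect on average welfare of a one-unit improvement, under the
# 'pessimistic' and 'optimistic' hypotheses about the timeless population.
hypotheses = [("pessimistic", 0.01, 1e20),
              ("optimistic", 0.99, 1e50)]

for name, credence, population in hypotheses:
    # credence-weighted change in average welfare from one welfare unit
    print(f"{name}: {credence / population:.1e}")
# pessimistic: 1.0e-22; optimistic: 9.9e-51. The 1%-credence hypothesis
# dominates the expectation by a factor of ~1e28.
```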
A fifth and final strategy, then, is to attempt a more thoroughgoing abandonment of standard expectational decision theory: perhaps averagist, variable value, and rank-discounted axiologies must be equipped with their own bespoke theories of decision-making under risk that avoid the tyranny of small-population scenarios. This strikes me as the most plausible way for average utilitarians et al. to avoid solipsistic swamping, and the only possible way to avoid the more general phenomenon of small-population swamping. But it is not immediately obvious what these revised decision rules should look like, and, in departing from standard decision theory, they are likely to incur significant theoretical costs.[13] And, in any case, if we conclude that particular views in population ethics cannot safely appeal to the best-developed and most widely accepted theory of decision-making under risk, this itself would also be a notable conclusion. Unless we can find some clever decision-theoretic escape, then, we are left with a conditional: if certain otherwise plausible axiologies are correct, then the best thing that we can do, ex ante, to make the world a better place is to act selfishly (to greater or lesser extents, depending on the axiology). This leaves us, of course, with two further options: reject all of these axiologies, or embrace (de facto, impartially motivated) ethical egoism.[14]

[13] Here is one example. Teruji Thomas suggests an extension of average utilitarianism that ranks risky prospects by expected total welfare divided by expected population size [2016: 150]. This view straightforwardly avoids solipsistic swamping. But it has the very significant downside of violating statewise dominance: that is, it can prefer options that yield worse outcomes in every possible state of the world. (As proof, consider a choice between a lottery L1 that yields one individual with welfare 10 in state S1 or 9 individuals with welfare 20 in state S2, and a lottery L2 that yields 9 individuals with welfare 11 in S1 or 1 individual with welfare 21 in S2, where S1 and S2 are equiprobable. L2 statewise dominates L1, by average utilitarian lights, but the 'expected total welfare divided by expected population size' method of evaluating lotteries gives L1 a value of 19 and L2 a value of 12.) A simpler strategy is just to change the shape of the utility function: for example, to hold that average utilitarians should maximize the expectation, not of average welfare, but of some non-linear transformation of average welfare. But this does not seem to help very much: in any region where the transformation is concave, solipsistic swamping will be less extreme when the possible outcomes in the solipsistic state are better than the possible outcomes in the realist state, but correspondingly more extreme when the possible outcomes in the solipsistic state are worse than the possible outcomes in the realist state. Where the transformation is convex, the pattern is reversed. If the possible outcomes, given realism, are located very close to an inflection point where the transformation goes from convex to concave, then solipsistic swamping might be mitigated more generally. But finding ourselves at such an inflection point can only be a matter of luck, and any help that it gives us with the solipsistic swamping problem falls apart if we come to believe that the possible outcomes (average welfare levels), given realism, are significantly better or worse than we previously thought.

[14] For helpful feedback on earlier versions of this paper, I am grateful to Teruji Thomas, Michael Pressman, Dean Spears, audiences at MIT and the 2021 APA Eastern Division Meeting, and two anonymous reviewers for this journal.
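The statewise-dominance failure described in note 13 can be verified directly. A minimal sketch (the lottery encoding is mine):

```python
# Each lottery maps two equiprobable states to (population, per-capita welfare).
L1 = {"S1": (1, 10), "S2": (9, 20)}
L2 = {"S1": (9, 11), "S2": (1, 21)}

def ratio_of_expectations(lottery):
    """Thomas-style rule: expected total welfare / expected population size."""
    exp_total = sum(0.5 * n * w for n, w in lottery.values())
    exp_pop = sum(0.5 * n for n, _ in lottery.values())
    return exp_total / exp_pop

print(ratio_of_expectations(L1))  # 19.0
print(ratio_of_expectations(L2))  # 12.0
# Yet L2 yields higher average welfare than L1 in both states (11 > 10 and
# 21 > 20), so the rule prefers the statewise-dominated option L1.
```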

Disclosure Statement
No potential conflict of interest was reported by the author.

ORCID
Christian J. Tarsney http://orcid.org/0000-0002-6324-7145