Measuring, modeling, and managing systemic risk: the missing aspect of human agency

It is problematic to treat systemic risk as a merely technical problem that can be solved by natural-science methods and through biological and ecological analogies. There appears to be a discrepancy between understanding systemic risk from a natural-science perspective and the unresolved challenges that arise when humans with their initiatives and interactions are included in systemic-risk considerations. It is therefore necessary to investigate possible fundamental differences and similarities of systemic risk with and without accounting for human involvement. Focusing on applied and implementation aspects of measuring, modeling, and managing systemic risks, we identify three important and distinct features characterizing such fundamental differences: indetermination, indecision, and responsibility. We contend that, first, including human initiatives and interactions in systemic-risk considerations must emphasize a type of variability that is especially relevant in this context, namely the role of free will as a fundamental source of essential indetermination in human agency. Second, we postulate that collective indecision generated by mutual uncertainty often leads to the suspension or alteration of rules, procedures, scripts, and norms. Consequently, the associated systemic risks cannot be incorporated into explanatory models, as the new causal rules cannot be predicted and accounted for. Third, analogies from biology and ecology, especially the idea of 'contagion', downplay human agency, and therefore human responsibility, promoting the false belief that systemic risk is a merely technical problem. For each of these three features, we provide recommendations for future directions and suggest how measuring, modeling, and managing approaches from the natural-science domain can best be applied in light of human agency.


Introduction
Systemic risk is currently a prominent research area, due to events of the recent past (May and Arinaminpathy 2010). However, there appears to be a discrepancy between understanding systemic risk from a natural-science perspective and the unresolved challenges that arise when humans and their initiatives and interactions are included in systemic-risk considerations. In fact, at present, most systemic-risk research does not account for such human agency (Page 2015), resulting in analogies often being drawn between human systems and biological or ecological ones (Haldane and May 2011). For systems in which human agency plays an important role, these comparisons can be misleading and sometimes obscure the problem rather than illuminate it. For example, the appropriateness of interpreting banking networks as ecosystems can be questioned, especially regarding the role of human agency during the financial crisis and the extent to which human actors can manage, adapt to, and control risks (Peckham 2013). Therefore, from an applied and implementation perspective, the current technical view on systemic risks must be broadened to include the special nature of human agency. A key prerequisite for the successful interfacing of human-agency aspects with natural-science perspectives is to better understand key differences between systemic risks with and without accounting for human involvement.
Our analysis is based on an interdisciplinary review including findings from physical, ecological, economic, financial, and social systems (Table 1). Such a comparison of theoretical and applied systemic-risk studies conducted in diverse fields is indispensable for a better understanding of the true nature of systemic risks, and can shed light on how and why systemic risks originate and when and where they can be decreased (Helbing 2013). Indeed, in nearly all studies of systemic risks, the question of how to reduce them is highly relevant (Cooley et al. 2009). However, before managing systemic risks, they need to be appropriately measured and modeled. Therefore, the three aspects of measuring, modeling, and managing systemic risks are interlinked, and all of them need to be considered as equally important, because each subsequent aspect relies on the previous one (Pflug and Römisch 2007). Although there are similarities, in many ways the measuring, modeling, and managing of systemic risks should be treated fundamentally differently in systems that include human agents than in systems that do not.
Generally speaking, systems such as social systems that consist of situated, adaptive, and heterogeneous agents whose interactions produce higher-order structures and functionalities tend to be non-predictable, hard to describe or define, and prone to large events characterized by long-tailed distributions (Centeno et al. 2015; Page 2015). Unfortunately, as a system's size and complexity grow (and with them, the importance of interdependent relations), increasingly less data are usually available to measure and model the system's behavior and the potentially associated systemic risks.

Table 1. Selected key papers on systemic risk according to discipline.
The aforementioned difficulties in the measuring, modeling, and managing of systemic risk have stimulated a diversity of methodological approaches across fields. Note that these contributions often did not use the term 'systemic risk', which until recently was very much anchored in financial discourses (Cline 1984). In fact, most pre-2007 methodological contributions to the measuring, modeling, and managing of systemic risk focused on biological, ecological, and infrastructure systems. The approaches applied in these fields have changed quite rapidly within just a few decades, with implicit assumptions regarding equilibria increasingly being recognized as inadequate for understanding system behavior (Gardner and Ashby 1970; Holling 1973; May 1973; Pimm and Lawton 1978; DeAngelis and Waterhouse 1987; Naeem and Li 1997; Naeem 1998; and especially the review by McCann 2000). Various ecological instabilities in the 2000s, but particularly the global financial crisis of 2007/2008, have increased interest in, and funding for, complex adaptive behavioral systems analysis, especially in regard to systemic risk (Gai and Kapadia 2010; Allen, Babus, and Carletti 2012; Amini, Cont, and Minca 2016; Georg 2013; Staum 2013; Caccioli et al. 2014). However, few systemic-risk studies have explicitly included humans in their analysis (Page 2015), and systematic comparisons between approaches based on social science and natural science are not yet available. The present paper aims to fill part of this knowledge gap by discussing important features and misunderstandings, as well as possible ways forward.
To present our position, we divide this paper into a discussion of a 'natural-science perspective' (Section 2) that focuses on the measuring, modeling, and managing of systemic risk for systems that do not include humans and a 'human-agency perspective' (Section 3) that recognizes at least three important and distinct features characterizing systems involving human agents: indetermination, indecision, and responsibility. Of course, human aspects are not absent in the natural-science domain, where risks are typically defined according to human preferences and interests, even when concerning purely technical systems. Likewise, systemic risks are related to common notions of loss, dysfunction, or collapse as viewed from the perspectives of one or more well-defined human groups. Rather, the difference between the natural-science perspective and the human-agency perspective lies in the explicit exclusion or inclusion, respectively, of human initiatives and interactions in the risk-producing processes. It should be noted that the basic phenomena associated with systemic risks (such as emergence, circular causality, and tipping points) are conceptually the same with and without human actors, and are addressed, for example, in complexity theory. However, when it comes to the measuring, modeling, and managing of systemic risks in systems involving human actors, the initiatives of, and interactions among, these humans must be considered in detail: as we will discuss below, systems involving human actors therefore require much more complex analyses than, and differ in fundamental features from, those without them. We end this paper with conclusions and suggestions for future research directions (Section 4).

Natural-science perspective
The natural-science perspective focuses on systems in which human initiatives and interactions are generally not included in the scope of study. This mostly concerns environmental, economic, and technical systems. Below, we first identify the most important measures of systemic risk that have been used in the past or were recently developed within different disciplines. Then we give an overview of approaches for modeling systemic risk. Finally, we discuss a variety of available management options. This threefold synopsis serves as a basis for our subsequent discussion about human-agency aspects relevant to systemic risk.
Before discussing the measurement aspects of systemic risk below, we emphasize that these are often hard to disentangle from modeling aspects and that analyses of systemic risk can commence either from measuring or from modeling systemic risk.

Measuring systemic risk
Systemic risk typically involves cascading effects among interconnected agents, leading to collective losses, dysfunctions, or collapses. Quantitative measures of systemic risk have traditionally been defined and investigated mostly in the financial domain, especially in regard to banking systems (Table 2). The emphasis is usually on the interdependencies and contributions of nodes to systemic risk, with mechanistic approaches seeking to identify the mechanisms through which damages can propagate. In the ecological domain, field studies and experiments strive to measure the full impacts of perturbations in ecosystems, such as secondary species extinctions or biomass reductions. Different types of shocks can be studied, including so-called pulse or press perturbations, as well as the removal of species (Dunne, Williams, and Martinez 2002; Kondoh 2003; Scheffer and Carpenter 2003; Ives and Carpenter 2007). In socioeconomic systems, case studies can help identify the mechanisms that cause systemic risk. Models that integrate such mechanisms are built to identify or construct potential indicators such as DebtRank, which is a network metric of the interbank liability network (Battiston et al. 2012a). These models are used not only to measure systemic risks but also to measure the contribution of individual nodes to these risks.

Table 2. Selected quantitative measures of systemic risk.

Conditional value at risk (CoVaR) and ΔCoVaR: CoVaR measures the value that is at risk in a system at a given quantile level q, conditional on an event stressing a set of institutions (i.e. when X is the negative random variable describing the system-level losses triggered by the event, CoVaR satisfies Prob(X ≤ CoVaR) = q); ΔCoVaR measures how CoVaR changes when the system's "normal" operation becomes further stressed. (Adrian and Brunnermeier 2011)

Systemic risk index (SRISK): Measures the amount of capital an institution would need to raise in order to function normally given an event that stresses a set of institutions. (Brownlees and Engle 2017)

Distress insurance premium (DIP): Measures the expected system-level loss given that the loss triggered by an event that stresses a set of institutions exceeds a pre-defined threshold level. (Huang, Zhou, and Zhu 2009)

Default impact (DI): Measures the total loss in capital in a system caused by the cascade triggered by the default of an institution, excluding the loss from this initial default. (Cont, Moussa, and Santos 2010)

Contagion index (CI): Measures the expected system-level loss conditional on the event stressing a set of institutions. (Cont, Moussa, and Santos 2010)

DebtRank: Measures the recursively defined impact on a system resulting from an event that stresses an institution, allowing only for impact pathways that do not visit the same institutional links twice. (Battiston et al. 2012a)

Measures in copula models: Measure the conditional value at risk (CoVaR, see above) given an event that stresses a set of institutions, with institutions having a nonlinear probabilistic dependency structure described by a copula. (Brechmann, Hendrich, and Czado 2013)

Set-valued measures: Measure the set of additional capital allocations that make an initial capital allocation acceptable. (Feinstein, Rudloff, and Weber 2017)
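The conditional logic shared by measures like CoVaR can be sketched with a toy Monte Carlo simulation (an illustrative reconstruction of the idea only, not the estimation procedure of Adrian and Brunnermeier; the correlation and sample sizes are invented):

```python
import numpy as np

def covar(system_losses, inst_losses, stress_q=0.95, q=0.95):
    """Illustrative CoVaR: the q-quantile of system-level losses,
    conditional on an institution's loss exceeding its own
    stress_q-quantile (i.e. the institution being stressed)."""
    threshold = np.quantile(inst_losses, stress_q)
    stressed = system_losses[inst_losses >= threshold]
    return np.quantile(stressed, q)

rng = np.random.default_rng(42)
inst = rng.normal(0.0, 1.0, 100_000)                  # losses of one institution
system = 0.6 * inst + rng.normal(0.0, 1.0, 100_000)   # correlated system losses

var_uncond = np.quantile(system, 0.95)     # unconditional value at risk
covar_stressed = covar(system, inst)       # value at risk under stress
delta_covar = covar_stressed - var_uncond  # a simple DeltaCoVaR analogue
```

Because system and institution losses are positively correlated, conditioning on the institution being stressed shifts the system's loss quantile upward; that shift is exactly what ΔCoVaR quantifies.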
The analysis of ecosystem dynamics has also shown that systemic risk can result in regime shifts. For example, Folke et al. (2004) performed a comprehensive review of the evidence for regime shifts in terrestrial and aquatic ecosystems. Such tipping points are consistent with the behavior of nonlinear dynamical models and boil down to the mathematical concept of bifurcations. In particular, regime shifts can occur more easily if a system's resilience has been reduced, e.g. through the removal of functional groups or via alterations of the disturbance regime to which a system had previously adapted (Schröder, Persson, and de Roos 2005). Indicators of resilience, a notion originally defined by Holling (1973) as the largest magnitude of disturbance a system withstands without shifting to a different state, can thus be inverted to measure systemic risks.
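The fold (saddle-node) bifurcation underlying many such regime shifts can be made concrete with a textbook model of a harvested population, dx/dt = x(1 − x) − h (our own minimal sketch, not a model taken from the cited studies):

```python
def equilibria(h):
    """Equilibria of dx/dt = x(1 - x) - h, a logistic population
    harvested at constant rate h. For h < 1/4 there are two
    equilibria, x = (1 +/- sqrt(1 - 4h)) / 2 (the upper one stable,
    the lower one unstable); at h = 1/4 they collide in a fold
    bifurcation, and for h > 1/4 the population collapses."""
    disc = 1.0 - 4.0 * h
    if disc < 0:
        return []  # past the tipping point: no equilibrium remains
    root = disc ** 0.5
    return [(1.0 - root) / 2.0, (1.0 + root) / 2.0]
```

Resilience in Holling's sense corresponds here to the distance between the stable and unstable equilibria, which shrinks to zero as h approaches the bifurcation at h = 1/4; monitoring that distance is one way of inverting resilience into a risk indicator.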
Instead of measuring the risk itself, it is often proposed to measure a system's internal features that enable systemic risk, such as the structure of interactions among individuals or institutions. For example, in ecosystems, species variability (Naeem and Li 1997; Naeem 1998; McCann 2000) or diversity measures (Gardner and Ashby 1970; May 1973; Pimm and Lawton 1978) are often considered as indicators of stability, although this has been the subject of considerable debate (see below).

Modeling systemic risk
Systemic risks that emerge in networked systems often result from propagation phenomena. Modeling is a crucial tool for organizing knowledge on such complex systems, identifying their vulnerabilities, designing indicators, guiding the collection of data, and developing interventions. Most models of systemic risk consider a network of interactions between nodes that can represent individuals, species, firms, institutions, or processes. At the highest level of abstraction, models from graph theory analyze how generic classes of networks break down when some nodes or links are removed (Albert, Jeong, and Barabási 2000; Callaway et al. 2000; Cohen et al. 2000; Albert and Barabási 2002). Such representations can apply to networked infrastructures such as the Internet or power grids, and have, for instance, been used to establish differentiated responses to random and targeted attacks on such infrastructures (Albert, Jeong, and Barabási 2000).
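The differentiated response to random and targeted removals can be reproduced qualitatively with a minimal sketch (our own simplified preferential-attachment generator and component search, written for illustration; the network size and removal fraction are arbitrary):

```python
import random

def barabasi_albert(n, m, seed=0):
    """Minimal preferential-attachment graph: each new node links to
    m distinct existing nodes chosen with probability proportional
    to their degree. Returns a set of (new, target) edges."""
    rng = random.Random(seed)
    repeated = []            # node labels repeated once per incident edge
    targets = set(range(m))  # first new node connects to the seed nodes
    edges = set()
    for new in range(m, n):
        for t in targets:
            edges.add((new, t))
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = set()
        while len(targets) < m:  # m distinct, degree-biased targets
            targets.add(rng.choice(repeated))
    return edges

def largest_component(n, edges, removed):
    """Size of the largest connected component after removing nodes."""
    adj = {v: [] for v in range(n) if v not in removed}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].append(b)
            adj[b].append(a)
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

n, m, frac = 200, 2, 0.4
edges = barabasi_albert(n, m)
degree = {v: 0 for v in range(n)}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
k = int(frac * n)
hubs = set(sorted(degree, key=degree.get, reverse=True)[:k])
rand = set(random.Random(1).sample(range(n), k))
targeted = largest_component(n, edges, hubs)    # attack on the hubs
accidental = largest_component(n, edges, rand)  # random failures
```

On such heavy-tailed networks, removing the same fraction of nodes at random leaves a much larger connected component than removing the hubs, which is the robust-yet-fragile signature discussed in the cited work.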
However, systemic risk is not purely determined by the static structure of these networks, but is also generated by dynamic processes that occur on them. The simplest dynamic is the process of contagion (Watts 2002; Dodds and Watts 2004), which applies, e.g. to diseases and computer viruses (May and Lloyd 2001; Newman 2002). In other systems, the redistribution of load in a network is the crucial dynamic that generates systemic risks. For example, in power grids, when an electric station breaks down, the electrical load it was carrying is redistributed to other stations, some of which may receive a larger load than they can withstand, thereby triggering new failures (Crucitti, Latora, and Marchiori 2004a; Zhao, Park, and Lai 2004; Wang and Rong 2011). In financial networks, if a bank defaults, its liabilities, which are assets of other banks, are written off, and this may precipitate new defaults (May and Arinaminpathy 2010; Haldane and May 2011; Battiston et al. 2012a, 2012b). This kind of domino effect has also been studied in supply chains, as suppliers and customers are financially interdependent (Fujiwara 2008; Delli Gatti et al. 2009).
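This domino dynamic can be sketched as a toy default cascade (a deliberately simplified model; the banks, capital buffers, and exposure figures below are invented for illustration):

```python
import numpy as np

def default_cascade(capital, exposures, initial_default):
    """Toy cascade: when bank j defaults, each surviving creditor i
    writes off its exposure exposures[i, j]; once a bank's accumulated
    losses reach its capital, it defaults too and propagates further.
    Returns the set of defaulted banks."""
    n = len(capital)
    losses = np.zeros(n)
    defaulted = {initial_default}
    frontier = [initial_default]
    while frontier:
        j = frontier.pop()
        for i in range(n):
            if i in defaulted:
                continue
            losses[i] += exposures[i, j]
            if losses[i] >= capital[i]:
                defaulted.add(i)
                frontier.append(i)
    return defaulted

capital = np.array([1.0, 1.0, 1.0, 7.0])
exposures = np.array([
    [0.0, 1.2, 0.0, 0.0],  # bank 0 lends to bank 1
    [0.0, 0.0, 1.2, 0.0],  # bank 1 lends to bank 2
    [0.0, 0.0, 0.0, 0.0],
    [2.0, 2.0, 2.0, 0.0],  # bank 3 lends to all three, but holds more capital
])
defaulted = default_cascade(capital, exposures, initial_default=2)  # {0, 1, 2}
```

Here the default of bank 2 brings down banks 1 and then 0 along the chain of liabilities, while bank 3 absorbs combined write-offs of 6 against its capital of 7 and survives; slightly larger exposures would bring down the whole network.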
Modeling frameworks aimed at generalizing these different dynamics have been proposed (Lorenz, Battiston, and Schweitzer 2009). In reality, the nodes may dynamically react to local failures and adapt to systemic risk. Such behavior can be modeled using the frameworks of evolutionary game theory and/or agent-based modeling. For instance, evolutionary dynamics have recently been used to evaluate how firms adapt their strategies for mitigating systemic risks given different supply-chain structures (Colon et al. 2017). By modeling endogenous network formation (Kirman 1997), i.e. by allowing nodes to rewire their links, it is possible to study how a network prone to systemic risk may emerge from the behavior of nodes (Allen and Gale 2000; Jain and Krishna 2001; Boss, Summer, and Thurner 2004; Gardiner 2004; Hanel, Kauffman, and Thurner 2007).
Many models help elucidate the role of specific features of a system in amplifying or alleviating systemic risks. For example, Colon and Ghil (2017) highlighted the detrimental impact of unbalanced supply delays on systemic risks in economic production networks. From this understanding, models can be used to design and test indicators of systemic risk (e.g. Battiston et al. 2012a). This crucial step is enabled by the increasing availability of detailed microdata and of economic agent-based models that have been calibrated to reflect important macroeconomic behavior (Poledna et al. 2018). Robust findings derived from such models can then inform the management of systemic risks.

Managing systemic risk
A key mechanism through which modern societies can manage risk is insurance (Geneva Association 2010), or more generally, diversification (Kunreuther, Pauly, and McMorrow 2013; IPCC 2012). While frequent-event risks (e.g. car accidents) can efficiently be handled through insurance, diversification becomes increasingly difficult for extreme-event risks (Kessler 2014; Linnerooth-Bayer and Hochrainer-Stigler 2015). In addition, due to the increasing connectedness of real-world networks (e.g. in financial systems) and across such networks (e.g. through the interplay between financial systems and supply chains), it has become more difficult to truly diversify a portfolio, as there might be hardly traceable pathways that connect two events. Thus, an approach to the management of systemic risk is to modify the topology of the underlying network and to steer it toward safer regions of its operating space (Poledna and Thurner 2016).
The hypothesis that increasing a network's structural diversity can improve its stability has long been debated in ecology (Gardner and Ashby 1970; May 1973; Pimm and Lawton 1978). This diversity-stability debate has led to helpful distinctions among different types of diversity and stability, but overall, contradicting results have been found regarding the impacts of diversification (Tilman and Downing 1996; Naeem and Li 1997; Naeem 1998; McCann 2000). This is because diversification may enable risk sharing and facilitate post-failure recovery, but can also multiply the number of pathways through which risks propagate. Echoing the controversy in the ecological domain, the impacts of diversification on systemic risk in the economic domain have recently been debated (Allen and Gale 2000; Gardiner 2004; Gai and Kapadia 2010; Haldane and May 2011; Allen, Babus, and Carletti 2012; Battiston et al. 2012b; Georg 2013; Staum 2013; Caccioli et al. 2014).
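The ambivalent effect of diversification can be made concrete with a stylized calculation (our own toy setup, not drawn from the cited studies): a bank with capital c spreads a total exposure E over d counterparties, each of which fails independently with probability p.

```python
from math import ceil, comb

def default_prob(p, d, E, c):
    """Probability that the bank fails: each of its d counterparties
    fails independently with probability p, each failure costs E/d,
    and the bank defaults once accumulated losses reach its capital c
    (assumes c > 0)."""
    k_min = ceil(c * d / E)  # smallest number of failures that is fatal
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range(k_min, d + 1))

# With p = 0.1, E = 1, c = 0.3, diversification is non-monotonic:
p_1 = default_prob(0.1, 1, 1, 0.3)  # one large exposure
p_2 = default_prob(0.1, 2, 1, 0.3)  # two exposures, each still fatal alone
p_4 = default_prob(0.1, 4, 1, 0.3)  # four exposures; two failures needed
```

Moving from one to two counterparties here increases the default probability (each loss still exceeds the capital buffer, and there are now two ways to be hit), while further diversification reduces it; across a whole network, however, every extra link is also an extra channel of contagion, which is the tension at the heart of the debate.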
Instead of using a one-size-fits-all rule of thumb, reshaping a network's topology can be based on analyzing the detailed contributions each network node is making to systemic risk. Using this approach, decision makers can try to identify nodes that are too big and/or too interconnected to fail, as well as so-called keystone nodes that in times of failure cause large secondary effects or the complete failure of the network (Paine 1969; Mills, Soulé, and Doak 1993; Dunne, Williams, and Martinez 2002; Crucitti, Latora, and Marchiori 2004b). On this basis, Poledna and Thurner (2016) postulated that the management of financial systemic risk is essentially a technical matter of restructuring financial networks (Cooley et al. 2009; Huang, Zhou, and Zhu 2010; Adrian and Brunnermeier 2011; Brownlees and Engle 2017; see also Elsinger, Lehar, and Summer 2006; Gleeson and Cahalane 2007; Lorenz, Battiston, and Schweitzer 2009; Payne, Dodds, and Eppstein 2009; Mistrulli 2011; Roukny et al. 2013).
Interventions to change a network's structure can also be implemented in reaction to critical trigger events. Motter (2004) studied this issue focusing on how to modify, through the intentional removal of nodes, a network's structure during the period after an attack and before cascading effects unfold. While he found that the removal of the most central nodes through an attack could trigger global cascades, the intentional removal of some nodes could significantly decrease the cascade size (see also Brummitt, D'Souza, and Leicht 2012). The best timing for implementing such interventions is a crucial and still largely unanswered question, although indicators such as DebtRank can be used to devise early-warning signals.
To summarize, blindly applying diversification strategies to reduce systemic risks may have unintended and undesirable consequences. Changing a network's topology may be a more promising path forward than thinking about diversification possibilities. However, most analyses discussed so far have in common that human initiatives and interactions are not included and the management of systemic risk is treated as a merely technical problem that can be solved by natural-science methods alone. While partly true, especially from an applied and implementation perspective, we suggest that this view must be broadened to include the special nature of human agency, as examined in the next section.

Human-agency perspective
We now focus on the human-agency dimensions of systemic risks and expand upon these in relation to the preceding discussion of measuring, modeling, and managing approaches. Rather than going into the contested terrain of identifying and comparing the strengths and weaknesses of social and cultural theories of human behavior and social processes (Renn and Klinke 2004; Florin and Xu 2014; Kasperson 2017), we restrict our scope to applied and implementation aspects and focus on three important and distinct features: indetermination, indecision, and responsibility. We contend that, first, including human initiatives and interactions in systemic-risk considerations must emphasize a type of variability that is especially relevant in this context, namely the role of free will as a fundamental postulate of understanding human behavior and the resultant social dynamics, implying an essential indetermination of human agency. Second, instances of collective indecision generated by mutual uncertainty can be those in which rules, procedures, scripts, and norms are suspended or altered. Consequently, these are examples of high systemic risk that cannot readily be incorporated into explanatory models, as the new causal rules typically cannot be predicted and accounted for. Third, analogies from biology and ecology, especially the idea of 'contagion', downplay responsibility in human agency, leading to the false belief that systemic risk is a merely technical problem. Addressing these three features, we call for their integration into systemic-risk analyses, which could be started via an integrative, adaptive, and iterative process equipped with a toolbox-based approach, as detailed below. For each feature, we provide a path forward toward their integration with measuring, modeling, and managing approaches rooted in the natural-science domain.

Indetermination
When human actions are included in approaches from the natural-science domain, it is often proposed to resolve the indetermination of human behavior using the hypothesis of rationality. Although there are fundamental limitations, these approaches may successfully be applied to specific contexts, in which human initiatives and interactions exist but are strongly constrained (Farmer and Geanakoplos 2009). For example, car traffic can be modeled well using fluid dynamics, because the initiatives of, and interactions among, car drivers are strongly constrained by the road network, traffic rules, and car capabilities. Similarly, in some financial markets, the behavior of real traders can be reproduced by 'zero-intelligence robot traders', because the way such traders behave is strongly regulated. Additionally, agent-based models can readily include cascade-enabling mechanisms such as imitation and social learning (which can underpin the alteration of conventions or social norms followed by agents while reflecting their uncertainty and ignorance) or trust and reputation (which can underpin changes in the willingness of agents to take risky decisions, e.g. in systems such as the money market). They can thereby address some important aspects of mutual-decision processes that can lead to crises, crashes, and bank runs (Bouchaud 2013).
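The zero-intelligence idea, often attributed to Gode and Sunder's budget-constrained random traders, can be sketched in a minimal double auction (our own simplification; the valuations and costs are arbitrary illustration values):

```python
import random

def zero_intelligence_market(values, costs, rounds=10_000, seed=1):
    """'Zero-intelligence' double auction: a randomly chosen buyer
    bids uniformly below its valuation, a randomly chosen seller asks
    uniformly above its cost, and a trade occurs whenever the bid
    meets the ask. Budget constraints alone keep prices inside the
    range spanned by costs and valuations."""
    rng = random.Random(seed)
    buyers, sellers, prices = list(values), list(costs), []
    for _ in range(rounds):
        if not buyers or not sellers:
            break
        b = rng.randrange(len(buyers))
        s = rng.randrange(len(sellers))
        bid = rng.uniform(0.0, buyers[b])   # never bid above value
        ask = rng.uniform(sellers[s], 1.0)  # never ask below cost
        if bid >= ask:
            prices.append((bid + ask) / 2)  # trade at the midpoint
            buyers.pop(b)
            sellers.pop(s)
    return prices

prices = zero_intelligence_market([0.9, 0.8, 0.7, 0.6], [0.1, 0.2, 0.3, 0.4])
```

The agents have no strategy at all, yet every transaction price lands between the lowest seller cost and the highest buyer valuation, illustrating how strong constraints can substitute for behavioral realism in such models.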
However, in most cases, the fundamental indetermination of human behavior is crucial for systemic risks and cannot be addressed through the tools and methods of the natural-science domain. The self-immolation of Mohamed Bouazizi in Tunisia in December 2010, which started the Arab Spring, is an example of free will (suicide as a form of protest) and of what consequences it can have (a systemic event) (Pollack et al. 2011). Even in very controlled environments, indetermination plays an important role in the realization of systemic risk. The Cuban Missile Crisis is another case in point: three senior officers of the Soviet submarine B-59, believing they were under attack, considered launching nuclear torpedoes, and among them Vasili Alexandrovich Arkhipov cast the single vote against the launch that prevented the nuclear strike (and, presumably, an all-out nuclear war) (Savranskaya 2005). Most human decisions do not lead to systemic events, and indetermination plays a role in systemic-risk analyses only if the corresponding action can spread through the system.
Conceptually, there are three types of impacts through which human decisions can contribute to systemic risk: pyramidal, pivotal, and sequential, which we now consider in turn (for a more detailed discussion, see Ermakoff 2015). A pyramidal impact occurs when a group or set of nodes (sometimes called 'followers' in the network literature) attains its sense of direction and determination from one actor or a small set of actors acting as one (sometimes called 'leaders'). In other words, individuals assign control over their actions to one other agent. This typically occurs in formally and hierarchically organized structures or networks. It is worth noting that not only the actions of the leader can have effects on the organization or group as a whole; a decision not to act or an inability to act can also have a pyramidal impact. The aforementioned collective decision of the three senior officers during the Cuban Missile Crisis can serve as an example. Their decision, due to Arkhipov's determination, led the submarine crew not to attack, thereby preventing a sequential impact (see below) from taking place that ultimately could have led to nuclear war. In the natural-science domain, analyses of controllability in networks often assume pyramidal impacts, which are used to direct networks to desirable states (e.g. Liu et al. 2008). If human agency unfolds through pyramidal impacts, many risk measures and modeling approaches can be applied from the aforementioned fields.
A pivotal impact occurs when one or a few actors reconfigure a balance of forces between sub-systems through their action or lack thereof. This presupposes that the sub-systems are well defined. In democratic societies, this type of impact is most visible in legislative procedures when the passing of a bill depends on the vote of one or a few actors. In ecological networks, keystone species can have pivotal impacts. More generally, for dynamics occurring on networks it is often the network hubs (i.e. the nodes with many links or neighbors) that have pivotal impacts. It is worth noting that, in contrast to pyramidal impacts, visibility is not required for pivotal impacts. If human agency unfolds through pivotal impacts, the resultant systemic risks can be thoroughly analyzed in their respective natural-science domain, as discussed in the previous section: salient examples are DebtRank and other centrality measures.
A sequential impact occurs when an individual action triggers a cascading process of behavioral alignment and/or reaction. In social processes, this must be based on observable information about behaviors and takes place sequentially. The aforementioned example of the self-immolation of Bouazizi in 2010, which initiated large-scale protests in Tunisia and in the Arab world, can be seen as a sequential impact. Various formal models of diffusion capture such impacts, including threshold and cascade models (Granovetter 1978), and associated measures are often used in these cases. If human agency unfolds through sequential impacts, these can be modeled via tools for measuring, modeling, and managing from the natural-science domain: for example, fault-tree analysis, which has often been applied to simulating possible accidents arising from sequential impacts in nuclear power plants (Henley and Kumamoto 1981).
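Granovetter's (1978) threshold model captures how a sequential impact depends on the full distribution of individual thresholds (a standard textbook formulation; the example populations below are invented):

```python
def granovetter_cascade(thresholds):
    """Each agent joins once the fraction of agents already
    participating reaches its personal threshold; iterate from the
    zero-threshold instigators to a fixed point and return the
    final participation fraction."""
    thresholds = sorted(thresholds)
    n = len(thresholds)
    active = 0
    while True:
        new_active = sum(1 for t in thresholds if t <= active / n)
        if new_active == active:
            return active / n
        active = new_active

# Uniform thresholds 0, 0.01, ..., 0.99: each joiner tips the next agent.
full = granovetter_cascade([i / 100 for i in range(100)])  # -> 1.0
# Replace the single instigator (threshold 0) and nothing happens at all.
none = granovetter_cascade([0.01] + [i / 100 for i in range(1, 100)])
```

The two runs differ in a single agent's threshold, yet one produces a full cascade and the other none, which is why the indeterminate action of one instigator can matter so much for systemic risk.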
In summary, human decisions have the potential to enable systemic risk only if the resultant actions can spread through a system. Various mechanisms of spreading are possible and can be modeled via tools from the natural-science domain. However, the aforementioned examples from the Arab Spring and the Cuban Missile Crisis highlight how human actions are fundamentally indeterminate and yet must be considered in systemic-risk analyses, not least because some such indeterminate actions can enable spreading mechanisms that are unknown beforehand, or preclude them in ways that are unknown beforehand. From a practical point of view, this means that continuous monitoring of a system is necessary to account for indetermination and for possible but as yet unknown spreading mechanisms that can cause systemic risks. Some of these mechanisms can be analyzed via tools for measuring, modeling, and managing from the natural-science domain, and corresponding results can help establish guidelines for how to respond. As discussed below, a toolbox-based approach may be appropriate in these cases.

Indecision
While the preceding discussion of indetermination has focused on single decision makers and on how the impacts of their actions may spread throughout a system, there is an additional fundamental feature of human collectives contributing to systemic risks that is not usually captured within the natural-science domain, namely epistemic impacts. According to Ermakoff (2015), an epistemic impact occurs when an action alters expectations about collective outcomes in a way that crucially shapes the likelihoods of these outcomes and, as a consequence, the lines of conduct adopted by the members of a group. Here, the term 'epistemic' refers to the views actors form about other actors' knowledge and beliefs. Events that undermine common expectations make the future more indistinct. Consequently, behaviors become indeterminate as expectations about the future lose their ground, leading to situations of collective indecision. Indicators of human behavior relevant for measuring systemic risks may thus focus on detecting correlates of such collective indecision arising from epistemic impacts, which may open up the possibility of a radical shift in stance or in the probability that systemic risks are realized. As an example of such a situation of collective indecision and its possible consequences, Ermakoff (2015) analyzes the political and social rupture that occurred in Versailles on August 4th, 1789, which historians view as the revolutionary breakthrough that ended feudalism in France. As Ermakoff (2015, 67) showed through an event-structure analysis for that day, 'collective indeterminacy punctuated the drastic reshuffling of patterns of social and political relations'. The Ukraine crises of 2013-2014 that followed the Ukrainian government's decision not to establish closer economic integration with the European Union also had an epistemic impact. This was possible because there was already a situation of collective indecision beforehand. According to Göler (2015), the establishment of norms, structures, and values in Ukraine for eventually joining the European Union caused collective indecision also within the Russian government, as this was interpreted as a geopolitical threat, which accordingly led to actions inspired by strategies of traditional power politics.
Collective indecision is sometimes said to lead to breaks in causal chains, based on the understanding that such situations lie outside of any explanatory model, as it is not clear which new rules will subsequently emerge or be put in place. Epistemic impacts can therefore be game changers. In some situations, it might be possible to assess, if only roughly, the probabilities with which different new rules will arise. In other situations, it might be possible to observe correlates of collective indecision and to use them as early-warning signals for the realization of systemic risks. For example, this could include indicators that measure how people are seeking behavioral cues from other people or processes, or indicators of the degree to which their questions to others (e.g. related to concerns or uncertain outcomes) are actually answered, rather than merely being reflected back at them.
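The kind of early-warning indicator suggested here can be sketched computationally. The following is a minimal, hypothetical illustration (our construction, not a method from the cited literature): suppose some observable behavioral correlate of collective indecision is recorded as a time series; a sustained rise in its lag-1 autocorrelation over a sliding window is one commonly proposed statistical signature of an approaching critical transition.

```python
import statistics

def rolling_lag1_autocorr(series, window):
    """Lag-1 autocorrelation of `series` over a sliding window.

    A sustained rise in this statistic (often alongside rising variance)
    is one commonly proposed correlate of an approaching critical
    transition, and could serve as a rough early-warning signal when
    applied to a behavioral indicator of collective indecision.
    """
    out = []
    for i in range(window, len(series) + 1):
        w = series[i - window:i]
        mean = statistics.fmean(w)
        var = sum((x - mean) ** 2 for x in w)
        if var == 0:  # constant window: autocorrelation undefined, report 0
            out.append(0.0)
            continue
        # covariance between the window and its one-step-lagged copy
        cov = sum((w[j] - mean) * (w[j + 1] - mean) for j in range(window - 1))
        out.append(cov / var)
    return out
```

In practice, the indicator's trajectory would be compared against a baseline period rather than interpreted in absolute terms, and such a signal would be only one input among several, given that epistemic impacts may escape any single statistical model.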
As Little (2002) noted, the complex coincidences that cause a given system to fail are rarely foreseen by the people involved and are evident only in hindsight. This leads to hindsight bias (Fischhoff 1975), through which developments that were not seen or understood while a crisis was unfolding become more obvious in retrospect. As a consequence, reviewers of such developments may oversimplify causes and effects, or highlight a single element while overlooking its multiple contributing factors. Furthermore, hindsight bias tempts reviewers to arrive at simple solutions or to blame individual decision makers, thereby making it more difficult to determine what really went wrong. As suggested by Hindmoor and McConnell (2013), to avoid this backward mapping, the focus should be on placing epistemic impacts in the context of their time of occurrence, without being unduly influenced by advance knowledge of the crisis that will ultimately happen. By analyzing the global financial crisis of 2007/2008, these authors demonstrate that comparing backward and forward mappings of early-warning signals can provide important indications of what went wrong, and of which individual decisions and instances of collective indecision contributed. Such assessments may help salient actors to recognize alternative options for how to act in similar situations, thereby reducing the likelihood of similar crises occurring in the future.

Responsibility
The question of who should manage systemic risk is fundamentally different in systems in which humans are operating than in those in which this is not the case. Perhaps most exemplary of our claim is the paper by Haldane and May (2011), which gave rise to financial ecology as a new field of research (see also May, Levin, and Sugihara 2008). As noted by Peckham (2013), 'little attention has been paid to the manner in which financial downturns have been analogized [...] as forms of "contagion"', after the concept of contagion had been developed, primarily in the late 1990s, to describe the mechanisms underlying the 1997 Asian financial crisis. In a similar vein, construing banking systems as ecosystems could be perceived as de-emphasizing the role of human agency in financial crises, concerning the extent to which human actors are able to, and are charged with responsibilities to, manage and control systemic risks. Likewise, the notion of 'toxic financial products' (Nerghes, Hellsten, and Groenewegen 2015) as biological agents implicitly suggests that they need to be understood in terms of biological processes, regarding them as a natural hazard rather than as a man-made peril. In general, using terms related to 'contagion', 'ecosystems', and 'toxicity' in the context of financial crises risks downplaying the importance of human agency, and therefore responsibility, within human-operated systems (a problem already addressed by Helbing 2013). Accordingly, there is a danger that epidemiological, ecological, and chemical metaphors and models used in financial theory, social analysis, and health research obscure more than they illuminate, while stoking fears and raising expectations that may unduly constrain the parameters of public debate.
In fact, humans as adaptive agents have a much broader repertoire for dealing with systemic risks than biological or ecological agents. This prominently includes the capability to foresee and act against systemic risks, which agents in biological and ecological systems are usually unable to do. On this basis, humans can take a flexible and integrative approach to systemic risk (Frank et al. 2014). By not emphasizing these aspects, biological and ecological analogies for human-operated systems fuel the idea that systemic risks are a merely technical problem. As in situations associated with many other technological developments, attempts at managing systemic risks in such a technical way are likely to create new risks. More broadly, sociological theories such as the 'risk society' developed by Beck (1992) stipulate that modernity reflexively relies on increasing complexity to manage the very risks it creates, which causes disasters whose origins are often deeply embedded in the construction of social organizations and institutions. Theoretically, Beck (1992, 1999) with his idea of a reflexive risk society, Parsons (1951) with his idea of negative feedback loops to maintain order, Luhmann (1993, 1995, 2012) with his focus on systems theory and complexity reduction, and Giddens (1991) with his positive emphasis on modernization have provided major contributions from a social-science perspective to understanding how societies develop.
Such theories can be used as a starting point to address the issue of how risk evolves in societies. For example, in their review of the emergence of global systemic risk, Centeno et al. (2015) arranged theories of risk within the social sciences along a continuum from realist to constructivist. For the former, a fundamental assumption is that the likelihood and impact of any specified risk can be assessed given its inherent characteristics. For the latter, the existence and nature of risk derive from its political, historical, and social context. Therefore, risks do not exist independently of society, but are created socially in response to the need to regulate populations, interactions, and processes. Hence, the identification of current and emerging systemic risks has to be seen as a response of social systems and therefore has to be understood from a social-science perspective rather than from a natural-science perspective, as only the former can define which systemic risks need to be considered in a given society (Luhmann 2012).
To summarize, taking ideas about 'contagion', 'ecosystems', and 'toxicity' out of their original contexts might implicitly detract from recognizing human agency and human responsibility. This is problematic, as systemic risks are man-made. Helbing (2013) suggested (re)designing systems together with suitable management principles, which could include adjusting network topologies and using self-organization mechanisms (Poledna and Thurner 2016). However, one could criticize these and similar suggestions as being unable to prevent the rise of other systemic risks: self-organization mechanisms often bring systems near critical thresholds for triggering phase transitions (Bak and Paczuski 1995; Axelrod and Cohen 1999), and thus cannot solve the root problem. In fact, from a reflexive-modernity perspective, as discussed above, such suggestions could be seen as being in line with what a risk society is actually doing: producing new risks by managing old ones. Due to the high costs of the crises associated with systemic risks, the management of systemic risks to prevent such crises should rank highly on the collective agenda (Little 2002). Systemic-risk reductions also typically require collective effort, because consequences are shared but causes are beyond the control of single agents. Hence, the management of systemic risks poses a societal challenge that needs to be addressed from an integrative perspective taking into account complementary insights from the social and the natural sciences.

Conclusions and future directions
We began our discussion by noting the discrepancy between the understanding of systemic risk from a natural-science perspective and the unresolved challenges that arise from accounting for human initiatives and interactions. We have identified fundamental differences and similarities between systems with and without human agency. From the perspective of measuring and modeling systemic risks, many approaches already developed in the natural-science domain can in principle also be used in cases requiring the integration of human decisions, initiatives, and interactions. These include approaches accounting for pyramidal, pivotal, and sequential impacts. Practically, however, continuous monitoring of a system is advisable, as the fundamental indetermination of human behavior, and of possible spreading mechanisms affected by human behavior, can cause systemic risks to occur in ways that are unknown in advance. Once specific spreading mechanisms have been identified comprehensively, they can be analyzed via tools from the natural-science domain, and corresponding results can be used as guiding principles for how to respond to the resultant systemic risks. Epistemic impacts leading to collective indecision and game changers, however, cannot be accounted for via tools from the natural-science domain. Nevertheless, early-warning indicators can be constructed based on learning from past events, provided that hindsight bias is accounted for. Such indicators of observable correlates of epistemic impacts, collective indecision, and game changers could be useful for detecting when traditional tools are breaking down and for setting up responses based on alternative analyses. Finally, analogies drawn from biological systems to social systems can be problematic, as they may implicitly or explicitly prevent human agency, and therefore responsibility, from being adequately recognized and addressed. We see the management of systemic risks as a societal challenge that needs to be addressed from an integrative perspective taking into account complementary insights from the social and the natural sciences.
Our analysis suggests that traditional approaches to risk management are of limited utility for managing systemic risks (Florin et al. 2018). Based on a critical review of these limitations, Frank et al. (2014) assessed adaptive approaches as being superior for managing systemic risks and suggested moving toward an increasingly adaptive risk-management framework, focusing on solutions with multiple benefits and moving away from optimized static interventions to continuous processes of adaptation. Interestingly, these recommendations closely agree with how the IPCC community advises dealing with future risks of climate change and other global changes (IPCC 2012). In this context, Page (2015) argued that the portfolio of current approaches to managing complex systems and systemic risks is a collection of failed attempts. From a more positive angle, one may view those approaches as representing an ensemble of perspectives, each of which sheds light on some specific aspects of this complex issue that may be needed in the future. Consequently, all of the approaches presented to date have their value, but applying only a single one of them to a given problem may inappropriately bias the view. Hence, interventions that offer robust solutions, in the sense that they decrease systemic risk from many different perspectives and under many different future circumstances, may be most appropriate. This is even more so if we acknowledge that validating models with past data (e.g. using time series) is difficult, whether models are built in a bottom-up evolutionary fashion or engineered top-down. The advantages of using an ensemble of systemic-risk models and associated management options are also closely related to the fact that complex systems usually have the potential to produce multiple types of qualitatively different outcomes (e.g. they may reach an equilibrium state after some time, may produce distinct patterns in space or cycles in time, or may just behave in a stochastic fashion). For systems of high complexity, in which large uncertainties about future phenomena cannot be included in any one explanatory model, an integrative, adaptive, and iterative approach based on pluralistic methodologies seems most appropriate, as it enables the possibility of continuously managing, learning from, reframing, and even transforming the system to decrease systemic risks.
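The preference for robust interventions across an ensemble of models can be made concrete in a toy decision rule. The sketch below is our illustrative construction, not a method from the cited literature: each candidate intervention is evaluated under every model in the ensemble, and the intervention whose worst-case residual risk is smallest is selected (a maximin rule).

```python
def robust_score(intervention, models):
    """Worst-case residual risk of an intervention across an ensemble
    of models, each mapping an intervention to an estimated risk."""
    return max(model(intervention) for model in models)

def pick_robust(interventions, models):
    """Select the intervention minimizing worst-case risk (maximin)."""
    return min(interventions, key=lambda iv: robust_score(iv, models))

# Three hypothetical stylized models, each assessing risk differently
ensemble = [
    lambda x: abs(x - 2),        # model A: risk minimized at x = 2
    lambda x: abs(x - 4),        # model B: risk minimized at x = 4
    lambda x: (x - 3) ** 2 / 2,  # model C: risk minimized at x = 3
]
```

Models A and B, taken individually, would favor interventions 2 and 4; the maximin rule instead settles on the option that no model in the ensemble rates very badly, trading model-specific optimality for robustness across perspectives.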
How to start this integrative, adaptive, and iterative risk-management process is therefore an important question. A toolbox-based approach embedded within such a process may be a promising way forward. A toolbox typically links methods, models, and approaches in a way that highlights the complex nature of systemic-risk analyses, thus emphasizing the existence of multiple entry points to the measuring, modeling, and managing of systemic risks. Importantly, a toolbox created in this way could provide a new understanding of, and appreciation for, the diversity of tools and methods that currently exist. This could contribute to a shift in emphasis from methodology and technology focused on single means of analysis to an understanding that systemic-risk problems are multi-faceted and require a multitude of approaches and methodologies, including traditional risk considerations, to deliver insights under a broad range of circumstances. Finally, the right level of abstraction needs to be chosen depending on the research question at hand. As discussed, human-agency aspects may play a less important role for some systemic-risk challenges (e.g. supply-chain management) than for others (e.g. managing political disruption). This question is related to the optimal complexity of a model, i.e. to how detailed a model should be, and can be, given limited resources. Due to the increased calibration potential resulting from more and more data becoming available and from the steady rise in computing power, an increase in optimal model complexity has been observed over the last few decades. Such increasing model complexity often decreases the possibility of fundamentally understanding how model predictions depend on model structures and model inputs, an understanding that is typically better provided by rather stylized, less complex models. This observation further underscores the need to adopt a multi-faceted, multi-model approach within an integrative, adaptive, and iterative risk-management framework.

Table 2. Selected key measures of systemic risk from the risk domain and financial domain.