Systemic risks and solar climate engineering research. Integrating technology ethics into the governance of systemic risks

Abstract The paper explores how the framework of systemic risks can help govern the risks imposed through solar climate engineering research. The central argument is that a systemic perspective on risk is a useful tool for analysing and assessing the risks imposed through Stratospheric Aerosol Injection (SAI) research. SAI is a form of climate engineering that could cool the planet by enhancing its albedo through the injection of aerosols into the stratosphere. Researching such a technology creates systemic risks with a strong sociotechnical component. This component consists of the potential societal harm that a developing or new technology might cause to existing norms, values, institutions, and politics. The systemic risk framework is a valuable heuristic for this case, given the complex interdependencies of societal systems, infrastructures, markets, etc. At the same time, the systemic risk framework can be enhanced through the inclusion of more robust and reflective ethical consideration of technological risks. Consequently, this article seeks to supplement the systemic risk governance framework with insights from technology ethics. Specifically, the paper offers an ethically reflective conception of societal value dynamism and of stakeholder engagement and participation, tying both to existing systemic risk governance approaches.


Technology ethics and climate risks
The impacts of climate change are assured to be both harmful and difficult to predict. The Intergovernmental Panel on Climate Change reports highlight this fact when the potential damage caused by climate change is expressed through a probability scaling from 'high confidence' to 'low confidence' (IPCC WGI 2023, 4). The future of how climate change will impact the planet is, while clearly harmful for many species, ecosystems, and societies all over the world, also shrouded in risk and uncertainty. Accordingly, the assessment and governance of climate risks has become paramount for climate action. One way of governing and abating some of these risks is through new technologies. As with many risks, industrialized societies seek to partially offset or remedy climate risks through innovation and ever more complex infrastructures and systems. Climate change, for better or worse, is thus often framed as a challenge for technological innovation.
The severity and urgency of climate change has led to expert-driven debates on the research and possible deployment of solar climate engineering through Stratospheric Aerosol Injection (henceforth SAI). SAI entails continuously spraying aerosols, e.g. via aircraft, into the stratosphere, where they would reflect a fraction of the incoming solar radiation. The resulting veil of aerosols would then serve as a temporary sunblock. This method is considered by some scholars as an additional means to slow the rate of planetary warming or even stabilize temperatures while humanity stops emitting CO2 and eventually achieves progress in the active reduction of atmospheric CO2 (Keith 2013; Crutzen 2006).
However, the process of researching SAI poses a systemic risk to the energy transition through its sociotechnical impacts, i.e. the potential societal harm that the mere possibility of a new technology might cause to existing norms, values, institutions, and politics. From a climate policy perspective, this harm manifests itself as societal reactions that run contrary to the overarching goal of climate action, namely the sustainable and just abatement of CO2 emissions; this is the risk of 'mitigation deterrence' (McLaren 2016).
To assess and possibly govern this risk, this paper applies existing frameworks of systemic risk governance to SAI research governance. It further expands on these frameworks by supplying additional insights from the ethics and philosophy of technology, as well as from risk ethics, so far underexplored by the systemic risk community. In contrast to conventional risks, systemic risks are risks that endanger the functioning of vital systems, e.g. infrastructure, supply chains, healthcare systems, and others (Florin and Nursimulu 2018; Schweizer 2021).
Framing SAI research as a systemic risk contextualizes the endeavour within the many systems and structures it would affect, most importantly how it could negatively impact the energy transition. What distinguishes this approach from other work on the risks imposed through SAI so far (e.g. Tang and Kemp 2021) is the specific focus on SAI's research process, rather than SAI deployment.
The main argument is that a well-governed research program, taking into account the systemic risks of SAI research, could reduce the risk of mitigation deterrence. The systemic risk governance framework can be enhanced to account for the dynamic nature of societal values and the moral relevance of participation when evaluating New and Emerging Sciences and Technologies (NESTs). This requires a theoretically grounded conceptualization of both participation and societal values in the context of NESTs, which this paper provides. Ethics and philosophy of technology scholarship can supplement existing systemic risk governance frameworks, providing reflective insights about the interdependent relationship between technology research and societal values, and establishing a richer understanding of participation as a means to achieve justice in a democratic decision-making process.
The paper first connects societal risks to technology, exploring how the two are reciprocally related in modern industrialized societies. On this basis, the risks invoked through SAI research are presented in the subsequent section. By way of applying the systemic risk framework to the case of SAI research, two supplements for this framework are proposed, in the form of societal value dynamism and stakeholder engagement and participation. These two supplements are argued to create a more politically acceptable and ethically justifiable form of governing the risks of SAI research. The concluding section reiterates the central findings and suggests space for future investigation with regard to risk governance for New and Emerging Sciences and Technologies in general.

Contextualizing societal risks and technology
Modern, industrialized societies often rely on technological innovation to handle the unwanted side effects of daily business, facilitate certain activities, or explore new possibilities for action. This inevitably raises ethical questions. Whether and to what extent we should rely on technology and innovation, and what kinds of technologies we ought to research and build, are all normative questions that require moral reasoning and political engagement. New technologies often lead to new possibilities for action, resolve existing problems at the cost of causing new ones, and add to the complexity of existing infrastructure and sociotechnical systems. Electric vehicles, for example, resolve the problem of pollution and direct CO2 emissions of traditional combustion engines, while also causing problems for the supply of electricity and for the just and environmentally friendly extraction of rare metals. Additionally, they require a modification of the existing mobility infrastructure in the form of new charging stations, as well as large-scale transmission cables.
Due to the complexity and vastness of current societies, made up of interrelated markets, politics, institutions, infrastructures, etc., it would be imprudent for researchers, policy-makers, and the public to try to anticipate all the outcomes of researching and introducing New and Emerging Sciences and Technologies (NESTs). Rather, exploring and evaluating the impact of potential NESTs has been framed as a dynamic process that puts specific focus on possible harms and benefits, i.e. risks and uncertainty. From an ethical perspective, this has led to increasing attention being paid in recent years to the assessment of risk imposition related to technological development (Hansson 2017; Taebi 2017; Grunwald 2015), with various focal points, e.g. public acceptance, policy-making, and participation. The conceptual aim of risk-ethical assessment is to understand the moral implications of imposing risks (e.g. as domination, see Maheshwari and Nyholm 2022), and the possible justification for the imposition of risks through technology (Hansson 2013).
It is through this lens of continuous risk imposition, assessment, and abatement that I analyse the risk of SAI research leading to harmful consequences such as 'mitigation deterrence', i.e. a reduction in societal willingness to mitigate. Consequently, it is first important to take a closer look at the interrelated nature of societal values and technological risks. I refer to the continuous innovation and deployment of new technologies, constantly reshuffling the imposition of individual, communal, and societal risks, as the sociotechnical risk cycle. The sociotechnical risk cycle is a schematic process and is neither an exhaustive nor an exclusive means by which such risks can be explored. The following example describes this cycle.
The flammability of a building might be drastically reduced through the introduction of some building material. However, at a later point in time, research and empirical data indicate that this building material is carcinogenic and thus increases the likelihood of workers developing lethal diseases. Consequently, a new material is developed that is both less flammable and not carcinogenic. Unfortunately, this new material turns out to be extremely damaging to the environment due to the materials necessary for its production. From a schematic perspective, a technology was introduced, a given risk was identified on the basis of empirical insights and value claims (e.g. health and the environment ought not be harmed), and subsequently abated through the introduction of a new technology. Finally, the new technology created novel or different risks, and the constant process of risk identification, abatement, and novel identification repeats itself.
Ulrich Beck describes a similar process in his seminal work titled Risk Society (1992). Beck specifically highlights the relationship between producing and increasing wealth and the risks that this production incurs (Beck 1992, 19): 'In advanced modernity the social production of wealth is systematically accompanied by the social production of risks. Accordingly, the problems and conflicts relating to distribution in a society of scarcity overlap with the problems that arise from the production, definition and distribution of techno-scientifically produced risks.' Note that this cycle is difficult to break, not least because a continuous growth paradigm seems to undergird any reason for political and economic action in industrialized capitalist societies.
With regard to a new technology, even if it resolves past problems it will introduce some form of novel or known risks. This is due to the inherently dynamic character of risk in a value-pluralist and dynamic society that is open to the emergence of new values and norms, as well as the overhauling of existing norms and values. Values, in this sense, refer to 'what a person or group of people consider important in life' (Friedman, Kahn, and Borning 2008, 70). They are ideals that people seek to realize.
The sociotechnical risk cycle and its persistence hinge, I argue, on the following three central characteristics of risk. First, the identification of risks is the result of both empirical observation and moral evaluation. In the case of building materials, we find that different values were at play at different moments in time: first, the safety and security of houses; second, the health and well-being of the workers; and third, some form of ecological integrity. In short, the assessment of those risks was based on specific values at each moment in time, which shows how risks are value-laden phenomena. The value-laden aspect of risk is also acknowledged in the sociological tradition of risk assessment. Thus, when risk scholar Ortwin Renn describes risks as something that has 'an impact upon what humans value' (Renn 2008, 2), some form of moral argument needs to define what that human value is. What constitutes a value is an ethical question that cannot be answered through observation alone. The fact that identifying risks is a political and ethical process highlights the need for meaningful participation that accounts for historical injustices and structural inequities. As the building material example demonstrates, there needs to be political action claiming that the environment is worthy of protection and that harming it therefore constitutes a risk. For this political action to be effective, however, a decision-making process is required that allows for meaningful participation among all stakeholders and potentially remedies existing power imbalances. Risks, defined as potential harms, accordingly cannot but rely on an evaluative premise, and the identification of risks is always also a political process.
A second characteristic that the above example highlights is the constant reshuffling of risks in liberal, industrialized societies. The possible harm to one value (e.g. safety) is abated through the introduction of a new material. However, the intervention does not end here, since new values emerge, or existing values become more prominent. A labour union might be politically active and demand better conditions for its workers, which entails not having them be subject to carcinogenic substances. Thus the value of individual and communal health and well-being, perhaps also the value of social equity in terms of how workers are treated, becomes a new prominent factor in the existing risk assessment. This dynamic development of values is inevitable in value-pluralist societies, wherein public opinion on what should and should not matter can become political and institutionalized, and thus leads to and requires a continuous reassessment of existing risk impositions. This dynamism is a central challenge for the implementation and maintenance of technology and infrastructure, as these structures are themselves the manifestation of certain values that may change over time. For example, twentieth-century flood risk management systems that exclusively focus on safety and ignore their environmental impacts may no longer be deemed adequate by the public due to changing values (Taebi, Kwakkel, and Kermisch 2020).
The third and final aspect is the way in which risks are often abated or mitigated in industrialized societies, namely through technological innovation and intervention. This third aspect helps us understand the relationship between societal values, technological development, and risk. Actions and behaviours on a societal level are often not fundamentally changed just because society realizes that these behaviours carry risks. Rather, since these behaviours are manifestations of specific values or notions of a good life, applied science and policy seek ways to mitigate or abate the risk through technological innovation. People may not want to stop insulating their housing units, since insulating housing is assumed to be a valuable action (to save money, to protect the environment, etc.). Instead of adapting or changing our behaviour (and the values that drive it), we seek to sidestep the risks that behaviour induces by changing its context. This is what some scholars have framed as the so-called 'techno-fix' (Huesemann and Huesemann 2011; Sand, Hofbauer, and Alleblas 2023; Jongsma and Sand 2017), i.e. seeking to resolve societal ills through technological innovation.
The persistence of the sociotechnical risk cycle is particularly glaring in the context of climate change. This persistence invites the claim that humanity cannot innovate its way out of climate change, since the constant reliance on innovation, based on an assumption of continuous growth, is a driver rather than a solution of the ongoing ecological crisis. Innovation often ignores the fundamental structural causes of climate harms, while doing little to alleviate climate injustices. The way climate change harms vulnerable communities in particular is often tied to the precarious contexts within which these groups and individuals find themselves, which in turn are co-products of underlying marginalization through political oppression, exploitation, and/or discrimination. The solution to these injustices is not technological development, but societal change found in the politics of, e.g., degrowth or deep ecology movements (Kerschner et al. 2018).
At the same time, categorically dismissing NESTs as a potential additional means to combat climate change comes with its own ethical and practical challenges. From an ethical perspective, it is not a given that NESTs necessarily lead to more climate injustice and further marginalization. Rather, it depends on how these technologies are envisioned, and what role they are imagined to play in the energy transition. Decentralization, a main feature of the degrowth paradigm, might be advanced through innovation (Pesch 2018). Further, overcoming climate change and the ecological disaster accompanying it necessitates swift, sustainable, and effective intervention. As the recent IPCC report shows, there is no chance of achieving the 1.5 or 2 degrees Celsius goal without the implementation of Negative Emissions Technologies (IPCC WGI 2023, 184): 'Past, present and future emissions of CO2 therefore commit the world to substantial multi-century climate change, and many aspects of climate change would persist for centuries even if emissions of CO2 were stopped immediately (IPCC 2013b). According to AR5 [the fifth Assessment Report], a large fraction of this change is essentially irreversible on a multi-century to millennial time scale, barring large net removal ("negative emissions") of CO2 from the atmosphere over a sustained period through as yet unavailable technological means.'
Accordingly, the climate crisis that humanity has produced now requires all hands on deck to resolve it, or at the very least to make it as little catastrophic as possible. The Paris goals are also closely linked to avoiding certain irreversible climate tipping points that would have catastrophic impacts on the biosphere as a whole (Armstrong McKay et al. 2022). In addition, current mainstream analyses of climate change, such as the IPCC reports, tend to underrepresent the more dangerous and potentially even more catastrophic climate scenarios (Kemp et al. 2022). The realization that climate change has irreversible impacts and that it could turn out much worse than anticipated underscores the need to consider an array of interventions, including, but not limited to, technological approaches.

Stratospheric Aerosol Injection research and sociotechnical implications
Given the urgency for action that climate change causes, some scientists and experts have explored the possibility of directly influencing the climate through technological means. One way of doing this is through Stratospheric Aerosol Injection (SAI), which entails continuously spraying aerosols, e.g. via aircraft, into the stratosphere. These aerosols would create a veil that could temporarily block some of the incoming sunlight. The central premise of this intervention is that it could serve as an additional means to slow the rate of planetary warming, giving humanity more time to eventually reduce atmospheric CO2 (Keith 2013; Crutzen 2006).
The prospect of a fully-fledged, global research program on SAI has led to heated debates throughout the natural and social sciences dealing with climate change. Some scholars from a variety of fields have publicly called for a moratorium on deployment (Biermann et al. 2022), which prompted others to push for a 'balanced' account of SAI and its research process (Wieners et al. 2023). Calls against SAI research usually point out that researching a technology inevitably leads to some form of deployment. This is commonly referred to as a 'lock-in' or 'slippery slope', wherein political or economic incentives outweigh the scientific and justice-related rationales when deciding whether to go ahead with the development of a certain technology. Applied to SAI, the research runs the risk of inevitably leading to deployment despite the scientific community's best intentions (McKinnon 2019; cf. Callies 2019).
Another important warning against SAI research is that investing resources and political capital into such a risky and potentially unjust form of climate mitigation is both imprudent and morally problematic (Biermann et al. 2022; Hamilton 2013, 2014; Schneider 2019). In contrast, those defending the idea of a globally implemented research project point towards the catastrophic climate situation humanity already finds itself in and argue that we cannot make a well-informed choice about SAI if we do not understand it well enough (Wieners et al. 2023; Winsberg 2021). Accordingly, only a well-coordinated SAI research process with proper political and ethical governance can tell us whether the technology is viable and justifiable.
Following this latter line of reasoning, the argument developed here assumes that a research ban is in and of itself problematic, since it presumes that technology development is the direct application of scientific knowledge and forecloses any risk or uncertainty. Such a ban relies on a potentially deterministic understanding of technological development, which seems to unduly simplify the way new technologies are developed and their sociopolitical embeddedness.
With this assumption in mind, I focus on the research process of SAI and make no claim about deployment. The evaluation of SAI research lends itself to an ethical risk assessment, since its outcomes, worries, promises, etc. are all shrouded in a considerable amount of uncertainty. In other words, most of what we can say about SAI research comes in the form of possible outcomes, possible harms, and possible benefits. However, traditional risk analysis and governance approaches seem ill-equipped to handle the inherent complexity of the technology-climate change-society nexus. I believe that a systemic perspective on risk, which focuses on the complex, interdependent nature of societal and technological risks, is a useful tool for better analysing and assessing the risks imposed through SAI research and for making space for their ethical evaluation.

Systemic risks & technology: the normative relevance of values
The concept of systemic risk has its origins in economics and finance. Scholars such as Kaufman and Scott, for example, explore the concept when discussing the potential contributors to the breakdown of entire financial systems, and the subsequent systemic effect of that breakdown (Kaufman and Scott 2003). With reference to the OECD report on systemic risks (2003), the International Risk Governance Council released a report specifically focusing on their governance, further characterizing systemic risks in terms of their 'cascading effects' and highlighting the interdependent and often non-linear relationship of triggers and outcomes (Florin and Nursimulu 2018, 9). Risk scholars such as Ortwin Renn and Pia-Johanna Schweizer have substantially reworked and deepened the concept, underscoring its relevance for the challenges humanity faces in the twenty-first century (Renn et al. 2022; Schweizer and Renn 2019; Schweizer 2021). It is this latter approach, with its focus on the transdisciplinary and qualitative assessment of systemic risks, that I focus on.
Systemic risks are risks that endanger the functioning of any vital system, e.g. infrastructure, supply chains, healthcare systems, and other systems central to the functioning of society. They can be contrasted with traditional risks through the following characteristics. Their cause-and-effect structure is non-linear, which means that apparently minor impacts may have severe consequences for the system. This non-linearity also leads to system lags and tipping points, wherein the consequences of a given input may only manifest themselves at a later moment in time. Further, given the complexity of the systems affected through systemic risks, there is a considerable amount of uncertainty in their assessment, and consequently in their governance. This complexity exacerbates the high and latent interdependency both of the system under investigation and of the mechanisms that potentially lead to a systemic breakdown.
While Renn and colleagues mention five aspects of risk governance in total, I specifically focus on two of them, namely (1) the need for 'recurrent, adaptive, and synchronized' governance (Renn et al. 2022, 1914) and (2) the epistemic importance of meaningful participation (Renn et al. 2022, 1915). I believe that these two aspects can be most illuminatingly enhanced through the inclusion of ethical considerations. They subsequently give an indication of how systemic risk governance can be further ethically grounded, ensuring a reflective and adaptive assessment process and providing a moral basis for the broad participation of all actors and agents involved (see Renn et al. 2022). Importantly, enhancing these two aspects by actively incorporating ethical reflection into the participation and decision-making process presents a valuable heuristic when setting up a holistic SAI research framework.

The first characteristic that Renn and colleagues discuss is the necessity for systemic risk governance to be 'recurrent, adaptive, and synchronized' (2022, 1914). Iterative and adaptive approaches are central to any form of decision-making under risk and uncertainty (Marchau et al. 2019; Taebi, Kwakkel, and Kermisch 2020). Governance frameworks that have been proposed for SAI research so far all include reflexivity, adaptability, and iteration as central tenets of the research process (NASEM 2021; Gardiner and Fragnière 2018; Bellamy 2016; Hofbauer 2023). The need for dynamic governance can be most easily explained by the fact that the identification of risks goes in tandem with the gathering of new information. Empirical findings put the governance process into a new context, e.g. when scientific inquiry leads to the realisation that SAI may change precipitation patterns in highly rain-dependent regions (Niemeier and Timmreck 2015; Tilmes et al. 2020).
However, dynamic governance is also a highly relevant framework for the ethical assessment of risks. As the sociotechnical risk cycle highlights, introducing new technologies entails the continuous abatement of existing risks and the related production of novel risks. Since risks are also the product of value judgements, a novel risk might not just be the outcome of new empirical findings, but may actually relate to societal developments that re-evaluate an existing state of affairs on the basis of new values. These new values might themselves be a reciprocal product of, or influenced by, the new technology. Importantly, new values might arise and impact the research of new technologies before these technologies have been fully deployed or developed. Hence new values create and are created by the interaction of societal norms and values, as well as the infrastructure and technological 'imaginary' within which a given society finds itself (Jasanoff and Kim 2015; Hilgartner 2015). This is mainly what scholars sceptical of SAI research fear when they claim that such a program would inevitably lead to more exploitation and injustice. They argue that researching SAI is inherently morally wrong, since its goal is hubristic and its very concept incompatible with values such as democracy and global justice (Hulme 2014; Szerszynski et al. 2013; Schneider 2019). This potentially harmful impact of SAI research, in turn, poses a systemic risk to a just energy transition, undermining justice considerations in favour of a quick technofix.
An SAI research program might thus influence specific values that a society holds, such as how sustainability and justice relate to one another (Hofbauer 2023). A research program that accounts for its potential impacts on existing sociotechnical imaginaries consequently needs to be aware of its framing and narrative, and proactively lead the charge in how its technology is communicated. When scholars such as Jebari et al. (2021) argue that SAI research proposals need to come in tandem with thorough mitigation measures, they also address the need for proper framing: SAI research, whatever its outcome, will not resolve climate change. This is of course an immense challenge, as narratives surrounding technology are not solely in the hands of scientists, but rather a complex compound product of political interests, institutions, and societal expectations (Borup et al. 2006). Nonetheless, an integrated mechanism for the framing of SAI could help provide a common context for the conversation surrounding its research.
So far, systemic risk scholarship has paid little attention to the reciprocal nature of such risks arising in the society-technology nexus, i.e. societal risks that emerge through the introduction or research of NESTs.Importantly, these risks should be considered an additional form of systemic risks that require ethical analysis and reflection.Assessing these risks entails not just accounting for the empirical observation of societal values, but also for the normative fluidity of existing and future risks and their interdependence with NESTs.Actively incorporating framing mechanisms that highlight the limitations of SAI research, for example, could serve as a tool to account for the risk of setting problematic expectations or reinforcing existing unjust worldviews.

Risks & technology: a procedural and substantive account of participation
The second characteristic of systemic risk governance that can be supplemented with ethical considerations is inclusivity, i.e. the broad and meaningful engagement of stakeholders. As Renn and colleagues explain, inclusivity is paramount from an epistemic point of view: a more diverse set of perspectives will help better identify the risks at hand. Referencing Florin (2013), Renn et al. (2022, 1915) argue: 'Having many stakeholders involved provides a much more effective guarantee to pay attention to a multitude of early warning signals and to detect irregularities that may be outside of the screen that official risk observers use.'
In other words, there is epistemic value in including all kinds of actors involved in and affected by the system, especially in terms of making sure that all risks are accounted for and that the system is properly understood. With the term epistemic, I refer to the idea that stakeholder participation can teach researchers and policymakers about the values stakeholders hold. Again, this shows how values come into play and how, e.g., the concept of participation might play a role in the identification of relevant risks.
In an earlier piece, Renn and Schweizer discuss in depth not just the epistemic relevance but also the normative importance of inclusive governance approaches for policymaking in complex, systemic risk situations (Renn and Schweizer 2020). Focusing on the societal implications of the energy transition, the authors write that '[…] procedural structures are urgently needed that build upon the best available expertise and the informed consent of those who will experience the consequences of the requested changes' (Renn and Schweizer 2020, 41). Accordingly, the authors refer to ethical considerations regarding participation, such as Habermas' ethics and discourse theory (Habermas 1984), to highlight the need for participation as an ethical criterion.
To clarify the ethical relevance of participation, it is useful to distinguish between procedural and substantive approaches to ethics and questions of justice. In a nutshell, while substantive accounts of justice focus on the outcome of a given decision process, i.e. whether the final agreement or distribution is just, procedural accounts of justice emphasize the importance of inherently just processes. The inclusive governance of systemic risks relies on a robust conception of justice as participation. This means that justice is conceptualized within the boundaries of fair procedures alongside a set of substantive ethical guardrails. This is what the political philosopher John Rawls referred to as 'imperfect procedural justice' (Rawls 1999, 74f.). Imperfect procedural justice describes a decision process wherein the outcome of said process is not just merely by virtue of having followed a specific procedure; it also requires evaluating the decision's outcome. In other words, broad participation alone does not ensure a just outcome.
The focus on procedural justice is motivated by two central characteristics of decision-making in the systemic risk context. First, systemic risks are far-reaching, interdependent phenomena that affect a variety of stakeholders with diverse, often incompatible sets of values or interests. Meaningful and reflective participation provides a stage for the assessment, deliberation, and consolidation of those values, as well as a ground on which to establish guardrails against potentially illegitimate value-claims.
Second, systemic risks are often shrouded in uncertainty and ambiguity, reducing the predictive capacity and epistemic authority of expert-driven deliberation, i.e. inductive reasoning through the scientific method. Learning about a systemic risk phenomenon is iterative: it requires a dynamic and reflexive process of assessment, evaluation, and deliberation that accounts both for the discovery of new information and for how that information reshapes the context within which existing value-judgements are made.
Accordingly, the risk of SAI research resulting in a technocratic technofix, leading to mitigation deterrence with no underlying justice or sustainability considerations, can be clarified and accounted for through this distinction between procedural and substantive justice. Robust and meaningful participation provides both useful insights into and critical oversight of the research process, ensuring that value considerations relevant to, e.g., the most vulnerable stakeholders are an integral part of the decision-process. Further, clear guardrails regarding how much say each stakeholder has, and what degree of potential environmental damage is acceptable, define a deliberative playing field for the procedure while also accounting for critical substantive values.

Contextualizing participation
Building on the ethical considerations for inclusive governance, participation and recognition can best be understood as central tenets of achieving justice (Fraser and Honneth 2003). Importantly, meaningful participation frames all stakeholders as meriting equal recognition of the potential harms a NEST might cause them, while also allowing different kinds of needs and harms to be historically contextualized and differentiated. This combination of equity and differentiation is particularly central to a NEST of global impact such as SAI (Hourdequin 2019) and plays a major role in the ethical evaluation of a just energy transition (van Uffelen et al. 2022). Similarly, the labour union that looks out for the well-being and health of its members is a crucial stakeholder in the sociotechnical risk cycle explored above, and its participation in the decision-making process is ethically indispensable. Stakeholder engagement, then, is not merely prudent and epistemically valuable, but necessary if the risk assessment and governance process is to be morally justifiable.
Unchecked and decontextualized participation might lead to unjust processes as well. When trying to deal with climate change and its accompanying harms, uncritically giving the same weight to all actors involved risks distorting the procedural and recognitional aspects of the decision-making and governance process. While it is not always possible to pre-emptively assess the intentions of all stakeholders, putting their role in the system at risk into context and taking their past actions into account can yield a more holistic and reflective understanding of those intentions. Further, some actors may have incentives to stall rather than enable radical climate action, which means that the role and motivation of these actors need to be contextualized and the stakes they hold evaluated. For example, the stakes of keeping a fossil fuel economy at a profitable level are arguably morally less relevant (perhaps even immoral) than, say, the stakes of future generations in inheriting a habitable planet.
This difference in stakes further influences the kinds of risks we might deem acceptable, and the reasons behind them (e.g. in the form of values) ought to be central and transparent in the debate on risk governance. Transparency serves not only the purpose of knowing who holds which values, but also a moral and political purpose when society confronts the question of how the different risks raised by different stakeholders are to be evaluated and taken into account. The point of this broader understanding of participation, then, is not to pre-emptively judge stakeholders' positions, but to give all stakeholders a reasonably informed forum for a meaningful debate on the values and risks involved in the decision-making process surrounding a NEST. An incorporated ethical reflection helps contextualize the arguments of different stakeholders based on their underlying values, which in turn could serve to set guardrails against potentially unacceptable risk impositions.
In this sense, participation is a sine qua non condition for justice, while at the same time requiring guardrails and limitations on the kinds of risks that should be considered in the decision-process. Participation in the form of actively reflecting on the values involved in stakeholders' arguments further highlights the importance of understanding the dynamic relationship between risks, values, and technology. A dynamic account of sociotechnical value change underscores the need for continuous participation, since values may change over time and through the introduction of the NEST itself. Actively accounting for this dynamic, interrelated process of changing societal values and technological development, as well as for the ethical relevance and limitations of participation, broadens the existing systemic risk governance framework. Consequently, the framework becomes more holistic when evaluating NESTs, or even their research processes, as in the case of SAI research governance proposals. Such a holistic and systemic framework would further allow a nuanced perspective on the ethical underpinnings of governing SAI research, and on the moral and sociopolitical challenges such a research process needs to take into account.

Conclusion
Any SAI research program needs to inherently and explicitly address the question of how it aims to deal with the risks it produces, such as mitigation deterrence effects, the risk of unilateral decision-making, and the unreflective inclusion and exclusion of different values. The non-reflective and decontextualized introduction of an SAI research program is a systemic risk for the energy transition, potentially inhibiting the necessary shift away from fossil fuels. SAI research carries the real risk of undermining the abatement of emissions through indirect technological means, based on expectations and expediency rather than scientific facts. However, this risk can possibly be countered through a number of measures, on both the side of SAI research and that of the impacted system, i.e. the energy infrastructure. The systemic risk framework is insightful for accounting for the complex nature of the risks imposed through SAI research, specifically by providing a systemic outline of the interdependent structures and systems at play. At the same time, the framework can be more ethically grounded through additional reflections on the value-technology relation as well as the justice requirement of participation. This could be done via, e.g., a more holistic policy framework that goes beyond merely researching SAI and also addresses the risk of deterrence effects (Jebari et al. 2021). Further, such a framework could help tie any SAI research effort explicitly to binding mitigation commitments and decarbonization policies. On an international level, this would entail carving out clear responsibilities for nations, but also for corporations and industry. Finally, policy measures such as risk-response feedback frameworks (Jebari et al. 2021), anticipatory governance approaches, and assessment methodologies that account for the risks and uncertainties involved in such a research process can anticipate ways of dealing with the potential for mitigation deterrence. On this basis, it is possible to ethically govern the systemic risks associated with SAI research.