Combining approaches: Looking behind the scenes of integrating multiple types of evidence from controlled behavioural experiments through agent-based modelling

ABSTRACT Understanding complex (social) phenomena benefits from combining different tools, perspectives, expertise, and experiences. Research designs that combine approaches are gaining in popularity. Carrying out research in interdisciplinary teams, however, is a challenging, high-investment activity. Being unaware of, and failing to reflect on, conflicting ways of seeing or studying the world may endanger project success. Agent-based modelling has proven instrumental in bringing together different approaches. Yet this potential enabler remains an unusual combination partner: the prevalent lack of transparency about what is combined, and how, further obstructs advancement. We therefore invite our readers behind the scenes of our multi-year research collaboration, in which we combine agent-based modelling with controlled behavioural experiments to advance the understanding of collective resource use in a common pool resource dilemma. The paper contributes by 1) providing an example of sharing the process of combining approaches, and by 2) highlighting the enabling role of ABM in combining research approaches.


Introduction
'We see what you and I cannot see.' Our understanding of real-world complex (social) phenomena benefits from combining different tools, perspectives, expertise, and experiences. Combining research approaches is thus increasing in popularity, particularly for advancing the study of the complexity of the world around us, as it enables access to under-explored terrain (Poteete et al., 2010; Tashakkori et al., 2021; Timans et al., 2019). This type of research - 'combining approaches' - appears under different labels, such as 'mixed methods' (Brannen, 2005; Johnson & Onwuegbuzie, 2016; Tashakkori et al., 2021) or 'multiple methods' research (Poteete et al., 2010). These contributions have in common that they use or integrate qualitative and quantitative empirical research logics, methods and/or data at different stages of the research process to answer a research question (Schwartz-Shea & Yanow, 2011; Timans et al., 2019). We - the authors - do the very same in sustainability science when aiming to understand and explain the complex dynamics that result from continuous interactions between people and their environments (Berkes & Folke, 1998; Folke, 2016; Schill et al., 2019).
Combining approaches has many advantages: it enables different ways of knowing and meaning-making, resulting in a more holistic understanding of the research object (Timans et al., 2019). It frees research from the restrictions of individual approaches, allowing for deeper insights. Moreover, when results of different approaches challenge each other, they can generate insights that cannot be achieved by one approach alone. These benefits are particularly relevant when dealing with complex phenomena and engaging in inter- or transdisciplinary research (Tashakkori et al., 2021; Wehrden et al., 2017). Additionally, combining approaches is considered powerful since - depending on the combination - the strengths of one approach may compensate for the weaknesses of another, and vice versa (Poteete et al., 2010; Tashakkori et al., 2021).
However, combining approaches can be a challenging endeavour. It requires expertise within and across different disciplines (Lang et al., 2012; Wehrden et al., 2017) and significant investment in collaborative processes that allow research teams to value both qualitative and quantitative research logics (Curry et al., 2012; Hesse-Biber, 2015). Such processes can be particularly resource-consuming when the combinations include approaches that are rooted in conflicting ways of seeing the world (ontology) and/or studying the world (epistemology; Schwartz-Shea & Yanow, 2011; Timans et al., 2019). When researchers are unaware of these fundamental differences, the differences can lead to unproductive confusion and tensions in research teams, which in the worst case could stall or derail entire research projects. It may further be challenging to discover which combinations work well, and which do not, for a particular research project. What contributes to this challenge is a lack of transparency and discussion regarding both the research process and the underlying assumptions of the combined approaches. For example, the integration of data often relies on informal processes and a set of assumptions that are typically not reported in publications.
In light of these challenges and the importance of the process when combining approaches, examples are needed that highlight lessons learned about key factors and activities that enable researchers to benefit from emerging opportunities and to navigate potential pitfalls. Research is typically presented as a flawlessly orchestrated performance without any hint of what went on 'behind the scenes'. However, in order to advance our knowledge about useful combinations, we - those who combine approaches - need to share our experiences with and expertise in the process of combining approaches.
In this paper, we invite our readers behind the scenes of our research where we combine the approaches of agent-based modelling (ABM) and controlled behavioural experiments (bExp), complemented by structured observations of participants, questionnaires, and interviews. Conceptual and agent-based modelling have proven to be useful tools for facilitating such a process of joining together different understandings and problem perceptions, for instance in participatory research activities (e.g. Étienne, 2014). Agent-based modelling is increasingly used as a 'mediator' between different worlds, be it between the empirical and the theoretical, between science and practice, or between the natural and the social sciences. We use our ways of combining ABM and bExp both to shed light on the work that usually remains behind the curtains and - in particular - to highlight the role of ABM in combining approaches more generally.
Both ABM and bExp seek answers to explanatory questions (why is . . . ? how is . . . ?) to uncover the causes of a phenomenon of interest, in our case collective action and sustainable resource use, by relying on experimentation. However, while bExp involve actual humans as participants, the participants in ABMs are artificial 'agents'. This artificial world allows for control of both intra- and inter-individual factors and processes (e.g. individual knowledge or trust level within a group) and enables experimentation with multiple factors and processes simultaneously.
We have used this particular combination of approaches to advance our understanding of sustainable collective resource use in a common pool resource (CPR) dilemma. Both ABM and bExp are by themselves common approaches in the field of collective action and CPR (Poteete et al., 2010): bExp, in the form of CPR games, are widely used to investigate the influence of factors or conditions (e.g. trust or institutions) on the capacity of user groups to cooperate (see Lindahl, Janssen et al., 2021; Ostrom et al., 1994 for overviews); ABM is used to investigate specific collective action processes, such as the evolution of cooperation or conflict resolution mechanisms (see Poteete et al. (2010) for an overview). However, as useful as this combination of ABM and bExp is for CPR research, it remains relatively rare (some exceptions are: Janssen & Baggio, 2017; Janssen, 2013; Manson & Evans, 2007; Schill et al., 2016; Straton et al., 2009).
The purpose of this paper is thus two-fold: 1) to provide an example of sharing the process of combining approaches; and 2) to highlight the enabling role of ABM for combining and integrating qualitative and quantitative evidence to study the why and how of human behaviour in relation to complex environments. Our paper is structured as follows: we first explain how the idea for our research collaboration and combination was born, and briefly describe the approaches we combine (Section 2). We then detail and reflect on our particular combination, highlighting the benefits of combining ABM and bExp in our project (Section 3) as well as the role of ABM as an enabler for combining approaches more generally (Section 4), before we conclude (Section 5).

How it began: mixing researchers and their approaches
For us, it all started in 2012, at a retreat on a small, beautiful Swedish island - a space created yearly by our institutes for their researchers to (re-)connect. This is where we - the author team - met and clicked, and where the idea for collaboration emerged. What brought us together was our shared interest in studying human behaviour in social-ecological systems, a perspective that considers humans and nature as deeply intertwined and co-evolving (Berkes & Folke, 1998; Folke, 2016; Schill et al., 2019). We shared a systems perspective - a preference for understanding and studying social-ecological systems as complex (adaptive) systems (Levin et al., 2013; Preiser et al., 2018; Schlüter, Orach et al., 2019). Moreover, even though we have been trained in different fields and disciplines (artificial intelligence, cognitive science, ecology, economics, management, sustainability science, system science), we all identify as interdisciplinary researchers with a genuine curiosity for how researchers with different backgrounds see and study the world.
What differentiated us were the approaches we used to answer questions such as 'what makes groups of people manage their shared resources sustainably?' or 'how does the environment influence the capacity of groups to cooperate?' Nanda Wijermans and Maja Schlüter use ABM; Caroline Schill and Therese Lindahl use bExp. We all knew about the respective other approaches, but had not used them in our own research. Caroline and Therese shared a finding from a series of lab bExp (Lindahl et al., 2016; Schill et al., 2015) in which four participants (students) share a renewable resource and decide in each round how much of the resource to extract (a so-called CPR game): groups that cooperate do not necessarily use the CPR sustainably. We sought to investigate this finding further, because contemporary behavioural CPR literature had focused on what makes groups cooperate rather than on how cooperative groups develop agreements in line with ecological conditions, e.g. how group knowledge is formed and then acted upon. We considered this a perfect case for using ABM, as it allows us to develop and formalise an explanation for the observed behaviour and then test whether it would reproduce the experimental results. In particular, we wanted to uncover critical individual-level factors and processes affecting individual and group behaviour, and ABM would allow us to manipulate and measure precisely that (in contrast to bExp). We called our research collaboration 'AgentEx': agent-based modelling meets controlled behavioural experiments. The model we then built closely follows the setup of the bExp. The agents represented the experiment participants, interacting with each other and a renewable resource. Four years later, we published our explanation - a formalised mechanism of intra- and inter-individual/inter-group factors and processes - explaining why cooperation is not enough for sustainable resource use (Wijermans et al., 2016).
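To make this finding concrete, the round-based structure of such a CPR game can be sketched in a few lines of Python. This is a minimal illustration under assumed parameter values (logistic regrowth, a starting stock of 50 units), not the actual experimental design: it shows how a fully cooperative group can nonetheless deplete the resource when the agreed extraction level exceeds the regrowth capacity.

```python
def replenish(stock, growth_rate=0.5, capacity=50.0):
    """Illustrative logistic regrowth (an assumption, not the experiment's
    rule): the resource grows fastest at intermediate stock levels."""
    return min(capacity, stock + growth_rate * stock * (1 - stock / capacity))

def play_cpr_game(agreed_claim, n_players=4, rounds=10, stock=50.0):
    """A fully cooperative group: every player extracts the agreed amount
    each round. Returns a list of (stock, harvests) per round."""
    history = []
    for _ in range(rounds):
        claims = [agreed_claim] * n_players
        total = sum(claims)
        # If claims exceed the remaining stock, harvests are scaled down.
        scale = min(1.0, stock / total) if total > 0 else 0.0
        harvests = [claim * scale for claim in claims]
        stock = max(0.0, stock - sum(harvests))
        if stock < 1e-9:  # resource collapse ends the game
            history.append((0.0, harvests))
            break
        stock = replenish(stock)
        history.append((round(stock, 2), harvests))
    return history

# Cooperation alone is not enough: the same fully cooperative group
# depletes the resource if the agreed level exceeds the regrowth capacity.
modest = play_cpr_game(agreed_claim=1.0)  # total harvest 4 < max regrowth 6.25
greedy = play_cpr_game(agreed_claim=3.0)  # total harvest 12 > max regrowth 6.25
```

Under these assumed parameters the 'modest' group sustains the resource over all rounds, while the 'greedy' group collapses it within the game, despite perfect compliance with its agreement in both cases.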
Several grant applications later, we received funding to continue our collaboration. We currently explore the role of the perception of (unexpected) resource changes for collective sustainable resource use, motivated by the challenges small-scale fishers might face due to climate change. To this end, we are developing a second ABM that expands on the previous one. Therein we formalise a mechanism connecting perceptions of resource change with individual and group behaviour. To inform the design of this mechanism, we collected data using bExp, complemented with structured observations of participants, questionnaires and interviews with participants. Different from our previous project, the bExp are conducted in the field with small-scale fishers as participants. Hence, part of our current project is to adapt our previous ABM to reflect this different type of participant and the field context. Before detailing how we combine ABM and bExp, we briefly outline the main characteristics and rationales of each approach, including strengths and weaknesses.

Agent-based modelling
Agent-based modelling (ABM) is a computational approach for studying (social) complexity that reflects a systems perspective, wherein heterogeneous and autonomous entities dynamically interact and produce emergent outcomes (Heckbert et al., 2010). ABM is typically used for understanding why and how observed (macro) patterns arise by investigating the (micro/meso) processes underlying them. ABMs are diverse and can serve many purposes, e.g. description, explanation, prediction, theoretical exploration, or illustration (Edmonds et al., 2019). Core features that draw researchers to ABMs are: 1) the dynamics of interactions leading to emergent outcomes (multilevelness), and 2) the explicit representation of the dynamic behaviour of heterogeneous agents (Conte & Paolucci, 2014; Heckbert et al., 2010).
When developing an ABM, a researcher creates and experiments with an artificial world of artificial actors (agents) that interact with and within their environment (the model; Eberlen et al., 2017; Gilbert, 2008; Gilbert & Troitzsch, 2005). The process of developing an ABM involves the stages of designing, building, testing, and publishing the model (Gilbert & Troitzsch, 2005). The design stage results in a conceptual model in which researchers select and specify the agents, their (inter-)actions and the environment (dynamics) to be included. This process can be guided by theory and empirics, as well as by reasonable best guesses for knowledge gaps. The building stage consists of programming the conceptual model into a form understandable by a computer, and is aided by programming languages (e.g. Python, Java, Julia) and/or toolkits (e.g. NetLogo, Repast; for an overview see Abar et al., 2017). During this process of formalisation, logical gaps in the conceptual model might be detected, or parts identified that need more specification. In the testing stage, the computer model is verified and validated (David et al., 2017). Herein, verification involves evaluating whether the computational model has been implemented adequately given the conceptual model. Validation involves testing whether the model is a good representation of the 'real-world' phenomena given the model's purpose. For both testing (validation and verification) and answering research questions, simulation experiments are used (Lorscheid et al., 2011). The final stage, as for other research, involves the publication of findings, and encourages publishing the model and code openly (e.g. on the ComSES (Janssen et al., 2008) or GitHub platforms) using documentation protocols (e.g. ODD (Grimm, 2020) or ODD+D (Müller et al., 2013)) to allow for accessibility, transparency, and reproducibility.
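As a generic illustration of such simulation experiments, the following Python sketch runs a stand-in model over a full-factorial parameter sweep with replicated random seeds and summarises the outcomes. The stub model, parameter names, and outcome measure are illustrative assumptions, not any particular published model:

```python
import itertools
import random
import statistics

def run_model(trust, group_size, seed):
    """Stand-in for a single ABM run: returns one outcome measure
    (a noisy 'cooperation level'; purely illustrative)."""
    rng = random.Random(seed)
    return max(0.0, min(1.0, trust + rng.gauss(0, 0.05) - 0.01 * group_size))

def simulation_experiment(params, replications=30):
    """Full-factorial sweep: every parameter combination is run with
    several random seeds, and the outcomes are summarised."""
    results = {}
    names = sorted(params)  # fixed order -> keys are (group_size, trust)
    for combo in itertools.product(*(params[n] for n in names)):
        setting = dict(zip(names, combo))
        outcomes = [run_model(seed=s, **setting) for s in range(replications)]
        results[combo] = (statistics.mean(outcomes), statistics.stdev(outcomes))
    return results

results = simulation_experiment({"trust": [0.2, 0.5, 0.8], "group_size": [4, 8]})
```

Replicated seeds separate systematic effects of a parameter from run-to-run stochastic noise, which is what distinguishes a simulation experiment from a single model run.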
Although ABMs 'force' a certain way of looking at the world, in terms of agents, their environment, and interactions, they are simultaneously flexible in representing the philosophical and/or theoretical views that researchers, theories or insights hold about agents and environment (Conte & Paolucci, 2014). This richness and complexity are often not yet valued in furthering science; however, many see ABM as more than a mere tool and put it forward as a research methodology in its own right (Durán, 2020). A connected, perceived challenge is striking a balance between the level of detail needed and the ability to understand what happens in the model (Edmonds & Moss, 2005). Nonetheless, the process of designing an agent-based model is itself often appreciated by those involved, since it shapes the conversations and questions regarding the phenomena of interest. Moreover, it enables the integration of different understandings (mental models, conceptual/theoretical models, empirical insights, etc.), e.g. in developing theory based on empirical knowledge in inter- or trans-disciplinary teams (Schlüter, Orach et al., 2019).

Controlled behavioural experiments
Controlled behavioural experiments (bExp) are primarily used to test hypotheses about human behaviour (Falk & Heckman, 2009). They are grounded in a positivist perspective, strive for objectivity and generalisability, and hence follow a quantitative research design logic. They are controlled because researchers fix the conditions by design: experimental participants are randomly assigned to different groups (called treatments) so that the only difference between these groups is the manipulated variable(s), e.g. communication allowed vs. not allowed. The experiments thus allow researchers to establish a causal link between the manipulated variable(s) and the outcome variable(s) given by observed behaviour. For instance, one could identify whether or not communication facilitates collective action.
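The random assignment that underpins such controlled designs can be sketched as follows; the participant labels and treatment names are hypothetical and do not reflect the authors' actual protocol:

```python
import random

def assign_treatments(participants, treatments, seed=42):
    """Randomly assign participants to treatment groups of (near-)equal
    size, so that groups differ only in the manipulated variable."""
    rng = random.Random(seed)
    shuffled = participants[:]       # copy, so the input list is untouched
    rng.shuffle(shuffled)
    groups = {t: [] for t in treatments}
    for i, participant in enumerate(shuffled):
        # Round-robin over the shuffled order balances group sizes.
        groups[treatments[i % len(treatments)]].append(participant)
    return groups

groups = assign_treatments(
    participants=[f"P{i:02d}" for i in range(20)],
    treatments=["communication", "no_communication"],
)
```

Because assignment is random, any systematic difference in outcomes between the two groups can be attributed to the manipulated variable rather than to participant characteristics.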
The bExp we refer to herein are dynamic CPR field experiments conducted with small-scale fishers (Lindahl, Janssen et al., 2021). With these, Lindahl and Schill aim to test whether different ecological conditions give rise to differences in exploitation and cooperation behaviour. These experiments rely on the experimental economics tradition (Smith, 1976) in the sense that each decision has an economic consequence. This design follows from an attempt to capture 'real' (revealed) behavioural responses rather than 'hypothetical' ones.
The majority of CPR experiments to date have been conducted with students as participants ('conventional' lab experiments). However, experimenters increasingly use online platforms to reach the wider general public (online experiments) or move directly to the field and conduct experiments with specific populations. The increased use of CPR field experiments is mainly due to the interest in studying sustainable resource use around CPR (Cárdenas et al., 2013; Lindahl, Janssen et al., 2021). Moreover, experimenters (especially in the field) complement bExp with data collection tools that capture qualitative understandings of behavioural drivers (e.g. more detailed questionnaires with open-ended questions, participant observation, focus group discussions; see e.g. Lindahl, Janssen et al., 2021), which have also been used in the CPR experiments we refer to herein.
A limitation of using bExp for studying aspects of complex systems is that they typically only allow for testing the effect of one, or very few, variables at a time, unless a large experimental research programme is set up, requiring major investments of time and funding. Another limitation is that although one can establish a causal link from a certain variable to a specific behavioural outcome, a deeper understanding of the behavioural motivators and drivers is difficult to obtain.

Our combining with ABM
We started using ABM to develop and test an explanation of (intra-)individual factors and processes, i.e. by being able to manipulate and follow the role of trust and knowledge in the agents' behaviour, based on the feedback each agent would get from previous outcomes ('AgentEx-I'). The explanation was based on the qualitative and quantitative empirical insights from the bExp and complementary methods (post-experimental questionnaire). This initial intuition was fed by observing experiment participants (following an observation scheme), and by theory from CPR studies and social psychology. After formalising our explanation in an agent-based model, we tested it by trying to qualitatively reproduce outcome patterns from the bExp.
In our current project, we combine ABM and bExp differently, to unpack and explore the role of perceived ecological change in the collective capacity of groups to use CPR sustainably ('AgentEx-II'). This implies an extension of the first model with mechanisms specifying how different perceptions of resource change might affect behaviour. We conceptualised these mechanisms for an ABM, but we did not know whether they would bear enough relation to reality, i.e. the actual mechanisms in humans. We therefore expanded and modified the data collection around the bExp to gather information about resource change perceptions and their effect on behaviour. In particular, we a) added questions to the post-experimental questionnaire and asked these additional questions at one moment during the experiment; b) redesigned the observation scheme, e.g. by introducing observations for every round; and c) added semi-structured interviews with the teams conducting the experiments, eliciting stories about the group dynamics. The resulting mechanisms are presently being formalised, and their impact will then be explored in the new ABM with the goal of formulating hypotheses for future testing with bExp.

Detailing our ABM and bExp combinations
We use ABM and bExp to understand collective action problems in natural resource use. Our combination of ABM and bExp highlights two main types of (and reasons for) combining: (1) Exploring an empirical phenomenon (AgentEx-II): unpacking an aspect or process by obtaining empirical insights, using bExp and complementary methods, and then using these insights to design, build, and explore a mechanism using ABM. The data/knowledge needs are driven by the specificity and dynamic nature of ABM, which focuses the empirical inquiry on the interactions of individuals in groups over time. Here, particularly the narratives - rich stories - we obtained through the interviews with the experiment team were essential for developing empirically grounded explanations of causal relations over time. Collecting data for ABM may thus change the focus and importance of complementary methods for bExp.
(2) Testing a hypothesised explanation of a phenomenon (AgentEx-I): using the ABM to reproduce previously not understood behavioural patterns from bExp. This particularly concerns developing explanations that include social and cognitive aspects of behaviour which are hard to measure or control in bExp.
Both types of combining have consequences for research design, i.e. they specify why and how the approaches interact with each other, and influence how empirical evidence from bExps is used for the ABM.

Research designs
The two projects followed different designs. In AgentEx-I, we followed a more familiar empirical cycle. The ABM abstracts from the same phenomenon of interest that the bExp studied (i.e. the decision context of the experiment), and we used information from the bExp together with theory and intuition to design the ABM. Then simulation experiments were run and outcomes compared to bExp outcomes (testing). In AgentEx-II, the research design started with the ABM, which shaped the data collection around the bExp to gain an empirically based understanding of why, when and how the perception of ecological change affects behaviour. This understanding was then used to develop and explore possible mechanisms in an ABM, which informs the future focus for empirical studies (exploring). See the Supplementary material for a visualisation (Figure S1) detailing the different research designs we have used.

Figure 1 details the use of empirical evidence in combining ABM and bExp. In the model design stage, this involved individual-level data to inform the choice of variables, their relations and even a chain of relations in the form of a decision tree. These are built into each agent of the model. Once the model is built, we run simulation experiments for which the initial settings of variable value distributions (parameters) are based on the empirical distributions of characteristics of our experimental participants. In the testing stage, empirical data was used to compare the group-level outcomes of the model with the group-level outcomes of the bExp, to assess whether the model can reproduce the patterns. For AgentEx-I, although both the design and testing stages made use of bExp data, the data collection was not influenced by the ABM. For AgentEx-II, the data collection for model design is at the heart of the project, but excludes testing, as the outcomes of this ABM will point to relevant testable explanations and the data needed to test them.
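The initialisation and testing steps described here can be illustrated with a small Python sketch. The attribute names, the hypothetical questionnaire records, and the simple tolerance-band pattern check are our own assumptions for illustration, not the AgentEx implementation:

```python
import random

# Hypothetical questionnaire data: one record per experiment participant.
EMPIRICAL = [
    {"trust": 0.8, "knowledge": 0.6},
    {"trust": 0.5, "knowledge": 0.9},
    {"trust": 0.3, "knowledge": 0.4},
    {"trust": 0.7, "knowledge": 0.7},
]

def initialise_agents(n_agents, empirical=EMPIRICAL, seed=7):
    """Draw each agent's initial characteristics from the empirical
    distribution by resampling observed participant records."""
    rng = random.Random(seed)
    return [dict(rng.choice(empirical)) for _ in range(n_agents)]

def reproduces_pattern(simulated, observed, tolerance=0.15):
    """Crude pattern check: do simulated group-level outcomes fall
    within a tolerance band of the observed ones, round by round?"""
    return all(abs(s - o) <= tolerance * max(abs(o), 1e-9)
               for s, o in zip(simulated, observed))

agents = initialise_agents(8)
```

Resampling whole records (rather than drawing each attribute independently) preserves any correlations between characteristics present in the empirical data; a qualitative pattern comparison like the one above asks only whether the model reproduces the shape of the observed group-level time series, not the exact values.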
See the Supplementary material, in which Figures S2a and S2b visualise in detail the different uses of empirical evidence for the different ways of combining ABM and bExp in AgentEx-I and II.

Reflections on our combining with ABM
A key takeaway from our research is that it is possible to combine ABM with bExp in different ways and at different stages of a research process (exploring & testing). Which way of combining - exploring or testing - is relevant, and thus how the research is designed, depends on the research question. Some research designs may be closer to scientists' traditional ways of conducting research; others may take them to less familiar territory. However, each design has its value. AgentEx-I was triggered by an experimental study, while AgentEx-II picked up one of several questions arising from AgentEx-I. Theory building does not typically occur over just one scientific cycle, but rather over multiple iterations, where each combination of approaches contributes to theory development.

Behind the scenes
Each combination (or project) reflects a different process. In AgentEx-I we developed a model of the decision context and task as well as the explanation. Through the necessary deep engagement, we learned each other's different approaches. Particularly for our bExp researchers, co-designing a model meant taking a deep dive into ABM thinking. For AgentEx-II, we had already established our team, not least the confidence regarding the meaningfulness of combining ABM and bExp. We then placed the focus on co-designing the data collection to inform the model. This is where our ABM researchers had to dive deeply into the reality of field experiments and data collection. Since the data collection was integrated in another project, we had to guard the data collection of both AgentEx-II and the other bExp project. For instance, in AgentEx-II we ideally wanted to ask certain questionnaire questions before the experiment to better detect changes over time. The bExp experts could not change the experimental setup in this way, as asking questions beforehand could alter behaviour/results. This uncomfortable situation, where certain requested changes were not desired by part of the team, luckily did not turn into a power play that could have divided us into (choosing) the project most important to ourselves. After some (self-)reflection, we made explicit which aspects were non-negotiable for maintaining scientific rigour and found a solution to honour them. As unique as our situation may be, the kinds of tensions that may arise between people and their priorities in projects are not. We benefited from the robust collaboration we had built up, and the trust we had developed for each other, while navigating the tension of guarding both aims, so that speaking up, listening, and finding solutions was possible at any time when decisions would affect the quality of the data or the model.

Figure 1. In AgentEx-I (our first combined approach) we used empirical data for design and validation. For AgentEx-II (our second combined approach) we used the data for design and initialisation. Collected data includes: data on experiment participants' behaviour, gathered by documenting their individual and group (aggregate) decisions (time series of resource use behaviour; quantitative; (A)); observation schemes complemented with questionnaires (distributions of individual characteristics; quantitative; (B)); and interviews with the teams that conducted the experiments (rich stories; qualitative; (C)). The qualitative insights have been used in the model design and building stage, guiding decisions on which variables and causal relations should be included and how to formalise variables and relations in a decision-tree structure. The distribution data is used to initialise the model (i.e. the initial individual characteristics of the agents and the group constellations they appear in), and the time series of group-level resource use behaviour is used to validate explanations by comparing them against outcome patterns from the simulation experiments.

Benefits of using ABM for (our) combining
The process of using ABM was to our great benefit, as it forces certain questions that create a unique angle, stressing both over-time dynamics and the interactions between and within the individual and group levels. The process resulted in multiple relevant questions, e.g. about certain cognitive processes of individuals and about group mechanisms. We realised that we could only answer these questions together, and this resulted in the need and priority for certain types of data. The process also forced us to be precise about these mechanisms and therefore enabled and triggered deeply focused discussions, for instance regarding the exact relationship of all variables involved and the specific behavioural mechanisms. Discussions typically started with rather 'harmless' topics, such as the role of trust for cooperation. However, such topics easily developed into difficult beasts, requiring answers to a chain of subsequent questions in order to develop the model. While creating our AgentEx-I model, for instance, we had lengthy discussions about a possible mechanism describing how experimental participants' trust in others to stick to an agreement might change over time and in relation to changes in the natural resource. We departed from our intuitive explanation: 'when an unexpected resource level drop occurs, then trust erodes.' This seemed a sensible description because it meant someone took out more than agreed. However, in the model we needed to specify the cognitive process of the individual, thus snowballing into 'how can individuals differ in being affected by the drop in resource?', 'how fast does trust erode?' and 'hold on, what would then make trust actually increase?', etc. The topic of trust and how we 'had to' engage with it in order to build a model is a perfect example of the ability of ABM to combine - in the same model - not only empirically based explanations of both a qualitative and quantitative nature, but also theoretical explanations.
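To illustrate how building a model forces such specificity, the intuitive explanation 'when an unexpected resource level drop occurs, trust erodes' could be formalised as a toy update rule. This is our own hedged sketch, not the AgentEx model's actual mechanism; the sensitivity and recovery parameters make the snowballing questions (individual differences, speed of erosion, what restores trust) explicit:

```python
def update_trust(trust, expected_stock, observed_stock,
                 sensitivity=0.5, recovery=0.05):
    """Toy trust dynamic: an unexpectedly low resource level erodes trust
    in proportion to the surprise and the agent's sensitivity; otherwise
    trust slowly recovers towards full trust. All parameters are assumed."""
    if observed_stock < expected_stock:
        # Erosion: the shortfall suggests someone took more than agreed.
        surprise = (expected_stock - observed_stock) / expected_stock
        trust -= sensitivity * surprise
    else:
        # Recovery: met expectations slowly rebuild trust towards 1.0.
        trust += recovery * (1.0 - trust)
    return max(0.0, min(1.0, trust))  # trust stays within [0, 1]
```

Giving each agent its own sensitivity answers 'how can individuals differ in being affected by the drop?', the sensitivity magnitude answers 'how fast does trust erode?', and the recovery term answers 'what would then make trust actually increase?'; each modelling question maps onto one explicit parameter.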
In the process of co-creating the model, the required level of specificity also forced us to be precise and to reflect, but also to share and jointly scrutinise, out in the open, the implicit and explicit thoughts we were having.

ABMs as enablers of combining approaches
Using ABM influenced our research profoundly. It created a space for curiosity-driven deep learning that helped us advance our understanding of the phenomena we sought to study. ABM did so by 1) forcing us to be precise and focused in our discussions; 2) making it easier to navigate tensions, as the ABM was our joint product rather than representing one particular position; and, perhaps most importantly, by 3) bringing to light certain conceptual questions that needed an answer for building the ABM. Agent-based simulations invite questions concerning interactions, relations, decision-making, and how everything affects everything else over time. Often, these questions can be very different from one's traditional way of thinking and can therefore lead the attention to relevant, but previously unknown, terrain. The aim of this questioning is to understand how and why, e.g., an interaction plays out. This process is often uncomfortable and requires time and endurance. To make an ABM work, one cannot gloss over such aspects and details, since the agents need specific rules for engaging with other agents and with the world they reside in. We are not the first to point out how enabling ABM can be for advancing social science (see e.g. Conte & Paolucci, 2014; Eberlen et al., 2017). However, ABM is still much under-utilised and anything but a conventional 'combination partner', as it is not yet standard in the social science repertoire of research approaches.
In addition, ABM allowed us to use both qualitative and quantitative evidence as part of the model. The conventional mixed-methods terminology relies on categorising an approach as either qualitative or quantitative. ABM belongs to neither of these traditional categories (Saetra, 2017). This may seem to make it challenging to 'fit in'. We see it as a strength, however, because ABM can be a reflection of either, or of both. While one may associate ABM with numbers, and manifold numbers are certainly created in the process of running simulations, much of the design and the interpretation of results is of a more qualitative nature (Saetra, 2017). ABM itself does not force any assumptions, type of logic, or data on the agents or on the world created in the model. This freedom is what makes the approach so powerful and such an easy 'combination partner'. ABM 'only' requires a world with interacting agents.
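A deliberately minimal sketch can show what 'a world with interacting agents' amounts to in practice: agents repeatedly harvest from a shared, regrowing resource. All rules, class names, and numbers below are illustrative assumptions of our own, not the AgentEx model:

```python
import random

class Agent:
    """A toy harvester; 'greed' scales how much of a fair share it takes."""

    def __init__(self, greed):
        self.greed = greed

    def harvest(self, resource, n_agents):
        # Hypothetical rule: a 'fair' agent leaves half the stock for regrowth.
        fair_share = resource / (2 * n_agents)
        return min(resource, self.greed * fair_share)

def run(n_agents=4, resource=100.0, regrowth=0.2, steps=10, seed=1):
    """Simulate a shared resource under repeated harvesting; return its history."""
    random.seed(seed)
    agents = [Agent(greed=random.uniform(0.5, 1.5)) for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        for agent in agents:
            resource -= agent.harvest(resource, n_agents)
        resource += regrowth * resource  # simplified regrowth each round
        history.append(resource)
    return history
```

Nothing here dictates whether `greed` is estimated from experimental data, derived from interviews, or taken from theory; that openness is precisely what makes ABM such an accommodating combination partner.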
ABM combined with other approaches has an important role to play in the study of social phenomena (Saetra, 2017), particularly when it comes to questions regarding mechanisms that are hard to discover or observe directly (Gilbert & Troitzsch, 2005), or when systematic experimentation with counterfactuals is called for (Saetra, 2017). In the field of agent-based social simulation, combining approaches is not unusual. One might even claim that most empirically grounded agent-based models reflect a combining exercise. They are just typically not framed, or even recognised, as such and hence not deeply reflected upon. Many empirical ABM modellers, however, concern themselves with using and communicating the use of empirical data in their models (Boero & Squazzoni, 2005; Smajgl & Barreteau, 2014). Early examples of combining can be found, for instance, in the special issue by Janssen and Ostrom (2006), which exemplifies the use of qualitative and quantitative data for model design and testing. They stress the importance of rigour and the potential of using empirical data for developing crucial elements of models (e.g. cognitive processes of agents). Accordingly, the focus lies on pushing the frontiers of ABM by improving model quality. In another special issue, edited by Edmonds (2015), the focus lies on making transparent the practice of using qualitative evidence in ABM and on advocating for more transparency when using qualitative evidence. As Edmonds (2015) rightfully states, ABMs are particularly suitable for the inclusion of qualitative evidence while using formal models. To enable others to repeat and validate findings, this explicit urge for transparency is still actively advocated for in the lively community of social simulation scientists through a workshop series organised by the special interest group on the use of qualitative evidence of the European Social Simulation Association (Borit, 2012).
In the social sciences, using ABM in combination with other approaches is still unusual, and there are very few papers that provide reflections. These papers typically introduce ABM to a research domain that is thus far unfamiliar with it and provide 'care instructions' for combining with ABM. Chattoe-Brown (2014), for instance, argues for using ABM in attitude research (sociology), highlighting its ability to integrate different types of data for advancing theory. The included 'care instructions' raise awareness of three different forms of agent-based models: 1) completely data-free, 2) with data for one level (for the design stage or micro level), and 3) with data for two levels (for the design and validation stages, or micro and macro levels). In Chattoe-Brown's view, only the latter type is meaningful for theory development. The author thus connects 'the right type of model' with a 'way of doing science' that this particular audience is familiar with. We, however, propose that the other two ABM forms can also play a critical role in both advancing knowledge and building theory. They represent currently untapped potential, as we demonstrated with our work. Castellani et al. (2019), on the other hand, focus on the potential of combining case-based methods (CBM) and ABM, bridging the perceived methodological incompatibilities. They highlight the ability of ABMs to serve as virtual laboratories. Their 'care instructions' mainly state that there is more to operationalise when these methodological communities work together. They stress how much each approach would benefit from that combination. For CBM, benefits include e.g. more effectively studying behaviours and interactions in cases, the ability to explore counterfactuals and scenarios, and focusing on longitudinal trends over time. For ABM, the combination with CBM allows for connecting model design and validation to empirical complexity, especially at the micro-level of agent behaviour rules.

Conclusion
Understanding complex (social) phenomena benefits from combining different tools, perspectives, expertise, and experiences. Combining approaches enables us to enter under-explored scientific terrain. It typically involves relying on creative ways of understanding what we encounter, to then put findings on the map for others to see, so that all of us can learn, revisit, or start expeditions of our own. Herein we highlighted the role ABM can play in such expeditions. Thanks to its flexibility, ABM allows explorers to use whatever understanding they need (qualitative and/or quantitative) to answer their research question, and thanks to its formal nature, it creates awareness of the assumptions made by each explorer involved.
We should not have to enter such under-explored terrain blindly, however. Learning from others' experiences along the way is just as important as learning about their successful expeditions. To advance our understanding of the complex (social) worlds we live in, we need to actively engage in collective learning about research processes. This requires us to provide the information for others to learn from, critique, and connect to our work, as we did by showing our readers what occurred 'behind the scenes' in our collaboration. We want to conclude by inviting other explorers to share and reflect on what is happening behind their scenes, and we strongly recommend going on an expedition with ABM.