A Meta-Analysis of the Effects of Cross-Cutting Exposure on Political Participation

Scholars have advanced many theoretical explanations for expecting either a negative or a positive relationship between individuals' cross-cutting exposure, whether through interpersonal or mediated forms of communication, and their political participation. However, whether cross-cutting exposure is a positive or a negative predictor of participation remains an unsettled question. To help fill this gap, we conducted a meta-analysis of 48 empirical studies, comprising more than 70,000 participants, that examined the association between cross-cutting exposure and political participation. The meta-analysis produced two main findings. First, across all studies, there is no significant relationship, r = .002, Zr = .002 (95% CI = −.04 to .05). Second, the null relationship cannot be explained by variations in the characteristics of cross-cutting environments (e.g., topic, place, or source of exposure), participation outcomes (e.g., online vs. offline activities), or methods employed (e.g., experiment vs. survey). Taken together, these results should alleviate concerns about negative effects of cross-cutting exposure on political engagement. Implications for future research are discussed.

ability to argue (Price, Cappella, & Nir, 2002), and facility to hold accurate beliefs (Garrett, Weeks, & Neo, 2016). Furthermore, these findings resonate well with the centuries-old notion that the clash of dissimilar viewpoints is good for democracy (de Tocqueville, 1988).
There is one aspect, however, where the evidence regarding the benefits of cross-cutting exposure is far from settled, and this concerns political participation. To paraphrase Klofstad, Sokhey, and McClurg (2013, p. 120), there is disagreement over the effects of disagreement on political engagement. It was the seminal work of Diana Mutz (2002a, 2002b) that first tested in thorough fashion whether the democratic benefits of cross-cutting exposure come at the expense of participation. To the degree that encountering greater political difference in one's social networks triggers ambivalence and threatens social harmony, Mutz argued, communication across lines of political difference deters citizens from active political involvement. Thus, a "deliberative versus participatory democracy" (Mutz, 2006, p. i; italics in original) dilemma was put forth. As could be expected, this dilemma sparked great interest among scholars, who have since examined it both within and outside the U.S., using survey and experimental data, and extended it to mediated forms of exposure, including traditional media, cable TV, and social network sites.
We know of no attempt to tackle the lack of consistent findings in the literature on cross-cutting exposure and participation with a meta-analysis, the ideal method for synthesizing data across studies. We collected and meta-analyzed published research using survey and experimental methods, including cross-sectional and longitudinal studies, that investigates the influence of exposure to disagreeable opinions in interpersonal, online, and mass media contexts on political participation, understood as any behavior by ordinary citizens directed toward influencing some political outcome. By conducting a meta-analysis, we are able to estimate the sign and strength of the average correlation between cross-cutting exposure and participation. In addition, following an inductive approach, we examine three factors that may explain variations in this relationship: characteristics of cross-cutting exposure, characteristics of participation, and methodological and sample characteristics.

Relating Cross-Cutting Exposure to Participation
In her seminal work, Mutz (2002b) suggested two theoretical explanations for the assumption that exposure to cross-cutting communication results in lower levels of political participation. First, she theorized that cross-cutting exposure may result in attitudinal ambivalence. Ambivalence, then, may increase uncertainty in individuals because their own political standpoints are challenged by disagreeing information. Similar to the reasoning of spiral of silence theory (see Noelle-Neumann, 1974), this intrapersonal conflict may in turn negatively affect political participation: individuals become less willing to take political action and tend to delay voting decisions (Mutz, 2002b). Second, Mutz (2002b) argued that cross-cutting exposure discourages political participation because individuals worry that political action may threaten social relationships and social harmony. According to this interpersonal social accountability explanation, "one feels uncomfortable taking sides in the face of multiple competing constituencies" (Mutz, 2002b, p. 840). Various studies have supported this notion of negative effects on political participation (e.g., Lu, Heatherly, & Lee, 2016; Moehler & Conroy-Krutz, 2016). Others have found evidence in line with the underlying reasoning. For instance, Klar (2014) shows that cross-cutting discussion settings lead to more bipartisan evaluations and depolarization, while homogeneous settings tend to polarize. Mutz's findings regarding a negative effect of cross-cutting exposure on political participation have not gone unchallenged, however. While some studies found no or rather limited effects of cross-cutting communication on political participation (Eveland & Hively, 2009; Nir, 2011), a series of studies has observed a positive effect on participation or related outcomes (Huckfeldt, Mendez, & Osborn, 2004; Kwak et al., 2005; Matthes, 2013; Scheufele, Hardy, Brossard, Waismel-Manor, & Nisbet, 2006).
The nature of these divergences stems from methodological and theoretical factors.
On the theoretical side, there are three arguments supporting positive effects of cross-cutting exposure on participation: learning, information search, and polarization. According to the learning mechanism, disagreeing information may prompt individuals to reflect thoroughly on their own political standpoints and beliefs. Furthermore, it has been argued that an individual exposed to cross-cutting communication will be more aware of oppositional views and will learn about political aspects he or she had previously not taken into consideration (Mutz, 2002a). To the degree that learning promotes participation, cross-cutting exposure can have a positive, indirect effect on participation (McLeod, Scheufele, & Moy, 1999; Neuman, 1986). According to the information search mechanism, exposure to cross-cutting communication may enhance an individual's demand for additional information; it has been shown to be positively related to hard news media use (McLeod, Sotirovic, & Holbert, 1998), which, in turn, is positively related to political participation (Shah, McLeod, & Yoon, 2001). Finally, the polarization mechanism suggests that exposure to disagreeable opinions can increase polarization, which, in turn, fosters participation. For instance, in a field experiment that exposed large groups of Democratic and Republican social media users to bot messages from opposing political views, Bail et al. (2018) found that Republican participants expressed substantially more conservative views after following a liberal Twitter bot, whereas Democrats' attitudes became slightly more liberal after following a conservative Twitter bot. Polarization, in turn, has been found to be a significant antecedent of more active engagement (Wojcieszak, 2011).
On the methodological side, scholars have defined and measured cross-cutting exposure differently from Mutz (see Hutchens, Eveland, Morey, & Sokhey, 2018). Mutz (2006) asked respondents whether they tend to have different opinions compared to their discussion partners, which treats cross-cutting exposure as an independent variable. Others, by contrast, measured cross-cutting exposure with an interaction term of two independent variables and therefore did not measure cross-cutting exposure directly. Not surprisingly, their studies show that the influence of cross-cutting exposure on turnout is negligible. Other scholars have challenged Mutz on the grounds that her research ignores important moderators (e.g., Choi, Lee, & Metzger, 2017; Matthes, 2013) or does not control for exposure to and/or discussion of like-minded talk. As noted by Eveland and Hively (2009), this is not a zero-sum game: talking to like-minded people does not preclude talking to non-like-minded people. In other words, researchers should include both agreement and disagreement in their models of participation.
To summarize, there are two competing arguments when it comes to the relationship between disagreement and participation. On the one hand, the seminal studies by Mutz (2002b) would predict a negative relationship between cross-cutting exposure and participation. On the other hand, three theoretical mechanisms, learning, information search, and polarization, would suggest the opposite effect. We therefore formulate two competing hypotheses. First suggested by Chamberlin (1890), the idea behind competing hypotheses is to reduce confirmation bias by developing and comparing alternative explanations for a phenomenon under study. This does justice to the two competing research traditions described above. Thus,
H1a: There is a negative effect of cross-cutting exposure on political participation.
H1b: There is a positive effect of cross-cutting exposure on political participation.

Moderators
In order to determine the moderators, we followed an inductive meta-analytical approach (see Baltes, Dickson, Sherman, Bauer, & LaGanke, 2002; Dillard, Weber, & Vail, 2007; Yang, Aloe, & Feeley, 2014). According to this approach, the moderators are extracted inductively from the available studies: this reflects what has been done in the literature, and no hypotheses are offered for the role of single moderators. All important variations observed in the studies are thus treated as potential moderators. This procedure is well suited to our competing hypotheses, because it may help to detect differing rather than uniform moderating patterns. It is important to note that no unifying framework has been offered in the literature that would allow for a systematic mapping of moderators; a meta-analysis can only include those characteristics that vary and have been reported in extant research. We test three types of moderators: (1) those pertaining to the characteristics of cross-cutting exposure, (2) those describing the attributes of participation, and (3) methodological aspects.
Characteristics of Cross-Cutting Exposure: Source, Topic, and Place
There are two main sources of political information in most people's lives: the media and interpersonal communication. The question of which of these two sources triggers larger effects is not new: as early as The People's Choice, Lazarsfeld, Berelson, and Gaudet (1944) discussed the difference in effect sizes between media and interpersonal communication, concluding that interpersonal interaction was more influential than media. In the context of our meta-analysis, we want to separate the source of exposure from the channel of exposure. We thus define interpersonal communication as one-to-one communication and mass-mediated cross-pressures as one-to-many. Both one-to-one and one-to-many communication can occur offline or online (i.e., interpersonal talk online and offline; mass media exposure online and offline). One may argue that normative social pressures in situations of dissent are much higher in interpersonal than in mass-mediated situations (Mutz, 2002b). That is, when watching or reading news messages, people need not react to opposing views or justify themselves. Interpersonal discussion, by contrast, demands a direct response, and the social consequences of disagreement may be much more visible. However, this question has not been sufficiently addressed in prior research. We thus ask:

RQ1: Is the effect of cross-cutting exposure on political participation more positive or more negative for interpersonal compared to mass-mediated cross-pressures?
Most studies on the effects of cross-cutting exposure are based on self-reports of political discussions (see Klofstad, McClurg, & Rolfe, 2009). Commonly, respondents are asked whether they politically disagree or agree with others. This approach to cross-cutting exposure does not come without challenges. Most importantly, it raises the question of what respondents understand as "political". The alternative is to relate survey questions to specific topics that are commonly perceived as political. This is frequently done in experimental research, in which researchers define specific issues when constructing the stimulus material to be used in the experiment. For instance, in a field experiment, Shi (2016) mailed participants either a disagreeing or a reinforcing message related to a constitutional amendment on same-sex marriage, which was on the ballot in the U.S. state of North Carolina in 2012. Her results showed that disagreeing messages dampened political participation. In short, this discussion raises the question of whether it empirically makes a difference whether respondents are asked about politics in general, in which case politics may be defined by the respondents themselves, or about specific issues. We are not aware of any previous research addressing this question directly with respect to the effects of cross-cutting exposure. Hence, we formulate the following research question:
RQ2: Is the effect of cross-cutting exposure on political participation more positive or more negative for general versus specific political topics?
As has often been argued, the internet may create environments in which cross-cutting exposure is limited (e.g., Dylko et al., 2017). Hence, one may theorize that the effect of cross-cutting exposure on participation is weaker in an online than in an offline setting because online and offline settings differ in their opportunities for cross-cutting exposure. Schulz and Roessler (2012) argued that there may be multiple opinion climates online depending on the content that users select. In online environments, "algorithms inadvertently amplify ideological segregation by automatically recommending content an individual is likely to agree with" (Flaxman, Goel, & Rao, 2016, p. 299).
However, while there are thus arguments for how the internet may promote exposure to disagreeing or to reinforcing political messages, the question to be addressed here is whether the effects of cross-cutting exposure are more or less consequential when cross-cutting exposure occurs offline rather than online. We are not aware of any previous research that has directly compared the effects of offline versus online cross-cutting exposure on political participation. In a related vein, however, Hardy and Scheufele (2005) reported similar moderating effects of both offline and online interaction on the effect of online news use on political participation. We therefore formulate the following research question:
RQ3: Is the effect of cross-cutting exposure on political participation more positive or more negative for online compared to offline cross-pressures?

Attributes of Participation: Type, Publicness, and Effort
Previous research highlights three important distinctions: the type of participation (online vs. offline), the publicness of participation (i.e., private versus public expressions), and the effort of participation. The advent of the internet made online political participation possible (Ohme, de Vreese, & Albaek, 2017). Online participation typically refers to activities such as signing an online petition, sending a message to a public office holder, or sharing political messages on social networks. By contrast, scholars have measured offline participation with indicators of traditional forms of participation, such as voting, working in a party organization, attending speeches, and displaying campaign buttons. The question, then, is whether exposure to cross-pressures affects online participation to a greater extent than offline participation. There is some evidence that cross-cutting talk depresses offline (e.g., Hopmann, 2012; Matthes, 2013) and online (e.g., Valenzuela, Kim, & Gil de Zúñiga, 2011) participation. However, findings are inconsistent (see Bello, 2012), and we lack studies systematically comparing online and offline participation. We therefore ask:
RQ4: Is the effect of cross-cutting exposure on political participation more positive or more negative for online compared to offline participation?
Another angle from which political participation can be observed is the extent to which a political activity is public (Scheufele & Eveland, 2001). Donating money to a political party can be done without anyone in one's surroundings noticing; attending a political rally, by contrast, can hardly be done secretly. This difference in publicness may have implications for how individuals react when exposed to cross-cutting information prior to a possible political activity. Mutz (2002a), for instance, reported that cross-cutting exposure has a negative impact on political activities involving public face-to-face confrontation, but not on activities lacking such direct confrontation. This is also in line with research on the spiral of silence (Noelle-Neumann, 1974). The theory assumes that publicly voicing minority opinions (i.e., when one disagrees with the majority) is less likely than voicing majority opinions, due to the fear of becoming socially isolated. Following this line of reasoning, when participation efforts are private, they cannot be sanctioned with social isolation by others. When they are public, by contrast, social relations are put at stake because everybody can see and evaluate the political action. In Mutz's words, according to the social accountability explanation, "we would expect mainly public forms of political participation to be affected; in private situations such as the voting booth, cross-cutting networks should pose few problems due to social accountability" (p. 841). However, the learning, information search, and polarization explanations would not support this reasoning. We thus ask:
RQ5: Is the effect of cross-cutting exposure on political participation more positive or more negative for public as compared to private forms of participation?
Finally, the effects of cross-cutting exposure may depend on the effort needed for participatory activities. Whereas low-effort participation refers to activities requiring a relatively small amount of time and energy (e.g., sharing political information; signing a petition on the street), high-effort participation is more time- and energy-consuming, as, for instance, protesting or writing a political blog entry. High-effort forms of participation may be rooted in deeper convictions and thus be less affected by cross-pressures. As Pattie and Johnston (2009, p. 266) explained, high-effort participation usually takes place in contested environments, such as voting campaigns, where the mass media may outweigh the effects of interpersonal disagreement. By contrast, "other forms of political participation generally take place in lower-stimulus environments, where disagreement in discussion networks may have a greater effect" (p. 283). In this vein, Lu and Myrick (2016) have shown that cross-cutting exposure has a more pronounced effect on low- than on high-effort political participation. However, the opposite assumption could be derived from the learning, information search, and polarization explanations. For instance, one could argue that learning, as a result of cross-cutting exposure, may have stronger effects on less effortful, and thus less demanding, forms of participation, because less effortful forms are easier to influence. No research has tested these conjectures, however. We thus ask:
RQ6: Is the effect of cross-cutting exposure on political participation more positive or more negative for low-effort as compared to high-effort forms of participation?

Methodological and Sample Characteristics
Findings may not be uniform across designs and samples. Design and sample characteristics are typically accounted for in a meta-analysis. We pose no hypotheses, as these factors are mainly determined by the characteristics of the sampled studies. Previous research on the validity of survey experiments, for instance, has reported that experiments tend to show larger effect sizes than surveys of real-world communication effects (Barabas & Jerit, 2010). Hence, we also investigate whether differences in the results reported by prior research depend on the implemented research designs. Sample characteristics, especially the question of student versus non-student samples, typically matter for communication effects. We therefore ask:
RQ7: Does the effect of cross-cutting exposure on political participation depend on the design and the sample characteristics?
Findings may also not be uniform across individuals. For instance, persuasion research generally suggests that men and women, as well as younger and older respondents, differ in how they react to persuasive messages (e.g., Leventhal, Jones, & Trembly, 1966). The effects of disagreement may also crucially depend on culture. Collectivist cultures like those in Asia value the collective work of groups and may thus perceive cross-pressures as more threatening to social harmony. Western cultures, by contrast, tend to emphasize the individual, so people may still be valued as individuals even though they do not agree with the majority view. This perspective has also been proposed in research on the spiral of silence. However, a recent meta-analysis demonstrates that the spiral of silence is independent of culture (Matthes, Knoll, & von Sikorski, 2018). So even though the roles of age, gender, and region have never been systematically examined in research on cross-cutting exposure, meta-analysis allows us to explore the dependency of the effect on these characteristics. Ultimately, this enables a test of the universality of the phenomenon under study. We therefore ask:
RQ8: Does the effect of cross-cutting exposure on political participation depend on the gender, age, and region of the respondents?

Study Retrieval
Our systematic literature search strategy is visualized in Figure 1. Studies were collected from two major databases (Communication and Mass Media Complete, Web of Science). The search was limited to journal articles and conference papers written in English, and the databases were searched through May 2017 using the string ((counterattitudinal OR "counter-attitudinal" OR "partisan news" OR "partisan media" OR "cross-cutting" OR "two-sided" OR disagree* OR heterogen* OR ambivalen* OR "cross-pressure*" OR dissonan* OR incongruen* OR uncongenial OR "different political viewpoints") AND (polit*)). To identify additional literature, we searched Google Scholar for works citing Mutz (2002b). This step ensured that literature not (yet) traceable through the databases was also included in the analysis. This led to an initial list of N = 4,180 identified papers; the list, along with the reasons for all exclusions, is available upon request from the authors.

Study Selection
We selected papers that assessed the impact of cross-cutting exposure on participation.¹ Study exclusion proceeded in three consecutive steps. First, we excluded all research presenting no quantitative data, content analyses, methodological research, literature reviews, research unrelated to the goal of the meta-analysis, and research presenting simulated data. In this step, we excluded N = 4,108 papers, leaving 72 papers for the second step. There, we applied two inclusion criteria. The first criterion pertained to the operationalization of the independent variable, exposure to cross-cutting (i.e., disagreeing) opinions. That is, we only included studies that operationalized exposure as exposure to opinions that do not match those of the respondents. Exposure had to be either measured or manipulated by exposing some of the participants to cross-cutting opinions and some of the participants to agreeing opinions or a control condition. Exposure could occur through interpersonal discussions and/or media. Most importantly, the opinions of others, or the opinions expressed in the media, had to conflict with the opinions of the respective participants (Eveland & Hively, 2009). As a result, papers were excluded if they did not contain any measurement or manipulation of exposure to opinions, but solely investigated exposure to people of different ethnicity, age, gender, or income (e.g., Belletini, Ceroni, & Monfardini, 2016). In addition, papers were excluded if they only objectively assessed whether participants could have been exposed to cross-cutting opinions (e.g., through voter registration files), but did not check whether participants were actually exposed (e.g., Belanger & Eagles, 2007). Papers were also excluded if they measured exposure to partisan vs. non-partisan media without checking whether the exposure was actually cross-cutting or in agreement with the participants' opinions.
Finally, we excluded papers that measured exposure to opinions without checking whether the opinions were in agreement or disagreement with the participants' opinions (e.g., Nir, 2005). The second criterion concerned the dependent variable. Papers were only included if they measured political participation as any behavior or behavioral intention "by ordinary citizens directed toward influencing some political outcome" (Brady, 1999, p. 737). This included any form of behavioral civic engagement as well as the timing of voting decisions (Kim & Chen, 2015). Papers were excluded if they merely measured political interest or knowledge. In addition, papers were excluded if they measured discussing an issue as a form of participation, since this constitutes our independent variable. In total, 18 papers were excluded in the second step.
In the third step, we excluded six papers lacking the appropriate statistical information to calculate effect sizes with the formulas provided by Lipsey and Wilson (2001). The authors of the six papers were contacted, but it was not possible to obtain the missing information (e.g., Dilliplane, 2011; Pattie & Johnston, 2009).

Effect Size Calculation and Integration
Pearson's r was used as the effect size estimate. A positive r indicates that people who are exposed to cross-cutting opinions are more likely to participate politically than those who are exposed to agreeing opinions. When studies reported the timing of the voting decision as the dependent variable, a positive r indicates that people who are exposed to cross-cutting opinions vote earlier than those who are exposed to agreeing opinions.
In studies reporting Pearson correlation coefficients, r was taken directly from the articles. In studies reporting Kendall's tau correlation coefficients, Kendall's tau was converted to Pearson's r based on Gilpin (1993). In studies reporting regression results, standardized regression coefficients were transformed to Pearson's r according to the formula provided by Peterson and Brown (2005). In studies reporting means and standard deviations or frequencies, r was calculated according to the formulas provided by Lipsey and Wilson (2001). Importantly, effect sizes obtained from studies reporting Pearson correlation coefficients, Kendall's tau correlation coefficients, means and standard deviations, or frequencies did not significantly differ from effect sizes obtained from studies reporting regression coefficients (i.e., controlling for the influence of other variables; χ²(1) = .59, p = .44). Before performing the syntheses, correlation coefficients (r) were converted to Fisher's z scale (Zr; Borenstein, Hedges, Higgins, & Rothstein, 2009; Lipsey & Wilson, 2001). In total, 114 effect sizes were obtained. The meta-analysis was carried out using the R metafor package (Viechtbauer, 2010).²
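The conversions described above can be sketched as follows. This is an illustrative sketch rather than the authors' code, and it assumes the standard forms of the cited formulas: Greiner's relation (as tabulated in Gilpin, 1993) for Kendall's tau, the Peterson and Brown (2005) imputation for standardized betas, and Fisher's r-to-z transform.

```python
import math

def tau_to_r(tau):
    # Greiner's relation: approximate Pearson's r from Kendall's tau
    return math.sin(math.pi * tau / 2)

def beta_to_r(beta):
    # Peterson & Brown (2005) imputation, valid for -0.5 <= beta <= 0.5:
    # r = .98*beta + .05*lambda, where lambda = 1 if beta >= 0, else 0
    lam = 1.0 if beta >= 0 else 0.0
    return 0.98 * beta + 0.05 * lam

def fisher_z(r):
    # Fisher's r-to-z transform; the variance of Zr is 1/(n - 3)
    return 0.5 * math.log((1 + r) / (1 - r))
```

Pooling is then carried out on the Zr scale, with results back-transformed to r for reporting.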

Moderators
The source to which participants were exposed was coded as follows: media (0), common citizens (1), family, friends, or neighbors (2), and colleagues (3). It was coded whether participants were exposed to cross-cutting opinions regarding a specific topic (0) or opinions in general (1). The channel of exposure was coded as offline (0) or online (1). Respondent characteristics were: mean age in years, percentage of women, and participants' origin (North America (0), Europe (1), Asia (2), Africa (3), South/Central America (4)).
In terms of the dependent variable, the type of participation was coded as offline participation (0; e.g., campaign work, displaying campaign signs, attending rallies), online participation (1; e.g., commenting on a post, emailing an elected official, signing an online petition), voting (2), or timing of voting decision (3). Furthermore, the effort of participation was coded as low effort (0; e.g., displaying campaign signs, subscribing to a mailing list, liking a political actor) or high effort (1; e.g., attending a political meeting, working for a party/candidate, financial donations). In addition, we coded the publicness of participation as private (0; e.g., voting, donating money anonymously) or public (1; e.g., displaying campaign signs, liking a political actor, attending rallies).

Overall Effect Analysis
Contrary to both competing hypotheses (H1a and H1b), the overall effect analysis revealed a nonsignificant effect of cross-pressures on participation, r = .002, Zr = .002 (95% CI = −.04, .05).³ The forest plot displays aggregated effect sizes for each study as well as the corresponding 95% CIs; larger squares indicate larger sample sizes. Significant variability was found among effect sizes, Q(69) = 866.23, p < .0001. This finding suggests that effect sizes vary due to between-study differences. The I² statistic, the share of total variability (sampling variance plus heterogeneity) that can be attributed to heterogeneity among the true effects, is important in this context: about 94% of the total variability could be attributed to between-study differences (I² = 93.80). These differences may be explained by moderators (Huedo-Medina, Sánchez-Meca, Marín-Martínez, & Botella, 2006).
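The heterogeneity statistics reported above can be illustrated with a minimal sketch. This is not the authors' code; note that metafor estimates I² from its fitted τ², so values computed with this simple moment-based formula may differ slightly from those reported.

```python
def q_statistic(zr, variances):
    # Cochran's Q: inverse-variance-weighted sum of squared
    # deviations of the Fisher-z effects from their pooled mean
    w = [1.0 / v for v in variances]
    pooled = sum(wi * zi for wi, zi in zip(w, zr)) / sum(w)
    return sum(wi * (zi - pooled) ** 2 for wi, zi in zip(w, zr))

def i_squared(q, k):
    # Higgins & Thompson's I^2 (in percent): the share of total
    # variability attributable to between-study heterogeneity
    df = k - 1
    return max(0.0, 100.0 * (q - df) / q)
```

With the reported Q(69) = 866.23 over k = 70 aggregated effects, this formula gives I² ≈ 92%, close to the REML-based value of 93.80 reported in the text.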

Moderator Analysis
Moderated effects were tested by calculating multilevel mixed-effects models (i.e., multilevel meta-regressions). For each moderator, a separate meta-regression was calculated. All categorical moderators were dummy coded. Results are displayed in Table 1. Estimates represent changes in effect size according to changes in moderator levels. The chi-square test statistic indicates whether a moderator, taken as a whole, significantly impacts effect size (Q-test; Borenstein et al., 2009). The z test statistic, by contrast, indicates whether a certain level of a categorical moderator differed significantly from the reference category of that moderator (Z-test; Borenstein et al., 2009). The reference categories equal the moderator levels coded as zero in the methods section. Answering research question 1, effect sizes did not differ between interpersonal and mass-mediated cross-pressures (χ²(3) = .84). Thus, it did not matter whether the media or citizens, family, or colleagues exposed respondents to cross-cutting views. Regarding research questions 2 and 3, neither the topic (i.e., general vs. specific, χ²(1) = 0) nor the place of exposure (i.e., online vs. offline, χ²(1) = .44) moderated the effect. Regarding the characteristics of the dependent variable, political participation, we also did not find any significant effect. Answering research questions 4, 5, and 6, the effects of cross-cutting exposure were not larger for online versus offline (χ²(3) = 2.48), public versus private (χ²(1) = 1.20), or low- versus high-effort participation (χ²(1) = .44). Answering research questions 7 and 8, there were no effects of design and sample characteristics, nor did we detect any effects of respondent characteristics (gender: χ²(1) = .19; age: χ²(1) = .63; region: χ²(4) = .51) (see Table 1).
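The dummy coding of categorical moderators can be sketched as follows. This is an illustrative sketch, not the authors' code; it mirrors what a formula such as metafor's mods argument does internally, with the level coded 0 in the methods section serving as the reference category.

```python
def dummy_code(levels, reference):
    # Map each observation's category to a row of 0/1 indicators,
    # omitting the reference category (absorbed by the intercept)
    categories = sorted(set(levels) - {reference})
    return [[1 if lvl == cat else 0 for cat in categories] for lvl in levels]

# Example using the source-of-exposure moderator, with "media" (coded 0)
# as the reference category
rows = dummy_code(["media", "citizens", "media", "colleagues"],
                  reference="media")
# each row holds one indicator per non-reference category
```

The Q-test then asks whether all indicator coefficients are jointly zero, while each Z-test compares one level against the reference category.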

Publication Bias Analysis
We used a funnel plot and Egger's regression test for funnel-plot asymmetry to assess whether studies with small samples and small effect sizes failed to be published (Egger, Smith, Schneider, & Minder, 1997). There was no evidence of publication bias: no smaller studies with small effect sizes were missing from the bottom left corner of the plot. This finding was corroborated by a non-significant Egger's regression test.
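Egger's test regresses each study's standard normal deviate (effect size divided by its standard error) on its precision (1/SE); an intercept far from zero indicates funnel-plot asymmetry. A minimal sketch with hypothetical, roughly symmetric data (the actual analysis used metafor):

```python
import numpy as np

def egger_test(z, v):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standard normal deviate (z / SE) on precision
    (1 / SE); an intercept far from zero suggests small-study effects.
    Returns the intercept and its t statistic.
    """
    se = np.sqrt(v)
    snd = z / se                          # standard normal deviate
    precision = 1.0 / se
    x = np.column_stack([np.ones_like(precision), precision])
    beta, *_ = np.linalg.lstsq(x, snd, rcond=None)   # OLS fit
    resid = snd - x @ beta
    sigma2 = np.sum(resid ** 2) / (len(z) - 2)       # residual variance
    cov = sigma2 * np.linalg.inv(x.T @ x)
    t_intercept = beta[0] / np.sqrt(cov[0, 0])
    return beta[0], t_intercept

# Hypothetical symmetric funnel: intercept should be close to zero
z = np.array([0.10, -0.08, 0.05, -0.04, 0.02, -0.02])
v = np.array([0.04, 0.04, 0.02, 0.02, 0.01, 0.01])
intercept, t_int = egger_test(z, v)
```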

Discussion
With two competing theoretical predictions, the relationship between cross-cutting exposure and political participation is one of the key unsettled questions in political communication scholarship. For the first time, we have tested this question with an extensive meta-analysis, arguably the most rigorous way to arrive at a definitive answer. Across all studies, we observed no significant relationship between cross-cutting exposure and political participation, and none of the many moderators we examined had a significant effect, either. In some studies, cross-cutting exposure had a positive effect on participation, in others a negative effect, and in yet others no relation whatsoever. Considering the large number of studies, respondents, and effect sizes we included, the evidence is overwhelming enough to firmly conclude that the disagreement over the effects of disagreement (Klofstad et al., 2013, p. 120) can be settled: exposure to cross-cutting information, whether online or offline, interpersonal or mass-mediated, no matter the topic under investigation or the type of participation, and independent of the characteristics of the respondents, neither systematically dampens nor encourages participation.

Table 1. Meta-regression results for testing the influence of the moderators. Note. k: number of effect sizes; Estimate: meta-regression coefficient for Zr; CI: confidence interval with lower (LL) and upper (UL) limits; χ²: test statistic of the Q-test; z: test statistic of the Z-test.
One may argue that the phenomenon of cross-cutting exposure is itself so heterogeneous that its definition and measurement vary from study to study. In fact, we also assessed how cross-cutting exposure has been operationalized, for instance with frequency-based measures of discussion disagreement versus network-size measures of exposure (see Mutz, 2002a), but none of these distinctions varied systematically enough to matter in a meta-analysis. Even if they did, those measures are correlated and thus very unlikely to explain why an effect is positive in one study and negative in another. In addition, we did not include studies on heterogeneity, but only those on disagreement, which is a clearly defined and very straightforward concept. Variations in the direction of an effect can plausibly be explained only by fundamental study characteristics, such as those we have analyzed in the present study, not by tiny differences in item wordings. After reviewing all studies, we selected all possible characteristics with sufficient variation between the studies. We additionally tested combinations of several forms of participation, such as offline public versus online public. None of these variations yielded significant effects, however. The overall null effect thus clearly reflects the state of the art in this line of research. Needless to say, a non-significant relationship should have the same value as any positive or negative meta-analytical effect.

Explanations
We offer two different explanations for why there is no overall effect of cross-cutting exposure on participation. First and most importantly, the effect may be mediated rather than direct. We have discussed five underlying mechanisms, two for negative and three for positive effects. Mutz (2002b), for instance, theorized that the relation between cross-pressures and participation can be explained by social accountability and ambivalence. Yet subsequent research has not sufficiently picked up this idea. Prior research has primarily looked at the direct (or total) relationship, and potential mediating paths have not been systematically investigated. Direct effects may be visible in some studies but not in others, depending on the variables included in the models. Yet when only a direct effect is examined, there is a risk that mediating paths are overlooked (Hayes, 2018). That is, an effect may only appear when the underlying mechanism is taken into account. One could argue that underlying mechanisms differ for positive and negative effects, and in some cases, they may even cancel each other out. Put differently, if some mechanisms lead to negative and some to positive outcomes on participation, the overall effect may be zero. Thus, rather than asking whether the effect is positive or negative, future research should test the conditions under which the five different processes are likely to unfold. Consequently, the picture may be much more complex than previously thought.
Second, and relatedly, the different mechanisms may depend on moderator variables. Since mechanisms have not been tested in prior research, such moderated mediation processes cannot be assessed with the present meta-analysis. As a first step in that direction, researchers are called on to engage in theory development describing the conditions for several (and competing) underlying mechanisms. Ultimately, we should strive to explore moderated mediation models and mediated moderation models. That is, cross-cutting exposure may prompt some underlying mechanisms for some individuals, and this, in turn, will foster or dampen participation.
Assuming the effects of cross-cutting exposure are mediated and moderated, the overall conclusion of our meta-analysis still holds: Across studies, there is no main effect of cross-cutting exposure, which clearly contradicts the original theoretical idea as well as the challenging stream of research. Hence, moderated mediation models may not change the overall interpretation of our meta-analysis, but they would lead to a more accurate understanding of the conditions and mechanisms under which cross-cutting exposure fosters or hampers participation.

Limitations and Future Research
One may argue that the effect is moderated beyond the variables we could examine in the present meta-analysis. Such moderators may be situated at the individual or the contextual level, and they may lead to a negative effect for some individuals while, for others, cross-pressures may foster participation. To give some recent examples, personality traits such as agreeableness or extroversion (Lyons, Sokhey, McClurg, & Seib, 2016), general social trust (Matthes, 2013), as well as concepts such as political interest, efficacy, and the strength of ideology may matter in this context. The problem is that the impact of such moderators cannot be assessed in a meta-analysis unless they are treated as systematic factors in, for instance, experimental studies. In fact, strength of partisanship or related concepts such as attitude strength would arguably help explain when disagreement impacts participation and when it does not (see Matthes, Morrison, & Schemer, 2010). However, strength of partisanship has either been controlled for or was included as an interaction variable (e.g., Matthes, 2013). Thus, the effects for low versus high partisan strength cannot be computed, pointing to a pressing issue to be addressed in future research.
As is common in meta-analyses, we only included papers available in English. Yet it is important to note that we included all relevant studies published in English. Some prominent works (e.g.,  could not be included because they either did not directly measure participation or cross-cutting exposure. 4 Also, coding continents ignores differences between single countries. Related to that, we need research on countries varying in their level of democracy or other indicators that inform us how "open" a country is to political disagreement. This may have important implications for the study of political participation, yet we completely lack data on this aspect. Another limitation refers to the fact that we did not test the effects of heterogeneity or ambivalence. We can thus draw no conclusions whatsoever about these terms, which are conceptually and theoretically different and lead to different hypotheses and outcomes (Bello, 2012; Castro & Hopmann, 2018; Nir, 2011). Also, the relationship between disagreement and polarization is far from clarified, either theoretically or empirically.
Even though one can never rule out that unpublished studies were missed during the retrieval phase, we believe that this could not have affected our findings as we applied a random effects model. This means the investigated studies were treated as a random subset of a larger study population (Hedges & Vevea, 1998). We also found no evidence for a publication bias. Finally, we may not have been able to code some potentially important moderators because they did not occur in a considerable number of studies (e.g., the role of anonymity).
We call for future research to conduct experimental studies and, even though sampling had no effects in the present study, we encourage researchers to work with non-student samples. This may be decisive in tracking moderated effects for some parts of the public. Also, we clearly need more work systematically comparing the characteristics of the sources of cross-cutting talk as well as the types of participation. Disagreement in face-to-face situations may lead to different psychological processes than disagreement observed in mass-mediated contexts. Hence, future studies should not only compare face-to-face versus mass-mediated settings but also measure and analyze the processes that exposure to dissent may trigger in these contexts. That is, studies may differ in their findings because disagreement may have meant different things in different studies. Yet this is impossible to code in a meta-analysis, and thus, more research is needed on how disagreement is experienced on a psychological level. When it comes to online participation, the anonymity of the expression situation deserves more attention. Again, this could not be coded. Anonymity should not be confused with public versus private participation: participation may be public yet either anonymous or not. One may theorize that voicing dissenting views anonymously evokes different psychological mechanisms than speaking one's mind under disclosure of one's identity (Neubaum & Krämer, 2016). Finally, future researchers are called on to work with behavioral measures rather than behavioral intentions.

Conclusion
This study aimed at quantifying the effect of cross-cutting exposure on political participation by using a meta-analytical approach. The results showed high heterogeneity of the effects observed in the literature but no overall effect. Several key theoretically and methodologically relevant moderators did not help to explain why cross-pressures enhance participation in one case, dampen engagement in another, or are totally unrelated to participatory efforts. We conclude that the concerns about negative effects of cross-cutting exposure on political engagement can be alleviated.

Notes

1. All excluded and included papers are made accessible here: https://osf.io/nxg32/?view_only=65b732e5355a472cb00d13cd6d441fb9.
2. Estimates were based on random-effects models, which assume that true effect sizes differ, for instance, because of different participants or treatments (see Hedges & Vevea, 1998). Several studies reported results that enabled obtaining more than one effect size per study. Performing a meta-analysis on these studies as-is would violate the assumption of independence of effect sizes and would assign more weight to the studies producing more than one effect size. Researchers have recently suggested treating meta-analysis as a multilevel model to address these issues (e.g., Cheung, 2014; Konstantopoulos, 2011). The basic idea is to nest the effect sizes (first level) within the studies (second level). Effect sizes stemming from the same study receive the same random effect, while effect sizes stemming from different studies receive different random effects. Hence, the dependence or independence of effect sizes is explicitly modeled by assigning the correct random effect. Consequently, all effect sizes can be taken into account without aggregation and loss of information. Following this reasoning, the moderator analyses were carried out as multilevel mixed-effects models using the rma.mv() function (Viechtbauer, 2010). The overall effect analysis as well as the publication bias analysis were performed with effect sizes aggregated within studies using the rma() function, which estimates single-level random-effects models (Viechtbauer, 2010). A maximum likelihood estimator was applied. As studies showed considerable