Identifying the domains of ideological similarities and differences in attitudes

Liberals and conservatives disagree, but are there domains where we are more or less likely to observe ideological differences? To map the types of attitudes where ideological differences are more or less likely, we draw on two approaches: the elective affinities approach, which holds that individual differences explain the divergence between liberals and conservatives, and the divergent content approach, which posits that the key distinction between ideologues is their value orientations. The goal of the current research was to explore when and why liberals and conservatives disagree. We tested whether ideological differences are more likely to emerge in attitudes characterized by threat, complexity, morality, political ideology, religious ideology, or harm (as compared to objects not characterized by these domains) using both explicit and implicit measures of 190 attitude objects. While all domains predicted ideological differences, the political domain was the only significant predictor of ideological differences when controlling for the other domains. This study provides insight into the attitudes in which we are most and least likely to find ideological differences.

whether there will be ideological differences in a new, unanticipated sphere. For example, if we know that attitude objects associated with threat (e.g., crime) produce reliable ideological differences, then we can also predict that a new attitude related to threat (e.g., a war with Eurasia) will also produce ideological differences, whereas a new attitude unrelated to threat (e.g., what team wins the Super Bowl) will be more likely to produce ideological similarities. However, threat is just one example. We will explore how the threat, complexity, morality, ideology, and harm domains can help us locate ideological differences and similarities in attitudes. Just as England's first geological map helped landowners reliably predict the location of coal deposits (Sharpe, 2015), we hope that this initial map will help scholars and practitioners predict the locations of ideological differences and similarities.
There are two overarching approaches in the literature that scholars use to explain ideological differences: (1) the elective affinities approach and (2) the divergent content approach. We draw on these approaches to develop predictions about the areas where we are most and least likely to see attitudinal differences between liberals and conservatives. Generally, we use the elective affinities approach to predict that attitude spheres characterized by threat and complexity will exhibit larger ideological differences, whereas their converse will exhibit smaller ideological differences. We use the divergent content approach to predict that attitude spheres characterized by morality, ideology, and harm will exhibit larger ideological differences, whereas their converse (non-moral, non-ideological, and unrelated to harm) will exhibit smaller ideological differences.

Elective affinities
The elective affinities approach posits that personality, motivational, and cognitive differences shape divergent political views (e.g., immigration policy) and identifications (e.g., self-rating as strongly liberal). This approach is often based on findings suggesting there are ideological asymmetries in self-reported cognitive styles and reactions to threat (Jost, 2017). In short, people choose views and identifications that match their dispositions (e.g., openness to experience or need for closure; Hirsh et al., 2010; Jost et al., 2009; Thórisdóttir & Jost, 2011). Personality dispositions correlated with ideology fall into two basic categories. The first category is threat: dispositions associated with managing fear and uncertainty (Jost et al., 2003; Jost et al., 2007). Conservatives report greater fear of threat and loss (Jost et al., 2003), suggesting that, compared to their liberal counterparts, conservatives are dispositionally motivated to avoid threatening situations (Jost et al., 2003; Jost, Stern et al., 2017) and to defend the current social system against threat and uncertainty (Jost et al., 2001, 2004). On the other hand, liberalism (compared to conservatism) is associated with openness to experience (Jost et al., 2003), providing an explanation for why liberals like abstract art more than conservatives do (Feist & Brady, 2004), and why liberals tend to own more diverse music collections, books, and items pertaining to travel (e.g., cultural memorabilia; Carney et al., 2008).
The second type of disposition is the tendency to engage with complex cognitions (i.e., the tendency to view situations as black and white versus nuanced; Jost et al., 2003; Jost, Sterling et al., 2017). Liberalism has been associated with cognitive complexity (i.e., an ability to perceive nuance) (Jost et al., 2003). This notion explains why conservative senators use less complex policy statements (Tetlock, 1983), and why liberals are more likely than conservatives to engage in perspective taking of outgroups (Sparkman & Eidelman, 2016), perhaps explaining their greater tendency to support immigration policy and policies that benefit racially diverse groups. Taken together, the elective affinities approach suggests that personality dispositions related to threat and complexity are related to ideological differences.
Notably, personality differences in ideology are thought to influence both the political (e.g., reactions to terrorism) and non-political (e.g., favorite musicians) preferences of liberals and conservatives. This is because people are more likely to prefer products, attitudes, objects, and ideas that match their personality. Consistent with this idea, messages that target individuals' personalities influence preferences more effectively than standard persuasion techniques do (Hirsh et al., 2012; Matz et al., 2017; Orji et al., 2017). This suggests that merely highlighting a match between a message and an individual's personality can shape behaviors and preferences. This line of reasoning suggests liberals' and conservatives' personalities make different attitudes and preferences seem more attractive, subsequently creating ideological differences. Specifically, based on the elective affinities perspective, we predict that situations that highlight personality dispositions related to threat and the need for more nuanced thinking (i.e., complexity) likely increase the extent to which liberals and conservatives differ in their preferences. Topics highlighting complex thought processes or threats will pull liberals and conservatives in opposite directions, creating greater differences between liberals' and conservatives' preferences. Conversely, topics that are less threatening and simpler will likely show more ideological similarities.

Divergent content perspectives
The divergent content approach suggests that liberals and conservatives may differ in their personality dispositions, but that the key difference between the groups is their different underlying political, moral, and religious values (Haidt & Graham, 2007; Schwartz, 1996); subsequently, no differences between liberals and conservatives are expected when attitudes are not based in politics, morality, or religion. These different underlying values are what drive differences between liberals and conservatives. For example, while there is consensus that morality is an important factor to both liberals and conservatives (Hofmann et al., 2014; Skitka et al., 2016), the content of that morality differs (Graham et al., 2011; Haidt et al., 2009), leading to different values (Nilsson & Erlandsson, 2015). These different values have further downstream consequences, including different perceptions of what is harmful (Feinberg et al., 2014; Gray et al., 2014; Inbar et al., 2009). For example, liberals tend to support more gun restrictions, as they believe people are being harmed by guns, while conservatives disagree, instead believing people will be harmed if their guns are taken away (Gray et al., 2014). This suggests that while harm matters across the ideological spectrum, what liberals and conservatives view as harmful varies. Similarly, when it comes to prejudice, the key difference between liberals and conservatives is their political values (see Brandt & Crawford, in press). Both groups express prejudice towards value-violating groups, but which groups are seen as value violating differs for liberals and conservatives (Wetherell et al., 2013).
One implication of political and moral values underlying ideological differences is that those values may further influence the non-political preferences of liberals and conservatives. This is because products, attitude objects, and ideas that match one's political, religious, or moral values are more likely to be liked, and political rhetoric may turn apolitical attitudes into political attitudes (e.g., purchasing Greenland during August 2019). Consistent with this idea, messages that target individuals' moral priorities effectively influence their political (Feinberg & Willer, 2015) and non-political preferences (Kidwell et al., 2013). Moreover, in some cases people adjust their attitudes to match those of political elites who share their political affiliation (Cohen, 2003; Zaller, 1992). This suggests that a person's behavior and preferences can be shaped by merely highlighting a match between a preference and the person's political and moral values.
This line of reasoning suggests liberals' and conservatives' political and moral values make different attitudes and preferences seem more attractive, subsequently creating ideological differences. Specifically, based on the divergent content perspective, we predict that situations that highlight moral values, political or religious values, or harm likely increase the extent to which liberals and conservatives differ in their preferences. Topics falling into these domains will pull liberals and conservatives in opposite directions, creating greater differences between liberals' and conservatives' preferences. Conversely, topics that are apolitical, less morally or religiously relevant, and less harm relevant will likely show more ideological similarities.

Current research
The current research attempts to map ideological differences and similarities in attitudes by examining how attitude objects related to threat, complexity, morality, politics, religion, or harm are associated with the size of ideological differences. We examine two broad sets of hypotheses about the domains based on the elective affinities and divergent content perspectives:

• Elective Affinities Hypotheses: Attitude objects associated with threat (threat hypothesis) or complexity (complexity hypothesis) will be associated with greater ideological differences than those associated with low levels of threat or with simplicity.

• Divergent Content Hypotheses: Attitude objects associated with morality (morality hypothesis), politics (political hypothesis), religion (religion hypothesis), or harm (harm hypothesis) will be associated with greater ideological differences than those associated with low levels of morality, politics, religion, or harm.
These two sets of hypotheses are not necessarily mutually exclusive. Although the work inspiring the elective affinities hypotheses tends to downplay the role of divergent content, and the work inspiring the divergent content hypotheses tends to downplay the role of elective affinities, it is possible for us to find evidence supporting or contradicting both approaches. More importantly, testing these two sets of hypotheses helps us understand not only what the ideological differences are, but also in which spheres we are most and least likely to find them. This latter point is key, as these inferences will provide clues as to where we can expect to find ideological differences and similarities in new, unstudied spheres, and can also provide insight into how ideological minds differ in their attitudes across a variety of attitude objects. For example, the current state of the literature might suggest that threat and complexity are just as important for understanding ideological differences and similarities as religion and morality. This might be the case, but until we test how well these different domains are associated with the size of ideological differences in attitudes, we will not know.
We map ideological differences and similarities in attitudes and test our two sets of hypotheses using 190 single attitude objects and 95 pairs of attitude objects, using both implicit and explicit measures. Using such a large number of attitude objects and multiple types of measures allows us to comprehensively map out this space. We expect similar results across both implicit and explicit measures because prior research typically shows similar ideological differences when examining implicit and explicit measures (e.g., Greenwald et al., 2009; Payne et al., 2010; Jost et al., 2008). 1 To answer our research question, we first estimated the size of ideological differences in a large number of domains using both the Implicit Association Test (IAT; Greenwald et al., 1998) and explicit attitude measures. Then, we tested whether the size of the ideological differences is larger for attitude objects related to threat, complexity, morality, politics, religion, or harm. To estimate the size of the ideological differences, we analyzed the previously collected data from Hussey et al. (2018), which include a measure of political ideological identification, implicit and explicit ratings of 95 pairs of items (e.g., Jews and Christians), and explicit ratings of each of the 190 individual items (e.g., Christians). 2 These data allow us to estimate ideological differences, but they do not give us insight into the extent to which the items fall into one of the six domains we are interested in studying. To estimate the extent to which each pair of items and each individual item is perceived as falling into one of the six domains, we collected two new samples of data. Participants rated the extent to which each pair and each individual item can be described by one of the six domains we test. Our two sets of hypotheses were tested by assessing whether the size of the ideological differences is associated with each of the six proposed domains.

Methods
Step 1: estimating the size of ideological differences and similarities using existing data

The first necessary step to address our research question is to estimate the size of ideological differences across a large range of attitude objects. 3 For this, we used the Attitudes, Identities, and Individual Differences (AIID) Study (for a full description of methods and materials see Hussey et al., 2018). 4 This study was originally designed to examine the validity of the construct of attitudes and provide a large dataset for further use by researchers. Participants (after our exclusions described below, N = 49,665; 17,635 men, 31,946 women, 84 participants did not report their gender; M age = 31.55, SD age = 12.75) who visited the Project Implicit (https://implicit.harvard.edu/) website between 2004 and 2007 first completed demographic information, including their self-reported ideology. We recoded this measure to range from −3 (strongly liberal) to +3 (strongly conservative). Next, participants completed both an Implicit Association Test (IAT) and explicit self-report measures for 1 of 95 attitude pairs. We used both the implicit and explicit measures to estimate the size of ideological differences and similarities on these attitudes. At the aggregate level, there tends to be a strong relationship between implicit and explicit attitudes (Hehman et al., 2019), suggesting our analyses, which are also at an aggregate level, should produce similar trends for both types of measures.
Participants were randomly assigned to the IAT and the explicit measures of attitudes or identities for 1 of 95 different attitude pairs (see Table S1 in the supplemental materials for the complete list). The topics of these pairs vary greatly and include (1) specific people (e.g., celebrities such as 50 Cent and Britney Spears), (2) groups or types of people (e.g., African Americans and European Americans), (3) content-based thoughts, principles, and ideas (e.g., Gun Control and Gun Rights), (4) stylistic thoughts, principles, and ideas (e.g., Hot and Cold), (5) locations and regions (e.g., Japan and United States), (6) organizations (e.g., Microsoft and Apple), and (7) objects (e.g., Television and Books) (see Table S1 in the supplemental materials for a complete list including these categorizations).
Estimating the size of ideological differences using the IAT

The IAT had two subtypes, one focused on evaluation (e.g., "good" vs. "bad") and another focused on identity (i.e., "self" vs. "other"). For all of the IAT tasks, participants completed one IAT based on the procedure described by previous research using the same dataset (e.g., Nosek & Hansen, 2008). The IATs were scored using the D score developed by Greenwald et al. (2003). To ensure that all of the participants had the same cultural reference, were actively engaged in the IAT, and provided quality data, we restricted our sample to people reporting citizenship and residence in the United States, with English as their primary language, and we used strict exclusion criteria that removed participants who met one or more of the following eight criteria on the IAT: (1) ≥35% of responses faster than 300 ms in any one practice block.
The sample sizes after these exclusions are in Table S1 in the supplemental materials.
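The D scoring cited above (Greenwald et al., 2003) scales the latency difference between incompatible and compatible blocks by the pooled standard deviation of all responses. A minimal sketch, omitting the full algorithm's error-trial penalties and averaging over practice/test block pairs:

```python
import numpy as np

def iat_d(compatible_rts, incompatible_rts):
    """Simplified IAT D score: mean latency difference between the
    incompatible and compatible blocks, scaled by the pooled SD of
    all trials. The published algorithm additionally handles error
    trials and averages over practice/test block pairs."""
    all_rts = np.concatenate([compatible_rts, incompatible_rts])
    pooled_sd = np.std(all_rts, ddof=1)
    return (np.mean(incompatible_rts) - np.mean(compatible_rts)) / pooled_sd

# illustrative latencies (ms): slower responses in the incompatible block
d = iat_d(np.array([600.0, 620.0, 640.0]), np.array([700.0, 720.0, 740.0]))
```

Because the D score is standardized within each participant's own response variability, it is comparable across participants with different baseline speeds.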
To estimate the size of ideological differences on the IAT, we regressed IAT scores onto ideological identification for each of the 95 attitude pairs (95 regressions in total). The absolute value of the unstandardized beta from each regression is our estimate of the size of the ideological difference on the IAT. We also included a dummy code indicating whether the IAT was an evaluative IAT (IAT type dummy code = 0) or an identity IAT (IAT type dummy code = 1). This dummy code was used when we regressed IAT scores onto ideological identification to control for possible methodological differences. To get a sense of how precisely we are able to measure the size of ideological differences on the IAT, we computed the effect size (in r) that we have 80% power to detect at the available sample sizes. These effect sizes are reported in Table S1 in the supplemental materials. Importantly, across all of the domains, we have adequate power to detect small correlations, indicating that our estimates of ideological differences on the IAT are precise. 5

Estimating the size of ideological differences using explicit measures

Explicit preferences between pairs of items were also collected. Participants were asked which item of the pair they preferred. Specifically, they were asked, "Which do you prefer, Y or X?"
(where X and Y were the attitude targets). Participants responded using a 7-point scale: Strongly prefer Y to X (1), Somewhat prefer Y to X (2), Slightly prefer Y to X (3), X and Y are equally liked (4), Slightly prefer X to Y (5), Somewhat prefer X to Y (6), and Strongly prefer X to Y (7). To estimate the size of ideological differences, we regressed explicit preferences onto ideological identification for each attitude pair (95 regressions in total). The absolute values of the unstandardized betas from these regressions are our measure of the size of ideological differences. To get a sense of how precisely we are able to measure ideological differences in explicit preferences, we computed the effect size (in r) that we have 80% power to detect at the available sample sizes. These effect sizes are reported in Table S1 in the supplemental materials. Importantly, across all 95 pairs of attitude items, we have adequate power to detect small correlations, indicating that our estimates of ideological differences in explicit preferences are precise.
The AIID Study also included explicit evaluations of the individual items that make up each pair. For each of the individual items that make up the 95 pairs, participants reported the extent to which they like each item using likability (i.e., "How much do you like or dislike X"), warmth (i.e., "How warm or cold do you feel towards X"), or positivity (i.e., "How positive or negative do you feel towards X") assessments. Responses were recorded using a 10-point scale from strongly dislike/cold/negative (1) to strongly like/warm/positive (10). Explicit evaluation ratings were regressed onto ideological identification for each of the 190 individual attitudes (190 regressions). The absolute values of the unstandardized betas from these regressions are our measure of the size of ideological differences on the evaluation measures. A dummy code indicating whether the explicit evaluation used the liking, warmth, or positivity measure was included when we regressed explicit evaluations onto ideological identification (likability was the reference group) to control for this methodological difference. To get a sense of how precisely we are able to measure ideological differences in the evaluation measures, we computed the effect size (in r) that we have 80% power to detect at the available sample sizes. These effect sizes are reported in Table S1 in the supplemental materials. Importantly, across all 190 individual attitude items, we have adequate power to detect small correlations, indicating that our estimates of ideological differences in the evaluation measures are precise.
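The per-item estimation used for all three measures can be sketched as an ordinary least squares slope, with the absolute unstandardized coefficient taken as the size of the ideological difference. The data below are simulated and the variable names illustrative, not drawn from the AIID dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

def ideological_difference(ideology, attitude):
    """Absolute unstandardized slope from regressing an attitude measure
    (IAT D score, preference, or evaluation) on ideological
    identification (-3 strongly liberal .. +3 strongly conservative)."""
    slope = np.polyfit(ideology, attitude, 1)[0]
    return abs(slope)

# simulated participants for one attitude object: evaluations rise
# modestly with conservatism (true slope = 0.05)
ideology = rng.integers(-3, 4, size=2000).astype(float)
evaluation = 0.05 * ideology + rng.normal(0.0, 0.4, size=2000)
diff = ideological_difference(ideology, evaluation)
```

Taking the absolute value means the measure captures the size of the liberal-conservative gap regardless of which side prefers the item.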
Step 2: estimating the extent attitudes are associated with domains

Novel data collection
After estimating the size of ideological differences and similarities in attitudes using the three measures (i.e., IAT scores, explicit preferences, and evaluation measures) in the AIID Study, we collected two additional samples of data. Participants in these studies rated the extent to which the domains (e.g., morality) are associated with each pair (Rating Sample 1) and each individual item (Rating Sample 2). This allowed us to test the ability of the Elective Affinities Hypotheses and the Divergent Content Hypotheses to explain the size of ideological differences and similarities in attitudes. These rating tasks were tested in a small pilot, which showed that they were highly reliable (see supplemental materials).
In Rating Sample 1, participants (N = 383; 152 men, 221 women, 5 other, 5 missing values; M age = 34.67, SD age = 12.41) were recruited from Prolific, an online service that facilitates crowdsourcing of participants (for an overview see Palan & Schitter, 2018). 6 Data quality is high (Peer et al., 2017) and the site has procedures in place to prevent bots and repeat participants (Bradley, 2018). We only recruited participants who reported their nationality as American and their residency as in the United States via Prolific's own pre-screening procedure. We also only recruited participants with an approval rating of 90% or higher. Participants were informed about the nature of the study and chose whether or not to participate. After consent, participants were shown the following prompt: Imagine two people talking about two options (Option A and Option B). These two people may disagree about whether Option A or Option B is better for a variety of reasons. In this task, we are interested in the extent to which you think different reasons could affect whether people disagree with one another (i.e., why one person prefers Option A and the other prefers Option B).
Participants were then shown a random sample of 10 of the 95 pairings. We chose 10 items because, in our estimation, this would not be too cognitively taxing while still providing an attainable sample size.
Approximately 40 participants were shown each pair, a sample size at which averages typically stabilize in face rating tasks (Hehman et al., 2018). For each of the 10 pairs, participants rated the extent to which people could disagree on which item is better based on how related the pair of items is to threat, on the complexity of the items, on moral reasons, political reasons, religious reasons, and on whether the items are related to harm. This study design is somewhat similar to face research, where participants rate photographs of faces on traits like extraversion, threat, and trustworthiness (Ma et al., 2015; Walker & Vetter, 2016), or to stereotyping/prejudice research, where groups are rated on warmth, competence, ideology, status, and perceived choice (e.g., Brandt & Crawford, 2016; Cuddy et al., 2007; Koch et al., 2016). Afterwards, participants answered basic demographic questions (age, gender, and political ideology) and were debriefed about the purpose of the study (see supplementary materials for materials). We computed the means for each of the domains for each item pair. We removed missing data and used all of the available data to estimate the means. These means were added to the database containing the estimated ideological differences on the IAT and explicit preference measures.
In Rating Sample 2, a different sample of participants (N = 778; 386 men, 371 women, 3 reporting another gender identity, 18 with missing values; M age = 34.86, SD age = 12.80) were recruited from Prolific and informed about the nature of the study. This survey was similar to Rating Sample 1 (e.g., nationality and residence in the United States, approval rating of 90% or higher); however, participants responded to individual items rather than item pairings. For example, rather than reporting their attitudes toward Jews compared to Christians, participants reported their attitudes about Jews and Christians individually. For some individual items, we needed to adjust the item phrasing so that their meaning was clear. See Table S2 in the supplemental materials for our changes to the phrasing of items from Rating Sample 1. 7 Approximately 40 participants responded to each item. Participants read the following prompt: Imagine two people talking about an option (Option A). These two people may disagree about whether Option A is good for a variety of reasons. In this task, we are interested in the extent to which you think specific reasons could affect whether people disagree with one another (i.e., why one person likes Option A and the other dislikes it).
Participants were then shown a random sample of 10 of the 190 items and were asked to rate the extent to which the item is based on each of the six aforementioned domains (e.g., complexity, harm, etc.). After data collection, we realized that, due to an error in the randomization, all participants also rated the attitude object "Jews" after rating their 10 random attitude objects. This should not affect our results. Afterwards, participants answered basic demographic questions and were debriefed about the purpose of the study (see supplementary materials for materials). We computed the means for each of the domains for each of the individual items. We removed missing data and used all of the available data to estimate the means. These means were added to the database containing the estimated ideological differences on the explicit evaluation measures.
In both Rating Samples 1 and 2, we assessed the raters' political ideology to see if it was associated with the ratings (see also the results from the pilot study). We measured ideology on the same seven-point scale used in the AIID (−3 = Strongly liberal, +3 = Strongly conservative). To estimate the mean ratings of the attitude objects for conservative raters, we regressed the domain rating (e.g., perceived threat) on ideology with ideology centered on +2 on the ideology scale. By centering the scale here, the intercept of the regression equation is the estimated mean for raters who are conservative. To estimate the mean ratings of the attitude objects for liberal raters, we regressed the rating (e.g., perceived threat) on ideology with ideology centered on −2 on the ideology scale. Now the intercept of the regression equation is the estimated mean for raters who are liberal (see Brandt & Crawford, 2016 for a similar approach when rating groups).
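The centering trick described above can be illustrated with a small sketch on synthetic ratings (assuming, for the example, a simple linear rating-by-ideology trend):

```python
import numpy as np

def mean_rating_at(ideology, rating, center):
    """Regress rating on (ideology - center); the intercept is then the
    model-implied mean rating for raters at that point on the scale."""
    slope, intercept = np.polyfit(np.asarray(ideology, float) - center, rating, 1)
    return intercept

# synthetic example: perceived threat rises half a point per scale point
ideology = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
threat = 3.0 + 0.5 * ideology

conservative_mean = mean_rating_at(ideology, threat, +2)  # centered at +2
liberal_mean = mean_rating_at(ideology, threat, -2)       # centered at -2
```

Re-centering changes only the intercept, not the slope, so the same fitted line yields the estimated mean at any chosen point on the ideology scale.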

Power considerations
Although the AIID study includes thousands of participants and our own data include hundreds of participants, the effective sample size for our analyses is based on the number of item pairs and individual items. For analyses using the 95 item pairs, we have 95%, 90%, and 80% power to detect effect sizes (in r) of .36, .33, and .29, respectively. For analyses using the 190 individual items, we have 95%, 90%, and 80% power to detect effect sizes (in r) of .25, .23, and .20, respectively. These sensitivity analyses assume an alpha of .05 and were calculated using G*Power 3.1.9.2 (Faul et al., 2009).
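Sensitivity figures of this kind can be approximated in a few lines using the Fisher z transformation. This is an approximation to G*Power's exact noncentral-t computation, so values may differ from the reported ones by roughly .01:

```python
from math import sqrt, tanh
from statistics import NormalDist

def detectable_r(n, power, alpha=0.05):
    """Smallest correlation detectable with the given power in a
    two-sided test, via the Fisher z approximation: atanh(r) is
    approximately normal with standard error 1/sqrt(n - 3)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    z_power = nd.inv_cdf(power)
    return tanh((z_crit + z_power) / sqrt(n - 3))

# 95 item pairs and 190 individual items at 80% power
r_pairs = detectable_r(95, 0.80)    # close to the reported .29
r_items = detectable_r(190, 0.80)   # close to the reported .20
```

The approximation makes the trade-off explicit: doubling the number of items shrinks the smallest detectable correlation by roughly a factor of sqrt(2).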

Results
All of the estimated ideological differences for the three measures are in Figure 1.
A cursory look at Figure 1 reveals that the largest ideological differences across all of the measures are for political groups (e.g., Conservatives, Democrats), political figures (e.g., George Bush), and values (e.g., traditional values). It also appears that the ranking of the size of the ideological differences is similar across the three measures. We are able to directly compare this for ideological differences measured with the IAT and the explicit preference measure. Ideological differences across the stimuli were highly correlated for these two measures, r(93) = .93, p < .001 (not pre-registered).
The main goal of this research was to determine if the Elective Affinities Hypotheses and the Divergent Content Hypotheses help explain ideological differences in evaluations of attitude objects. The Elective Affinities Hypotheses propose that when an attitude is associated with threat (threat hypothesis) or complexity (complexity hypothesis), there will be a larger difference between liberals' and conservatives' attitudes. The Divergent Content Hypotheses propose that when an attitude is associated with morality (morality hypothesis), politics (political hypothesis), religion (religion hypothesis), or harm (harm hypothesis), there will be a larger difference between liberals' and conservatives' attitudes.

Note (Figure 1): The figure presents ideological differences in terms of r; however, the analyses in the text use the unstandardized ideological differences. The targets with the 10 largest and 10 smallest ideological differences are labeled. For the IAT and preference measures, labels were ordered so that the targets conservatives preferred are first. The evaluation measure color (red versus blue) indicates which ideological group preferred the item. The p < .005 criterion for determining "no difference" was not pre-registered.

Preregistered analyses: testing individual predictors
We tested the Elective Affinities Hypotheses and the Divergent Content Hypotheses by regressing the ideological differences estimated with the AIID dataset on the domain ratings. We did this separately for ideological differences on IAT scores, preference scores, and evaluation scores, and for each of the domains separately. See Table 1 for key results testing the hypotheses. See Figure 2 for scatterplots between the domains and ideological differences on the IAT, preference measure, and evaluation measure.
We found support for all of the hypotheses using all three measures of ideological differences. When disagreement over the attitude object pairs was due to threat, complexity, morality, politics, religion, or harm, ideological differences were greater.
For example, for every point higher on the measure of threat, the ideological difference on the IAT measure is approximately .013 larger (approximately 6% of the observed range of this measure), the ideological difference on the preference measure is approximately .08 larger (approximately 8% of the observed range of this measure), and the ideological difference on the evaluation measure is approximately .10 larger (approximately 9% of the observed range of this measure). 8 Similarly, for every point higher on the measure of politics, the ideological difference on the IAT measure is approximately .015 larger (approximately 7% of the observed range of this measure), the ideological difference on the preference measure is approximately .10 larger (approximately 10% of the observed range of this measure), and the ideological difference on the evaluation measure is approximately .13 larger (approximately 12% of the observed range of this measure).
We subjected these basic models to two robustness checks. First, we included item category as an alternative predictor. The models from Table 1 were re-run, but this time including the category of the attitude object (dummy coded). This helps rule out that the associations identified in Table 1 are due to the precise category that the attitude objects fall into (e.g., specific people vs. groups). The conclusions from these analyses are the same (see Table S3).
Second, we tested whether domains rated by liberals or by conservatives in Rating Samples 1 and 2 led to different conclusions. This was not the case for any of the complexity, morality, politics, religion, or harm findings, and for nearly all of the threat findings. It led to a different conclusion only when threat was associated with ideological differences on the IAT. When the domains are rated by liberals, the results are consistent with the analyses in Table 1; however, the relationship between threat and ideological differences on the IAT is non-significant when the domains are rated by conservatives (see Table S4 in supplemental materials). This difference appears because conservatives reported disagreement about several attitude objects as being based on threat, but liberals did not, and liberals reported disagreement about other attitude objects as being based on threat, but conservatives did not. For example, liberals rated disagreement over the "atheism-religion" attitude pair as more strongly associated with threat (M = 5.07) than did conservatives (M = 3.92). On the other hand, conservatives rated disagreement over the "Jews-Christians" attitude pair as more strongly associated with threat (M = 5.47) than did liberals (M = 4.57).9 Overall, however, whether raters were liberals or conservatives was inconsequential.
In sum, we found support for both the Elective Affinities Hypotheses and the Divergent Content Hypotheses. Attitudinal disagreement associated with threat, complexity, morality, politics, religion, or harm is more likely to be associated with differences in IAT scores, preference scores, and evaluation scores.

Preregistered analyses: testing the robustness of the associations
To test whether the domains remained significant predictors when taking all of the other domains into account, another set of models was run including all six domains in one model (see Table 2). Disagreement regarding attitude object pairs or individual items associated with the political domain was associated with ideological differences on all three measures when controlling for the other five domains. However, none of the other five domains was associated with ideological differences when controlling for the rest. These results suggest attitudes associated with the political domain are more strongly associated with differences between liberals' and conservatives' attitudes than attitudes associated with the other five domains.
We anticipated problems with multicollinearity in the models including all six domains. The VIFs of each domain for each test (i.e., IAT, preference, and evaluation) indicated that this was the case. For both threat and harm, VIFs were higher than 10 when predicting IAT and preference scores. The VIF for threat (but not harm) was higher than 10 when predicting the evaluation score (10 is an often-used cut-off; Hair et al., 1998). Our original plan was to use parallel analysis (Ruscio & Roche, 2012) to identify the number of principal components that make up our set of predictor variables. However, this analysis found just one component, which does not allow us to test the robustness of the different domains against one another.
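The logic behind the VIF cut-off can be shown in miniature. With exactly two predictors, the VIF for either one reduces to 1/(1 − r²), where r is their correlation; the general definition instead uses the R² from regressing each predictor on all of the others. The sketch below uses the threat-harm correlation of .96 reported in the multicollinearity analyses.

```python
# With exactly two predictors, the variance inflation factor reduces to
# VIF = 1 / (1 - r**2), where r is the correlation between them. (The
# general VIF uses the R**2 from regressing each predictor on all of
# the others; this two-predictor case is a simplified sketch.)

def vif_two_predictors(r):
    return 1.0 / (1.0 - r ** 2)

# The threat-harm correlation of .96 reported in the text already
# implies a VIF well past the often-used cut-off of 10.
print(round(vif_two_predictors(0.96), 1))  # 12.8
```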

Multicollinearity
To test the different domains against one another and address the multicollinearity, we examined the VIF scores. We found that the threat and harm domains had the highest VIF scores and were nearly perfectly correlated (r = .96 in Rating Sample 1, r = .94 in Rating Sample 2), suggesting they may be tapping into the same construct; they are the primary culprits of the multicollinearity. We therefore averaged the threat and harm ratings together and re-ran the models from Table 2, replacing the individual indicators of threat and harm with the averaged threat/harm item. The outcomes of these models can be found in Table 3. As with the model including all six domains separately (i.e., the model in Table 2), this model indicated that the political domain was again the only significant predictor of ideological differences when controlling for the other domains. Similar conclusions were drawn when using a Bayesian analysis (see supplemental materials).
After finding the political domain to be the only significant predictor of ideological disagreement, we conducted additional exploratory analyses to test the independent role of the other domains when the political domain is not included. Since there was high multicollinearity between harm and threat, we again averaged them together and entered them into these models as one domain (i.e., the models in Table 3, but without the political domain). For both ideological differences in IAT scores and preference ratings, when the four remaining domains were included, none of the domains predicted ideological disagreement. For ideological differences in evaluations, morality was the only significant predictor. See Table S5 for a complete table of these exploratory analyses.

Robust regression analyses
The scatterplots in Figure 2 suggest non-normal distributions and potential nonlinear relationships. Therefore, we re-ran all of the analyses in Tables 1 and 2 using a rank regression procedure estimated with the Rfit package in R (Kloke & McKean, 2012; Kloke & McKean, 2020). Across all of the re-estimated models, the conclusions are the same, with two exceptions. When regressing ideological differences in IAT and preference scores onto all domains, none of the domains is significant when using rank regression. When regressing ideological differences in evaluation scores onto all domains, both the morality and political domains were significant when using rank regression. See Table S6 for full analyses.
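As a rough illustration of why rank-based estimates resist outliers and nonlinearity, the sketch below rank-transforms both variables and fits OLS on the ranks. This is a deliberately simplified stand-in, not the Wilcoxon-score estimator that Rfit actually implements, and the data are invented.

```python
# Simplified stand-in for rank-based regression (NOT Rfit's
# Wilcoxon-score estimator): rank-transform x and y, then fit OLS on
# the ranks. A gross outlier barely moves the rank-based slope.

def ranks(v):
    """1-based ranks; ties are not handled (fine for distinct values)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    out = [0] * len(v)
    for r, i in enumerate(order, start=1):
        out[i] = r
    return out

def ols_slope(x, y):
    """Closed-form simple OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.1, 0.2, 0.3, 0.4, 9.0]   # last point is a gross outlier

raw_slope = ols_slope(x, y)                  # inflated to ~1.8 by the outlier
rank_slope = ols_slope(ranks(x), ranks(y))   # 1.0: ranks ignore magnitude
```

Because the rank transform discards magnitudes, the one extreme point cannot dominate the fit, which is the same spirit in which the rank regression robustness check operates.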
Table 3. Summary of unstandardized regression coefficients predicting ideological differences on the IAT, preference scores, and evaluation scores using all of the domains simultaneously, with threat and harm averaged as one predictor (due to their high multicollinearity).

Differences for liberal and conservative attitude objects
Some domains may only lead to ideological differences when liberals or conservatives tend to support the domain. To test this, we conducted an additional series of analyses to see if the ideological direction of the ideological differences interacted with the domain to predict the size of the ideological differences. Because ideological direction is essentially arbitrary for the IAT and preference measures (i.e., the direction of the effect only has meaning depending on the order of the paired stimuli), we tested this with the evaluation measure (where a positive effect means that conservatives preferred the stimuli). Overall, the combined threat/harm domain significantly interacted with the direction of the ideological difference, suggesting the effects of threat are stronger for attitudes that lean conservative than for attitudes that lean liberal. No other domain significantly interacted with ideological direction, and no interactions were significant when all domains were included in the same model. These results suggest that the effects are largely consistent across attitudes that liberals and conservatives prefer. Full analyses are in Table S7.
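The moderation test just described can be sketched as constructing a product term. All values below are hypothetical, and the actual models also include the other predictors.

```python
# Sketch of the moderation test: does the averaged threat/harm rating
# interact with the direction of the ideological difference? Direction
# is coded +1 when conservatives preferred the object and -1 when
# liberals did; the interaction column is the elementwise product.
# All values here are hypothetical.

threat_harm = [2.0, 4.5, 5.0, 1.5]  # averaged threat/harm ratings
direction = [1, -1, 1, -1]          # sign of the evaluation difference

interaction = [t * d for t, d in zip(threat_harm, direction)]
# A regression of the evaluation difference on threat_harm, direction,
# and interaction then tests whether the threat/harm slope differs for
# conservative- vs. liberal-leaning attitude objects.
```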

Discussion
We have two key findings. First, we find support for both the elective affinities and divergent content approaches; topics associated with threat, complexity, morality, politics, religion, and harm are characterized by greater ideological disagreement than topics not associated with these domains. Second, we found that the political domain was the strongest predictor of ideological disagreement.
The current research attempted to map the types of attitudes where we are most and least likely to observe ideological differences. We used two approaches, the elective affinities approach and the divergent content approach, to explore which domains characterize attitudes where there is disagreement between liberals and conservatives. The elective affinities approach, which suggests people prefer views that match their dispositions (Hirsh et al., 2010; Jost et al., 2009), posits that attitude objects characterized by threat (i.e., threat hypothesis) or complexity (i.e., complexity hypothesis) are more likely to be associated with liberal-conservative differences, compared to attitude objects not associated with threat or complexity. The divergent content approach, which suggests that the key difference between groups is their underlying values (Haidt & Graham, 2007), posits that attitude objects characterized by morality (i.e., moral hypothesis), politics (i.e., political hypothesis), religion (i.e., religion hypothesis), or harm (i.e., harm hypothesis) are more likely to be associated with liberal-conservative disagreement, compared to attitude objects not associated with morality, politics, religion, or harm.
We tested these hypotheses by estimating ideological differences on implicit (IAT) and explicit (preference and evaluation) measures of attitudes and analyzing the extent to which attitude objects characterized by the proposed domains are more likely to be associated with ideological disagreement. When focusing on each domain individually, we found support for both the elective affinities and divergent content approaches. Results suggested attitudes associated with threat, complexity, morality, politics, religion, and harm were also attitude objects liberals and conservatives tended to disagree on. This was the case for both reaction time (IAT) and self-report (preference or evaluation) measures, and for both joint (IAT or preference) and individual (evaluation) judgment contexts. Further, these findings were consistent when controlling for attitude object category, were in nearly all cases unaffected by domain rater ideology, and were consistent across robust regression analyses.
Analyses also indicated that the political domain was the most robust predictor of ideological disagreement. When controlling for the other domains, the political domain was the only domain that still predicted the size of ideological differences, suggesting that ideological differences are substantially reduced outside of the political realm. For example, below the midpoint of political ratings, the maximum ideological difference is never larger than a small effect (in r, maximum difference on IAT = .20, preference = .22, evaluations = .18), and the means are quite small (in r, mean difference on IAT = .02, preference = .03, evaluations = .03).10 This is most consistent with the political hypothesis from the divergent content approach (e.g., Brandt & Crawford, in press; Graham et al., 2011).

Primacy of politics?
There are multiple ways to interpret the primacy-of-politics result. First, this result might suggest that the results supporting the elective affinities approach, as well as the harm, religion, and morality hypotheses of the divergent content approach, are not good evidence, because domains such as threat and complexity, as well as morality and religion, are conflated with political differences. This is consistent with arguments that links between political ideology, personality, and motivations may be due to content overlap rather than personality or motivational differences per se (Malka et al., 2017). This possibility is represented in the causal structure in Figure S1 (in supplemental materials).11 All of the domains have the possibility of directly causing ideological differences, but due to shared variance with politics, the political variable is the only significant predictor of ideological differences.
A second possibility is that factors like threat and complexity are the very topics that humans are likely to make into political, moral, or religious issues. When times are threatening or particularly complex, turning issues into political, moral, or religious ones may give people a sense of certainty or a method for interpreting the world that they otherwise would not have. If this is the case, then politics may act more like a mediator of the effects of threat and complexity. This possibility is represented in the causal structure in Figure S2 (in supplemental materials). Notably, in exploratory analyses where we excluded politics as a predictor, threat and complexity were still not significant predictors. Instead, either no domain was a significant predictor, or morality was. This hints that morality may also be more proximal than threat and complexity.
The distinction between these two possibilities is theoretically important. The first possibility, represented in Figure S1, would suggest that the elective affinities approach, at least for our research questions, is not viable: the findings that seem to support it are merely due to the confounds between threat, complexity, and politics. The same conclusion could also be drawn for the moral, religious, and harm-related versions of the divergent content approach. However, the second possibility, represented in Figure S2, would suggest that the elective affinities approach, at least for our research questions, is viable: although disagreement over the political domain is the strongest predictor of ideological disagreement, the other domains are potentially still causally potent as precursors to the political domain. Unfortunately, it is not possible to tease apart these possibilities with the current data, as the data are cross-sectional. Teasing apart these two possibilities is a necessary task for future research. Ideally, such tests would track ideological differences in large samples, across a great diversity of attitudes, over time, to study changes and stability in ideological differences.

Strength, limitations, and future directions
These findings are just one step in mapping the attitudes where we are most and least likely to anticipate ideological differences. We studied the 190 attitude objects from the AIID study. However, we expect that our findings will likely generalize to other attitudes, especially in the American context. We would also expect a similar pattern of results in other countries with polarized political systems (cf. Pew Research Center, 2017; Vachudova, 2019; Wendler, 2014). We are less certain that these results would replicate in political systems with less polarization, where political differences are, presumably, less important.
In contrast to many studies of ideological differences that focus on differences in one particular attitude, the attitudes in the AIID study cover many topics. These topics range from abstract principles (e.g., realism vs. idealism) to people (e.g., celebrities such as Denzel Washington vs. Tom Cruise) and regions of the world (e.g., Japan vs. the United States). The diversity of attitude objects should make our findings more comparable to the large swath of attitudes in the everyday world. Thus, these findings improve our ability to predict the locations of ideological differences and similarities in untested fields. One challenge with using a large number of attitudes is that not every attitude conforms to the model. As one example, some of the attitudes that scored highly in the political domain nonetheless had low levels of ideological differences. One reason for this is that there were attitudes that were political (e.g., preferences for Bill Clinton vs. Hillary Clinton; evaluations of politicians) but that did not map onto differences between liberals and conservatives. This suggests that more precise predictions can be made by considering the political dimension the attitude maps onto.
Despite the large sample of people and attitudes, and replication across multiple measures, this study had several limitations. The domains used were based on previously discussed perspectives; however, other domains that we did not include may also play a key role. For example, we did not test the domain of disgust, but attitude objects associated with disgust may be associated with ideological differences, as previous research highlights ideological differences in what is viewed as disgusting (Elad-Strenger et al., 2019; Inbar et al., 2012). Moreover, as previously mentioned, the findings are cross-sectional. Although we discussed the results in causal terms for illustrative purposes, the current data are consistent with a number of different causal models. One way to test this will be to examine if and how ideological differences emerge as an attitude is imbued with different properties. Things that were once not moralized, politicized, or threatening can become linked to our moral or political sensibilities, or come to be viewed as highly threatening. For example, at one point attitudes about the NFL may not have seemed overly political; however, once NFL athletes began kneeling during the national anthem and President Trump began commenting on these actions (Klein, 2018), the league may have become more politicized. We would expect ideological differences in opinions on the NFL to track this politicization.
Finally, this study is primarily based on self-reports and survey methodology. In the AIID study, participants reported which attitude objects they prefer and the extent to which they positively evaluate attitude objects. In Rating Samples 1 and 2, collected many years after the original AIID study, participants self-reported the extent to which they think each domain could explain other people's disagreement about the attitude objects. Thus, Rating Samples 1 and 2 measure what people think could cause others to disagree about attitude objects, rather than what actually causes people to disagree about them. This may limit the validity of the study, as participants in Rating Samples 1 and 2 may not be aware of the true factors that drive disagreement over attitude objects.
Furthermore, while implicit associations were assessed, and similar results were found across measures, some of our findings are based on self-reports. Tracking ideological disagreement in terms of preference and evaluation can only occur in contexts where individuals are able and willing to report their true political attitudes, meaning that further work on mapping ideological differences with these self-report methodologies relies on participants' willingness to disclose their attitudes.
Behavioral manipulations could also be included in future work, such as having participants actually choose between helping one group (e.g., gay people) vs. another (e.g., straight people).This would help map attitudes where ideological disagreement is (or is not) present to aid in our understanding of the behavioral consequences of differences between liberals and conservatives.However, the consistency in results between the reaction time and self-report measures gives us some confidence that these findings are robust to measurement type.
The current research attempted to map where we are most and least likely to see ideological disagreement. In this paper, we have two key findings. First, we find support for both the elective affinities and divergent content approaches; topics associated with threat, complexity, morality, politics, religion, and harm are characterized by greater ideological disagreement than topics not associated with these domains. Second, we found that the political domain was the strongest predictor of ideological disagreement. These findings provide a systematic explanation for when and why liberals and conservatives disagree, and can aid in predicting whether future events will be heavily contested or similarly perceived by liberals and conservatives.

Notes
order to err on the side of including more data. Participants who participated in Rating Sample 1 were not able to participate in Rating Sample 2.
8. The range for ideological differences on the IAT was [0.0003, 0.22], for preferences it was [0.004, 0.98], and for evaluations it was [0.0002, 1.13].
9. In general, liberal and conservative threat ratings were strongly correlated (r = .79 in Rating Sample 1, r = .74 in Rating Sample 2; not preregistered).
10. When we look at ideological differences above the midpoint, the maximum ideological differences in r are: IAT = .66, preference = .85, evaluations = .65. The mean ideological differences in r are: IAT = .07, preference = .12, evaluations = .10. In all cases, the effects appear to be approximately 3 to 4 times larger above the midpoint than below it. The comparisons above and below the midpoint were not pre-registered.
11. Multiple causal structures are possible. These are merely for illustrative purposes rather than an exhaustive search and a clear identification.

Figure 1 .
Figure 1. All of the estimated ideological differences as measured with the IAT, preferences, and evaluations.

Figure 2 .
Figure 2. Scatterplots of the association between the six domains and ideological differences as estimated with the IAT, preferences, and evaluations. The solid trend line is the linear association with a shaded 95% confidence interval. The dashed trend line is a LOESS smoothed regression line. Note: One data point per panel was randomly selected for labeling.

Table 1 .
Summary of unstandardized regression coefficients. Regressions test the extent to which each domain predicts ideological differences on the IAT, preference scores, and evaluation scores.

Table 2 .
Summary of unstandardized regression coefficients predicting ideological differences on the IAT, preference scores, and evaluation scores using all of the domains simultaneously.