Gender, Political Resources, and Expressions of Democratic Evaluations

ABSTRACT Are certain citizens more likely to feel entitled to evaluate the quality of democratic institutions in their country? Previous studies show that women are more likely than men to answer “don’t know” when asked to express their views about the state of democracy. This article analyzes how political knowledge and internal efficacy contribute to such a gender gap in item non-response rates in research on democratic evaluations. Based on an original survey in the United States (N = 1,093), we do not find that respondents’ political knowledge is associated with a higher probability of expressing their views about democracy. However, we do find that this is the case for internal efficacy and that these effects are largely driven by women. These findings suggest that gendered social roles about who is entitled or expected to express their views about democracy lie behind the gender gap in item non-response rates.


Introduction
Are members of certain social groups more likely to feel entitled or even expected to evaluate the quality of democratic institutions in their country? Which political resources affect citizens' willingness to express their views about democratic institutions?
The erosion of democratic attitudes documented across many established democracies in recent years has sparked a renewed interest in citizens' attitudes toward democracy (Chu et al. 2020; Foa and Mounk 2016, 2017). As a result, major survey projects have included batteries of questions in which they ask citizens to evaluate the performance of specific democratic institutions in their country (e.g., Carey et al. 2019; Ferrin and Kriesi 2016). While these studies have provided very important insights into how citizens think about democracy, questions on democratic attitudes, especially those that focus on the performance of specific democratic institutions, often require respondents to grapple with technically difficult issues and thus impose significant cognitive costs on survey participants. These kinds of questions can be vulnerable to fairly high item non-response rates, with participants preferring to answer "don't know" or "not sure" rather than provide a substantive answer (Berinsky 2004; Carmines and Stimson 1980; Krosnick 1991; Krosnick et al. 2002).
Indeed, even though polls and surveys are typically seen as low-cost channels for citizens to express their political opinions, previous research has documented item non-response rates of varying magnitudes across various surveys on democratic attitudes (Goenaga and Hansen 2022). Those studies also show that there are substantial differences across social groups in item non-response rates when respondents are asked to evaluate democratic institutions. Importantly, these differences overlap with broader gender inequalities in political participation, as women are more likely than men to answer "don't know."
Understanding the drivers of these differences in citizens' willingness to express their views about the performance of democratic institutions thus matters for both methodological and normative reasons.
On the methodological side, these item non-response rates may be a source of biased estimates if members of certain groups are more reticent to express their views and those groups also tend to have different views about democratic performance. This is the case for gender differences in democratic evaluations, as women are both more likely to be critical of democratic institutions and more likely to answer "don't know" when asked to provide such evaluations (Anderson and Guillory 1997; Gibson, Duch, and Tedin 1992; Hansen and Goenaga 2021; Karp and Banducci 2008; Logan and Bratton 2006; Quaranta 2018; Stadelmann-Steffen and Vatter 2012). Hence, the presence of such biases in research on democratic attitudes can lead to inaccurate conclusions about the population's views about democracy, and it can make researchers overlook pockets of discontent with democratic institutions among certain segments of the population.
On the normative side, these item non-response gaps are also worrisome in relation to democratic principles of equality. They point to differences in citizens' self-perceptions of their role as democratic agents, as they suggest that certain citizens are more likely to feel entitled and even expected to express their evaluations of democracy while others are not. Moreover, since these differences overlap with broader asymmetries in political participation, they call into question the function that polls and surveys can perform as channels for the expression of political preferences.
In this article, we analyze the drivers of item non-response rates for evaluative questions about democracy and how those factors contribute to differences between women and men in their willingness to express their views. We focus in particular on two kinds of political resources that have been associated with item non-response rates in other contexts: political knowledge and internal political efficacy (Atkeson and Rapoport 2003; Berinsky 2004; Ferrin, Fraile, and García-Albacete 2018; Laurison 2015; Lizotte and Sidman 2009; Mondak and Anderson 2004). To this end, we carried out an original survey (N = 1,093) on an online panel of respondents in the United States in which we measured participants' levels of political knowledge and efficacy. We then asked them to evaluate six different aspects of democracy in the country, giving them the option to answer "don't know" if they did not feel confident about their answers.
Our analyses show non-response rates to our evaluative questions about democracy ranging between 3.19% and 6.83%. We also find that women were more likely to answer "don't know" than men for all evaluation questions. A series of logistic and rare event logistic regressions indicates that respondents' political knowledge is not associated with a higher probability of expressing an evaluation of democratic institutions. Conversely, we consistently find that respondents' internal political efficacy (i.e., their confidence in their own understanding of politics) is associated with a higher probability of expressing their views about democratic institutions across all the different questions asked. Finally, we find that this relationship between internal political efficacy and expressions of democratic evaluations is largely driven by women.
These results indicate that women not only need to know more about politics but also need to feel more confident about that knowledge before they are willing to express their judgments of how well democratic institutions perform in the country. These findings suggest that gendered social roles about who is entitled or expected to express their views about democracy lie behind the gender gap in item non-response rates, making it more likely that men will express their views and that women will keep theirs to themselves (Fraile and de Miguel Moyer 2021; Gidengil, Giles, and Thomas 2008; Thomas 2012).
As noted above, these findings have important normative implications. The presence of gender gaps in item non-response rates means that the same viewpoints that are underrepresented in other channels of political expression are also likely to be underrepresented in polls and surveys about the state of democracy. This is worrisome for ideals of democratic representation, but it is also a problem for our assessments of democratic discontent among the population. If women (and, more generally, other politically marginalized groups) are less likely to express their views about democracy, and if their views differ from those of the rest of the population, we may overlook critical attitudes toward certain parts of the democratic system among certain demographics. On the bright side, our results suggest that improvements in political efficacy can help reduce these gender gaps in item non-response rates, since such improvements are particularly important for women's willingness to express their views about democratic institutions. Unfortunately, the persistent gender roles that underlie differences in internal efficacy between men and women may also explain why internal efficacy is particularly important for women, as these roles raise the bar for the political resources that women need to possess to express their views about the state of democracy.
In what follows, we first discuss research on gender and item non-response biases in political polls and surveys. We then describe five hypotheses about the relationship between gender, political resources, and item non-response rates in evaluative questions about democracy. The third section describes the data. The fourth section presents the results of our statistical analyses, while the final section concludes with some reflections about the normative importance of our findings and poses some questions for further research.

Gender and item non-response biases
Several studies in public opinion research have documented consistent gender gaps in item non-response rates in polls and surveys on a wide range of political issues (Atkeson and Rapoport 2003; Shapiro and Mahajan 1986). Across countries and topics, women are more likely than men to answer "don't know" if given the option when asked about their political views, preferences, or knowledge (Dolan and Hansen 2020; Ferrin, Fraile, and García-Albacete 2018; Kenski 2000; Lizotte and Sidman 2009; Luskin and Bullock 2011; Miller and Orr 2008; Mondak 2001; Mondak and Anderson 2004; Mondak and Canache 2004; Sturgis, Allum, and Smith 2008). The presence of such gender gaps is worrisome for ideals of democratic participation because it means that the viewpoints of certain members of society (especially those who tend to participate less in politics through other channels) are likely to be underrepresented in polls and surveys. Moreover, if men and women harbor different views on political issues, gender gaps in item response rates will produce biased estimates of the population's true values. This is certainly the case for citizens' views about democracy, as several studies show that women tend to have more critical assessments of most aspects of democratic politics than men (Anderson and Guillory 1997; Gibson, Duch, and Tedin 1992; Hansen and Goenaga 2021; Karp and Banducci 2008; Logan and Bratton 2006; Quaranta 2018; Stadelmann-Steffen and Vatter 2012). The presence of such a gender gap (and other similar differences across social groups) in item non-response rates could then obscure discontent with certain democratic institutions among segments of the population.
In a series of seminal publications, Adam Berinsky identified two different psychological drivers of item non-response in survey research (Berinsky 1999, 2004). According to Berinsky's framework, one driver of non-responses is the social complexity of the questions asked (i.e., questions on controversial issues in which expressing certain views would violate prevalent social norms). In those instances, respondents who hold unorthodox or polemical views are more likely to perceive those questions as socially costly and will try to avoid expressing their unpopular views (Berinsky 2004, 31-32). If women are more likely to hold polemical views on certain issues, or are more likely to avoid expressing those views when they do, social desirability biases would explain gender gaps in item response rates.
Second, survey questions that are cognitively demanding (i.e., questions that "require careful consideration of technically difficult choices") impose high psychological costs on respondents (Berinsky 2004, 10). However, these costs are likely to vary depending on respondents' political resources, such as how much they know about politics (political knowledge) or their self-confidence in their understanding of politics (internal political efficacy). In other words, cognitively demanding questions are likely to impose lower costs on respondents with higher levels of political knowledge or efficacy, and vice versa. If these political resources systematically differ between men and women, these cognitive costs would help explain the gender gap in item non-response rates (Atkeson and Rapoport 2003, 503-7).
Several studies have found differences between women and men in political knowledge and internal efficacy. On the one hand, a large literature has documented persistent gender gaps in political knowledge across different stages in life and across countries (e.g., Dow 2009; Fortin-Rittberger 2016; Fraile 2014; Fraile and Gomez 2017; Jerit and Barabas 2017; Kenski 2000; Miller 2019; Sanbonmatsu 2003; Wolak and McDevitt 2011). Having said that, some research has argued that the gender gap in political knowledge is at least partially an artifact of the measurement instruments used by researchers. Studies show that this gap tends to decrease and even disappear depending on the format of the knowledge batteries (e.g., Ferrin, Fraile, and García-Albacete 2018; Jerit and Barabas 2017; Lizotte and Sidman 2009; Mondak and Anderson 2004; Prior 2014) and the type of political issues asked about (e.g., Barabas et al. 2014; Delli Carpini and Keeter 1996; Dolan 2011; Dolan and Hansen 2020; Ferrin, Fraile, and García-Albacete 2018; Kraft and Dolan 2023).
When it comes to internal political efficacy, previous research has also recorded a gender gap across contemporary democracies (Fraile and de Miguel Moyer 2021; Gidengil, Giles, and Thomas 2008; Thomas 2012). This gap is often attributed to gendered differences in political socialization, both in the family and in the political context during adulthood (Fraile and de Miguel Moyer 2021; Wolak 2018). Gender roles are thus associated with differences in self-confidence that make men more likely to have positive views of their subjective political competence and hence more likely to express their views and to participate in politics (Schneider et al. 2016; Wolak 2020).
Given the persistence of these differences between women and men in political resources, it is reasonable to expect political knowledge and internal efficacy to contribute to the gender gap in item non-response rates in polls and surveys. Based on this intuition, Goenaga and Hansen (2022) have recently applied Berinsky's framework to investigate gender gaps in item non-response rates in survey research on citizens' views about democracy. Questions that ask respondents to evaluate the quality or performance of democratic institutions in their country can be cognitively costly, since they demand some knowledge about the political system and some reflection about how well it performs. Even if there is no "correct" answer to such questions, they often refer to very specific aspects of politics, which should make respondents with low political resources more likely to answer "don't know." Goenaga and Hansen (2022) evaluate whether this is the case using data from the European Social Survey and the Bright Line Watch project in the United States. They find consistent evidence that women are more likely than men to answer "don't know" when asked about the performance of a wide range of institutions, even in consolidated, post-materialist democracies with fairly high levels of gender equality on other issues. Moreover, focusing on the United States, they find that these differences are wider among respondents with low levels of political knowledge and narrow (but nonetheless remain) among more politically sophisticated respondents. Based on these results, they argue that differences between men and women in political knowledge only partially explain the gender gap in item non-response rates for democratic evaluations. Lacking a direct measure of internal efficacy, Goenaga and Hansen interpret the fact that these gender differences remain at every level of knowledge, and are especially wide among the least knowledgeable respondents, as an indication that differences in internal efficacy might also be driving the gender gap in item non-response rates.
In the following sections, we present the results of an original survey designed to evaluate how differences in political knowledge, internal efficacy, and social desirability contribute to the gender gap in item non-response rates in evaluations of democratic institutions.

Hypotheses
To structure our analysis, we first examine a set of hypotheses about which kinds of political resources are associated with item non-response rates in evaluative questions about democracy. We then explore how these factors may explain the gender gap in item non-response rates.
Based on the literature discussed in the previous section, we expect political resources to reduce the cognitive costs of evaluative questions about democracy. Both political knowledge and internal efficacy should increase the probability that respondents provide an answer when asked to evaluate democratic institutions. Hence:
Hypothesis 1. Political knowledge is associated with a higher probability of providing an evaluation of democratic institutions.
Hypothesis 2. Internal efficacy is associated with a higher probability of providing an evaluation of democratic institutions.
Having examined the factors associated with item non-response rates in evaluative questions about democracy, we move on to analyze the gender gap in those item non-response rates. If women tend to face higher cognitive or social costs when asked to evaluate democracy, we expect women to be more likely to refrain from answering such questions. Therefore:
Hypothesis 3. Women are more likely than men to answer "don't know" when asked to evaluate democratic institutions.
If gender differences in political resources mediate the impact of the cognitive costs of these evaluative questions, the size of the gender gap should be smaller when controlling for political knowledge and internal efficacy. However, it may not be merely differences between genders in levels of internal efficacy and political knowledge that drive the results. As Goenaga and Hansen (2022) argue, it could also be the case that the effects of these political resources on citizens' willingness to express their views about democracy are moderated by gendered differences in how men and women perceive themselves as democratic subjects. Questions that ask respondents to evaluate democratic institutions in their country may evoke internalized stereotypes about which citizens are entitled to voice political opinions and judgments. If gendered constructions of democratic citizenship are present, women would need to know more about politics and feel more confident about their grasp of politics to feel entitled to express their views on democratic performance. Conversely, men would be more likely to feel entitled (or possibly even expected) to evaluate democratic performance even when they do not know much about politics or feel unsure about their own understanding of it. In other words, women would need higher levels of internal efficacy and political knowledge than men to be willing to express their views about democratic performance. Therefore:
Hypothesis 4. Internal efficacy should have a greater impact on the probability of expressing evaluations of democracy for women than for men.
Hypothesis 5. Political knowledge should have a greater impact on the probability of expressing evaluations of democracy for women than for men.

Data and method
To evaluate these hypotheses, we designed an original survey that asked respondents to answer six questions related to different aspects of democratic performance in the United States. The survey was designed in Qualtrics and implemented on a panel of US residents using Amazon's Mechanical Turk (MTurk) on April 7, 2022. Respondents were paid to answer the questionnaire, even if they did not answer all the questions. On average, respondents took 6 minutes and 15 seconds to complete the survey. While not necessarily the gold standard for public opinion research, MTurk offers an inexpensive and effective recruitment tool as long as researchers are aware of how their sample may differ from the population and how that could potentially affect their findings. Several studies have confirmed the reliability of data from MTurk surveys, showing that respondents do not differ from national population-based surveys in unmeasurable ways (Levay, Freese, and Druckman 2016), that the samples are more demographically diverse than other online panels or convenience samples (e.g., college students) (Buhrmester, Kwang, and Gosling 2011), and that online participants are as attentive as offline respondents recruited for research surveys (Kennedy et al. 2020).
In the end, 1,093 participants completed the survey. In terms of composition, 56% of the sample self-identified as men, while 44% self-identified as women. The sample contains slightly more men and Democrats than does the population of the US. That being said, on all other demographic variables used in the analysis the sample mirrors the population. In the Online Appendix, we provide descriptive statistics on the composition of the sample, as well as additional models using poststratification survey weights to account for the oversampling of men and Democrats. The main results hold.
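As a rough illustration of this kind of poststratification adjustment, the sketch below reweights survey cells by the ratio of their population share to their sample share. The gender-by-party cells and all counts and shares here are hypothetical, not the study's actual figures.

```python
# Sketch of poststratification weighting: each respondent receives the
# weight of their cell, defined as population share / sample share.
# Over-represented cells get weights below 1, under-represented above 1.
# All cells, counts, and shares below are illustrative assumptions.

def poststrat_weights(sample_counts, population_shares):
    """Return one weight per cell: population share / sample share."""
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (count / n)
            for cell, count in sample_counts.items()}

# Hypothetical gender x party cells in a sample of 1,093 that
# over-represents men and Democrats relative to the population.
sample_counts = {("man", "dem"): 380, ("man", "other"): 230,
                 ("woman", "dem"): 280, ("woman", "other"): 203}
population_shares = {("man", "dem"): 0.24, ("man", "other"): 0.25,
                     ("woman", "dem"): 0.27, ("woman", "other"): 0.24}

weights = poststrat_weights(sample_counts, population_shares)
```

Weighted estimates then multiply each respondent's outcome by their cell weight, so that the weighted sample matches the population's cell composition by construction.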
To measure citizens' willingness to evaluate democratic institutions, respondents were given the following prompt, which explicitly encouraged them to answer "don't know" if they did not feel confident about their answers: "We would like to ask you a few questions evaluating democracy in the country. To what extent do you think each of these statements applies to the US today? (0 = not at all to 10 = a great degree). If you are not confident in your answer, please indicate 'don't know.'" We designed these statements to refer to three general aspects of liberal democracy: political equality, legal equality, and freedom of expression. For each of these aspects of democracy, we asked a more general question about how the democratic system performed overall and a more specific question that asked respondents to evaluate particular institutions. This strategy allowed us to ask questions with different degrees of cognitive complexity, as the more specific questions demanded more precise knowledge about the political system and more reflection about what that meant for democracy. To assess whether our results were driven by social desirability biases, we intentionally formulated the two questions about freedom of speech using more polemical language, since the issue of democratic justifications for limits to freedom of speech has been a highly prominent and controversial topic in American politics over the past couple of years (Bejan 2020; Norris 2021). For some of the other questions, we borrowed the formulations used by the Bright Line Watch project (Carey et al. 2019). Table 1 presents the six statements presented to respondents.
Table 1 also presents the share of respondents who provided an evaluation for each question, the share who instead chose "don't know," and the difference between men and women in those item response rates. Whereas the majority of respondents opted to provide evaluations, the share of respondents answering "don't know" ranged between 3.19% (for "the law is enforced equally for all persons") and 6.83% (for "the geographic boundaries of electoral districts do not systematically advantage any particular party"). Note that these item non-response rates may in fact be lower than for other surveys (and thus more conservative estimates) due to the nature of MTurk respondents. Since respondents on this platform expect payment for answering the survey, they may opt for other "satisficing" strategies, such as guessing or picking a random answer, rather than answering "don't know" (Krosnick et al. 2002). Our non-response rates are higher than those observed in Wave 6 of the ESS (which ranged between 0.8% and 3.2%) but lower than those in the BLW public surveys (which ranged between 7.9% and 19%) (Goenaga and Hansen 2022, 7). Intuitively, we find higher non-response rates for the more specific and cognitively demanding questions than for the more general questions. However, we do not find higher non-response rates for the more controversial questions related to freedom of speech. Finally, the last column presents the gender gap in response rates. For all six questions, we find that women were less likely than men to respond, with differences ranging between 0.75 and 2.63 percentage points. We do not find, however, any clear pattern whereby these gender gaps were wider for the more cognitively or socially complex questions.
To measure internal efficacy, we used a common survey instrument that asked respondents to indicate on a scale from 0 to 10: "How confident are you that you understand the important political issues facing the country?" In our sample, we did not find significant differences between men and women in their levels of internal efficacy: the mean score among men was 6.904, while for women it was 7.000. It is worth pointing out that the total variance in our measure of efficacy is fairly small and the distribution is skewed toward higher values, as very few respondents reported extremely low levels of internal efficacy.
To measure political knowledge, we followed the strategy recently proposed by Kraft and Dolan (2023) to produce a balanced battery of questions. As noted above, previous research has shown that differences in political knowledge, especially between men and women, can be an artifact of the format and content of the questions used to measure this trait. To produce gender-balanced knowledge batteries, Kraft and Dolan (2023) recommend including questions that are relevant for citizen competence in a given political context, that refer to current issues rather than static aspects of the political system (Barabas et al. 2014), and that are "gender-relevant," that is, questions referring to issues that are more directly relevant to women's lives (Delli Carpini and Keeter 1996). To this end, we produced a battery of three factual questions about US politics that combined static institutional aspects of the rules of the game, the people running politics, and a current policy initiative in a realm that, according to previous studies, women are more likely to be well-informed about: childcare (Barabas et al. 2014; Kraft and Dolan 2023; Stolle and Gidengil 2010). Most importantly, these three questions are relevant indicators of political competence related to the functioning of the democratic institutions in which we are interested. We did not offer respondents the option to answer "don't know" on these questions, in order to reduce biases resulting from gender differences in the propensity to guess (Ferrin, Fraile, and García-Albacete 2018; Mondak and Anderson 2004). Table 2 presents the questions asked and the distribution of correct and incorrect answers. In our statistical analyses, we generate an index of political knowledge based on a factor score of the three questions.
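To make the index construction concrete, the sketch below approximates a one-factor score with the first principal component of the item correlation matrix, computed on synthetic 0/1 answers driven by a common latent ability. This is only an illustration of the idea of scoring respondents on a single latent dimension; the actual analysis may use a different factor estimator.

```python
# Illustrative one-dimensional knowledge index from three binary items,
# approximated by the first principal component of the item correlation
# matrix. The answer patterns below are synthetic, not the survey data.
import math
import random

def first_pc_scores(items):
    """items: list of [x1, x2, x3] rows of 0/1 answers.
    Returns each respondent's score on the leading principal component."""
    n, k = len(items), len(items[0])
    means = [sum(r[j] for r in items) / n for j in range(k)]
    sds = [math.sqrt(sum((r[j] - means[j]) ** 2 for r in items) / n)
           for j in range(k)]
    z = [[(r[j] - means[j]) / sds[j] for j in range(k)] for r in items]
    # Correlation matrix of the standardized items.
    corr = [[sum(z[i][a] * z[i][b] for i in range(n)) / n
             for b in range(k)] for a in range(k)]
    # Power iteration for the leading eigenvector (the item loadings).
    v = [1.0] * k
    for _ in range(200):
        w = [sum(corr[a][b] * v[b] for b in range(k)) for a in range(k)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Score = projection of each respondent's answers onto the loadings.
    return [sum(z[i][j] * v[j] for j in range(k)) for i in range(n)]

random.seed(1)
# Synthetic respondents: a latent ability drives all three answers.
data = []
for _ in range(500):
    ability = random.random()
    data.append([int(random.random() < 0.3 + 0.5 * ability) for _ in range(3)])
scores = first_pc_scores(data)
```

Respondents who answer all three items correctly end up with higher index values than those who answer none correctly, which is the only property the downstream models rely on.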
We present additional information on the distribution of respondents' answers in the Online Appendix. Compared to conventional batteries of political knowledge in the United States, which tend to have correct response rates over 70%, our battery helps us discriminate among respondents at higher levels of knowledge without being excessively demanding for participants (Kraft and Dolan 2023, 7). Our gender-balanced battery does not produce statistically significant gender differences on the first question, about the length of senators' terms in office. Conversely, in line with critics of conventional knowledge batteries, we find that women are in fact more likely to answer correctly the "gender-relevant" questions, which asked about a high-ranking woman in politics (Ketanji Brown Jackson as President Biden's Supreme Court nominee) and about an initiative related to child care (the repeal of the Child Tax Credit). These differences are statistically significant at the p < 0.05 level.
Finally, the survey also included questions measuring socio-demographic characteristics (age, gender, race, education, and household income), as well as questions on party preferences and political ideology.
In the analyses reported below, we present the output of logistic regression models that estimate the probability that a respondent offered a substantive response to the evaluative questions about democracy. Hence, the dependent variable in all the models is a dummy variable that takes a value of 1 if the respondent offered a response on the 0 to 10 scale and 0 if they answered "don't know." Given that "don't know" answers are relatively rare in the survey (between 3.19% and 6.83% of responses), we also estimated rare event logit models as a robustness check (reported in the Appendix). Even though these item non-response rates may seem low, they can lead to substantive differences in our estimates of citizens' assessments of democracy at the aggregate level. To illustrate this, we present in the Online Appendix the simulated mean values of each of the evaluative questions if the non-responses had instead been selections of the extreme values, either 0 or 10. The results indicate that the non-responses could meaningfully shift the aggregate evaluations.
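A minimal sketch of this setup on synthetic data: code the dependent variable as 1 for a substantive answer and 0 for "don't know," fit a one-predictor logistic regression by Newton-Raphson, and bound the aggregate mean under the extreme-value imputation described above. The coefficients, efficacy scores, and the `dk_bounds` helper are all illustrative assumptions, not the paper's estimates or code.

```python
# Sketch of the modeling setup with synthetic data: a logit of the
# "gave a substantive answer" dummy on an efficacy score, plus the
# bounding exercise for the aggregate mean if every "don't know" had
# instead been a 0 or a 10. All data and parameters are illustrative.
import math
import random

def fit_logit(x, y, iters=25):
    """Newton-Raphson for a one-predictor logistic regression.
    Returns (intercept, slope)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            p = 1 / (1 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p              # gradient of the log-likelihood
            g1 += (yi - p) * xi
            w = p * (1 - p)           # observation weight in the Hessian
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01   # invert the 2x2 Hessian directly
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

def dk_bounds(mean_answered, dk_share, top=10):
    """Bounds on the full-sample mean if every "don't know" had instead
    been the bottom (0) or the top (default 10) of the evaluation scale."""
    low = (1 - dk_share) * mean_answered
    high = low + dk_share * top
    return low, high

random.seed(7)
efficacy = [random.uniform(0, 10) for _ in range(1000)]
# Simulated world: higher efficacy raises the odds of a substantive answer.
answered = [int(random.random() < 1 / (1 + math.exp(-(-1.0 + 0.5 * e))))
            for e in efficacy]
b0, b1 = fit_logit(efficacy, answered)  # b1 recovers a positive effect
```

The paper's models additionally include socio-demographic, partisanship, and ideology controls, which a multi-predictor version of the same Newton-Raphson update would handle with a full Hessian.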

Results
To analyze the first set of hypotheses, related to the factors associated with non-response rates, Table 3 presents the output of logistic regression models that estimate the probability that respondents offered a substantive evaluation in response to each of the questions about democratic performance. Our main independent variables of interest are political knowledge (a factor score of the three questions described above) and internal political efficacy. All the models include socio-demographic characteristics (age, sex, race, education, and income), as well as partisanship and political ideology as controls.
We do not find evidence that political knowledge is associated with item response rates for our set of evaluative questions. This goes against the expectations of Hypothesis 1 and against one of the results reported by Goenaga and Hansen (2022), who do find that political knowledge was associated with higher response rates to the questions about democratic institutions asked in the BLW surveys. This may be due to the use of a different knowledge battery (the BLW surveys use the conventional questions about the length of a senator's term, the length of a House member's term, and the number of senators per state). However, if that were the case, we would expect our knowledge battery to be more likely to show a relationship between answering those questions correctly and the willingness to answer our questions about democratic institutions, since our knowledge questions ask about pieces of information that are directly related to those aspects of the democratic system. The fact that we do not find such an association suggests that political knowledge may not be a major driver of non-response rates.
Conversely, in line with Hypothesis 2, we find that internal political efficacy is associated with a higher likelihood of providing an evaluation for all six questions about democratic institutions. The relationship is particularly strong for the general questions (Models 1 to 3), but it is still statistically significant at the p < 0.1 level for the questions on electoral districting (Model 4) and judicial sentencing (Model 5) and at the p < 0.05 level for the question about fighting words being protected (Model 6). To illustrate the size of these differences, taking 95% confidence bounds for reference, the predicted probability of offering an answer to the first evaluative question (votes count the same) ranges between 0.71 and 0.94 at low values of political efficacy, while it ranges between 0.97 and 0.99 at very high values of efficacy. The large confidence bounds for our estimates at low levels of efficacy are due to having few respondents reporting very low scores. However, even if we take the most conservative estimates within the bounds, respondents with high levels of political efficacy would be 3 percentage points more likely to express their views on this question, and those differences could be as large as 28 percentage points. We find very similar values for the other evaluative questions in the sample.
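Predicted probabilities like those quoted here come from passing fitted logit coefficients through the inverse logit function. The sketch below uses hypothetical coefficients, not our estimates, to show the mapping from an efficacy score to the probability of offering an answer.

```python
# Converting logit coefficients into predicted probabilities:
# p = 1 / (1 + exp(-(b0 + b1 * efficacy))).
# The coefficients b0 = 1.0 and b1 = 0.35 are illustrative only.
import math

def predicted_prob(b0, b1, x):
    """Inverse logit of the linear predictor b0 + b1 * x."""
    return 1 / (1 + math.exp(-(b0 + b1 * x)))

p_low = predicted_prob(1.0, 0.35, 1)    # respondent with low efficacy
p_high = predicted_prob(1.0, 0.35, 10)  # respondent with high efficacy
# p_low is roughly 0.79 and p_high roughly 0.99, mirroring the pattern
# (not the exact values) of the estimates discussed in the text.
```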
In line with Hypothesis 3, the descriptive statistics presented in Table 2 showed gender gaps in item non-response rates for all six evaluative questions, and these gaps were larger (over 2 percentage points) for the questions on equal law enforcement, the ability to express unpopular views, and disparities in sentencing. The models presented in Table 3 reinforce this finding, showing that women are less likely to provide answers to these three questions even after introducing socio-demographic controls, as well as controls for partisanship, ideology, political knowledge, and political efficacy. While we still find negative coefficients for gender in the models analyzing the other evaluative questions, those coefficients do not meet conventional levels of statistical significance.
We do not find any significant results for the other variables. Neither partisanship nor ideology seems to be associated with a greater probability of providing an evaluation, nor is this the case for education. Only income and age appear to be associated with a higher probability of offering an evaluation for some of the questions.
In the Appendix, we present several robustness checks. First, we present the results from separate logistic regressions in which we control either for efficacy or for political knowledge, in case our findings are driven by collider biases resulting from including knowledge and efficacy in the same model. We again find that efficacy, but not political knowledge, is consistently associated with a greater probability of expressing evaluations. Second, we run our main analyses using rare event logit models, given that the share of respondents answering "don't know" to each of the evaluative questions is relatively small (between 3.2% and 6.8% of the sample). In both sets of analyses the results are essentially the same: internal efficacy, but not political knowledge, is associated with a higher probability of offering evaluations of democratic institutions.

We now turn to Hypotheses 4 and 5, which stated that the effects of political knowledge and internal efficacy on the probability of evaluating democratic institutions should be stronger for women than for men. To examine this claim, we report analyses of samples split by gender.
Table 4 presents the output of the logit models for men. In contrast to the models with the full sample, we find statistically significant coefficients for political efficacy only for the first three (general) questions, and in two cases only at the p < 0.1 level (law enforced equally and ability to express unpopular views). We again find no association between political knowledge and the probability of providing an answer to any of the questions. Interestingly, while we do not find a strong correlation between income and the probability of offering an evaluation in the full sample, we find that income is strongly associated with men's (but not women's) willingness to express their views. This finding echoes previous studies that found a closer link between affluence and subjective political competence among men than among women (Thomas 2012).
Conversely, in Table 5, which focuses only on women, internal efficacy is associated with a higher probability of providing evaluations for all six questions. These results indicate that the full-sample findings in Table 3 were largely driven by women being more likely to respond at higher levels of political efficacy. In Table 5 we also find positive coefficients for political knowledge, but these are statistically significant only in Model 1 (votes count the same) at the p < 0.05 level and in Model 2 (law enforced equally) at the p < 0.1 level, suggesting that political knowledge may have only a modest effect on women's willingness to express their views of democratic performance if it does not also come with a greater sense of internal efficacy. Overall, the results from Tables 4 and 5 support Hypothesis 4 but not Hypothesis 5: gender moderates the effect of internal efficacy on the likelihood of expressing democratic evaluations, but this is not the case for political knowledge.

Conclusions
This article has analyzed how political resources shape the gender gap in item non-response rates for questions asking citizens to evaluate democracy. While we do not find that political knowledge is a major predictor of the probability that respondents will express their views about democracy, we find that internal political efficacy is strongly associated with a higher probability of offering a substantive answer to such questions. Moreover, we observe that this is especially (if not exclusively) the case for women. Regardless of the cognitive and social complexity of the questions asked, we find consistent evidence that higher levels of political efficacy increase the likelihood that women respondents will express their evaluations of the quality of democratic institutions in their country.
Our results align with previous research indicating that, even more than actual knowledge of politics, it is self-confidence in one's own understanding of political processes that influences women's willingness to express their views about democracy (Goenaga and Hansen 2022). While our data do not allow us to establish why this is the case, these findings suggest that gendered stereotypes of what a democratic citizen looks like are reproduced and internalized by citizens in contemporary democracies. Due to these gendered stereotypes, women would need not only to know more about politics but also to feel more confident about that knowledge before expressing their views about the quality of democratic institutions. Conversely, those same stereotypes would lead men to feel not only entitled but even expected to evaluate democratic institutions, even when they themselves would acknowledge a poor understanding of democratic processes. It could also be the case, of course, that other personality traits, unrelated to processes of political socialization, contribute to the gender gap in item non-response rates.

These results have normative and theoretical implications. First, they show that the gender gap observed in other channels of political participation is also present in polls and surveys that ask about the state of democracy, a relatively low-cost channel for citizens to express their political views. This poses a problem for ideals of democratic representation. At the same time, if women or other politically marginalized groups are less likely to express their views about democracy, and if those views are more critical of the performance of democratic institutions than those of the rest of the population, we might overlook brewing discontent with certain parts of the democratic system among members of those groups.
Finally, our results raise the question of which groups are more likely to perceive themselves as full democratic citizens who can and should express their views about democratic institutions regardless of their political resources. This opens questions for future research about where these perceptions come from and to what extent they are tied to personality traits or to gendered differences in political socialization. If the latter, future studies may examine how socialization in the parental home or descriptive representation in the political system can contribute to transforming the gendered construction of democratic citizenship.

Table 1. Expression of Democratic Evaluations - Descriptive Statistics.

Table 3. Models Predicting Expressions of Democratic Evaluations - Full Models.

Table 4. Models Predicting Expressions of Democratic Evaluations - Men Sample.

Table 5. Models Predicting Expressions of Democratic Evaluations - Women Sample.