Interested in Diversity

Using survey evidence from the Netherlands, we explore the factors that influence news readers' attitudes toward news personalization. We show that the value of personalization depends on commonly overlooked factors, such as concerns about a shared news sphere and the depth and diversity of recommendations. However, these expectations are not universal. Younger, less educated users have little exposure to non-personalized news, and they also show little concern about diverse news recommendations. We discuss the policy implications of our findings. We show that quality news organizations that pursue reader loyalty and trust have a strong incentive to implement personalization algorithms that help them achieve these goals by taking into account the attitudes of diversity-expecting users and by providing high-quality recommendations. Diversity-valuing news readers are thus well placed to be served by diversity-enhancing recommender algorithms. However, some users are in danger of being left out of this positive feedback loop. We make specific policy suggestions on how to address diversity-reducing feedback loops and how to encourage the development of diversity-enhancing ones.


Introduction
Algorithmic agents 1 that personalize our digital information flows are ubiquitous. 2 Personalized recommender systems are often portrayed as necessary to manage the digital information overload and to enable user autonomy (Gauch et al. 2007; Oulasvirta and Blom 2007; Friedman and Nissenbaum 1997). Others, however, see personalization as a threat (Borgesius et al. 2016). An influential stream of literature argues that algorithmic agents may manipulate our worldview because they put people and communities in "filter bubbles" and "echo chambers" (Pariser 2011; Sunstein 2009; Yeung 2017). The current media law and policy debate (Borgesius et al. 2016; Yeung 2017) about echo chambers (Sunstein 2002) or "Daily Me's" (Negroponte 1995) is concerned that personalized recommendations are opaque in how they filter and recommend information to individual users, and that this filtering may lead to a number of undesirable consequences. One such potential consequence is the emergence of filter bubbles, which are described in more detail later. Another expected negative effect is that such algorithmic agents could reinforce processes of self-selection. News users have been shown to prefer sources that reaffirm their ideology and worldview in order to avert cognitive stress (Sears and Freedman 1967; Stroud 2011). Recommendation algorithms can potentially catalyze this self-selection process by taking away users' choice to avoid or confront dissonant content. Such a situation could conflict with the fundamental values of our information society, including access to diverse information, 3 a shared public sphere (Habermas 1989; Castells 2008), the ability to make free and informed decisions in private, 4 and the ability to partake in political decision-making. 5 Despite the growing body of literature on the topic, there are many unanswered questions around the processes and dynamics behind personalized recommendations. 6
We have little insight into users' attitudes, concerns, and expectations regarding personalized news selection. Our theories about how algorithms interpret user intentions, attitudes, and interests, and how algorithms respond to user signals, are even more patchy. We understand little of the interaction between end-users and algorithms.
Despite the widespread criticism and the general lack of conclusive supporting empirical evidence, Pariser's filter bubble argument has had a major impact on how we imagine the interaction between users and recommendation algorithms (Borgesius et al. 2016; Dutton et al. 2017; Haim, Graefe, and Brosius 2017; High Level Group on Fake News and Online Disinformation 2018; Koene et al. 2015; Quattrociocchi, Scala, and Sunstein 2016; Vīķe-Freiberga et al. 2013). Next to influencing the academic discourse around news personalization, the filter bubble argument has also found its way into national and international policy discussions on future media regulation. 7 Worries about filter bubbles are typically based on two fundamental assumptions: people are diversity averse, and algorithms reduce diversity. Together, users and algorithms create a spiral, in which users are one-dimensional and prefer their information diet to be filtered so that it reflects their interests, and in which this filtering reinforces the individual's one-dimensionality. The goal of this article is to pave the way toward a better understanding of the users of personalized news services, and of their concerns and expectations. Perhaps some users are more at risk of ending up in filter bubbles. We show that a better understanding of the user matters if we want to understand how users shape personalization algorithms and how these algorithms shape their users.
To reach our goal, we combine survey-based evidence with theory to provide an alternative view of the algorithm-user interaction process. In Section 2, we review the premises of the filter bubble discussion. Using evidence from the Netherlands, in Section 3 we show that, contrary to the assumptions commonly made by the followers of the filter bubble argument, the diversity of recommendations is an important factor in citizens' evaluations of news recommenders. In Section 4, we also use our data to demonstrate that not all users are equal, and that the current debate about algorithms and filter bubbles fails to acknowledge that the audience is heterogeneous and has diverse personal preferences and propensities, not just in terms of their interests and ideologies, but also in terms of their attitudes toward news personalization and expectations vis-a-vis news services. We then use theoretical arguments in Section 5 to show that there are ample incentives for the business entities that operate algorithms to
respond to diversity-seeking people. In Section 6, we conclude that, under the right conditions, the individual user's expectation of diverse recommendations could be the starting point for a diversity-enhancing feedback loop. We also warn that diversity expectation is not a universal user trait, and we provide empirical data on societal groups that might still be in danger of being locked in filter bubbles because this diversity-enhancing feedback loop does not kick in.

Premises of the Filter Bubble Discourse
Personalization has been the subject of intense debate since the publication of Eli Pariser's (2011) influential book on filter bubbles. Pariser argues that personalizing algorithms lock people in interest-based filter bubbles. This argument rests on a few somewhat oversimplified assumptions, such as:
1. Individual users do not value diversity and are not interested in complex societal issues (Pariser 2011, 51).
2. Algorithmic agents do not recognize and serve complex user profiles, and they disregard preferences such as users' desire for diverse or in-depth news (Pariser 2011, 54).
3. The sole goal of algorithms is to identify narrow personal interests and provide people with relevant information that fits their profile (Pariser 2011, 54-56). 8
4. Personalization is already ubiquitous, soon there will be only personalized media (Pariser 2011, 33), and personalization will be invisible to people (Pariser 2011, 10).
Based on these assumptions, the filter bubble discourse warns of several effects, such as a reduction in the diversity of information and opinions that people are exposed to; the formation of echo chambers; the subsequent polarization and fragmentation of the public debate; and the disengagement of certain social groups from the political process. Underlying the filter bubble discourse is thus yet another, more implicit assumption, namely that diversity, and exposure to diverse news, 9 is inherently a good thing. It is worth noting that this assumption is not self-evident. Diversity can compete with other, no less important public or economic values, such as the need for reducing complexity (Neuberger and Lobigs 2010), the personal autonomy of the audience, and the provision of information of personal importance to the audience. Also, research shows that diversity policies, and exposure to dissimilar perspectives more specifically, can at times backfire and increase polarization rather than reduce it (Dilliplane 2011; Wojcieszak 2011). And yet, evidence also suggests that diversity in the media can create opportunities for users to encounter different opinions and reflect on their own viewpoints (Kwon, Moon, and Stefanone 2015), enhance social and cultural inclusion (Huckfeldt, Johnson, and Sprague 2002), foster tolerance (Mutz 2002), increase familiarity with views oppositional to one's own (Price, Cappella, and Nir 2002), and lead people to perceive public opinion more accurately (Wojcieszak and Rojas 2011). This is why, for the purpose of this article, we follow the European Court of Human Rights in its conclusion that there can be no democracy without diversity. 10

The filter bubble theory has been subject to much criticism. Elsewhere (Borgesius et al. 2016), we have argued that empirical evidence adds substantial qualifications to Pariser's assumptions. Research suggests that many other factors shape the diversity of someone's information diet. Algorithmic agents are only one, and probably not the most important, of those factors.
Another qualification concerns the type of personalized recommendations. Fears about selective exposure and filter bubbles presume that personalized recommendations give people exactly the kind of information they are interested in. Some algorithmic recommenders indeed strongly focus on short-term goals and simplistic metrics, such as the number of clicks and likes. But this is only part of the story. Algorithmic recommendations could also provide people with a more diverse choice, more depth, or less popular content (Munson and Resnick 2010; Jannach et al. 2010). 11 Recommender technology is maturing, as are the goals of news media, social network sites, search engines, and other parties (Newman et al. 2017). News organizations increasingly employ recommenders to offer their users better services and more choice, unlock long-tail content, and increase long-term engagement (Bodó 2018; New York Times 2014; BBC 2017; Newman et al. 2017, 2018). In all probability, these market-driven developments lead to more complex and sophisticated recommender systems, which can better profile audiences and can better respond to and guide their actual preferences. Algorithms and audiences do not develop in isolation. The current state of algorithmic personalization shapes users' future attitudes and expectations, which, in turn, algorithms are intended to measure and better serve. Thus, it is important to better understand the forces that shape the development of users' relationships with personalized recommendations. Much of the academic literature concentrates on external factors that may influence access and exposure to diverse information. Systemic factors, such as the level of polarization of media and politics in a country (Bobok 2016; Boczkowski and Mitchelstein 2013; Garrett 2013; Iyengar and Westwood 2015; Mancini 2013), have been argued to set the baseline for exposure diversity.
In addition, studies have shown that the nature of information has a significant effect on information avoidance, particularly if the information is counter-attitudinal (Hart et al. 2009; Knobloch-Westerwick and Kleinman 2012; Lee 2016; Messing and Westwood 2014; O'Hara and Stevens 2015; Sears and Freedman 1967; Valentino et al. 2009; Wojcieszak 2010). Furthermore, the choice of alternative media sources increases the likelihood of exposure to diverse information. Online media offer a seemingly unlimited variety of sources and increase access, if not exposure, to diverse content (Napoli 2011). Online social networks, which have become an important news source, have also been shown to expose people to more diverse information than traditional media or physical networks of friends, family, colleagues, and neighbors through the diversity of weak social media ties, and the accidental exposure they facilitate (An, Quercia, and Crowcroft 2013; Bakshy, Messing, and Adamic 2015; Barberá et al. 2015; Bozdag 2015; Duggan and Smith 2016; Flaxman and Rao 2016; Fletcher and Nielsen 2017; Webster 2010).
Empirical data on users' attitudes and expectations regarding algorithmic agents are scarce. Most studies on personalization have focused on targeted advertising; only a few have looked into media personalization (e.g., McDonald and Cranor 2010;Turow et al. 2009). These studies concentrated on privacy concerns, and not so much on user
expectations regarding the content or quality of algorithmic recommendations. An exception is the study by Sørensen (2013), who investigated user attitudes toward self-selected personalization in Denmark. A couple of studies have researched how users interact with personalized recommendations, and how users value the output of recommendations. In an experiment on news recommendation diversity, Munson and Resnick (2010) found that 25% of their respondents sought diversity and valued counter-attitudinal recommendations. Duggan and Smith (2016) found that more than a third of their American respondents find political discussions with people they disagree with "interesting and informative," and that 83% of their respondents ignore content they find disagreeable. Yet, the same study also found that almost 40% of the respondents took active steps to curate their information environment, and had at least once removed counter-attitudinal, offensive, or annoying political content, or unfollowed people who posted it.
In this context, the filter bubble theory is useful not necessarily because it gives an accurate description of the impact of algorithmic personalization on our information diet, but because it helped to identify the domains where we need better theories and evidence to adequately reconstruct the societal impact of algorithmic personalization. For instance, it is unclear how users interact with personalized news recommendations, especially if the recommendations contain counter-attitudinal items. Also, while much of the research on profiling and personalization focuses on users' attitudes toward potential conflicts with their privacy (McDonald and Cranor 2010; Turow et al. 2009), research looking into other factors, such as diversity, is scarce. This is why we set out to answer the following questions: Who are the users of personalized recommendations, what is their attitude to diversity, and how great is their propensity to selective exposure? What incentives do news producers have to detect and respond to users' attitudes? Does policy have a role in the (algorithmically mediated) interaction of users and news producers?
In the following section, we present the findings of a survey in the Netherlands about users' expectations regarding news personalization. We first analyze factors of user acceptance of news personalization. More specifically, we investigate the relationship between attitudes toward diversity, privacy, efficacy, and a shared public sphere, and the acceptance of news personalization. Next, we use latent class analysis to investigate whether these relationships hold for all user groups or whether there are specific user groups that are more vulnerable to filter bubbles. In the second step, we make an effort to lay out how organizations that deploy algorithmic agents may respond to these expectations. A better understanding of users' attitudes toward news personalization, and of its impact on the diversity of the recommendations they receive, is relevant in multiple domains. First, it can serve as a basis for a better-informed and more mature academic debate about the potential negative or positive democratic impact of news personalization. Second, understanding users' attitudes toward personalization and diversity is also critical for policymakers. For example, if we find evidence that people do not care about the diversity of recommendations, and at the same time are less likely to be exposed to a diverse media offer (e.g., because they rely primarily on
a personalized information offer, e.g., on social media), this could be a signal to policymakers that there is a target group that is not likely to benefit from diversity-enhancing policies, and may indeed be a group that is more likely to be in danger of one-sided information and all the possible consequences thereof (radicalization, polarization, etc.).

User expectations regarding personalization
We conducted a cross-sectional survey to explore what factors influence the desirability of personalized news services. We collected data from a representative sample of Dutch adults (n = 1,556) through computer-assisted web interviewing (CAWI) in the period October 5-November 14, 2015. The survey was administered by the Dutch polling company CentERdata, and the sample was drawn from the Dutch academic household panel, the LISS panel. 12
The Netherlands is a particularly useful case to study for two reasons. First, the technological infrastructure relevant to news personalization is very advanced: There is almost universal access to high-speed internet, and in recent years Dutch media companies have benefited from a steadily growing GDP. Second, because the Dutch journalistic culture is characterized by "freedom of speech, plurality and self-regulation and has a strong tradition concerning ethic codes and codes of conduct" (Paapst and Mulder 2017), it is a good example of the democratic corporatist model (Hallin and Mancini 2004). It can be expected that the Dutch news audience, which is used to independent news from diverse sources, has similar expectations of news personalization. It should be noted that choosing the Netherlands as a case limits the generalizability of the results to countries with similar characteristics. However, we believe that the relationships we identify in this study are likely to be the results of processes of attitude formation toward news personalization that are also likely to occur under different circumstances.
We defined the desirability of personalization as a cumulative score of three survey items. 13 We addressed the filtering function of personalization with "I find it useful if a news website leaves out news that is not relevant for me." We measured the perceived user need for such filtering with "I find it annoying if a news site shows news that is not important to me." And we surveyed the usefulness of the recommendation function with "I find it useful if a news website highlights news that is especially important to me." We defined four areas in relation to personalization that we wished to further explore. These areas roughly follow the assumptions of the filter bubble theory and the literature on the effects of online communication on diversity exposure:
- The use of different news channels
- Expectations regarding personalized news services
- Attitudes toward a shared public sphere
- Privacy concerns
The use of different news channels, including broadcast, print, and personalized and un-personalized online channels, may be important for multiple reasons. First, many news channels are not personalized; their extensive use may suggest that
personalization is far from being a ubiquitous phenomenon. Second, the parallel use of multiple news channels (source diversity) may be a signal of a demand for diversity. Third, previous experience with personalized and un-personalized news channels may influence the desirability of personalization. We measured exposure to news via the self-reported use of a) news websites (M: 3.03, SD: 2.76), b) news apps (M: 1.96, SD: 2.73), c) main evening news broadcast (M: 4.89, SD: 2.43), d) political information programs (M: 3.27, SD: 2.42), and e) social media (M: 3.32, SD: 3.02). The measure was the number of days in a typical week on which the respondent used each channel (0-7).
In addition to the frequency of social media use, we also surveyed the value of personalized social media as a news source. The item was "Social media are a good way to access mass media news" (M: 4.17, SD: 1.84), again measured on a 7-point scale, with higher values indicating stronger agreement.
We directly surveyed users' expectations regarding news personalization in two dimensions. We measured the expected impact of personalization a) on the diversity of recommended news items and b) on the depth of those news items. The questions were, respectively, "If a news website could account for my interests, the news I would get would have fewer or more topics" (M: 3.67, SD: 1.69) and "If a news website could account for my interests, the news I would get would have less or more depth" (M: 4.20, SD: 1.55), both measured on a 7-point scale, with higher values indicating an expectation of more topics or more depth.
We surveyed whether users are concerned about the negative impact of news personalization on the public sphere with two statements: "There are news and current affairs that everybody should know about" (M: 5.52, SD: 1.52) and "Everybody should have access to more or less the same news baseline" (M: 5.44, SD: 1.68). Both were measured on a 7-point scale, with higher values indicating stronger agreement. We used these last two measures to see whether users have expectations regarding the quality (diversity, depth, societal relevance) of recommendations that recommenders could detect and take into account.
Since personalization requires personal data collection, we surveyed concerns about privacy in the context of news consumption, and in the more general context of commercial advertising, with the following questions: "How acceptable is it that websites collect information to personalize content based on a) your clicks on political websites (M: 2.29, SD: 1.78), b) your clicks on ads (M: 2.34, SD 1.78)?" Each question was measured on a 7-point scale, with higher values indicating fewer concerns.
We were interested in the relationship between news personalization and political efficacy for two reasons. First, people who are not interested in politics may value personalization, because it could help them to filter out political news. Second, the filter bubble theory predicts that people who rely heavily on personalization might have reduced political efficacy because personalization could reduce the amount of societally relevant information in their news feed, even if they do not actively filter out political information. We constructed a scale to study efficacy with three items, all measured on a 7-point scale: "I know more than most people about what is going on in politics in the Netherlands," "Sometimes politics seems so complicated that people like me cannot understand what is going on," and "I have a good idea about the most important problems in my country" (Cronbach's alpha: .67, M: 4.04, SD 1.33).
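The reliability of a multi-item scale such as the efficacy scale above is commonly summarized with Cronbach's alpha, which compares the sum of the item variances with the variance of the respondents' total scores. The following is a minimal sketch using hypothetical 7-point scores, not the survey data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for k item-score columns measured on the
    same n respondents; items is a list of k equal-length lists."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sum
    item_var = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical scores on three 7-point efficacy items (not the survey data).
i1 = [5, 4, 6, 2, 3, 5]
i2 = [4, 4, 5, 3, 2, 6]
i3 = [6, 3, 6, 2, 4, 5]
alpha = cronbach_alpha([i1, i2, i3])
```

Alpha approaches 1 as the items increasingly co-vary; the value of .67 reported for the efficacy scale indicates moderate internal consistency.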

We constructed an OLS linear regression model to test the effects of news exposure, expectations regarding the outcome of personalization, and fears of fragmentation on the desirability of news personalization, controlling for age, gender, and education. Table 1 presents the results of the regression analysis. The model fit was adequate (Adjusted R²: 0.152, F: 12.982, p < .0001). The high unexplained variance points to a limitation of our research, namely that the respondents might have been unfamiliar with news personalization and its opportunities and threats. Yet, the model allows us to formulate some observations for theoretical considerations and future empirical testing.
The model suggests that users' expectations regarding the output of news recommenders have the strongest positive effect on the desirability of personalization. Respondents value personalization more if they expect that personalizers will deliver them more diverse news (Beta: .107, SE: .028). In other words, users whose diversity expectations are one scale point higher are predicted to score 0.1 scale points higher on acceptance of news personalization. The relationship is highly significant: it is more than 99% certain that the relationship that exists in the sample also exists in the population.
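The coefficient interpretation used here (one scale point higher in the predictor, a coefficient-sized step higher in the prediction) follows directly from the linearity of the model. A minimal one-predictor least-squares sketch with synthetic scores, not the survey data:

```python
def ols_fit(x, y):
    """Simple one-predictor least-squares fit; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical diversity-expectation scores (x) and acceptance scores (y).
x = [1, 2, 3, 4, 5, 6, 7]
y = [3.0, 3.1, 3.3, 3.35, 3.5, 3.55, 3.7]
a, b = ols_fit(x, y)

def predict(xi):
    return a + b * xi

# Moving one scale point up in x raises the prediction by exactly b.
delta = predict(5) - predict(4)
```

Because the model is linear, the difference between the predictions at x and x + 1 is exactly the slope, which is what licenses the "one step up in expectations, 0.1 steps up in acceptance" reading.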
We found a similarly strong effect of the acceptance of social media as a news platform (Beta: 0.121; SE: 0.024). However, the direction of causality here is unclear: people who have already had a positive experience with personalization on social media platforms may consequently evaluate personalization positively, or people who have a positive attitude toward personalization may more easily accept social media as a news source. We believe that social media experience informs attitudes toward personalization, rather than the other way around, since most users will have first experienced news personalization on Facebook and other social media.

The concern about a shared news baseline was statistically significant in the regression model, although the effect is small (Beta: -.057, SE: .027). It is 97% certain that the relationship found in the sample also exists in the population. The result is intuitive: the higher the level of appreciation of universal news access, the lower the level of appreciation of news personalization.
Interestingly, privacy concerns were not significant in our model, implying that the differences in user attitudes toward news personalization cannot be statistically related to differences in concerns about privacy.
Finally, the regression model confirmed that people with less education or less political efficacy place a higher value on news personalization. These findings seem to be contrary to the other findings, which associate the desirability of personalization with diversity and depth. Would less educated, less politically efficacious people also value personalization for its ability to deliver diverse news? If so, personalization could enable emancipation. To answer this question, we added an interaction effect of expected impact on news diversity and efficacy to the model.
The interaction term was marginally significant (Beta: 0.04, SE: 0.02). It is 93% certain that the relationship found also exists in the population. According to this finding, for people with high efficacy the expectation of diversity plays a significant role in the desirability of personalization. A person who feels very confident about politics values personalization much more if she expects it to deliver more diverse news. The assessment of personalization by someone with low efficacy, on the other hand, does not depend on diversity.
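What the interaction term captures can be made concrete with a little arithmetic: in a model with an interaction, the marginal effect of diversity expectations is no longer a single slope but depends on efficacy. The coefficients below are hypothetical, chosen only to illustrate the mechanism, and are not the Table 1 estimates:

```python
# Hypothetical coefficients for a model with an interaction term:
# acceptance = b0 + b_div*div + b_eff*eff + b_int*(div*eff)
B0, B_DIV, B_EFF, B_INT = 2.0, 0.02, -0.10, 0.04

def predicted_acceptance(div, eff):
    """Predicted acceptance of personalization for given diversity
    expectation (div) and political efficacy (eff) scores."""
    return B0 + B_DIV * div + B_EFF * eff + B_INT * div * eff

def diversity_effect(eff):
    """Marginal effect of one extra point of diversity expectation,
    which now varies with efficacy: d(acceptance)/d(div)."""
    return B_DIV + B_INT * eff

low = diversity_effect(eff=1)   # slope for a low-efficacy respondent
high = diversity_effect(eff=7)  # slope for a high-efficacy respondent
```

With a positive interaction coefficient, the diversity slope is small for low-efficacy respondents and several times larger for high-efficacy respondents, which is the pattern described in the text.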
Taken together, these results suggest that users do not want personalization at all costs. The value of news personalization depends on whether it leaves a minimal shared news sphere intact while providing people with a diverse and in-depth news diet. The expectation that personalization will prevent the maintenance of a common news baseline has a negative impact on the desirability of news personalization. The expectation that personalization leads to less news diversity or depth also has a negative impact, at least among more politically efficacious people.
These findings paint a more optimistic picture of the user than the filter bubble theory. The desirability of news personalization increases as people expect these services to deliver more diverse and in-depth news. As we discuss later, this is a useful starting point for understanding the relation between expectations and diverse news recommendations. On the other hand, these diversity expectations are not universal. Less efficacious people value personalization regardless of diversity. A lack of diversity expectations is not problematic if people use both un-personalized and personalized news channels (Borgesius et al. 2016). But if personalized news replaces, rather than complements, print and broadcast sources with human editors, the lack of diversity expectations might lead to undesirable effects. In the following section, we further explore whether this is a real threat in the Netherlands.

Endangered users in the algorithmic recommendation landscape
In this section, we identify people who might end up in filter bubbles because they do not consume diverse news. In the previous section, we identified two factors that could lead to reduced diversity: over-reliance on personalized news sources at the cost of more traditional, un-personalized news sources, and a lack of expectations
regarding the diversity and depth of algorithmically recommended news. We used a latent class analysis (LCA), with the exposure-to-news variables as input for the model, 14 to classify the population according to news consumption practices. The news exposure variables measured both use frequency and news source diversity. We treated source diversity as a proxy for actual individual demand for diversity. With the LCA, we explored whether there are groups in the Netherlands that use less diverse information sources or demonstrate an over-reliance on personalized news sources. Table 2 shows the fit values for the different models. We chose a four-class model, which had only a marginally worse fit than the best-fitting model but suggested four well-defined groups with quite distinct news consumption patterns. 15

Table 3 summarizes the statistics for the four classes. Traditional news seekers (36%) use TV, radio, and newspapers extensively, but use very few digital news sources. They are the oldest and the least educated. News omnivores (13%) make extensive use of all available news channels, including mobile apps and social media. They have the highest net income, are the most educated, and have the highest political efficacy. Moderate news users (30%) also use most news channels, though less frequently than the news omnivores; they are younger and slightly less educated than the omnivores. Finally, the social media users (22%) use very few traditional news channels, but make extensive use of social media. They are the youngest group, with the lowest income, the second lowest level of education, and the lowest political efficacy.
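Choosing the number of classes in an LCA, as in Table 2, typically involves comparing information criteria such as the BIC across candidate models, trading log-likelihood off against model size. The log-likelihoods and parameter counts below are purely illustrative, not the values from Table 2:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: lower is better."""
    return -2 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical fit values for 2- to 5-class latent class models
# on n = 1556 respondents (illustrative numbers only).
n = 1556
candidates = {
    2: bic(-14250.0, 17, n),
    3: bic(-14100.0, 26, n),
    4: bic(-14010.0, 35, n),
    5: bic(-13990.0, 44, n),
}
best = min(candidates, key=candidates.get)  # class count with lowest BIC
```

Note that the penalty term grows with each added class, so a model with a slightly better raw fit can still lose the comparison; researchers may also, as in the text, prefer a marginally worse-fitting model when its classes are more interpretable.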
In addition, we included the cluster means for the variables used in the previous regression analysis and noted where there was a statistically significant difference in the means, using the social media users 16 as the reference category. 17

The LCA offers a different perspective on the relationship between news consumers, personalized and non-personalized news media, and news diversity. The results reflect three generations: older broadcast media users, middle-aged "digital immigrants" (Prensky 2001), and young "digital natives." The two older generations have no trouble accessing diverse news: older people prefer non-personalized broadcast and print media, and digital immigrants, spanning the news omnivores and moderate news users, use both personalized and non-personalized media. Combining personalized and non-personalized use has two effects: these groups have a window into the world outside of any potential filter bubble, and their use of diverse sources may signal diversity expectations that are also present vis-a-vis personalized environments. The youngest generation, however, over-relies on social media for news. They have the lowest level of exposure to traditional media, and only the traditional news seekers
consume fewer online news sources than the social media users group. This group relies more heavily on personalized social media for news than any other group, even though they use social media no more than the news omnivores and about as much as the moderate news users. The social media users have the lowest level of expectation regarding diversity and the lowest level of appreciation of a shared public sphere, even though they appreciate personalization and accept social media as a news platform as much as the other groups do. In addition, the social media users group is the least politically efficacious.
Taking all these findings together, the social media users match most closely the users envisioned by the filter bubble argument, who have little exposure to non-personalized news and do not expect much diversity. This group has fewer defenses against the possible effects of algorithmic news personalization than any other segment of Dutch society. Whatever the effects of personalization may be, some people, especially the youngest and least educated, are more susceptible to these effects than others.

The machine side of the feedback loop
In the previous sections, we provided evidence that news users differ in more than just their topical interests: Some people have very particular expectations of news recommenders, whereas others do not. For some people, personalized news sources complement non-personalized sources, but for others personalized channels are the dominant information source. Yet, current theories do not account well for this non-topical heterogeneity of users, or for how algorithms may respond to it. In this section, we take the first steps toward extending theory to account for this heterogeneity and describe how it shapes the interaction between algorithms and their users.
Feedback loops are part of our current understanding of personalized news media. This understanding, shaped by Pariser, and referred to in this section as the naïve theory of news personalization, assumes that algorithms measure user engagement to provide recommendations that further increase that engagement. 18 The same naïve theory also assumes that user engagement depends on how closely the recommended news items align with the user's interests. Consequently, for the media organization that deploys the recommender, the most relevant differences among users are topical. The task of news personalization is to distinguish, say, the tennis aficionado from the political junkie and provide them with tennis and political news, respectively.
The naïve theory rests on the assumption that algorithmic feedback loops focus on a particular set of user signals. Users curate their information environment all the time. User engagement (e.g., reading, liking, sharing, commenting, or paying for an article) signals topical interest. 19 Other acts, such as hiding news items, unfollowing sources, or ignoring recommendations, may signal disinterest. These engagement signals are both abundant and easy to detect, so recommender algorithms, and the naïve theories of algorithmic personalization, tend to focus on them.
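The naïve loop described above can be made concrete with a toy sketch. All of it is our own illustration — the engagement weights and topic labels are invented, not drawn from any real recommender: engagement signals update a topical interest profile, and the next slate is ranked purely by profile match.

```python
# Illustrative engagement weights (hypothetical values, not from any real system)
ENGAGEMENT_WEIGHT = {"read": 1.0, "like": 2.0, "share": 3.0, "hide": -2.0}

def update_profile(profile, item_topic, action):
    """Fold one engagement signal into the user's topical profile."""
    profile[item_topic] = profile.get(item_topic, 0.0) + ENGAGEMENT_WEIGHT[action]
    return profile

def recommend(profile, candidates, k=3):
    """Rank candidate articles purely by topical-profile match --
    the 'naive' loop: past engagement drives future exposure."""
    return sorted(candidates,
                  key=lambda a: profile.get(a["topic"], 0.0),
                  reverse=True)[:k]

# A tennis aficionado who hides political news
profile = {}
for topic, action in [("tennis", "read"), ("tennis", "share"), ("politics", "hide")]:
    update_profile(profile, topic, action)

candidates = [{"id": 1, "topic": "tennis"}, {"id": 2, "topic": "politics"},
              {"id": 3, "topic": "economy"}, {"id": 4, "topic": "tennis"}]
top = recommend(profile, candidates, k=2)
```

The loop is self-reinforcing: every tennis click makes the next slate more tennis-heavy, which is exactly the dynamic the filter bubble argument worries about.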
Yet, our empirical evidence suggests that users also differ in their long-term personality profiles, and fit categories such as people "who have a wide interest," "who appreciate diversity," "who engage with multiple topics," "who value serendipity," "who are curious about unknown information," "who like to be surprised" (Schönbach 2007), "who get easily bored with familiar things," "who highly engage with societal issues," etc. (Duggan and Smith 2016; Munson and Resnick 2010; Webster 2010). These profiles reflect long-term attitudes, which are harder to detect and harder to reconstruct from short-term engagement signals. Yet, these profiles directly shape news consumption practices; therefore, they must indirectly shape the algorithms themselves. To account for these long-term forces that shape news personalization dynamics, we propose an

extension to the naïve news personalization theory, which we call the producer-focused news personalization theory.
Elsewhere, through interviews we conducted within European quality news organizations, we found that many commercial and public service news organizations take into account long-term signals, such as their users' diversity expectations, and long-term journalistic goals, such as promoting quality articles, to shape the behavior of algorithms and the recommendations they produce (Bodó 2018). We call this behavior the producer-focused feedback logic of algorithmic recommendation. The user-focused logic assumes that the goal of the recommendation agents is to please users by maximizing engagement. By contrast, the interviews we conducted suggest that the producer-focused logic works in a different manner. The key performance indicators of algorithmic recommenders depend on the business entities that deploy them; maximizing user engagement is not the only, and often not the most preferred, goal; and ultimately these producer-set optimization goals define the development path of recommendation models.
All kinds of news organizations deploy algorithmic news recommenders, each striving to achieve their own goals through personalization (Bodó 2018). Some commercial media organizations aim to sell as much advertising as possible, and for that they need as much user engagement as possible. Media organizations that produce serious journalism may pursue a steady subscriber base, which they hope to achieve by nurturing loyalty to the brand and cultivating trust in the quality of their journalism (Boczkowski and Mitchelstein 2013). Public service media have charters that oblige them to educate, inform, and sustain social cohesion (Splichal, 2007), and an ongoing challenge for public service media is interpreting their mission in the light of the contemporary societal and technological context (Jakubowicz 2007; EBU 2016). The performance metrics by which these organizations measure the success of their algorithmic recommendations will reflect these particular goals, namely profitability, loyalty, trust, or social cohesion (Bodó 2018; Hindman 2017; Van den Bulck and Moe 2017).
The key performance indicators are different from mere user engagement with the recommendations, although the indicators also reflect engagement (Ferrer-Conill and Tandoc 2018; Tandoc and Thomas 2015; Powers 2018). Producer-set metrics use the aggregates of user engagement signals over long periods of time and across a large number of users. These aggregates expand the evaluation beyond individual users and short-term goals. In addition, producers often need to balance contradictory short- and long-term goals. For example, in the domain of news personalization, recommending controversial stories might maximize ad revenues, as these create much user engagement. But such controversial stories may harm long-term goals, such as the trustworthiness and brand value of the news organization. The performance of the algorithms is measured by complex metrics that mix these short- and long-term considerations.
In this producer-focused feedback loop, the performance of recommender algorithms is measured by a number of indicators that cover user engagement and measures for long-term goals, such as loyalty, brand value, conversion to subscribers, or diverse recommendations (Bodó 2018). In the end, the development of recommendation algorithms will be determined by long-term producer-focused goals, rather than by short-term user-centric goals. The user has a different role in this feedback loop than that envisioned by the naïve theory. The purpose of the recommender is to maximize producer-set performance metrics, sometimes at the cost of satisfying the short-term desires of users. These metrics determine the development of the recommendation algorithm and not just the next recommendation the user receives. If the organization wants to provide recommendations that lead to loyal subscribers, then the performance of algorithms must be measured by their success in serving users who expect diverse and in-depth recommendations, as well as topically relevant suggestions. In other words, the producer-set algorithmic feedback loop leaves room for indirect user agency, whereby the user's more complex expectations are represented in, and measured by, the long-term performance metrics.
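As a minimal illustration of such a producer-set metric — the weights and metric names are hypothetical, not taken from Bodó (2018) or any real organization — a composite KPI might blend short-term engagement with long-term goals:

```python
# Hypothetical weights for a producer-set composite KPI; a real organization
# would tune these against its own goals (the cited interviews describe the
# practice, not these numbers)
WEIGHTS = {"engagement": 0.4, "diversity": 0.3, "loyalty": 0.3}

def composite_kpi(metrics):
    """Blend short-term engagement with long-term producer goals.
    All input metrics are assumed to be normalized to [0, 1]."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

# A clickbait-heavy slate: high engagement, weak long-term metrics
clickbait = {"engagement": 0.9, "diversity": 0.2, "loyalty": 0.3}
# A balanced slate: slightly lower engagement, stronger long-term metrics
balanced = {"engagement": 0.7, "diversity": 0.8, "loyalty": 0.8}
```

Under these invented weights, a slate that sacrifices some engagement for diversity and loyalty scores higher than a purely engagement-maximizing one, so the optimization pressure on the algorithm points in a different direction than the naïve theory assumes.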

Conclusions and policy considerations
The filter bubble argument is alarming because it envisions diversity-averse users and diversity-blind algorithms. Yet, as we have shown in this article, this is not the only logic that plays a role in the interactions between algorithmic recommenders and users.
We have shown that some users assess the value of news personalization according to the diversity and depth of the recommendations. We also identified why news producers may be interested in catering to these expectations. Confronted with misinformation and attempts by states to influence the political process of foreign countries, many traditional news organizations have reaffirmed their mission to serve the public interest, and to maintain trust or customer loyalty (BBC 2015; Viner 2017; Coleman 2017). These news organizations now need to consider how algorithmic agents can help them achieve these goals, instead of merely maximizing user engagement. Were news recommenders to look beyond simple engagement and meet the expectations of diversity-seeking users, they could create a virtuous (in the truest sense of the word) circle of increasing diversity.
Nevertheless, we also found that there are substantial differences in the intensity and diversity of the channels people use to consume news, and these seem to correlate with expectations of content diversity. Many Dutch people rely on social media for their news and show little concern for diversity or the public sphere. These people use news channels that focus mostly on short-term engagement and that mirror their users' limited concern for diversity or the public space. This group constitutes a strong case for policy intervention, because its members risk falling victim to unconstrained market forces, which may or may not develop in a societally desirable manner. What are the policy tools to prevent these people from getting caught in the wrong feedback loop?
The model outlined in this article offers two loci of policy intervention: on the user's side, creating favorable conditions for exposure to diverse content, and on the side of the producer of the algorithmic agent. Policy measures and regulations that address exposure diversity must strike a tenuous balance between the positive obligations of states to ensure the optimal conditions for people to exercise their right to freedom of expression, including the right to diverse content, 20 and the obligation to refrain

from state interference with the media as well as respect for users' privacy (for an extensive overview of this debate, see Helberger, 2012). It is worth noting, however, that while earlier policy documents excluded matters of exposure diversity per se from the regulatory ambit (Council of Europe, 1999), more recent policy documents explicitly address the need for states to take measures to "enhance users' effective exposure to the broadest possible diversity of media content" (Council of Europe, 2018, para. 2.5). And while the focus for commercial and online media is on encouraging them to facilitate and promote exposure to diverse content, it is the public service media that are expected to play an active role in furthering media diversity (Council of Europe, 2018, paras. 2.5 and 2.8). Public service media traditionally play a strong role in informing the public and introducing the young to citizenship by, for example, providing a common set of issues and balanced, diverse information. However, in the current fragmented media landscape, these public service media fail to capture the attention of the young digital natives in particular.
Accordingly, public service media and quality news media should be stimulated to reach digital native audiences with appropriate, possibly personalized content (Helberger 2015). Many of the leading public service broadcasting organizations are currently investigating how data-driven recommendations can contribute to promoting public values, such as diversity, and more generally to their mission to inform (Sørensen 2013). The first of the 10 recommendations of the European Broadcasting Union (EBU)'s Vision 2020 report is "Better understand your audiences," so that public service media are able to adjust their services to the different information needs and preferences of a heterogeneous audience (EBU 2016, 15). Personalization is explicitly mentioned as a possible tool to do so, as is what the report calls 'innoversity' (EBU 2016, 17): using diversity as a source of content innovation and of new formats. In doing so, public service media move through a minefield: some initiatives may still qualify as part of their general public mission, while others come dangerously close to the competitive domain of the commercial media and the press, where state aid laws as well as national media laws draw strict lines that publicly funded organizations may not cross, lest they distort competition in media markets (Van den Bulck and Moe 2017; Donders and Van Rompuy 2012).
Moreover, media law and policy should acknowledge that audiences are diverse, and that some groups in society are more prone to selective exposure than others. Media law and policy are often built on simplistic assumptions about media users, without differentiating between people and how they consume media content, 21 or the impact that this has on public opinion formation (Craufurd and Tambini, 2012). Minors and people with disabilities are an exception: in many jurisdictions, media law protects minors from certain content 22 and formulates specific access rights for people with disabilities. 23 But even in this area, media law protects people by limiting or enabling access to certain media or content. 24 Existing media laws are seldom informed by a deeper understanding of how minors and people with disabilities engage with media, benefit from diverse content, or are hindered by technological developments, for example through a dependence on personalized recommendations. We argue that diversity policies need to move from a one-size-fits-all to a more diversified approach.

On the producer side, we saw that some media organizations are motivated to implement recommendation agents with performance goals that incorporate more than just maximum short-term engagement. Traditional news organizations have professional codes of conduct, sometimes centuries-old reputations, and deep roots in local socioeconomic conditions. In other words, these news organizations have a lot to lose if recommendation performance indicators do not take into account long-term and societal considerations. Having said that, investing in the development of more sophisticated recommendation agents can be both costly and time-consuming, and often requires expertise that these organizations do not have in-house.
Media policy makers can have an important role in creating more favorable conditions for the development of such more sophisticated algorithms by, for example, funding innovation projects, academic research, and concrete academia-industry research collaborations.
As a baseline, and as this article has also shown, probably the best safeguard against the dangers of selective exposure and filter bubbles for the broad majority of users is a vibrant media landscape where users can encounter and choose from various personalized and non-personalized news sources. Thus, public policy should continue to stimulate the broad availability of news sources. In addition, more targeted initiatives may be needed to reach "social media only users," particularly those who are not interested in diverse news.
For the news media, our article has demonstrated that a significant share of the audience does care about diverse news, and that there is a demand for diverse recommendations. Open and audited metrics to measure the societal impact and diversity of algorithmic recommenders could help news organizations to meet that demand, and offer personalized news in a societally responsible way (Helberger 2016;Eskens, Helberger, and Moeller 2017). Offering a diverse media diet will matter for some types of media more than for others. Due to the logistics of digital advertising, some social media platforms clearly have an incentive to disseminate unchecked and highly engaging information (Tambini 2017) for the sake of clicks and other metrics of short-term engagement. This may also apply to some media organizations that have little concern for a healthy public debate, and instead focus on increasing profitability and shareholder value. Still, public pressure and reputational costs have forced even these organizations to change some of their algorithmic priorities (Ohlheiser 2016;Trefis Team 2016). Seeing the impact that social media have on the media diet of at least certain parts of the population, a new challenge for media policymakers and regulators is to establish systematic monitoring of this field to assess the impact of these new, highly personalized players on social cohesion, a diverse information landscape, and a shared public sphere.
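One candidate for such an open, auditable metric — a sketch of our own, not a standard proposed in the cited literature — is the normalized Shannon entropy of the sources (or topics) in a recommendation slate:

```python
import math
from collections import Counter

def slate_diversity(items, key="source"):
    """Normalized Shannon entropy of a recommendation slate:
    0.0 when every item comes from a single source (or topic),
    1.0 when items are spread evenly across sources."""
    counts = Counter(item[key] for item in items)
    n = len(items)
    if len(counts) < 2:
        return 0.0
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(len(counts))

# Hypothetical slates: one drawing evenly on four sources, one on a single source
uniform = [{"source": s} for s in ["wire", "local", "opinion", "foreign"]]
narrow = [{"source": "wire"}] * 4
```

Published regularly and audited externally, a number like this would let outsiders compare how much diversity different recommenders actually deliver, without requiring access to the underlying algorithm.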

Limitations and future directions
The Netherlands provided us with a unique opportunity to study attitudes toward a technology that is just emerging. However, as this is a case study, the generalizability of our results is limited. The Netherlands is characterized by a functioning, diverse media system that includes a strong public broadcaster and almost

universal access to broadband internet. Future research should further investigate how contextual factors such as the media system shape the attitudes towards news dissemination technology (see also Thurman et al., 2018).

DISCLOSURE STATEMENT
No potential conflict of interest was reported by the authors.

NOTES
1. We use the term "algorithmic agent" to suggest that machine learning methods enable firms to delegate decision making to algorithms in a growing number of activities, from credit scoring to selecting relevant information. The use of the term "agent" does not imply that these algorithms operate autonomously, without human oversight. On the contrary, as we explain in the paper, such oversight, either direct (provided by those who develop and oversee the algorithms) or indirect (provided by users who feed data into these systems), is an essential part of the system.
2. Nearly all of the companies leading the list of the most visited internet websites (https://www.alexa.com/topsites, last visited on March 23, 2018) employ algorithmic information personalization one way or another. While almost all major e-commerce websites, search engines, and social media websites utilize personalization techniques, it is easy to encounter personalized information even on non-personalized websites if they carry ads served by third-party advertising networks. Advertising networks, such as Google's AdSense network, which currently has almost a 70% market share, decide which ads to show based on who the user is, and thus would also qualify as an algorithmic information personalization service. See: https://www.datanyze.com/market-share/advertisingnetworks, last visited on March 23, 2018.
3.
Article 19 Universal Declaration of Human Rights: 'Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.'; Article 19(2) International Covenant on Civil and Political Rights: 'Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.'
4. In both the US and the EU, the fundamental right to privacy has been interpreted as containing a right to 'decisional privacy': the right to autonomously make life-defining choices. See Van der Sloot (2018); Roessler (2005).
5. Article 21 Universal Declaration of Human Rights: 'Everyone has the right to take part in the government of his country, directly or through freely chosen representatives.'; Article 25 International Covenant on Civil and Political Rights: 'Every citizen shall have the right and the opportunity, without any of the distinctions mentioned in article 2 and without unreasonable restrictions: (a) To take part in the conduct of public affairs, directly or through freely chosen representatives.'
6. While the technical details of information personalization are well known (Jannach et al. 2010), there is very little research on how these technologies are implemented in various settings, such as news (Bodó 2018).
7. In one of its recent recommendations, the Council of Europe warned that "[s]elective exposure to media content and the resulting limitations on its use can generate fragmentation and result in a more polarized society" (Council of Europe 2018), thereby repeating earlier warnings by e.g. the High Level Expert Groups on Media Pluralism (Vīķe-Freiberga et al. 2013) and Fake News (High Level Group on fake news and online disinformation 2018).
8.
More recent discussions suggest that besides topics, higher-level factors, such as ideological preferences, might also be considered as the basis for filter bubbles; however, such factors are notoriously hard to establish and measure, especially outside of relatively simple, binary ideological systems like the US.
9. An extensive discussion and conceptualization of this malleable concept would exceed the scope of this article; see instead: Helberger and Wojcieszak (2018).
10. E.g. European Court of Human Rights, Refah Partisi and Others v Turkey, 13 February 2003, paras. 87, 88, 89.
11. A growing number of recommendation algorithms seek to break filter bubbles.
Examples include Huffington Post's Flipside and Wall Street Journal's Red Feed, Blue Feed; independent initiatives such as Read Across the Aisle, Escape your Bubble (Chrome), and the Swedish Filterbubbland; as well as sophisticated recommender projects by, for example, the New York Times, Blendle, and the Dutch Volkskrant.
12. By relying on standardized survey data, we could compare attitudes towards personalization across a large and representative sample of the population. This approach allowed us to use statistical tests to ascertain the relationship between individual characteristics and opinions on the one hand, and attitudes towards news personalization on the other. Yet, standardizing questions means that we cannot capture the full causal mechanism that links these variables together. For this purpose, future research should aim to gain a deeper understanding of what it is that links expectations of diversity and acceptance of news personalization, for example in focus groups or using big data research. By relying on self-reported attitudes and behaviors, we also rely on the capability of respondents to accurately recall and express their attitudes and behavior in a questionnaire. Therefore, our results should be interpreted keeping in mind that respondents' misunderstanding or misremembering could have impacted the findings.
13. Cronbach's alpha for the three items was .75 (M = 3.42, SD = 1.54). Higher values indicate higher levels of agreement.
14. We used the poLCA package (Linzer and Lewis, 2013) in R to conduct the analysis.
15. The three-class model had the best fit, but did not distinguish between News omnivores and Moderate news users.
16. We tested the results with different reference groups to the same effect.
17. The inclusion of cluster membership in the OLS model instead of the news exposure variables did not produce statistically significant results for cluster membership.
18. In this section, we use the term "algorithm" to refer to different approaches to providing personalized content recommendations based on user profiles built on engagement history. For our discussion, the choice of a particular algorithmic approach, and its parametrization, is less important than the fact that user engagement is collected, preserved, and used by a particular subset of algorithmic recommenders that rely on user histories (rather than, for example, the recommended content's semantic proximity) for making recommendations. For a detailed discussion of the different algorithmic models, including collaborative, content-based, knowledge-based, and hybrid models, see, for example, Jannach et al. (2010).
19. We arguably follow an individualistic approach when we interpret users' engagement signals as a representation of their intrinsic values, interests, attitudes, and beliefs. In contrast, Csigó (2016)