Understanding data professionals in the police: a qualitative study of system-level bureaucrats

Through the introduction of algorithmic systems into police organizations, a new employee emerged: the data professional. Contrary to street-level officers, little is known of the discretionary power of these system-level bureaucrats. Our qualitative research into the Netherlands Police provides a first empirical and theoretical understanding. The study shows that data professionals exert discretion and are aware of public values, but their value-sensitivity often does not translate into responsible practices. Data professionals use a variety of arguments to dissociate themselves from, or downplay, their responsibilities. We conclude that this distancing hampers the connection between the discretion and responsibility of data professionals.


Introduction
Police organizations have a history of data-driven decision-making dating back to the early twentieth century, and over time they have become dependent on a large variety of information systems. Technology plays a role in all aspects of policing, from planning police activities (Meijer, Lorenz, and Wessels 2021) to providing officers on the street with information about citizens and locations, registering police actions (Manning 2008) and granting access to the actions of colleagues (Manning 1992). Information systems have thus come to transform police practices (Chan 2001), and modern police organizations have become highly dependent on technological systems.
Recently, the introduction of new data and algorithmic technologies has been transforming police organizations, and police forces around the globe increasingly introduce such technologies in their work (e.g. Brayne 2020; Lorenz, Meijer, and Schuppan 2021; Lum et al. 2019; Meijer, Lorenz, and Wessels 2021). Our qualitative study of data professionals in the Netherlands Police is grounded in semi-structured interviews (Schwartz-Shea and Yanow 2012; Weiss 1995), as well as observations of four digital meetings of their professional community.
This study contributes, first, to specific debates about the discretion of professionals in the police (Bennett Moses and Chan 2018; Chan 2001; Meijer, Lorenz, and Wessels 2021) and, second, to broader debates about algorithmic systems in the public sector (Zouridis, Marlies, and Mark 2020; Wilson and Broomfield 2022; Boer and Raaphorst 2021). It also contributes to work on public values, a concept relating to the participation of organizations in the formation of a shared set of norms and values (Jørgensen and Bozeman 2007).

Sensitizing concepts
In this subsection we present the literature and relevant theoretical insights that we used to build a conceptual lens for studying data professionals in the police. Due to the interpretative nature of this work, we regard these insights as sensitizing concepts. Bowen states that 'sensitizing concepts draw attention to important features of social interaction and provide guidelines for research in specific settings' (Bowen 2006, 14). We elaborate on our three main sensitizing concepts: 'algorithmic systems', 'responsibility' and 'public values'.

Initial understanding of domains of data work in the police
When it comes to algorithms in police organizations, significant academic attention has been paid to different types of predictive policing algorithms, including tools used for risk assessment, planning and surveillance, often based on historical data (e.g. Bennett Moses and Chan 2018; Brayne 2017, 2020). The actual deployment of algorithmic systems in the Netherlands Police, however, goes well beyond predictive policing. In a recent study, Schuilenburg and Soudijn (2021)2 identify three domains in which the Netherlands Police uses data and algorithmic technologies: 'work on the street', 'criminal investigations and detective work' and 'intelligence'. We build upon this distinction both because it fits our empirical domain and because no established overviews of the different types of data work are available in the international literature.
The first category, 'work on the street', denotes those algorithmic projects used by or designed for police officers and surveillants at the street level to replace or enhance routine tasks. One such system is the Netherlands Police's predictive policing algorithm, the 'crime anticipation system', or 'CAS' (Lorenz, Meijer, and Schuppan 2021; Oosterloo et al. 2018; Waardenburg, Anastasia, and Marleen 2018). This category further includes systems that allow a police officer to access certain information (e.g. unpaid fines registered to a licence plate) or to start working on incident reports while on the move. Schuilenburg and Soudijn also include in this category 'applications for business operations', denoting systems for police officers to register activities or coordinate operational capacity. The second category concerns 'criminal investigations and detective work'. According to Schuilenburg and Soudijn, this work involves large amounts of data, for instance from confiscated devices. Such data needs to be collected, combined and analysed to discover new patterns or correlations that can help in criminal investigations - investigations that may concern both traditional crime and cybercrime. The third category, 'intelligence', differs from the other two in that no new data is created; instead, large amounts of data are analysed to produce business intelligence products that help with detection, enforcement or assistance - for instance, to decide which cases warrant further investigation using which strategies.
Schuilenburg and Soudijn base their analysis on publicly posted job opportunities; their study lacks empirical evidence from inside the police organization. We will therefore use their categorization as a starting point for our own analysis of data professionals' work, and adapt it as we see fit in view of our empirical data.

Responsibility and discretion of data professionals
In traditional bureaucracies, public policy is implemented by frontline workers who interact directly with citizens, such as police officers. These so-called 'street-level bureaucrats' are found to have considerable discretionary power (Lipsky 1980). Discretion refers to a certain level of professional autonomy or freedom in making decisions. Decision-making, then, is at the heart of a street-level bureaucrat's work: for instance, using their discretion to interpret the rules and decide how to apply or bend them in specific cases. As such, street-level bureaucrats can react to a given situation as they see fit, within given boundaries (Tummers and Bekkers 2014).
However, with the rise of IT systems and later data and algorithmic technologies, organizations and policing practices have changed significantly. Already in 2002, Bovens and Zouridis signalled the emergence of 'system-level bureaucracies' in which direct contact with citizens is greatly reduced and mostly limited to assistance provided by help desks. Decision-making at a case-by-case level becomes less prevalent, as the system-level bureaucrat works on the development, optimization and maintenance of algorithmic systems (Bovens and Zouridis 2002; Bullock 2019; Boer and Raaphorst 2021). More recent studies show how system-level bureaucracies have developed even further since 2002, especially considering the prevalence of algorithms. One important development is the emergence of the data professional, a previously unknown role related to the rise of 'big data' (Eck, Bovens, and Zouridis 2018; Zouridis, Marlies, and Mark 2020). There is a large body of literature suggesting that discretionary power is increasingly shifting towards these data professionals (e.g. Kool et al. 2017; Eck, Bovens, and Zouridis 2018; Zouridis, Marlies, and Mark 2020; Meijer 2009).
Researchers have also sketched a more nuanced situation in which technology does not fully replace street-level discretion (Buffat 2015). Recent empirical studies of the Netherlands Police confirm these nuances. For one, the Netherlands Police currently uses no decision-making systems without human control (Schuilenburg and Soudijn 2021). Second, police officers operating at the street level continue to have significant discretionary power, particularly building on their own expertise for quick decision-making, acquiring data and evaluating algorithmic advice (Boer and Raaphorst 2021; Meijer, Lorenz, and Wessels 2021). Third, some algorithmic systems at the Netherlands Police do not influence street-level discretion at all, e.g. algorithms that monitor the IT use of police employees, using data to detect significant changes in employee behaviour. Although these systems indirectly influence police officers, they do not directly affect their discretionary power at the street level (Brayne 2020).
This nuanced perspective on the relationship between data and algorithmic technologies and human actors reveals an organization where street-level and system-level bureaucracies co-exist, with fluid boundaries between them. As such, the discretion of system-level data professionals does not fully replace street-level discretion, but these new employees do hold significant decision-making power. However, surprisingly little is known about how data professionals exert this discretionary power, particularly when it comes to weighing public values.

Public values and technical choices
Algorithmic systems are often introduced with promises of increased effectiveness and efficiency, but they are also praised for their potential to decrease human bias, make better-informed decisions, relieve human labour, and provide insights into a reality too complex for people to comprehend (Meijer and Grimmelikhuijsen 2021; Wieringa 2020). For police organizations, algorithms take over many standard (administrative) tasks, ensuring employees can spend more time on complex cases. Schuilenburg and Soudijn further note that algorithms can have a positive organizational impact in two ways. First, they can speed up internal organizational learning by processing large amounts of data and creating feedback loops. Second, as data becomes more widely and easily accessible to those within the organization who have a right to this data, it also helps unify different police departments and actors within the organization, creating an equal knowledge base (Schuilenburg and Soudijn 2021). Finally, algorithmic systems can also help ensure limited resources are deployed more efficiently, as is the intention of the CAS system in the Netherlands. Ultimately, data and algorithmic technologies are expected to play a (direct or indirect) role in solving crimes or reducing crime rates (Brayne 2020).
This positive perspective is built largely on an assumption that these technologies are neutral. In this view, algorithmic output is based on what is regarded as factual data, detached from imperfect, emotional, and opinionated human decision-making. Output is thus presented with a particular guise of objectivity and certainty, granting the algorithm legitimacy. However, many scholars have demonstrated that algorithms are anything but neutral (Ananny 2016; Bennett Moses and Chan 2018; Gillespie 2014; Kitchin 2017). Algorithmic systems have been found capable of (re)producing bias and discriminatory practices as well as inflicting harm, thus having serious real-world consequences (Meijer and Grimmelikhuijsen 2021; Meijer, Tobias Schäfer, and Branderhorst 2019; Stahl and Wright 2018; Wieringa 2020). This means that the work of data professionals entails making value-based choices in the design and implementation of algorithms.
From a public management perspective, the role of data professionals becomes relevant when public value decisions are disguised as seemingly technical choices. In the literature, a distinction can be made between a broad perspective on public values (e.g. Hood 1991) and more specialized perspectives focusing only on bureaucratic values (e.g. Møller, Zinck Pedersen, and Svejgaard Pors 2022). In the current study, we start from a broad perspective, regarding public values as those values that contribute to, and are essential for, 'good' public governance. They specify citizens' rights and obligations, as well as the principles on which we should base governments and policies (Jørgensen and Bozeman 2007; Beck Jørgensen and Sørensen 2012; Graaf, Huberts, and Smulders 2014; Dijck, Poell, and de Waal 2018).
The literature on public values, however, stresses that they are often incompatible or incommensurable in nature, leading to value conflicts (de Graaf, Huberts, and Smulders 2014; Graaf and Meijer 2019). When different desirable values are found to be conflicting, values must be negotiated. This process of weighing and negotiating public values requires significant discretionary power, and can be regarded as a core responsibility of any bureaucrat in a public organization.
The negotiation of public values is historically anchored in (professional) institutions where, after extensive deliberation, they are moored in laws, agreements, professional codes and codes of good governance (van Dijck 2020; Graaf, Huberts, and Smulders 2014; Beck Jørgensen and Sørensen 2012). In our empirical work we rely on public values as listed in such documents relevant to the Netherlands Police, and examine how they are observed by data professionals. This includes values such as privacy, security, inclusion (non-discrimination), fairness and efficiency, which are deemed highly important qualities in public governance. Data professionals are partly responsible for safeguarding and incorporating public values in algorithmic systems (e.g. Ananny 2016; Martin 2019; Meijer 2009; Zouridis, Marlies, and Mark 2020). For the purpose of this paper, we refer to this weighing and negotiating process as 'making value judgements or value decisions'.
In sum, our overview of relevant literature highlights (1) that data and algorithmic technologies are becoming increasingly important for police organizations, (2) that these technologies are value-laden, and (3) that values are inserted into algorithmic systems at least in part through the human actions of data professionals. Since the role of data professionals is becoming increasingly important, the empirical part of this study focuses on how data professionals in the Netherlands Police exert their discretionary power when it comes to making value decisions in their daily work.

Methodology
The current study builds on existing qualitative research on police organizations, adapting similar methods to the modern context (Manning 1992, 2008; Hulst and Tsoukas 2021). We focus on data professionals in the Netherlands Police, which we expect to be a particularly rich research site for multiple reasons (Bryman 2012; Haverland and Yanow 2012; Schwartz-Shea and Yanow 2012). First, in contrast to many other police organizations internationally, the Netherlands has a centralized national police force, which is primarily responsible for developing and implementing data practices at the system level. Our preliminary research revealed the emergence of national networks of professionals (e.g. the Data Science Community), as well as projects organized with local or regional units. A second consideration is that the Netherlands Police actively invests in digital innovations, e.g. through the establishment of an AI lab, where universities and police collaboratively research AI projects. Third, the Netherlands Police has recently instigated a high-level policy portfolio on 'Ethics and Privacy' and appointed a project leader on 'Ethics in AI'. This combination of factors makes the Netherlands Police an interesting case to investigate how data professionals negotiate public values in algorithmic systems.
We studied data professionals working for the Netherlands Police, employing a qualitative interpretive methodology to gain in-depth understanding of their work, value-sensitivity and discretion. In contrast to more positivist research designs, interpretive research focuses on unravelling meaning-making processes through abductive reasoning in particular contexts. Such abductive reasoning begins with a puzzle, the researcher continuously going back and forth between theory and empirical data to make sense of the puzzle. As a result of this iterative-recursive relationship between theory and data, the research design is constantly adapted (Ospina, Esteve, and Lee 2018; Schwartz-Shea and Yanow 2012). To guide our interpretative research, we identified three empirical subquestions, relating to (1) the work of data professionals, (2) their value-sensitivity, and (3) how they apply discretion. In the explorative phase of the research, our understanding of the work of data professionals was informed by numerous informal interviews as well as desk research. The main findings are grounded in a total of 21 one-hour qualitative semi-structured interviews with data professionals in the Netherlands Police.3 Interviews were complemented by observations of four online meetings of the Data Science Community at the Netherlands Police, an informal platform where data professionals from across the police organization discuss their work.4 The researchers were granted approval by the Ethical Committee of their institution (FETC-REBO decision on 'Value-sensitive algorithmization in the Dutch National Police').
Interviews aimed at gaining insights into all subquestions, but particularly questions 2 and 3. Following de Graaf and Meijer (2019), instead of asking abstract questions about public values, we focused on dilemma situations or discussions participants had experienced. For our analysis of applying discretion, we focus on the decision of when a system is 'good enough' to move it to a next phase of development. This can be either a pilot phase, where the technology is tested in real-world situations, or the definite implementation of an algorithmic system. We decided to focus on this assessment to move towards implementation as it is a decision that all data professionals in the Netherlands Police are bound to face at one point. Additionally, we feel this decision comprises a very critical step in the design process: the moment where public values should be safeguarded to prevent harm.
In the first round of interviews, participants were approached through a broad call for police employees in the role of 'data scientist or developer' as well as through snowballing techniques. We were soon confronted with the fact that this group of 'data scientists or developers' is very heterogeneous. There are no official function profiles within the organization and there is no clear indication of the number of people that are employed as data scientist or developer. Some participants do not self-identify as data scientists, although their daily work does involve algorithm design, modelling and/or handling large amounts of data. We use the term 'data professional' to refer to a very broad group of police employees, working on diverse projects in different teams and departments, but all working with data and algorithmic systems. In a second round of interviews, we specifically approached participants working in different departments or on different projects than we had previously included in the research, in an attempt to maximize 'exposure' to different understandings and interpretations (Schwartz-Shea and Yanow 2012). In line with interpretative research methods, we employed an abductive coding strategy, going through various iterations of searching for wider patterns across our data, then finding empirical evidence for these patterns at the sentence level and subsequently connecting patterns and evidence to theory.
We conducted a total of three main rounds of thematic coding using the NVivo software. Each round consisted of an iterative process where codes were adapted and added as deemed necessary. Complexities we encountered were elaborated on in a separate coding memo. Instances of open, axial, and selective coding were present during each round (Bowen 2006). The first round of coding was mostly open, and included the first eleven interviews. This motivated us to conduct a larger number of interviews. In the second round of coding we attempted to link our codes more explicitly to existing literature and our conceptualization of public values. In addition to these main rounds of coding, a small sample of three interviews was coded by two of the authors, to check for intercoder reliability and discuss the coding strategy for the final round. After each round, we went back to the literature and used various perspectives to provide new interpretations of the data.

Results
In this section, we look at our data through the lens of our three subquestions, relating to the work, value-sensitivity, and use of discretion of data professionals in the Netherlands Police. We discuss each of the subquestions separately and turn to our main research question and the implications of our findings for police practitioners in the final section.

Subquestion 1: the work of data professionals
Our first subquestion concerns the daily work of data professionals in the Netherlands Police, where they constitute a relatively new group in the organization. Most data professionals in our sample have spent less than two years in their current job and have limited previous work experience. They are often highly educated in fields like data science, artificial intelligence, IT or mathematics, but lack operational experience at lower levels of the police organization. Depending on the department where they are employed, some basic training is provided to introduce them to the police organization.
The type of work that data professionals do is diverse. Using the division made by Schuilenburg and Soudijn (2021) as a starting point, we analysed all algorithmic projects discussed in our data. This resulted in a total of four categories (Table 1). Most notably, we added a distinct fourth category we name 'enabling technologies', denoting projects data professionals work on that play a role in the further development of data science in the Netherlands Police. Enabling technologies include technologies that transform data in a way that allows the use of further algorithmic systems, for instance an application which transcribes spoken language into text, making it searchable. Enabling technologies can also play a role in further establishing the work of data professionals in the organization, such as an explainable AI toolkit that helps data professionals report their decision-making, allowing for more transparency and accountability.
In analysing our data, we noticed overlap between the 'intelligence' category and the other categories. This is not surprising, as intelligence is traditionally an important task, supporting the core operational policing tasks of street-level police work such as pattern analysis of criminal activities. Intelligence also supports criminal investigations and detective work, for instance as a result of information captured routinely by traffic monitoring. In cases of overlap, we discussed and decided on the most fitting category for each project. If projects had various goals, they were placed in multiple categories.
Table 1 shows that most data professionals (76.2%) we talked to are at least in part involved with enabling technologies. This is not surprising, considering how new they are to the organization and the novelty of data science as a profession. We expect that as data professionals become more embedded in the organization and their work matures, the number of projects on enabling technologies will decrease.
In conclusion, the work of data professionals in the Netherlands Police is heterogeneous, and as a result their daily work and activities vary greatly. Since data professionals occupy a new position in the police organization, they are often unfamiliar with the nature of police work when starting employment, which means they are challenged to create their own data science projects and find their place in the organization. As a result, they often work at substantial distance from daily police operations; particularly when working on experimental technologies, data professionals experience much freedom and few constraints in their work. Our subsequent analysis focuses on the use of this freedom - or discretion (Bovens and Zouridis 2002; Busch and Zinner Henriksen 2018) - and more specifically on whether they apply value-sensitivity in their work.

Subquestion 2: value-sensitivity of data professionals
In general, we found values were often left implicit or not recognized by participants as public values. This required additional work on the part of the researchers to explicate these values. For example, one data professional talking about including 'alternative features' in code (e.g. signals that someone's behaviour is changing for the better) observed this may be 'just a way to increase your model's score and get better intel to the organization and the people on the street, so it's just quality, not ethics [laughs]' (P16). Only when the researcher asked whether it could be both did the data professional consider that a (likely) possibility. In this case, something that could qualify as a value judgement was understood by the data professional as a 'quality' booster.

Value-sensitivity amongst data professionals in the Netherlands Police is informed and shaped through various contexts and discourses, including public debate, rules and legislation, the organizational culture in which the data professional is embedded, as well as one's personal background and attitude towards professional integrity and ethics (see also van der Steen, van Twist, and Bressers 2018).
First, ongoing public debates reflect politics and media coverage. Typically, the media report worst-practice examples of AI or data projects rather than successful implementations; in recent years, GDPR transgressions and privacy debacles received national media coverage. Data professionals are aware of these debates and take notice of negative outcomes and headlines, especially when these touch on police work or public management. However, they may also use such discourses to help them reflect on public values when doing their work, as exemplified by the following statement by a participant:

'Because you often work with sensitive data, and you regularly see news reports that police systems are being abused by corrupt cops or corrupt police employees, selling data. You just don't want that to happen to your system, so this is for any case where you have an app in which you show data or something. But also when it comes to models, you don't want people to be disadvantaged by predictions or whatever. Well, luckily I'm not involved with projects where that would be the case.' (P12)

Most notable is the recent media coverage of political debates concerning the Dutch Childcare Benefits Scandal (Peeters and Widlak 2023).5 As one participant explains, such scandals substantially impact her work: '(...) I would find it extremely awful if something like that would happen inside the police, it is something I think is really important' (P4). Several participants indicated they have learned from this crisis (Boin, McConnell, and 't Hart 2008) and hope to prevent something similar ever happening in the Netherlands Police, but most are not worried about such a risk for their own work, for example because they do not process individual citizen data.
The second discourse concerns rules and regulation. These are most notably, albeit not exclusively, referred to in terms of privacy and data security, as there are specific laws to safeguard these 'core values' for data professionals. Rules are not limited to legislation but also consist of organizational rules (Bozeman 1993). Most participants (95.2%) are positively aware of the existence of European Union guidelines and the 'Quality Framework for Big Data' (Dutch: Kwaliteitskader Big Data), in addition to formal legislation relevant to their work (also mentioned in meetings of the Data Science Community, O1, O2, O3). This framework was developed by the Netherlands Police in collaboration with the Public Prosecution Service, explicitly listing relevant public values and providing a clear starting point for value-sensitivity. However, despite positive attitudes, participants report they do not use such tools on a regular basis; as such, these tools do not seem to contribute significantly to value-sensitivity (see also Fest, Wieringa, and Wagner 2022).
The third context or discourse to consider is the organizational culture. Core values are expressed in the main motto of the Netherlands Police: 'protect and serve'. Many of the data professionals we spoke to show a strong intrinsic motivation - public service motivation (Perry 1997; Ritz, Brewer, and Neumann 2016) - to do something for the public good through their work. Indeed, this was often the reason they were attracted to working for the police in the first place. As one participant explains: '(...) of course you don't think about it on a daily basis, but the fact that you do something which has added value for society. That is what makes it special' (P10). Another participant claimed that police employees have a 'high sense of morality and a strong ethical compass' (P2). They note that this built-in compass helps safeguard public values. Some of those values were commonly visible in our data, most notably safety - a value corresponding closely to police work. Most data professionals feel that their work contributes directly or indirectly to 'catching bad guys' (P12). In their daily work, contributing to the public good also takes the shape of values such as efficiency and efficacy and the ambition to reduce red tape (Bozeman 1993; Pandey and Scott 2002). A productive police organization where red tape is minimized will contribute to the public value of safety.
In addition to these three societal discourses, we find data professionals' personal background and education to be formative for their value-awareness and value-sensitivity. Increasingly, ethics modules are part of the curriculum in many data science-related fields. However, depending on the exact study programme, the level and extensiveness of these ethics courses varies greatly. Further, there seems to be a common understanding amongst data professionals that algorithmic systems can never be perfect and have limitations by default - an observation that is sometimes related to the idea of transparency: communicating openly about such faults and limitations (P2, P5, P7, P8, P9, P10, P11, P13, P14, P16, P19, O3). Finally, personal values might align or conflict with data professionals' work. As one data professional states, he is happy he does not work on facial recognition, which he regards as controversial, due to a perceived conflict with his own personal values:

'I can motivate to myself and others why I think it is good to work on this. Because I am very pro privacy and freedoms et cetera, I would really not want to be working on something like facial recognition. (...) There are some techniques where I would say: well, the police should not take that risk, and then I am mostly just happy not to be working on that myself, because that would definitely make my work more difficult.' (P6)

In conclusion, data professionals in the Netherlands Police show a high level of value-awareness and value-sensitivity. These values concurrently reflect media, legal, and organizational discourses, as well as personal background and education. Our empirical analysis continues in the next section by investigating how data professionals apply these values in their daily practices.

Subquestion 3: applying discretion by data professionals
Our final subquestion relates to implementing public values in day-to-day decision-making. Such decisions highly depend on the type of project a data professional is working on. They might include, for instance, deciding on a specific threshold - a value that determines what colour is allocated to a pixel in a video-processing system, or what score determines whether a person is regarded as low, medium or high risk. As noted in our methodology, we focus here on the decision of when a system is 'good enough' to move it to a next phase of development. Developing discretionary judgement and balanced evaluations is a prerequisite for making such decisions and, as such, for implementing public values into systems.
As noted in the previous sections, respondents report much autonomy and freedom in their decision-making, and are very aware of rules and regulations, especially when it comes to privacy and data security. Privacy officials and legal experts are available for consultation; however, data professionals are not required to consult them. None of our respondents recall instances where they discussed their interpretation of rules with privacy officers or legal experts before implementing it into systems. In some cases, regulation is still full of 'grey areas' and changes relatively quickly, making it difficult to apply in the daily work of data professionals. As one participant aptly notes, laughing, 'the law is so outdated' (P18).
Participants indicate that projects often lack hard prerequisites, clear descriptions, deadlines or goals, enhancing the importance of data professionals' discretionary power. This was explicitly discussed in (at least) 9 interviews. Some participants attribute this at least in part to the technical illiteracy of higher-level managers as well as end users in the police organization (see also Cetindamar Kozanoglu and Abedin 2021). As illustrated by one participant, such illiteracy creates a lot of discretionary space for data professionals: '(...) we often wonder: What do you actually want? And how is this even data science? But those people themselves don't know that very well because it's not their field, so they have the greatest difficulty expressing themselves in technical terms. (...) Usually that simply means you give it your own interpretation or come up with several interpretations for it and go back to those people. You tell them, "we could do this, what do you think?" And generally they say yes, because they don't have a better idea themselves' (P9). Data professionals thus have much decision-making and discretionary power; before they decide whether a system is 'good enough' to move to a next phase of development, data professionals focus on two factors: end users and system quality.
When it comes to the first factor, end users, the key notion is that the system should be useful and contribute towards more effective or efficient work. Who exactly this end user is depends on the type of system (see also Table 1). In order to determine whether the system is good enough, data professionals often turn to user experiences, satisfaction and critical feedback. In some cases, a user might distrust an algorithmic system, particularly when it comes to innovations using specific techniques, e.g. predictive analytics. As noted by one participant, distrust usually fades when the system is perceived as useful or effective: '... the moment you frame it like it is going to relieve your workload, people are much more open to it' (P7). In three of the earliest interviews, this topic was not discussed. Most of the remaining data professionals we spoke to (83.3%) explained that they considered user satisfaction the most important factor in assessing their work. User satisfaction was also mentioned in the data science community (O2, O3).
The second determining factor can be denoted as system quality. Before an algorithmic system is implemented, either as a pilot or as a definitive system, data professionals may run several types of tests. These may include tests for privacy and data security (so-called penetration tests), tests to assess accuracy (e.g. by using a test dataset to determine the number of false negative and false positive results the system returns), tests to determine impact on effectiveness or efficiency (e.g. how many cases can a user process in a given time with the help of this system versus without it) and many more. The focus is on quantitative testing to determine system quality, suggesting a certain neutrality and objectivity in these assessments. However, when asked critical follow-up questions, data professionals concede that in most cases no benchmarks or target values exist (P4, P5, P6, P9, P10, P11, P12, P13, P14, P16, P19, P20). In other words, in most cases there is no common standard or approach to define, for example, a good or acceptable accuracy rate in a specific context. One participant explains the process he went through to launch a particular machine learning application. When it came to deciding whether the system could be implemented, they created a dashboard that stated its accuracy. 'Is it not good enough? Keep labelling (...) because that [dashboard] has to turn green', he explains. When asked how they determined when it would be green, the participant seems a bit flustered and explains it is difficult to come up with good standards. 'At the end of the day, those are the kind of things data scientists just kinda make up' (P9).
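The accuracy tests described above can be sketched in a few lines. This is a generic illustration of counting false positives and false negatives against a labelled test set, not the Netherlands Police's actual tooling; the function and the example data are invented.

```python
# Generic sketch (not the police's actual tooling): count false
# positives and false negatives against a labelled test set.
# The labels and predictions below are invented example data.
def error_counts(labels, predictions):
    """Return (false_positives, false_negatives, accuracy)."""
    fp = sum(1 for y, p in zip(labels, predictions) if p and not y)
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)
    correct = sum(1 for y, p in zip(labels, predictions) if y == p)
    return fp, fn, correct / len(labels)

labels      = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth
predictions = [1, 0, 0, 1, 1, 0, 1, 0]   # system output
fp, fn, acc = error_counts(labels, predictions)
print(fp, fn, acc)  # 1 1 0.75
```

The computation itself is mechanical; what the interviews show is that the judgement call - whether 0.75 is 'good enough' to turn the dashboard green - has no agreed benchmark behind it.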
Some participants explained that systems cannot be flawless, or that there is no way to know whether a training dataset is representative. These and other reasons might be used by data professionals to define their limited responsibilities.
Our findings highlight an interesting paradox: although data professionals are aware of and sensitive to public values, they do not apply their discretionary powers when deciding whether a system is 'good enough' to move on to the next stage of development. We would expect data professionals to report contradictions, but instead there seems to be an internalized routine to elude taking responsibility; data professionals seem to distance themselves from their perceived responsibility in various ways. Our notion of 'distance' refers to arguments expressed by data professionals that allow them to dissociate from, or downplay, their responsibility. They figuratively create a distance between themselves and their technological designs, evading the responsibilities that come with discretionary powers. This finding connects well to the risks that Bovens and Zouridis (2002) identify for 'system-level bureaucrats', software developers who operate at a distance but condition outcomes, although their analysis focuses on accountability whereas ours focuses on responsibility. Based on our findings, we identify four arguments of distance to responsibility: a) distance from citizens, b) distance through organizational complexity, c) distance through experimental quality and d) temporal distance. All four contribute to a pattern of absolving the data professional from taking discretionary responsibility and applying public values in processes of decision-making.

Distance from citizens
About half of our participants (N = 10) feel they operate at a large distance from citizens because their work does not immediately impact citizens. While in some interviews this was not explicitly discussed, some data professionals do work at a close distance from citizens, e.g. one participant who works on systems that are designed for use by citizens (P11), or a participant (P18) who works on projects to directly support detective work. In the latter case, however, these citizens are already regarded as 'bad guys' that need capturing. Participants' feeling of detachment increases when they do not process individual citizen data, or when they merely optimize work that is already being done, so their computational labour does not alter existing police practices. As one participant explains: '(...) no one gets shot because I put something at the top of a list. (...) But yeah, actually it does happen that way. The Machine Learning decides what you get to see, so you decide what to pursue. But the Machine Learning does not make the decision, and there are so many steps in between... It isn't like the computer says Bob is the big drug lord, and Bob is immediately shot to pieces or anything like that [laughs]. It isn't like that, so it's not that sensational, actually' (P9). This type of detachment is reflected in the type of projects data professionals work on. Only one of the data professionals reporting this distance was involved in projects related to 'work on the street', and in this case, the feeling of distance was exclusively associated with processing individual citizen data. As this respondent observed: 'People often think about the police, like, hey that's a violation of citizen rights and stuff like that, but no, I actually never work directly with citizen data myself' (P18).

Distance through organizational complexity
The second type of distancing from responsibility we found can be understood as detachment caused by organizational complexity (see also Bovens 1998; A. Meijer 2009). This type of distancing was reported by 85% of our participants and is thus a very common feeling amongst data professionals in the Netherlands Police. Data professionals feel their responsibility is limited because responsibility for algorithmic system design and implementation is distributed: some decisions are made by managers or other superiors, some decisions are shared between peers during code reviews, or a data professional may expect others to pick up on mistakes. Algorithmic systems designed by data professionals do not hold final decision-making power; there is always human control embedded in the organization. Hence, errors in the technology will not necessarily result in faulty judgement by humans. As one participant exclaims, 'deciding about someone based on AI? People don't want to touch that!' (P2).

Distance through experimental quality
The third type of distancing we found (reported by 12 out of 21 participants) relates to algorithmic systems that are in an experimental phase. If a project is still in this phase, data professionals feel they are not responsible for mistakes made by the system, as long as the experimental quality of the project is explicitly communicated. As one participant explains, 'in some projects we go live as soon as possible, and just say it's still an experiment, so we kinda warrant there may still be mistakes' (P12).

Temporal distance
Related to arguments of experimental quality, we found evidence of a 'temporal distance' argument used by data professionals to describe their perceived responsibility. In some cases, especially when algorithmic systems are still in early phases of development, responsibility for safeguarding public values is postponed, for example because data professionals feel the system is far from being completed and it is too early to think about values, or because time pressure requires a pragmatic short-term solution. As one participant says: 'If there is more time pressure I think, yeah, we know this solution works for now, but if we want to use it for another project in two months, we really have to do it differently' (P13).
Despite their explicit awareness of value-sensitive aspects, data professionals at the Netherlands Police often deploy arguments of distance to evade responsibility for safeguarding public values in the algorithmic systems they help design. Indeed, data professionals' use of various arguments of distance mostly relates to the novelty of this profession and the lack of explicit agreements on distributed responsibilities. Notably, data professionals' detachment is aggravated by their relative inexperience with street-level police practices, the experimental quality of their computational work, as well as the organizational complexity of the police force - factors that hamper a clear and unambiguous allocation of responsibilities for implementing public values in data and algorithmic systems. Again, our research does not intend to assign responsibilities to data professionals or even scrutinize their strategies of distancing. Rather, we see their detachment as a symptom of the novel role data professionals occupy within the organization; it signals a kind of unease with the discretionary powers assigned to them vis-à-vis the responsibilities that come along.

Conclusions and discussion
This paper set out to enhance our understanding of the work of data professionals, who hold considerable decision-making power within the police force while operating behind their computers, at a distance from the street level, as system-level bureaucrats (Ananny 2016; Bovens and Zouridis 2002; Kool et al. 2017; Martin 2019; Meijer 2009; van Eck, Bovens, and Zouridis 2018; Zouridis, Marlies, and Mark 2020). Our qualitative empirical research shows that data professionals employed by the Netherlands Police demonstrate a distinct awareness of value-sensitive aspects of practices informed by data and algorithmic technologies. They also express a high degree of felt responsibility for their work and acknowledge their professional discretionary powers and autonomy in decision-making processes.
Ideally, responsibility for value-sensitive design, responsible data science, and good practice in general could be delegated from the institutional level of the Netherlands Police to the individual data professional. But even if data professionals are committed to safeguarding public values, they find it hard to connect public values to their data practices and apply them to their design decisions. In coping with this vague, distributed responsibility, data professionals distance themselves in various ways from taking responsibility. Therefore, we conclude that the value-sensitivity of data professionals does not translate into the responsible weighing of values in concrete decision-making situations due to organizational processes of distancing. This absence of responsibility poses a number of risks for the development and use of algorithms:
• The distance from citizens argument fails to acknowledge that citizens can indirectly be impacted by the deployment of data-driven policing applications, or by the organizational transformation they might cause. Although a direct decision may not come from an algorithm, the algorithm does impact the decision-making processes and inner workings of the police organization (Meijer 2009).
• Distance through organizational complexity can, as is well documented, advance a practice of implicitly transferring responsibility to others, effectively leading to a responsibility void (Bovens 1998). When the expectation is that others will correct mistakes, there is no guarantee such responsibility will be taken up at all. Despite their high level of autonomy and decision-making discretion, data professionals are ultimately dependent on their organizational context. They cannot carry the sole burden of safeguarding public values in algorithmic systems design, and responsibilities should be explicitly divided between various actors in the organization. Once the relatively novel role of data professionals becomes more established in the Netherlands Police, it is likely that the distribution of organizational responsibilities will become more transparent.
• Distance through experimental quality neglects the impact that experiments may have down the line as they affect institutional trust. Experiments are essential for innovation, and organizational learning relies on allowing room for making 'mistakes'. However, it is important to articulate the necessary conditions and terms for experiments. Even small-scale experiments with algorithmic systems may affect citizens and impact citizens' trust in algorithmic systems for the simple fact that they are executed by the police. As such, the experimental qualities of an algorithmic project may never compromise the ultimate responsibility of the police to safeguard public values. Public values should play a role from the very onset, even when an algorithmic technology is still in its experimental phase (Meijer and Thaens 2021).
• The temporal distance argument showed that data professionals might postpone value judgements, particularly during the early phases of a project. Postponement of value judgements, however, always runs the risk of them being 'forgotten' over time, and of these public values being erased from later judgements, thus reinforcing our previous point.
As our research shows, much responsibility is allocated to the group of data professionals in the Netherlands Police, but too little attention is paid to anchoring and safeguarding public values in their daily decision-making, generating risks connected to the well-known problems of complex organizations (Bovens 1998; Meijer 2009). We see a number of potential remedies for this dilemma faced by the still very novel 'guild' of data professionals employed by the police force. For instance, a code of conduct, documentation of best practices, institutional support, quality control and monitoring could effectively be implemented in the police organization. As it stands now, data professionals are left to their own devices when it comes to developing institutional judgement. Informed by public debate and organizational discourse, they rely heavily on their personal education and idiosyncratic discretion. Surprisingly, legal or ethical consultation plays a limited role in their evolving judgement, even though they are aware of regulation. As responsibilities are not clearly defined and not institutionalized within the organization of the Netherlands Police, the evasive tactic of 'distancing' themselves from responsibility can be considered a logical effect. We have conducted our research at the Netherlands Police, and one should be careful in translating these patterns to other public organizations or policy domains in view of the specific features of the Netherlands Police. The Netherlands Police is a very large public sector organization which employs data professionals directly. Many algorithmic systems are therefore developed in-house instead of bought off the shelf from external parties. The current research provides no insight into the extent to which data professionals at commercial companies employ their discretion in weighing public values, or how such processes take place. It should also be noted that the Netherlands Police has an established and explicit commitment to ethical and public values. This commitment is visible both in general police work (c.f. the police motto 'protect and serve') and in the development of new technologies. Public organizations where this commitment is less established might show different patterns. Further research can indicate to what extent the patterns we found are specific to the Netherlands Police and connected to these features, or to what extent these patterns are similar in other public organizations. In order to ensure responsible design and implementation of algorithms, public sector organizations should explicate how public values can be implemented and safeguarded in algorithmic system design, and how responsibilities can and should be allocated to the various actors in the organization, particularly when the development of data and algorithmic systems is outsourced. More research is needed to find ways to ensure public values can be translated to the organizational embedding of algorithmic design. This should not be regarded as the sole responsibility of data professionals but requires a wider public, organizational and academic commitment.
Meanwhile, the risks identified in this research have not gone unnoticed by the organization itself. The Netherlands Police is working on guidelines for implementing public values in practice; it is working on new internal IT platforms that will further strengthen data security and privacy, and it has recently installed a portfolio concerning 'ethics & privacy'. International developments such as the proposed European Union AI Act also underline the importance of ethical and public values.
Such developments can help move public organizations towards the responsible implementation of data and algorithmic technologies. However, sensitivity to public values requires much more than guiding frameworks. Public sector organizations that increasingly rely on data and algorithmic technologies must pay explicit attention to the translation of public values into the daily decision-making processes of data professionals, focusing on data ethics and on individual, professional and organizational responsibility, thereby contributing to a general culture of responsible algorithmisation.
Finally, this paper brings back the human side of public organizations in an information age (Jansson and Erlingsson 2014). Public organizations are becoming increasingly dependent on technological systems, but we need to realize that these systems are created through human agency. The discretionary power of human actors needs to be acknowledged in relation to their responsibility in the context of organizations. Arguments of distancing, while understandable, may be dangerous since they can undermine the ultimate responsibility of human actors in the police organization. Bridging the gaps between specific technological decisions and their real-life implications, and between awareness of public values and taking responsibility, are key challenges for modern police organizations.

Notes
1. No clear definition of data professionals is provided by Zouridis, Marlies, and Mark (2020). In other literature, this term is used as a denominator for a variety of jobs, including 'data analysts', 'data scientists', 'data specialists', 'data architects', 'database administrators' or 'data modellers' (Kennan 2017; Santoni de Sio and Mecacci 2021; Virkus and Garoufallou 2020; Gotterbarn and Kreps 2021; van Eck, Bovens, and Zouridis 2018). Building on this literature, we understand the term 'data professional' to broadly denote employees who work with data or on data science projects in various ways, including the processing of data and the application of machine learning (e.g. neural networks) in the development of system-level software.
2. The original text is in Dutch; terminology used here has been translated.
3. Interviews were transcribed verbatim and saved under pseudonyms on a secured server (YODA). Both interview transcripts and fieldnotes of the meetups of the Data Science Community were uploaded to NVivo for consecutive rounds of open, axial and selective coding (Bowen 2006).
4. Interview data was collected in two rounds, separated by a round of data analysis. Interviews were conducted via MS Teams due to Covid-19 pandemic restrictions in the Netherlands.
5. The Childcare Benefits Scandal was a political scandal in the Netherlands concerning fraud allegations, where certain parents receiving childcare benefits were falsely flagged by an algorithmic system and some recipients had to pay back the received benefits. This automated system was in place between 2013 and 2019 and was later found to be discriminatory and informed by institutional bias.
(1) Work of data professionals: what comprises the daily work of data professionals in the Netherlands Police?
(2) Value-sensitivity: how do data professionals in the Netherlands Police recognize and acknowledge (the relevancy of) public values?
(3) Applying discretion: how do data professionals in the Netherlands Police weigh and implement public values in their daily work?


Table 1 .
Algorithmic projects in the Netherlands Police, divided per category.