Finding meaning in crowdwork: An analysis of algorithmic management, work characteristics, and meaningfulness

Abstract

In this study, we investigate the implications of different aspects of algorithmic coordination and algorithmic quantification for perceived work conditions and the meaningfulness of crowdwork. Using survey data obtained from 412 crowdworkers, our analysis shows that work conditions and the meaningfulness of work are affected differently by algorithmic coordination and by the feeling of being quantified by an algorithm. Specifically, algorithmic coordination has either a positive or a null impact on perceived work conditions and the meaningfulness of work. In contrast, the negative associations between algorithmic quantification and perceived work conditions suggest that algorithmic quantification is particularly problematic for crowdworkers' experienced work conditions. Furthermore, algorithmic coordination is positively associated with the meaningfulness of work, while algorithmic quantification is negatively associated with it. Using work design theory, the findings also provide insights into the mechanisms explaining these relationships.

Formerly considered a curious novelty, platform work is now an established phenomenon in the global labor market and increasingly receives attention, both inside and outside the academy (Kost, Fieseler, and Wong 2018; Vallas and Schor 2020). It is estimated that online labor platforms in Europe and the US facilitate work for 163 million independent workers, contractors, and freelancers, amounting to 20-30% of the working population (Möhlmann et al. 2021). Although there are many forms of platform work, we specifically focus on what is often referred to as crowdwork or micro-tasking (Forde et al. 2017; Vallas and Schor 2020). On such online platforms, requesters or "clients" can "delegate tasks in the form of an open call addressing a large and undefined group of people" (Fieseler, Bucher, and Hoffmann 2019, 988), and "workers" complete these tasks in batches (Kost, Fieseler, and Wong 2018). Although tasks, also called Human Intelligence Tasks (HITs), differ in complexity, crowdwork often consists of small digital tasks, such as transcribing, translating, labeling images, and categorizing content.
We specifically focus on crowdwork because, unlike more visible forms of platform work such as Uber driving or better-paid platform-enabled work (e.g., freelance work), crowdworkers operate in the shadows, and their employment conditions seem especially dire (Gray and Suri 2019; Heeks et al. 2021). In contrast to traditional forms of labor, crowdworkers conduct work on a task-for-pay basis. This work typically does not entail formal employment contracts, organizational support, or fringe benefits. Furthermore, although tasks are often snippets or smaller fractions of much larger projects, crowdworkers often operate in relative isolation (Kost, Fieseler, and Wong 2018) and lack information about the overall context in which their tasks are embedded (Kaganer et al. 2013). The platform provider facilitates the matching between requesters and workers, typically involving a process of algorithmic management. Algorithmic management of crowdworkers is particularly problematic as decision-making is opaque and often perceived as falling short of due process (Heeks et al. 2021). Further, the design of the technologies used to organize work can make work conditions better or worse, impacting employees' psychological states, well-being, and performance (Parent-Rocheleau and Parker 2022; Parker and Grote 2022; Wang, Liu, and Parker 2020). All this often leads workers to share a sense of dehumanization (Meisner, Duffy, and Ziewitz 2022). In a nutshell, with algorithms permeating organizational processes, the platform leverages a unidirectional reductionism on workers (Newman, Fast, and Harmon 2020).
A hallmark of crowdwork platforms is that coordination and organizing (e.g., matching workers and requesters, and facilitating transactions) are efficiently offloaded to algorithm-based systems (Jarrahi and Sutherland 2019). Generally, coordination elicits a more neutral perception of an algorithmic function, while quantification elicits a feeling of unfair treatment by algorithm-based evaluation systems. With this study, we seek to better understand platform workers' responses to algorithmic coordination and algorithmic quantification. In doing so, we seek to make several contributions. In particular, we propose that traditional theories of work design are important in understanding new forms of employment; specifically, we draw on the Job Characteristics Model (JCM) of Hackman and Oldham (1976). Several researchers have argued that traditional work design theories should take a central role in understanding the digital revolution's impact on work (Demerouti 2022; Parent-Rocheleau and Parker 2022; Parker and Grote 2022; Schroeder, Bricka, and Whitaker 2021; Wong, Fieseler, and Kost 2020). Following Morgeson and Humphrey's (2006) exposition of a work design model, we focus on skill variety, task identity, task significance, autonomy, and feedback. These are widely considered "core" work dimensions affecting employees' psychological states (Humphrey, Nahrgang, and Morgeson 2007). In doing so, we also respond to recent calls for empirical investigations into the role of these work characteristics in digital work environments (Schroeder, Bricka, and Whitaker 2021). In addition, we suggest that algorithmic management is not inherently good or bad; rather, we show that different aspects of algorithmic management, coordination and quantification, invoke different consequences (Lee 2018).
In addition, our findings provide insights that support ongoing debates and initiatives aimed at incentivizing platforms to provide better working conditions (Fredman et al. 2020; Gegenhuber, Ellmer, and Schüßler 2021; Heeks et al. 2021). Specifically, we identify several conditions relevant to perceptions of meaningfulness in platform work, contributing to emerging studies on meaningful work in the gig economy (Kost, Fieseler, and Wong 2018; Wong, Fieseler, and Kost 2020; Wong, Kost, and Fieseler 2021). Furthermore, these findings are important to the broader world of work, as algorithmic management is increasingly observed within more traditional employment relationships (Wood et al. 2019).

Algorithmic management and platform work
Algorithmic management entails the use of computerized technologies, typically algorithms, to (partially) automate processes related to decision-making and control, enabled through the speed, scale, and ubiquity of surveillance technologies, data processing, and machine learning (Bucher, Schou, and Waldkirch 2021; Evans and Kitchin 2018; Helles and Flyverbom 2019; Kellogg, Valentine, and Christin 2020). Typically, algorithms take on managerial tasks such as work assignment, scheduling, performance evaluation, and monitoring (Kellogg, Valentine, and Christin 2020; Lee 2018; Parker and Grote 2022; Parent-Rocheleau and Parker 2022). Many, mostly conceptual or qualitative, studies have focused on the implications of algorithmic management for workers in general and crowdworkers specifically (Burrell 2016; D'Cruz and Noronha 2006; Dourish 2016; Heeks et al. 2021; Kellogg, Valentine, and Christin 2020; Wood et al. 2019). An essential aspect of algorithmic management is the quantification of work, as managerial algorithms rely on the large-scale collection and use of data to carry out coordination and control functions traditionally performed by human managers (Möhlmann et al. 2021). The ways in which algorithms reduce qualitative aspects of performance into quantitative metrics (quantification) may lead to a failure to adequately consider performance in a broader context. Hence, algorithmic management is not without problems or controversy, for instance, as it has the potential to undermine human(e) and meaningful work experiences (Gal, Jensen, and Stein 2020; Lamers et al. 2022).

Job characteristics model in the context of platform work
Scholars have recently used work design models to theorize the impact of algorithmic technologies on people's work experiences across work contexts (Parent-Rocheleau and Parker 2022; Parker and Grote 2022; Schroeder, Bricka, and Whitaker 2021; Wang, Liu, and Parker 2020). We build on this work by drawing on the Job Characteristics Model (JCM) of Hackman and Oldham (1976) to study the impact of algorithmic coordination and algorithmic quantification on job characteristics and, consequently, on crowdworkers' experienced meaningfulness of work. In general, the JCM systematizes the relationships between work characteristics and individual responses to work. The model distinguishes five core work characteristics: skill variety, task identity, task significance, autonomy, and feedback. Skill variety concerns the degree to which work requires a variety of activities; task identity calls attention to the importance of a whole and recognizable piece of work; task significance touches on the importance of the work for the lives and work of others; autonomy describes the freedom and discretion workers experience while carrying out their work; and finally, feedback refers to receiving direct and clear information about performance effectiveness. It is assumed, and empirically confirmed, that these work characteristics are positively related to employees' psychological states, such as perceptions of meaningfulness at work (Humphrey, Nahrgang, and Morgeson 2007). However, the JCM was built on certain premises that are quite different from those that pertain to platform work (Parent-Rocheleau and Parker 2022; Parker and Grote 2022; Schroeder, Bricka, and Whitaker 2021). For example, the model assumes that workers are part of an organization, have jobs that come with a set collection of tasks, have a set salary, and work with managers and colleagues who can provide feedback. Platform work, in contrast, is conducted chiefly alone at home or in other spaces with limited social interaction with colleagues, is not a well-defined collection of tasks, and does not entail a fixed salary or other employment benefits (e.g., fringe benefits). Therefore, it is important to investigate whether the premises of the JCM still hold in the context of crowdwork and how these might be affected by the algorithmic features of platform work. Schroeder, Bricka, and Whitaker (2021) study structural factors that influence work designs and suggest that while organizational structure might not be that important for crowdworkers, the technological and physical context of work are important factors. In the context of crowdwork, algorithms enable the division and allocation of tasks and resources (Faraj, Pachidi, and Sayegh 2018). Such algorithmic coordination dictates how work is assigned, typically serving the platforms' goal of efficiently matching labor supply and demand (Duggan et al. 2020). As such, a platform will try to ensure speed and efficiency by using algorithms to allocate tasks to the workers who are better, quicker, and more reliable in catering to requesters' needs (Duggan et al. 2020; Gramano 2020). On the other hand, some studies point to potentially negative implications of algorithms for job characteristics (see, for instance: Galière 2020; Parent-Rocheleau and Parker 2022; Parker and Grote 2022).

Algorithmic coordination and job characteristics
Notably, in the context of app work, Verelst, De Cooman, and Verbruggen (2022) could not confirm a negative impact of algorithmic control on meaningfulness through skill variety, task identity, or task significance. However, we theorize that some of the negative implications of algorithmic management discussed in conceptual studies are more likely when algorithms are used to execute roles that require human skills (Lee 2018), as this may highlight tensions related to power structures in organizations (Kellogg, Valentine, and Christin 2020). Negative implications of algorithmic management are more often ascribed to perceptions of unfairness or reductionism (Newman, Fast, and Harmon 2020).
Notably, in the context of algorithmic control, Wood et al. (2019, 70) conclude that "algorithmic management techniques enabled by platform-based rating and ranking systems facilitate high levels of autonomy, task variety and complexity." In addition, algorithmic matching (i.e., coordination) may facilitate the optimal matching of supply and demand (Möhlmann et al. 2021), which could enhance overall work quality and perceived autonomy (Wood et al. 2019). Furthermore, because the allocation of work and pay is relatively straightforward and easily understood by workers, algorithmic coordination may contribute to feedback and role clarity (Parent-Rocheleau and Parker 2022). This is in line with Wood et al. (2019) and D'Cruz and Noronha (2006), who find that working in a digital environment governed by algorithms grants high degrees of flexibility, autonomy, and task variety, requiring skill variety and complexity and affording task identity and significance. Further, we build on conceptual work that discusses the potentially positive impact of algorithmic management on feedback (Parent-Rocheleau and Parker 2022). We theorize that mere algorithmic coordination may highlight the efficiency of matching supply and demand, positively affecting autonomy, skill variety, task identity, task significance, and feedback. Accordingly, we hypothesize:
H1: Algorithmic coordination is positively associated with (a) autonomy, (b) skill variety, (c) task identity, (d) task significance, and (e) feedback.

Algorithmic quantification and job characteristics
While the algorithmic coordination of tasks seems an efficient and impartial way of distributing work, the algorithmic quantification of human beings and their work feels reductionistic (Newman, Fast, and Harmon 2020) because only some quantifiable aspects of work are considered, often using inadequate proxies, e.g., whether a requester decided to pay for the work or not (Gray and Suri 2019). The feeling of being quantified by an algorithm may limit workers' potential to flourish and cultivate their virtue (Gal, Jensen, and Stein 2020) and may end up disrespecting their humanity (Lamers et al. 2022). Newman, Fast, and Harmon (2020) studied algorithmic reductionism in a human resources management context. All five of their experiments (four laboratory experiments and one large-scale randomized experiment in an organizational setting) indicated that personnel decisions based on algorithmic evaluations of workers are perceived as less fair than human-made decisions. This is rooted in algorithmic reductionism, which they operationalized as a disregard of qualitative performance indicators (i.e., algorithmic quantification) and of a holistic approach to performance evaluation (i.e., decontextualization). They found that algorithmic reductionism negatively impacted workers' affective commitment (Newman, Fast, and Harmon 2020). Notably, we decouple quantification from the potential decontextualization of work performance, which together constitute measures of algorithmic reductionism in their conceptualization (Newman, Fast, and Harmon 2020). Rather, we study the extent to which workers feel that algorithms quantify them and their performance, thereby failing to accurately capture certain qualitative attributes (Faraj, Pachidi, and Sayegh 2018).
Broadly, the quantification of work requires simpler task definitions and quantifiable work methods and objectives, reducing skill variety, task identity, task significance, and workers' autonomy in choosing their work methods (Parent-Rocheleau and Parker 2022; Wang, Liu, and Parker 2020). Indeed, algorithmic quantification may be particularly detrimental to certain work characteristics (Parent-Rocheleau and Parker 2022). The feeling of quantification may lead workers to focus on those tasks that contribute most to performance evaluations (Faraj, Pachidi, and Sayegh 2018) and to avoid tasks that may not, or that present greater risks, hampering task variety (Tomczak, Lanzo, and Aguinis 2018). Moreover, algorithmic quantification often yields negative responses to the feedback workers receive from the algorithm (Gray and Suri 2019; Gregory et al. 2021). Feedback through algorithms is often perceived as resulting from irrelevant metrics, leading to confusion about expectations and a reduction in the quality of feedback (Parent-Rocheleau and Parker 2022).
Moreover, algorithmic mechanisms often remain largely opaque, with limited feedback or recourse options (Bucher, Schou, and Waldkirch 2021).
Hence, overall, algorithmic quantification may decrease autonomy by removing human influence from the work process (Kinowska and Sienkiewicz 2022), decrease skill variety by requiring increased standardization of tasks, and decrease feedback by impairing contextual awareness (Parker and Grote 2022). Furthermore, algorithmic quantification requires metrification and frequently leads to perceived reductionism (Newman, Fast, and Harmon 2020), as well as to lower task significance and identity as work is branched out into smaller, quantifiable tasks (Moore and Robinson 2016; Parent-Rocheleau and Parker 2022). Therefore, we hypothesize:
H2: Algorithmic quantification is negatively associated with (a) autonomy, (b) skill variety, (c) task identity, (d) task significance, and (e) feedback.

Algorithmic coordination, work characteristics, and meaningfulness of work
Scholars interested in the work conditions of crowdworkers increasingly try to understand if and how workers develop meaningful work experiences (Kost, Fieseler, and Wong 2018; Wong, Fieseler, and Kost 2020). Meaningfulness of work has been identified as an important psychological state (Hackman and Oldham 1976; Spreitzer 1995). A central premise in Hackman and Oldham's (1976) job characteristics model is that the five core work characteristics, skill variety, task identity, task significance, autonomy, and feedback, enhance the possibility of meaningful work experiences. In their meta-analysis of work design features, Humphrey, Nahrgang, and Morgeson (2007) empirically confirmed these relationships. In line with job characteristics theory, we follow early conceptualizations of meaningful work as a unidimensional concept that captures workers' perception that their work is worthwhile, important, or valuable (Hackman and Oldham 1976; Spreitzer 1995). Spreitzer (1995) further notes that meaning involves a fit between individuals' beliefs, values, and behaviors, and their job roles.
The relationship between job characteristics and meaningful work experiences is well established (Hackman and Oldham 1976). The five job characteristics mentioned above are found to be predictors of three critical psychological states: meaningfulness, responsibility, and knowledge of results (Allan 2017). In line with our reasoning above, algorithmic work coordination is associated with an efficient and impartial way of distributing work (Bai et al. 2021; Lee 2018), potentially enhancing job characteristics. Consequently, algorithmic coordination may facilitate meaningful work experiences because it enhances work characteristics.
Conversely, algorithmic quantification may represent fertile ground for the dehumanizing logics associated with quantifying and algorithmically evaluating performative acts, as opposed to the efficiency benefits often associated with algorithmic coordination (Lee 2018). As such, we theorize that algorithmic quantification may reduce job characteristics. Consequently, algorithmic quantification may create an online work environment in which crowdworkers lose their sense of purpose and connection to their work goals (Spreitzer 1995) through a devaluation of the core work characteristics. Following the above discussion, we hypothesize:
H3: Algorithmic coordination is positively associated with meaningfulness through (a) autonomy, (b) skill variety, (c) task identity, (d) task significance, and (e) feedback.
H4: Algorithmic quantification is negatively associated with meaningfulness through (a) autonomy, (b) skill variety, (c) task identity, (d) task significance, and (e) feedback.

Procedure and participants
Data were collected among European crowdworkers by posting tasks on four major platforms: Mturk, Clickworker, Microworkers, and Prolific. These platforms were selected for their availability across the EU. In addition, these platforms operate in similar ways in terms of matching labor supply and demand. Specifically, they allow requesters to submit tasks and optionally specify worker characteristics (e.g., gender) or qualifications (e.g., education or language skills) for the desired workforce. Once a task is submitted, the platform distributes it to workers who meet the criteria. Once a worker completes a task, the work is (automatically) rejected or accepted, after which the worker receives compensation through the platform.
Following ethics guidelines, workers who completed the task, i.e., the questionnaire, were compensated for their time and effort (Gleibs 2017; Silberman et al. 2018). The survey took, on average, 15 minutes to complete, and workers were compensated €3.00 upon task completion, equivalent to an hourly wage of €12.00. Initially, 923 workers started the questionnaire. As we were interested in understanding working conditions, we focused on workers who spend a substantial amount of time working on the platform. Therefore, respondents who reported working fewer than 10 hours per week were excluded, which screened out 409 respondents. In addition, 81 respondents were screened out after failing one of the attention checks, and 21 responses could not be used because only the first question was answered. Hence, the final sample comprises 412 crowdworkers.
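The attrition figures and the stated pay rate above can be verified with simple arithmetic; the following sketch reproduces them:

```python
# Sample attrition, as reported in the text.
started = 923
excluded_low_hours = 409    # reported working fewer than 10 h/week on platforms
excluded_attention = 81     # failed one of the attention checks
excluded_incomplete = 21    # answered only the first question

final_sample = started - excluded_low_hours - excluded_attention - excluded_incomplete
print(final_sample)  # 412

# Compensation: €3.00 for a survey of about 15 minutes implies an hourly wage of €12.00.
hourly_wage = 3.00 * (60 / 15)
print(hourly_wage)  # 12.0
```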
On average, respondents indicated spending 19.44 hours per week completing various tasks through these platforms (SD = 11.76). The tasks mostly included data entry, content moderation, data processing and cleaning, transcription and translation, and image labeling. The workers indicated that platform work amounted to 36.13% of their total income, while they spent about 42.04% of their work time doing platform work. When asked about their employment status outside platform work, 36.3% of the respondents indicated working full-time in a traditional labor agreement, 18.4% reported being self-employed or doing freelance work, and 10% were unemployed. Other employment statuses were homemaker (1.5%), student (14.8%), part-time work (17%), retired (0.5%), and unable to work (1%). The average age of the mostly male (65.9%) workers in our sample is 33.29 years (SD = 11.02), and they reported having 3.98 years of crowdwork experience (SD = 2.78). Most of the workers were highly educated, holding an undergraduate degree (22.4%), a graduate degree (29.7%), or a doctorate (2.7%).

Measures
Table 1 lists all measurement items and corresponding factor loadings.
Algorithmic coordination refers to the automatic coordination and matching of labor transactions through algorithms (Duggan et al. 2020; Lehdonvirta 2018; Möhlmann et al. 2021). Although the algorithmic assignment of work is a central feature of platform work (Wood et al. 2019), research has thus far examined algorithmic coordination only conceptually or qualitatively. Hence, for the purpose of this study, we developed three items to measure the algorithmic assignment of work and pay. A sample item is "the platform assigns work algorithmically." Responses were anchored on a seven-point Likert scale ranging from 1 = strongly disagree to 7 = strongly agree.
Algorithmic quantification refers to the feeling that qualitative aspects of work and performance are reduced to quantifiable metrics. Algorithmic quantification was initially measured with five items; three items were adopted from Newman, Fast, and Harmon (2020). Two items were generated to reflect the theoretical definition of quantification more closely, specifically to capture the (in)accuracy of quantitative metrics in reflecting qualitative abilities and performance. Hence, these two items address the extent to which algorithms primarily consider quantitative factors while qualitative aspects of attributes, abilities, and performance may be ignored. A sample item is "I feel like the evaluation process would reduce me to a number." Answer options were anchored on a seven-point Likert scale ranging from 1 = strongly disagree to 7 = strongly agree.
Since neither measure had been validated before, an exploratory factor analysis (EFA) was conducted prior to the structural analysis discussed below. Based on the eigenvalues, two factors emerged: algorithmic coordination (EV = 2.056) and algorithmic quantification (EV = 2.601), with factor loadings ranging between 0.76 and 0.84 for the former and between 0.77 and 0.83 for the latter. One algorithmic quantification item, "this evaluation process would adequately recognize my qualitative attributes, abilities, and performance" [reversed], was omitted from further analysis due to a low factor loading (0.47). Hence, in the final model and analysis, algorithmic quantification was represented by four items (Table 1). Omega reliability for algorithmic coordination (ω = 0.78) and algorithmic quantification (ω = 0.81) indicated sufficient reliability. This factor solution is replicated in the confirmatory factor analysis presented below.
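The factor-retention rule used here (the Kaiser criterion: retain factors with eigenvalues above 1) and the omega reliability coefficient can be illustrated with a small sketch. The correlation values and loadings below are hypothetical stand-ins, chosen only to mimic a two-factor structure like the one reported; they are not the study's data:

```python
import numpy as np

def block_corr(sizes, within, between):
    """Correlation matrix with given within-block and between-block correlations."""
    p = sum(sizes)
    r = np.full((p, p), between)
    start = 0
    for size, w in zip(sizes, within):
        r[start:start + size, start:start + size] = w
        start += size
    np.fill_diagonal(r, 1.0)
    return r

# Hypothetical: 3 coordination items, 4 quantification items.
R = block_corr([3, 4], within=[0.55, 0.58], between=0.10)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
n_factors = int(np.sum(eigenvalues > 1.0))  # Kaiser criterion retains two factors

def mcdonald_omega(loadings):
    """McDonald's omega from standardized loadings:
    omega = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    loadings = np.asarray(loadings)
    common = loadings.sum() ** 2
    return common / (common + np.sum(1 - loadings ** 2))

omega_coord = mcdonald_omega([0.76, 0.78, 0.84])  # hypothetical loadings
```

With these stand-in correlations, exactly two eigenvalues exceed 1, mirroring the two-factor solution; the omega formula shows how reliability follows directly from the standardized loadings.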
The job characteristics were measured using the work design questionnaire of Morgeson and Humphrey (2006). Responses were anchored from 1 = strongly disagree to 5 = strongly agree. Skill variety reflects the extent to which task completion requires crowdworkers to use a variety of different skills. Skill variety was measured by adapting four items from Hackman and Oldham (1976) and Morgeson and Humphrey (2006). Task identity refers to the degree to which a task involves a whole piece of work, easily identified by its results. Typically, tasks that involve providing a complete unit or entire product generate higher task identity than tasks that only involve small parts of a bigger task or job (Hackman and Oldham 1976). Task identity was measured by adapting four items. Task significance refers to the degree to which workers perceive that the tasks they complete influence the lives of others and was measured using four items. Autonomy reflects the extent to which a job allows freedom and discretion to schedule work, make decisions, and choose the methods used to complete tasks (Breaugh 1985; Morgeson and Humphrey 2006). Hence, autonomy comprises three dimensions: freedom in (i) work scheduling, (ii) decision-making, and (iii) work methods. Each dimension was measured using three items. To reduce the variable-to-observations ratio, the three items for each dimension of autonomy were parceled. In addition, feedback refers to the degree to which the job provides direct and clear information about task performance (Hackman and Oldham 1976). This study focuses on feedback directly from the job itself, or knowledge of one's own work activities, as opposed to feedback from others, given that crowdworkers often operate in isolation.
Finally, meaning, or purpose, refers to the fit between the needs of one's work role and one's beliefs, values, and behaviors (Hackman and Oldham 1976) and taps the intrinsic motivation manifested in intrapersonal empowerment (Spreitzer 1995). Meaning was measured by adopting three items from Spreitzer (1995). Responses were anchored from 1 = strongly disagree to 7 = strongly agree.

Analysis
The hypotheses were tested using structural equation modeling (SEM) in AMOS. Comparative fit indices, i.e., the Tucker-Lewis Index (TLI) and the Comparative Fit Index (CFI), were used to gauge model fit. Additionally, absolute fit indices were used, i.e., the standardized root mean squared residual (SRMR) and the root mean square error of approximation (RMSEA), with cutoff values of ≤0.08 and ≤0.05, respectively, indicating good model fit. Finally, the χ² statistic is reported. For all models, a maximum likelihood estimator was used, and bias-corrected parameters were obtained by extracting 5,000 bootstrap re-samples.
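For readers less familiar with these indices, the conventional formulas behind RMSEA, CFI, and TLI can be sketched from the model and baseline chi-square statistics. The chi-square and degrees-of-freedom values below are hypothetical, for illustration only; they are not the study's results:

```python
import math

def rmsea(chi2, df, n):
    # RMSEA = sqrt(max(chi2 - df, 0) / (df * (N - 1)))
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    # CFI = 1 - max(chi2 - df, 0) / max(chi2_base - df_base, chi2 - df, 0)
    return 1 - max(chi2 - df, 0) / max(chi2_base - df_base, chi2 - df, 0)

def tli(chi2, df, chi2_base, df_base):
    # TLI compares the chi2/df ratios of the model and the baseline model.
    return ((chi2_base / df_base) - (chi2 / df)) / ((chi2_base / df_base) - 1)

# Hypothetical values: model chi2 = 450 on df = 300,
# baseline chi2 = 4000 on df = 351, sample size N = 412.
fit = {
    "RMSEA": rmsea(450, 300, 412),
    "CFI": cfi(450, 300, 4000, 351),
    "TLI": tli(450, 300, 4000, 351),
}
```

Under these hypothetical inputs the indices fall in the "good fit" range described above (RMSEA below 0.05, CFI and TLI above 0.95), illustrating how the cutoffs translate into concrete numbers.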

Measurement model
Since we rely on data collected at a single moment from a single source, common method variance was assessed using Harman's single factor test. This test indicated that one factor explained 22.13% of the variance in the observed variables, suggesting that common method variance is not a major concern in the data.

Hypotheses testing
As control variables, we included hours spent on platform work per week, years of experience, percentage of income attributed to platform work, age, gender, education, and platform (i.e., Mturk, Clickworker, Microworkers, and Prolific).

Discussion
In this study, we contribute to emerging research on the implications of algorithmic management for job characteristics and worker experiences. Our findings provide additional empirical evidence for the divergent ways in which algorithmic coordination and quantification are associated with workers' autonomy, skill variety, task identity, task significance, and feedback, and ultimately with the perceived meaningfulness of work. Algorithmic coordination, measured as the perception of an algorithmic function, is positively associated with meaningfulness. Algorithmic quantification, measured as the feeling of being quantified by an algorithm, is negatively associated with meaningfulness. Furthermore, algorithmic coordination and algorithmic quantification are related to the meaningfulness of work through autonomy, task significance, and feedback. These findings have several theoretical and practical implications for technology and work design in general and platform work specifically.

Theoretical implications
Our study confirms that algorithmic management can make work designs both better and worse, affecting workers' psychological states, here the meaningfulness of work (Parker and Grote 2022). Specifically, our results suggest that the feeling of being incompletely or inaccurately quantified by an algorithm is more detrimental to work characteristics and meaningfulness than the use of algorithms to coordinate, which was found to be positively related to work characteristics and meaningfulness. These findings are important as they provide empirical evidence for recent arguments for the central role of more established job design theories in understanding the implications of algorithmic management (Demerouti 2022; Parker and Grote 2022; Parent-Rocheleau and Parker 2022). In showing the ways in which job characteristics mediate the relationships between algorithmic coordination and algorithmic quantification and the meaningfulness of work, we contribute to a more granular understanding of the ways in which algorithms may impact worker experiences.
This is important as some argue that crowdworkers lack the relational and organizational architectures for providing meaningful work, rendering traditional work design theories less suitable (Kost, Fieseler, and Wong 2018). In addition, recent theorizing suggests that algorithmic management and decisions affect worker perceptions (Newman, Fast, and Harmon 2020), limit workers' potential to flourish (Gal, Jensen, and Stein 2020), and may violate workers' dignity through dehumanization and instrumentalization (Lamers et al. 2022). We show that such detrimental consequences, in the context of the meaningfulness of work, are more likely to be associated with algorithmic quantification than with algorithmic coordination. These findings may indicate that rather neutral perceptions of algorithmic functions (e.g., work assignment) in coordinating work are not as problematic to workers as feelings of being quantified by an algorithm. This aligns with findings in the context of human resource algorithms, where people indicated that decisions about work assignments and scheduling were equally fair and trustworthy whether made by an algorithm or a human decision-maker. However, for more complex tasks such as hiring and work evaluation, human decision-makers were believed to be fairer and more trustworthy than algorithmic decision-makers (Lee 2018). Arguably, more complex tasks increase the possibility of inaccurate deductions, e.g., wrongful interpretations of algorithmic information and perceived misrepresentation of workers and their qualities. In the context of our study, algorithmic coordination could have a more positive impact because the associated benefits of algorithms (e.g., efficiency) are more congruent with tasks that require mechanical skills (work allocation, payment) than with tasks that traditionally require more human skills and thus increase the possibility of inaccurate representations through algorithmic quantification.
In addition, our findings point to a central role of task significance (Allan 2017) for crowdworkers. Of the five work characteristics considered in this study, task significance is the strongest predictor of the meaningfulness of work among crowdworkers. Interestingly, although crowdworkers often work in isolation, their psychological work state is particularly affected by the extent to which they feel their work tasks impact others' lives. Gray and Suri (2019) describe how a woman conducting image labeling tasks explained that she was helping to keep the internet safe for other families. This signifies how completing work tasks may be part of a much broader goal beyond satisfying the requester, i.e., task significance (Morgeson and Humphrey 2006). Our findings provide additional evidence that algorithmic coordination can provide an efficient way to complete as many labeling tasks as possible, allowing workers to generate an even greater impact. On the other hand, quantification may counteract these possibilities, not because it is less effective but because workers feel that qualitative attributes of their work are not adequately captured or valued. As such, algorithmic quantification may chip away at perceived task significance. More generally, the opposing impacts of algorithmic coordination and algorithmic quantification on job characteristics are important because they may suggest that these aspects of algorithmic management cancel each other out. This could explain why studies not specifying different algorithmic constructs failed to find significant impacts of algorithmic management on job characteristics (Verelst, De Cooman, and Verbruggen 2022).
Finally, our findings are relevant beyond the context of platform work, as organizations with traditional work designs are also increasingly implementing automated decision-making tools and algorithmic management applications. Our study shows how the algorithmic management literature can inform the traditional work design literature. Specifically, our findings bring nuance to earlier suggestions that algorithmic management is predominantly negatively related to work design by lowering job resources and increasing job demands (Parent-Rocheleau and Parker 2022). Notably, Demerouti (2022) suggested that digitalization and automation could help create healthy jobs if they are designed to increase resources, reduce demands, and enable people to craft their use of the system. With the rise of algorithmic systems, organizations and human managers need to decide what kinds of algorithmic software to implement, what (managerial) activities to allocate to an algorithm, e.g., performance reviews, incentives, and scheduling (Jarrahi et al. 2021), and how to navigate challenges associated with the quantification of work and workers. Notably, we do not suggest that algorithmic coordination always has positive implications, or that algorithmic quantification necessarily leads to detrimental outcomes.
Future work needs to examine contextual conditions that may affect work characteristics and their antecedents in the context of crowdwork (Schroeder, Bricka, and Whitaker 2021). Specifically, it will be important to understand how different socio-technological moderators (e.g., system transparency, human influence, and fairness) may inform our understanding of the conditions under which algorithmic coordination and algorithmic quantification may have different consequences for different workers. In addition, recent scholarship on developing fair AI systems to manage workers in organizations has raised questions about whether fairness should be determined by equity or equality (Robert et al. 2020). There is a long-standing debate on the merits of equity versus equality, and the preferred managerial approach likely depends on the extent to which the individual needs of workers are highly uniform (equality) or divergent (equity). Future research is needed to generate a deeper understanding of the ways in which equality versus equity preferences affect the impact of algorithmic coordination and, perhaps in particular, algorithmic quantification.

Practical implications
Our findings have several practical implications for understanding and facilitating meaningful work experiences for crowdworkers. First, they indicate that task significance is an important predictor of the meaningfulness of work. From a work design perspective, task significance refers to the relative importance of a task. In the context of crowdwork, where a task is often briefly and narrowly defined, requesters using platform organizations may consider ways in which they could cultivate the contributions they seek from crowdworkers, for instance, by highlighting the ways in which their contribution is making an impact on the lives of others or the problems the requester aims to solve. In addition, platform organizations themselves can review the ways in which qualitative work performances are quantified, which seems to be negatively associated with the perceived significance of tasks. One improvement could be to go beyond quantitative metrics that determine the adequacy or mere completion of a task and incorporate more descriptive evaluations of work.
Second, the findings highlight the importance of feedback for the meaningful work experiences of crowdworkers. As such, requesters posting tasks on the platform may consider different ways to delineate expectations about the quantity and quality of the work more clearly. Such clarification, before task acceptance, could prevent uncertainty and conflict at later stages. Such an approach would require a more proactive form of feedback, in management often referred to as feedforward (Budworth, Latham, and Manroop 2015). Simply put, feedforward involves anticipating and avoiding problems before they might occur (Kreitner 1982). One potential advantage of clarifying the parameters for adequate task performance a priori is that crowdworkers are less likely to experience situations in which requesters reject tasks. In addition, platforms could consider ways to provide more information regarding the outcomes of their algorithms. This would directly contribute to the feedback workers receive and, subsequently, the meaningfulness of their work. Ultimately, this recommendation echoes previous studies on the importance and benefits of greater transparency in algorithmic processes and their outcomes (Ananny and Crawford 2018; Glikson and Woolley 2020; Helberger, Pierson, and Poell 2018).

Limitations and future research
Our study comes with several limitations. The first limitation is that our study relies on self-reported cross-sectional data, increasing potential self-report biases and limiting our ability to draw causal inferences from the data. Longitudinal data would enhance understanding of the causal dynamics and temporal effects. Specifically, future research could generate a more thorough understanding of the ways in which different aspects of algorithmic management operate. For instance, it is possible that algorithmic coordination may subsequently trigger a process of algorithmic reductionism (i.e., algorithmic quantification and decontextualization, Newman, Fast, and Harmon 2020), while quantification may also facilitate coordination. Yet, others suggest that different aspects of algorithmic management operate as independent but correlated factors consequentially affecting job demands and resources (Parent-Rocheleau and Parker 2022). In addition, longitudinal designs would allow the inspection of the directionality of the relationships. For instance, it is possible that low meaningfulness leads to perceptions of quantification, or vice versa. Second, this study does not include any moderating factors that could help explain the relationships between algorithmic management and work design, such as perceptions of fairness or transparency (Lee 2018; Parent-Rocheleau and Parker 2022). Future research could examine how these aspects may impact the relationships between algorithmic management and perceived work conditions.
In addition, the results of our study suggest that platform type is correlated with several job characteristics. While we did not find the hypothesized relationships to be affected by the inclusion of platform type, future research may investigate how differences in work designs across online labor platforms may qualify worker outcomes. Finally, our study considers algorithmic coordination and algorithmic quantification as two fundamental elements of algorithmic management. However, we acknowledge that conceptualizations of algorithmic management differ among authors, that the tasks ascribed to algorithmic management systems are potentially expansive, and that validated measurement tools do not yet exist. For instance, prior studies suggested that algorithmic management comprises six functions: monitoring, goal setting, performance management, scheduling, compensation, and job termination (Parent-Rocheleau and Parker 2022). Future research should try to conceptualize and validate a measure that adequately captures the complexity and diversity of algorithmic management.
Although much work still needs to be done, our present study contributes to a more nuanced understanding of how algorithmic management affects work design and the meaningfulness of work among crowdworkers. Our results indicate that algorithmic coordination and algorithmic quantification affect the meaningfulness of work in opposing ways. Specifically, our findings inform and extend earlier findings by showing that algorithmic coordination has positive implications for task significance, feedback, and, consequentially, the meaningfulness of work. In contrast, algorithmic quantification negatively impacts task identity, task significance, autonomy, and feedback. Accordingly, our study highlights the central role of work design models in understanding work experiences in today's algorithmically imbued work environments.

Table 1. Measurement items and factor loadings.
a Items generated for the purpose of this study.