Evaluation systems and the pace of change – The example of Swedish higher education

Abstract The aim of this article is to illuminate and discuss evaluation and evaluation systems in relation to the pace of change. It is argued that evaluation promotes and accelerates change. The article will thus contribute to a critical scrutiny of evaluation as a societal phenomenon and as a widespread practice in education. To accomplish this aim, the inherent purpose of evaluation and evaluation systems is brought forward. National evaluation systems for Swedish higher education are used as an empirical example. An analysis using Rosa’s three aspects of social acceleration (technical acceleration, acceleration of social change, and acceleration of the pace of life) is offered to demonstrate how the evaluation systems are related to, sustain, and promote an increase of the pace of change (acceleration) in educational practice in higher education.


Introduction
Evaluation and its siblings audit, quality assurance, inspection, and other kinds of assessments have expanded in all public and private sectors. Scholars write about 'the audit society' (Power, 1995), 'the evaluation society' (Dahler-Larsen, 2012), and 'the evaluative state' (Neave, 2012) to denote the increased importance of these activities and phenomena in our contemporary societies. Education at all levels is no exception (e.g. Gornitzka & Stensaker, 2014; Harvey & Williams, 2010; Kohoutek, 2016; Ozga, Dahler-Larsen, Segerholm, & Simola, 2011; Saunders, 2014), and in higher education, evaluation and quality assurance have emerged and developed into evaluation systems during the last decades (Kristensen, 2010).
In Europe, and as part of the continuing Bologna Process started in 1999, there are serious efforts to disseminate evaluation and quality assurance policy and practice to member states. The establishment of the European Higher Education Area and the founding of the European Association for Quality Assurance in Higher Education (ENQA) in 2000 (Thune, 2010) facilitated this. Several national, regional, and state agencies are now members of the ENQA, and as such, they are involved in setting up, developing, disseminating, and trying to live up to the membership requirements of the ENQA. In short, these membership requirements are that the agencies ensure that internal evaluation and quality assurance systems are installed in their higher education institutions and comply with the Standards and Guidelines for Quality Assurance in the European Higher Education Area (ESG) (2015), that the agencies live up to the standards, and that the agencies, in turn, are evaluated/audited by an external ENQA-approved organisation (ENQA, n. d.). Sweden had been part of this movement as a member of the ENQA since 2005, but it failed to live up to the requirements of the external evaluation in 2012; thus, its membership was terminated in 2014 (ENQA, 2014). The Swedish Higher Education Authority (SHEA) now aims at once again becoming an approved member (SHEA, 2016a, 2019). In Sweden, evaluation and quality assurance became mandatory activities in higher education through the 1993 reform, and the national agency at the time was commissioned by the government to push for, and control, the implementation of internal evaluation systems at higher education institutions (Government Bill, 1992/93:1). Since then, a number of national evaluation and quality assurance system reforms have succeeded each other.
Taking these successive changes over time as a starting point, the aim of this article is to illuminate and discuss evaluation systems in relation to the pace of change, using Swedish higher education as an empirical example. In doing so, another aim is to contribute to a critical scrutiny of evaluation as a societal phenomenon and as a widespread practice in education. It is argued that evaluation and evaluation systems promote and accelerate change.
The article is organised as follows: First, a description of the purposes of evaluations and characteristics of evaluation systems is presented. This is followed by the theoretical underpinnings, or premises, for a critical discussion about evaluation systems (in higher education) and their accelerating power and the pace of change. Thereafter, using Rosa's (2013) three aspects of social acceleration (technical acceleration, acceleration of social change, and acceleration of the pace of life) as a lens, the Swedish example is described and discussed, illuminating how the evaluation systems are related to, sustain, and promote an increase of the pace of change (acceleration) in higher education.
In this article, 'evaluation' denotes both what is commonly called evaluation and what is called quality assurance. The reason for this is twofold: (a) in the Swedish case, both concepts have been used for the national evaluation systems, and (b) all evaluative activities share a common trait, which is their purpose to evaluate, value, make judgements, or assess.

What is the purpose of evaluation?
Evaluation is a normal part of our everyday life in that we constantly make assessments in order to judge, for example, whether a friend's behaviour is tolerable, which fruit is best to buy, whether or not a book is good, and so forth. Those evaluations are seldom based on explicit criteria. Rather, they are based on our experience of what is socially acceptable, on what traits to look for in a fruit, or on what we cherish in a certain genre of books. Evaluation as part of the public sphere or other organisations expanded after the Second World War (Shadish & Luellen, 2005) and became a means to rationally plan and reform societies, which has been labelled 'the rational reform paradigm' (Wildavsky, 1979) or 'the engineering model' (Vedung, 2010, p. 266). In this paradigm, the idea is to base (central/state) political reforms on scientific knowledge, similar to what is nowadays called evidence-based policy (e.g. Davies, Nutley & Smith, 2000; Vedung, 2010). Later, it was also argued that evaluation could help professionals to develop their practice on a more solid foundation than merely intuitive professional knowledge and that evaluation should involve different stakeholders in discussions about merit and worth (Vedung, 2010). Both types of evaluations are more formal in the sense that they are planned, have stated purposes, are conducted with particular methods, and are reported in one way or another to the commissioners, be they politicians, managements, professionals, or other stakeholders (for an overview of different evaluation approaches and models, see, e.g., Owen, 2006; Stufflebeam & Coryn, 2014).
From this brief account, it is clear that the general purpose of evaluation is change, either as a basis for political or management decision-making or as a basis for direct change in some professional (educational) practice. As Mark and Henry put it, 'In short, the link between evaluation and the betterment of social conditions is absolutely crucial as a collective raison d'être of evaluation' (2004, p. 36, italics in original). However, evaluation theorists often qualify this purpose to be about control and accountability on the one hand, or about development and quality enhancement on the other. Control/accountability is then perceived as a means to know whether or not the evaluand (that which is evaluated) attains the goals and what new decisions and actions are needed. If the purpose is development or enhancement, an evaluation should be aimed directly at changing the practice of the evaluand and not primarily at controlling the outcomes as part of forthcoming decisions. The analytical concepts of 'summative' and 'formative' evaluation (Scriven, 1967, 2012) capture these differences. However, as Scriven (1991) also points out, in actual decision-making for future actions, policies, and so forth, summative evaluations are often used formatively. This is akin to how evaluation and quality assurance policy in higher education is now perceived, at least in a European context. The ENQA's standards and guidelines explicitly stress the need for 'clear guidance for institutional action' (Standards and Guidelines for Quality Assurance in the European Higher Education Area [ESG], 2015, p. 19) in order for higher education institutions to learn and continuously improve, that is, to change.
As we now know, evaluation and evaluation systems may influence or lead to different kinds of effects (e.g. Dahler-Larsen, 2012; Mark & Henry, 2004; Segerholm, 2001). Well-known examples from the evaluation literature and empirical studies are adjustment to evaluation criteria (similar to teach-to-the-test), the establishment of new administrative functions, increase in documentation, imitation, window dressing, and emotional reactions (e.g. Dahler-Larsen, 2014; Grek, Lindgren, & Clarke, 2015; Segerholm, Lindgren, Hult, Olofsson, & Rönnberg, 2016; Mark, 2017). It is also known that evaluation may begin to influence the evaluand already when it becomes known that an evaluation is to take place, during the evaluation process, and finally, after the evaluation results have been presented (e.g. Kirkhart, 2000; Segerholm & Åström, 2007; Weiss, 1988).
In Sweden, single national evaluations were the way to assess the state of higher education in the 1950s-1990s (Gröjer, 2004). Starting in 1995, and mandated by the 1993 reform, national evaluation systems were developed. What, then, characterises evaluation systems?

Evaluation systems
Swedish higher education is far from unique in increasingly relying on evaluation systems for control, development, and more general information. According to Leeuw and Furubo (2008), this is a development common to several public sectors and organisations. They present four characteristics of evaluation systems and propose that an evaluation system has a distinctive epistemological perspective (what kind of knowledge is produced), organisational responsibility (a unit responsible for carrying out the evaluation and at least one other organisation requiring this information), permanence (planned activities that occur over time), and intended use linked to implementation and decisions (Leeuw & Furubo, 2008, pp. 158-160). In their critical discussion, Leeuw and Furubo bring forward the hypothesis that evaluation systems breed evaluation systems, which is also a reason to study them (Leeuw & Furubo, 2008, pp. 166-167).
From a Swedish point of departure, Segerholm (2006) describes evaluation systems as a mix of different, frequent, and regular evaluative activities and as comprehensive, targeting several different areas in organisations.
Examples of evaluative activities are: external and internal (self-) evaluations, quality assurance, quality assessment and audits. Evaluation systems are comprehensive in the sense that evaluative activities are carried out and usually involve several levels in (public) organizations from single units to national or state level. In evaluation systems, evaluative activities are undertaken frequently and regularly, which means that there are a number of evaluative activities in operation simultaneously. Quite often these activities are also carried out repeatedly and with regular intervals. There may also be formal requirements of how these activities are to be performed laid down in acts and statutes and/or established by instances like national agencies or local authorities. (Segerholm, 2006, p. 3).
The 'evaluation machine', described by Dahler-Larsen (2012, pp. 176-182), shares many of the above characteristics and is influenced by Leeuw and Furubo (2008). The evaluation machine is characterised by 'permanence' (it is present for some time), 'organisational responsibility' (it is embedded in the organisation and not an individual responsibility), 'distinctive perspective' (it is based on particular assumptions/conceptions manifested in, e.g., indicators/criteria), and 'abstract and general coverage' (it describes several types of activities in systematic ways, facilitating comparisons) (Dahler-Larsen, 2012, pp. 176-182). As shown by Lindgren, Rönnberg, Hult, and Segerholm (2019), evaluation systems, or evaluation machineries, are dependent on actors who do the work required for them to function. Decisions about the evaluation systems have to be made, and the systems have to be designed, implemented, and carried out, which means collecting large amounts of information, documenting different aspects, writing reports, and using and acting on the results. Different actors are involved in these various activities in evaluation systems; that is, these systems are also social. One of the basic ideas of evaluations (and evaluation systems) as an instrument of change is that they have to deliver some incentive for change. In that respect, they are always means of power in a temporal sense, always aiming for the future to be different.

Theoretical underpinnings – temporality and the contemporary society
As explicated above, evaluation and the future are inextricably intertwined, and in order to critically discuss evaluation and evaluation systems, and their relation to time in our contemporary societies, Rosa's (2013) theory of social acceleration is presented as a theoretical resource. Dahler-Larsen's (2014) theoretical concept of 'constitutive effects' is also valuable and a starting point in relating influences of evaluation to social acceleration. 'The main idea in constitutive effects is that the world changes as it is being measured' (Dahler-Larsen, 2012, pp. 208-218, emphasis added). This logically leads to the conclusion that if the measurement frequency increases, the pace of change increases. Related to evaluation and evaluation systems in education, this means that education changes while it is being evaluated. Not only do education and teaching plans and activities change, but we also change our conceptions about what education, teaching, learning, a teacher, a student, and a higher education institution are, or ought to be. This understanding of evaluation and its influences accords well with what Rosa writes about societies as a whole. His general understanding and concepts are used here to situate evaluation and evaluation systems in a societal context of change and directed at the future.
According to Rosa (2013), three phenomena are of particular interest in understanding social acceleration: a) technical acceleration, which refers to 'the intentional increase of velocity in goal oriented processes' (pp. 28-29, my translation), such as increased speed in transportation or (electronic) communication; b) accelerating social changes (p. 28), meaning, for example, rapid change in social relations, trends, or governments, meaning that past experiences and expectations become obsolete; and c) the accelerating pace of life, which has to do with the increase of the 'number of episodes of actions or experiences per time unit' (p. 32, my translation).
These phenomena are dependent on and interact with each other (Rosa, 2013, pp. 38-41). Technical acceleration makes us experience and act differently in relation to time and space. Communication through e-mail, for example, is almost independent of time and space. We adapt to that and send e-mail or phone messages at all hours, knowing that the message will most probably reach the recipient much more quickly than a letter or repeated phone calls, should the person not answer. Hence, social changes are accelerating, such as our increased use of smartphones, which leads us to spend less time in face-to-face communication. Rosa claims that this, in turn, promotes a feeling of 'having to keep up' (Rosa, 2013, p. 39, my translation) and experiencing a lack of time, so that more activities and experiences have to be lived during a lifetime, which is an accelerating pace of life. We start to look for new technology to save time in order to get more done. A circular, amplifying process of tempo is therefore present in our contemporary (Western) societies, Rosa argues (2013). Furthermore, social acceleration occurs when these processes are not counteracted by equally strong retardation processes. Rosa (2013, pp. 33-38) claims that over time, there is actually a movement towards social acceleration.
This acceleration process does not, however, come about in a vacuum. There are external forces, 'logically independent motors' (Rosa, 2013, p. 39), that promote the circular process described above. These motors are the economic motor (time is money), the structural motor (functional differentiation), and the cultural motor (the promises of acceleration). What then can be said about evaluation systems in relation to the pace of change, employing Rosa's conceptual frame to illuminate and critically discuss this relationship?

The Swedish national evaluation systems of higher education as an example
The 1993 reform made internal evaluation systems, and course evaluations, mandatory in Swedish higher education institutions. At the same time, a national performance funding system was implemented, based on a per capita reimbursement for each enrolled student and for each student completing the course requirements. Dougherty, Jones, Lahr, Natow, Pheatt, and Reddy (2016) and Herbst (2009, p. 82) explain performance funding as a response to the expansion of higher education: with restricted budgets, there is a need for increased efficiency and productivity. This explanation certainly also holds for Sweden, and the higher education institutions increased the number of students they admitted in order to increase production. The number of students each higher education institution was reimbursed for was later regulated by the government through a set amount of money. Over the years, this amount has not increased enough to cover the higher education institutions' increased expenditures for teaching (Åmossa, 2018, pp. 9-15). The introduction of performance-based funding was also the visible start of a more market-oriented policy in Swedish higher education, which over the years has become increasingly pronounced. This performance funding design has bred competition, marketing, and management strategies, meaning that the economic motor (Rosa, 2013) has become more noticeable because a larger number of students have been produced per time unit.

Evaluation systems as technical acceleration – goal orientation, expansion, and frequency
From 1995 to the present (2019), there have been five different national evaluation systems in operation in Swedish higher education. They have all been introduced with explicit political aims. It is therefore reasonable to view them as goal-oriented processes, which is one of Rosa's descriptors of technical acceleration (Rosa, 2013, pp. 28-29). The first system was to stimulate higher education institutions' internal quality work and to uphold and enhance quality (Government Bill, 1992/93:1; Swedish National Agency for Higher Education [SNAHE], 1998). The aim of the second was to guarantee a minimum standard in provision, to enhance trust in higher education institutions, to increase student influence, and to deliver information to students, enabling them to make informed choices (Franke & Nitzler, 2008; Government Bill, 1999/2000:28; SNAHE, 2001, 2003). The third was similar to the second but aimed at better alignment with the Bologna agreement, with an emphasis on predefined objectives expressed as expected learning outcomes (SNAHE, 2007). Although the fourth system was radically different, directed at student outcomes (product-oriented, in House's, 1978, terms), the aims were rather similar to the previous ones: to increase quality, to strengthen Sweden's position in the global market, and to better inform students and society about quality in higher education (Government Bill, 2009/10:139). The fifth system, which is the present one, is aimed at control of student outcomes, increasing quality, a shared responsibility between the state and higher education institutions, better alignment with the ENQA's standards and guidelines, and better compliance with laws and regulations (Ministry of Education, 2015; SHEA, 2016b).
A second descriptor of technical acceleration is an 'intentional increase in velocity (in goal oriented processes)' (Rosa, 2013, pp. 28-29). Here, the intentional increase in velocity has to do with the expansion and increased frequency of evaluative activities in the different national systems over time. The first system, 1995-2001, was rather simple: the higher education institutions' internal evaluation systems, including course evaluations, were evaluated in 3-year cycles (Government Bill, 1992/93:1). Another part was accreditation for awarding magister degrees (a degree between a bachelor's and a master's). The next system, 2001-2007, was radically expanded to also include quality evaluations of all academic subjects and programmes in 6-year cycles, as well as thematic evaluations (e.g. student influence) (Government Bill, 1999/2000:28; Franke & Nitzler, 2008). In the third system, 2007-2011, a minor retardation process occurred because of the termination of the evaluations of higher education institutions' internal systems; otherwise, it was much the same as the second system (SNAHE, 2007). The fourth system, 2011-2014, was actually a contraction in evaluation types but an expansion in the number of actors/persons involved and in the frequency of detailed assessments. Accreditation for certificates and degrees remained the same, but the quality evaluations were redirected to student outcomes as measured by assessment of individual students' independent projects (for bachelor's and master's degrees) in different academic subjects, in 4-year cycles (SNAHE, 2012a; SHEA, 2013). The present and fifth system, in place since 2016, is a combination of the previous ones, including all types of evaluations: accreditations, evaluation of higher education institutions' internal systems in 6-year intervals, quality evaluations, and thematic evaluations (SHEA, 2016b).
In parallel, assessments and supervision of how higher education institutions comply with the Higher Education Act and Ordinance are carried out (SHEA, n. d.). The most recently decided addition to the national system is the inclusion of research evaluations (Government Office, 2017). Quite an elaborate evaluation machine (Dahler-Larsen, 2012) has thereby been constructed throughout the years.
Taken together, and over time, the expansion of the national systems and the increased frequency of evaluative activities lead to an increased velocity of evaluation processes at higher education institutions, and this development is also intentional. On top of that, and in order to live up to the national requirements for internal evaluation systems, higher education institutions have expanded their own evaluation systems, leading to even more evaluative activities per time unit. One may actually label evaluation and evaluation systems as social and political technologies that nowadays also rely on digital systems to store and use information and knowledge about the evaluands. The Swedish national agency supervising higher education and responsible for the national evaluation systems has, for example, recently developed such systems: one for the assessment panels (UKÄ Bedömarrevy) and one for the higher education institutions to upload self-evaluations and other materials (UKÄ Direkt). Other techniques supporting the different evaluations over the years are guidelines for assessors, guidelines for the higher education institutions, and templates for self-evaluations and reports. For every system, new such techniques have been developed.

Evaluation systems, accelerating social changes, and the accelerating pace of life
In line with Rosa's (2013, pp. 38-41) reasoning about how the three different phenomena promote social acceleration and interact with each other, understanding evaluation systems as technical acceleration implicates acceleration of social change and an accelerating pace of life. The Swedish example with changing national evaluation systems points to some such interactions in how the systems influence higher education institutions and education practice in a broad sense. To recognise this, it is important to mention how all evaluation processes, independent of type, are carried out. Over the years, a similar model has been used. Broadly described, it consists of a national template for self-evaluation, an external assessment panel review with site visits, a public report by the assessors with a decision by the responsible national agency, and a follow-up procedure if the judgement is 'not sufficient'.
Although quite simple, the first system was nevertheless the beginning of an expansion and increased frequency of evaluative activities at higher education institutions. Some people had to be involved in evaluative activities, and such activities had to be developed within the higher education institutions. Internal educational development work was also initiated, according to the SNAHE (2012b, p. 10). In the second system, many higher education institutions extended their central and faculty administrations with quality officers and deputy vice chancellors responsible for education quality, and the number of external and internal evaluations increased (Segerholm & Åström, 2003). The SNAHE follow-ups showed that teachers' competence level increased, national collaboration increased, and contacts with employers developed (SNAHE, 2012b, p. 14). Little has been said about changes brought about in relation to the third system, apart from it being very time consuming (Olausson, Hilliges, Åkesson, & Yström, 2008). Several changes were, however, reported in relation to the fourth system. Sørensen, Haase, Graversen, Schmidt, Mjelgaard, and Ryan (2015, pp. 18-28) found that it led to a redistribution of resources (more to students' independent project courses and hence less to other courses), revision of course plans, more attention to and understanding of the rationale behind expected learning outcomes for different degrees and certificates, improved documentation of and more systematic internal evaluation processes, time-consuming work, and high costs related to the evaluations. Changes from the fifth system have only briefly been studied so far.
Several of the changes in higher education linked to the different national evaluation systems described above indicate an increase in social changes as well as in the pace of life.
The successive increase in the frequency of evaluation processes and activities (with some minor tendencies of retardation during the third and fourth systems), along with the particular evaluation model in use, implies that the number of people at higher education institutions who have to be involved as suppliers of information or authors of self-evaluations, and as assessors (together with students and, in later systems, working-life representatives), increases. To be involved in these kinds of activities/work is different from teaching, research, or other types of administration. Recruited to these evaluation activities are mainly persons already employed as teachers/researchers/managers, but their role, and hence relation to other colleagues and staff, actually changes in their functions in these exercises. They have to pay attention to, and develop knowledge about, the national system presently in operation. They have to ask colleagues and staff for documentation and information and communicate internally about how to live up to the national requirements. Thereby, the social relationships change. Some even become experts, or at least are assigned special responsibility for handling the national evaluation systems, such as quality managers. Because the national systems change with some regularity, and direct attention to different aspects of higher education institutions, there is often a need to engage new and different people to perform the evaluation activities. This means a constant change in social relationships, as well as in the policies within the higher education institutions, always trying to adapt to the changes required by the national systems. The speed of such social changes naturally increases when the number of evaluation processes increases. Here, one may wonder why the higher education institutions in Sweden are so compliant. 
One answer is that adherence to the national evaluation policy and its shifts is quite easily achieved because of the high-stakes character of the systems from the second system onwards. The higher education institutions risk losing their right to award certificates and degrees should they be assessed as inadequate. This in turn means that they do not qualify for state grants. Over time, this has also fostered a progressively benevolent attitude towards evaluations in general among central management, such as vice chancellors, which is a constitutive effect (Dahler-Larsen, 2012, pp. 208-218) that most likely supports the acceleration trend.
Going back to the economic premises for Swedish higher education described above, the increase in evaluation processes from the 1990s until today has not been matched by a corresponding increase in resources for the higher education institutions. This means that the more frequent evaluations required by the national systems lead to an increase in the number of evaluation activities that have to be performed per time unit at the higher education institutions. This is also what Rosa (2013, p. 32) means by the acceleration of the pace of life.

Evaluation systems and the pace of change
The interaction between the three different forms of acceleration is visible in the development of standardised procedures, templates, and other technologies. One example is the digital systems, a technology within the evaluation systems, developed as a way to simplify and save time for all actors (assessors, evaluands, and national agency staff). In the middle of the 1990s, templates for self-evaluations were rather simple; they became more elaborate with the higher-capacity digital platforms (UKÄ Direkt and UKÄ Bedömarrevy) of the latest national system. This technical acceleration (Rosa, 2013, pp. 28-29) has made it possible to collect, store, and analyse an increasing amount of information at the national level. Likewise, higher education institutions have invested in digital systems and platforms for their internal management, control, and evaluation processes, which are repeatedly changed when better ones are developed. This type of technical acceleration within evaluation systems thereby facilitates the development and use of internal evaluation systems in higher education institutions. Evaluation systems indeed seem to breed evaluation systems (Leeuw & Furubo, 2008, pp. 166-167).
Over the years, the national evaluation systems have directed attention to various aspects of higher education. As far as can be seen at present, the speed of these changes seems to increase slightly, the periods becoming shorter. What has happened during these periods is that social relationships within higher education institutions have changed through the instalment of new functions and professional positions related to the different requirements of the national evaluation systems. There are now positions exclusively aimed at working with internal evaluation systems and with the different kinds of mandatory national evaluations. Changing the direction of what is being evaluated in the national systems also changes the direction of the internal work with evaluations among higher education institutions. This was perhaps most visible in the sharp shift of focus in the 2011-2014 national system, in which resources were redirected to particular kinds of courses, changing the conditions for teachers to teach different courses. The increase in frequency of evaluation processes most probably has also supported an increase in the change of social relationships within higher education, as explained above.
Since the 1993 reform, it has been mandatory to conduct course evaluations, something also checked particularly in relation to evaluations of higher education institutions' internal evaluation systems. This conditions teachers, departments, and higher education institutions as a whole to carry out course evaluations continuously, providing new information aimed at changes in course plans, teaching, and examination. When the national systems also demand that teachers and managers at all levels engage in both internal and external evaluations, an increased number of different evaluation activities have to be carried out within the same time frame. The stress on feedback loops in the latest national system will likely promote this state of affairs even more. This is a sign of an accelerating pace of life within the area of higher education.
In constantly changing the national evaluation systems, the incentives for change are raised, so that higher education institutions have to be 'on their toes' and shift their attention in the new directions required by the changing systems. External forces, or motors (Rosa, 2013, pp. 38-41), such as the performance funding system and the ENQA policy, are part of this: the former through its incentives for production per time unit, competition, and marketing; the latter through alterations in the national systems that move closer to teaching practice, in that the criteria explicitly target the relationship between course requirements and expected learning outcomes, as proposed by the ENQA (see Standards and Guidelines for Quality Assurance in the European Higher Education Area [ESG], 2015).
Taken together, the discussion of national evaluation systems of Swedish higher education leads to the conclusion that evaluation systems actually sustain and promote social acceleration (Rosa, 2013).
It has been argued here that evaluation systems in themselves constitute technical acceleration. Their purpose is to change future higher education, and by producing information and knowledge as a basis for change with increasing haste, they also promote an increase in the pace of change in educational practice. As such, evaluation systems are productive forces that make us change behaviours. Some of these changes are probably constitutive (Dahler-Larsen, 2012) in generating a shift in how we conceptualise teaching, learning, and higher education. It is also somewhat ironic that evaluation systems, as a vehicle for rational, linear, goal-oriented change, so typical of what Rosa describes as traits of modern societies, actually spur characteristics of late modern societies (Rosa, 2013, pp. 53, 55). We may in fact face what he calls a 'furious standstill' (borrowing from Virilio, 1990; in Heidegren & Wittrock, 2013; Rosa, 2013), in which things change but do not develop: there are innumerable possible choices, but since the alternatives constantly change shape, there are no long-term strategies to benefit from them cumulatively. The movement becomes aimless and contingent, yes, wandering. It loses its temporal, objective, and political goals (Rosa, 2013, p. 55, my translation).