‘Human oversight’ in the EU artificial intelligence act: what, when and by whom?

ABSTRACT Human oversight has been much stressed and discussed as a safeguarding measure to ensure human centrism in AI deployment. Through its proposal of a new EU Artificial Intelligence Act, the Commission is breaking new ground by promoting the introduction of the first general and sharply worded human oversight requirement over AI systems in European law. This article discusses the content, limitations and implications of this oversight requirement. It does this by addressing the questions of what the Regulation prescribes on 'what' is to be overseen, 'when' the oversight is to be exercised and 'by whom'. The article points to some of the AIA's ambiguities and gaps, and to the implications of vesting too much trust in providers to secure the oversight infrastructure of high-risk AI systems.


Introduction
The seemingly ever-increasing use of artificial intelligence (AI) systems to assist or augment human decision-making in various ways is, naturally, a hot topic for public as well as academic interest and debate. One overarching dilemma tied to this debate and development is how to utilise AI technologies to the fullest without risking or causing harm to society and its citizens. In the European efforts to regulate AI, a hesitant position towards letting the development and performance of AI systems 'run loose' is evident through a fairly high prevalence of restrictions on AI deployment, or safeguarding requirements linked to their use. One focal point is how to ensure meaningful human control where and when needed. In this context, 'human oversight' is a much-stressed type of safeguarding measure. Oversight arrangements may, however, have fundamentally different orientations regarding what aspects of a system process the oversight is to be aimed at, when the oversight is to be performed, and who the human supposed to perform the oversight is. This also means that the 'type' of oversight that is prescribed or performed can affect what types of risks or harms humans are able to detect and mitigate.
The reasoning above points to the need to look more closely at the specific legal arrangement of the AIA's human oversight requirement. Although still in the draft phase, it may provide some insight into the direction that the first sharply worded general human oversight obligation over AI systems in the EU may take. This article aims to do that by analysing the content, limits and implications of Article 14 AIA in relation to the other related provisions of the draft that jointly form the obligation to ensure human oversight over high-risk AI systems. The analysis will be structured around what the Regulation prescribes regarding 'what' is to be overseen, 'when' the human oversight is to be exercised, and 'by whom'. Special attention will be given to how responsibilities are distributed between the providers and users of the systems. This contribution will be structured as follows. It will start out by discussing human oversight as a safeguard and manifestation of the stress on, and vision of, 'human centric' AI. It will then briefly introduce the content and configuration of Article 14 AIA and its relation to some of the other oversight-relevant provisions of the draft, before turning to address the questions of the 'what', 'when' and 'by whom' of the oversight in consecutive order. I will finish off with a few conclusions and pointers to some of the draft AIA's ambiguities and gaps, and to the implications of vesting too much trust in providers to secure the oversight infrastructure of high-risk AI systems.
The analysis takes its starting point in the Commission draft proposal but also takes into account and incorporates, where relevant, the proposed amendments contained in the general approach adopted by the Council (hereafter the Council draft version) and the draft negotiating mandate which was adopted by the Internal Market Committee and the Civil Liberties Committee jointly and endorsed by the European Parliament on 14 June 2023 (hereafter the European Parliament draft version).6

Human oversight at the centre of human centric AI
As introduced, the notion of 'human centric' AI does not imply a given regulatory strategy. The normative content of 'human centrism' is primarily of an ethical quality but can nevertheless be operationalised to provide legal guidance on more specific issues. Human centricity, as it has come to be (broadly) understood, does not only reflect that human needs are to be met by new technologies, but also incorporates the aim to safeguard individual rights and increase human well-being. Its underlying basis is that humans enjoy a supreme and unique moral status in the civil, political, social and economic sense.7 Human centricity in AI is therefore a concept that places human beings at the centre of any reflection about AI, its development, features and use.
As the most pressing issues of AI deployment relate to the fear that technological development and rationalised efficiency will take place at the cost of human agency and safety or rights, 'human centrism' is an obvious counterweight. Foundational here is the idea that the sensitive judgement of humans is needed, based on the view that humans are better able to make complex deductions relating to the social dimensions of law or other norms (to an extent which has not yet proven to be realised in even the most advanced AI systems). When considering how to design concrete regulatory measures to ensure that AI systems operate in 'human centric' manners, the jump to 'human oversight' is therefore rather intuitive. This jump, however, also represents a shift from technical autonomy to human accountability, as well as a shift from a substantive and proactive approach to a procedural and reactive approach. This is because human centric applications of AI and human oversight over AI, while being conceptually related notions, have different aims, focuses and manifestations. The aim of human-centric AI applications is to try and weave human values into the technical fabric of AI systems, with the aim of substantively meeting human needs and preferences in diverse contexts such as health care, benefits administration, business and law enforcement. In doing this, human centric AI applications also strive to proactively avoid harm and protect human ethical values. Human oversight measures are, instead, of a more reactive nature, as they entail the monitoring and addressing of risks, biases and harms of AI systems in operation, and are therefore complementary to upholding accountability structures around AI. Both of these notions engage with normative frameworks such as regulatory, social and cultural ones. The function of human oversight is, however, primarily procedural and aimed at exercising control over AI applications serving human needs in substance. This makes human oversight a safeguarding measure that is intricately tied to the inner rationality of the legal system and the accountability structures on which it is premised. Along these lines, Koulu argues that for law, human oversight provides an attractive, easily implementable and observable procedural safeguard.9 Through the process of juridification, 'human oversight' thus also becomes a legal concept, binding it to the internal rationality of law.10 It can therefore be perceived as having a special standing as an almost self-sufficient safeguard. The EU Independent High-Level Expert Group on Artificial Intelligence, AI HLEG (set up by the Commission to provide advice on its artificial intelligence strategy), even advocated a type of inverse relationship between human oversight and other safeguarding mechanisms.
All other things being equal, the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required.11 Although Article 22 of the General Data Protection Regulation,12 GDPR, already holds a limited right to human intervention in solely automated decision-making, the AIA requirements of human oversight will be a novelty in European law. Such requirements were expected, as they had been indicated also in the earlier stages of the preparatory process. The AI HLEG pinpointed human agency and oversight as one of seven key requirements that AI systems should meet in order to be deemed trustworthy.13 Their reasoning was that the allocation of functions between humans and AI systems should follow human-centric design principles and leave meaningful opportunity for human choice, where securing human oversight over work processes in AI systems is key.14 Similarly, the Commission in its AI White Paper states that a trustworthy, ethical and human-centric AI can only be achieved by ensuring human involvement, which may include human review before or after a decision is made. The Commission there enlisted some non-exhaustive manifestations of human oversight, including situations where the output of the AI system does not become effective unless it has been previously reviewed and validated by a human; where the output of the AI system becomes immediately effective but human intervention is ensured afterwards; real-time human supervision and potential intervention; or safeguards installed in the design phase, through operational constraints on the AI system.15 The Commission has also, in early 2022, proposed a Declaration on digital rights and principles for a human-centred digital transformation, where it commits to ensure that algorithmic systems enable human supervision of outcomes affecting people.16
Neither of these policy documents goes into much detail on how human oversight is to be performed more specifically. Moreover, they do not contain much detail on either the particular types of problems that human oversight is meant to solve, or on the conditions under which those humans tasked with performing it will need to operate in order to be effective overseers, or 'watchdogs', over ethical or rule of law principles.
Nor does the notion of human oversight imply a specific distribution of responsibilities across the different actors in the AI value chain, or any specific set of measures. As already introduced, the functions of human oversight may thus serve different intermediary goals and be achieved through governance mechanisms at different stages. Drawing on the theoretical distinctions made in the academic literature on human oversight, which have also been recognised by the EU high-level expert group in its white paper, the practice of performing human oversight could take different forms relative to who is exercising the oversight and what is being monitored; categorised as either human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approaches to exercising oversight. Here, HITL refers to the capability for human intervention in every decision cycle of the system, HOTL to the capability for human intervention during the design cycle of the system and the monitoring of the system's operation, and HIC to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) as well as the ability to decide when and how to use the system in any particular situation.17 The delineations of this terminology differ between scholars as well as other commentators. The focus area, extent and intensity of human involvement needed for an oversight regime or routine to be classified as either HITL, HOTL or HIC may be narrowly or extensively construed. For the purposes of this article, it is thus important to point out that there is no consensus on how to categorise the particular responsibilities of those humans tasked with performing some degree of 'human oversight'.18 And, even more importantly, there is no real consensus on what they should do.
It should also be pointed out that the EU is not alone in its efforts and aspirations to regulate or guide the development and use of AI systems. Nor is it alone in emphasising the importance of a human centric approach to AI. Such an approach is similarly advocated by the Council of Europe, which is (also) considering a legal framework for AI based on its standards on human rights, democracy and the rule of law.19 Moreover, the Organization for Economic Cooperation and Development, OECD, has developed a set of recommendations, 'Principles on AI'. And, drawing on these principles, the G20 in June 2019 adopted so-called 'human centred' AI principles, which include recommendations to implement capacities for 'human determination' in certain contexts.20 Similarly, the United Nations Educational, Scientific and Cultural Organization, UNESCO, has proposed the development of a comprehensive global standard-setting instrument aimed at providing AI with a strong ethical basis to protect and promote human rights and human dignity, which stresses the importance of monitoring and human oversight, not only with respect to individual human oversight, but also with respect to inclusive public oversight.21 Green also shows that human oversight is emphasised in numerous regulatory or policy documents around the world.22 These frameworks, naturally, vary in substance and intended scope, and are in part under development. They do, however, point to a fairly universal emphasis on human oversight in AI regulatory regimes. And, as Article 14 AIA is likely to form the basis of the first general human oversight provision, it is likely to attract much attention and serve as a testing ground for general oversight requirements. It is also conceivable that the AIA's approach to human oversight will become influential or even standard-setting in relation to other regulatory bodies' efforts to install human oversight requirements.23
While both the EU and other stakeholder organisations find credence in the idea that human oversight can help mitigate the risks associated with AI systems, there are also many who advocate caution and highlight the risks of being overly reliant on the capacity of humans to effectively remedy the negative consequences of AI systems.24 Human oversight clearly has a fairly strong signal value precisely because it is a concrete measure in a legal and sociotechnical context that is characterised by non-transparency and knowledge imbalance. In relation to public authorities' use of AI or automated decision-making, the fears include that 'human oversight' will serve as a veneer legitimising government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools, thus also providing a false sense of security in adopting algorithms in government operations.25 Brennan-Marquez, Levy and Susser also, importantly, argue that there is a danger of misalignment between actual human oversight and perceived human oversight over AI systems, as it frustrates our ability to robustly assess the systems' goals, functions and performances. Their argument holds that where the stakes of automation are obscured by either a too-human or a falsely inhuman veneer, democratic oversight suffers.26 As human oversight is not a single type of measure with universal or predefined characteristics, it is of relevance to analyse what oversight regime the AIA actually would prescribe. The detailed design of the obligation, on whom it is placed and in what contexts it surfaces, is of great importance for evaluating what impact human oversight may have on the supervision of AI system processes, and, importantly, their outputs. In other words, it is important to consider the 'what', 'when' and 'by whom' that the oversight requirement entails.

Short introduction to Article 14: key aspects
One central aspect of the human oversight requirement in Article 14 AIA, which will limit its applicational scope, is that it only applies to so-called 'high-risk' AI systems (and thus not to all AI systems).27 The European Parliament draft version of the AIA does propose the inclusion of a new Article 4a, which would contain a general principle of human oversight applicable to all AI systems. However, this would not impose any strict oversight obligation, as it would only require operators to exert their 'best efforts' in developing and utilising AI systems or foundation models in line with, among others, the principle of 'human agency and oversight'. If this amendment goes through, it would therefore not change the fact that the AIA's stricter regulatory regime for oversight will be reserved for those AI systems that classify as high-risk. This reflects the risk-based approach of the draft, where the strictness of the regulatory requirements relates to the perceived risks that the AI system poses to the health and safety or the fundamental rights of persons. Distinctions are made between those AI systems deemed so potentially dangerous that they are prohibited, and systems that are high-risk, low-risk or of minimal risk. No exhaustive enumeration will be made here, but the 'high-risk' classification includes systems that are either (part of) products covered by the EU legislation listed in Annex II, or fall within certain high-risk areas listed in Annex III AIA.28
Examples of such systems are, among others: AI systems used for the biometric identification and categorisation of natural persons; the management and operation of critical infrastructure; education and training; employment, personnel management and access to self-employment; access to and use of basic private and public services and benefits; law enforcement; migration, asylum and border control; and the administration of justice and democratic processes. This approach centres the assessment of whether a specific system would qualify as 'high-risk' on specific sector uses and on whether they are listed in the AIA, rather than on the specific risks and associated effects of a particular system. The strictness of the AIA's design in this respect is, however, somewhat cushioned by the fact that Article 7 AIA would empower the Commission to make certain updates to the list of 'high-risk' systems in Annex III.29 By proxy of the Commission, new or overlooked types of AI uses that can be classified as high-risk could thus be added to the list, if identified.
The fairly static enumeration of what types of systems would be considered 'high-risk' means that the applicability of the human oversight requirement in Article 14 AIA does not hinge on the specific complexity of the system being deployed in those settings. This not only affects the range of AI systems that would be subject to 'human oversight' requirements, but also means that the challenges to performing 'meaningful' human oversight may vary in relation to the types of systems, their uses or more particular impacts and associated risks.30 The obligation to ensure human oversight would, therefore, have to be implemented in varying technical and legal environments, where the difficulties in ascertaining a compliant level of oversight may also vary greatly.

28 Article 6 AIA.
29 Article 7 AIA would authorise the Commission to adopt delegated acts in accordance with Article 73 AIA. Such an update may only be made where two conditions are met: the AI systems must be intended to be used in any of the areas listed in points 1-8 of Annex III, and must also pose a risk of harm to health and safety, or a risk of adverse impact on fundamental rights. In both the Council and the European Parliament draft versions, the Article would also allow the removal of AI systems from the high-risk listing under specified conditions.
30 Methnani and others (n 17).
Turning to the structure and substantive content of Article 14 AIA, it includes five paragraphs, of which the first four are of general relevance for all high-risk AI systems. The first paragraph, 14(1), sets out the main rule already introduced: that high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the system is in use. As pointed out by Onitiu, the provision's idea of human oversight here is to ensure the expert-in-the-loop and individual agency regarding the human operator's operation of high-risk systems.31 This obligation is fleshed out in paragraph 14(2), which states that the oversight should aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when high-risk AI systems are used in accordance with their intended purpose, or under conditions of reasonably foreseeable misuse. This is, in particular, to be the case when such risks persist notwithstanding the application of the other main safeguarding obligations placed on the providers of high-risk AI systems, set out in the same chapter (2) of the draft. These are the obligations: in Article 9, to ensure that a risk management system is in place; in Article 10, to ensure that appropriate data governance and management practices for the training, validation and testing of models and data sets are in place; in Article 11, to ensure that sufficient technical documentation is available; in Article 12, to ensure proper record-keeping; in Article 13, to ensure a transparent system design and appropriate instructions for use; and in Article 15, to ensure the accuracy, robustness and cybersecurity of the systems.32
The above-mentioned design aspects of Article 14 AIA indicate a regulatory as well as a functional interrelation with the other main safeguarding measures in Articles 10-12 and 14-15 AIA. From the regulatory perspective, the relation (just as advocated by the AI HLEG) is inverse: the provider's obligation to ensure oversight capabilities increases where the risk-mitigating functions of the other main safeguarding measures cannot be expected to be efficient enough.33 Functionally, all these safeguarding measures support the general transparency objectives of the Regulation, and will all contribute relevant components to the oversight infrastructure available to the human overseer. Through human intermediaries as interpreters of the data produced by the operating algorithms, human oversight is aimed at aiding better transparency of the system's operations. However, this exercise is also dependent on a certain level of openness of, and accessibility to, the system's input models as well as outputs in relation to the overseer. Without a sufficient level of system transparency (as substantiated by the other pro-transparency safeguarding measures such as documentation, record-keeping and the provision of information to users, etcetera), the human overseers would have nothing substantive to review. These legal as well as functional interrelations between Article 14 and the other safeguarding measures of the AIA underscore that the Article's content and implications cannot be fully delineated or understood without taking some of the other provisions of the Regulation into consideration. As I will return to later, such considerations may also offer some further insights on the 'what', 'when' and 'by whom' of Article 14 AIA in more detail. The more concrete measures by which the oversight obligation should be met by the providers of AI systems are listed in Article 14(3) AIA. It lays out two options. One option is for providers to ensure human oversight by identifying and building oversight measures into the high-risk AI system before it is placed on the market or put into service (when technically feasible). Alternatively, providers could identify appropriate measures that are to be implemented by the user. Notably, the Article places these obligations on the providers rather than the users of high-risk AI systems. A 'provider' would be defined as a natural or legal person, public authority, agency or other body that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.34 'Users' would, instead, be defined as any natural or legal person, public authority, agency or other body using an AI system under its authority.35 The fact that the obligations in Article 14 AIA are placed on the providers thus highlights the predominantly preventive nature of the Article. The risk assessments should have been made during the system design phase and observed via the installation of oversight 'capabilities', rather than oversight performance, before the system is provided to users. As put by Lazcoz and De Hert, this also implies that the Commission's understanding of human oversight focuses on the human agent interpreting and following or modifying the output at the use stage, and does not extend to concepts such as organisational oversight. The more specific technical features that the providers must equip their high-risk AI systems with are listed in Article 14(4) AIA. It obliges the providers to ensure that the high-risk AI system will enable the individuals to whom human oversight is assigned, as appropriate to the circumstances, to: (a) fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible; (b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system ('automation bias'), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons; (c) be able to correctly interpret the high-risk AI system's output, taking into account in particular the characteristics of the system and the interpretation tools and methods available; (d) be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system; (e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a 'stop' button or a similar procedure.37 The Council draft version of Article 14 contains some suggested amendments as compared to the above proposals from the Commission's draft. Most of these appear to have a clarifying purpose. It can, however, be noted that the word 'fully' has been removed as a prefix to the requirement in (4)(a) that the human overseers should be enabled to understand the capacities and limitations of the high-risk AI system, indicating a relaxed expectation, or a recognition, that it may not always be feasible or necessary for users to have a complete understanding of an AI system. All in all, however, the main wording of Article 14 has not been subject to proposals for major changes as compared to the Commission's draft proposal. As is clear from the provider obligations in Article 14(4)(a-e), as laid out above, the Article is thus drafted on the basis that technical capability obligations should be placed on the provider. And, notably, all of these obligations are linked to the relative criterion 'as appropriate to the circumstances'. This underscores that a type of proportionality requirement is intended (as is made clearer in the Council draft version, which adds 'and proportionate' to the circumstances). This indicates a discretionary space for the provider to determine and develop the specific technical configuration to put in place. The question is therefore to what extent these obligations would be relative to the circumstances of an AI system and its particular use. Neither of the drafts includes any definition or further specific recital guidance on how to relate 'appropriate' or 'proportionate' to the 'circumstances'. It is thus not clear to what extent this prerequisite could modify the obligation for providers to enable human oversight. Further guidance on the proportionality assessment in this respect will thus be needed.
In sum, the abovementioned provisions frame what human oversight entails within the meaning of Article 14 AIA. The most central aspects of this framing have now been introduced: that none of these provisions includes obligations that stretch beyond the provider, and that none of them goes beyond regulating what types of technical oversight capabilities should be in place before the system is placed on the market or put into service. As we will see, this does not, however, mean that the AIA or the specific content of Article 14 lacks any bearing on the users' exercise of human oversight over high-risk AI systems as they are deployed and in operation. The content and limitations, as well as the implications, of Article 14 AIA will come into relief more clearly when the broader legal arrangement and distribution of oversight responsibilities between providers and users of high-risk AI systems is also taken into consideration. It is with this perspective in mind that I will move on to analysing the 'what', the 'when' and the 'by whom' of the AIA's human oversight requirement. I will address each of these questions in the mentioned order, before turning to conclusions on the content and implications of the Article.

'What' are the aims and objects of the human oversight?
It is in the interaction between the human overseers and the specific information that the AI system presents to them that any system-to-human knowledge transfer may occur. To be able to detect and react to prospective risks, biases or errors, human overseers must not only take account of, but also assess and synthesise, the system's output.38 To determine, or at least roughly fix, 'what' aspects of a high-risk AI system's operations are supposed to be reviewed by humans, it is therefore important to consider the more granular building blocks or components of what the oversight, as an act and exercise, is supposed to be aimed at.
A suitable starting point for addressing the 'what' of Article 14 AIA is to consider its stated purposes and aims. Strictly speaking, these could perhaps be better read as questions of 'why' human oversight is needed. However, the Article's general aim to prevent or minimise the risks to health, safety or fundamental rights also indicates that the oversight should aim towards identifying certain adverse effects of high-risk AI systems on important public interests and values.39 This provides some direction on what is to be monitored. From a regulatory perspective, however, too broad and comprehensive descriptions of what human overseers should consider in their review risk obscuring the more specific aims of the oversight. More specified objectives than the abstract supervision of, for example, fundamental rights would in many cases be necessary for addressing what human oversight substantially includes as a task and act of performance. Here, and as I will turn to next, consideration of the slightly more detailed regulation on what type of information the human overseers are expected to assimilate can contribute some concreteness.
As already introduced, Article 14 AIA, in conjunction with the other main safeguarding obligations placed on the provider, indirectly emphasises the importance of system transparency, as both a precondition for and an enabler of oversight. The Article primarily emphasises the relational aspect of transparency between systems and humans.40 Transparency is thus not only construed as a form of 'openness', but also as a quality of being identifiable and understandable.41 This is signalled through the provider obligations to enable human overseers to fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation; to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system; and to be able to correctly interpret the high-risk AI system's output.42 While not providing detail on how, for example, a system's output should be presented to ensure that humans will be able to interpret it correctly, these formulations point out the central aspect of the human oversight's target. There is no real point in performing the oversight if the overseer is not provided with information (or system output) in a representation that they can interpret and understand. And, as high-risk AI systems will vary greatly in configuration, purpose and use, it might be that the system providers are best equipped to make well-founded assessments of the type of information and presentation that would aid a qualitative knowledge transfer to human overseers. Even so, these provisions leave a large discretionary space for the providers to determine how to configure their high-risk AI systems to meet these relational transparency goals.43
In addition to the above-mentioned goal-oriented type of provisions, Article 14 AIA would also comprise obligations that more specifically address what type of powers human overseers must be allowed to exercise in relation to the system. Providers of AI systems should make sure that humans are able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse its output. They should also be able to intervene in the operation of the system, or interrupt it, through a 'stop' button or a similar procedure.44 This means that the obligations are of a technical type and that providers thus must ensure that their high-risk AI systems are equipped to allow for human interference. Any powers to interfere are, however, not directly attributed to the human overseers. Neither are any direct obligations placed on the users of high-risk AI systems to vest such powers in the overseers.45 Article 14 AIA thus primarily engages in the functional rather than performative aspects of human control over high-risk AI systems.46 I will, however, soon return to how some user responsibilities are still tied to the oversight requirement in Article 14.
Turning to what type of information, more specifically, should be available to the human overseers, Articles 12-13 AIA are of special interest, as they establish requirements of record-keeping and of transparency through information to users.
Firstly, Article 12 AIA would, by requiring the automatic recording of events (logs) while a high-risk AI system is in operation, oblige the providers to ensure that the system offers a level of traceability of its functioning that is 'appropriate' to the intended purpose of the system.47 Generally, the keeping of logs during a system's operation is held as one important and possible measure for increasing the transparency, and especially the traceability, of system processes.
43 See, also, Onitiu (n 31), 2. 44 Article 14(4)(d-e) AIA. 45 Interestingly, an earlier version of the draft included a provision which would have stretched into placing obligations also on the system users, as it included that human overseers should have been able to 'decide not to use the high-risk AI system or its outputs in any particular situation without any reason to fear negative consequences' on-ai-a-threat-to-labour-protection/>, accessed 9 September 2022. More detailed requirements are set for those high-risk AI systems intended to be used for the 'real-time' and 'post' remote biometric identification of natural persons. In these cases, providers should ensure that the system is built so that no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons, Article 14(5) and point 1(a) of Annex III. 46 This reasoning is inspired by Onitiu (n 31), 8. 47 Article 12 AIA.
In relation to the human overseer's role in utilising such system transparency, this is probably especially the case when the specific content and configuration of system logs forms a part of the system's real-time interaction with, and knowledge transfer to, human overseers. The AIA is, however, not very detailed on the specific content of these logs, which limits the extent to which the Regulation in itself provides detail on the specific objects of the human oversight. The Article 12 logs should, in particular, enable the monitoring of whether the system presents risks at national level or whether it is being remodelled to an extent that would qualify as a substantial modification. The logs should also facilitate the provider's post-market monitoring of the system, where it is obliged to actively and systematically collect, document and analyse relevant data provided by users or others throughout its lifetime.49 The Article points the providers towards conforming to 'recognised standards' or 'common specifications' (which thus may vary between specific sector uses) when providing for these capabilities.50 For Article 12, both the Council and European Parliament draft versions propose some additional detail regarding the logging obligations. However, these additions primarily pertain to the intended purposes rather than concretely delineating specific logging requirements. These purposes include that the logging should enable traceability for the identification of situations that may present a risk within the meaning of Article 65(1) or lead to a substantial modification. They also address the facilitation of post-market monitoring as referred to in Article 61, and the monitoring of high-risk AI systems during operation as referred to in Article 29(4).
Turning to Article 13 AIA, it requires that high-risk AI systems be designed and developed so that their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately. Just as with Article 14, it thus emphasises the relational aspect of transparency, setting the goals for the knowledge transfer to human overseers. On how this goal is to be realised, Article 13 also provides some detail on the type of information that system users and human overseers should have access to (and be able to evaluate) when performing the oversight. This information spans all the way from the contact details of the provider to more system-specific information such as the level of accuracy, robustness and cybersecurity, known or foreseeable risks to health and safety or fundamental rights, or relevant information on the system's specifications for the input data. As pointed out by Onitiu, it thus focuses on transparency through technical specifications that are 'pre-determined' by the provider, and gives an account of the user monitoring the system's performance and instructions of use, rather than the algorithms' decision-making.52 Of importance to how the transparency goals of Article 13 AIA translate into the system user domain, Article 13(2) AIA holds that the providers must develop and supply the systems with accompanying instructions.53 These should include information on the human oversight measures installed, as well as the technical measures in place to facilitate the interpretation of system outputs by the users.54 The link that is thus created between the provider's design of the human oversight functionality in a specific high-risk AI system and the end users of these systems is further reinforced through Article 29(1) AIA. This Article would oblige the users to follow the instructions by monitoring the system's operation on the basis of the instructions of use.
Here, the Council draft version even further emphasises the link between the instructions of use and the Article 14 oversight requirement, as its Article 29(4) contains a proposed amendment stating that users shall also implement human oversight on the basis of the instructions of use. Furthermore, Article 29(6) in all of the draft versions holds an express obligation for users to, in particular, use the information provided through Article 13 to, where applicable, carry out a data protection impact assessment under Article 35 GDPR. As stressed by Lazcoz and de Hert, this would mean that users (controllers in the GDPR) would be obliged to utilise provider information and instructions to enable the individuals (in the controller's organisation) to whom human oversight is assigned under Article 22 GDPR to understand the capacities and limitations of the system.56 These obligations, and the powers for providers to make instructions that would at least to some extent be binding on the users, thus extend the providers' Article 14 obligations into a sort of user obligation to perform the oversight. The potential width of these user obligations is, however, limited through Article 29(2) AIA. This Article, in all of the Commission, Council and European Parliament draft versions, holds that the obligation to follow instructions is without prejudice to other user obligations under Union or national law. And, perhaps even more importantly in the context of how the human oversight related responsibilities are distributed between providers and users, the paragraph also states that the obligation to follow instructions is without prejudice to the user's discretion in organising its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider. The AIA thus vests much trust in providers to point the users to the particular objects of the oversight.
52 See, also, Onitiu (n 31), 7. 53 See, also, Recital 47 AIA. 54 Article 13(3)(d) AIA. 55 Article 29(4) AIA. This requirement is combined with obligations on the users of high-risk AI systems to inform the provider or distributor and suspend the use of the system when they have reasons to consider that use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1) AIA. This obligation also applies when users have identified any serious incident or any malfunctioning within the meaning of Article 62 AIA. See also, Recital 58 AIA. 56 Lazcoz and others (n 36), 25.
To sum up, Article 14 AIA does not, and is not really aimed at, providing much detail on what the human overseer is to consider or direct its attention towards when performing the oversight. This lack is somewhat understandable considering the omnibus character of the AIA, an overarching Regulation that will apply across very different specific sectors and contexts of society as well as of law. It is inevitable that the specific configuration of AI systems or their particular uses may require different types of data access and different focuses of the human overseers. But, while the provider's and user's respective obligations to make versus follow the system instructions would enable more specific and circumstanced guidance on the objects of the oversight, this would not counterbalance the fact that the edge of the Regulation is primarily directed towards the providers and their preventive installation of various system capacities for oversight. From the perspective of legal certainty, this makes the potential efficiency of 'human oversight', either as a procedural or substantive safeguard of health, safety and fundamental rights, hard to evaluate. Further guidance on the extent to which the wording of Article 14 specifies the particular objects of the human oversight, or on the more detailed content and configuration of logs and user instructions, might in due course be provided by the national or European designated supervisory agencies or the Court of Justice of the European Union. As the Commission's proposal stands (and as also indicated in the Council and European Parliament draft versions), the AIA would, however, leave a great deal of room for the providers to determine the detail of what human overseers will be presented with, and thus be able to direct their attention to, when performing their oversight.

5. 'when' to perform human oversight?
Another important question for determining the substantive content of human oversight in Article 14 AIA is what it prescribes regarding when the oversight is to be performed. In this context, the 'when' does not primarily refer to mere time factors (although the timeliness of any needed human intervention in relation to a high-risk AI system's operations is of course important).57 Here, the focus is rather on the stages of an automated process at which the human overseer should engage in some sort of review. The process leading up to a decision being made with, or by support of, a high-risk AI system could comprise several interlinked elements. Depending on what perspective is applied, the start- and end-points of this process could vary. This also means that 'human oversight' could have different functions depending on at what stage in the process it is being exercised. Should, for example, the oversight be exercised over the algorithms that run the system processes, or should it be exercised over the system's outputs in the form of recommendations or decisions? Should it be exercised at regular intervals according to certain instructions on what to direct the attention and review towards, or only in response to certain pre-defined impulses indicating a risk of erroneous functioning (internal from the system, or external from concerned individuals or stakeholders)? Should the oversight be directed at the system's general accuracy and proper functioning, or at the accuracy with which the system deals with specific cases? And if these different focuses of oversight are to be combined, in what configuration should that be done?
Here again, Article 14 AIA does not provide direct answers to whether and how the obligations to ensure technical oversight capabilities would relate to specific stages of a system process. Two of the obligations are of explicitly 'perduring' character, as they relate to the system's workings across the board and are thus not stage- or time-bound. These are the obligation in Article 14(4)(a) on providers to ensure that overseers will be able to duly monitor the system's operation, and the obligation in Article 14(4)(e) on the providers to equip the system in such a way that overseers are able to intervene in the system's operation or interrupt the system through a 'stop' button or a similar procedure. Moreover, three of the obligations explicitly relate to the overseers' interaction with the system's output. Article 14(4)(b) requires providers to ensure that the system is designed to make the human overseers remain aware of the possible tendency of automatically relying or over-relying on the system's output. Article 14(4)(c) requires that overseers should be able to correctly interpret the system's output. Finally, Article 14(4)(d) requires that human overseers should be (technically) able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse its output. 'Outputs', in this respect, should not be narrowly construed as only relating to final decisions or recommendations by the system. Article 3(1) AIA, for instance, refers to system outputs through non-exhaustive exemplifications such as content, predictions, recommendations, or decisions. What qualifies as outputs would thus depend on the provider's design choices regarding what data the system will collect, log, assess and produce, as well as make readily available to the human overseers. Recalling the discussion in section 4, on the users' obligations to follow the providers' instructions, the provider would to some extent be authorised to add binding detail with regard to the stages of the system process, or combined processes, at which a human should exercise oversight. This legal arrangement again reflects the AIA's focus on, and rather large portion of trust vested in, the providers to secure a sufficient human oversight infrastructure all the way to the user's end.
To ensure that human oversight is actually performed by the user, providers of high-risk systems could design the system processes to be hybrid, or semi-automated. They could do this by technically conditioning the use of their systems on the need for human input at certain stages of the system process. This could involve fixed design decisions manifested through specific instructions to automatically interrupt the process and redirect the case or issue to human oversight at certain instances or in response to certain impulses. Another example is built-in 'rejector' applications, which decide whether a given task is best handled by the system or by a human expert.58 Notably, Article 14 AIA does not mention such fixed instances of oversight performance, apart from those cases where the AI system is used for biometric identification and categorisation of natural persons. In those cases, Article 14(5) AIA holds that the system must be designed so that no action or decision is taken by the user unless this has been verified and confirmed by at least two natural persons. For all other high-risk systems, however, the Article only prescribes the technical capabilities for overseers to actively (thus on human initiative) intervene in the procedure and disregard the system's output. It should be pointed out that this circumstance in no way prevents providers from instructing their high-risk systems to direct a process to human oversight at certain predetermined instances or stages. Many AI systems include fixed design decisions on the need for human input, such as in the case of recommender systems, or where systems are programmed to interrupt processes on the basis of difficulties in interpreting the input data. This type of technical arrangement for ensuring that human oversight is exercised would continue to be both lawful and preferable in many cases. The wording of Article 14, however, does not specify to any great extent when a system would be required to direct a case or process to human oversight. This circumstance risks reducing the Regulation's impact on the prevalence and extent to which human oversight is actually performed.
In sum, Article 14 AIA, as well as the requirements in Article 13 AIA on the specific content that the provider must include in their instructions to the user, provides rather vague guidance on 'when' human oversight over high-risk AI systems is to be performed. This is particularly the case for the type of impulses that would trigger the need for oversight. If implemented, the AIA would of course not apply in a vacuum. Users of high-risk AI systems may well be subject to regulations placing direct or indirect obligations on them to utilise the built-in oversight capabilities at certain stages of a system process.59 Users may of course also want to remain in control over the sensitive aspects of those tasks entrusted to the system. That Article 14 AIA clarifies that users at every given instance should have the technical capacity to interrupt the process, and that they should be able to disregard the system's output, is thus an important component in ensuring human review of substantive character. At the same time, the focus on the user's capability to actively intervene does not fully capture the fact that the particular human overseer may have difficulties identifying those stages of the process that would benefit from human input without help from the system itself. So, although the AIA would make sure that the user is able to insert human input into the process whenever wanted, the provider's design choices on how to ensure that the oversight capabilities of Article 14 AIA are sufficient would likely have a great impact on when the human oversight can be expected to be performed.

6. 'by whom' is the oversight to be performed?
'Who is to perform the human oversight?' is a question of great practical importance to the quality of the review and supervision by humans, and thus also of great importance to its efficiency as a safeguarding measure. This is because the competence (know-how) as well as the authority (powers) of the human overseers will affect the reasonable expectations on what they will be able to detect and react to, as well as on what types of measures they could take to mitigate the identified risks or errors.60 It has already been discussed that Article 14 AIA stresses the relational aspects of the knowledge transfer between systems and humans as a way to emphasise substantive and qualitative oversight, as opposed to more symbolic and 'token gesture' management.61 Indirectly and overall, however, the AIA indicates that a certain level of know-how is expected from those humans tasked by the user to perform the oversight. For example, Recital 48 states that human oversight measures, where appropriate, should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role. As this, notably, points towards obligations under the responsible user's authority, it is hard to see how the providers would be able to make such guarantees. It would typically be beyond their control how the users choose to organise their work. This is, however, an example of where the Recitals indicate further obligations than are backed up by the binding provisions of the Regulation. It is, namely, clear from Article 9(4) AIA that no 'guarantees' have to be made by providers. The Article merely obliges providers to give due consideration to the technical knowledge, experience, education and training to be expected by the user, and to the environment in which the system is intended to be used. This is a fairly modest requirement, obliging the providers to make approximate estimations of what types of competences would perform the oversight of the system at the user's end, without any specific research requirements regarding the organisational setting or the particular competencies at the user's disposal.62 In those cases where the provider and user of a system are one and the same, more detailed knowledge of the specific training and education of those humans tasked with overseeing the system should be retrievable. One could argue that this would obligate the provider to make more tailored instructions, as opposed to when a provider develops an AI system of a more general-purpose type.
To sum up, as Article 14 AIA is addressed to the system providers, it does not, in itself, address who in particular should perform the oversight at the user's end. The provider should have the intended users in mind when designing the technical oversight capabilities (and thus also when composing the accompanying user instructions). The lack of specific guidance on the provider's obligations in this respect makes it hard to assess the sharpness of the provision. Nonetheless, the provider's considerations and expectations of the user's knowledge, experience, education and training would not be binding on the user, especially considering that the obligations to follow instructions in Article 29(2) AIA are without prejudice to the user's discretion in organising its own resources and activities. As long as users of high-risk AI systems are not subject to further specific regulation, they thus enjoy a fair amount of organisational discretion regarding how to implement the human oversight measures indicated by the provider.63 The AIA's fairly sparse regulatory detail on the competency issues in human oversight is, in my view, a risk and flaw of the Regulation. It should, however, be added here that the Council draft version includes a new express obligation for users to 'assign human oversight to natural persons who have the necessary competence, training and authority'.
Similarly, in the European Parliament draft version, a new Article 29(1a)(ii) provision would oblige the users (in the draft called deployers) to ensure 'that the natural persons assigned to ensure human oversight of the high-risk AI systems are competent, properly qualified and trained', and additionally that the overseers have 'the necessary resources in order to ensure the effective supervision of the AI system in accordance with Article 14'. If some variant of these obligations were to be reflected in the final form of the AIA, it would further reinforce the link between the provider obligations in Article 14 and the user obligations in Article 29 of the AIA. Such obligations would thus place more substantive competency requirements on users. Although neither the Council's nor the European Parliament's draft version establishes a specific minimum standard for the education or training level required to assign humans the task of overseeing high-risk AI systems, their references to criteria such as 'properly' and 'necessary' express a requirement on system users to shape their human overseers into a more specific 'whom'. And while such obligations would indeed make the AIA more obtrusive on the users compared to the Commission's draft proposal, they would likely also decrease the risk of users not utilising the potential of those oversight capabilities which have been technically enabled by the provider. It would thus also help counterbalance the risk of mere superficial engagement of humans, or a 'too human veneer' of the system, as eloquently put by Brennan-Marquez, Levy and Susser.
Ultimately, the quality of the oversight relates to the essential question of whether any human input has been made at all. In relation to the GDPR, the European Data Protection Board and the United Kingdom's Information Commissioner's Office have stressed that the training of staff who are to perform human oversight over automated systems is pivotal to ensuring that the system is considered partly automated (and thus not subject to the specific requirements set up for solely automated decision-making processes in Article 22 GDPR).66 Similar reasoning could be made in relation to the AIA, on whether any oversight has, in fact, been performed. It is important not to forget that the question of 'by whom' the oversight is to be performed is not only one of formal division of responsibilities. When evaluating the quality of the oversight's performance, the focus must in part be shifted from the user's actions and responsibilities down to the individual employees or civil servants who are to interact with the high-risk system. That this interaction allows for evaluation and contextuality, and serves to bridge human-machine asymmetries, is fundamental.67 And, while 'meaningful' human oversight is a much more complex issue beyond the scope of this article, and most likely also too complex to be resolved through a few provisions targeting a diverse set of AI systems, these questions do lie at the core of human control and the essence of human-centric AI.68

Conclusions
The aim of this article has been to discuss the content and implications of Article 14 AIA and those related provisions which together make up the human oversight regime of the Regulation, as specified through the questions of 'what' is to be overseen, 'when' the oversight is to be performed, and 'by whom'. The underlying assumption has been that these questions are important building blocks for the legal substance of the oversight regime that the AIA would introduce. As indicated through previous research in various fields, but not further explored within this contribution, the function and effectiveness of human oversight is a complex matter with many dimensions stretching beyond the legal domain.69 There is good reason to balance the expectations on what human oversight is, or could be, capable of achieving in relation to AI systems. There is, however, also good reason to analyse the content, limits and implications of those regulatory efforts aiming to address the difficulties of effective human oversight. The identification of regulatory gaps or grey areas is an important input to discussions on the qualitative aspects of oversight, or 'meaningful' control over AI systems, as well as important from the perspective of the rule of law and legal security for all concerned stakeholders.70 In this contribution, I have therefore focused on the legal design of the AIA draft as such, and on what it does and does not cover in relation to my three stated questions. The analysis showed that neither Article 14 AIA nor the other relevant related provisions of the draft provide much detail on what the human overseer is to consider or direct its attention towards when performing the oversight. It also showed that a great deal of room would be left for the providers to determine the detail of what data the human overseers will be presented with. Similar conclusions were made on the issue of 'when' the oversight is to be exercised.
The providers are to ensure that users at all times may interrupt or override the system processes. The lack of guidance on what type of impulses would trigger an obligation to perform such oversight, however, means that the provider's system design choices would still greatly impact when human oversight is likely to be performed. These circumstances thus limit the permeation of the AIA's oversight obligations into the everyday use of these systems, and will affect the scale at which human overseers will be able to discover or mitigate risks of errors in single cases. Moreover, the draft AIA does not address by whom in particular the oversight should be performed at the user's end, although the emphasis that the Council and European Parliament draft versions have put on user obligations to ensure the training, qualifications and resources of the human overseers highlights the recognition of the question's significance and priority. All in all, however, although providers should adapt the technical design as well as the instructions to the intended system users, the latter enjoy a fair amount of organisational discretion in how to perform the oversight.
Overall, the recurring theme here is that the AIA is predominantly directed at the providers, with no, or only indirect, links between provider and user responsibilities. This legal arrangement thus signals a prioritisation of system-level safeguarding. As argued by Kudina, the 'systems lens' that the AIA purports to adopt (spanning people, technologies, and institutions) underestimates the responsibilities placed on individual users to navigate the implementation of AI.72 In my view, addressing the more specific questions of 'what', 'when' and 'by whom' is one way to acknowledge the arduousness of this navigation, in a way that also makes visible where the law engages with the interaction between humans and systems. And while absolute legal clarity in the detail is not likely to be achievable, the pursuit of further mapping is still valuable. The more specific the answers that can be found to these questions, the more concretely the tasks of the individual overseers when exercising 'oversight' could be identified and defined. And, thus, the configuration of the compliance systematicity that the AIA sets up moves more clearly into relief.
It is beyond the scope of this article to examine the limits of the EU's legislative competence to place detailed human oversight requirements on system users. The main legal basis of the AIA is, however, Article 114 of the Treaty on the Functioning of the European Union,73 and the Regulation's corresponding primary aim is to ensure a free internal market for AI systems (while at the same time ensuring the protection of health and safety, and fundamental rights). This makes the focus on system-level preparedness for AI systems to be placed on the market or put into service unsurprising. Overly detailed obligations on the users could also, for example, risk coming into conflict with the principle of procedural autonomy (through limiting the states' discretion in how to organise their efforts to realise the legislation).74 Irrespective of whether the EU will, or could have, detailed the Regulation more substantively, or included a broader set of responsibilities that more clearly followed through to the applicational phase, the legal design of the AIA is not the end point of the legal discussion concerning what human oversight does or should entail.
The AIA's emphasis on provider obligations that are goal-oriented clearly indicates that human oversight in the AIA is not meant to be understood as a one-size-fits-all formula.75 The AIA addresses the question of human agency primarily through technical architecture requirements, and thus provides an oversight regime that substantiates a basic infrastructure for the oversight. Obviously, these requirements would only apply to high-risk AI systems, although there will surely be other AI systems associated with risks of adverse impacts that are also in need of efficient oversight.76 Furthermore, every AI system will, of course, depending on its configuration and use, be subject to other general or sector-specific regulations that may, at least indirectly, provide more detail on the components of what human oversight should involve for a particular AI system use. This contribution has not, for example, expounded on the links between the draft AIA and the GDPR, but the latter Regulation would definitely provide some guidance in this respect. The GDPR's Article 22 holds, in itself, an oversight requirement that in some cases of solely automated decision-making directly places obligations on the users of AI systems to perform human review. It also holds some detail on the restrictions on what personal data the AI system may run on, as well as the purposes it may be used for.77 There is, however, a need for further legal analysis across all legal hierarchical levels, dwelling more in depth on the specific regulatory contexts for particular AI system and sector uses. Such legal analysis may help identify those particularly intricate and sensitive components of a decision or recommendation in need of more elaborate contextual assessment, which thus may place legal constraints on the possible level of automation without human oversight. More contextualised legal analysis may also help elucidate what duty of care principles should apply for each type of system use.78 Furthermore, such research could help address very fundamental questions, such as in what cases shortfalls in either the technical oversight capabilities or the oversight performance would call the legality of the systems into question. Vague criteria on the components of lawfully compliant oversight regimes by either providers or users of AI systems challenge and impair the outlook for providing clear answers on legality issues. Ultimately, more detail on the 'what', 'when' and 'by whom' would thus help improve the accountability of not only system providers, but also of users as well as the individual human overseers employed by the users. These are important aspects of preventing the AIA from inadvertently legitimising, or even incentivising, a human oversight infrastructure that chiefly allows for mere superficial human involvement.79
75 Lazcoz and others (n 36) 10 f. 76 In this context, the European Parliament's proposition to incorporate general principles such as human agency and oversight into the AIA for all AI systems (not solely limited to high-risk ones) would expand the realm of anticipated human oversight, as discussed in section 3. Nevertheless, these general principles would probably have a lesser effect since they lack a strict obligation or accountability framework.

Disclosure statement
No potential conflict of interest was reported by the author(s).
rights. Her focus area within the project is technology-assisted decision-making within social welfare systems.


73 Consolidated versions of the Treaty on European Union and the Treaty on the Functioning of the European Union [2016] OJ C202/1 (TFEU). See, for example, Recital 2 AIA.
74 Although not an absolute right, the Member States are, according to settled case law of the CJEU, to enjoy an amount of freedom in the choice of means and methods in how to implement and safeguard the effectiveness of EU regulation; see C-39/70 Norddeutsches Vieh- und Fleischkontor ECLI:EU:C:1971:16; C-9/90 Francovich EU:C:1991:428; M. Verhoeven, 'The Costanzo Obligation: The Obligations of National Administrative Authorities in the Case of Incompatibility Between National Law and European Law' (C.J. Wiarda Institute for Legal Research, Utrecht University 2011), p. 13.
36 term 'deployer' should be used instead of 'user'. Both the Commission and the European Parliament versions also expressly exclude cases where the AI system is used in the course of a personal non-professional activity. G. Lazcoz and P. De Hert, 'Humans in the GDPR and AIA Governance of Automated and Algorithmic Systems: Essential Pre-Requisites against Abdicating Responsibilities' (2022) 8 Brussels Privacy Hub Working Paper, p. 10.