Regulatory sandboxes in the AI Act: reconciling innovation and safety?

ABSTRACT This paper explores the regulatory sandbox regime under the EU’s draft Artificial Intelligence (AI) Act. It investigates how useful the sandbox regime is for testing an AI-based skin cancer detection system in an EU member state. The paper focuses on whether the proposed AI regulatory sandbox regime can resolve tensions between innovation and safety. Although we find considerable potential for the sandbox regime, the proposal also creates several legal issues. It blurs jurisdictional boundaries between the EU and member states, raises concerns of legality and equal treatment, creates liability risks for innovators, and fails to require informed consent from testing subjects. To address these problems, the paper suggests adopting a more targeted legal basis for the sandbox regime that takes inspiration from conventional testing mechanisms such as clinical investigations for medical devices.


Introduction
In recent years, innovators have sought new ways to test their products in the real world. 'Living labs', 1 'test beds', 2 'real-world laboratories', 3 and related approaches promise the possibility of testing and advancing innovations such as digital smart city technologies 4 or automated vehicles 5 under realistic yet relatively controlled laboratory conditions. 6 These sites are attractive to researchers, policymakers, and companies because novel technologies can be observed and tested at a small scale prior to a wider roll-out. In response to this rise to prominence, legislators have explored novel ways to legally enable real-world experiments. 7 In 2020, the Council of the European Union emphasised how important these legal tools could be, 8 highlighting 'flexibility and experimentation' as potentially powerful tools to address future regulatory challenges in relation to innovation. 9 This attitude has materialised in initiatives such as the 'testing and experimentation facilities' under the Digital Europe Programme 2021-2027 10 and in legislation such as the EU pilot regime for distributed ledger technology. 11 In recent policy papers, analysts and authorities have given particular attention to the concept of regulatory sandboxes as one subset of such experimental law approaches. 12 Regulatory sandboxes are legal frameworks that enable limited testing of innovations under regulatory supervision. 13 They can be used to provide individualised legal guidance for innovators or to make legal exceptions for certain innovations to let innovators and the general public 'experience the effects of novel technologies in real life "as if" these technologies had already been proven safe and effective'. 14 At the same time, scholars have pointed to the potential risks of sandbox approaches, which may allow innovators to evade responsibility during and beyond testing, lower safety standards permanently, and expose participants to potential harm.
15 Furthermore, the performative functions of real-world experiments in regulatory sandbox settings often come at the cost of democratic deliberation, as Engels and others have argued. 16 This paper analyses the legal conflicts arising for regulatory sandboxes between the goals of fostering innovation and addressing safety concerns, using the prominent example of the regulatory sandbox framework that was proposed as part of the EU Artificial Intelligence Act (draft AI Act). 17 The AI regulatory sandbox regime will be the first comprehensive EU-wide framework of its kind. As negotiations about the framework are continuing, this paper scrutinises the most recent drafts put forward by the Council of the EU and the European Parliament and makes suggestions for their improvement.
To test the value of the AI regulatory sandbox regime against a concrete use case, the paper applies it to a data-driven medical device ('medical AI' 18 ). The field of medical AI raises particularly urgent questions of safety because human lives can be directly at stake in the context of diagnosis and treatment: Do patients have a say in how the AI should be trained? Which kind of training should doctors receive in order to make use of AI-generated decision support? Should patients have the right to obtain a second (human) opinion on AI-generated results? At the same time, innovation in medical technologies can enable significant patient benefits and potentially unlock new competitive industries in global medical and biotechnology markets, which is why more flexible testing regimes might be desirable. 19 Within the area of medical AI, this paper examines a specific case study about the diagnosis of skin cancer using an AI system developed by an Austrian-led research group (hereinafter: cancer detection AI system). 20 The researchers expressly refer to the need for testing the AI system 'under real-world conditions in the hands of the intended users and not as stand-alone devices'. 21 Furthermore, the case study illustrates which problems regulatory sandboxes can and cannot solve. The legal context of the case study concerns, among other areas, EU medical device law and, at the member state level, medical professional law. 22 To ground aspects of member state law in a specific legal system, the paper refers to Austrian law, from where the case study originates, as a sample jurisdiction.
The paper first outlines the theoretical concept of regulatory sandboxes and the concrete sandbox proposal under the draft AI Act (part 2) and subsequently turns to the legal implications of the cancer detection AI system (part 3). After explaining the key legal tensions between innovation and safety that the AI system creates, the paper highlights that, although EU medical device law already provides for a mechanism similar to a regulatory sandbox, there are potential regulatory gaps in other areas of the law that a regulatory sandbox framework could close. The paper then discusses how the EU's proposal for a regulatory sandbox under the AI Act could do so (part 4) and concludes with observations on how regulatory sandboxes might fit into the broader landscape of innovation law (part 5).

Theoretical foundations
There are multiple definitions of the concept of regulatory sandboxes in legal literature, 23 policy papers, 24 and legislation. 25 The Council of the EU defines regulatory sandboxes as concrete frameworks which, by providing a structured context for experimentation, enable where appropriate in a real-world environment the testing of innovative technologies, products, services or approaches … for a limited time and in a limited part of a sector or area under regulatory supervision ensuring that appropriate safeguards are in place. 26 The origin of regulatory sandboxes lies in fostering innovation in the financial services sector, 27 but sandboxes are also starting to emerge in other areas such as commerce, 28 mobility, 29 energy market regulation, 30 data protection, 31 health care, 32 and AI regulation. 33, 34 Common goals of regulatory sandboxes in all these areas are to enable innovation and to ensure safety through legal certainty, law enforcement, and regulatory flexibility. To realise these aims, regulatory sandboxes are usually equipped with legal powers to provide legal guidance, issue no-enforcement letters, and/or grant exemptions from legal rules. 35 Through powers to provide legal guidance, the competent supervisory authority and the innovator can determine whether the product or service in question complies with current legal requirements and, if it does not, what a legally compliant product design could look like. 36 Supervisory authorities can issue guidance in different forms. They can issue general guidance documents with an abstract group of addressees or give individualised advice, tailored to the innovator's individual questions.
37 This type of regulatory sandbox power primarily improves legal certainty for innovators and increases regulatory knowledge for the competent authority, which can have positive effects both for legal compliance and law enforcement. Improved legal compliance and law enforcement, in turn, help to foster innovation and maintain safety.
By issuing no-enforcement letters, competent authorities can commit themselves to refraining from enforcement action in individual cases. 38 The no-enforcement letter can be a supporting tool for either legal guidance or legal exemption. If the supervisory authority uses the no-enforcement letter merely to clarify the legal requirements that the innovator needs to comply with in order to avoid enforcement action, it serves as a tool of legal guidance. However, if the supervisory authority uses the no-enforcement letter to guarantee that there will be no enforcement action for a planned test activity even if it is not compliant with certain legal requirements, the no-enforcement letter becomes a tool of legal exemption. The latter option reduces the risk of breaching legal provisions during testing and consequent exposure to law enforcement. For third parties who participate in the testing of the innovation as testing subjects, it means that some of the legal provisions intended to protect their interests may be rendered ineffective. Without any safeguards that protect the interests of third parties, in particular legal grounds to hold innovators liable for incurred damage, using no-enforcement letters to disregard the enforcement of statutory provisions would be unlawful in many jurisdictions. 39 However, when accompanied by adequate restrictions and a balanced liability regime, no-enforcement letters have the potential to promote innovation while compensating for a possibly reduced level of third parties' safety through other legal measures.
The third sandbox power is to allow individual innovators to deviate from individual legal rules that hinder the testing of their innovation. 40 Contrary to no-enforcement letters, the function of this regulatory power is to exempt innovators from a certain legal duty, rather than abstaining only from imposing legal consequences (such as administrative fines) in case that duty is breached. This can make a difference in terms of who knows about the exemption (e.g. a transparent statutory exemption versus a possibly confidential no-enforcement letter) and may have consequences in relation to fault-based civil liability for third parties' damage. This type of regulatory sandbox power aims to increase regulatory flexibility by exempting the innovator from certain constraints that the legal framework would otherwise put on the testing procedure. To compensate for exemptions from provisions that are designed to protect third parties, the supervisory authority can impose other conditions on the innovator, for example intensified supervision by the authority, duties of the innovator to co-operate with the authority, and limitations on what can be tested where, when, and on whom. If the legislator manages to strike a balance between increased flexibility and intensified supervision, these types of exceptions can, like the other two sandbox tools, foster innovation and protect the safety of those involved.
The competent authority may already have the legal powers to issue guidance, no-enforcement letters, or legal exceptions when a regulatory sandbox goes operational. Depending on the jurisdiction at hand, supervisory authorities often have considerable leeway within their statutory powers already, provided their actions meet the test of proportionality. 41 In these cases, the legislator might install regulatory sandboxes to encourage the competent authorities to make use of their existing discretionary powers in order to provide innovators with maximum legal certainty while having regulatory flexibility. 42 In other cases, the legal powers needed to provide bespoke guidance, to issue no-enforcement letters, or to facilitate exceptions from legal rules do not yet exist or are insufficient. 43 In these cases, the legislator needs to adopt a new statutory basis for the regulatory sandbox. To allow the supervisory authority to provide legal guidance, a possible statutory basis could look like this: The supervisory authority may provide legal guidance to sandbox participants as to whether the proposed innovation is compliant with the applicable legal requirements. Legal guidance can be issued in the form of general guidelines or bespoke legal advice to individual participants.
The legal basis for issuing no-enforcement letters could be phrased as follows: The supervisory authority may issue no-enforcement letters to sandbox participants in which the authority declares that it will refrain from enforcing specified legal provisions. The authority may specify detailed conditions for refraining from certain types of enforcement action.
For adopting a new statutory basis to facilitate legal exceptions, a distinctive type of legal provision, usually referred to as an experimental clause, has been gaining popularity. 44 Experimental clauses are individual statutory provisions that authorise supervisory authorities to deviate from statutory law for the purposes of experimentation. 45 The experimental clause provides the legal grounds to make an exception and specifies under which conditions the supervisory authorities may deviate from which statutory rules. It typically takes the following shape: The supervisory authority may, for the purposes of experimentation, allow deviation from sections A and B for no longer than X year(s), provided that the interests of groups C and D are secured. The authority may specify detailed conditions for deviation from sections A and B.
The Council of the EU has stated that 'experimentation clauses are often the legal basis for regulatory sandboxes, and are already used in EU legislation and in many Member States' legal frameworks'. 46 Notably, the experimental clause does not contain the exception itself but a general template for making such exceptions. 47 The task of making a concrete exception is usually delegated to the supervisory authority. Because the details of the legal testing framework need to be defined on a case-by-case basis by the competent authority, an experimental clause can be used to create multiple real-world experiments under different sets of rules, with varying limitations placed on the geographical, personal, or temporal scope of the experiments. 48 In summary, regulatory sandboxes operate with different combinations of three main types of legal tools. First, regulatory sandboxes equipped with the powers to give legal guidance serve to inform innovators about the applicable legal framework and regulators about trends in technological innovation. Second, through no-enforcement letters, supervisory authorities can commit themselves to refrain from specific enforcement activities. This can provide innovators with more legal certainty about the legal conditions for the testing phase of their innovation. However, in many jurisdictions, issuing no-enforcement letters will only be lawful under narrowly defined circumstances and with adequate safeguards in place. Third, regulatory sandboxes equipped with the legal power to exempt innovators from legal provisions serve to make it easier for innovators to test their innovation under realistic conditions. To protect third party interests, the supervisory authority may need to impose conditions such as intensified supervision and limitations on the scope of testing.

The AI Act's framework
The European Commission published a first draft of the AI Act in April 2021. 49 The Commission stated that the AI Act aims to provide a uniform legal framework for AI, thus encouraging innovation through legal certainty. 50 At the same time, the AI Act is intended to achieve 'a high level of protection of health, safety, and fundamental rights' in the context of using AI systems. 51 The regulatory sandbox framework, which is one of the 'measures in support of innovation', is set out in Articles 53-54. 52 In December 2022, the Council of the EU adopted a consolidated compromise version of the draft as its general approach, which incorporates substantive changes to the draft brought forward by the Council under the Slovenian, the French, and the Czech Council presidencies. 53 The European Parliament adopted its own draft with further amendments to the European Commission's version in June 2023. 54 The Council draft defines an AI regulatory sandbox as a concrete framework set up by a national competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real world conditions, an innovative AI system, pursuant to a specific plan for a limited time under regulatory supervision. 55 The European Parliament's definition is similar but refers more broadly to 'a controlled environment established by a public authority' instead of 'a concrete framework set up by a national competent authority'. 56 The revised Article 53 provides for sandboxes to be established by 'national competent authorities' (Council version) or 'establishing authorities' (Parliament version) in the member states. 57 The sandbox testing is intended to occur before the AI system is placed on the market or put into service.
58 It may include 'testing in real world conditions', for which the Council offers a circular definition: [testing in real world conditions means the] testing of an AI system for its intended purpose in real world conditions outside of a laboratory or otherwise simulated environment with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation; … 59 The sandbox regime is intended to serve specific objectives, including to foster innovation, to accelerate market access for AI systems, to improve legal certainty, and to contribute to 'evidence-based regulatory learning'. 60 An AI system's conformity with requirements of the AI Act during sandbox testing can later be taken into account during a conformity assessment. 61 Similarly to the original Commission proposal, 62 both the Council and Parliament drafts clarify that liability for damage caused during testing remains with the innovator, 63 and that the details of the sandbox operation will be separately determined by committees in implementing acts. 64 The participation time in the sandbox shall be limited to what is appropriate 'given the complexity and scale of the project'. 65 The Council draft also sets out that competent authorities may cooperate with relevant institutions and other sandbox initiatives. 66 The sandbox framework contains powers to provide legal guidance and powers to refrain from imposing enforcement action. To some extent these powers are pre-existing, to some extent they are newly established by the AI Act.
The Council draft, unlike the Parliament draft, does not state expressly that AI regulatory sandboxes shall be used to provide legal guidance to innovators. 67 However, the Council implies that legal guidance will play a role during the operation of sandboxes where it states that the framework is established 'to enhance legal certainty for innovators', 68 and that it does so 'with a view to ensuring compliance' with the AI Act and other Union and member state legislation. 69 A core sandbox provision of the Council draft provides that the competent authorities shall exercise their discretionary powers 'in a flexible manner' but 'within the limits of the relevant legislation'. 70 This is in line with the type of sandbox envisioned by the Commission and the Parliament. The Commission originally proposed that the AI sandbox activities would be carried out 'under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements' of the AI Act and other legislation supervised within the sandbox. 71 In its impact assessment, the European Commission stated that '[n]o derogations or exemptions from the applicable legislation would be granted'. 72 Rather, the competent authorities would have a 'certain flexibility in applying the rules within the limits of the law and within their discretionary powers when implementing the legal requirements to the concrete AI project in the sandbox'. 73 The Parliament clarifies that one of the objectives of the sandbox framework is 'for the competent authorities to provide guidance to AI systems prospective providers to achieve regulatory compliance with this Regulation or … other applicable Union and Member States legislation'.
74 However, both the Council and the Parliament drafts go beyond making use of existing discretionary powers, adding new possibilities to exempt innovators from legal consequences, similar to provisions for no-enforcement letters. The Council draft states: [P]rovided that the participant(s) respect the sandbox plan and the terms and conditions for their participation … and follow in good faith the guidance given by the authorities, no administrative fines shall be imposed by the authorities for infringement of applicable Union or Member State legislation, including the provisions of this Regulation. 75 This amounts to a broad legal basis for abstaining from issuing administrative fines. The difference compared to a typical no-enforcement letter provision is that the competent authority does not appear to have discretion whether to abstain from enforcement or not. An earlier version of the Council draft went even further by exempting innovators not only from administrative fines but from any 'administrative enforcement action'. 76 The Parliament's version of this provision is phrased similarly but restricts exemptions to provisions of the AI Act itself. 77 In summary, the latest Council and Parliament drafts of the AI Act retain the Commission's original ideas of providing guidance and utilising existing regulatory discretion. They go further than the Commission draft where they exempt sandbox participants from administrative fines. Therefore, the Council and the Parliament drafts operate with a mix of legal guidance and legal grounds similar to those providing for no-enforcement letters. Notably, neither the Commission, nor the Council, nor the Parliament makes use of the third regulatory sandbox power, granting exceptions from legal rules.

Problem outline: between innovation and safety
From an innovation law perspective, medical AI systems exemplify a typical tension between innovation and safety. 78 Medical AI is, as already mentioned, a fast-evolving field whose innovative output has the potential to substantively improve medical services. At the same time, medical AI systems need to be subjected to heightened scrutiny compared to other potential applications, given their safety-sensitive areas of application. This tension also applies to the use case that this paper considers in more depth.
The use case concerns an AI system developed by an international group of medical scholars (the team leaders being based at the Medical University of Vienna), which was featured in a prominent article in the journal Nature Medicine. 79 For their study, the developers asked around 300 doctors to use the AI system as a decision support tool when making skin cancer diagnoses. 80 The relevance of this AI system for our paper is twofold. First, it is an example where the study authors themselves plead for real-world testing. 81 Because the diagnostic accuracy turned out to depend on how doctors and the tool interacted, 82 the study authors stated that ' … the performance of AI-based systems should be tested under real-world conditions in the hands of the intended users and not as stand-alone devices.' 83 Second, the system has legal implications that possibly call for a regulatory sandbox approach. This concerns one particular study setting where the AI system was used for telediagnosis, i.e. in situations where patient and doctor do not meet face-to-face. 84 Demonstrating the safety of a new medical technology is difficult without testing it under realistic conditions. 85 However, exposing patients to untested technologies carries inherent safety risks to their lives, well-being, and privacy. How can the patient's health be best protected if the technology in question has not been tested yet? Where and how should the resulting data be stored? Legislators both at the EU and the member state level have developed elaborate procedural mechanisms to deal with the regulatory dilemma of balancing innovation and safety, embracing a wide range of approaches depending on the technology and setting. 86 This includes medical trials for drug approval, 87 pre-market approval procedures for other potentially dangerous products, 88 and data protection impact assessments for novel and large-scale personal data processing operations.
89 However, novel technologies often also affect domains of law that do not feature specific testing procedures. In these areas of law, real-world testing of innovations will usually fall under the same rules that apply to their regular use. The question is whether these rules allow for adequate testing conditions.
From the broad array of legal areas relevant to medical AI, this paper selects for further analysis medical device law, which is mainly provided for at the EU level, and medical professional law, which is mainly provided for at the member state level. Both legal areas have implications for the cancer detection AI system. EU medical device law aims to resolve tensions between innovation and safety through procedures such as clinical investigations. Although clinical investigations predate the idea of regulatory sandboxes, they function very similarly. In contrast, medical professional law in Austria, the sample jurisdiction for this paper, tries to balance innovation and safety by providing for the education of doctors and by ensuring a certain standard of professional conduct. The application of a specific principle of professional conduct, which will be explained below, shows that there are regulatory gaps that could be addressed through an AI regulatory sandbox.

Clinical investigations: a pre-existing 'sandbox' regime
The EU Medical Device Regulation (MDR) lays down harmonised rules for manufacturing, distributing, and using medical devices in the European Union. 90 It aims to support innovation in the field of medical devices and to secure 'a high level of safety and health' of patients and users. 91 The MDR requires accompanying legislation by the member states. In Austria, the Medical Devices Act 2021 (MDA) mirrors the MDR and adds certain specifications at the member state level. 92 The cancer detection AI system is a type of software that processes input data (images of skin lesions) and creates output data (allocation to diagnosis classes with probabilities) with the goal of recognising and monitoring forms of skin cancer on human beings. 93 It qualifies as a medical device if it is 'intended by the manufacturer to be used, alone or in combination, for human beings' for medical purposes including diagnosis and monitoring of diseases. 94 The cancer detection AI system falls into this definition and, consequently, into the scope of application of the MDR (and the MDA). 95 There are different ways under the MDR to test under realistic conditions whether the cancer detection AI system is effective and safe. The most comprehensive one is to conduct a clinical investigation. Clinical investigations aim to generate clinical data about new medical devices 96 while achieving the same level of safety as required for approved medical devices but through different means. 97 The rigid requirements for market access are replaced with a more flexible and procedure-orientated set of test rules. 98 The clinical investigation rules allow for usage of medical devices during the investigation even if there are residual uncertainties around the device's efficacy and potential risks to patients or users. For market use, the manufacturer of the device would normally need to demonstrate that the medical device conforms with the general safety and performance requirements of the MDR.
99 That is, if marketed as a product, the medical AI system would need to prove 'sufficient accuracy, precision and stability for [its] intended purpose'. 100 As software, this specifically entails ensuring the 'repeatability, reliability and performance in line with [its] intended use' of the AI system, and manufacturing it 'in accordance with the state of the art'. 101 However, for the period in which a clinical investigation is evaluating the AI system, the manufacturer does not need to demonstrate that the legal requirements are fulfilled. 102 At the same time, there are additional safety measures that only apply to clinical investigations to account for safety risks associated with novel devices. The clinical investigation needs to be authorised, 103 it must follow a pre-approved, detailed clinical investigation plan, 104 and there are strict requirements on participants' informed consent 105 and on procedural safeguards 106 to ensure adequate protection of 'the rights, safety, dignity and well-being' of the participants. 107 If an innovator wanted to carry out real-world testing of the novel cancer detection AI system, the clinical investigation would be a suitable way to do so. A clinical investigation might not be obligatory for introducing the medical AI system to the market (this depends on whether the system is classified as a class III device). 108 However, if the AI system is completely new, a clinical investigation is necessary to produce clinical data that will later be the basis for any conformity assessment under the MDR.
Clinical investigations show that there is no need to reinvent the wheel in order to facilitate real-world experiments of medical AI. Much like the proposed AI regulatory sandboxes, they allow the testing of innovation under realistic conditions while also providing for procedural safeguards. 109

Direct treatment: a potential regulatory gap
Having discussed clinical investigations as a pre-existing tool similar to regulatory sandboxes, the paper now turns to regulatory gaps that might be closed by establishing an AI regulatory sandbox for medical AI systems.
The regulation of the professional conduct of medical doctors falls within the legislative competence of the member states. 110 The EU only has a coordinating role, which it assumes, for example, by providing uniform standards for the recognition of professional qualifications. 111 In Austria, the relevant piece of legislation is the Doctors Act 1998 (ADA), which lays down rules on the education of medical doctors, their professional representation, and their professional duties. 112 The cancer detection AI system can be considered a diagnostic aid that provides decision support to medical doctors in the context of examining the presence or absence of skin cancer. 113 The use of the AI system by medical doctors in the course of carrying out their profession falls within the ADA's scope of application. 114 The ADA does not contain any special procedures designed to test the use of new technologies in the medical profession. Real-world testing of medical AI systems must meet the same legal standards as regular use. For the use of the cancer detection AI system by medical doctors, this creates two main legal issues. First, a doctor may only use the AI system in a way that complies with their obligation to perform their profession with the necessary diligence and in accordance with medical science and experience. 115 That means that the doctor may take the decision support from an AI into account only if the doctor, at a minimum, checks its plausibility. 116 In addition to this first legal issue, a second one arises where patient and doctor do not meet face-to-face and have not met before. In such situations, the standard for relying on AI input is even higher. The reason lies in the ADA's principle of direct treatment ('unmittelbare Behandlung'). 117 It requires that doctors treat patients generally 'face-to-face' 118 to ensure that doctors can rely on all sensory perceptions to obtain a diagnosis, including a 'personal impression' of the patient's state.
119 The principle serves as a 'protection and control measure'. 120 Recent legal literature on the matter does not consider that the principle of direct treatment imposes a blanket ban on telediagnosis. 121 Rather, the predominant opinion is that the legal permissibility of telediagnosis procedures depends on whether the doctor can diligently meet all professional requirements in the individual circumstances, in particular whether there has been prior contact with the patient and whether the doctor can be in control of potential safety risks. 122 However, some scholars apply a 'strict standard' to the doctor's conduct in these situations. 123 To counterbalance the inherent risks of telediagnosis, doctors may only rely on their telediagnosis if they have high confidence in their diagnostic result. 124 If the doctor has 'even the slightest doubt about the decision-making basis', 125 a face-to-face examination instead of the telediagnosis setting is compulsory. 126 The study authors of the use case state that the cancer detection AI system could be used to 'extend the intervals between face-to-face visits in low-risk cases'.
127 However, the high threshold for the permissibility of telediagnosis may pose a particularly difficult problem for the use of AI in telediagnosis. This problem can be illustrated with the following hypothetical situation: A doctor makes a diagnostic decision via telediagnosis about a patient without having had prior face-to-face contact. The doctor then consults the AI system for decision support. The AI system's diagnostic advice differs from the doctor's diagnosis, causing the doctor to reconsider their own diagnostic decision. Here, the crucial point is that the doctor reconsidering their decision implies that the doctor had at least some doubt about the original diagnostic decision. According to the principle of direct treatment, the doctor would now be obliged to carry out a face-to-face diagnosis. This would be the case in every situation where the doctor reconsiders their original diagnosis based on the AI system's input, the very purpose for which the system was designed.
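The paradox in the hypothetical above can be reduced to a small decision rule. The sketch below is a schematic illustration of the strict 'not even the slightest doubt' reading, not a statement of Austrian law; the data fields and the equation of diagnostic divergence with doubt are simplifying assumptions introduced here for illustration.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Consultation:
    face_to_face: bool            # consultation takes place in person
    doctor_diagnosis: str
    ai_diagnosis: Optional[str]   # None if no AI decision support was used


def face_to_face_required(c: Consultation) -> bool:
    """Schematic version of the strict reading of the principle of
    direct treatment: in a telediagnosis setting, any divergence
    between the doctor's and the AI's diagnosis counts as doubt,
    and any doubt makes a face-to-face examination compulsory."""
    if c.face_to_face:
        return False  # the principle is already satisfied in person
    doubt = c.ai_diagnosis is not None and c.ai_diagnosis != c.doctor_diagnosis
    return doubt


# The paradox from the text: the AI is consulted precisely so that the
# doctor reconsiders, but any divergence then forces an in-person visit.
c = Consultation(face_to_face=False,
                 doctor_diagnosis="benign nevus",
                 ai_diagnosis="melanoma")
print(face_to_face_required(c))  # divergence in a remote setting -> True
```

On this reading, the function returns True in exactly the cases where the AI adds value (it changes the doctor's mind), which is the legal impasse the text describes.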
Whether the use of AI decision support for medical diagnosis, be it in face-to-face consultations or in telediagnosis settings, is consistent with the ADA primarily depends on whether the doctor can meet the statutory standard of practising with the necessary diligence and in accordance with medical science and experience. The separate requirement of direct treatment, which is not an AI-specific issue, adds a heightened standard regarding the doctor's confidence in the evidence on which they base their diagnosis. The unique legal issue that arises from AI-based medical diagnosis is that the AI system poses a source of potential doubt around the diagnostic decision that would not arise without it. Diverging diagnoses between doctor and AI system, in combination with the strict 'not even the slightest doubt' rule for telediagnosis, render the use of AI-supported telediagnosis, at least in a setting that would have practical benefits for doctors, difficult, if not legally impossible.
Scholars adhering to a less strict interpretation of the principle of direct treatment might come to a different result. However, it is evident that how the principle of direct treatment applies to novel ways of conducting telediagnosis is not sufficiently clear, in particular when AI-based decision support is involved. Maintaining this lack of legal certainty would make both the testing and the regular use of AI in telediagnosis settings undesirable for innovators. The Austrian legislator could clarify the extent to which AI support in telediagnosis settings is allowed. 128 However, the field of medical AI is evolving quickly, frequently raising new regulatory questions. Therefore, it might be in the legislator's interest to introduce a regulatory sandbox regime to initiate the use of novel medical AI systems and foster support among a lead user base, while maintaining the principle of direct treatment in all other cases. A sandbox approach might also prove a useful forum to discuss further questions around the use of AI-based telediagnosis by medical doctors, including to what extent a doctor needs to check the results given by the AI system from case to case and how much understanding of the tool is required to use it diligently. 129

Implementation
The draft AI Act will apply to, among others, 'providers placing on the market or putting into service AI systems' and to 'users of AI systems'. 130 The Council draft defines the term 'AI system' as a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts[.] 131 In comparison, the Parliament draft's definition is broader, omitting any reference to specific methods or approaches. 132 The cancer detection AI system receives data input in the form of images of skin lesions; it processes the input data through a convolutional neural network (CNN), 133 which is a particular form of machine learning method; 134 and, finally, it generates recommendations by allocating the image at hand to a specific diagnosis class and providing the associated probability. As a result, the cancer detection AI system falls squarely within the scope of application of both drafts of the AI Act. As a medical device that requires third-party conformity assessment, 135 the AI system qualifies as a high-risk AI system under both versions of the AI Act. 136 An AI regulatory sandbox for medical AI would require a sandbox setting where the AI Act, EU medical device law, and national medical professional law are supervised together. 137 According to the draft AI Act, the 'national competent authority' (Council version) or the 'establishing authority' (Parliament version) is responsible for setting up the sandbox regime. 138 According to the Council draft, this can be either the notifying authority (e.g. a national accreditation body) or the national market surveillance authority pursuant to Regulation (EU) 2019/1020.
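The final classification step described above (raw network scores in, a diagnosis class and its associated probability out) can be sketched as follows. The class labels and the example scores are illustrative assumptions only; the study's actual model architecture and label set are not reproduced here.

```python
import math

# Hypothetical diagnosis classes for a dermatoscopic image classifier;
# the labels are illustrative, not the study's actual label set.
CLASSES = ["melanoma", "melanocytic nevus", "basal cell carcinoma",
           "actinic keratosis", "benign keratosis", "dermatofibroma",
           "vascular lesion"]


def softmax(logits):
    """Convert raw scores into a probability distribution over classes."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]


def classify(logits):
    """Map the network's raw output scores to the system's recommendation:
    the most probable diagnosis class and its associated probability."""
    probs = softmax(logits)
    i = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[i], probs[i]


# Example: scores as they might emerge from the network's final layer.
label, p = classify([2.1, 0.3, -1.0, 0.0, -0.5, -2.0, -1.5])
print(label, round(p, 2))
```

The point of the sketch is simply that the system's output is a recommendation paired with a confidence value, which is what brings it within the AI Act's definition of a system producing 'predictions, recommendations or decisions'.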
139 Austria has yet to determine which authorities shall be tasked with these roles. In the past, similar functions have been fulfilled by the Austrian accreditation body, which is the Federal Minister of Economics and Labour, 140 and by sector-specific market surveillance authorities. 141 However, an Austrian government official has suggested that the government is planning to establish a specialised authority to regulate AI systems in Austria. 142 Whichever national competent authority the Austrian legislator designates, the Council draft of the AI Act also allows cooperation with other relevant authorities. 143 In the case of medical AI, one relevant authority could be the Austrian Federal Minister of Health, who is responsible for the administration of the relevant provisions of the Austrian Doctors Act. 144 Through an Austrian regulatory sandbox for medical AI, the competent authority could offer legal guidance on how the principle of direct treatment affects planned experimentation with an AI-based telediagnosis system. The authority could explain legal constraints and assure medical doctors that they could test the AI-based telediagnosis system in a setting where the telediagnosis decision by the AI has no impact on how the patient is treated, for example by combining telediagnosis tests with subsequent face-to-face examinations. This could be a cautious way for the legislator to maintain the principle of direct treatment while also addressing innovators' potential legal uncertainties.
145 Additionally, the Council's regulatory sandbox framework could be used to exempt doctors who breach the principle of direct treatment from administrative fines, 146 provided that the authority imposes other suitable conditions and restrictions. However, the problem of AI telediagnosis and the principle of direct treatment outlined above can only be partly addressed through the existing legal basis for exceptions from administrative fines under the AI Act. The problem is that the regulatory sandbox can serve to exempt innovators only from administrative fines but not from any other type of enforcement act. This renders the provision less useful for doctors who would be in breach of the principle of direct treatment, because in addition to administrative fines, they could also face temporary disbarment. 147

Jurisdiction
The first legal issue that presents itself when envisioning an AI regulatory sandbox for medical AI is whether the member states are allowed, or even required, to adopt provisions that accompany or supplement article 53 of the draft AI Act.
The draft AI Act, which takes the form of a regulation, 148 aims to harmonise rules 'for the placing on the market, the putting into service and the use of [AI systems] in the Union' 149 and to harmonise 'transparency rules for certain AI systems'. 150 The European Commission has specified Article 114 TFEU as the primary legal basis for the harmonisation. 151 Wherever the EU legislator aims for maximum harmonisation, 152 member states may not pass differing national legislation. 153 For regulations generally, member states 'are precluded from taking steps, for the purposes of applying the regulation, which are intended to alter its scope or supplement its provisions'. 154 However, regarding regulatory sandboxes, the Council's draft AI Act is to be characterised as an 'incomplete' or 'limping' regulation, 155 because member states may decide whether or not to implement an AI regulatory sandbox ('national competent authorities may establish … '). 156 Recital 72 states that '[t]o ensure uniform implementation across the Union … , it is appropriate to establish common rules for the regulatory sandboxes' implementation.' 157 Therefore, the Council draft gives member states discretion whether to implement an AI regulatory sandbox, but, should they choose to do so, allows implementation only along the lines of article 53 of the draft AI Act. As a result, the Council draft expressly allows, but does not require, the implementation of AI regulatory sandboxes through legal acts at the member state level. The Parliament, on the other hand, requires member states to establish at least one AI regulatory sandbox.
158 In addition to member states being able to decide whether to implement an AI regulatory sandbox, they may also be able to adopt certain substantive provisions under the Council draft. The reason is that the words 'no administrative fines shall be imposed by the authorities for infringement of applicable Union or Member State legislation relating to the AI system supervised in the sandbox, including the provisions of this Regulation' 159 in article 53(3) of the Council draft raise questions as to how far the EU's jurisdiction to prescribe exemptions from administrative fines can reach. The Parliament version restricts the scope of this exception to the AI Act itself. 160 However, the Council version of the provision appears to make exceptions from any administrative fines that are to be issued under legislation relating to the AI system. If Austria were to establish a regulatory sandbox for medical AI, would the applicable provisions of medical professional law, e.g. the principle of direct treatment, fall under 'Member State legislation relating to the AI system'? Although the principle of direct treatment is not an AI-specific regulation, it does closely relate to AI systems that are intended to facilitate telediagnosis. If the principle of direct treatment does fall under 'Member State legislation relating to the AI system', the next question would be: Could article 53(3) of the AI Act be used as a legal basis to make exceptions from imposing administrative fines on doctors for breaching the principle of direct treatment, even though the protection of human health falls within the legislative competence of member states? According to the CJEU, this would not be the case. It has held that harmonisation under Article 114 TFEU (then Article 100a TEC) must not be used 'in order to circumvent the express exclusion of harmonisation' of member states' competence.
161 Therefore, the provision on administrative fines must be interpreted to apply only to areas where harmonisation is not expressly excluded. For the cancer detection AI system, this means that, even if the principle of direct treatment falls under 'Member State legislation relating to the AI system', article 53(3) of the draft AI Act alone, i.e. without accompanying member state legislation, cannot serve to exempt doctors from administrative fines under the ADA.

Legality
As already mentioned, the Council draft, even though the AI Act will be a regulation rather than a directive, leaves room for member states to decide whether or not to establish AI regulatory sandboxes. 162 For the implementation of incomplete regulations, the same rules apply as for the implementation of directives. 163 Therefore, the legal acts through which the Austrian legislator establishes AI regulatory sandboxes need to comply with requirements both at the level of EU law and at the level of Austrian constitutional law ('principle of dual obligations'). 164 In Austria, the principle of legality, 165 which forms part of the Austrian conception of the rule of law, is particularly important for the lawfulness of sandbox regimes. 166 It requires that any administrative action has a basis in statutory law and that it is 'sufficiently determined' by that statutory legal basis. 167 Whether a statutory provision is consistent with the Austrian principle of legality needs to be assessed on a case-by-case basis, because the level of detail required for 'sufficient determination' depends on the legal area concerned. 168 Against this background, a key legal issue is whether member states need to adopt a statutory basis to establish AI regulatory sandboxes. This is not the case for the European Parliament's draft, which already provides the statutory basis itself and does not grant member states discretion whether to establish an AI sandbox or not. 169 Whether the Council version of the draft requires implementation through Austrian national law depends on the type of sandbox. In the specific context of AI regulatory sandboxes, it is argued that a statutory basis is needed for a regulatory sandbox that allows supervisory authorities to deviate from statutory rules. 170 However, the legal basis can be pre-existing and does not need to be a specialised 'sandbox law'.
171 The legal basis for deviation from statutory rules could, within the limits of jurisdictional boundaries (see chapter 4.2.1 above), be found in the AI Act itself ('no administrative fines shall be imposed … ') 172 or in accompanying legislation at the member state level.
If the national legislator intends to pass accompanying legislation, the provisions effecting possible exemptions need to be consistent with the requirement of sufficient determination. The degree of this requirement depends on the supervised legal area and the nature of the third-party rights affected. Generally, regulatory sandboxes are intended to regulate dynamically evolving, future-oriented situations, which is an indicator for a lowered standard in the test of 'sufficient determination'. However, where fundamental rights are at stake, as is the case with the cancer detection AI system, a higher degree of determination is required. 173

Equal treatment
A third legal issue is that the regulatory sandbox framework differentiates between innovators inside and innovators outside the sandbox. Because a sandbox participant might receive more beneficial legal treatment than innovators outside the sandbox, it is important that access to the sandbox regime is not discriminatory.
According to the principle of equal treatment, a general principle of EU law enshrined in article 20 of the Charter of Fundamental Rights (CFR), 'comparable situations must not be treated differently, and different situations must not be treated in the same way, unless such treatment is objectively justified'. 174 Particularly relevant in the context of sandboxing is the Court of Justice of the European Union's (CJEU) Société Arcelor Atlantique case, 175 which Advocate General Maduro described as 'a ruling on the relationship … between the practice of experimental legislation and the legislative requirements of equal treatment.' 176 The context of the case was a directive establishing an emission allowance trading scheme that included the steel sector but excluded the aluminium and plastic sectors from its scope. Although the CJEU itself did not remark on equal treatment issues of experimental legislation specifically, Advocate General Maduro did discuss this aspect. He held that 'the discrimination which experimental legislation inevitably entails is compatible with the principle of equal treatment only if certain conditions are satisfied'. 177 First, '[t]he experimental measures must … be transitory', and '[s]econd, the scope of the trial measure must be defined in accordance with certain objective criteria', i.e. criteria that relate to the subject-matter and the purpose of the relevant rules. 178 Although there are some conceptual differences between the directive in the Société Arcelor Atlantique case and the draft AI Act's sandbox regime, testing the latter against AG Maduro's criteria can provide an indicator of whether it is consistent with the principle of equal treatment. First, although the sandbox framework itself is intended to be permanent, the time frame for individual instances of experimentation is limited to what is appropriate 'given the complexity and scale of the project'.
179 Individual instances of tailored rule settings for real-world testing are therefore transitory in nature. Second, the purpose of the regulatory sandbox regime overall is '[t]o ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption'. 180 The specific purpose of individual instances of sandbox testing may vary, depending on what type of AI system is being tested for what intended area of use. The AI Act also provides that criteria for a selection process will be set up in implementing acts. 181 If these criteria revolve around factors such as innovativeness and market-readiness rather than the branch or sector of the innovation in question (as was the case in Société Arcelor Atlantique), the test formulated by AG Maduro will be met. If objective selection criteria could justify the differentiation for an emission allowance trading scheme (without any procedure for being either included in or excluded from it), then they can certainly justify a regulatory sandbox regime that is generally open to any innovator who satisfies the access criteria. What is more, the CJEU has given the legislator broad discretion 'where its action involves political, economic and social choices and where it is called on to undertake complex assessments and evaluations'. 182 It restricts the scope of its review to 'manifest errors', which mainly refers to errors that are 'evident'. 183 Therefore, the AI regulatory sandboxes will likely not be found to be in breach of the principle of equal treatment under article 20 CFR.
Again, if the member states decide to adopt accompanying legislation, national requirements of equal treatment may have to be taken into account as well, provided their level of protection does not compromise that of article 20 CFR. 184 In Austria, the principle of equal treatment 185 requires that the legislator treats 'equal situations equally and different situations differently'. 186 Any deviation from this rule requires an objective justification. In the context of regulatory sandboxes, the criteria for access to the regime are particularly relevant. 187 The admission criteria need to be transparent and adequate for the type of innovation and the supervised legal area. 188

Liability
A fourth legal issue with the AI regulatory sandbox is the current distribution of liability between those undertaking experiments and those participating in experiments as subjects. Non-contractual liability for the use of AI systems is a much-debated topic in legal literature 189 and will soon be covered by the AI Liability Directive, of which the European Commission has recently published a first draft. 190 In the AI Act's regulatory sandbox framework, '[t]he participants remain liable under applicable Union and Member States legislation for any damage caused in the course of their participation in an AI regulatory sandbox'. 191 It is an important aspect of sandbox testing that third parties are not left out of pocket should they suffer any damage from the sandbox testing. However, to ensure policy consistency, the distribution of liability should be considered in conjunction with the rule exempting sandbox participants from administrative fines. Under the currently proposed rules, participants might end up being held liable by third parties for damages that resulted from breaching a provision for which they are not being fined by the state. 192 In the case of the cancer detection AI system, this could mean, for example, that patients would still be able to hold doctors or health institutions liable for damages resulting from breaches of the principle of direct treatment, even though the doctor would have been exempt from the administrative fine. Although third-party liability and administrative fines are two different legal measures with different aims, from the perspective of the person undertaking experimentation, being exempt from administrative fines could be misinterpreted as a signal that no legal consequences will follow from breaching the provision in question.
There are different ways to approach this problem. One of them would be to adapt the legal basis for granting exceptions to sandbox participants. Instead of 'no administrative fines shall be imposed', the relevant provision could read 'the sandbox participants are exempt from the duty to … '. The key consequence would be that, under fault-based liability regimes, sandbox participants could not be held liable for damage they caused, because they would not have breached any duty. Here, the legislator could take inspiration from established testing frameworks such as clinical investigations to address problems of liability during testing. Clinical investigations exempt innovators not just from the consequences of breaching an obligation under the MDR, i.e. from administrative fines under the MDR, but from meeting the obligations under the MDR themselves, for example the general safety and performance requirements. 193 In order to protect third parties who might incur damage from being part of experiments, the MDR requires member states to ensure that 'systems for compensation for any damage suffered by a subject resulting from participation in a clinical investigation … [are in place] in the form of insurance, a guarantee, or a similar arrangement that is equivalent as regards its purpose and which is appropriate to the nature and the extent of the risk'. 194 The AI Act could take inspiration from the MDR in this respect.

Informed consent
A fifth legal issue concerns informed consent. While medical studies typically require some form of informed consent from testing subjects, 'many current forms of public experimentation lack meaningful consent procedures'. 195 This contrast can also be observed between clinical investigations under the MDR on the one hand and real-world testing within regulatory sandboxes under the draft AI Act on the other.
Unlike the Parliament draft, article 54a of the Council draft provides rules for testing high-risk AI systems in real-world conditions outside AI regulatory sandboxes. Providers or prospective providers of AI systems may only conduct real-world testing subject to a catalogue of requirements. One of these requirements is that 'the subjects of the testing in real world conditions have given informed consent'. 196 The qualifications of informed consent are laid out in article 54b of the draft AI Act: informed consent needs to be given prior to testing and after the subjects have been duly informed about the testing itself and their rights in the process. 197 However, these requirements only apply to testing outside of regulatory sandboxes, not within them. 198 The rationale behind this differentiation might be that under the regulatory sandbox regime, the authority can exercise intensified regulatory oversight and impose additional safeguards that would not be put in place for real-world testing outside the sandbox regime. For testing the cancer detection AI system, this would mean that, although patients would still need to give consent to their medical treatment, their informed consent to the testing itself would not be required.
To address the issues outlined above, a more targeted sandbox provision in the AI Act could read as follows: The national competent authority may define conditions under which providers may deviate from the following individual provisions of Union legislation for the purpose of testing AI systems in the AI sandbox: [an exhaustive list of provisions that may be deviated from] The period for the exemption shall be limited to what the national competent authority determines is appropriate given the complexity and scale of the project.
The national competent authority may only allow deviation upon innovators' application and if suitable safeguards are put in place that guarantee the safety of the experiment (such as individual informed consent from testing subjects and/or mandatory insurance for persons carrying out testing).
To implement a sandbox regime that allows deviation from the principle of direct treatment in Austria, the Austrian legislator would need to insert an additional experimental clause into the ADA. It should be phrased in a way that allows the replacement of the principle of direct treatment with other requirements that achieve the same level of patient safety but allow more flexible real-world testing. 202 The experimental clause could read as follows: The Federal Minister of Health may define certain conditions under which doctors may deviate from the principle of direct treatment (section 49 of the ADA) for the purpose of testing innovative medical-diagnostic aids under real-world conditions.
The period for the exemption shall be limited to what the Federal Minister of Health determines is appropriate given the complexity and scale of the project.
The Federal Minister of Health may only allow a deviation upon application and if suitable safeguards are put in place that guarantee the safety of the patient (such as individual informed consent from testing subjects and/or mandatory insurance for persons carrying out testing).

Conclusion
In applying regulatory sandboxes to real-world testing of a cancer detection AI system, this paper shows that the principle of direct treatment, although it was not designed to restrict the use of medical AI specifically, might cause legal uncertainty around the use of AI-supported telediagnosis. An AI regulatory sandbox established as part of the national implementation of the EU's AI Act could help to reconcile innovation and safety in this regard. However, the chosen example also highlights some of the legal issues with the current draft of the AI Act. Wherever the regulation of a particular AI system concerns law at both the EU and the member state level, the implementation of AI regulatory sandboxes can become a complex task; the legislator also needs to be aware that regulatory sandboxes inherently create differential treatment between legal actors that needs to be justified through objective access criteria; the distribution of fault-based liability needs to be considered in conjunction with potential no-enforcement provisions; and the requirement of informed consent cannot necessarily be replaced by intensified regulatory oversight; rather, new models for obtaining informed consent in test settings may need to be developed.
The underlying question, which goes beyond the AI Act's sandbox proposal, is for which types of problems regulatory sandboxes are the preferred legal approach. As this paper shows, there are specific situations where a regulatory sandbox can be a suitable legal measure. However, it is difficult to imagine many situations where innovation and safety cannot be reconciled through other, and simpler, means. So what do regulatory sandboxes contribute, at a conceptual level, to the regulatory toolbox of innovation law?
Regulatory sandboxes, viewed as a regulation strategy, are embedded in the broader context of how to address legal problems posed by new technologies. 203 When new technologies emerge, the legislator can rely on administrative authorities, courts, and legal scholarship to apply existing law to new socio-technical phenomena. 204 However, sometimes there will be intended or unintended regulatory gaps, and where there is no gap, existing legal provisions might not always yield the desired policy aim when applied to a new technology. Where this is the case, the legislator needs to take the initiative by amending laws. Regulatory sandboxes can help the legislator in making this decision by contributing to a better understanding of the (in-)adequacy of existing and planned laws in their social context. By maintaining the established rules while granting exceptions in individual cases, the legislator can observe whether general provisions yield the desired policy outcome when applied to the innovation and at which point it is necessary to amend them. The legislator can provide regulatory flexibility and generate empirical evidence about innovations in individual cases while maintaining the current rules in general. In other words, regulatory sandboxes allow legislators to adopt a 'wait and see' approach and use this opportunity to generate more robust regulatory knowledge.
However, regulatory sandboxes are not the first legal tool designed for that purpose. Similar mechanisms, like clinical investigations under EU medical device law, have existed for some time. Because mechanisms such as medical trials or clinical investigations are mainly established in areas that typically deal with new technologies and trade-offs between innovation and competing legal interests, the main innovative value of regulatory sandboxes lies in the possibility of copying those legal concepts and pasting them into other areas of the law. As real-world experimentation expands beyond its traditional domains, so can, and should, the legal mechanisms that govern it.
Finally, regulatory sandboxes are not a panacea. Questions of how to distribute competence between the EU and member states or how to allocate liability among various actors do not vanish, and new problems arise, for example in terms of equal treatment or informed consent. Although none of these problems is unsolvable, they require careful consideration and balancing of various interests. Therefore, regulatory sandboxes in general, and the framework proposed by the AI Act in particular, are a promising legal innovation, but they are not an easy fix for any, let alone many, conceptual problems of innovation law.