Rule-based versus AI-driven benefits allocation: GDPR and AIA legal implications and challenges for automation in public social security administration

ABSTRACT This article focuses on the legal implications of the growing reliance on automated systems in public administrations, using the example of social security benefits administration. It specifically addresses the deployment of automated systems for decisions on benefits eligibility within the frameworks of the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AIA). It compares how these two legal frameworks, each targeting different regulatory objects (personal data versus AI systems) and employing different protective measures, apply to two common system types: rule-based systems utilised for making fully automated decisions on eligibility, and machine learning AI systems utilised for assisting case administrators in their decision-making. It concludes on the combined impact that the GDPR and the AIA will have on each of these types of systems, as well as on differences in how these instruments determine the basic legality of utilising such systems within social security administration.


Introduction
There are today plenty of examples showing that public administrations rely heavily on automated systems to make or support legal decision-making, even where there is a risk that vulnerable groups in society are affected, and where malfunctions or inbuilt biases in automated systems have had detrimental effects at scale.
One sector which in many countries has become a focal area for various automation efforts is social security administration. Done 'right', the administration of social security benefits obviously benefits from fast and expedient automation (serving both fiscal and individual interests). However, within this sector, several instances demonstrate that the deployment of, and dependence on, automated systems for administering or supporting social benefits have affected the legality and fair distribution of benefits, sparking a crisis in legitimacy. This places considerable demands on the authorities to ensure the lawful deployment of automated systems that are utilised in benefit administration. The GDPR and the AIA are the two key regulatory frameworks at EU level which intersect with the deployment of automated systems in public case administration. While they share some of their regulatory aims relating to the protection of human and fundamental rights and have points of intersection in the realms of data usage, they are primarily centred around different regulatory objects. As the primary regulatory objects of the GDPR are personal data, the regulation also primarily establishes conditions for acts of processing such data. As the primary regulatory objects of the AIA are AI systems, this regulation, instead, primarily establishes conditions for the design and use of such systems.
These differences, when translated into the specific criteria governing applicability and shaping the extent of obligations set forth by each instrument, lead to variations in the impact that they have (both in isolation and combined) on the legal conditions for public social security administrations to utilise technologies in their case administration of benefits. This article will demonstrate these differences through an analysis of their implications for two prevalent types of automated systems that are commonly deployed within social security benefits administration.
As indicated, the article will be structured around two example types of automated systems commonly deployed within social security administrations. Example type A is a so-called rule-based system (meaning that the system operates based on predefined, static rules and criteria) which can make fully automated decisions on benefits eligibility. Example type B is a so-called machine learning AI system (meaning that it is a data-driven model for pattern recognition) that is used to make inferences from the data it processes and guide decision-making administrators on how to decide on eligibility. The basic legal conditions for deploying each of these system types, as laid down through the GDPR and the AIA, will be analysed in the following sections in consecutive order. I will then, lastly, turn to drawing some conclusions on how the differences in their regulatory approaches impact the basic legality of A and B type systems respectively.

System type A – 'rule-based' systems used to make fully automated decisions on benefits eligibility
Systems used for making fully automated decisions on benefits eligibility are typically of the so-called rule-based type, meaning that they operate based on predefined, static rules and criteria which have been coded by humans.
For such systems to be able to produce lawful decisions, one key issue is ensuring that the eligibility criteria as well as applicable procedural requirements are translated into code in such a way that they generate full correspondence with the law. A defining characteristic is that systems of type A, due to their static properties, remain strictly confined to their prescribed instructions, and that this extends both to the types of data they handle and to the deductions they make from this data. A practical example of a type A system can be found in the Swedish social security setting and the administration of parental benefits, where the utilisation of a rule-based system allows the Swedish Social Insurance Agency to decide around 65-71 percent of the cases fully automatically (Swedish Social Insurance Agency [Försäkringskassan], 'Försäkringskassans Årsredovisning 2022' (2022), 36). This system is programmed to automatically and consecutively check each benefit criterion against a defined set of case evidence, draw conclusions on eligibility and effectuate a decision based on that conclusion, as well as issue a decision and notice to the claimant.
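Purely as an illustration of the rule-based logic described above, the core of such a system can be sketched as a sequence of predefined checks. All criteria, names and thresholds in the sketch are hypothetical and are not drawn from Swedish parental-benefits law.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """Hypothetical case evidence for a benefits claim."""
    is_insured: bool          # claimant covered by the scheme
    child_age_months: int     # age of the child in months
    days_already_used: int    # benefit days consumed so far
    application_signed: bool  # procedural requirement

# Predefined, static rules coded by humans: each entry mirrors one
# (hypothetical) eligibility criterion and is checked consecutively.
RULES = [
    ("claimant insured",       lambda c: c.is_insured),
    ("child under 12 years",   lambda c: c.child_age_months < 144),
    ("benefit days remaining", lambda c: c.days_already_used < 480),
    ("application signed",     lambda c: c.application_signed),
]

def decide(claim: Claim) -> tuple[bool, list[str]]:
    """Check each criterion against the case evidence; any failed
    rule makes the claim ineligible, and the failed rules are
    recorded so the decision notice can state the reasons."""
    failed = [name for name, rule in RULES if not rule(claim)]
    return (len(failed) == 0, failed)

eligible, reasons = decide(Claim(True, 30, 100, True))
```

The point of the sketch is the defining characteristic noted above: the system remains strictly confined to its coded instructions, both as to the data it considers and the deductions it draws, which is why full correspondence between the coded rules and the statutory criteria is the decisive legality issue.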

Ramifications of Article 22 GDPR for type A systems
When turning to the regulatory frameworks that pertain to systems of type A, a suitable starting point is that their use triggers the application of Article 22 GDPR. This article governs cases where personal data, such as the case evidence in a benefits claim, are processed to make solely automated decisions which produce legal effects concerning the data subject, by imposing certain conditions on such use. (I will here presume that all data from individual claimants' case files that are fed into an automated system making individual decisions on eligibility for social security benefits qualify as personal data under the extensive definition laid down in Article 4(1) GDPR.) There is much discussion around the meaning of Article 22, where one of the debated questions has been whether solely automated decisions are to be interpreted as being prohibited, or whether the article rather regulates a right not to be subject to such decisions which must be invoked by the data subject him- or herself. That the former interpretation is the valid one has been clear since the Court of Justice of the European Union (CJEU) in December 2023 gave its ruling in Case C-634/21 OQ v Land Hessen (EU:C:2023:957, para 52). However, even though the CJEU has finally answered this question, and as is relevant for this article, it should be noted that the article does not impose a blanket ban on automated decision-making. The article, namely, contains opening clauses which allow for such decision-making under specified circumstances. For public authorities, such as most social security administrations, the relevant derogation is found in Article 22(2)(b) GDPR, which establishes that solely automated decisions may be made if (a) they are authorised by Union or Member State law and (b) that law also lays down suitable measures to safeguard the data subject's rights and freedoms and legitimate interests.
Additionally, if the data qualifies as special category data under Article 9 GDPR (that is, data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, genetic or biometric data, data concerning health or a natural person's sex life or sexual orientation), Article 22(4) GDPR lays down a qualified prohibition (which thus applies even if the Article 22(2)(b) derogation applies) against solely automated decision-making based on special category data. However, this additional prohibition also allows for derogations, as relevant in this article, if the processing is necessary for reasons of substantial public interest under Article 9(2)(g) GDPR on the basis of Union or Member State law. (The other derogations allow for such processing if necessary for entering into or performing a contract, Article 22(2)(a), or if it is based on the data subject's explicit consent, Article 22(2)(c) GDPR. None of these derogations will typically enable processing by public authorities, not least since the GDPR, in Recital 43, indicates a hesitant position towards whether consent can be freely given due to the power imbalances at play where the controller is a public authority.) The structure of Article 22 is thus rather complicated but contains opening clauses that refer to Union or Member State law for determining when solely automated decisions can be made. As public social security administrations generally operate based on obligations laid down in law, where the specific eligibility criteria of public social security benefit schemes are also laid down in law, these clauses (from a GDPR perspective) therefore seemingly open the gates rather wide for fully automated decision-making as long as there is a legal basis. One basic condition is nevertheless that any specific Union or Member State laws invoked under a GDPR opening clause must align with human and fundamental rights as laid down in the European Convention on Human Rights (ECHR) and the EU Charter of Fundamental Rights (CFR). As established in CJEU case law, Member States who exercise options granted by a GDPR opening clause must also use their discretion under the conditions and within the limits laid down by the provisions of that regulation, and must therefore legislate in such a way as not to undermine the content and objectives of that regulation. In the recent Case C-634/21 OQ v Land Hessen, the court stressed in particular that Member States may not adopt legislation under Article 22(2)(b) authorising profiling without respecting the requirements of Articles 5 and 6, as interpreted by the case-law of the CJEU. The same goes for legislation allowing for automated decision-making based on the processing of special category data under the qualified prohibition in Article 22(4) GDPR.
The court thus clearly reinforced that national legislation installed under the Article 22(2)(b) opening clause remains subject to scrutiny under the fundamental principles of Article 5 GDPR, such as lawfulness, fairness, transparency, purpose limitation, data minimisation etcetera. It also made clear that national legislation cannot disregard the fact that any processing of personal data must satisfy at least one of the legal bases for processing personal data under Article 6 GDPR. Altogether, these limitations on the space for manoeuvre offered by GDPR opening clauses such as Article 22(2)(b) GDPR mean, among other things, that proportionality considerations become central to assessing whether national law can provide a basis for GDPR-compliant derogations. A linked question here is also what degree of specificity a Union or Member State law must have to allow for an Article 22(2)(b) derogation. Here, Recital 45 clarifies that a specific law for each individual processing is not required and that it is for the Union or Member State law to determine the purposes of processing in such cases. This means that a specific statutory power to process personal data is not required, although the underlying task, function or power must have a clear basis in law. The recital also makes clear that purposes of public health, social protection as well as the management of health care services are considered as being in the public interest (thus clarifying the public interest status of social security in a broad sense). The recital does not refer to Article 22, either directly or by implication. Instead, it addresses opening clauses related to public interests, just like those found in Article 22. It could therefore be inferred that the recital's clarifications have bearing on the interpretation of the opening clauses pertaining to 'public interests' in Article 22 GDPR.
Article 22 GDPR thus lays down an obligation to establish a legal basis for any public decision-making that is solely automated, while the language and structure of the article still seem to invite discussion around its specific interpretation regarding the generosity of the opening clauses. The same is also true for what safeguards must be in place for the derogations from the prohibition on solely automated decision-making to apply. For instance, Sweden has chosen to incorporate a broad provision into its Administrative Procedures Act that covers most public decision-making (Section 28 of the Swedish Administrative Procedures Act (2017:900)). This provision simply states that decisions can be made automatically, without specifying further criteria for when such practices are considered lawful. It is worth noting that these automated decisions are subject to general safeguards outlined in the same regulation, which apply regardless of whether the case is handled through automation or manually. The national legislator has viewed this approach as meeting the safeguarding requirements of Article 22 GDPR. Nevertheless, ongoing debates question whether such a generalised provision can justify an exemption under Article 22(2)(b) GDPR, and whether one can be confident that technology-neutral safeguards are adequate to fulfil the safeguarding requirements throughout the various instances of fully automated decision-making that may take place across the public sector. Advocate General Priit Pikamäe's opinion in Case C-634/21 OQ v Land Hessen indicates that there must, at the very least, be alignment between the scope ratione materiae of the national regulation and Article 22 GDPR, meaning that regulations designed for overly broad purposes cannot serve as a legal basis for the adoption of a national legislative measure under Article 22(2)(b).
In its judgement, the CJEU did not specifically address this aspect of the Advocate General's opinion. However, the court confirmed the spirit of this reasoning through its emphasis on the obligation to make a careful legality assessment, which must be able to identify the personal data processing that the legal basis enables, as well as the security measures linked to that processing, in order to enable an assessment of whether the processing meets the requirements laid down in, and pursuant to, the constitutional order of the Member State concerned. Such measures must, however, be clear and precise, and their application foreseeable, in accordance with the case-law of the CJEU and the ECtHR. Since such an assessment is only possible if the national legal basis has a sufficient degree of specificity, the judgement emphasises the precision aspect of the legality requirement. While this reasoning in itself does not imply a requirement of explicit links between the national legislation and Article 22 GDPR (or the GDPR as a whole), it stresses that legislation intended to allow exceptions to the article's prohibition on automated decision-making cannot be designed for overly broad purposes.
Future CJEU case law will likely clarify the contours of Article 22 in further detail. Its design, however, makes clear that social security administrations utilising systems of type A must clearly delineate the statutory support on which they base their automated decision-making, while that support need not explicitly reference the GDPR. A general regulation authorising automated decision-making cannot therefore alone justify an exception to the prohibition in Article 22(1) GDPR; the regulation must also (alone or in combination with other regulations) clarify the conditions of the specific personal data processing to be carried out as well as the safeguards provided for. A transition from manual to automated decision-making does not, however, inherently necessitate active legislative action by national lawmakers to address issues of permissibility or specific safeguards. Existing legislation that was not designed with a specific decision-making process in mind may suffice. For public authorities, like social security administrations, operating under Union or Member State law, Article 22 GDPR therefore sets thresholds for the alignment of these regulations rather than providing its own specific criteria.

GDPR lawfulness of utilising type A systems in benefits administration
Outside of Article 22, the GDPR also applies to the processing of personal data which systems of type A will perform when in use. The basic lawfulness criterion for data processing is found in Article 5(1)(a) GDPR and can only be met if at least one of the legal bases enumerated in Article 6 GDPR is satisfied. Furthermore, if the data qualifies as special category data (that is, data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, genetic or biometric data, data concerning health or a natural person's sex life or sexual orientation), one of the legal bases enumerated in Article 9 must additionally be met (as the article prohibits the processing of such data unless an exception applies). Again, the fact that social security administration qualifies as a public interest means that the lawful bases for processing personal data that are utilised in automated decision-making procedures are found in those provisions of Articles 6 and 9 GDPR that relate to processing of personal data for public interests. In the case of a type A system, the most relevant lawful bases would here be Article 6(1)(e), which allows processing necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, and Article 9(2)(g), which allows for processing of special category data if necessary for reasons of substantial public interest. For both articles, the basis for such processing must be laid down in Union or Member State law (as per Articles 6(3) and 9(2)(g)), serving dual purposes for the legality assessment in the GDPR. At the Member State level, the fact that the processing must have a basis in law means that national lawmakers who wish to utilise a GDPR opening clause must make explicit in the legislation the tasks which may necessitate the processing of personal data.
As already touched upon above, such regulations must be legitimate, necessary in a democratic society and proportionate according to the standards set by the ECHR and the CFR, as well as align with the content and objectives of the GDPR as the regulation containing the opening clause. Through a rather open-ended formulation in Recital 93, stating that laws which delegate tasks to be carried out in the public interest or in the exercise of official authority may deem it necessary to carry out an impact assessment before any processing activities are started, the GDPR also encourages situated proportionality considerations at the legislative level to secure such compliance. Against the background that automated decision-making is often justified by an efficiency rationale, it may also be noted that a lack of resources, according to the CJEU, cannot in any event constitute a legitimate ground justifying interference with the fundamental rights guaranteed by the CFR. Altogether, this limits the discretion for national lawmakers' utilisation of the opening clauses, while still leaving much room for different legislative approaches to automated decision-making based on personal data. However, it is important to note that compliance with Union or Member State legislation corresponding to one of these opening clauses does not guarantee the lawful processing of personal data. This is because at the applied level, where social security administrations act as data controllers and must determine whether personal data can be processed for specific purposes, such as implementing automated decisions using type A systems, the scope and substance of that legislation also serves as the benchmark to assess whether the specific processing sought by the controller qualifies as necessary (by the standards of Articles 6(1)(e) and 9(2)(g) GDPR).
National social security administrations, when assessing the lawfulness of the specific personal data processing activities resulting from the use of a type A system, would ideally assess the compliance of that processing both at the legislative and the applied level. However, it is more likely that they will typically focus on the latter assessment, where they must determine whether their data processing is necessary in relation to their tasks laid down in law, where the concept of necessity should be narrowly construed in favour of the data subject, and where derogations and limitations in relation to the protection of personal data only apply in so far as is strictly necessary. The necessity assessment at this stage is also to be read in conjunction with the 'data minimisation' principle of Article 5(1)(c) GDPR, which emphasises the proportionality principle. This means that social security administrations in their roles as controllers must balance these considerations at the detailed level even if there is legislative support at Union or Member State level.
While securing a legal basis for the processing of personal and special category data is fundamental for securing the lawful deployment of an automated decision-making system of type A, the GDPR also imposes several additional obligations that social security administrations must attend to when deploying such systems. All of these cannot be elaborated here, but it is worth highlighting those obligations that may be considered particularly relevant in contexts where personal data are processed with the aid of technologies in public benefits administration. I will here prioritise those facets of the GDPR that extend beyond the scope of data processing alone and consider the broader potential impacts that technologies may introduce to the processing.
Even where a legal basis for processing has been established, the Article 5 GDPR principles relating to the processing of personal data circumscribe the administration's possible use of such data. Particularly, the principles of fairness and transparency, purpose limitation, and data minimisation play a significant role in addressing issues related to discrimination and bias, data excess, and data overuse. These principles are 'active' principles in the sense that they must continuously be considered and met. They may also give rise to specific challenges, especially if the automated system is configured to process more and different types of data than would have been considered in a fully manual process. However, a common characteristic of rule-based systems is that they adhere to strictly predefined rules, especially when making fully automated decisions, ensuring that they are not programmed to consider data beyond what is relevant to the specific eligibility determination. Under the assumption that type A systems are generally not configured to consider excessive amounts of data, an (indeed very general and broad) assertion could be made that type A systems typically do not cause added tension, at least with the principles of data minimisation and purpose limitation. However, there will be reason to return to these principles further on. The GDPR obligations framework is built around a risk-based approach. When public social security administrations utilise type A systems for making decisions based on personal data, they act as controllers under the GDPR (Article 4(7)), with the determination of purposes being the decisive element. As controllers, they are mandated to assess the risks associated with specific data processing activities and take appropriate measures to protect individuals' rights and freedoms.
They must therefore implement appropriate technical and organisational measures to ensure a level of security for the data which is appropriate to the risk (Article 32 GDPR). This requirement has been made relative to 'the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons', highlighting its dual focus on risk assessment and goal attainment. Taking note of the fact that it includes a responsibility to have in place a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing, the requirement assumes a fairly comprehensive approach where data protection and technological affordances must be conjointly considered.
In their capacity as controllers, social security administrations considering deploying a type A system must typically also perform an impact assessment under Article 35 GDPR. This article calls for such an assessment to be made where a type of processing, in particular using new technologies, is likely to result in a high risk to the rights and freedoms of individuals. The assessment should be made considering the nature, scope, context and purposes of the processing, and it must be made before the system is put into use. Article 35 here, as indicated also in Recital 36, prescribes a two-step assessment process. The initial step requires the controller to determine if the intended use of the system, concerning its personal data-related operations, triggers the application of Article 35. At this stage, the key question is whether the processing carries significant risks. It should therefore be noted that Article 35(3)(a) holds that a data protection impact assessment (DPIA) is in particular required in the case of a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person. Similarly, the Article 29 Working Party guidelines on DPIAs (as endorsed by the European Data Protection Board, EDPB) have identified that automated decision-making with legal or similarly significant consequences serves as an indicator that the processing is likely to qualify as high risk. Social security administrations considering deploying type A systems are thus likely obligated to carry out a DPIA. As will be elaborated, however, there are some possible exemptions to this obligation.
A DPIA can cover a single data processing operation but can also address multiple similar processing activities that present similar high risks. As indicated in Recital 92 GDPR, this might be the case where public authorities or entities aim to create a shared application or processing platform. One DPIA can therefore cover processing activities with common characteristics, aiming to systematically evaluate situations posing significant risks to individual rights and freedoms, rendering a new assessment unnecessary where similar technology is used to collect the same data for identical purposes. Or, in other words, a DPIA may be omitted if the processing closely resembles a processing operation covered by a previous DPIA. A DPIA is also not required if the national competent supervisory authority has utilised the Article 35(5) option to establish and make public a list of the kinds of processing operations for which no data protection impact assessment is required. This exemption, however, applies only if the processing strictly adheres to the specified procedure in the list and continues to meet all GDPR requirements. Another possible exemption from the DPIA obligation is found in Article 35(10) GDPR, which acknowledges that data processing activities which take place based on Union or Member State law might have already been subject to prior impact assessments in the context of the adoption of that legal basis, and that this circumstance might render a DPIA superfluous. As the 35(10) derogation refers to situations where such laws regulate 'the specific processing operation', however, this exception applies only in those cases where there is a specific legal basis targeting the processing performed by the type A system (so that the impact assessment performed during the legislative phase may have covered the more specific risks of system use). This means that social security administrations cannot escape performing a DPIA by relying on the assessment made at the legislative level unless there is close alignment with the regulation's purpose/s and the specific processing that they are to perform. If the legislation governing type A systems is not explicitly tailored for that particular application, then a DPIA must be performed before implementing such a system. Thus, while a shift from manual to automated procedures may not require legislative intervention, targeted legislative measures could eliminate the need for a DPIA. (It may also be noted that Article 36(1) GDPR mandates prior consultation with the supervisory authority when the DPIA indicates that the processing would result in a high risk in the absence of measures taken by the controller to mitigate the risk, and that Article 36(5) GDPR permits Member State legislation to demand that controllers engage in consultation with, and seek prior authorisation from, the national supervisory authority concerning data processing conducted for the purpose of fulfilling a public interest task, including processing related to social protection and public health.)
Even though these possible exemptions to the DPIA obligation have a design which is fairly accommodating towards personal data processing in the law-regulated public interest sphere, they all build on the presumption that a prior (but transferable) impact assessment has considered the types of risks also associated with the new deployment. One could make the argument that a type A system, while sharing many common risks with other systems used in public decision-making, also presents distinct risks related to the specific benefit it automates. This consideration aligns with the EDPB's viewpoint, emphasising the importance of conducting a DPIA when introducing a new data processing technology. Therefore, the assertion that such an introduction typically necessitates a DPIA remains valid. The EDPB's position
is also that when there is uncertainty about whether an obligation to conduct a DPIA applies, it should be carried out as a precaution. The proactive DPIA evaluations are, furthermore, closely linked to the GDPR's Article 25 principle of data protection by design and by default, as this principle centres around the idea that data protection compliance might best be helped if protective strategies are established and integrated in technical or organisational measures. It holds that the controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. This obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. Importantly, this duty applies prior to the initiation of processing practices. Consequently, the GDPR imposes specific requirements on systems of type A, emphasising the imperative of privacy-conscious design and the safeguarding of personal data throughout the entire processing lifecycle.
The main objective of the GDPR is to protect personal data as a proxy for protecting primarily the right to data protection, as laid down in Article 8 CFR and as read into Article 8 ECHR. The regulation is therefore not primarily geared towards regulating risks relating to automated systems which utilise personal data to make or support decision-making (in social security administration or elsewhere). In other words, and as eloquently put by Bygrave, the rules of data protection law do not engage directly with the processes involved in creating models, algorithms and other elements of inferential architecture.47 As seen in the design and objectives of Articles 25, 32 and 35, however, the GDPR requires proactive approaches to data protection. Especially through the principles of purpose limitation and data minimisation, the GDPR also signals a concern to ensure that data controllers duly reflect on the nature of the problems/tasks for which they process data, and on the quality (relevance, validity etcetera) of the data they process to address those problems/tasks.48 While collective interests are not the guiding principle of this protection, these requirements can contribute to a more careful selection and management process regarding personal data, which may impact the GDPR legality assessment of a system of type A. Essentially, the GDPR thus implies a consideration of collective risks stemming from automated processing procedures. Proactive protection methods demand a more inclusive, anticipatory approach to risk management, although the main focus remains on safeguarding the personal data as such.
However, another aspect of the GDPR is that it places responsibilities primarily on controllers and processors. This implies that when the controller has not been involved in the development of the system, the GDPR will not regulate it comprehensively. Instead, it will apply separately to each controller based on the processing they carry out. This means that even those obligations in the GDPR that have a more holistic and prognostic element to them, such as the obligation to make an impact assessment or to design systems according to the principles of data protection by design and by default, may be implemented in a way that is fragmented in relation to the final use and implementational setting of the system. So, when a public social security administration opts to purchase a type A system from a private actor, the GDPR compliance responsibility in the development phase lies with that private actor. It may, however, be noted that Recital 78 stresses that if public authorities (such as, here, public social security administrations) are procuring systems, they should consider the principles of data protection by design and by default.49 This implies a responsibility resting with public authorities to take overall responsibility for ensuring that data protection principles are backed and upheld even when the immediate responsibility rests upon another (private) actor. It should, nevertheless, also be stressed that systems such as those of type A, which are intended for deployment within highly specific as well as regulated areas like social security administration and decision-making, are likely to have at least partially been developed or customised by the responsible authority. In such cases, the GDPR will have more leverage over the development phase of such systems in relation to what the systems will ultimately be used for.

Applicability of the AIA for type A systems
What about the AIA for type A systems, then? Rule-based systems might rely on AI technologies to process information, meaning that they are not a mutually exclusive category in relation to AI systems. However, it seems likely that most rule-based systems coded to make fully automated benefit decisions have completely, or at least clearly dominant, elements of predefined rules and logic.50 The AIA definition of AI aligns with the revised approach and definition proposed by the OECD in November 2023, which covers machine-based systems that, for explicit or implicit objectives, infer, from the input they receive, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. The Council's press statement (as an example) attributes this alignment to the clarity provided by the OECD definition in distinguishing AI from simpler software systems, marking a wish to avoid the inclusion of less advanced systems that have been deployed for a long time.51 Under the AIA, the determination of whether a rule-based system meets the AI definition will clearly necessitate a detailed assessment of the specific system's technical configuration. For the purposes of this article, however, it seems likely that type A systems will not trigger the application of the AIA, and the following analysis will proceed on that presumption. Consequently, the regulation's scope will not be further discussed in relation to type A systems.

Summary remarks
In summary, the utilisation of systems of type A in social security benefits administration triggers the applicability of the GDPR but is not likely to trigger the AIA. In the GDPR, the public interest nature of the task opens its provisions to a rather large degree of national discretion, by allowing for the installation of specific national provisions which can qualify the processing as lawful under the general provisions outlined in Articles 6 and 9, as well as for solely automated decision-making under Article 22. This approach allows for national flexibility through alignment with national legal frameworks on social security, and thus recognises the often context-specific considerations involved in the administration of social security benefits. Framed from another perspective, however, the approach also allows for the persistence of regulatory diversity across Member States. This means that the harmonising influence of the GDPR on the permissibility of, and safeguards for, automated decision-making may not be as strong within the field of social security administration. The GDPR impact for those administrations shifting from manual to automated decision-making procedures with the aid of type A systems, in terms of the basic legality of such practices, is thus fairly low as long as there are eligibility conditions laid down in law (enabling an assessment of the necessity of the processing) and as long as automated decision-making is a lawful practice.

System type B: machine learning AI systems used to support manual decisions on benefits eligibility
As indicated in the introduction, it is also possible to discern a trend of increasing curiosity and innovation regarding the utilisation of machine learning AI systems within the social security realm, here called type B systems. While these types of systems do share some merits and challenges with the rule-based systems of type A, their quintessentially different operational logics also set them apart in terms of their aptness to mimic legal reasoning and successfully execute lawful decisions. Machine learning systems draw conclusions based on statistical data and can adapt their logic to improve accuracy. Instead of being premised on the manual translation of rules into code, machine learning systems identify patterns which they learn from data such as, for example, past decisions, judgments, or case evidence. Machine learning AI systems thus function by making statistical inferences rather than operating on subsumption logics.52 It should be pointed out that 'AI', or 'machine learning', is not one monolithic block, but may include a diverse range of techniques, algorithms, and approaches.53 Their inherent opacity, and a functional logic that contrasts with legal reasoning at both an ontological and an epistemological level, however, generally make machine learning AI systems riskier to utilise in contexts where regulations are to be applied, as it is difficult to ensure that the system does not take account of circumstances that go beyond what is legally relevant, and thus also difficult to ensure that the system makes judgements that align with legal reasoning and legal criteria. Consequently, in public decision-making practices such as social security benefits allocation, which are regulated by law as well as meant to ensure lawful exercises of power, machine learning AI systems are more often used to assist administrative tasks rather than to make fully automated decisions.54
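The contrast between subsumption logic and statistical inference described above can be illustrated with a minimal sketch. All criteria, figures and past cases below are invented purely for illustration and correspond to no actual benefit scheme; the 'learner' is a toy nearest-neighbour rule standing in for a trained model:

```python
def rule_based_eligibility(income: float, months_insured: int) -> bool:
    """Type A logic: predefined legal criteria translated directly into code."""
    # Hypothetical criteria, invented for illustration only.
    return income < 2000 and months_insured >= 12

# Type B logic: no coded rules; a model infers a decision boundary
# statistically from past cases instead.
past_cases = [
    ((1500, 24), True),   # (income, months insured) -> benefit granted?
    ((1800, 14), True),
    ((2500, 30), False),
    ((1900, 6), False),
]

def nearest_neighbour_prediction(income: float, months_insured: int) -> bool:
    """Predicts by imitating the most similar past case, not by subsumption."""
    def distance(case):
        (ci, cm), _ = case
        # Crudely normalised squared distance over the two features.
        return ((ci - income) / 1000) ** 2 + ((cm - months_insured) / 12) ** 2
    return min(past_cases, key=distance)[1]

print(rule_based_eligibility(1500, 24))        # True: rule applied by subsumption
print(nearest_neighbour_prediction(1500, 24))  # True: pattern inferred from data
```

The rule-based function can be audited directly against the legislation it encodes, whereas the statistical predictor's output depends entirely on which past cases it was shown — the root of the opacity and bias concerns discussed in this section.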
One example of a type B system can be found in the US setting, where the so-called Insight system helps administrators analyse draft decisions on eligibility by identifying and directing their attention towards flagged potential quality issues in the draft. Insight applies natural language processing, which is an AI technology, to extract information from a written decision and combines it with structured data from workload systems to apply both rule-based and probabilistic machine learning algorithms; it is thus based on a combination of rule-based and machine learning technologies.55 Another example can be taken from the Swedish social security context, where a machine learning AI system called SAMU, which is also based on natural language processing technologies, is deployed to help administrators direct their attention towards passages in medical certificates which are relevant for assessing claimants' work ability based on criteria related to their functioning, disability, and health (which are determining factors for sickness or activity compensation benefits).56

Ramifications of Article 22 GDPR for type B systems
As systems of type B do not make any decisions, they do not trigger the application of Article 22 GDPR. It is, however, worth noting that Article 22's concept of 'decision' is not confined to the strictly public sphere, which raises the question of whether positions taken by an automated system that do not qualify as a final decision in the sense of administrative law can also constitute a decision under the article.57 It is also worth noting that when the elements of human supervision over an otherwise fully automated process are minimal, such as when a machine learning AI system generates a preliminary decision that a human regularly approves and implements, it may still qualify as solely automated within the meaning of Article 22.58 The GDPR thus expresses a functional rather than strictly technical view of solely automated decision-making. Even so, the types of tasks that our example systems of type B perform are unlikely to trigger the application of Article 22. This is because the involvement of human administrators that is required to assess and balance the information flagged or recommendations made by the system is quite substantial, meaning that the system's engagement with the final decision-making is likely too indirect to trigger the article.

GDPR lawfulness of utilising Type B systems in benefits administration
Just as in the case of type A systems, the general GDPR provisions that govern the processing of personal data apply to type B systems. As these latter systems rely on machine learning technologies, this renders their operations and functionality data-driven in a two-pronged sense. First, their functionality hinges on a training phase, during which the system utilises data to learn and to adapt its operational logic.59 Second, their functionality hinges on an operational phase, in which the system utilises its learned insights to make real-time predictions based on new data inputs. The data utilised in each of these phases might be personal data, and the legal conditions for processing these data might differ depending on the phase during which the processing is carried out.

54 Alon-Barkat and Busuioc (n 50) 153.
In the training phase, type B systems typically require training with data that meet the criteria of personal data in order to effectively evaluate similar data during the operational phase.60 Other aspects are that a considerable amount of training data is typically crucial, and that there might be a need for bias-conscious data selection practices to avoid the systems replicating biases in the data. This raises important questions about whether and how the GDPR addresses these likely features of type B systems. Here it may, initially, be noted that the GDPR does not contain any specific provisions for training data. In the determination of a legal basis for processing training data which qualify as personal data, the general provisions in Articles 5, 6 and 9 GDPR will therefore be the starting point. The specific legal basis for such processing may also depend on whether the system is developed by a private entity or a public authority, such as social security administrations themselves. If the system is developed by a private company, the options may include relying on legal bases such as legitimate interests or consent, which are applicable in the private sector (as opposed to the public sector).61
Since the primary emphasis of this article revolves around the fundamental legality of public social security administrations using type B systems, I will refrain from delving into the potential legal foundations for private entities. Also, it is worth noting that even in those cases where social security administrations purchase 'pre-trained' type B systems, they may often need to train them further themselves to fine-tune the system's functionality for the specific domain use. So, in cases where the type B system is either fully developed or further trained by public social security administrations themselves, the legality of processing personal data during the training phase should be evaluated in accordance with Article 6(1)(e) for personal data and Article 9(2)(g) for special category data. And, as already noted, both these bases contain opening clauses which refer to a further legal basis in either Union or Member State law.
As the training data often consist at least partially of data collected from previous cases, their processing gives rise to questions about compliance with the GDPR purpose limitation principle, which requires that data should not be further processed in a manner that is incompatible with the initial specified, explicit and legitimate purposes for which they were collected, Article 5(1)(b).62 However, the tension arising from the purpose limitation principle is mitigated in those cases where the further processing of data aligns with Union or Member State laws, as Article 6(4) makes it clear that such processing does not conflict with the principle. This reduces the principle's impact within public sector applications, such as those in social security administration.63 There must, however, exist a clear legal mandate that requires or allows for the new processing. In other cases, the new purposes for processing must pass the test for being compatible with the initial purposes.64 Furthermore, the principle of fairness in Article 5(1)(a) GDPR also strives to combat discriminatory practices, thereby indirectly mandating a thorough examination of the training data for discriminatory biases that could potentially harm data subjects during its processing.65

59 P Hacker, 'Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law' (2018) 55 Common Market Law Review 1143, 1146 f.
60 It may be noted that training data might often consist of mixed datasets containing both personal and non-personal data, potentially leading to the inference of personal data from non-personal data through the combination of different data.
61 It may be noted that public authorities are not formally precluded from basing their processing on consent, but that the GDPR, as stated in note 14, takes a restrictive view of this possibility due to the unequal balance of power in play.
However, the precise ramifications of the fairness principle concerning bias remain uncertain, as both the GDPR and CJEU case law lack specific guidance in this regard. Notably, the EDPB, in its binding decision on the dispute submitted by the Irish SA regarding TikTok Technology Limited, has recently stressed that the GDPR fairness principle should be construed as an independent ground of possible GDPR infringement.66 By stressing that fairness should also protect against processing practices that are detrimental and discriminatory to the data subject, the EDPB construed fairness as a substantive principle that extends beyond mere informational fairness. In this broader context, the ramifications of fairness are not limited solely to transparency (which demands that data subjects are not deceived or misled about the processing of their data).67 When construed in this substantive way, the GDPR, by proxy of the fairness principle, holds governing potential in relation to bias and the design of automated systems, and places a general obligation on social security administrations to ensure equitable treatment through their data practices. Additionally, the sheer volume of training data commonly used, and its alignment with the data minimisation principle in Article 5(1)(c), is another crucial consideration.68 One GDPR challenge here is that washing training data of bias often presupposes the processing of data which qualify as special category data under Article 9 GDPR, and which therefore are subject to a presumption of prohibition of processing unless (as noted being of relevance in this article) there is a basis in Union or Member State law that allows for a derogation to be made. According to van Bekkum and Zuiderveen Borgesius, no national lawmaker in the EU, nor the EU itself, has yet adopted a specific law that enables the use of special category data for auditing AI systems.69
In contrast, however, since public social security administrations engage in activities based on legal mandates, and as the GDPR does not mandate explicit references to Union or Member State laws for them to qualify under an opening clause, a pertinent question arises: can a statutory obligation to ensure the legal and efficient administration of benefits, or an obligation to meet information security requirements, serve as a (Member State level) legal basis for processing training data within the GDPR? As an example, the Swedish standpoint has been that testing activities are typically seen as an essential administrative measure required to facilitate the fulfilment of an authority's statutory duties, and that therefore no explicit mandate for testing activities is deemed necessary.70 Given that the latter type of interpretation would relax the GDPR's impact on training data utilisation in regulated public sector settings, and given the discussion in section 2.1 on the CJEU's emphasis that national regulations must align with the fundamental data protection principles in Article 5 GDPR as well as the legal bases for processing in Article 6 GDPR, the national regulatory mandate would at least need to be sufficiently precise to determine what personal data processing is authorised, for what reasons, and what safeguards are in place. Clarifications in future case law would be valuable here.
Also, given these typical features, and recognising that both the purpose limitation and data minimisation principles should be interpreted narrowly for special category data, it is worth noting that Article 10(5) AIA will introduce a specific authorisation for processing special category data where strictly necessary for the purposes of bias detection and correction in high-risk AI systems. A condition for such processing is that the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or anonymised data. This authorisation will also come with obligations to secure safeguards, including technical limitations and state-of-the-art security and privacy-preserving measures, such as pseudonymisation or encryption. This provision thus denotes an attempt to address the paradox that arises from the fact that high-risk AI systems might need to process vast amounts of special category data to function properly and fairly, and that the processing of such data thus might be needed in order to protect that same data or other future data.71 With the entry into force of the AIA, the legal landscape for GDPR-compliant training of AI systems such as type B systems thus seems set to improve. However, until a precise interpretation of the term 'strictly necessary' is established in this context, the AIA provision also poses difficulties for social security administrations aiming to ground their processing of special category data on this provision.72

69 M van Bekkum and F Zuiderveen Borgesius, 'Using Sensitive Data to Prevent Discrimination by Artificial Intelligence: Does the GDPR Need a New Exception?' (2023) 48 Computer Law & Security Review 105770, 7 f.
70 Swedish Government Bill, Prop. 2019/20:113 (2020), 19 f.
71 van Bekkum and Zuiderveen Borgesius (n 68), 9.
Of note is also that Recital 45b AIA recognises that such special category personal data processing (exceptionally, and to the extent that it is strictly necessary) could be done by providers to ensure bias detection and correction for high-risk AI systems as a matter of substantial public interest within the meaning of Article 9(2)(g) GDPR.

Following strategies for ensuring access to vast amounts of data, as well as strategies to implement the data protection by design and by default principles under Article 25 GDPR, it seems that it has also become more common to generate and utilise so-called synthetic data during the training phase. The Swedish Social Insurance Agency, for example, has used a combination of personal and synthetic data to train the so-called SAMU system mentioned in the chapter introduction.73 Synthetic data refers to artificially generated data that mimic the characteristics of real data but do not directly correspond to any specific individual's personal information. Since synthetic data are not derived from actual individuals and do not contain real personal data, it might be argued that they fall outside the scope of the GDPR. This, however, depends on whether the data in combination with other data might allow for identification through inference.74 As discussed by, amongst others, Bygrave, the GDPR, as it only applies to personal data, leaves aggregate or group data that cannot be readily linked to a particular identifiable individual outside of its ambit.75 This circumstance weakens the potential for the GDPR to protect collective entities from collective risks. As Bygrave also points out, however, the GDPR definition of 'personal data' is expansive, with the main emphasis placed on 'identifiability', where case law for example has made clear that combinations of data sets can render the data identifiable even in cases where the controller is not in control of all the data needed to achieve identification.76 Emphasis is thus placed on the technical possibility of identifying someone through the combination of data, rather than on whether it is likely that such efforts will be made.

Hence, as previously mentioned, the training phase of a type B system typically demands substantial data, setting it apart from type A systems in this regard. However, when focusing on the system's operational phase, it is less certain that the system will require the processing of more or different data compared to a type A system, or that it presents privacy issues that are truly distinctive. Take, for instance, systems like Insight and SAMU mentioned earlier, which analyse decision drafts or the content of medical certificates serving as evidence in individual cases. These systems do not necessarily require more input data to process than what is available for each specific case. Consequently, the considerations that social security administrations need to make when assessing the conditions for personal data processing in the operational phase resemble those required for a type A system, as discussed in the preceding section.77 From a GDPR perspective, automated processing occurs regardless of the specific technology or technologies on which the system is constructed, and regardless of whether the data are processed as part of an automated decision-making procedure or for other reasons.

72 It is worth mentioning that the AIA provides certain options for creating AI systems within regulatory sandboxes (controlled environments where innovative technologies can be tested under relaxed regulations), which public administrations like social security authorities can also employ. However, the details and boundaries of these regulations will not be explored further here.
73 personal-data-always-personal-case-t-557-20-srb-v-edps-or-when-the-qualification-of-data-depends-on-who-holds-them/> accessed 8 December 2023.
77 Those GDPR provisions which in section 2.2 were referenced as having a more holistic and prognostic element to them, such as the DPIA obligation or the principles of data protection by design and default, will apply.
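The notion of synthetic data discussed above can be made concrete with a minimal sketch. The attribute, values and fitting method below are invented purely for illustration, and imply nothing about how any named agency actually generates its synthetic data:

```python
import random
import statistics

random.seed(42)  # makes this illustrative run reproducible

# Invented source attribute (e.g. claimants' ages); no real personal data.
source_ages = [34, 41, 29, 52, 47, 38, 45, 31]

# Fit a simple parametric model of the source distribution.
mu = statistics.mean(source_ages)
sigma = statistics.stdev(source_ages)

# Draw synthetic records from the fitted model: they mimic the statistical
# characteristics of the source data but correspond to no specific individual.
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(1000)]

# Aggregate statistics are approximately preserved in the synthetic set.
print(round(statistics.mean(synthetic_ages), 1))
```

As the surrounding text notes, whether such generated records escape the GDPR turns not on the generation method itself but on whether combination with other data would allow re-identification through inference.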

Applicability and ramifications of the AIA for type B systems
Unlike systems of type A, type B systems, as they are based on machine learning technologies, will trigger the application of the AIA. Any system based on machine learning technologies will, namely, qualify as AI under Article 3(1) AIA. That the AIA will apply, however, does not mean that the full force of the regulation's obligations will apply to a specific system.
The AIA is built around an even more pronounced risk-based approach than the GDPR, where the strictness of the regulatory regime increases with the level of risk that the AI system is perceived to pose. The risk classification to which systems of type B are allocated therefore greatly affects the scope of the requirements imposed by the AIA on such systems. Of interest here is that Annex III(5)(a) AIA qualifies as high-risk those AI systems intended to be used by public authorities, or on behalf of public authorities, to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services. Recital 37 of the proposal recognises the power imbalance at play when public authorities deploy AI systems in their benefits and services. It states that natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services, and in a vulnerable position in relation to the responsible authorities. It also adds that AI systems which are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, including whether beneficiaries are legitimately entitled to such benefits or services, may have a significant impact on persons' livelihoods and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. The recital thus explicates the justification for the high-risk classification in Annex III(5)(a), as well as references some fundamental rights listed in the CFR.78
The delineation that Annex III(5)(a) offers on high-risk applications concerning the allocation of benefits invites both some conclusions and some questions. The wording of the provision makes clear that the emphasis is on there being a connection between the AI system's functions and the actual evaluation of benefit entitlement for it to be classified as high-risk. It also makes clear that the high-risk classification extends beyond cases where AI systems are deployed to make fully automated eligibility decisions. While this clarification sheds some light, it also raises questions about the degree of proximity required for this connection and the future need for further clarifications on the closer scope of Annex III(5)(a). All in all, however, the definition is a fairly broad one, meaning that most AI-performed tasks in social security administration which involve assessing aspects of, or deciding on, benefits eligibility are likely to qualify the system use as high-risk.

77 (cont.) Of note is that Article 35(3)(b) GDPR will likely still apply: although type B systems are not solely automated in relation to the decision-making, they will likely still qualify as high risk, as they typically require the processing on a large scale of special categories of data referred to in Article 9(1); Article 29 Data Protection Working Party (n 39), 8, 18.
78 The referenced rights to human dignity (Article 1), social security and social assistance (Article 34), non-discrimination (Article 21) and an effective remedy (Article 47) are all fundamental CFR rights.
Returning to the initially mentioned examples: where the so-called Insight system flags potential quality issues in draft decisions, this system might not operate in a close enough intertwinement with the eligibility assessment and decision-making for it to qualify as high-risk.79 Considering the other mentioned example, the Swedish SAMU system, which assists case administrators in interpreting medical certificates in relation to eligibility criteria by directing their attention towards passages in the certificates which are likely to be relevant to the assessment, this system's functionality is more intertwined with the eligibility assessment. Nevertheless, it remains uncertain whether this intertwinement is strong enough to fall under the Annex III(5)(a) definition. For the purposes of this article, however, I will proceed on the assumption that most type B systems containing elements of assessment relating to eligibility criteria, utilised in the case administration of social security benefits claims, will qualify as high-risk.
As established in Article 8 AIA, high-risk systems must comply with several requirements. For a system of type B, a risk management system must thus be established, implemented, documented and maintained. As type B systems make use of techniques involving the training of models with data, Article 10 AIA furthermore requires them to be developed on the basis of training, validation and testing data sets that meet certain quality criteria. These include, among other things, appropriate data governance and management practices to ensure the use of relevant, representative, complete and error-free data sets for the training phase of the system. It should also be noted that 'data' is here not confined to personal data, but includes non-personal data as well, both real factual data and synthetic data. The AIA thus addresses the question of biased data, in both the training and the operational phase, more directly than the GDPR does.
The high-risk classification of type B systems also comes with rather extensive obligations for the provider of the system to supply technical documentation (Article 11) and ensure the keeping of records (Article 12). Further provider obligations relate to transparency requirements, including an obligation to supply comprehensible instructions of use (Article 13), an obligation to ensure that the system is technically equipped to allow for human oversight (Article 14), and obligations to ensure that the system performs with an appropriate level of accuracy, robustness and cybersecurity (Article 15). Furthermore, there is an additional requirement to establish a quality management system to ensure compliance, where Articles 16-17 outline the specifics. It may also be noted that Recital 54 states that public authorities using high-risk AI systems for their own purposes have the option to adopt and implement these quality management rules at a national or regional level, thus allowing for some flexibility to consider the unique characteristics of their sector as well as the competencies and organisation of the authority. When social security administrations act as deployers of type B systems, they must also, before putting the system into use, perform a fundamental rights impact assessment. This involves a thorough examination which encompasses defining the system's purpose and scope, identifying affected individuals and groups, ensuring compliance with relevant fundamental rights laws, evaluating foreseeable impacts, assessing risks to marginalised or vulnerable groups, considering environmental consequences, and formulating a detailed plan for mitigating identified harms. Additionally, this process mandates the establishment of a governance system, which may include elements like human oversight, complaint-handling, and redress mechanisms.80
An in-depth analysis of the above-mentioned obligations is not expedient here, but a few notes can be made on this compliance framework to clarify its regulatory design. First, it should be noted that these obligations revolve around the system's design and implementation/use. Second, in contrast with the GDPR, which in addition to the important and previously discussed opening clauses of Articles 6, 9 and 22 contains numerous other opening clauses that allow Member States to shape their data protection regimes to cater to public sector interests, the AIA contains no such opening clauses. While the AIA does pursue a number of overriding reasons of public interest, as stated in Recital 1 AIA, when it comes to the possible utilisation of AI systems for public interest uses, the explanatory memoranda of the Commission proposal explain that the regulation aims to ensure a level playing field between public and private actors.81 Consequently, the AIA does not permit the same degree of divergence through Union or Member State laws, at least in principle, as the GDPR does. Notably, this distinction is most pronounced in scenarios involving AI systems for public sector applications, such as for social security administration purposes.

Summary remarks
As mentioned, the GDPR allocates responsibilities between 'controllers' and 'processors' of personal data, which means that the obligations outlined in the regulation are tied to the usage of personal data itself. One notable effect of this distribution is that, at the time the system is used for data processing, it becomes immaterial whether a controller or processor developed the automated system themselves or not. As the AIA instead distributes its obligations between 'providers' and 'deployers' of AI systems, where the lion's share applies to providers, its obligations are tied to the operational aspects of the systems rather than to the data. As put by Jacobs and Simon, the AIA's approach of assigning obligations to fixed addressees means that it circumvents the necessity to engage with the possibly ambiguous setup of competencies and capabilities of the actors involved in developing, deploying, and operating AI systems.82 For public administrations, like social security administrations, using machine learning technologies, this distribution, however, also implies that they might potentially avoid the comprehensive compliance requirements of the AIA by acquiring AI systems from external providers instead of developing them internally. Viewed from a public interest as well as a rule of law perspective, this could introduce accountability and legitimacy gaps. It should, however, be added that Article 3(2) AIA includes in its definition of 'providers' not only those who develop an AI system, but also those who have an AI system developed and place it on the market or put it into service under their own name or trademark, whether for payment or free of charge. This means that public administrations cannot escape the compliance framework by contracting external parties to develop a tailored AI system for them. It should also be added that even where public administrations acquire an already developed system 'off the shelf' to utilise its functionalities in social security case administration, there is a chance that the agency may come to assume the responsibility of a provider. Article 28 AIA, namely, states that any deployer should be considered a provider for the purposes of the regulation if they put their name or trademark on a high-risk AI system already placed on the market or put into service, if they make a substantial modification to a high-risk AI system in a way that it remains high-risk, or if they modify the intended purpose of an AI system which has not been classified as high-risk in such a manner that it becomes high-risk.83 In such cases, the initial provider is relieved of its duties under the act. Article 3(23) AIA defines substantial modification as a change to the AI system following its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment, and as a result of which the system's compliance with the Chapter 2 requirements for high-risk AI systems is affected, or which results in a modification of the intended purpose for which the AI system has been assessed. Also, Recital 66 AIA indicates that changes to an AI system which follow from the self-learning aspect of machine learning systems should not constitute a substantial modification, provided that those changes have been predetermined by the provider and assessed at the moment of the conformity assessment. Unclarities thus remain as to what is meant by substantial modification in this context. It, however, seems likely that an AI system tailored enough to assist social security administrations with tasks relating to case administration for benefits allocation would often necessitate modification, either in its functionality or in its purposes of use, to adapt the system to the specific administrative tasks at hand. While a careful assessment would need to be made in each case, this aspect of the AIA's delegation of obligations narrows the door somewhat for social security administrations seeking to escape the full grasp of the AIA for systems of type B.

Conclusions
This article has offered a structural analysis of the differences in how the GDPR and the AIA apply to two commonly deployed types of automated systems in social security administration. I will now summarise this two-pronged analysis by focusing, on the one hand, on the combined impact that the GDPR and the AIA will have on each of these types of systems and, on the other, on the differences in how these two instruments will determine the basic legality of utilising type A or B systems in social security administrations.
First, the analysis shows that systems of (rule-based) type A used for making fully automated benefits eligibility decisions will most likely only trigger the application of the GDPR. Any processing of personal data by a type A system will need to comply with the full body of relevant GDPR provisions, although the public interest status of social security administration opens the regulation up for sector-specific applications through its opening clauses. Given that social security provisions remain largely a matter of national law, resulting in considerable variation in the types of benefits provided as well as in their administration, this context significantly influences the application of the GDPR and thus the de facto protective regime for those personal data that are processed by type A systems to make eligibility decisions. As elucidated in Recital 15 of the GDPR, the regulation adopts a horizontal and technologically neutral regulatory framework.84 This signifies that whether data undergo processing with computational assistance but are subsequently evaluated by a human administrator, or are processed autonomously without any human intervention or manual evaluation, does not impact the application of the GDPR's core principles. However, the type of automated means with which the personal data are processed may influence ancillary factors, such as the aspects to consider in a DPIA, the measures to be taken to adhere to privacy and data protection by design principles, and the assessment of whether there are adequate safeguards in place to ensure GDPR-compliant processing. For type A systems, Article 22 GDPR will also apply in addition to the general GDPR provisions. The prohibition against solely automated decision-making laid down in this article is, however, relaxed quite considerably for applications in law-regulated public sector settings. The question is therefore primarily what type of safeguards these regulations must establish in order to suffice (although that is not a focal point of this study).
For systems of type B, the analysis shows that both the GDPR and, most likely, the AIA will be triggered by their use. While Article 22 GDPR will likely not apply, the GDPR principles of fairness, data minimisation and purpose limitation oblige social security administrations utilising type B systems to consider and attend to whether the personal data processing is likely to cause discriminatory effects, whether the data used to train or operate the system while in use are excessive, and whether any further use of data beyond their initial purpose of collection is lawful. However, the flexibility of these principles and their reliance on a further basis in (typically) national law may provide some leeway. As for type B systems falling under the AIA's definition of AI systems and thereby invoking its application, it remains somewhat unclear how closely the AI system's functions must align with the actual evaluation of entitlement to benefits to qualify as high risk under Annex III 5(a). The fact that systems evaluating the eligibility of individuals for public assistance benefits and services, as well as the granting, reduction, revocation, or reclamation of such benefits and services, are explicitly categorised as high-risk does, however and importantly, indicate a recognition of the sensitive nature of benefits allocation. It also indicates a recognition of the potential risks that automated processes may introduce in relation to rule of law principles such as legality, foreseeability, and fairness (which cannot be expanded upon in this article). The author's opinion is that the delineation between solely and partially automated decisions, in terms of practical impact, is not only a technical concern but also hinges on those organisational aspects within the agency which determine the implications of the systems' outputs. In practice, decision-making support systems, particularly if their outputs tend to supplant substantive human assessment, can significantly influence the outcome of eligibility decisions, even though human administrators formally make these decisions. The AIA's applicational indifference to complete and partial automation therefore appears well-founded.

84 See also C-25/17 Proceedings brought by Tietosuojavaltuutettu EU:C:2018:551, para 53. Here, the CJEU stresses that an application of data protection principles that does not depend on the techniques used aims to avoid the risk of circumvention of protection.
F Adolfsson, 'Därför arbetar Försäkringskassan med AI och syntetisk data' (Voister, 2 May 2023) <http://www.voister.se/artikel/2023/05/forsakringskassans-framgang-med-ai-och-syntetisk-data/> accessed 8 December 2023.

74 EDPB and European Data Protection Supervisor, 'Joint Opinion 03/2021 on the Proposal for a Regulation of the European

Patrick Breyer v Bundesrepublik Deutschland EU:C:2016:779, paras 31-49. See, however, A Lodie, noting that recent case law from the General Court indicates an emphasis on whether the data recipient is reasonably able to reidentify the data subject, and tends to view personal data as a relative rather than an objective concept: 'Are Personal Data Always Personal? Case T-557/20 SRB v. EDPS or When the Qualification of Data Depends on Who Holds Them' (European Law Blog, 7 November 2023) <https://europeanlawblog.eu/2023/11/07/are-

81 Section 2.2 in the explanatory memoranda of the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, COM/2021/206 final.

82 M Jacobs and J Simon, 'Assigning Obligations in AI Regulation: A Discussion of Two Frameworks Proposed By the European Commission' (2022) 1 Digital Society 6, 6.