Fundamental rights control when implementing predictive policing – a European perspective

ABSTRACT The paper approaches preventive justice and big data from a predictive policing point of view. Predictive policing is a controversial technology because of the many risks it poses to fundamental rights. Through use cases from the EU, the paper focuses on how the implementation of predictive policing technologies has been limited in the EU but also on how difficult such limitations might be due to the high hopes set upon technology. Although predictive policing technologies are not yet used by the police in Finland, the paper discusses how the use of police powers is limited in Finland and how fundamental rights control in the democratic process would work if the police were to adopt such technology.

The approach to preventive justice in this paper is a bit different. It does not discuss using criminal law as a means to prevent future acts but the increasing use of modern digital technologies as tools to predict crime. The datafication of society has given authorities new opportunities to collect data and use it to maintain surveillance over large masses of people in ways that would be impossible manually. Additionally, the development of 'artificial intelligence' (AI) technologies, such as machine learning, has enabled the authorities to obtain analyses of large data sets whose creation might go beyond human understanding. Thus, it can be stated that modern digital technology, and the possibilities seen in it, feed the ethos of preventing crime before it happens. 4 There are as many ways to use modern digital technologies for crime prevention purposes as there are imaginative minds. In this paper, the focus has been narrowed down to predictive technologies used by a particular public authority, the police. In addition to policing, such technologies can be used by the prosecution (predictive prosecution) 5 and by the courts (predictive sentencing) 6 to predict future criminality. 7 Predictive technologies used by the police are generally referred to as predictive policing. 8 One definition of predictive policing is 'the application of analytical techniques – particularly quantitative techniques – to identify likely targets for police intervention and prevent crime or solve past crimes by making statistical predictions'. 9 This paper is limited to proactive policing, which is different from the reactive use of predictive policing technologies. It is one thing for the police to act before a crime has been committed and another for the police to try to solve a crime which has already happened. Also, a distinction can be made between place-oriented and person-oriented predictive policing.
The former refers to predicting the time and place a crime is most likely to occur and the latter to predicting which people will commit a crime in the future. 10 It needs to be emphasised that predictive policing is not criminal intelligence targeted at a certain person. It refers to digital tools used in police officers' daily work to maintain surveillance over ordinary citizens, usually aiming to prevent violent crime but also petty crimes such as pickpocketing. 11 What distinguishes predictive policing from the previous proactive methods of the police is that it is based on big data and powerful algorithms such as machine learning.
In Europe, many countries' police forces are adopting predictive policing. 12 In addition to the member states, the EU is doing this. These big data technologies are seen as the panacea for preventing especially terrorism and serious crime. 13 However, a range of supervisory authorities and courts will later control the implementation. The latest case from one of the member states occurred in February 2023 in Germany, when the Constitutional Court declared predictive policing to be unconstitutional. 14 In Finland, intelligence-led policing is one of the cornerstones of police work. 15 However, the National Police Board, the central administrative authority of the Finnish Police, has not announced that big data or 'AI' technologies such as predictive policing are used in Finland. 16 As Finland is a member of the European Union, the role of the union is essential when discussing big data policing in Finland. For instance, the EU is aiming to regulate the use of AI at union level, including predictive policing. 17 Predictive policing is a controversial technology because of the many risks it poses to fundamental rights. Since these technologies are used in the exercise of public power and, furthermore, within the monopoly on the legitimate use of physical force, 18 their implementation should be put under especially detailed scrutiny, specifically to set strict limits on the use of power by the state, which goes hand in hand with controlling the risks of breaching fundamental rights. Through use cases, this paper discusses how the implementation of predictive policing technologies has been limited in the EU but also how difficult such limitations might be due to the high hopes set upon technology. Although predictive policing technologies are not yet used in Finland, the paper paints a picture of how the democratic process would work in Finland and how fundamental rights control would be carried out.
The paper is structured as follows. Section II gives a short overview of the technological side of predictive policing and the fundamental rights risks related to it. It needs to be emphasised that since the paper does not focus on particular predictive policing systems, these are all generalisations related more to the risks of big data and machine learning technologies. The particular functioning of each predictive policing system, and the risks it poses to fundamental rights, has to be assessed case by case. However, this overview will hopefully provide the reader with an understanding of why implementing these technologies in a democratic society that respects fundamental rights is especially controversial and why they should be scrutinised so strictly. In Section III, the paper introduces two cases from the EU in which the union has adopted legislation that enables the use of predictive policing-type technologies. Both cases address fundamental rights worries relating to personal data collection and analysis for crime prevention in the specific case. What is of interest here is that both have been the target of ex-post fundamental rights control. Before concluding, Section IV discusses how fundamental rights control would function in Finland if the police were willing to adopt such a technology.

A. Generalising predictive policing technology
It is clear that prediction is not a new method in policing. However, instead of being made on the basis of the training, experience and intuition of police officers, in predictive policing predictions are made by computer software. 19 When computer software replaces the human in this way, legal scholars have been interested in how the new technology actually functions. Many contributions on predictive policing contain an analysis of its technological side. This is essential because, when making a legal assessment of the risks to fundamental rights, it is necessary to know what the phenomenon under review actually is. However, as stated before, the description of the technology is a theoretical generalisation because each system has its own properties regarding the data it uses, the algorithms it contains, and so on. The two generalisations discussed here are that predictive policing is based on big data and that the data are processed by machine learning algorithms or 'AI'.
The first theoretical premise is that predictive policing is based on big data. 20 Big data refers to masses of data, including both personal and other data, the amounts of which are so huge that a human could not process them, much less draw conclusions from them. As Chan and Bennett Moses define it, big data can be characterised through the three Vs: 'Volume (the amount of data), Velocity (the speed at which data is being added and processed) and Variety (the fact that data may come from multiple sources using different formats and structures)'. 21 Usually the data included are historical crime data, but also data from private companies or other public registries.
The data analysed by predictive policing software are only half of the story. The second theoretical premise is linked to the algorithms that process big data. Simply put, an algorithm is a set of rules or instructions that tells a computer how to function. Not all algorithms are the same. Some of them are rule-based, basically giving the computer an instruction of the form 'when X, then Y'. However, in predictive policing, the algorithms do not function like this. They are machine learning algorithms, machine learning being a sub-category of artificial intelligence. 22 In machine learning, the algorithm learns from its own experience and evolves over time without human programming. The algorithm is first trained with a set of training data, from which it learns to recognise patterns and to create rules based on them. 23 After training, it is moved to its true working environment, in which the role of the training data is reduced and the role of the data provided in this environment increases. 24 It develops continuously, recognises new patterns, and creates new rules that replace the old ones. 25 It is essential to understand the role that the input data play in machine learning. In the case of predictive policing, the use of crime data in the algorithm means that the police officers who log incidents in the police systems are also producing information for the algorithm. 26 This problem, characterised as a self-fulfilling prophecy, is one of the problems related to predictive policing technologies that are discussed next.
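The feedback loop can be made concrete with a toy simulation. All numbers below are invented for illustration; no real predictive policing system is modelled. Two districts have identical true crime rates, but district 'A' starts with more recorded incidents. If patrols are allocated according to recorded crime and the resulting detections are logged back into the data, the initial imbalance in the records is never corrected, even though the underlying rates are equal.

```python
# Toy simulation of the self-fulfilling prophecy described above.
# Hypothetical numbers only; not a model of any actual system.

def simulate(recorded, true_rate, rounds=10, patrols=100, detection=0.1):
    """Each round, patrols are allocated in proportion to recorded crime,
    and each patrol detects a fixed fraction of the true crime it observes.
    Detections are written back into the recorded-crime data."""
    history = [dict(recorded)]
    for _ in range(rounds):
        total = sum(recorded.values())
        updated = {}
        for district, count in recorded.items():
            share = count / total                    # prediction: where to patrol
            allocated = patrols * share              # patrols sent to district
            detected = allocated * detection * true_rate[district]
            updated[district] = count + detected     # detections feed the data
        recorded = updated
        history.append(dict(recorded))
    return history

# Equal true crime rates, but "A" starts with more recorded incidents.
true_rate = {"A": 1.0, "B": 1.0}
history = simulate({"A": 60.0, "B": 40.0}, true_rate)

first, last = history[0], history[-1]
print(f"initial share of A: {first['A'] / (first['A'] + first['B']):.2f}")  # 0.60
print(f"final share of A:   {last['A'] / (last['A'] + last['B']):.2f}")    # 0.60
```

Even after many rounds, district A's share of recorded crime stays at 60 per cent: the data never converge towards the equal true rates, because the records measure where the police looked, not where crime occurred.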

B. Generalising fundamental rights' risks of predictive policing
Legal scholars and civil rights organisations have recognised several risks for individuals who are targeted by predictive policing software. Many of the problems are linked to each other. Here it is only possible to provide an overview of these problems, which are complex and each worth a contribution of their own.
The problems already referred to concern both the data used in predictive policing and the nature of machine learning. Firstly, as the phrase 'garbage in, garbage out' suggests, the quality of a prediction is only as good as the data used as input. 27 The way data are collected or inserted into the algorithm can have a major impact on the prediction. Also, although some data used in a predictive policing algorithm could be empirically accurate, their use might be unethical. For instance, using the empirical fact that certain parts of the population are more involved in crime would lead the algorithm to classify people belonging to these groups as more likely to be potential offenders. 28 This is again the problem of the self-fulfilling prophecy and of accuracy, but also a question of discrimination.
It is essential to remember that predictive policing only provides statistical predictions created by complicated algorithms. It does not tell what will happen in the future. There is a danger that a predictive policing system merely repeats society's hidden structural biases, a problem that cannot be solved merely by deleting discriminatory elements from the data. 29 In addition to the data, the functioning of the algorithm is not neutral, because the programmer determines the system variables and what data the system uses. 30 Hence, neither the data nor the algorithms are neutral, and they never can be. They will always reflect either society's discriminatory imbalances or the programmer's worldview. Another problem linked to machine learning is the opacity of the functioning of the algorithm. 31 This can be approached from three points of view. Firstly, if the predictive policing software is developed by a private company, it is likely that the software and its functions will be trade secrets. 32 Secondly, the functioning of a machine learning algorithm is a sort of black box. 33 Due to its self-learning property, there is a risk that even the developer of the system will be unable to explain how the software ended up with a certain result. Lastly, even if all the information about the technological properties of the predictive policing software were public, it is unlikely that an ordinary citizen could understand it, considering that explaining the workings of a machine learning system is difficult even for a professional. 34 Opacity is problematic because it reduces the legitimacy of these systems and trust in the authorities. Additionally, understanding the workings of the predictive policing software would be essential for the police officers acting on it. 35 This would be important because there is a risk of automation bias in predictive policing as well.
Automation bias refers to the risk that humans blindly trust technology and limit their own consideration even when there is suspicion about whether the algorithm is working correctly. 36 There is also a risk that the police officer in this case would not consider arguments contrary to the prediction but would instead feel the need to act according to the prediction provided by the algorithm. This is despite the fact that the known problems of predictive technologies should urge the police to be highly critical when using these systems.
Lastly, the issues of privacy, data protection and the fear of surveillance are seen as among the major problems of predictive policing. Many contributions concerning predictive policing start with a reference to the movie Minority Report (2002) by Steven Spielberg or the book 1984 (1949) by George Orwell. The concerns are warranted, since examples from outside Europe show that predictive policing can be used for mass surveillance and the deprivation of some parts of the population. The examples from the EU presented in the next section are strongly connected to the issues of privacy and data protection risks as well as to the opposition to mass surveillance in Europe. In the EU, as well as in Finland, the rights to privacy and data protection are protected as fundamental rights. Especially in Europe, after the GDPR 37 and LED 38 came into force, data protection is usually the first question raised in the context of predictive policing, since these systems are usually based on the collection and processing of personal data.

A. Limiting the use of passenger name record data and analytics
Air travel is highly regulated, especially after the terrorist attacks using aircraft in the US in September 2001. Modern digital technologies and the digital form of data enable law enforcement authorities to pre-screen large numbers of passengers in the name of crime prevention. Finding potential terrorists is one goal of the EU's Passenger Name Record (PNR) directive 2016/681. 39 The directive was implemented in Finland as the Act on the use of passenger name record data for combatting terrorist offences and serious crime (657/2019). The core idea of the PNR directive is that airline companies transfer personal data provided by passengers to the EU member states' law enforcement authorities for the purposes of preventing, detecting, investigating and prosecuting terrorist offences and serious crime. PNR data includes information such as name, date(s) of intended travel, all forms of payment information, frequent flyer information, all baggage information, seat number and other seat information. PNR data is used for carrying out an assessment of passengers prior to their scheduled arrival in or departure from the Member State, in order to identify people who require further examination by the competent authorities. The assessment includes comparing PNR data against relevant databases and also assessing passengers against the risk criteria and profiles that have been developed. 40 If a passenger fits a risk profile or criteria inserted in the system, they are flagged as a positive match. For example, a person travelling without luggage, or buying their ticket at the last minute and paying in cash, exhibits 'deviant behaviour'. 41 Any positive match resulting from the automated processing of PNR data is individually reviewed by non-automated means to verify whether the competent authority needs to take action under national law.
The PNR directive was under the scrutiny of the Court of Justice of the European Union (CJEU) in its recent ruling Ligue des droits humains (C-817/19), handed down on 21 June 2022. The case was brought before the Belgian national court by a fundamental rights organisation and sought the annulment of the national law implementing the directive. The national court referred the case to the CJEU, which, according to Art. 267 of the Treaty on the Functioning of the European Union (TFEU), has the competence to give preliminary rulings regarding the interpretation or validity of a provision of EU law. The case included several questions referred for a preliminary ruling. Only the parts of the case relevant for the purposes of this paper are discussed here. These are, firstly, what the Court stated about the relationship between the PNR Directive and the fundamental rights to data protection and privacy protected by the Charter of Fundamental Rights of the European Union, 42 and secondly, what it stated about processing PNR data against pre-determined criteria.
To start with the conclusion, the CJEU did not declare the directive invalid but limited its application. The CJEU stated that the PNR Directive entails undeniably serious interferences with the rights guaranteed in Articles 7 and 8 of the Charter, in so far, inter alia, as it seeks to introduce a surveillance regime that is continuous, untargeted and systematic, including the automated assessment of the personal data of everyone using air transport services. 43 It limited the use of PNR data to what is strictly necessary for the purposes of preventing, detecting, investigating and prosecuting terrorist offences and serious crime. 44 Firstly, the member states must ensure 'that the application of the system established by the PNR Directive is effectively limited to combating serious crime and that that system does not extend to offences that amount to ordinary crime'. 45 Secondly, the wording of the directive allows member states to choose whether they apply PNR analysis only to extra-EU flights or also to intra-EU flights. 46 This option was also limited by the court. The application of the system must be limited to the transfer and processing of the PNR data of flights relating, inter alia, to certain routes or travel patterns or to certain airports in respect of which there are indications that are such as to justify that application. 47 The court also limited the type of technology that can be used when it discussed processing PNR data against pre-determined criteria. Firstly, the starting point of the court was that, despite the fact that the system also produces false positives, 'automated processing carried out under the said directive have indeed already made it possible to identify air passengers presenting a risk in the context of the fight against terrorist offences and serious crime'.
48 However, according to the court, the wording of the directive, 'pre-determined criteria', limits the type of algorithms that may produce matches on the basis of PNR data: As noted by the Advocate General in point 228 of his Opinion, that requirement precludes the use of artificial intelligence technology in self-learning systems ('machine learning'), capable of modifying without human intervention or review the assessment process and, in particular, the assessment criteria on which the result of the application of that process is based as well as the weighting of those criteria. 49 These quotations from the CJEU provide only a very short overview of the case. What is essential here is that after the judgement, PNR analysis cannot be developed into a predictive policing type of tool because of the technological limitations the court set. Also, as can be seen from the judgment, although the PNR directive is seen as necessary, strict limits were set on its application.
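The court's distinction can be made concrete with a sketch of screening against fixed, human-authored rules, the opposite of a self-learning system. The criteria below are invented purely for illustration and are not the actual criteria used under the PNR Directive; the point is only that each 'pre-determined' rule is written down in advance, is human-readable and reviewable, and cannot be altered by the system itself.

```python
# Minimal sketch of screening against pre-determined criteria.
# The criteria and record fields are hypothetical examples.
from dataclasses import dataclass

@dataclass
class PnrRecord:
    name: str
    paid_cash: bool
    bags_checked: int
    booked_days_before_flight: int

# Pre-determined, human-authored criteria: fixed in advance, and the
# system has no mechanism for modifying them or their weighting.
CRITERIA = [
    ("no checked luggage", lambda r: r.bags_checked == 0),
    ("last-minute cash booking",
     lambda r: r.paid_cash and r.booked_days_before_flight <= 1),
]

def screen(record):
    """Return the names of the criteria the record matches."""
    return [name for name, rule in CRITERIA if rule(record)]

passenger = PnrRecord("A. Example", paid_cash=True,
                      bags_checked=0, booked_days_before_flight=0)
matches = screen(passenger)
print(matches)  # ['no checked luggage', 'last-minute cash booking']

# Under the directive, any positive match must still be individually
# reviewed by non-automated means before action is taken.
if matches:
    pass  # forward to manual review by the competent authority
```

In a machine learning system, by contrast, the criteria and their weights would emerge from the training data and could shift over time without review, which is precisely what the court held the wording 'pre-determined criteria' to preclude.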

B. Europol's big data challenge 50
Europol is the European Union Agency for Law Enforcement Cooperation, and its purpose is to counter serious crime and terrorism. Since its establishment in 1995, 51 the objective of the agency has been to improve cooperation between the EU member states, and its tasks have centred on facilitating information exchange and analysing information. 52 Hence, the purpose of Europol has always been strongly connected to working with data. Although the Convention of 1995 had provisions on computerised systems of information, the technological reality, especially when it comes to opportunities for data collection, was different. In the 2010s, the aim in the EU was that Europol would become 'a hub for information exchange between the law enforcement authorities of the Member States, a service provider and a platform for law enforcement services'. 53 Today, the foundation of Europol is enacted in Art. 88 of the Treaty on the Functioning of the European Union. According to Art. 88(1), Europol's task is to support and strengthen action by the Member States' police authorities and other law enforcement services and their mutual cooperation in preventing and combating serious crime affecting two or more Member States, terrorism and forms of crime which affect a common interest covered by a Union policy.
According to TFEU Art. 88(2)(a), one of the tasks of Europol is the collection, storage, processing, analysis and exchange of information, in particular that forwarded by the authorities of the Member States or third countries or bodies. More specific regulation of Europol's powers is enacted in the Europol regulation (EU) 2016/794. 54 The regulation contains provisions on Europol's rights to process personal data. 55 The right to data protection is a fundamental right recognised in Art. 8 of the Charter of Fundamental Rights of the European Union. 56 The European Data Protection Supervisor (EDPS) is the data protection authority for the European Union institutions, bodies and agencies, and thus it also monitors Europol's data processing activities (Europol Regulation Art. 43). In 2019 the EDPS opened its 'own initiative inquiry on the use of Big data analytics by Europol for purposes of strategic and operational analysis'. 57 The problematic practice was that Europol received large datasets from the member states and other operational partners and also collected data in its open-source intelligence. The EDPS characterised these data as large datasets because their volume and the nature or format of the data meant that they could not be processed with 'regular tools, but require the use of specific tools and/or storage facilities'. 58 Annex II B (1) of the Europol regulation limits the categories of data subjects whose data may be collected and processed by Europol. As the EDPS noted, due to the volume of information it is impossible for Europol to ascertain that all personal data included fall within the limits of these categories. 59 In 2020 the conclusion of the inquiry was clear: Europol had overstepped the mandate given to it by the provisions concerning data processing in the Europol regulation.
As the EDPS describes in his decision, over the years Europol's operational practices had evolved towards gaining larger and larger volumes of data. 60 In their current form, there was a high risk that Europol was storing personal data not linked to criminal activity. This could cause damage to these people's fundamental rights, for instance the freedom of movement. 61 Although according to Art. 43(3)(e) of the Europol regulation the EDPS has the power to order Europol to carry out the rectification, restriction, erasure or destruction of personal data, and according to Art. 43(3)(f) to impose a temporary or definitive ban on processing operations by Europol, the EDPS chose not to do so. Instead, the inquiry led to the EDPS admonishing Europol and requiring it to draw up an action plan to mitigate the data protection issues. Europol responded to the admonishment cooperatively, but the agency also called for a revision of the Europol Regulation. 62 In December 2020, the Commission presented a proposal to review and expand Europol's mandate. 63 In the proposal, the Commission suggested changes to Europol's mandate to enable Europol to process large and complex datasets as a response to the EDPS's decision. 64 The reform of the Europol regulation raised concerns among civil society organisations. In January 2022, 23 civil society organisations wrote a public letter to the EU legislators about their concerns over the impact of the reforms on fundamental rights and urged the legislator to reconsider the changes to the Europol Regulation. 65 Despite the loud opposition of fundamental rights activists, the amending regulation was accepted on 8 June 2022, and it came into force on 28 June 2022.
In addition to allowing Europol's big data practices, what is especially problematic from the fundamental rights oversight perspective is that the amendments made to the regulation retroactively legalise previously forbidden practices. In January 2022, the EDPS had ordered Europol to delete the personal data of individuals who were not linked to criminal activity. 66 The EDPS gave Europol 12 months to comply with the decision regarding datasets received before the decision. However, when the amended Europol regulation entered into force, in practice it overrode the EDPS. In September 2022 the EDPS requested that the CJEU annul two provisions of the amended Europol regulation because the 'two provisions have an impact on personal data operations carried out in the past by Europol. In doing so, the provisions seriously undermine legal certainty for individuals' personal data and threaten the independence of the EDPS – the data protection supervisory authority of EU institutions, bodies, offices and agencies'. 67 As of June 2023, the case is still pending. 68 The Europol big data challenge shows how the ethos of preventing serious crime and terrorism can override fundamental rights in political decision-making. Interestingly, the EU Parliament has taken a different approach to predictive policing in general. As mentioned earlier, the EU is trying to respond to predictive policing technologies in the Proposal for an Artificial Intelligence Act (AIA). 69 In the AIA, the EU's aim is to regulate AI systems used in law enforcement, or on its behalf, to assess the risk posed by a natural person of offending or reoffending, and AI systems that predict the occurrence or reoccurrence of an actual or potential criminal offence based on the profiling of natural persons or on assessing the personality traits and characteristics or past criminal behaviour of natural persons or groups. 70 On 14 June 2023 the European Parliament adopted its negotiating position on the AIA.
The Parliament's stand concerning predictive policing is that it should be banned in the AIA. However, the fate of predictive policing is not yet decided, because the European Commission, the European Parliament, and the Council of the European Union still have to negotiate the final wording of the AIA. 71 The Parliament's stand concerning predictive policing in the AIA is somewhat controversial because, as noted by the European Digital Rights organisation, the new Europol regulation actually legalises Europol's predictive policing. 72

IV. Limits on the use of predictive policing practices in Finland

Before concluding, the paper looks into Finnish fundamental rights control and how it would function if the Finnish Police were to adopt predictive policing. Limitations on using predictive policing can be derived from data protection, as the cases from the EU level presented in the previous section have shown. 73 However, data protection only controls the use of personal data. Another relevant question is what the law enforcement authorities are entitled to do based on the predictions produced by predictive policing. The section starts by discussing the limitations on using predictive policing predictions as a basis for police activities. These limitations can be derived from the legislation concerning policing in general in Finland. The other half of the section focuses on controlling the adoption of predictive policing at the legislative level.
In Finland, the starting point in limiting the use of public power, such as that of the police, is enacted in Section 2(3) of the Constitution of Finland (731/1999) in the form of the Principle of Legality: the exercise of public power must be based on the law, and the law must be strictly observed in all public activities. Additionally, police officers are government officials and work under the official accountability required of them in Section 118 of the Constitution: an official is responsible for the lawfulness of his or her official actions. In addition, an individual who has suffered an infringement or damage due to an unlawful act or omission by an official has the right to seek the imposition of a punishment on the relevant official and to claim damages for the harm suffered.
The key statute for policing in Finland is the Police Act (872/2011). Section 1 of Chapter 1 lists the duties of the Police, inter alia to secure the rule of law, maintain public order and security, and prevent, detect and investigate crimes. However, despite this general listing, all policing activities must always be based on a specific provision of law when an officer intervenes in an individual's rights. 74 Thus, the law should contain a particular provision that allows the police to intervene in individuals' rights, such as the right to liberty and security enacted in Section 7 of the Constitution, merely on the basis of a software prediction.
Currently, the Police Act contains a provision in Chapter 2, Section 10, on preventing an offence or disturbance. According to subsection 1 of the Section: A police officer has the right to remove a person from a scene if there are reasonable grounds to believe on the basis of the person's threats or other behaviour, or it is likely on the basis of the person's previous behaviour, that he or she would commit an offence against life, health, liberty, home or property, or would cause a considerable disturbance or pose an immediate danger to public order or security. 75 The Police Act came into force in 2014, when the use of big data technologies had most likely not been considered by the Finnish police. The preparatory documents contain no references to using big data or other algorithmic tools when assessing a person's behaviour, and the provisions clearly refer to on-site evaluation by the police officers. 76 Hence, this provision would not be a sufficient basis for the police to act on a predictive policing prediction. In addition to providing powers, Chapter 1 of the Police Act also contains principles that restrain the activities of police officers. It contains the obligation to respect fundamental and human rights (Section 2), the Principle of Proportionality (Section 3), 77 the Principle of Minimum Intervention (Section 4), 78 the Principle of Intended Purpose (Section 5), 79 and provisions on postponing actions and refraining from taking actions (Section 9). 80 Thus, even if in the future the Police Act were to give explicit power to act based on a predictive policing prediction, the officers would still need to comply with these principles. If and when the day came when the police wanted to implement predictive policing, it would have to undergo strict fundamental rights scrutiny first. According to Section 22 of the Constitution, public authorities must guarantee the observance of fundamental rights and liberties and human rights.
When assessing the potential implementation of predictive policing in Finland from the fundamental rights control point of view, it is necessary to understand how the constitutional control of securing fundamental rights works in Finland. When the fundamental rights chapter of the Constitution was reformed in the 1990s, the Government proposal explicitly stated: Fundamental rights influence the legislator in many ways. Not only do they limit Parliament's powers as legislator, but they can also impose active obligations on the legislator. A fundamental right provision may give general guidance to the legislator or contain an explicit constitutional mandate to implement a particular act. 81 In Finland, fundamental rights control operates in two ways: ex-ante parliamentary supervision by the Constitutional Law Committee and ex-post supervision by the courts. The former obligation is explicitly stated in Section 74 of the Constitution: The Constitutional Law Committee shall issue statements on the constitutionality of legislative proposals and other matters brought for its consideration, as well as on their relation to international human rights treaties. In addition, the supreme overseers of legality, the Parliamentary Ombudsman and the Chancellor of Justice, can assess the legality of technological implementations, 82 as can various supervisory authorities, such as the data protection ombudsman, who supervises the application of legislation in practice and gives statements during the legislative process. Ex-post supervision of fundamental rights is carried out by the courts; Finland does not have a constitutional court. Sections 106 (the primacy of the Constitution) 83 and 107 (limitation on the application of subordinate legislation) 84 of the Constitution give the courts the mandate to oversee ex post the constitutionality of Acts and of lower regulations, such as government decrees.
However, ex-post supervision is exceptional and secondary to parliamentary supervision by the Constitutional Law Committee. In addition, the provisions concern only individual cases and do not give the courts the power to assess the overall validity of an act.
Lastly, from the individuals' point of view, Section 21 of the Constitution constitutes a right to an effective remedy, a fair trial, and good administration, and is always relevant when public power is exercised. 85 This refers to individuals' right to bring their case, e.g. one related to predictive policing, before a supervisory authority for assessment or under court scrutiny. Currently, the Ministry of Justice has an initiative to reform the Finnish Data Protection Act and the Act on the Processing of Personal Data in Criminal Matters and in Connection with Maintaining National Security (1054/2018). 86 The initiative aims to amend the provisions of the two acts so that data subjects can refer their case to a court if the data protection ombudsman has not dealt with the complaint, or informed the data subject of the progress or outcome of the case, within three months. 87

78 The police shall not take action that infringes anyone's rights or causes anyone harm or inconvenience more than is necessary to carry out their duty.
79 The police may exercise their powers only for the purposes provided by law.
80 Subsection 1: The police have the right to refrain from taking an action if completion of the action could lead to an unreasonable conclusion compared with the outcome sought.
81 Government proposal to amend the fundamental rights provisions of the Constitution (HE 309/1993 vp) 26.
82 Previously the assessment has focused on automated decision-making in administration, e.g. in taxation and immigration services.
83 According to the Section: If the application of a statutory provision in the case before the court would be manifestly unconstitutional, the court must give preference to the constitutional provision.
84 If a provision of an ordinance or other subordinate legislation is contrary to the Constitution or other law, it may not be applied by a court or other authority.

V. Conclusions
At the EU level, the approach to predictive policing has been controversial. The EU legislator has implemented legislation that allows predictive policing, or at least technological solutions that come very close to it. However, when it comes to the AIA, it is still unclear whether the EU will allow predictive policing in this context or not. What seems clear is that in the EU, the different institutions (the EDPS, the CJEU, but also the legislator) and fundamental rights organizations are very aware of the fundamental rights risks of predictive policing. Because of the high hopes set upon modern technology in crime prevention, the difficulty lies in finding the right balance between implementing these technologies and protecting fundamental rights.
When it comes to implementing predictive policing, Finland is lagging behind. This can be considered a good thing, given the complex fundamental rights problems outlined in this paper. The EU legislator has implemented predictive policing practices in the context of fighting serious crime and terrorism, which is still different from applying these technologies to prevent petty crime.
Although the EU could allow the use of predictive policing in the future, Finland has constitutional structures that place its use under strict scrutiny. Legislation on the police and the use of public power, as well as fundamental rights control during and after the legislative process, provide many ways to manage fundamental rights risks should Finland decide to implement predictive policing.