High tech and legal challenges: Artificial intelligence-caused damage regulation

Abstract With the rapid development of IT and high-tech manufacturing, artificial intelligence has become a complex issue whose resolution is one of the highest priorities in the development of the high-tech society. The application of artificial intelligence, much like that of many other technologies, opens the door both to new opportunities and to the misuse of AI by organised crime. This issue highlights the need for an effective legal and regulatory framework, which the state itself and state agencies are responsible for putting into place. The purpose of this study is to identify and analyse numerous aspects of the legal regulation of the application of artificial intelligence in contemporary civil law, which faces the challenge of adapting to the rapid advancement of high tech. The authors review international experience in understanding artificial intelligence and its main characteristics, analyse some cases of artificial intelligence-caused damage, and discuss the interpretations and characteristics of artificial intelligence applications as a subject of legal liability. The results of the study give reason to conclude that the key issues under consideration concern the applicability of the concept of guilt to AI, the subjective perception of causality by AI, the possibility of attributing the concept of "intention" to the AI category, and the interpretability of AI decision-making processes.

Artificial intelligence (AI) is adept at many human tasks, including disease diagnosis, language translation, and consumer market service, and it is rapidly improving. This raises legitimate concerns that AI will eventually replace humans throughout the economy (Wilson & Daugherty, 2018). Nevertheless, the rate of change will vary depending on the country under consideration and the relevant economic sector. Inadequate development of automated accounting and manufacturing management systems, as well as associated software, can harm users and customers (Raisch & Krakowski, 2021). This in turn relates to rights infringements and potential user harm, which could undermine consumer confidence and negatively impact profit margins. Additionally, such issues may have far-reaching implications that endanger information security. Threats almost always originate from humans, as they create and employ the necessary tools. However, as AI advances, it may itself become a source of threats. Consequently, in cybersecurity, an intruder is a real person, a group of people, or an AI-based entity that seeks to cause damage by completely or partially disrupting services in the target system, by causing direct harm in the physical world (such as in industrial control systems), or by obtaining unauthorised information (Bederna & Szadeczky, 2020).
Given current trends, AI may have an impact on every aspect of people's lives, including their work and the labour sphere in general. AI will have a significant impact on how employers run their businesses, and it will eventually affect employment laws in many countries. As a result, legal regulation of robotics relationships pertaining to the creation, commissioning, and use of AI, and to the self-operation of autonomous robots, will become unavoidable. Discussions are currently underway about these and other issues concerning the legal regulation of various aspects of AI activities. The issue of legal liability for the use of AI has catalysed an intense discussion among experts in various fields, including the technical community, policymakers, practising lawyers, and scholars. There is currently very little regulation of AI, although AI use in society is growing in response to digitalisation and can undoubtedly be crucial to ensuring a sustainable and comfortable future. Indeed, there are neither obligations for a remote operator to manage AI nor legal provisions that specifically govern how it operates. One of the issues of particular relevance in the framework of interdisciplinary research is the question of the legal capacity and legal personality of AI. The questions of whether artificial intelligence can possess rights and responsibilities, which of them can be assigned to AI, and how to resolve questions about its responsibility for actions and consequences are key aspects of this problem. Other aspects include questions about intellectual property ownership where artificial intelligence creates an innovation, and about its role in different contexts, such as medicine or autonomous cars. In essence, this issue reflects the difficulty of adapting the existing legal framework to new forms of AI and calls for new approaches and regulatory solutions to ensure fairness, efficiency, and safety in matters related to AI (Chesterman, 2020).
Legal personality, in broad terms, refers to the ability to act as the subject of legal rights and obligations, to enjoy them, and to act on one's own behalf in public relations in defence of these rights (Rama-Montaldo, 1970). The debates and arguments surrounding the recognition or rejection of AI rights to developments produced by AI are largely speculative. Nevertheless, recent high-profile public events have caused the situation to start changing. In 2017, for example, Saudi Arabia granted "citizenship" to a humanoid robot named Sophia (Cuthbert, 2017), and an AI chatbot modelled on a seven-year-old boy was granted "residency" in Tokyo (Cuthbertson, 2017). Some of these examples were just ways to attract attention; the robot Sophia, for instance, is essentially a chatbot behind a human face. Nonetheless, the European Parliament passed a resolution in the same year urging the executive body, the European Commission, to consider creating "a special legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently" (European Parliament, 2015). Legal personality thus appears as a type of attribute containing more or fewer elements of different types (such as rights and duties, powers, etc.) that, in most cases, can be added or removed by the legislator; the exception is human rights, which, contrary to popular belief, cannot be taken away. The right to quality legal representation is also regarded as a self-contained right. Limitations of rights can apply only if the rights and liberties of another person were violated during the commission of an offence. This fundamental idea, though, might be revised in the future, and AI will likely play a significant role in that.
In the age of fast-growing high tech, AI is becoming more important in many systems that support and serve people's lives. AI actions can have both positive effects (satisfaction with the service provided, obtaining material goods or property) and negative ones (causing material and/or moral damage). Furthermore, the process of robotisation gives rise to new legal subjects, which entails the search for new solutions in various areas of legal, economic, and social development (Yaroshenko, 2022).
Thus, the purpose of this study is to identify the potential challenges and obstacles regarding damage caused by AI that jurisprudence will face in the near future. In addition, the study considers the legal possibilities of recovering damages caused by AI and identifies existing challenges regarding AI legal personality, taking into account the theoretical possibility of its acquisition by AI. The theoretical scope of the research extends to AI use in industry (including the automotive and medical industries).

Literature review
The concept of AI is interpreted in science using a wide variety of different approaches. There is currently no agreed-upon framework for understanding AI in the technical world, which creates uncertainty in related areas of law, society, morality, and ethics. In the second half of the 1950s, one of the leading cyberneticists, J. McCarthy of Stanford University, introduced the concept of AI to science. He defined AI as the science and technology of creating intelligent machines on the one hand, and as the ability of a computer to do what humans are capable of doing insofar as it relates to intelligence, on the other (McCarthy et al., 2006). AI subsequently came to be acknowledged, in full or in part, as a self-organising, autonomous system with the capacity for independent thought, learning, and decision-making (Morkhat, 2017). Alongside this, LeCun argues that the concept of AI can encompass not only machine learning but also deep learning, which is founded on the development of artificial neural networks (LeCun et al., 2015).
As previously stated, the use of AI affects a wide range of social relationships, which influence the trajectory of socioeconomic growth. Given the prevalence of social and professional interactions everywhere, the AI domain has the potential to attract the full attention of the criminal underworld. Right now, AI is well positioned to be used for criminal activity or the commission of crime through activities like autonomous business, e-commerce, and market rigging. AI can be employed to support criminal activity or, in certain circumstances, to perpetrate it (Mahmud, 2022). Given the variety of AI concepts, experts have mostly applied the notion of rationality to AI in their attempts to give it a legal definition. This refers to the ability to select the best action to achieve a specific goal while taking into account certain criteria that must be optimised as well as the available resources (European Commission, 2019). The idea of granting legal personality to artificial intelligence is now widely discussed in European legal and philosophical literature. It is most visible in the application of AI in sectors where high-tech applications are required, with transportation and medicine as top priorities. Global automakers, for example, are attempting to put AI-related innovation projects into practice, creating a trend towards the use of autonomous vehicles that enhance driver comfort and safety. Although automation systems can function as intended, practice shows that human factors, such as driver error, driver overconfidence, and reliance on automation due to ignorance of its capabilities and limitations, have the potential to cause inappropriate interactions between humans and automation, which could have unfavourable effects (Muslim & Itoh, 2019). The implementation of technological solutions has not decreased the number of fatal collisions, and this leaves consumers wondering how much blame and accountability belongs to the person who supplied the vehicle (Bashayreh et al., 2020). On the one hand, legal responsibility for a fatal crash may fall on the vehicle's software manufacturer. On the other hand, it could be the manufacturer (the operator) who implemented, optimised, agreed upon, and approved the use of the relevant AI for the vehicle (Benhamou & Ferland, 2020). Field-specific researchers emphasise that, as AI improves, the legal liability of actors affecting its operation will split over time, because autonomous vehicle software will interact with roadside units, signs, cloud services, and other road users, affecting control while on the road. Since this will incorporate products, services, and behaviour, the combination of different liability regimes will make it difficult for victims to obtain redress, thereby making legal regulation challenging (Noe, 2018; Uytsel, 2021). Overall, the introduction of new AI-powered products necessitates the meticulous development and modernisation of legal frameworks.
With regard to medicine, the debate about the potential of AI and autonomous robotic surgery is characterised by a lack of consensus about the liability of AI management subjects for the failure of a particular medical procedure (such as performing a surgical operation), which can be subject to a variety of factors, including malfunctions and cyber-attacks. Many national laws impose liability on both the manufacturer and the operator of the robotic surgeon, but given cybercrime, liability can also be imposed on the criminal element (Ludvigsen & Nagaraja, 2022). As for the possibility of holding the AI itself liable, this is unthinkable at the current level of technology, even though a surgical robot may soon be able to self-learn and perform routine interventions under the supervision of a human surgeon (O'Sullivan et al., 2019).
The ability of AI to self-learn, imitate, and outperform humans in the speed, accuracy, and volume of intellectual operations is in no way comparable to human consciousness, self-awareness, and intelligence. To determine who is liable for the damage caused by an intelligent autonomous robot, the actions (or inactions) of the programmer, manufacturer, user, or another person who illegally interfered with its operation must be examined and evaluated to determine the source of the AI's malicious behaviour. By analogy with autonomous vehicles, for instance, a surgeon would play the role of a driver who monitors the condition of severely ill patients undergoing surgical interventions performed by machines with autonomous capabilities (O'Sullivan et al., 2019). However, the individual who performed the surgical intervention, i.e. the human surgeon, would continue to be liable for medical errors, not the instrument, i.e. the surgical robot.
When discussing the possibility of recognising legal personality for AI systems, two key analogies are used in the literature: one between AI and animals, and the other between AI and legal persons or collective persons (Chen & Burgess, 2019). Many researchers agree that legal personality as it is recognised for humans is unique and cannot be recognised for AI, especially since AI has not demonstrated any evidence of consciousness or intelligence, at least thus far. Accordingly, there is not even a unified understanding of how an AI could provide a remedy for an incident that results in material loss or threats to the life or health of an individual or group of individuals. Given this, there is some discussion in academic circles about whether it is appropriate to hold AI legally liable, particularly regarding how AI could provide a remedy, as a separate legal entity, for the harm it causes to others (Danilov, 2021). It cannot be disputed that AI communicating with humans through a language conditioned by software can affect those humans' decisions and personalities in matters of liability redistribution between the AI manufacturer and the AI operator. This is true despite claims that AI cannot make decisions, that humans have power over it, and that AI merely serves as the basis for human decision-making, i.e. the result of reasoning (Wojtczak, 2022). Some researchers argue that existing legal persons, such as developers or other companies with the right to use the AI resource, should be held legally liable rather than the AI itself. Furthermore, this debate may touch on the ethical dimension of the question: do humans have a moral obligation to grant AI legal personality? To address fundamental questions of responsibility for the actions of AI systems, society will need to conceptualise new cross-cutting legal institutions (Turner, 2019). The common ground among experts is that, at the very least, given where humanity is in terms of technological advancement and understanding of its place globally, contemporary AI systems do not share enough traits with people to give rise to a moral duty to recognise them as subjects of law (Dremlyuga & Dremlyuga, 2019).

Materials and methods
This article is based on a study of international acts, as well as national legislation in civil law, specifically in the regulation of the protection of intellectual property. The legal framework occupies a separate niche, as does the legislation of the EU and individual member states, as well as developed Asian countries (Japan, South Korea) that regulate liability for the use of high-tech equipment. The above-mentioned regions were chosen due to their high levels of sustainable economic development, high levels of social digitalisation, and high-tech manufacturing that has an impact on machinery and instrumentation (Bonsay et al., 2021). The EU is a political and economic union of European nations whose economies have merged to foster growth and prosperity. Indeed, many EU member states are described as economically advanced, with large industries and widespread service penetration. To accomplish these objectives, the European Parliament and the European Commission, the EU's legislative, executive, and supervisory bodies, have passed numerous regulations and directives. In terms of AI regulation, the documents examined include the European Parliament's resolution on a civil liability regime for artificial intelligence (European Parliament, 2020).
Furthermore, expert and legislative initiatives to establish AI liability for damages caused to the consumer, customer, or owner are considered. This includes an examination of the most effective procedures utilised by members of the legal community, such as attorneys and constitutionalists from Europe, the United States, and Russia. Finally, by systematising the data obtained, the authors determine whether the issue of holding AI accountable for its actions has been fully resolved in line with the objectives of this research. Using a systematic approach, the authors provide an overview and analysis of selected legal aspects pertaining to the regulation of questions of the legal status and responsibility of AI-powered subjects. Following that, based on previous research, the feasibility of a gradual legislative framing of legal relations and the establishment of legal liability associated with the use of AI at both the national and international levels is discussed. This approach makes it possible to complete two tasks: first, to identify the most significant implications of interacting with AI for society; and second, to demonstrate how important these implications are by using matching examples. To that end, the dialectical method is used in a multidimensional discussion of AI personality concepts in existing legal relationships.

Results
The concept of AI, in the sense in which it is perceived now, as a technology and an electronic algorithm, did not formally emerge until 1956, when it was first mentioned in the scientific literature. Since the early 1960s, the scientific and technological revolution has facilitated the development of AI's fundamental components, particularly since the rapid development and widespread adoption of personal computers. This has resulted in a more thorough and systematic study of AI. Until recently, it was widely assumed that intelligence was a unique feature of a biological being, namely Homo sapiens. This perception has begun to shift as a result of the ongoing development of computer systems. Persuasive arguments began to emerge confirming the idea that intelligence, or the ability to know, understand, and think, can be both innate (natural) and created artificially (McCarthy, 2007). There followed the emergence of AI as a scientific discipline, after which AI began to infiltrate highly specialised production and everyday life (Filipova, 2022; Schwab, 2015).
Parallel to this, various scholarly communities, including lawyers, began discussing the possibility of using AI as a tool to organise and facilitate their routines. The legal community, of course, was interested in questions regarding the potential of AI in the professional area (Rissland, 1989; Sartor & Branting, 1998). Issues generating lively debate include:
• Predictable changes in the judicial system due to the introduction of AI into the courts, and AI's role in jurisprudence;
• Automating the creation of legal documents using neural networks as a result of deep learning;
• New risks of discrimination in AI decision-making;
• Threats to data confidentiality due to the permeation of AI, whose data retrieval and processing power far exceeds that of humans;
• Concerns about non-human machine learning algorithms that function in a black-box fashion, preventing the user from knowing how the intelligent system generates a specific result.
This list is not exhaustive, but the relevance of the topics covered is beyond doubt. As can be noted, the first four questions relate to a greater extent to jurisprudence as a profession and type of activity. AI and its related issues are still the subject of many debates at the intersection of ethics, philosophy, law, and other sciences, and the questions raised within these debates are often rhetorical. At the same time, the last question, which is directly related to industrial production and the introduction of robotics into it, causes less discussion and is quite practical. Currently, there are examples of successful implementation of AI in mechanical engineering (Kosimova, 2023). The combination of robotics and artificial intelligence opens up many opportunities for the development of new technologies and solutions in various fields, including automotive manufacturing, medicine, transportation, and many others. Although these industries use algorithms and machine learning techniques that have been proven to work, hardware errors can still happen, and the damage they cause can doom the product or service they provide.
Nevertheless, robotisation can be viewed from a different, economic point of view. The production process is made simpler by robotisation, allowing the business owner to offer the consumer a wider variety of goods and, therefore, increase profits. Furthermore, given the civil law relationship, when upgrading a specific enterprise the owner may, if possible, lease the AI facility rather than purchase it, in agreement with the supplier or the AI operator. Incurring debt and owning property are prerequisites for the ability to sue and contract. The potential for AI systems to amass wealth raises the question of whether and how they could be taxed. Putting a tax on robots was proposed as a way to address the expected reduction in the tax base and job losses due to automation (King et al., 2017). Meanwhile, industry representatives argued that such measures would harm competition, so the practice of taxing AI was never implemented. Instead of taxing machines or robots, another option is to target businesses that have been found to have abused their market position, for example through more aggressive taxation of profits. Still, what would be taxed is AI systems' ability to borrow and own property, not personality traits (Floridi, 2017). Other aspects concern an increase in the number of damages caused by actions, if not of AI itself, then of its operators when providing a service. Although there have been no precise and reliable measurements of the damage caused by a specific type of AI to a specific victim, some societies are concerned about the potential for harm. For example, according to a survey conducted in Japan in December 2020, nearly 51% of people aged 70 or older said they were worried about the harm that could come from unexpected AI decisions; the least worried were people between the ages of 30 and 39 (Statista, 2020). Looking at Europe, Italy's plan to start using AI in 2021 came with some perceived risks. For example, nearly half of those polled stated that AI solutions were not industry-specific, and 32% stated that AI would result in more errors and lower accuracy. Risks such as algorithm bias, uncertainty about confidentiality, and unclear legal liability status are also mentioned by the IT community (Statista, 2022).
Still, the need is driven by companies' desire to maintain and raise the calibre of the AI-based products they release. Such thinking is reflected in the way European laws have been changed to meet the needs of the times and the widespread use of AI in all aspects of life. Thus, on 20 October 2020, the European Parliament approved a Resolution with recommendations to the Commission on a Civil Liability Regime for Artificial Intelligence (2020/2014(INL)) (European Parliament, 2020). The Resolution emphasised the importance of defining a clear and coherent civil liability regime in Europe for the development of AI technologies and the products and services that benefit from AI, in order to provide manufacturers, operators, users, and other third parties with adequate legal certainty. The same motivation drove the European Parliament to make several proposals on liability in its 2017 Resolution (European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))). The two documents differ in some important ways, because the European Commission has since carried out a more in-depth analysis, especially regarding civil liability for damage caused by AI systems (Sousa Antunes, 2020). Implementing AI liability rules effectively could help improve the quality of service in the European consumer market, and could only improve member states' compliance with the EU Product Liability Directive (Cabral, 2020).
National law regulates AI applications and liability either through specific legislative changes or through the implementation of development strategies, concepts, government standards, technical regulations, and recommendations that do not contradict general legislation. For example, when it comes to the development and application of AI in the automotive industry, China has updated its 2020 Intelligent Vehicle Development Strategy, which identifies five key objectives, including the establishment of a "comprehensive cybersecurity system" (Schaub & Zhao, 2020). Some of the provisions are meant to be strategic, but for the time being they are more like suggestions. Since 2016, for example, the US Automotive Information Sharing and Analysis Center (Auto-ISAC) has maintained a series of automotive cybersecurity recommendations (Auto-ISAC, 2016) that offer guidance for implementing automotive cybersecurity principles.
When it comes to the public nature of legislative change, in 2018-2019 a public debate began in France on the transformations associated with digitalisation, including both constitutional and legislative changes. In January 2020, French National Assembly deputy Pierre-Alain Raphan proposed a Charter for Artificial Intelligence and Algorithms (Charte de l'intelligence artificielle et des algorithmes). After registration, it was received by the Parliamentary Committee on Constitutional Legislation. The project's authors proposed that the new constitutional law, the Charter of Artificial Intelligence and Algorithms, be referenced in the preamble of the French Constitution and that some fundamental issues be enshrined in the Charter itself, such as:
• Regular auditing of artificial intelligence systems (Article 4);
• Assessing the evolution of artificial intelligence (Article 4);
• Restrictions to prevent malicious misuse of artificial intelligence systems (Article 5), etc.
According to Article 1 of the draft law, the law would apply to both cyber-physical systems and virtual systems. "A system. . . cannot have subjective rights because it lacks a legal personality. However, the obligations arising from legal personality shall fall on the legal or natural person" who uses the system, "becoming its de facto legal representative." Thus, despite much debate and opposition, the process of developing a new law to deal with the evolving digital reality and AI began. However, while there may be some progress in individual countries, this does not imply that positions can be unanimously agreed upon and adopted at the supranational level, as the example of the EU shows. Despite declaratory resolutions, the European Parliament is currently undecided on the final regulation of AI in legal practice (European Parliament, 2022a). Ad hoc committees and think tanks within EU institutions are also preparing the first set of regulations to manage the opportunities and threats posed by AI, with a focus on fostering confidence in AI and addressing its potential effects on people, society, and the economy. The new rules also aim to foster an environment conducive to the success of European researchers, developers, and businesses. The European Commission wants to increase private and public spending on AI technology to up to €20 billion a year (European Parliament, 2022b). Notably, there is still a stumbling block in some areas, such as the robotisation of medicine, which is directly related to human life and health. Even in the near future, it will be challenging to predict how AI will behave. In this regard, AI technology is only just starting to become part of the everyday lives of ordinary consumers and to affect the legal relationships that arise between them when they purchase a product or service. This trend applies to everyday aspects of human life. Even in the medical field, despite the most recent technological advancements and the incorporation of AI elements, all treatment, diagnosis, and therapy processes in the healthcare system must be conducted with one clear goal in mind: to maintain health and preserve human life (Klarić & Karadža, 2021). It is necessary to maintain a broad international discussion about the accessibility of advocacy for people whose lives have been affected by AI. This could be achieved by establishing a special institution that would compile, examine, and offer legal support for international disputes involving AI actions. The intended accessibility would result in quick, low-cost, and effective dispute resolution.

Discussion
A human is not only the owner of an intelligent system but also its creator, in contrast to artificial intelligence, which began operating due to the manufacturer's programming of algorithms. Meanwhile, the current system can function effectively if AI technology manufacturers provide sufficient transparency in explaining how AI decisions are made (Reed, 2018). The ability of AI to reproduce and create new algorithms responsible for emotions and conscience, which are traits of human personality, has not yet been achieved by current technological development. This means that, for the time being at least, the level of autonomy that AI systems currently possess is still quite limited, and their actions are almost always comparable to human behaviour. It is thus not difficult to explain why, at the time this study was written, there is no need for an electronic personality. Another explanation is that the AI system must be programmed by humans: the state of technology has not (yet) advanced to the point where an AI system is self-sufficient and no longer requires a human component. However, the concept of an electronic personality should not be dismissed outright. As high-tech society evolves, the concept of the e-person should be kept in mind for the future, since AI systems may become even more autonomous. In light of the current state of affairs, AI's creators, operators, and potential end-users must continue to bear responsibility for their actions (Andras, 2021). Summarising the above, based on an analysis of theoretical views and a review of policy initiatives, a number of main problems regarding AI responsibility can be identified as an intermediate result. All of them, to one degree or another, are shared in the legal scientific community (Arrieta et al., 2020; Coeckelbergh, 2020; Königs, 2022; Sartor & Branting, 1998). These problems include:
• The general issue of determining the guilt and responsibility of AI, as well as the issue of the qualification of the act and its qualifying signs (where a criminal offence is concerned). The question of how to establish the guilt of artificial intelligence in actions leading to damage remains rhetorical today. Artificial intelligence, being a program or a system, may not be aware of the consequences of its actions, so determining who is responsible is a key issue;
• Comparing AI actions with human actions. The question arises of how to evaluate the actions of AI against those of a person. Some "standards" for concepts such as "guilt" are based on human actions, and adapting them to AI can be difficult;
• Lack of intent. AI cannot have intent like a human. This, in turn, may make it difficult to determine intent or intentions, which is very important in the legal (in particular, criminal law) qualification of an offence;
• The impossibility of recovering material compensation from AI. This aspect follows from the inability of AI to possess material goods;
• The complexity of the algorithms. Some artificial intelligence algorithms, such as neural networks, make decisions based on complex and opaque internal processes. The question inevitably arises of how to explain and justify AI decisions during an investigation or in court;
• Bias and discrimination. If AI algorithms learn bias from training data, this can lead to incorrect or discriminatory decisions. The question then becomes how to regulate such cases and how to determine liability for discrimination arising from AI decisions;
• Transparency and responsibility. The question arises of how to ensure transparency and clear responsibility in cases where damage is caused by incomprehensible or autonomous AI decisions. It is important to have an accurate idea of how to assess whether sufficient measures have been taken to prevent damage. In addition, the question arises as to what purpose such responsibility should pursue. As is known, punishment as a component of responsibility is aimed, among other things, at restoring social justice, correcting the offender, and preventing the commission of new offences, and it can only be applied to those who are able to grasp the causal relationship between illegal actions and their consequences (Gromet & Darley, 2006). Obviously, at present, AI is not endowed with such a capacity;
• The volume and complexity of data. Evidence about the impact of AI on events or damage can be based on large amounts of complex data. In this regard, the question arises of how to provide reliable evidence in court when the workings of AI can be difficult for lawyers and judges to perceive.
These challenges will require legislators to develop new rules, regulations and regulatory mechanisms to ensure fairness and efficiency in dealing with damage caused by AI. While the above arguments speak quite convincingly against AI being a legal entity, there are other points of view. Regardless of the course of history and the level of technological development, with or without the involvement of the human component, the need to supplement legislation was recognised even before technology so pervasively surrounded society. If AI is to progress to a level of absolute technological sophistication, such as a thinking humanoid robot with feelings and emotions, laws must be changed to accommodate the roles of high-tech robots in society (Sherman, 1998). This does not necessarily imply that the machine visually resembles a human: machines with advanced AI can act as computers with human-level (or higher) intelligence. Still, whatever shape an ideal AI takes, it will have a profound and lasting impact on how people live and how human civilisation advances (Russell & Norvig, 1995). Hence, the legal personhood of AI tools will sooner or later become a reality, and once it does, it will be irreversible. The rights and obligations that come with legal personality need not be the same for everyone subject to the legal system; even among individuals, the struggle for equal rights for women, ethnic or religious minorities, and other disadvantaged groups echoes this truth (Bryson et al., 2017). Despite well-established legal traditions, this does not imply that AI governance should be regulated solely at the national level. In light of the globalised world and the vast number of international venues, it is possible to optimise the implementation of a risk-based approach to allocating legal liability for AI-caused accidents. This approach can be improved through international collaboration aimed at creating a shared understanding of risk assessment for the creation and use of AI products across national boundaries and cultural contexts. In addition, efforts should be made to create a standardised legal framework for client/consumer liability based on that shared understanding. Certainly, not all nations will be immediately affected by a massive influx of AI-oriented industrial components. This does not mean, however, that states should refrain from developing a regulatory framework to allocate liability among the various parties that could be involved in an incident involving the use of AI (Li et al., 2018). Supranational bodies, such as the European Parliament, are attempting to begin developing regulations related to the vision of AI as a legal subject. Their efforts demonstrate that the automation of daily human routines is a matter of time, and that the remaining legal liability principles, if left unchanged, will be unable to satisfactorily resolve future disputes involving autonomous AI systems (Atkinson, 2020).
Ultimately, the arguments above support the view that AI may well acquire legal subjecthood. If legislators recognise this sooner rather than later, more relevant legislative initiatives can be considered and passed without the need to immediately revise existing law. Such initiatives will have to keep pace with technological and social change (Bertolini, 2020). At a time of rapid digital transformation of society, rather than giving in to ideological dogmatism or blindly adhering to long-standing custom, legal science ought to begin formulating proposals as soon as possible.

Conclusion
The emergence of AI has altered the perception of intelligence, previously thought to be a feature unique to humans. AI's impact and positive qualities have enabled its use not only in scientific research but also in practice, such as in knowledge-intensive industries, services, and e-commerce. At the same time, the study gives reason to believe that the question of the legal personality of AI can arise only once jurisprudence receives convincing evidence that AI can form a subjective assessment of its actions, that is, an internal attitude of the subject towards the act it performs. It will also be important at that stage that AI possess a property equivalent to "awareness" of a causal relationship; otherwise, from the standpoint of traditional jurisprudence, there is no purpose in holding AI responsible. At the current stage, the responsibility of AI can be considered only as a subjective, evaluative category outside the legal plane, and all legal responsibility can be assigned only to the owner/operator of the electronic systems and mechanisms that use AI capabilities.
As this study shows, the main issues regarding AI damage and AI liability concern AI culpability, the subjective perception of causality by AI, the possibility of linking the concept of "intention" to the category of AI, and the interpretability of AI decision-making processes. The advent of AI has not yet become a deciding factor in the evolution of the human way of life. Nonetheless, the domain of AI may soon take on this role, as AI can advance indefinitely. The challenge is to define an AI legal framework without jeopardising individual interests, and to create a reliable and flexible framework that mitigates threats to people's livelihood and safety. European best practice suggests that all interested parties have attended to the variety of AI issues and, through their efforts, have laid the groundwork for the practice of regulating AI activities to continue developing and for minimising the potential harm caused by AI actions. In this regard, it is critical to consider a country's level of development: in developed countries, legislation and judicial practice are already in place and systematised, whereas in developing countries the issue remains largely at the level of doctrine. It remains important to continue drafting potential future strategies for the legal regulation of AI within the confines of global legal norms. Furthermore, the legal personality of AI needs to be defined in a way that commands worldwide agreement, so that disputes can be settled quickly and effectively. Future research should focus on the legal aspects of AI decision-making.