AI ethics and learning: EdTech companies’ challenges and solutions

ABSTRACT The aim of this study is to identify the ethical challenges, solutions and needs of educational technology (EdTech) companies. Qualitative data was collected in interviews with seven experts from four companies, and the data was analysed using inductive content analysis. The four main areas of challenges were ambiguous regulations, inequalities in human learning, ethical dilemmas in machine learning (ML) and lack of ability to assess consequences in society. According to the studied companies, AI regulations are difficult to understand and implement. There is also much to be done in terms of reliability, transparency, and safety. Consequently, companies suggested that AI-based products should be more preventive, safe, explicable, and equally accessible. Sufficient information, multi-professional support (also within companies), global collaboration, sharing of best practices, and general discussion were emphasised. The results show that EdTech companies are aware of their ethical challenges and of their responsibility as disseminators of information. However, translating information into practice is challenging because the information is often very fragmented and difficult to understand. Companies hoped that everyone (themselves, consumers, educational institutions, researchers, funders, and decision-makers) would do more together to overcome the ethical challenges of AI.


Introduction
AI is radically changing the landscape of teaching and learning. New technology offers more human-machine interaction than ever before. Data collected from human learners can be very multimodal and comprehensive. Wong et al. (2018), Roschelle et al. (2020), and Niemi (2021) have summed up the main themes discussed in relation to AI and education. Firstly, there are big changes in data gathering, which is multimodal and may include perceptions from multiple sensors. Machines can detect complex features: for example, cameras and motion detectors can be used to identify a person entering a building. Secondly, due to ML, we can have representation and reasoning as well as models of people and their behaviour. Machines can use those models and deduce what might happen next. This is based on algorithms and how machines are programmed to analyse the data. Thirdly, big data can be used to gain new understanding as well as relevant patterns for learning. Fourthly, natural human-machine interaction (e.g. interacting through speech or gestures) can support and tutor human learning. Fifthly, AI has far-reaching social impact. The infrastructure needed for education is vast and has a decisive impact on people's lives at individual and societal levels.

Ethical issues in AI-based learning contexts
AI is not a new topic (Turing, 1950/2009), nor is AI in education (AIED), which took its first steps already in the beginning of the 1970s (Self, 2016). Furthermore, many AI-related issues were identified as ethically challenging some time ago (e.g. Mason, 1986). Jobin et al. (2019) conclude that, although there is a consensus that AI should be ethical, regulations are conceptually diverse and significant differences emerge between the parties that offer guidelines. According to Hagendorff (2020), ethical guidelines rarely help companies in their decision-making processes, and self-regulation is much preferred. Jobin et al. (2019) analysed 84 regulation documents from the years 2014-2019. Their results reveal that the most important issues converge globally around five ethical principles: (1) transparency, including explainability and understandability; (2) justice and fairness, including equity and (non-)bias; (3) non-maleficence, including security, safety, and prevention; (4) responsibility; and (5) privacy. Hagendorff (2020) has drawn three main principles from 22 ethical guidelines: accountability, explainability, and fairness. Consequently, the wide range of ethical guidelines and regulations causes uncertainty. Bostrom and Yudkowsky (2014) worry about how different partners understand regulations and the overall consequences of AI. They emphasise that AI-based decisions should be transparent, explainable, and fully auditable. According to a literature review by Morley et al. (2020), there is too much reliance on explicability (including explainability, accessibility, and understandability), since it tends to be linked to other ethical principles: for example, if a system is explicable, one can assume that it is also more ethical.
Because of the complexity of ethical demands and the rapid development of AI, trustworthiness is commonly invoked when speaking about AI applications (e.g. HAI, 2020). Trust is also often used in connection with deontological ethics, particularly when referring to professional ethics. It includes the idea that society trusts professionals' moral behaviour. People trust doctors, judges, and teachers because they are committed to professional ethical codes. When AI is implemented in education and learning environments, people must trust that the algorithms are planned by professionals who follow ethical principles. This means that all service providers and users are aware of the ethics involved in AI, and they have the capacity to make decisions accordingly. Yu and colleagues (2018) have proposed a taxonomy that delineates four areas of ethical decision making in AI-based settings. It consists of (1) designers and technical systems, (2) individual ethical decision frameworks, (3) collective ethical decision frameworks for multiple agents, and (4) ethics in human-AI interactions. Bostrom and Yudkowsky (2014) have criticised the decades-long academic discussion on the relation of AI and ethics, as system development and ethical research have only slightly crossed. Moreover, it has been argued that there is a general discussion about principles but not as much about practices, that is, how ethical AI is or should be implemented (Morley et al., 2020). This study focuses on the ethical challenges faced by companies that have not been resolved despite numerous attempts. EdTech innovations and the development of their contents are not addressed in this study. The recent COVID-19 pandemic has increased, for example, distance learning, the need for support, and the use of technical aids (e.g. Niemi & Kousa, 2020), which makes tackling ethical challenges even more acute.

Methodology
This is a qualitative study (Cohen et al., 2007). The data was collected in interviews with seven experts from the following four AI software and service providing EdTech companies, all with wide international networks:

- Company I provides services and information on the well-being of students and teachers. The aim is to support students' learning and promote their social and emotional skills. The company has about 10 people from the fields of business, pedagogy, and research, and it also operates in the international market.
- Company II is specialised in training high-skill technical employees. Its service consists of VR training solutions for different learning environments, learning needs analysis, telemetry, and analytics. The company has about 60 people from the fields of business, engineering, and software development, and it also operates in the international market.
- Company III helps clients to have meaningful data that supports decision-making as well as safe data connections. There are about 10 people from the fields of business, education, engineering, and software development, and the company also has a global network and business.
- Company IV is specialised in combining AI and human work in oil production by training operators. Dozens of people from the fields of business, education, engineering, research, and software development are related to the company's work nationally and internationally.
The companies interviewed were involved in a two-year project, AI in Learning (2020-2021), seeking new solutions for how to apply AI in education and learning. The aim of the project was to create new knowledge about human-machine interaction and develop new AI-related tools for learning and well-being in schools and working life contexts. All interviewed participants have at least 10 years of expertise in the EdTech field. The interviewees were selected by relevance sampling (Krippendorff, 2004), since adequate AI ability would ensure the best contribution to the research questions. Seven voluntary company representatives (pseudonyms A-G) were interviewed individually by the researcher between the end of 2019 and the beginning of 2020, before the AI in Learning project. Each interview lasted about 60 minutes and was recorded digitally. There were about six single-spaced pages of verbatim written text per interviewee, and the texts were analysed by inductive content analysis. The coding focused on three main interview themes: companies' ethical challenges, solutions to current and future ethical challenges, and need for support. Companies were also asked about their current situation and their opinions about AI and EdTech issues to get more in-depth information. During the analysis, the data was coded into smaller units and grouped into categories. All interview data was re-analysed several times for reliability and stability, and for intercoder reliability and agreement, several negotiation sessions were held between the authors of this study (Krippendorff, 2004). Table 1 demonstrates how one quote is coded into subcategories forming a main category.
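The intercoder-agreement step described above was handled through negotiation between the authors rather than by computing a coefficient, but a statistical variant of such a check can be sketched as follows. This is purely illustrative: the coder assignments and category labels below are invented for the example, and the helper functions (`percent_agreement`, `cohens_kappa`) are not part of the authors' method.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    # Share of coding units that both coders placed in the same category.
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    # Observed agreement corrected for the agreement expected by chance,
    # based on each coder's marginal category frequencies.
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical assignments of six interview excerpts to the study's
# four main categories by two independent coders.
a = ["regulations", "ML", "society", "learning", "ML", "regulations"]
b = ["regulations", "ML", "society", "ML", "ML", "regulations"]
print(round(percent_agreement(a, b), 3))  # 0.833
print(round(cohens_kappa(a, b), 3))       # 0.76
```

A kappa close to 1 would indicate near-perfect agreement; in practice, disagreements such as the fourth excerpt above are the cases that the negotiation sessions between authors would resolve.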
The reliability and stability of this study were established by re-reading, categorising, and analysing the data. The reproducibility is rigorous, since the results are comparable within this study and with others (Krippendorff, 2004): the categories that were found are in line with the main ethical principles of AI that have been presented in various studies before (e.g. Jobin et al., 2019). The entire research process is based on the knowledge, consensus, and discussions of the companies and researchers of the AI in Learning project. Due to the sensitivity of the topic, the results are completely anonymised so that the respondents cannot be identified, for example, from the citations. Therefore, only general information about the companies has been presented and not, for example, information that could be considered a trade secret. This study also has considerable social validity (Krippendorff, 2004), since there is a common need to understand more about AI and its ethical challenges and solutions to diminish the gap between producers and users.

Results
The results are based on the conceptions and experiences of seven interviewees from four EdTech companies. The findings are presented thematically according to the research questions. Firstly, the ethical challenges experienced by companies are presented, and secondly, their solutions to these challenges. Thirdly, the need for support is reviewed.

Companies' AI-related ethical challenges
According to this study, EdTech companies have faced multilateral AI-related ethical challenges. Most of the challenges were contemporaneous, with some expected to become more relevant in the future. Four main themes emerged from the data: (1) ambivalence of regulations, (2) inequalities in human learning, (3) ethical dilemmas in ML, and (4) the lack of ability to assess consequences in society. The themes are explained in Table 2.

Ambivalence of regulations
According to all companies, legal, regulatory, and ethical frameworks for AI are indefinite and difficult to understand, interpret, and implement. The concept of ethics is complex and understood differently in different countries. As Participant A expressed: "The entire terminology of AI is very ambivalent. Currently, anyone can claim almost anything about AI and its potentials". Some of the respondents argued that most ethical guidelines on AI are too ambiguous and therefore useless in helping their businesses develop ethically sustainable products and services. The companies mostly have their own ethical guidelines, but the challenge lies in implementing ethical AI in non-EU countries with more lenient regulations. Consequently, the global markets outside the EU were seen as problematic. In addition, some companies fear that the EU will fall behind global AI progress since it has such strict and inflexible regulations. According to Participant G: "Legal processes are too slow and do not respond to the rapid technological development of AI-based commodities. The school system in many countries is also too slow to adapt to new technology".

Table 1. Example of qualitative data analysis concerning companies' challenges.

Quote: "There are no satisfying solutions for the responsibility issues between humans and machines yet. The starting point is that humans make the final decisions and have therefore the greatest responsibilities, too. I wonder if it is fair, if algorithms do all the work and humans make assumptions and decisions based on that work afterwards. Do humans have enough knowledge and understanding to make those decisions?"
Subcategory: Responsibility and decision-making issues between humans and machines are complicated
Main category: Ethical dilemmas in machine learning

Inequalities in human learning
Most company representatives considered students' and workers' opportunities to use digital devices and participate in AI-based education unequal, varying enormously even within the same country or city. Therefore, providing an equal opportunity to learn about and understand AI, as well as to choose ethically sustainable services and products, is challenging nationally and globally. In fact, developing equally accessible and understandable AI applications for schools and workplaces was seen not just as challenging but even as impossible. AI applications are based on the data of the average learner and do not recognise cultural differences, special needs, or different learning paths, even though it would be technically possible. Participant A explained that customised AI applications for different learners and their needs are too expensive to implement. The companies also reasoned that they do not have enough support and resources to understand cultural or educational differences and serve diverse needs. There was also a common awareness of parents', teachers', and workers' strong resistance to AI-based implementations. Participant C explained: "It is not always straightforward to offer simulation-based training to our employees since they are not so enthusiastic about it in some countries. They might be afraid that they will lose face or even their jobs if the learning outcomes of the training are unsuccessful. That is why we have a lot of work to be able to supply and use similar AI-based equipment in different cultural environments."

Ethical dilemmas in ML
Ethical challenges in learning were also approached from the viewpoint of machines and data management. All interviewees recognised and predicted ethical concerns, such as how to safely collect, process, share, and store data. The management of personal and sensitive data is particularly challenging and important. Participant C tried to explain their major concern: "I see that the biggest ethical question is: can algorithms be trusted? We should be able to trust that the impacts of algorithms follow the safety norms and standards and, most of all, are not harmful to humans. For example, if there is a financial loss caused by a bad algorithm design, the problem can likely be solved afterwards. That is not the case if the algorithm's impacts are harmful or dangerous to humans."
Furthermore, responsibility issues, decision making, and work distribution between humans and machines concerned some company representatives, as Participant D described: "There are no satisfying solutions for the responsibility issues between humans and machines yet. The starting point is that humans make the final decisions and have therefore the greatest responsibilities, too. I wonder if it is fair, if algorithms do all the work and humans make assumptions and decisions based on that work afterwards. Do humans have enough knowledge and understanding to make those decisions?"
In order to reach ethical goals, the avoidance and recognition of bias in data management is crucial. Participant B gave an example of a situation that they had faced in the product development process: "We have used a lot of time to make simple and understandable questions. Despite that, we cannot ensure that everyone understands them similarly. For example, when we tested the prototype questionnaires with 8-year-olds, it took just eight seconds until one of the pupils asked: what is an atmosphere? That is a good example showing why it is important to make clear and straightforward questions and learning material in order to avoid data distortion and biases."
Even so, there are positive aspects of AI, as Participant D stated: "In my opinion, AI-based applications are beneficial to companies. For example, machines can replace humans in the kinds of working environments that could be harmful or even dangerous. Although the main purpose of our organization is based on financial profit, we do aim at the health and safety of our employees as well."
Some of the respondents were optimistic about the work distribution between humans and machines: "I do not believe that the number of employees is radically decreasing because of AI. However, I do believe that the type and content of human work is changing, and the trend is not necessarily negative. For example, machines can do most of the routine work. In addition, algorithms can detect anomalies in all stages of product development processes. All in all, humans are still much needed in the future to understand and manage different systems and processes, to identify challenges and to handle unforeseen situations." (Participant D)

Lack of ability to assess consequences in society
All companies find the negative public opinion of AI challenging, as well as the fact that the common discussion is controversial and polarised. Participant F expressed: "There is a negative public opinion about AI, but at the same time people carelessly give their personal information everywhere and use services and products in their everyday lives without even knowing that they are AI-based". Furthermore, new AI-related applications are considered difficult and untrustworthy in some workplaces. Participant C explained that employees might feel incompetent and fear losing their jobs. The main reason for the negative public opinion is that consumers, policymakers, and the media do not have sufficient education and knowledge to understand AI and its applications. People also have unrealistic expectations and fears about AI, many of which are presumably based on the media and the film industry. According to the interviewees, companies exaggerate the qualities or features of their AI-based products as well. Most of all, the companies were unsatisfied with the current situation, wherein the reputation of big companies, such as Facebook, Google, Amazon, Microsoft, and Apple, has a negative effect on the AI field and its ethical endeavours. The respondents also criticised investors and customers who question the trustworthiness and potential of smaller companies to provide ethically sustainable AI-based utilities.
In summary, the findings reveal that human learning impacts ML and vice versa, and both have challenges. As humans have unequal opportunities to participate in and understand AI, companies struggle with complex data management processes with countless ambiguities and ethical problems. There is much to be done in society before ethically sustainable AI-based products and services become part of people's everyday lives.

Companies' solutions to AI-related ethical challenges
In the analysis of the second research question (how companies may solve their AI-related ethical challenges), four categories emerged. According to the respondents, ethically sustainable solutions should be (1) preventive, (2) safe, (3) explicable, and (4) equal. The categories are detailed in Table 3.

Preventive
First, all companies agreed that the best way to tackle ethical problems would be prevention, problem recognition, and constant risk monitoring during product development processes. Some companies use their own ethical guidelines and checklists, which participants considered extremely helpful. Moreover, sharing multi-professional knowledge with their own employees, experts from other companies, researchers, and consumers allows companies to prevent and solve ethical challenges. Some respondents added that collaboration and information sharing should be global so that companies could understand and prevent risks more efficiently. Participant F explained the following challenge prevention strategy: "We do not offer black boxes or mysterious AI-based innovations. Our business is based on scientific research, and that is a remarkable competitive advantage for us. It makes a real difference in today's world if the company can handle its own problems, like how to detect anomalies and biases from the data or how to solve data safety issues right from the beginning of the product development processes. When we know our products thoroughly, we can be honest about them and explain every detail of the development process and algorithms if necessary."

Safe
Safety issues concerning ethically sustainable AI-based products and services were questioned and contemplated by all interviewees. According to the companies, the most important means of ensuring safety is legislation. As Participant B expressed: "Since the primary goal of any company is to make money, the true, ethical choices are often based on consumers' own morals. That's why I cannot see anything else but AI legislation ensuring the ethicalness of companies and consumers' safety in turn. I think, however, that ethical issues are soft and therefore difficult to regulate strictly."
Other solutions to improve safety included human-machine work distribution, combining new AI-based technology with older, robust technology, and using research-based knowledge and models in product development and data management. Moreover, the importance of de-identifying personal information was made clear by all interviewees.

Explicable
People also need more understanding of AI to choose products that are ethical and to know how to use those products without risks. AI was considered by participants to be as important an issue as climate change. Participant E argued: "People should understand that AI is a big issue like climate change, and it has an influence on everyone. One might even ask if AI should be part of common knowledge and media literacy. I think that enhancing the common knowledge of AI might be the key for understanding. Therefore, there should be more fact-based information and public discussions about AI to understand what it is about."
Consequently, participants suggested more education on AI and ethics at all levels of education and that educational material should be accessible and understandable. Furthermore, knowledge and public discussions about the possibilities, risks, and threats of AI would enhance common understanding of what ethical AI means in practice. The interviewees agreed that companies, schools, and universities should start sharing their knowledge of AI for the common good. Participant E shared their company's concern about the situation: "We do have big inequality problems with issues like data ownership, accessibility, and common understanding. Another problem is how the existing data is distorted and biased. Furthermore, we live in a divided world, and there is a considerable number of children who cannot be part of digital development. Their future has, therefore, no foundation. I fear that these problems will keep on continuing. We cannot just hide our heads in the sand and say that the internet fixes the problems."

Equal
Inequality in schools and companies was considered a big challenge nationally and globally by all interviewees. Most suggested that the ethically sustainable use of AI-based applications rests on trustworthiness and respectfulness during training in schools and workplaces. They pointed out that the opportunity to take part in training should be similar in every workplace and school at the national and global level. Additionally, customised and easily accessible options, such as browser-based applications, and extra support should be available when needed. Companies should take diversity issues into account and avoid sensitive subjects when developing educational applications and material. Participant C had the following proposition: "I think that the best and most effective AI-based education system would be based on both human and machine work. In that kind of system, those who like to study independently could use an AI tutor when needed, and those who needed more human support would have it instead. My opinion is that learners should have different options to choose from. What is more, AI could be especially helpful in order to recognize individual learning needs."
The best way to handle ethical problems is thus prevention and risk monitoring at all stages of the product development process. Multi-professional teamwork and collaboration are also needed. Even if the common challenge seems to lie in developing efficient, trustworthy, transparent, and equal AI solutions, most company representatives tended to have more questions than answers. Most of all, more education and common understanding of AI are needed to enhance people's opportunities to choose and use ethically sustainable products and services without risks.

Companies' needs for support
The third research question explores what kind of support is needed to develop ethically sustainable services and products. All the participants agreed that they would need more knowledge and understanding of AI to create ethically sustainable practices. According to the companies, knowledge about the following issues is needed:

- risks of collecting, processing, sharing, and storing data,
- algorithms and AI-based methods and systems,
- human-machine interaction,
- the General Data Protection Regulation (GDPR) and other legislation,
- end users, and
- responsibilities and data ownership.
Insufficient instruction on AI ethics and a lack of technical ability or up-to-date technical information are common in workplaces, obstructing product development processes and competition in international markets. Consequently, there was consensus that more support is needed in developing and implementing ethically sustainable AI-based services and products. First, participants suggested that more multi-professional support from researchers would be beneficial. Second, understandable juridical support was considered essential, especially at the global level. Third, more support, financial resources, and knowledge from society and the economy are needed. Finally, public support was called for to understand what kinds of communication and information are needed from companies.
The interviewees expressed clearly that more collaboration is needed, starting with their own experts, as Participant C explained: "In our company, there is a constant lack of information about the specialised, working knowledge of our manufacturing employees. They have the best understanding about the potential everyday challenges that can occur during the product development processes. Therefore, we should know our employees as well as their working environments thoroughly, to understand what kind of ethical problems might arise. The situation is the same concerning those who use our services and products as well."
More collaboration with researchers, the public sector, policymakers, and consumers would also be useful. Schools, including teachers and parents, healthcare workers, and universities are essential partners. Most of all, global collaboration to understand economic, juridical, cultural, and educational differences is indispensable for ensuring better competition in the global market, as Participant D summarised: "I think that now is the time when we [technology experts] should wake up and start to collaborate with experts from different backgrounds to keep up with global development. We must also bear in mind that our multi-professional solutions should be globally compatible and competitive as well."

Discussion

Companies' AI-related ethical challenges
Companies' AI-related challenges mostly concern regulations, human learning, ML, and society. According to the participants, there are too many guidelines, which are also often regarded as useless and difficult to understand and implement. These challenges have also been identified previously (e.g. Hagendorff, 2020). Many parties provide guidelines, but the guidelines differ significantly (e.g. Jobin et al., 2019). Therefore, many companies have their own regulations. This, together with cultural differences, further complicates global interaction, which would be extremely important for the safe and ethical use of AI around the world. Hence, more multinational collaboration, also at the policy level, is needed to reach an ethical consensus. Ethical guidelines should also have more context-specific terminology, since "AI" includes so many technologies (Hagendorff, 2020).
Many challenges relate to human learning and ML. According to the companies, people do not necessarily have equal access to AI-enabled digital tools for education. This creates great inequalities and reduces opportunities to learn, for example, adequate future skills. UNESCO (2019) shares the global concern in the following way: "We need serious efforts to prevent the development in which AI will exacerbate digital divides and deepen existing income and learning inequalities, as marginalised and disadvantaged groups are more likely to be excluded from AI-powered education".
Furthermore, companies have faced many challenges in product development, and they have many questions, such as how to ensure the safety and trustworthiness of algorithms, how to recognise and avoid bias, and how to distribute work, responsibility, and decision making between humans and machines. Similar issues can also be found in the ethical principles (e.g. Jobin et al., 2019). It can be concluded that companies' ethical challenges are well known, but it cannot be deduced whether they have been identified at a sufficient level to be resolved.
At the societal level, companies are concerned about negative consumer attitudes, distrust, and unrealistic expectations about AI. According to the companies, the reputation of large multinationals may be the cause of these social challenges. Cope and Kalantzis (2016) criticise and are concerned about student privacy, the profiling of learners, and the consequences of using big data. Their arguments are based on multiple warnings about ethical dilemmas that typically emerge in big data when AI applications are used (e.g. Mislevy et al., 2012; Pea et al., 2014; Siemens & Baker, 2013; West, 2012). It can be said that there are two sides to societal challenges in the context of ethical AI: businesses, which should strive for more ethically sustainable solutions to gain more reliability, and consumers, who should be better informed about AI and related safety issues. This requires more transparency, sufficient education, and cooperation between different actors in society.

Companies' solutions to AI-related ethical challenges
According to the companies, ethical AI-related products should be safe, explicable, equal, and preventive, in the sense that no ethical issues arise. These solutions adhere to the ethical principles of AI (e.g. Jobin et al., 2019). Firstly, ethical problems should be prevented before they occur. In order to effectively identify ethical risks, education, internal consultation, and cooperation between companies are needed. Ensuring anonymity was seen as one of the most important factors in product safety. Safety and common understandability would further increase if there were sufficient information and discussion about AI and the products containing it. However, explicability does not automatically make AI-based products more ethical (Morley et al., 2020). Companies recognised their role as disseminators of information and hoped that educational institutions and others in society would also meet this challenge. Common discussion would also help other people in society learn more about AI and understand its consequences. Yu and colleagues (2018) underline that ethical decision making should involve different parts of society. However, it is still unclear how the responsibility would be shared between the various parties, how much information is sufficient, and how equal education of citizens with different learning paths would be implemented in practice.

Companies' needs for support
Companies need a wide range of support and collaboration to develop ethically sustainable products for education. More information is needed, for example, on possible risks, legislation, research, and best practices. How to implement ethical principles in practice is a known challenge (Morley et al., 2020). The study found that some of the information would be available within one's own company, but the lack of communication between units hinders collaboration within companies. Smaller companies in particular also need financial support for product development. As the problem affects society as a whole, there is a need for actors or tutors who take responsibility for training and supporting companies and bringing multi-professional skills together. It can be assumed that research and new innovations for that kind of collaboration and education are very much needed.

Limitations and further studies
There are a few limitations to this study. First, a larger number of participants would have provided more diversity. However, the interviewees reflected on ethical questions broadly, not only from their own company's perspective. Second, the results are based on interviewees' opinions, which are not necessarily in line with their actions.

Table 2. Companies' conceptions about ethical AI-related challenges.

Table 3. Companies' conceptions about ethically sustainable solutions for AI-related challenges.