Combating fake news, disinformation, and misinformation: Experimental evidence for media literacy education

Abstract This study investigated the effect of media and information literacy (MIL) on the ability to identify fake news, disinformation, and misinformation, and on sharing intentions. An experimental approach was used to study a control group and an experimental group made up of a total of 187 respondents. Comparative analysis of the two groups revealed that although more respondents in the experimental group were able to identify the inauthenticity of information presented to them, some respondents in the control group were also able to do the same, even though they did not receive MIL training. Conversely, some respondents in the experimental group, even though they were trained in MIL, could not determine the inauthenticity of information, possibly because the one-off training given to them did not allow them to assimilate all the information in one sitting. Nonetheless, the results of the bivariate correlation computation showed that MIL-trained respondents were more likely to determine the authenticity or otherwise of information and less likely to share inaccurate stories. This means that as MIL increases, the sharing of fake news decreases. This is further evidence that MIL enables information consumers to make informed judgments about the quality of information. It is recommended that MIL be incorporated into mainstream educational modules and consistently revised to reflect the demands of the times. MIL programs must also consider how to effectively reach those without formal education. Actors within the information, communications, and media ecology must contribute their quota to making information consumers more discerning through the right MIL sensitisation.


PUBLIC INTEREST STATEMENT
This study applies the experimental approach to ascertain how media and information literacy (MIL) enables news and information consumers to identify fake news, disinformation, and misinformation, and how that influences sharing intentions. The motivation for using this approach was the dearth of literature that adopts experiments to explore the subject matter. The study therefore contributes uniquely to the body of knowledge on the topic. The experiments conducted, and the consequent bivariate correlation tests carried out, have shown that factchecking is an important skill that information and media consumers require to fight misinformation, disinformation, and even malinformation. Skills such as the ability to do a reverse image search, detect technical shortfalls in news stories, use factchecking websites, and spot clone news websites are essential for combating disinformation and misinformation. The study also makes the case for both digital and mainstream media stakeholders to develop their own skills in detecting fake news, misinformation, and disinformation, and to arm their users and audiences with the same.

Introduction
Fake news is "news articles that are intentionally and verifiably false, and could mislead readers" (Allcott & Gentzkow, 2017, p. 213). It is also sometimes referred to as information pollution (Wardle & Derakshan, 2017), media manipulation (Warwick & Lewis, 2017) or information warfare (Khaldarova & Pantti, 2016). Disinformation has also been defined in UNESCO's handbook for journalism education and training, authored by Ireton and Posetti, as deliberate (often orchestrated) attempts to confuse or manipulate people through delivering dishonest information to them, while misinformation is misleading information created or disseminated without manipulative or malicious intent (Ireton & Posetti, 2018, p. 1). There is also malinformation, which is the intentional dissemination of confidential information, typically by changing context, date, or time, for personal or corporate rather than public interest (Staats, 2021). The two main differences between misinformation and disinformation are: 1) fake news mimics the form of mainstream news (Zimdars & McLeod, 2020), while disinformation does not; and 2) while disinformation is purposefully crafted to mislead, those engaged in misinformation do not deliberately do so, because they are not aware that the information being shared is fabricated or false. Fake news, disinformation, and malinformation yield misinformation. Disinformation (and by extension fake news) and malinformation are dangerous because of how they are deliberately orchestrated and resourced by malicious actors and how they are reinforced by digital technologies and platforms, including social networks (Ireton & Posetti, 2018). Table 1 provides a breakdown of respondents of the study. Table 2 compares the two groups studied in terms of accuracy of responses in relation to the question that sought their views on whether news items and information presented to them were accurate or not.
Group one refers to the control group (those who did not receive training), while group two refers to the experimental group (those who received training). Table 3 provides a breakdown of the inter-group comparison of likelihood to share inaccurate information. Table 4 illustrates the results of a bivariate correlation conducted to determine whether there was a relationship between the group respondents belonged to, the accuracy of the story or information, and respondents' intention to share stories on social media or with their acquaintances.
Although not new, fake news, misinformation, and disinformation have received enormous attention in contemporary times, especially during and after the 2016 American presidential elections. Del Vicario et al. (2016) affirm that fake news has been listed by the World Economic Forum (WEF) as one of the main threats to society. Fake news and disinformation take a variety of forms, including satirical news sites, fabricated news items, manipulated photography, propaganda, false press releases (Tandoc et al., 2018) as well as sensationalist tabloid content (Marwick, 2018). Falsehood, misleading information, and rumors associated with the term have been around "as long as humans have lived in groups where power matters" (Burkhardt, 2017, p. 5). This is evidenced by the works of the Italian author and satirist Pietro Aretino, which were used to blackmail patrons and former friends in the 16th century, through the broadcast of The War of the Worlds in 1938, to present-day digital misinformation on social media. In current times, digital communications technologies have allowed for new ways to widely produce, distribute, and consume fake news and disinformation, making it harder to differentiate which information is authentic and which is false (Kalsnes, 2018). Fake news stories are shared more often on social media than articles from edited news media (Silverman & Alexander, 2016), where there is some form of gatekeeping. Caplan et al. (2018) corroborate this assertion and submit that social media platforms like Facebook and Twitter have been heavily cited as facilitating the spread of fake news. This is an indication of how fast fake news spreads and how its thinly veiled resultant damage could potentially spread widely. Actors ranging from corporations to government agencies and individuals have been identified as creators of fake news and disinformation. The motivations for disseminating fake news and disinformation are also wide-ranging.
Samanth (2017) and Dewey (2016) cite monetary motivation for spreading fake news and disinformation, as news articles that spread widely on social media can attract substantial advertising proceeds when users click on the links to the original site. Such producers of fake news do not care about their reputation but are only interested in "the short-run profits from attracting clicks in an initial period" (Allcott & Gentzkow, 2017, p. 219). Townsend (2016) also believes ideological motivation accounts for the dissemination of fake news in the political arena. Gu et al. (2017) are of the opinion that fake news stories disseminated are "designed to influence or manipulate users' opinions on a certain topic towards certain objectives" (p. 5). There is also the opinion that a news article may not necessarily be fake but may be merely labeled as such for self-interest reasons. Unfortunately, and rather alarmingly, whether in the know or not, audience members are regularly exposed to inaccurate content, including fake news and disinformation. This could make consumers of such content confused and doubtful about how useful their accurate knowledge is (Rapp & Salovich, 2018). This claim was corroborated by a study, which found that fake news left most Americans (64%) confused about basic facts (Barthel et al., 2016).

Problem statement
Concerns have been raised about fake news, disinformation, and misinformation. One such concern is that false information could pollute the public sphere and damage democracy. Pogue (2017) considers fake news one of the greatest threats to democracy, journalism, and freedom of expression.
A New York Times report has it that political leaders could invoke fake news as justification for beating back media scrutiny (Erlanger, 2017). By suggesting that news cannot be trusted and by labelling it fake, politicians deliberately undermine trust in news outlets and journalism (Kalsnes, 2018) to deprive the news media of their power to ensure accountability from the ruling class.
Fake news and disinformation can be impactful on media consumers as well. Warwick and Lewis argue that "media manipulation may contribute to decreased trust of mainstream media, increased misinformation, and further radicalisation" (Warwick & Lewis, 2017, p. 1). Even when there is evidence of misinformation and it is debunked, fake news holds the potential to continue to shape people's attitudes (Thorson, 2016). Rapp and Salovich (2018) assert that media consumers often rely on the erroneous information they consume to complete subsequent tasks. This, without a doubt, is not in the best interest of news consumers, since the consequences of making decisions and settling on opinions based on misinformation could be dire.
Others have hypothesised that fake news can exert a significant degree of influence on political campaigns and discussions (Allcott & Gentzkow, 2017; Groshek & Koc-Michalska, 2017; Gu et al., 2017; Jacobson et al., 2016). In settings where many people are not able to sort out what is fake news and what is not, the thinly veiled implications of fake news will be enormous. It is, therefore, important for news content consumers to have the lenses to identify which news stories are fake and which are accurate. This is because if content consumers trust every news source and cannot prudently evaluate the accuracy of the content and source, they are likely to be misinformed, and they will also not be adequately equipped to make effective and correct decisions (Rapp & Salovich, 2018). Being media and information literate is, therefore, key in ensuring that media content consumers do not fall prey to fake content. Rapp and Salovich (2018, p. 232) have posited that: "One useful resource for dealing with the inaccurate information encountered in daily life is prior knowledge. Existing understandings and prior experiences, when appropriately accessed, benefit critical evaluation. By consulting valid understandings, people can interrogate incoming discourse to filter out misinformation and disinformation."
Prior knowledge, as alluded to in the above quote, is digital literacy or media and information literacy (MIL). When information consumers and users are digitally literate or are given media and information literacy training, they are expected to have and exhibit the requisite knowledge, skills, and attitudes that position them to know how to obtain authentic and credible information; how to critically evaluate and verify the authenticity of information or news; when to use information; and how to ethically use it. In this study, the researcher used the experimental approach to explore whether MIL could indeed give information users and consumers the skills, knowledge, and attitudes to evaluate and ascertain the authenticity of information or news articles, in addition to attitudes regarding the sharing of information and news. Two groups were studied. The experimental group received prior knowledge, in the form of media and information literacy (MIL) training. The researcher wanted to determine whether the experimental group was better able to identify fake news and disinformation, and how that translated into subsequent sharing intentions, compared to the group that did not have this knowledge beforehand. It is in light of this that the following hypotheses grounded the study: Ha: Most members of the experimental group (who received MIL training) will be able to recognise fake news and disinformation compared to those in the control group (who did not receive MIL training).
H0: Most members of the experimental group (who received MIL training) will not be able to recognise fake news and disinformation compared to those in the control group (who did not receive MIL training).

Research questions
The research questions in the study were:
• Are participants able to spot fake news and/or disinformation?
• How do participants recognise fake news and/or disinformation?
• To what extent are participants likely to share fake news and/or disinformation?

Literature review
Fake news, misinformation, and disinformation permeate every fibre of society. Adriani (2019) argues that although public opinion is used to thinking about fake news as a tool used to create dirty propaganda in the political sphere, fake news (and disinformation) are also instruments used by dishonest companies to strike at their competitors' reputations. Adriani suggests that corporations buying into the erroneous assumption that everything seen or written is factual are vulnerable to the dishonest (or even criminal) use of new technologies and Computer-Generated Imagery to disseminate disinformation and fake news. Fake news targeted at reputable businesses "could lower the image and reputation of targeted firms, affecting their earnings and stock price" (Gu et al., 2017, p. 54). Chakrabarti et al. (2018) carried out a comparative study in Kenya and Nigeria to ascertain what accounts for the spread of fake news without verification. They found that in both countries, people are often aware of the negative consequences of sharing fake news. The findings also pointed to no malicious intent behind sharing fake news. It was also found that instead of verifying the authenticity of news by consulting online fact checkers or looking up legitimate news outlets, respondents did so through their own social networks. Chakrabarti et al. further submitted that respondents sometimes overestimated their ability to spot fake news, as they merely used mental shortcuts to help them decipher legitimate news. This may not be adequate for determining the authenticity of information and news and can result in misjudgment. Nielsen and Graves (2017) analysed data gathered from eight (8) focus group discussions and a survey of online news consumers to assess audiences' perceptions of fake news. Data was collected during the first half of 2017 from Spain, the United States, the United Kingdom, and Finland.
The findings were that respondents considered poor journalism, propaganda, and other kinds of advertising as examples of fake news. Most respondents disclosed that they turned to consistently reliable news media sources to verify the information they received.
Understanding the socio-cultural contexts within which fake news, disinformation, and misinformation spread could prove potent in informing the type of responses to employ to curb them. With this commitment, Wasserman et al. (2019) studied six African countries to determine motivations for sharing fake news and concluded that the most common reasons for sharing misinformation were: to raise awareness out of a (misplaced) sense of civic duty and to make others aware of misinformation. Regarding satirical content, people shared it "for fun." Humorous content became a refuge for media users who wanted to keep away from gloomy news and to create conviviality and community. With this socio-cultural context in mind, responses could be geared towards making information and news consumers aware that their rather "innocent" sharing of information could have dire consequences, as some information consumers are susceptible to all kinds of information. Digital literacy will be equally important, so that information and news consumers are armed with the skills to determine the authenticity of information received from people who may not know that the information they are sharing is false. Pennycook et al. (2020) also conducted two experiments to ascertain whether nudging people to think about the accuracy of a news item can improve their choices about what is shared on social media. Across the two studies, it was found that participants who were not prompted to think about accuracy before sharing were far worse at discerning between true and false content when deciding what they would share on social media. However, those who were prompted or reminded about accuracy nearly tripled their level of true discernment and improved their subsequent sharing intentions. This shows that when people are media and information literate, they are likely to be conscious of the accuracy of information before sharing.
There has been a lot of research attention on fake news, disinformation, and misinformation, especially in the political arena and during the Covid-19 pandemic. However, there seems to be a dearth of literature on respondents' ability to detect fake news, disinformation, and/or misinformation. The experimental approach has also not been widely used to explore fake news, disinformation, and misinformation. This informed the decision to explore the subject matter by adopting the experimental design in this study.

Media and information literacy
Media and Information Literacy (MIL) has been conceptualised as consisting of "the knowledge, the attitudes, and the sum of the skills needed to know when and what information is needed; where and how to obtain that information; how to evaluate it critically [to make educated judgments about information] and organise it once it is found; and how to use it in an ethical way" (International Federation of Library Associations and Institutions (IFLA), 2011, para. 2). Additionally, UNESCO posits that MIL provides answers to: how to contribute both online and offline content wisely; the rights of online and offline information consumers and creators; and "how to engage with media and ICTs to promote equality, intercultural and interreligious dialogue, peace, freedom of expression, and access to information" (UNESCO, n.d., para. 2).
Media studies and information literacy education scholarship keeps evolving and expanding as media and information platforms for accessing news and information expand and evolve. Initial scholarship in the field placed emphasis on the importance of analysing text and visual communication (Lipschultz & Hilt, 2005). Other early research in the field gave attention to the ability to read, write, and speak (Ruben, 1997), while later research studied the relationship between reading, writing, and speaking and the development of visual and computer literacy skills (Potter, 2001). In the information age, the definition of text has been expanded beyond traditional print media (Silverblatt, 1995). Media and information literacy is thus more than the ability to read text, since visual images dominate the media and information landscape (Lipschultz & Hilt, 2005). It also goes beyond the ability to use emergent information and communications technologies. Regarding media and information literacy as an emergent field of study, Lipschultz and Hilt (2005) have argued that it must "address the complex interaction between literacy and new media forms" (p. 1), which in present times include digital media platforms.
According to the UNESCO (n.d.), the quality of information that consumers and users engage with basically defines their beliefs, perceptions, and attitudes. By its definition and conception, MIL provides the framework and skills to actively form opinions and to make meaning of information gained through media exposure and other contexts such as from libraries, individuals, and other information providers, including information found on the Internet. It provides the requisite skills to critically assess and make sense of visual images and videos for their authenticity, for information users to form opinions or act on information. Information consumers, who are media and information literate, become active users of information, meaning they "are aware of the messages and are consciously interacting with them" (Potter, 2001, p. 4). Media and information literacy therefore becomes an imperative to deal with doubts and other effects of fake news, disinformation, and misinformation.
Fake news and misinformation do not ordinarily appear to be fake; they often seem authentic. These are news stories meant to hoax readers, to deliberately misinform or deceive them, or to push an agenda. Fake news stories can be hosted on websites that bear names similar to those of reputable news organisations (parody accounts/websites). A typical example is the website martinlutherking.org, created by Stormfront, a white supremacist group, with the intent to mislead readers and discredit the work and personal life of the Civil Rights activist (Thomson, 2011). With the proliferation of digital technologies and the democratisation of media ownership and content creation, fake news, misinformation, and disinformation undeniably permeate every fibre of society. It is therefore expected that media and information literacy will provide the needed skills for information users to sift between which information is false and which is credible and reliable. This study therefore tests the impact of MIL on the ability to spot inauthentic news and information.

Methods
The experimental research approach was settled on for this study. This method enabled the researcher to study the control group and the experimental group separately, making room for a comparative analysis afterwards. Those in the control group were not exposed to training on how to identify fake news and disinformation, while the experimental group was. Training included: how to determine the authenticity of media outlets' websites; looking out for features such as bylines, shortfalls in evidence, and bad grammar and punctuation; how to determine whether an article cites credible sources; how to use fact checkers; and how to do a reverse image search, among others. Training was one-off and lasted two hours on average.

Participants
Third year undergraduate students at a public university in Ghana took part in the study. The choice of university students was based on the suggestion of Choney (2010), Quan-Haase and Young (2010), and Pempek et al. (2009) that university students are heavy users of new technologies and digital platforms, and are thereby exposed to fake news. Through the experimental approach, the researcher ascertained the effect of media and information literacy on the detection of fake news and consequent sharing intentions.
Sampling involved recruiting a total of 187 respondents from a third year Public Relations class through the voluntary response sampling technique: 97 respondents for the control group and 90 for the experimental group. This represented more than half of the total population of a class of 270 students and is therefore considered representative of the class. It also met Krejcie and Morgan's (1970) formula for sample size selection, which recommends 159 respondents for a population of 270. The table below provides a breakdown of respondents to the study. Those in group one (the control group) represented a little more than half (51.9%) of the total number of respondents, and those in group two, the experimental group, represented a little less than half (48.1%).
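The Krejcie and Morgan (1970) recommendation cited above follows a closed-form formula. The short Python sketch below shows how the figure of 159 for a population of 270 is obtained; the function name and defaults are illustrative, not taken from the original study.

```python
# Krejcie & Morgan (1970) sample-size formula, sketched to show why a
# population of 270 yields a recommended sample of 159.
import math

def krejcie_morgan(N, chi2=3.841, P=0.5, d=0.05):
    """Recommended sample size for a population of N.

    chi2: chi-square value for 1 df at the 95% confidence level (3.841)
    P:    assumed population proportion (0.5 maximises the sample size)
    d:    margin of error (0.05)
    """
    return math.ceil((chi2 * N * P * (1 - P)) / (d ** 2 * (N - 1) + chi2 * P * (1 - P)))

print(krejcie_morgan(270))  # → 159
```

With 187 respondents recruited, the study comfortably exceeds this recommended minimum.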

Data collection instrument and processes
Participants were informed about the experiment prior to the study, during one of their online classes. Convenient dates were fixed for two online experiments, which required respondents to give responses to an online questionnaire. The link to the questionnaire was sent to them at the start of the online meeting. They were required to fill out the online questionnaire upon being presented with stimuli in the form of social media posts and news items, all of which were fake or inaccurate. In the first study, which involved the control group, respondents were not given any tips on how to identify fake news and disinformation. They were asked about the accuracy of several news items and pieces of information presented to them and provided answers to questions like: Is this story/information accurate? Would you share this on social media or with acquaintances? What makes it fake or not? In both the first and second studies, respondents were asked to provide reasons for their choices. With the actual accuracy of the stories in mind, the researcher analysed responses from each group and made intergroup comparisons.
The second study followed the same pattern. However, in this instance, respondents were first given training on how to spot or detect fake news (treatment intervention). The stimuli for the experiment in the form of news items and social media posts were provided by Dubawa, a factchecking organisation with an office in Ghana.

Data analysis
Data analysis began a day after the end of data collection. The researcher used the Statistical Package for the Social Sciences (SPSS) software, version 26, to analyse the data. Descriptive statistical analysis was used to compute the percentages and frequencies of the responses provided. Chi-square and correlation coefficient tests were also used to measure the strength of relationships.

Ethical considerations
None of the respondents were coerced into taking part in the study; all took part on a voluntary basis. Respondents were assured of anonymity and confidentiality in the presentation of results, and this was complied with. Data collected was also secured from third-party access.

Results
The researcher sought to ascertain whether respondents could identify or make out fake news items and disinformation presented to them. To determine the basis of their choices and to ascertain whether it was based on their media and information literacy aptitudes, respondents were also asked to provide reasons for their choices. The reasons provided were then thematically analysed. The table below compares the two groups studied in terms of accuracy of responses in relation to the question that sought their views on whether news items and information presented to them were accurate or not. Group one refers to the control group (those who did not receive training), while group two refers to the experimental group (those who received training).
Of those who had not been offered training on fake news detection (group one), more than 45% (46.4%) said the fake news stories/information were accurate when they were in fact inaccurate. On the other hand, a little more than half (53.6%) were able to determine that the stories/information were indeed fake. Of those who were given media and information literacy training, more than 70% (73.3%) could accurately identify that the stories/information were fake, while a little more than a quarter could not. This means that the majority of the group trained in fake news and misinformation detection was able to identify that the information presented to them was fake, unlike the group that did not receive any training. The chi-square results also showed that there was a relationship between the group respondents belonged to and their ability to detect the accuracy of the stories presented to them (p = 0.005 < 0.05). In other words, the chi-square results indicated a relationship between receiving MIL training and the ability to determine that information was inaccurate, and, conversely, between not receiving training and failing to do so.
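The reported chi-square test of independence can be illustrated with a short sketch. The cell counts below are approximate reconstructions from the reported percentages (46.4% of the 97 control-group respondents; 73.3% of the 90 experimental-group respondents), not the study's raw data, and `scipy.stats.chi2_contingency` is used here in place of SPSS.

```python
# Sketch: chi-square test of independence for group vs. detection accuracy.
# Cell counts are APPROXIMATIONS reconstructed from reported percentages.
from scipy.stats import chi2_contingency

#        judged fake (correct), judged accurate (incorrect)
table = [[52, 45],   # group one: control (n = 97)
         [66, 24]]   # group two: experimental (n = 90)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")  # p ≈ 0.005, consistent with the reported result
```

Note that `correction=False` disables the Yates continuity correction, which SPSS likewise reports separately for 2x2 tables.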
Results pointed to the fact that almost half of the respondents in group one, who did not receive training (49.5%), said they would share the news items or information on social media or with their acquaintances even though the information was inaccurate. Slightly more than half (50.5%) of this group indicated they would not share the fake news items and information on social media. The case was different for those in group two (the experimental group). More than 70% said they would not share the information presented to them, which happened to be inaccurate, while a quarter said they would. Generally, it was observed that those with training were less likely to share a fake news item (one out of four respondents would share one), compared to those who had not been trained (one out of two). The chi-square test result (p = 0.001 < 0.05) also showed that there was a relationship between the group a respondent belonged to and whether the respondent would share a fake news item on social media or with acquaintances. Respondents in group one were more likely to share inaccurate information with their acquaintances than those in group two. A bivariate correlation was computed to determine whether there was a relationship between the group respondents belonged to, the accuracy of the story or information, and respondents' intention to share stories on social media or with their acquaintances, in line with the hypothesis formulated. Results are in the table below.
The results of the bivariate correlation computation showed that there was a significant positive correlation between the accuracy of the story or information and sharing intention; the correlation is significant at the 0.01 level. There was also a significant negative correlation between the group respondents belonged to and whether they would share a fake news item on social media. This means that as training increases, the sharing of fake news decreases. It also means that respondents who were trained in fake news and disinformation detection were less likely to share the inaccurate stories they were shown during the study. There was, in addition, a significant negative correlation between the group respondents belonged to and the accuracy of their answers regarding the information presented to them. This also means that the MIL training received had a positive impact on respondents' ability to determine that the stories and information presented to them were inaccurate.
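The bivariate correlation between group membership and sharing intention can likewise be sketched as a point-biserial (Pearson) correlation over 0/1 codes. The vectors below are hypothetical reconstructions from the reported sharing percentages (about 49.5% of the 97 control-group respondents and roughly a quarter of the 90 experimental-group respondents would share), so the exact coefficient is illustrative; only the negative sign mirrors the reported result.

```python
# Sketch: point-biserial correlation between group membership and sharing
# intention. The 0/1 vectors are RECONSTRUCTED from reported percentages,
# not the study's raw data.
from scipy.stats import pearsonr

group = [0] * 97 + [1] * 90          # 0 = control, 1 = experimental (MIL-trained)
share = ([1] * 48 + [0] * 49         # control: ~49.5% would share
         + [1] * 23 + [0] * 67)      # experimental: ~25% would share

r, p = pearsonr(group, share)
print(f"r = {r:.2f}, p = {p:.4f}")   # negative r: training is associated with less sharing
```

The negative coefficient formalises the claim that as MIL training increases, the sharing of fake news decreases.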

How participants recognised fake news and disinformation
The reasons participants gave for their choices could be categorised into two: technical reasons and non-technical reasons. Technical reasons in this context means reasons based on knowledge about fake news detection, or specialised knowledge about MIL. Non-technical reasons are reasons based on personal deductions or conclusions, or reasons not based on specialised knowledge about MIL. Group two respondents, who received MIL training prior to data collection, tended to give technical reasons, while group one respondents gave non-technical reasons.

Technical reasons
Some of the sub-themes that emerged from the reasons considered technical are respondents' doubts about source credibility, inadequate evidence to back claims, and grammatical errors and ambiguity of headlines. Some respondents also went the extra mile to verify the authenticity of stories and information presented to them by painstakingly looking out for technical features and doing a quick online search. Details are presented below.

Doubts about source credibility
Regarding source credibility, respondents looked out for the familiarity of news or information sources or whether they were of good repute. These were some of the responses provided in that regard: Because it is not from a credible source. I cannot trust the source. Probably, an online blogger. I can't just trust an unrecognised source for such information. It is not backed by any authentic/credible source. There is no author and also the source of the information. With the R. Kelly news, what makes the story fake is that the blogger isn't even a journalist, but paid trolls.

Lack of substantial evidence
Some respondents concluded that stories and information were fake because of the lack of evidence, because not enough evidence was provided for the claims made, or because the evidence (in the form of photos or videos) did not corroborate the stories or information. Others thought the headlines and content of stories did not match, while still others thought the stories were not detailed enough, giving them the impression that such stories were not credible. This is what some respondents wrote:
Because the headline does not correspond with the images shown. It is not detailed enough. Not even a single placard suggests so. It is fake because there is not any connection of a source. The story and headline not corresponding. The story does not tally with the headline. R. Kelly is still in jail. And besides, there is nothing in the video that backs the claim.

Technical deficiency
With the technical know-how they gained through the MIL training, some respondents were able to determine URLs were not genuine in certain instances. Others were able to notice that the photo in a particular story presented to them had been manipulated. Respondents may have quickly done a reverse image search since the survey was conducted online.
The URL is not genuine. The original photo had George Floyd's kid being knelt before by the President Biden after George Floyd was murdered.

Research-based determination
Some respondents may have done a quick search to determine whether stories and information presented to them were authentic or not. The checks may have helped them determine if the stories had been published on other news websites. Some clicked on links embedded within articles and noticed that they did not lead to credible sources. This convinced them that stories and information were likely to be fake.
The links in is tracing the sources and does not lead to articles outside of the sites. Because I searched on the web, and it confirms it was false. Ghana is not the second highest producer of cassava, so it is fake. It can't be found anywhere else because there's not any other news outlet really reporting on the issue.

Grammatical errors and ambiguity of headlines
Some of the respondents thought the ambiguity of headlines and grammatical errors gave the stories away as being inauthentic. It was one of the clues the experimental group was told to look out for in news stories.
There was error in the headline. The grammar and the use of "kinda". There are grammatical errors.
The headline is fake as it does not have all the qualities of a headline. It is ambiguous and vague. The technicality of the writing and no stamp, it would have been on an official letter head.

Non-technical reasons
Non-technical reasons were reasons that were not grounded in the critical thinking and factchecking prowess that MIL training may provide. These included personal convictions, mental shortcuts, prior or no prior knowledge about stories, and personal trust in media outlets or sources. The sub-themes under non-technical reasons are discussed below.

Personal convictions
Some respondents' personal convictions led them to the conclusion that the stories were not authentic.
He is too young to become a minister. It's not possible for a 19-year-old boy to be appointed by a whole country to be a minister.

Mental shortcuts
A mental shortcut, in the context of this study, involves consciously or unconsciously rationalising decisions rather than critically assessing the information presented to determine whether there is something about it that could make it true or otherwise. Some of the mental shortcut reasons provided by respondents were: It does not seem authentic. Individuals walking in a desert may not necessarily be immigrants from Burkina Faso. It didn't look true.

Prior or no prior knowledge about stories
Some of the respondents seemed to have had prior knowledge about stories and information presented to them and so recognised them as inauthentic. Others thought that because they had not heard or read about such stories in the media, they were likely to be fake. Their reasons were, therefore, not based on a critical evaluation of the information.
It's fake because I personally have not heard anything about Ghana producing other stuff from cassava. Because I have not heard it on any TV or radio station announcing it. This is a video of tribal/Fulani attacks in Nigeria, and not Burkina Faso . . . it is fake.

Personal trust in media outlet
For some respondents, the trust they had in news sources was enough for them to believe that news or information coming from them was authentic.
Because Business and Financial Times shares stories on Ghanaian related financial news, and not others. Because it was reported by Sky News. It is not fake because Joy FM is a reputable station, they gather facts before presenting them for the public.

Discussion
This study set out to investigate the effect of media and information literacy on the ability to identify fake news, disinformation, or misinformation, and on sharing intentions, through an experimental approach. Previous studies have shown that many people would want to be able to differentiate what is authentic from what is fake, especially while online (Newman et al., 2020). The absence of digital literacy or MIL eases the spread of misinformation and disinformation (Chakrabarti et al., 2018). This study provides evidence that media and information literacy can help stem the spread of misinformation, disinformation, and fake news, aside from making information users and consumers capable of evaluating and determining the authenticity of news and information.
Some of the respondents in the control group (group one) were able to determine that information presented to them was inauthentic, even though they did not receive MIL training provided by the researcher. These respondents may have applied non-technical logic in determining the authenticity of news stories and the information presented to them. Conversely, some respondents in the experimental group (group two), even though they were trained in MIL, could still not determine if information presented to them was inaccurate.
The case of the control group advances the argument that nudging people to think about the accuracy of information could make them conscious of looking out for markers of authenticity of information before sharing (Pennycook et al., 2020). A person who has not received digital literacy or MIL training may still be able to apply some logical reasoning to determine the authenticity or otherwise of information if they are prompted to do so.
Findings from the experimental group also provide firm evidence that a one-off training session may not be adequate to make people media and information literate, even though it can have a significant impact on respondents' ability to determine the authenticity or inauthenticity of information and news. Since the survey was conducted immediately after the training, it could be that some respondents could not assimilate all the information in one sitting. Getting regular training or reminders about how to spot fake news and disinformation is therefore imperative. Constant reminders can come in the form of news media organisations continuously raising awareness about fake news and disinformation and providing MIL tips. This can be likened to refresher training, which aids long-term memory and enhances productivity (Lynch, 2020). In this context, it will make information and media consumers become attuned to ways of spotting fake news and disinformation. It will also make them conscious of the dire consequences of spreading fake news and disinformation, thereby making them confirm the authenticity of information before consumption, usage, or sharing.
When information is found to be inauthentic, it is unlikely to be shared. Data from this study show that the ability to notice that information was inaccurate informed the subsequent choice of not sharing such information with acquaintances. When one is media and information literate, apart from knowing how to spot inauthentic information, one becomes consciously, and sometimes unconsciously, careful about deciphering the authenticity of information before passing it on. It must be stated that, in line with Pennycook et al.'s (2020) study, this study has shown that participants who were prompted by means of MIL training were far better at discerning between genuine and erroneous information when evaluating it for accuracy or authenticity, which also influenced their consequent sharing intentions. The chi-square and correlation tests confirmed this. Hence, it can be taken that the experimental group obtained a certain level of factchecking skill, which made them apply technical know-how to factcheck. They also developed a consciousness to factcheck, allowing them to deliberately ascertain the authenticity of news and information presented to them.
The correlation test was significant at the 0.01 level (p < 0.05). The alternative hypothesis (Ha: most members of the experimental group, who received MIL training, will be able to recognise fake news and disinformation compared to those in the control group, who did not receive MIL training) was thus confirmed by the study data, while the null hypothesis (H0: most members of the experimental group will not be able to recognise fake news and disinformation compared to those in the control group) was rejected. It can thus be concluded that there is a significant positive relationship between MIL training and the ability to recognise fake news and disinformation. By assessing the accuracy of the information, respondents were actually reading beyond the headline. It is in reading beyond the headline that fake news and disinformation can be detected. That is what every information consumer must do: read beyond the headline.
It cannot be confirmed that respondents of this study overestimated their ability to spot fake news by using mental shortcuts to help them decipher the accuracy of news, as Chakrabarti et al. (2018) submitted. It can, however, be admitted that using mental shortcuts is not entirely reliable or adequate for determining the accuracy of news and information and can result in misjudgment. It is essential for all information consumers to be technically inclined and to determine the accuracy of information based on media literacy skills or knowledge. That is why providers of media and information literacy training must consider how best those without formal education can benefit from their training, in ways that will make them appreciate the context and importance of being media literate while giving them the needed skills. This requires knowing what kinds of content those without formal education consume and providing them with the requisite knowledge and skills in ways they can relate to.
It is also useful not to rely on a single approach but to apply a number of approaches to determine the authenticity of information, since no single technique is, on its own, a reliable way of detecting fake news and disinformation: the techniques or approaches are to be used together (Smith, n.d.). Farmakis (2019) advises treating news and information consumption the same way we treat shopping for products online or for consumable products like food, because the mind is shaped by what is read just as the body is shaped by what is eaten. He proposes the following checks on a news story or any information to avoid misinformation or fake news: consider whether the article is published in a reputable media outlet and whether it has a byline; consider whether the article cites credible sources; look out for any shortfall in evidence, bad grammar, or poor punctuation; check whether the story or information is shared predominantly on social media without appearing in reputable news publications; and reverse search the image, among others. These were some of the justifications given by most of the respondents in the experimental group, and their ability to use these methods proved useful in determining whether information was accurate.
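Checks of this kind can be imagined as a simple scoring heuristic that combines several signals rather than relying on any one alone. This is a toy sketch: the field names, weights, and threshold are assumptions made for illustration, not part of the study or of any real factchecking tool.

```python
# Toy credibility-scoring heuristic combining several Farmakis-style checks.
# All fields, weights, and the threshold are illustrative assumptions.

CHECKS = {
    "reputable_outlet":    2,  # published by a known, reputable outlet
    "has_byline":          1,  # article names its author
    "cites_sources":       2,  # claims attributed to credible sources
    "clean_grammar":       1,  # no obvious grammar/punctuation errors
    "beyond_social_media": 1,  # also appears outside social media
    "image_verified":      2,  # reverse image search matches the story
}

def credibility_score(article):
    """Sum the weights of the checks an article passes."""
    return sum(w for check, w in CHECKS.items() if article.get(check))

article = {
    "reputable_outlet": False,
    "has_byline": False,
    "cites_sources": False,
    "clean_grammar": True,
    "beyond_social_media": False,
    "image_verified": False,
}

score = credibility_score(article)
print("likely unreliable" if score < 5 else "passes basic checks")
```

The point of the sketch is the design choice the study argues for: several weak signals are aggregated, so failing one check (e.g., grammar) does not by itself decide the verdict.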
For most participants who were not able to identify fake news/information presented to them, the main hindrance observed was that some of them did not read beyond the headlines and hastily trusted sources of news, leading them to make erroneous judgements about the authenticity or otherwise of news and information. This is noteworthy for those involved in MIL training.
Personal trust in the credibility of a media outlet is not wholly reliable or adequate for evaluating the authenticity of information. Some credible news outlets have fallen victim to circulating inaccurate information, and reputable publishers have likewise unwittingly carried falsehoods that individual writers sought to propagate. That is why technical methods for spotting fake news are more useful for determining the accuracy of information.
Fake news, misinformation, and disinformation can significantly affect media consumers as well. They are typically orchestrated to sow seeds of mistrust (Wardle & Derakhshan, 2017). When fake news and/or disinformation is targeted at the media or becomes part of conventional news consumption, it can lead to decreased trust in mainstream media, increased misinformation, and further radicalisation (Warwick & Lewis, 2017). It is therefore imperative that all actors within the information, communications, and media ecology become concerned about fake news, disinformation, and misinformation and contribute their quota to making people more discerning through the right MIL training. In a time of disease outbreak, for instance, the quality of information received is vital in combating the disease, as receiving authentic and reliable information becomes a matter of life and death. Disinformation and misinformation can also cause panic and lead people to rely on harmful or ineffective remedies (Pennycook et al., 2020). This illustrates how fake news, disinformation, and misinformation can affect humanity.
It may be difficult to curb the production and spread of fake news, disinformation, and misinformation, but those who work within the media and information space can help make information consumers media and information literate. This will provide them with the skills and capacity to make good choices about information use, consumption, and distribution.
While much of the spread of fake news and disinformation is blamed on social media platform companies and political actors, some journalists and media organisations have also been found to be sources of false or misleading information (Newman et al., 2020). Hence, media workers themselves must be media and information literate, so they do not become conduits of misinformation. Despite this, social media companies perhaps have a higher obligation to ensure their users are media and information literate. This is because of how their platforms facilitate the production, distribution, and consumption of fake news, disinformation, and misinformation (Kalsnes, 2018).

Conclusions
This study is yet another piece of evidence that MIL generates consciousness about the implications of being channels of misinformation and enables information consumers to make informed judgments about quality information. The study has also shown that even people who did not have the benefit of MIL, when prompted to think about the accuracy of information, tried to apply non-technical reasoning to decipher the authenticity of news and information. This study has also shown that when people are media and information literate, they are likely to become careful about deciphering the authenticity of information before sharing. Arming information consumers with the requisite knowledge to decipher what is fake or not is therefore valuable and imperative.

Recommendations
Principal stakeholders who work within the information, communications, and media ecology must be concerned about fake news, disinformation, and misinformation, and contribute their quota to making information users more discerning through the right Media and Information Literacy (MIL) training. Social media companies must be obligated to make their users aware of fake news and disinformation and give them the necessary skills, tools, and knowledge to spot them. Social media companies can consider periodically asking users to evaluate the accuracy of randomly sampled stories or information and providing them with the right answers afterwards. This could be a subtle but valuable way of conscientising their users about fake news and disinformation. They can also present users with MIL tips on a regular basis.
Fake news is disseminated in mainstream media as well. Thus, it is imperative that mainstream media stakeholders, including journalists and editors, develop their skills in fake news detection, so they do not misinform their audiences when relying on sources, particularly online sources, for news content production. They also need to be involved in MIL education by making their audiences aware of the phenomenon and helping them to develop the skills that will help them determine the authenticity or otherwise of news and information.
MIL must be incorporated into mainstream educational modules at all levels and consistently revised to reflect the demands of the times. Moreover, news and information consumers must take advantage of opportunities to learn to be media and information literate, while media and information literacy programs must factor in how best to train those who have not acquired formal education. One way to reach those without formal education with MIL is to identify key gatekeepers in communities who are relatively knowledgeable in detecting fake news. These gatekeepers will become points of call and reliable sources for authenticating the credibility and accuracy of news and information.
Again, this study has proven that factchecking is an important skill that information and media consumers require to fight misinformation, disinformation and, even, malinformation. Skills such as the ability to do reverse image search, determining technical shortfalls in news stories, using factchecking websites, and being able to detect clone news websites, are mandatory to combat disinformation and misinformation.
Finally, it is recommended that further research into media and information literacy within the digital and mainstream/traditional media domains be conducted. Associated issues like cyberbullying, privacy, and security in the digital sphere must be explored. In particular, knowledge about cybersecurity and safety, experiences with cybertheft and cyberbullying, and what people do to stay safe while online are topical and pertinent at a time when the use of digital technologies has become central to the lifestyles of individuals.

Funding
This work is based on the research supported by the National Research Foundation of South Africa [grant number: 118583].

Disclosure statement
No potential conflict of interest was reported by the author(s).