Undergraduate students’ preference between online test and paper-based test in Sub-Saharan Africa

Abstract Computer-based testing has been administered in e-learning environments as part of ICT integration in education. Recently, online testing has been gaining attention in both regular and distance education institutions, and students' preference for or perception of online tests versus paper-based tests is crucial to the successful adoption or implementation of either test mode by an educational institution. For this reason, this study examined undergraduate students' computer usage experience as a prerequisite for completing online tests, and their perception of and preference for online tests as opposed to paper-based tests. It also examined how students' perception of online tests differed across sex, program major, and computer usage experience. The findings contribute to knowledge of the factors to consider, especially by faculty and instructional designers, for the successful adoption of online testing in undergraduate programmes. An online questionnaire was used to collect data from 213 undergraduate students at a university in Ghana, and the data were analyzed using the median, standard deviation, Mann-Whitney U test, Kruskal-Wallis test, and Spearman correlation. The results indicated that the students had a high level of experience in computer usage and a positive perception of and preference for online tests, and that there were no significant differences in perception of online tests between female and male students, or among students of different majors. However, there was a positive correlation between the students' computer use experience and their perception of online tests. It was recommended that universities seeking to implement online assessment ensure their students are equipped with adequate skills and experience in computer usage.


Introduction
Traditionally, paper-based testing has been the predominant method of assessment in educational institutions worldwide. However, with the rapid advancement of technology, online testing platforms have emerged as a viable alternative (Yilmaz, 2021). Online testing (or assessment) refers to the administration of assessment tasks through web-based platforms, enabling students to complete tests and exams electronically. Online test platforms include Google quiz, Kahoot!, Award Force, Assessment Generator, ClassMarker, Edmodo, Exam Time, Flubaroo, ProProfs, Schoology, Quizizz, TestMoz, TestGorilla, Qualified, Learning Pod, the built-in assessment tools of Learning Management Systems, etc. The shift towards online assessment has kindled debates among educators, educational administrators, and researchers regarding the benefits and drawbacks of this digital approach compared to traditional paper-based testing.
At the setting of this study, as at many other universities across sub-Saharan Africa and the world over, the online test modality was being adopted as a substitute for, or complement to, paper-based tests, especially during and after the COVID-19 pandemic. It was noted that many students eagerly accomplished online assessment tasks within the allotted time frames, whereas others either did not complete the tasks at all or did so late. These occurrences distorted the assessment processes. As postulated in the Technology Acceptance Model (Davis, 1989) and the Unified Theory of Acceptance and Use of Technology (Venkatesh et al., 2003), the adoption and use of technology is largely influenced by the attitudes or perceptions of prospective users. It therefore became important to understand students' attitudes toward test modalities and the factors that either facilitate or hinder adoption of those modalities.
Besides learner attitudes, other factors are reportedly worthy of consideration in the successful adoption of online tests. These include learners' proficiency in computer usage and their sex. According to Khairuddin et al. (2022), female students were more stressed by online assessment than their male counterparts. The aim of this research article was, therefore, to explore students' attitudes in terms of their preferences between online tests and paper-based tests, and to analyze how demographic factors might influence their choices. In order to achieve the aim of the study, the following specific objectives guided the research: (1) To examine the level of computer usage experience of the undergraduate students in the target university.
(2) To examine how the target students perceived online assessment, in comparison to paper-based tests.
(3) To determine how the students' preference for test modality differed according to their sex, program major, and experiences in computer/smartphone use.
By understanding the students' perspectives and preferences, educational institutions can better tailor their assessment methods to accommodate the needs and expectations of the modern learner (Hekmatshoar-Tabari & Rahimy, 2021). Moreover, this investigation will contribute to the existing body of knowledge on assessment practices, providing insights into the efficacy and implications of online assessment in comparison to its traditional counterpart. The findings revealed in this study will be useful for faculty and educational administrators in policy formulation and implementation concerning the adoption of online assessment in their institutions.

Literature review
Numerous studies have been conducted on various aspects of online and paper-based assessment methods. While some studies focused on comparing learners' academic performance in online or computer-based tests as opposed to paper-based tests (Jaap et al., 2021; Paul & Jefferson, 2019; Yu & Iwashita, 2021), others examined students' attitudes and perceptions toward the same (Khairuddin et al., 2022; Marín García et al., 2021). For instance, Jaap et al. compared the mean scores of students who took an online test with those of their counterparts who took the same test in the traditional paper-based mode.
Their findings indicated a non-significant difference between the mean scores of the two groups.
Similarly, Yu and Iwashita reported comparable scores between students who took a computer-based test and those who took a paper-based test. In contrast, Domínguez-Figaredo et al. (2022) reported an increase in students' academic performance when a university in Spain adopted online assessment for its 28 bachelor's degree programs during the COVID-19 pandemic.
With regard to learner attitudes and preferences between computer-based and paper-based assessment, various studies have similarly reported mixed findings. Whereas some students express a positive attitude toward, and preference for, online assessment (Domínguez-Figaredo et al., 2022; Khoshsima et al., 2019; Marín García et al., 2021), others favour paper-based assessment (Boevé et al., 2015; Cole, 2015; Jaap et al., 2021).
Students' positive attitudes toward, and preference for, online assessment over paper-based assessment are informed by a multitude of factors, including the perceived advantages of online assessment. Some students appreciate the flexibility and convenience offered by online testing, which allows them to take exams from any location with an internet connection (Karaoğlan-Yilmaz et al., 2020; Khoshsima & Hashemi Toroujeni, 2017). Online testing is also perceived to provide automatic results and immediate feedback, promoting a faster learning cycle (Al-Qdah & Ababneh, 2017). Furthermore, some students appreciate the reduced paper usage and environmental impact associated with online testing, among other benefits. However, despite these advantages, which motivate many students to prefer online tests to paper-based tests, some concerns have been raised about online testing. As reported by Karaoğlan-Yilmaz et al. (2020), technical issues such as internet connectivity problems and system glitches can negatively affect students' testing experiences. Other concerns raised by students include anxiety and insufficient time to complete tasks in online exams. Moreover, some students express concerns about the potential for cheating and dishonesty in online testing environments. It is therefore important to implement test security measures and strategies to maintain academic integrity in online assessments.
Furthermore, other studies have investigated the influence of learners' demographic factors on their scores in computer-based tests. For instance, McClelland and Cuevas (2020) analyzed the influence of gender and computer familiarity on students' scores in computer-based tests, and concluded that there was no statistically significant relationship between these factors and the scores. Their findings were reportedly consistent with several earlier studies (Leedy & Ormrod, 2010; Bennett et al., 2007; Poggio et al., 2005; as cited in McClelland & Cuevas, 2020).
Although several studies have been conducted on computer-based and online assessment methods, most of them focused on comparing students' scores in computer-based and paper-based tests, while others enquired into learner attitudes and preferences toward computer-based tests. However, there is a void in the literature regarding analyses of the relationships between learners' demographic factors, such as sex, program major (specialization), and computer/smartphone use experience, and their perceptions of online assessment. Against this background, the current study sought to fill this gap in the literature by seeking answers to the following research questions and testing the accompanying hypotheses:

Research Questions
(1) What is the level of computer usage experience of UPSA undergraduate students?
(2) How do undergraduate students of UPSA perceive online assessment, in comparison to paper-based test?
(3) How does preference for testing mode among undergraduate students of UPSA differ according to sex, program major (specialization), and computer/smartphone use experience?
The following null hypotheses were formulated in order to answer research question 3.

Null hypotheses
H01: There is no significant difference in perception of online assessment between male and female undergraduate students.
H02: There are no significant differences in the perceptions of online assessment among undergraduate students pursuing different program specialties.
H03: There is no correlation between undergraduate students' computer use experience and their perception of online assessment.
The findings of this study will contribute to existing knowledge on undergraduate students' preferences between online assessment and paper-based assessment, and the relationships between demographic characteristics of undergraduate students and their preferences for mode of assessment. Such a contribution to the literature will be of immense benefit to faculty and administrators, especially in higher education institutions, in the successful implementation or adoption of online assessment as a complement or substitute for traditional paper-based assessment.

Materials and methods
In this study, a descriptive research design was employed to obtain information about the characteristics of a sample that could be generalized to a larger population (Leedy & Ormrod, 2010). This design was appropriate because the study sought answers to questions about the characteristics of the target population through analyses of relationships among variables of the population. The target population was undergraduate students of the University of Professional Studies, Accra (UPSA), located in the capital city of Ghana in West Africa.
The sample for the study consisted of 213 students pursuing diploma and bachelor's degree programs at the aforementioned university who volunteered to participate by filling in an online questionnaire made available to all students of the university. Thus, the sampling technique was convenience sampling. The sample size of 213 was considered adequate because a power analysis for a two-tailed Wilcoxon-Mann-Whitney test (two groups) run in the G*Power software indicated that the minimum sample size to yield a statistical power of at least .90 with a medium effect size (d = .5) is 180. The detailed demographic characteristics of the sample are presented in Table 1 in the results section.
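The G*Power figure can be approximated from first principles: solve the standard normal-approximation sample-size formula for a two-sample t-test, then inflate the total by the asymptotic relative efficiency of the Mann-Whitney test relative to the t-test (3/π under normally distributed data). The following Python sketch is an illustrative approximation of that logic, not a reproduction of G*Power's exact algorithm, so it lands slightly below the reported 180:

```python
import math
from statistics import NormalDist

def mw_min_n(d=0.5, alpha=0.05, power=0.90):
    """Approximate total N for a two-tailed Mann-Whitney U test.

    Uses the normal-approximation formula for the two-sample t-test,
    then inflates by the asymptotic relative efficiency (ARE = 3/pi)
    of the U test under normality.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = .05
    z_b = NormalDist().inv_cdf(power)           # ~1.28 for power = .90
    n_per_group_t = 2 * ((z_a + z_b) / d) ** 2  # ~84 per group
    return math.ceil(2 * n_per_group_t * (math.pi / 3))

print(mw_min_n())  # 177, in the neighbourhood of the 180 G*Power reports
```

The small gap from 180 comes from G*Power's use of exact t-distribution quantiles instead of the normal approximation used here.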
An online questionnaire was used to collect data from the participants. The items of the questionnaire were adapted from previous studies (Bulent et al., 2016; Sorensen, 2013). Bulent, Murat, and Selahattin reported a Cronbach's alpha value of 0.873, which assured the reliability of their scale items. A reliability test run on the items used in this study yielded an overall Cronbach's alpha of 0.843, with values for individual items ranging from 0.824 to 0.870. Thus, all the Cronbach's alpha values were within the range of generally acceptable values for the reliability of an instrument (Jugessur, 2022; Taber, 2018).
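Cronbach's alpha, the reliability coefficient used above, can be computed directly from a matrix of respondents' item scores: alpha = k/(k−1) × (1 − Σ item variances / variance of total scores). A minimal sketch in Python; the score matrix below is a made-up illustration, not the study's data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for rows of respondents' item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of row totals)
    """
    k = len(items[0])                          # number of items
    cols = list(zip(*items))                   # per-item score columns
    item_var = sum(pvariance(c) for c in cols)
    total_var = pvariance([sum(row) for row in items])
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 5-point Likert responses: 4 respondents x 3 items
scores = [[5, 4, 5], [4, 4, 4], [2, 3, 2], [3, 3, 3]]
print(round(cronbach_alpha(scores), 3))  # 0.931 -- highly consistent items
```

Values near or above 0.8, as reported in the study, indicate that the items measure the underlying construct consistently.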
The questionnaire for this study consisted of six sections: introduction, demographic characteristics, access and familiarity with computers, computer usage experience, preference for mode of examination, and perception of online assessment/test.
The introduction section explained the purpose of the study, the target respondents, informed consent, and the anonymity of responses. The demographic characteristics section solicited information such as sex, age, level of study, program specialty/major, and year group. The access and familiarity with computers section asked whether students owned or had access to smartphones, tablets, and laptop/desktop computers, and how often they actually used those they owned or had access to. The computer usage experience section provided seven statements of tasks performed with computers and smartphones, for students to rate their respective skill levels on a five-point Likert scale from 1 (I definitely can't), 2 (I probably can't), 3 (neutral), 4 (I probably can) to 5 (I definitely can). The preference for mode of examination section asked students to indicate whether, and how many times, they had taken an online test in the current academic year, to state a preference between online and paper-based tests, and to recommend one for the university to use always. Finally, the section on perception of online assessment provided five-point Likert-type items for each student to rate his/her attitude toward the online assessment mode from 1 (strongly disagree) to 5 (strongly agree).
The data were collected over a period of seven days. Prior to that, the questionnaire was created using Google Forms, and its hyperlink was posted by the researcher and other lecturers in the WhatsApp groups of various classes, with an invitation message for the students to voluntarily follow the link and fill in the questionnaire. In effect, all the students had an equal opportunity to participate in the study. However, not all students willingly did; the 213 students who volunteered to respond to the questionnaire constituted the sample. After the seventh day, the responses were downloaded as a CSV file and imported into SPSS for coding and analyses. The data were analyzed using both descriptive and inferential statistics. The demographic data were analyzed using frequencies and percentages, whereas the remaining data were analyzed using the median, Mann-Whitney U test, Kruskal-Wallis test, and Spearman correlation to test the hypotheses at the significance level of .05 (confidence interval = 95%). Prior to hypothesis testing, tests of normality were run on the scale data, which indicated a non-normal distribution. Hence, non-parametric tests were chosen for testing the hypotheses.
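The decision path described here, check normality first and fall back to rank-based tests when it fails, can be sketched with SciPy. The groups below are synthetic stand-ins for Likert-scale perception scores, not the study's data, and the Shapiro-Wilk test is one common choice of normality check (the paper does not name which test was used):

```python
import random
from scipy import stats

random.seed(1)
# Synthetic Likert-scale scores skewed toward agreement (NOT real data)
female = [random.choice([2, 3, 4, 4, 5, 5]) for _ in range(65)]
male = [random.choice([2, 3, 4, 4, 5, 5]) for _ in range(148)]

# Step 1: normality check; discrete Likert data with heavy ties
# almost always fail it, which motivates non-parametric tests
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (female, male))

# Step 2: choose the test family accordingly
if normal:
    result = stats.ttest_ind(female, male)
else:
    result = stats.mannwhitneyu(female, male, alternative="two-sided")
print("normal:", normal, "p =", round(result.pvalue, 3))
```

The same branching logic extends to the three-group case, where the fallback is the Kruskal-Wallis test instead of a one-way ANOVA.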

Results
The demographic characteristics of the respondents, as self-reported in the questionnaire responses, are summarized in Table 1 and Figure 1.
The results presented in Table 1 and Figure 1 show that the respondents consisted of 65 females and 148 males, aged between 17 and 37 years inclusive (mode = 19, M = 20.9, SD = 2.9). Thus, the majority of the respondents were in early adulthood. The majority of the students (82.6%) were bachelor's degree students, whereas the remaining were diploma students. Moreover, with regard to programme majors, the respondents were pursuing Information Technology (62.9%), Business studies (30.5%), and Communication studies (6.6%).
To learn about the respondents' levels of computer usage experience, the researcher sought an answer to research question 1: What is the level of computer usage experience of UPSA undergraduate students? The respondents were presented with seven statements about basic computing tasks that could be useful in completing an online assessment, and asked to rate the level at which they felt they could perform those tasks. The respondents rated their levels of experience on a five-point Likert scale ranging from 1 (I definitely can't), 2 (I probably can't), 3 (I don't know), 4 (I probably can), to 5 (I definitely can). The estimates of central tendency of the respondents' ratings on each of the computing tasks are presented in Table 2.
It can be observed from Table 2 that the value of the measure of central tendency (mean) for each of the statements of computing experience is greater than 3.00 (the middle value on the 5-point scale). These values indicate that the students generally perceived themselves as competent in using computers to perform basic tasks that could be relevant to accessing and completing an online assessment (Mean = 4.33, N = 213, SD = 1.01), as presented in Table 3.
The study further solicited responses to ascertain the nature of UPSA undergraduate students' perception of online assessment. Research question 2 was formulated as: How do undergraduate students of UPSA perceive online assessment, in comparison to paper-based tests?
To answer research question 2, ten statements about perceptions of online assessment were presented to the respondents, who were requested to rate their level of agreement on a five-point Likert scale from 1 (strongly disagree) to 5 (strongly agree). However, two items stated in negative form were reverse coded as 1 (strongly agree) to 5 (strongly disagree). The items and the respective measures of central tendency and standard deviations calculated from the responses are presented in Table 4.
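Reverse coding on a 1-to-5 scale maps a score s to 6 − s, so that a high score always indicates a positive view regardless of how the item was worded. A one-line sketch:

```python
def reverse_code(score, scale_max=5):
    """Reverse-code a Likert item: on a 1..scale_max scale, 5 <-> 1, 4 <-> 2."""
    return scale_max + 1 - score

# A hypothetical negatively worded item answered "strongly agree" (5)
# becomes 1 after reversal, keeping all items oriented the same way.
print([reverse_code(s) for s in [1, 2, 3, 4, 5]])  # [5, 4, 3, 2, 1]
```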
As noted from Table 4, each of the statements about perceptions of the online assessment mode had a mean score greater than 3.0, the middle score on the scale of 1 to 5. Hence, it could be ascertained that the students had positive perceptions of online assessment as solicited by each of the statements (Mean = 4.14, SD = 1.13, N = 213), as shown in Table 5.
In order to answer research question 3, three hypotheses were tested by running the respective non-parametric statistics in SPSS. The outputs are presented in Tables 6 to 10.
Tables 6 and 7 present the SPSS output for the Mann-Whitney U test. The independent variable was sex (female and male), and the dependent variable was students' perception of the online test mode. These results were used to test the first null hypothesis, H01: There is no significant difference in the perception of online assessment between male and female undergraduate students. The results indicated that there was no statistically significant difference between the UPSA undergraduate female students' perception of online assessment (mean rank = 106.69, N = 65) and that of their male counterparts (mean rank = 107.14, N = 148), U = 4790.00, z = −0.05, p = .958. Thus, the null hypothesis was not rejected.
The second null hypothesis tested was H02: There are no significant differences in the perceptions of online assessment among undergraduate students pursuing different program specialties. Tables 8 and 9 present the SPSS output of the Kruskal-Wallis test. The dependent variable was the perception of online assessment, and the independent variable was program specialty, which comprised three areas, viz. Business studies, Communication studies, and Information Technology studies. The results showed that there were no significant differences in the perception of the online test mode among UPSA undergraduate students pursuing different program specialties (H = 1.77, df = 2, p = .41). The null hypothesis was supported, and thus not rejected. Finally, the study also tested the null hypothesis H03: There is no correlation between undergraduate students' computer use experience and their perception of online assessment. A Spearman's rank correlation coefficient was calculated using SPSS. The variables were the students' computer use experience and their perception of online assessment. The SPSS output of the correlation estimates is presented in Table 10. The result showed that the students' computer use experience correlated positively with their perception of the online test mode, Spearman's r(213) = .286, p < .001. The null hypothesis was not supported, and thus was rejected.
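The three tests run in SPSS have direct equivalents in SciPy; the sketch below shows the same analysis pipeline on synthetic placeholder data (NOT the study's data, so the p-values will differ from those reported). The experience variable is deliberately constructed to correlate with perception, mirroring the direction of the H03 result:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic placeholders: 213 perception scores, sex, major, and an
# experience score built to correlate positively with perception
perception = rng.integers(1, 6, size=213).astype(float)
sex = rng.choice(["F", "M"], size=213)
major = rng.choice(["Business", "Communication", "IT"], size=213)
experience = perception + rng.normal(0, 1.5, size=213)

# H01: Mann-Whitney U test of perception across sex
u = stats.mannwhitneyu(perception[sex == "F"], perception[sex == "M"],
                       alternative="two-sided")

# H02: Kruskal-Wallis test of perception across the three majors
h = stats.kruskal(*(perception[major == m]
                    for m in ("Business", "Communication", "IT")))

# H03: Spearman correlation between experience and perception
rho, p = stats.spearmanr(experience, perception)

print(f"U p={u.pvalue:.3f}, H p={h.pvalue:.3f}, Spearman rho={rho:.3f}")
```

Because sex and major are assigned at random here, the U and H tests should be non-significant, while the built-in link between experience and perception yields a clearly positive rho, the same overall pattern the study reports.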

Discussion
The study sought to enquire about the level of computer usage experience of undergraduate students at UPSA, how they perceived online tests in comparison to paper-based tests, and how the students' preference for an assessment modality might be influenced by, or related to, demographic characteristics such as sex, program major (specialization), and experience in using computers. Three research questions were stated, and three null hypotheses were formulated and tested.
Regarding the level of computer usage experience of the undergraduate students of UPSA, the results revealed that they possessed a high level of experience and skill in using computers and smartphones to perform basic tasks that could be relevant to accessing and completing an online test. Some of the computing tasks the students reported having performed included typing sentences, editing text, using a touch screen, looking up meanings of words on a computer, participating in online discussions, and searching for information on the internet. The computing experience levels reported by the students imply that they were equipped with the requisite computing skills to support the administration of online tests at the university, a policy that had already been started. This finding is an interesting revelation about students in Ghana, a sub-Saharan African country: the proliferation of technology has enhanced access to, and utilization of, computing devices by students in both distance and full-time educational institutions. This corroborates Yeboah et al. (2020), who found that university students in Ghana possess and use a variety of computing devices such as smartphones, tablets, and laptop or desktop computers, and thus are ready to integrate them into their academic life.
With regard to the students' perceptions of online assessment and their preference between online and paper-based tests, the results of this study revealed that the students were more positive toward online tests. In some of the statements, the students explicitly indicated that they preferred online exams to traditional paper-based exams, and recommended that the university replace the latter with the former. It could be deduced that the students expressed positive attitudes toward online assessment probably because they were well experienced in using computing devices and had access to them, which made accessing and completing online tests convenient. The students' positive attitude and preference for online tests could also be explained by facilitating conditions, in terms of their possession of, or access to, internet connectivity, computers, and smartphones, as well as favourable effort expectancy owing to their computing experience (Yeboah & Nyagorme, 2022). The positive attitude and preference of UPSA students revealed in this study corroborate the attitudes of students in other parts of the world toward computer-based tests as reported in earlier studies (Domínguez-Figaredo et al., 2022; Khoshsima et al., 2019; Marín García et al., 2021). In those studies, some of the reasons for the students' attitudes were the advantages they perceived in computer-based tests. It is likely that the positive attitudes of UPSA students were also influenced by their perception of the merits of online assessment, in addition to the aforementioned explanations.
Another finding of the study was that no statistically significant difference was found between the perceptions of female and male students toward online tests. This suggests that, at the university, both female and male students possess the characteristics and experiences that make them perceive online tests as convenient. Also, both groups possess or have access to resources such as smartphones, laptop computers, and internet connectivity, all of which facilitate the conditions for completing online assessments. This finding is consistent with McClelland and Cuevas (2020), who analyzed the impact of gender on computer-based test scores and found no relationship. It is interesting that the gender parity in the perception of online tests revealed in the current study, situated in Ghana, a sub-Saharan African context, corroborates the situation in the USA (McClelland & Cuevas, 2020). Thus, gender may no longer be a decisive factor in the perception and adoption of online assessment at the university level. As a result, any effort by a university to motivate the adoption of online assessment should be directed to all students in general, irrespective of sex.
The current study further revealed that, among undergraduate students pursuing programs of different specializations, there were no statistically significant differences in perception of online tests. Thus, it could be deduced that the factors that influence the students to hold a positive view of, and preference for, online tests are common to all the students, irrespective of program specialization. This suggests, probably, that the students possessed the prerequisite computing skills and experience before entering the university, so that, irrespective of their programs of study, they were comfortable with online tests. This finding of a non-significant difference in perception of online tests among students pursuing different majors contradicts the report of Khairuddin et al. (2022) in Malaysia, who found significant differences in perceptions of the implementation of a computer-based language test among students of different faculty clusters (Science & Technology, Social Science & Humanities, and Business & Management). This could have happened because the factors influencing perceptions were not common among students across the faculty clusters, unlike the case among the respondents of the current study.
Finally, this study found a statistically significant positive correlation between the students' computer use experience and their perception of online assessment. This implies that the more experience undergraduate students gain in using computers, the more positive their attitude toward online assessment, and hence the more likely they are to prefer the online assessment modality to paper-based assessment. This finding is a novel contribution to the body of knowledge on online assessment; no previous study had assessed the possible relationship between computer usage experience and perception of online assessment. This study has therefore brought this finding to light to serve as a guiding principle for universities and educational institutions that intend to adopt online assessment either to complement or to substitute for the paper-based assessment modality.

Conclusion
This study made significant findings about perceptions of online assessment at the undergraduate level, especially in sub-Saharan Africa. Owing to the advancement and ubiquity of communication technologies, many educational institutions are intensifying ICT integration in instructional practices, including online assessment. The following are some of the implications of the findings for policy, practice, and further research.

Implications for policy and practice
As revealed in this study, the students expressed a positive perception of, and preference for, online assessment over paper-based assessment because they possessed skills and experience in computer usage relevant to completing assessments online. Such computing experience correlated positively with perception of, and preference for, online assessment. Therefore, educational institutions that seek to implement online assessment modalities should provide the necessary computing skills and experience to their students, and ensure the availability of requisite resources to the learners. When this is done, learners will have a positive attitude that supports the successful implementation and adoption of online assessment. Also, because perception of online assessment did not differ between females and males, whenever educational institutions deem it necessary to provide support for students on the implementation of online assessment modalities, such support should be provided to all students who may need it, regardless of sex.

Limitations and recommendations for future research
Finally, there are some limitations of this study that further research can address. For instance, the target population of this study was drawn from only one university. As a result, the findings must be generalized with caution, as the prevailing conditions at this university may not necessarily be the same as at other universities. Hence, further research may expand the population to multiple universities to enhance external validity. Also, the sampling technique in this study was convenience sampling, which resulted in unequal numbers across sub-categories such as sex, cohort, and program major. Further research can overcome this limitation by using multi-stage stratified sampling to obtain comparable numbers across sub-categories.