Assessment competence and practices including digital assessment literacy of postsecondary English language teachers in Oman

Abstract This study investigated the assessment literacy of teachers of English as a Foreign Language (EFL) in postsecondary education in Oman. Using self-assessment questionnaires, it examined teachers' assessment competencies, assessment practices, and digital assessment literacy. Multivariate and correlation analyses tested the effects of gender, age, pre-service and in-service assessment training, teaching experience, and teaching load on assessment competence and practices. Findings showed that teachers believed that they were assessment literate but less so in digital assessment. Pre-service training had the strongest effect on assessment competence and practices. The paper emphasizes the importance of assessment courses in teacher preparation programs. It also recommends integrating digital assessment competencies and practices into teacher preparation courses and in-service language teacher training in assessment.


PUBLIC INTEREST STATEMENT
This paper investigates how English as a Foreign Language teachers perceive language assessment skills and practices. It studies their perceptions of how competent they think they are in language assessment and how often they practice specific assessments in the language classroom. The paper explores how factors including age, teaching experience, pre-service assessment training, and in-service assessment training are related to teachers' assessment competence and practices. The findings indicate that pre-service assessment training is the factor most strongly related to teachers' self-perceived language assessment competence and practices. The paper outlines recommendations emphasizing the need to capitalize on assessment training for pre-service and in-service teachers and to foster cooperation to support teacher assessment skills and practices.
In the present study, assessment literacy (AL) is defined as the knowledge, skills, and practices teachers possess and use to understand, design, administer, make decisions from, and evaluate classroom-based assessment according to the principles and concepts relevant to the fields of language testing and evaluation as well as language teaching and learning, in accordance with the affordances and needs (of both teachers and students) in the language teaching context. This definition includes the EFL context affordances and needs and is broad enough to encompass other foreign language (FL) learning and use contexts.
The literature evidences a lack of measures that assess the AL of teachers in specific fields, such as the teaching of foreign languages (Levi & Inbar-Lourie, 2020). In an EFL context, AL measures should integrate scenarios from the EFL context that teachers are familiar with. They should be related to language teaching and assessment rather than addressing general or abstract issues of AL.
There is an evident lack of research that looks into the needs, perceptions, and skills of language teachers when it comes to using computers for language testing in the classroom (Brown, 2013; Noijons, 2013). In addition, there is a need to examine how prepared teachers are to manage assessments in a digital environment.
Considering the definition and gaps in research outlined above, the current study attempts to measure EFL teachers' assessment competence and practices including digital assessment competence using self-assessment questionnaires.

Variables that affect assessment competence and practices
Several variables that potentially affect AL have been investigated. Of interest in the present study are the factors of gender, age, years of teaching experience, teaching load, and whether teachers had received pre-service assessment training and in-service assessment training. Research findings about gender effects to date are inconsistent. Alkharusi (2011a) reported higher levels of educational assessment knowledge for male in-service teachers than female teachers. In studies with teaching candidates, Alkharusi (2009, 2011b) found that male students received higher grades and perceived a course in educational assessment to be more relevant to their needs than female students. On the other hand, advantages of female students were reported with respect to their overall level of educational assessment knowledge and with respect to their self-perceived competence in recognizing ethics in assessment and in their use of classroom tests for different purposes, such as grouping and motivating students, compared to their male counterparts (Alkharusi et al., 2012).
There is some evidence for the effect of pre-service educational assessment courses on teachers' assessment knowledge. DeLuca and Klinger (2010) and Alkharusi et al. (2011) found that teacher candidates who attended a pre-service course on educational assessment rated themselves more confident in assessment knowledge and skills than peers who did not take an assessment course in college. Teachers in secondary education who had in-service training in assessment reported on average higher levels of assessment knowledge than teachers who did not receive any in-service assessment training (Alkharusi et al., 2012).

Assessment literacy in EFL and foreign language education
The relatively few studies into the AL of FL teachers generally confirm the inadequacy of teacher AL reported above. In a study of FL teachers in Europe, Vogt and Tsagari (2014) found that teachers' AL was overall underdeveloped, especially with respect to the ability to design and develop modern language tests. Scarino (2017), in a study of K-12 FL teachers in Australia, reported that teachers experienced difficulties designing assessments of language ability within an intercultural communicative competence approach. Crusan et al. (2016), who investigated writing AL in ESL teachers, reported that years of teaching experience was inversely related to AL: More experienced teachers showed lower levels of AL than less experienced teachers. In a qualitative study on AL, Yan et al. (2018) found that the secondary EFL teachers they investigated had stronger training needs in assessment practice than in assessment theory.
Most studies that examined teacher assessment knowledge used the Teacher Assessment Literacy Questionnaire, developed by Plake and Impara (1992) and based on the 1990 Standards, or modified versions of it, such as the Classroom Assessment Literacy Inventory (Mertler, 2004). As emphasized earlier, there is a lack of instruments that measure the AL of teachers in specific disciplines, such as the teaching of foreign languages (Levi & Inbar-Lourie, 2020). In an EFL context, AL measures should incorporate scenarios from the EFL context that the teachers can identify with. They should be related to the teaching and assessment of English language skills and/or levels rather than ascertaining overly general or abstract issues of AL, which Popham (2011) calls "exotic assessment content" (p. 268), i.e., information that does not apply to the day-to-day needs and experiences of teachers in the language classroom. This resonates with White's (2020) demand that AL development should focus on classroom-based assessment in the primary interest of student learning.

Formative assessment
Also referred to as assessment for learning, formative assessment has gained researchers' and educational systems' attention as empirical research has proven its positive impact on student learning. According to Popham (2009), "Practice in a classroom is formative to the extent that evidence about student achievement is elicited, interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in instruction that are likely to be better, or better founded, than the decisions they would have taken in the absence of the evidence that was elicited" (p. 9). Informed by student learning and self-regulation theories, where students are considered the center of the assessment process, formative assessment has developed to ensure students are decision makers and responsible for their own learning. Therefore, practicing formative assessment necessitates students' involvement through understanding learning intentions and success rubrics. It also requires teachers to provide and communicate feedback to students based on the assessment of their work and to guide them to make decisions about their own learning accordingly (Brookhart, 2011; Fulcher, 2010).
As stated earlier, Brookhart (2011) argued that for a teacher to be assessment literate, AL components should account for formative assessment theories that the 1990 Standards failed to address: the Standards mainly focused on the teacher and neglected student involvement, student learning outcomes, the communication of student learning outcomes, and feedback, which are considered the backbone of formative assessment. Hence, teacher AL measures should reflect formative assessment theories and conceptualizations.

Digital assessment literacy
The assessment literacy of a teacher today requires adaptation to the digital environment and the pedagogical approaches it requires. Digital assessment competencies and practices have become increasingly important with the application of learning management systems (LMSs), computer-based testing, and online assessments in the last decades and during the COVID-19 pandemic. There is not only an evident lack of research that investigates the needs, perceptions, and skills of language teachers when it comes to using computers for language testing in the classroom (Brown, 2013; Noijons, 2013), there is also a general need to investigate how prepared teachers are to manage assessments in a digital environment. Eyal (2012) discussed the role of teachers as assessors in a digital environment, proposed various levels of digital AL, and summarized them as follows: "The level of literacy moves on a continuum, starting from the use of LMSs as part of traditional assessment processes, such as computerized tests; to a higher literacy level that, in addition to traditional processes, includes the implementation of tests, tasks and projects in a digital environment, the performance indicators for which are determined in cooperation with students; through to implementing advanced estimation approaches based on constructivist-social learning and the development of self-targeted learning, where as part of the assessment teachers must also know how and when to delegate the processes of assessment to the students" (p. 45). Some of the abilities and skills that Eyal finds necessary for teachers to be digitally assessment literate were adapted in the current study to examine postsecondary EFL teachers' digital assessment competence and practices in addition to general AL.

Research questions
The present study attempted to ascertain AL, including digital AL, in an EFL teaching context in the Sultanate of Oman. It investigated the following research questions:
(1) How do postsecondary EFL teachers in Oman self-assess their assessment competence and assessment practices?
(2) How do participant characteristics (age, gender, pre-service training, in-service-training, teaching experience, and teaching load) affect teachers' self-reported assessment competence and practices?

Method
To gather data on EFL teachers' self-perceived AL, particularly assessment competencies and practices, this study used an adapted version of the Teacher Self-Perceived Assessment Competence and Practices Survey, an online self-assessment questionnaire developed by Alkharusi (2009). The questionnaire was adapted to the needs of EFL teachers, modified where deemed appropriate, and complemented by items on digital AL. The research project and instruments were approved by the Human Subjects Protection Program of the Office for Research and Discovery at the University of Arizona (Protocol Number: 1609839685).

Participants
The sites of this study were six Colleges of Applied Sciences in Oman. Each of these colleges had a department of English with between 15 and 36 faculty members who taught and/or held a coordination position. The English departments offered courses to foundation-year students at four different levels: Beginners, Elementary, Pre-Intermediate, and Intermediate.
A total of 114 language teaching professionals were asked to participate in the teacher assessment competence and practice surveys. Of these, 98 completed the assessment competence survey and 101 completed the assessment practice survey. All participants provided informed consent to use the survey data for this research. Of the 101 participants, 56 were female, 61 had participated in a pre-service course on assessment before they started working as teachers, and 62 had participated in at least one in-service training on assessment. Most teachers (70%) held an MA, 14% a BA, 7% a Ph.D., and 7% other degrees. Table 1 summarizes the participant characteristics.

Instruments
A self-report questionnaire with three sections was used in this study. The first section elicited background data about participants (age, gender, etc.; see Table 1).

Teacher self-perceived assessment competence
The second section was about teachers' perceptions of confidence in their abilities to perform educational assessment tasks. This part contained 34 items. Nineteen of these items were adopted from the Self-Confidence Scale in Educational Measurement (Alkharusi, 2009), eight items were modified to better address EFL teachers' assessment needs, and 24 items were excluded because they did not apply well to EFL teachers' needs and/or to reduce the time to complete the survey (see Appendix A). Seven new items were added to elicit teacher self-perceived digital assessment literacy. These were concerned with using the LMS "Blackboard" to design language skills tests, computerized course tasks/tests, online tools (discussion boards, wikis, blogs, etc.), reports from SafeAssign (a plagiarism prevention tool) to give students feedback, varying digital assessment tools according to their effectiveness for classroom purposes, providing criteria for online/computerized tests/tasks, and using the LMS's assessment data (on student participation, grades, user activity, etc.). Responses to items were given on a 5-point Likert scale ranging from 1 (not competent) to 5 (very competent).
A teacher's competence score in a specific area was obtained by calculating the mean score of all items in that competence category (see Table 2), and a teacher's overall competence score was obtained by calculating the mean score across all items. Content validity of the items was verified by three experts in educational and language assessment. Wording was examined for appropriateness by six language teachers (all enrolled in post-graduate studies). The questionnaire items were pilot-tested on 20 participants with characteristics similar to the actual participants of the study. Responses to each item were analyzed together with the overall scale to compute Cronbach's alpha, which should be above 0.7 (Pallant, 2007). The internal consistency reliability was 0.79, and Cronbach's alpha coefficient for the added items on digital assessment literacy was 0.84.
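For readers unfamiliar with the statistic, Cronbach's alpha relates the sum of the item variances to the variance of the total scores across respondents. The following pure-Python sketch illustrates the standard formula; it is our own didactic example, not the authors' analysis script, and the variable names are assumptions:

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondent rows, each with k item scores.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(responses[0])
    item_vars = [variance(column) for column in zip(*responses)]
    total_scores = [sum(row) for row in responses]
    return (k / (k - 1)) * (1 - sum(item_vars) / variance(total_scores))

# Example: four respondents answering two perfectly consistent items
print(cronbach_alpha([[1, 1], [2, 2], [3, 3], [4, 4]]))  # -> 1.0
```

Values above 0.7 are conventionally taken as acceptable internal consistency (Pallant, 2007), a threshold that the reported coefficients of 0.79 and 0.84 exceed.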

Teacher self-perceived assessment practices
The third section of the questionnaire (Appendix B) contained 27 items adapted from Alkharusi's (2010) Teachers' Assessment Practices Questionnaire. It was designed to measure participants' frequency of use of various assessment practices. Seven items were added to assess teachers' digital assessment practices. Responses were obtained on a 5-point Likert scale ranging from 1 (never) to 5 (always). The internal consistency reliability was 0.82 overall and 0.80 for the added items on digital assessment literacy.

Statistical analyses
MANOVA was used to examine differences in teachers' competencies and practices in language assessment in relation to gender, age, pre-service courses in assessment, and in-service training in assessment. Univariate analyses were conducted to test whether differences in specific variables, for example gender, yielded statistically significant differences in assessment competence. p < 0.05 was adopted as the significance level for all analyses.
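To make the multivariate test statistic concrete: for a one-way design, Wilks' Lambda is the ratio det(E) / det(E + H), where E is the within-groups SSCP (sums-of-squares-and-cross-products) matrix and E + H equals the total SSCP matrix. The pure-Python sketch below is a didactic illustration on made-up data, not the authors' analysis pipeline (such analyses are normally run in a statistics package):

```python
def det(m):
    """Determinant by Laplace expansion; fine for the small SSCP matrices here."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def sscp(rows, means):
    """SSCP matrix of the p-dimensional rows around the given mean vector."""
    p = len(means)
    s = [[0.0] * p for _ in range(p)]
    for r in rows:
        d = [r[i] - means[i] for i in range(p)]
        for i in range(p):
            for j in range(p):
                s[i][j] += d[i] * d[j]
    return s

def wilks_lambda(groups):
    """groups: one list of p-dimensional score vectors per factor level."""
    pooled = [r for g in groups for r in g]
    p = len(pooled[0])
    grand = [sum(r[i] for r in pooled) / len(pooled) for i in range(p)]
    total = sscp(pooled, grand)             # E + H
    within = [[0.0] * p for _ in range(p)]  # E, accumulated over groups
    for g in groups:
        gm = [sum(r[i] for r in g) / len(g) for i in range(p)]
        s = sscp(g, gm)
        for i in range(p):
            for j in range(p):
                within[i][j] += s[i][j]
    return det(within) / det(total)  # 1 = no group effect; near 0 = strong effect
```

Identical groups yield Lambda = 1 (no multivariate effect), while well-separated group means drive Lambda toward 0; significance is then judged through an F approximation, as in the F and p values reported in the results.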
Pearson product-moment correlation coefficients were computed between age, teaching experience, and teaching load, on the one hand, and teacher assessment competencies and assessment practice frequencies, on the other, to probe the strength of association between these variables.
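For reference, the Pearson product-moment coefficient divides the co-deviation of two variables by the product of their root sums of squared deviations. A minimal sketch with hypothetical data (our own illustration; the example values are not the study's data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # co-deviation
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

# Hypothetical example: years of experience vs. a self-rated competence score
experience = [2, 5, 9, 14, 20]
competence = [3.1, 3.4, 3.9, 4.0, 4.4]
print(round(pearson_r(experience, competence), 2))
```

Coefficients near 0 indicate a weak association and values approaching 1 (or -1) a strong positive (or negative) one, which is how the associations in Tables 3 and 5 are interpreted.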

EFL teachers' assessment competence
Overall (across all items), teachers perceived themselves as "competent" in language assessment (M = 3.92). Table 2 illustrates how competent teachers rated themselves with respect to the specific assessment competence categories.
The two categories that participants felt least competent in and that were below the mean overall competence score were developing digital language assessment (3.21) and constructing and administering language assessment (3.90). Participants felt more competent in recognizing ethics of assessment (4.32), developing valid grading procedures (4.08), developing performance assessment (4.01), and communicating assessment results with others (3.92).
The MANOVA, used to examine differences in teachers' competencies in language assessment in relation to participant characteristics, revealed a statistically significant multivariate effect for pre-service assessment training on teacher language assessment competence: F(3, 94) = 3.059, Wilks' Lambda = .837, p = .009. There were no statistically significant multivariate effects for gender, age, and in-service assessment training on teacher self-perceived language assessment competence.
Notes: N = 98 participants. 1 = incompetent, 2 = a little competent, 3 = somewhat competent, 4 = competent, 5 = very competent.
Table 3 presents the results of the correlation analyses probing the strength of association of years of teaching experience and teaching load with the self-rated assessment competencies.
There was a significant moderate association between teaching experience and competence 1 (constructing and administering assessment) as well as weak associations between teaching experience and assessment competencies 2, 3, and 4.There was only one weak association between teaching load and assessment competence 2.

EFL teachers' assessment practices
Overall (across all assessment practices), participants reported using assessment practices "sometimes" (M = 3.43). Table 4 displays the frequencies of practice categories that the teachers self-reported to use in their classrooms.
The practices that the participants reported using the least (and below average) were non-achievement grading (M = 2.73), digital assessment (M = 2.75), and student-involved assessment (M = 3.23). Teachers reported practicing communicating assessment results (M = 4.18), using alternative assessments (M = 3.80), developing assessment standards and criteria (M = 3.70), and using traditional assessments (M = 3.60) more frequently.
Table 5 presents the results of the correlation analyses probing the strength of association of years of teaching experience and teaching load with the frequency of teacher assessment practices.
There were no significant correlations between teaching experience and the frequency of assessment practices, and none between teaching load and assessment practices except a small negative correlation between teaching load and the practice of communicating assessment results.

Assessment competence
Overall, the EFL teachers working in post-secondary education investigated in this study perceived themselves as being competent in assessment in their subject. This finding corresponds to overall positive self-ratings of teachers with respect to their assessment competence reported in other studies (Alkharusi et al., 2012; Öz & Atay, 2017). However, it differs from studies with in-service teachers (Mertler, 2004; Plake et al., 1993) and studies with students in teacher education programs (Volante & Fazio, 2007) that report low assessment competence levels. The EFL teachers' positive self-perception with respect to assessment competence in this study could be a consequence of the relatively high percentage of participants (60%) who had taken pre-service courses on assessment and/or had participated in in-service assessment training events (61%).
While this study's teachers' self-ratings of assessment competence were overall high, they differed for specific assessment categories. The two categories that teachers felt least competent in and that were below the mean overall competence score were developing digital assessment and constructing and administering language assessment. It is recommended that pre-service training and in-service teacher training particularly address these areas to elevate teachers' confidence in assessment skills. The participants felt more competent in recognizing ethics of assessment, developing valid grading procedures, developing performance assessment, and communicating assessment results (in that order).
EFL teachers who had taken pre-service assessment courses rated themselves significantly more competent in five out of the six assessment competence categories than teachers who had not taken any assessment courses. The difference in reported competence between teachers with and without pre-service assessment course experience was largest for constructing and administering language assessment and developing digital assessment. This result contradicts Alkharusi et al. (2012), who did not find statistically significant effects of pre-service course work on Omani teachers' assessment knowledge and practices. It is possible that differences between study participants in terms of teaching subjects and characteristics of the pre-service assessment courses are responsible for the different findings. Future research should investigate and compare assessment course contents, how they relate to teachers' needs, and how they contribute to shaping teachers' knowledge about language assessment. Past research suggests that there is room for improvement of assessment courses. Teacher candidates often found formal courses on assessment hard to understand or irrelevant to real classroom practices (Berry et al., 2017; DeLuca & Johnson, 2017; Popham, 2006, 2009; Stiggins, 1995; Taylor, 2009). Our finding of pre-service assessment training effects is reason for more optimism. It is in line with previous research that demonstrates the impact of pre-service training in assessment on self-perceived assessment competence by teachers and teacher candidates in various subject areas and at different educational levels. It corresponds to findings from studies with in-service teachers in secondary education in Oman (Alkharusi et al., 2011) and teachers in Turkey (Ogan-Bekiroglu, 2009) that showed higher levels of perceived assessment knowledge in teachers who had taken assessment courses than in those who had not. It also corresponds to DeLuca and Klinger's (2010) finding that teacher candidates who took a pre-service course on educational assessment rated themselves more confident in assessment knowledge than peers who did not take an assessment course. These results underscore the important role that pre-service assessment courses can play in developing teacher-perceived assessment competence.
Notes: N = 101 participants. 1 = never, 2 = rarely, 3 = sometimes, 4 = most of the time, 5 = always.
Competence in using digital assessment, particularly, was also rated higher by teachers who had taken pre-service courses in assessment compared to teachers who had not. This suggests that assessment courses help prepare teacher candidates for today's challenges of teaching and assessing in digital environments, which is yet another reason why assessment courses should be an indispensable component of teacher preparation programs.
This study found no statistically significant multivariate effects for in-service assessment training, age, and gender on teachers' language assessment competence. Teachers who had received in-service training in assessment did not rate themselves more competent than teachers who had not participated in such training, contrary to what was found in studies with teachers of other subjects (Fan et al., 2011; Koh, 2011; Mertler, 2009). The authors believe that in-service training, which is often limited in time and scope, cannot replace in-depth courses on assessment in teacher-preparation programs in college, but it can complement them, fill gaps, or rectify what those courses failed at (Popham, 2009). In the present study, the variable gender did not impact teachers' assessment competence. Given the inconsistent results reported in previous studies (e.g., Alkharusi, 2009, 2011), the researchers assume that neither gender nor age is among the main variables that affect assessment competence.
Correlation analyses showed a weak association of teaching load with one assessment competence: Teachers with a high teaching load reported communicating assessment results less frequently. There were also associations of years of teaching experience with four assessment competence categories. Most of these associations were not strong, but one was of medium strength: More experienced teachers felt that they were more competent in constructing and administering language assessments than less experienced ones. However, when it came to using digital assessment, experienced teachers did not rate themselves more competent than less experienced teachers. Crusan et al. (2016) had found lower levels of AL in teachers with more years of teaching experience (most likely older teachers) compared to teachers who had taught less. Our finding contradicts theirs but suggests that advantages in confidence of more experienced teachers do not apply to the area of digital assessment literacy.

Assessment practices
Overall, the EFL teacher participants reported using all the specific assessment practices that were given as options in the survey. These findings imply that the teachers surveyed in this study vary the methods they use to assess their EFL students. The teachers apply traditional methods and alternative assessments, and involve students in assessment, as in self- and peer-assessment tasks. They also use digital assessment and recognize the importance of communicating assessment results to their students, all overall positive findings.
Correlation analyses showed almost no association of years of teaching experience and teaching load with teachers' assessment practices. There was only one weak negative association, between teaching load and teachers' practice of communicating assessment results. A heavy teaching load may come at the cost of the time teachers have, or take, to communicate assessments to students. While there were no statistically significant multivariate effects for gender, age, and in-service assessment training on the frequency of assessment practices, MANOVA showed that pre-service assessment training had a statistically significant multivariate effect on the frequency of teacher assessment practices. These findings are in partial agreement with Alkharusi et al. (2012), who found no effect for age and in-service training on teachers' assessment practices. However, unlike Alkharusi and colleagues, who reported no relation between pre-service assessment training and frequency of assessment practices, this study found that having taken a pre-service course on assessment affected EFL teachers' practices. The EFL teachers with pre-service assessment training reported higher frequencies for communicating assessment results and developing assessment standards and criteria than teachers without pre-service assessment training. This finding could be relevant for the planning of in-service assessment training events. If (as in this study) teachers who have not had any substantial assessment training report that they develop assessment standards and criteria less often than peers with training in assessment, then they might benefit particularly from in-service training events that focus on the need for standards and criteria and how they can be developed. Analogously, in-service training events could also address the communication of assessment results.
To strengthen teachers' confidence in assessment competence and promote assessment practices, it is recommended that EFL and FL teacher preparation programs offer at least one course on language assessment. That this is frequently not yet the case was demonstrated by Lam (2015), who reported that only one out of five institutions offered a compulsory and stand-alone language assessment course in teacher preparation programs in Hong Kong. In a Turkish context, Ölmezer-Öztürk and Aydın (2019) found that EFL teachers often felt that only one assessment course was insufficient to prepare them for their assessment needs as language teachers.
The authors also recommend that language-teaching institutions evaluate the types of in-service training they offer (e.g., Koh, 2011) and how to modify them to meet their teachers' needs. In-service training should focus on teachers who have not taken assessment courses during their college education, and on assessment practices that these teachers report using less frequently, for example, the development of assessment standards and criteria in this study. Where in-service training is limited, the authors recommend the involvement and guidance of a knowledgeable mentor (Eyal, 2012) or forming local teacher learning communities (Wiliam & Thompson, 2007).

Limitations and future research
This study is limited in that it relies only on teacher self-reports elicited through questionnaires. Ideally it would be complemented by data from tasks that measure teachers' assessment competence through actual performance. This could be done through tests of teacher assessment knowledge (Alkharusi et al., 2012; Lin & Su, 2015; Xu & Brown, 2017), through tasks in which teachers are asked to evaluate the quality of a set of assessments based on their understanding of what good assessment is (White, 2019), or through tasks in which teachers are asked to create their own assessments (Levi & Inbar-Lourie, 2020), based on which their assessment skills will be evaluated. We are in the process of using such measures to complement the survey data presented here.
We also acknowledge that the findings about postsecondary EFL teachers in Oman are limited and cannot readily be generalized to other populations (for example, teachers in primary or secondary education, teachers of other subjects, or teachers who work elsewhere). However, taken together with the findings from the research literature, we believe that this study provides further evidence for the importance of pre-service assessment courses for teachers' competence and practices in language assessment and for the need to further develop teachers' digital assessment literacy.

Conclusion and recommendations
This study contributed to the field of AL and EFL assessment by expanding the ways of measuring teachers' AL: it adapted self-rated AL questionnaires, aligned them with the definition of language assessment literacy provided by this study, and accounted for the affordances and needs of the investigated context and of the EFL teacher participants. It has accounted for digital assessment literacy in both teacher assessment competence and practices. In addition, the study has responded to several calls in the literature to emphasize the pressing need to use digital assessment (Brown, 2013; Noijons, 2013) and to address the constantly reported underdeveloped teacher AL (Alkharusi, 2011; Mertler, 2004; Plake et al., 1993; Yamtim & Wongwanich, 2014). The study is the first, to our knowledge at least, to operationalize Eyal's (2012) Teacher Digital Assessment Literacy Levels by adapting some abilities to the context of the current study of postsecondary EFL teachers in Oman and integrating them into the applied self-rated questionnaires.
The postsecondary EFL teachers of this study in general believed that they were assessment literate. However, the study's findings showed differences in teachers' confidence across the assessment categories investigated. The participants felt most competent in recognizing ethics of assessment, developing valid grading procedures, developing performance assessment, and communicating assessment results, in that order. Digital assessment was among the areas that teachers felt least competent in. Teachers who had taken a course on assessment in college reported being significantly more competent in assessment. Teaching experience was associated to some extent with assessment competencies but not with digital assessment literacy: More experienced teachers felt generally more competent in assessment than less experienced ones, but not in digital assessment.
The study's findings also showed differences in teachers' use of specific assessments. Digital assessment and student-involved assessment were used the least (below average). Teachers who had taken a course on assessment in college reported practicing certain assessments more frequently than teachers who had not taken any assessment courses. Teaching experience had no association with teachers' assessment practices. Teaching load correlated negatively with the practice of communicating assessment results: Teachers with a heavy teaching load reported taking less time to communicate assessment results to students.
The findings of this study highlighted the impact of pre-service assessment courses on EFL teachers' perceptions of assessment competencies and practices. Hence, the authors recommend that EFL preparation programs continue providing language teachers with courses on assessment to promote their assessment literacy, or add an assessment course if they do not offer one yet. Further, the authors recommend that pre-service assessment courses and in-service assessment training organized by EFL departments focus on the competencies that received below-average self-ratings by the teachers: developing digital assessment and (hands-on) constructing and administering language assessment, as well as the practices of using student-involved assessment (self- and peer-assessments) and non-achievement grading. These were the areas that teachers felt least competent in and the practices they used least frequently. Teachers who had no assessment course in college may benefit the most from professional development training events that address the competencies of constructing and administering language assessment and developing digital assessments as well as the practices of communicating assessment results and developing assessment standards and criteria. These were the competencies and practices in which teachers who had no assessment courses lacked confidence compared to teachers who had taken assessment courses in their teacher-preparation programs. The authors emphasize that the development of digital assessment competencies and practices should be an important component of both teacher preparation courses and in-service training in language assessment.