A case study of the use of the Hattie and Timperley feedback model on written feedback in thesis examination in higher education

Abstract Effective feedback is a powerful educational intervention to support learning. Hattie and Timperley have developed a feedback model in which they define three different functions and four different levels of feedback. Although the model is widely used in educational practice, little is known about how the model is used in education or about the optimal distribution of the different types of feedback. In the current case study, we used an explanatory sequential mixed method design to investigate how lecturers use the model and how its use can be optimized. For this, 349 feedback comments from 22 lecturers were analysed, and 5 feedback experts participated in a focus group interview. The results show that most of the feedback given by the lecturers is task-oriented feedback and that current feedback practices can be improved by focusing more on the combination of feedup, feedback, and feedforward. In addition, the results indicate that the level (task, process, self-regulation, person) of the feedback is more difficult to determine beforehand and should be tailored to the learning goal. The design of the assessment form plays an important role in this alignment.

Effective feedback is one of the most powerful educational interventions to support learning (Hattie & Timperley, 2007; Shute, 2014). It helps students to assess what they know, what they do not know, and what they need to know (Black, 1998; Boud & Molloy, 2013; Broers et al., 2017). Because of its positive effects, with effect sizes up to d = 0.73 (Hattie, 2009), a great deal of research has been done on feedback. This research focuses on many different aspects of feedback, such as feedback literacy (Carless, 2020; Carless & Boud, 2018; De Kleijn, 2021; Leenknecht et al., 2019), feedback perception (Winstone et al., 2019; Geitz et al., 2016; De Kleijn et al., 2013; Lizzio & Wilson, 2008; Perera et al., 1990), and feedback quality (Newton et al., 2006; Nicol, 2010; Sopina & McNeill, 2014). There are also many feedback models and classification systems, such as the feedback model of Nicol and Macfarlane-Dick (2006), which connects self-regulated learning theory to seven principles of good feedback practice, or the feedback literacy model of Carless and Boud (2018). Panadero and Lipnevich (2022) provide an insightful overview of the diversity and range of these models and classifications.

ABOUT THE AUTHORS
Ivonne Lipsch-Wijnen is an educationalist and trainer. She has extensive practical experience in teacher professionalization and in providing educational advice on feedback and formative evaluation. Kim Dirkx is an educational psychologist and former teacher in higher education. Her research focuses on the effects of feedback modalities, online measures of feedback processing, and design guidelines for online and formative tests.

PUBLIC INTEREST STATEMENT
Effective feedback is a powerful educational intervention to support learning. One feedback classification system that is often cited and used in educational practice is that of Hattie and Timperley. This feedback model makes statements about effective and less effective feedback, but there is little research on how the model is actually implemented in educational practice. The present study provides empirical data on how the feedback model of Hattie and Timperley is used in practice and how it can support lecturers in using the model more effectively (which improves the quality of feedback).
One classification system that is often cited and used in educational practice (Arts et al., 2021; Dirkx et al., 2019) is that of Hattie and Timperley (2007). The framework of Hattie and Timperley is attractive because the authors have theorized the meaning of feedback and systematically mapped it, and because it is one of the first and few feedback typologies that includes the feedforward function of feedback. Also, the model takes a pedagogical and psychological point of view, in contrast to, for example, the feedback models of Kluger and DeNisi (1996) and Kulhavy and Stock (1989), which are more focused on internal processes (Panadero & Lipnevich, 2022).
The model of Hattie and Timperley focuses on two aspects of feedback, namely its function and its level. Hattie and Timperley also make statements about effective feedback (process- and self-regulation-focused feedback and a combination of feedup-feedback-feedforward) and less effective feedback (person-focused), but there is little research on how the model is actually implemented in educational practice. More research on the use of the feedback model in educational practice is therefore needed. Such research is important to further develop the classification framework and effective feedback practices in education.
In the current study, we therefore examined how the model of Hattie and Timperley is applied by lecturers of a University of Applied Science (UoAS) in written feedback provided formatively on bachelor theses, and how its use may be optimized. Hattie and Timperley (2007, p. 81) define feedback as "information provided by an agent (e.g., lecturer, peer, book, parent, self, experience) regarding aspects of one's performance or understanding". According to this definition, effective feedback provides information on how to realise the desired result. In order to achieve that, Hattie and Timperley suggested that effective feedback should provide answers to three kinds of questions that steer the learning process of students: (a) What am I going to learn? What are the learning goals? (feedup), (b) How have I handled this so far? (feedback), and (c) What is the next step? What am I going to do to achieve the set goals? (feedforward). The advantage of feedup is that the student knows what is expected of him or her and has a direction to work towards. The student gains insight into this goal through clarity about the assessment and the assessment criteria (Sadler, 2017; Wang & Li, 2011). Feedback gives the student insight into the current level of his or her performance (Kulhavy & Stock, 1989). The lecturer marks, for example, what is already good, or what is still unclear or incorrect. With feedforward, the student looks ahead. The feedback is extended with solutions and suggestions for improvement (Elder & Brooks, 1992; Narciss & Huth, 2004). Feedforward in particular is a positive aspect of the model, because it is focused on growth or progress. It gives the student perspective and direction, and is motivating. Feedup, feedback, and feedforward are best provided in combination rather than in isolation.
The combination of the questions gives the student a good insight into what the student needs to do to get from the current level to the desired level (Sadler, 1989).

Theoretical feedback model
A second aspect of the feedback model of Hattie and Timperley (2007) is the level of feedback. Hattie and Timperley distinguish four levels. The first level is feedback on the task, where the feedback indicates whether a learning task has been properly understood and/or performed, whether or not it is relevant, and so on. In the context of a research paper, task-related feedback involves feedback on the research product, such as the results or outcomes of the research. An example might be "Your research question has not yet been properly formulated". The second level is feedback on the process, which focuses on the approach or strategy. What steps should the student take to carry out the task? The lecturer gives, for example, feedback on the approach by asking questions and suggesting alternative solutions, making the student aware that other strategies may be possible. Feedback at this level, according to Hattie and Timperley, leads to deep learning. In the context of a thesis, process-level feedback is about how the student has approached the research. An example of feedback at the process level is "Use an academic database to find literature". The third level is feedback on self-regulation. This feedback is focused on metacognitive skills and is meant to help the student evaluate himself or herself. It therefore relates to the way in which the student has shaped his or her own learning and made choices. An example is "Take another look at your planning and indicate why it was not possible to submit the problem statement on time". Finally, feedback on the person focuses on someone's personal qualities and personal characteristics. An example is "Well done!".

Effectiveness of feedback
According to Hattie and Timperley (2007), feedup, feedback, and feedforward should always be used in combination. The combination of the functions gives the student a good view of what he or she has to do to get from the current level to the desired level. Furthermore, Hattie and Timperley suggest that feedback at the task level is most beneficial because the student receives specific information about how he or she is doing on the task. But feedback at the task level is not always effective. Often, task-level feedback only contains information about whether a task has been performed well or not, and not why (Glover & Brown, 2006; Hattie & Timperley, 2007). The student then receives no help in coming to a correct interpretation of the task. In addition, if a lot of feedback is given on the task, students are easily overwhelmed by the amount of feedback, and they might give up as a result of cognitive overload (Joosten-ten Brinke & Sluijsmans, 2014; Shute, 2014; Surma et al., 2019). If too much feedback is given in a row, students may also start to believe that they need a lot of feedback because they cannot do it themselves, which often results in increased teacher dependency (Voerman & Faber, 2019). The feedback should also not be too simple, because then it yields too little (Joosten-ten Brinke & Sluijsmans, 2014; Shute, 2014). Feedback at the task level is also ineffective if students have too little knowledge to perform the task. Then, it is better to give more instruction first (Hattie & Timperley, 2007). Finally, task-level feedback can usually not be generalized to other tasks, which limits its usefulness for enhancing students' learning (Hattie & Timperley, 2007).
Feedback at the process level and at the self-regulation level is the most powerful for learning, because it concerns the way in which learning takes place (Arts et al., 2021). Feedback at the process level provides information on how to approach a task and leads to deep learning, whereas feedback at the self-regulation level focuses on the way students monitor and regulate their actions, and contributes to self-activity, autonomy, and independent learning (Deci & Ryan, 2000; Pintrich & De Groot, 1990).
Feedback on the personal level says something about the student, such as giving the student a compliment. This type of feedback is the least supportive and least effective, because it makes the student dependent on the lecturer (Hattie & Timperley, 2007; Kirschner et al., 2018). If many compliments are given, students may find the compliments suspicious ("What is wrong with me that the lecturer always compliments me?"; Hattie & Yates, 2014), and this can affect their intrinsic motivation (Hattie & Timperley, 2007; Shank, 2017). However, it is important to distinguish between compliments that are accompanied by information about the process or performance, and compliments as a reward or reinforcer. Giving compliments as a reward can be valuable in the moment to build a student-teacher relationship, but it does not support learning (Wiliam, 2011). Because this form of feedback contains no task information, Deci et al. (2019) and Kluger and DeNisi (1996) state that giving compliments as a reward should not be seen as a form of feedback.
According to Hattie and Timperley (2007), feedback is especially effective when all three functions are used and the feedback focuses on the process and self-regulation. However, only four studies (Arts et al., 2021; Dirkx et al., 2019; Harris et al., 2015) are known in which the model is applied to authentic feedback to verify to which extent the model reflects educational practice and how its use can be optimized. The four studies are discussed below. Harris et al. (2015) conducted research on peer and self-evaluation in primary and secondary schools in New Zealand. The pupils were between 10 and 14 years old. The researchers investigated whether the four levels of the Hattie and Timperley (2007) model could be found in authentic, digital pupil feedback. A total of 471 feedback notes were examined. The results showed that pupils mainly gave feedback on the task, in both peer and self-evaluations (71.0% and 50.0%, respectively). Feedback at the process level occurred in only 9.0% of the feedback comments in peer evaluations and 20.0% in self-evaluations. Feedback at the level of self-regulation was not found in peer evaluations and in only 8.0% of pupils' self-evaluations, whereas feedback at the level of the person occurred in 20.0% of the cases in peer evaluations and 22.0% in self-evaluations. Arts et al. (2021) conducted a case study at a Biology teacher training college in the Netherlands among fourth-year college bachelor students who conducted action-based research and wrote a paper about it. During the course, students received interim, written feedback from one of three lecturers at different times. Arts et al. (2021) categorized 299 feedback comments on 8 interim papers (M = 37; SD = 19). The analysis of the feedback showed that 88% of the categorized comments could be classified as feedback, that only 12% of the comments could be classified as feedup, and that feedforward was never used by the lecturers (0%). Furthermore, Arts et al.
(2021) found that 48.1% of the comments were task-level feedback, that 26.4% were process-level, and 24.1% self-regulation-level feedback. Feedback on a personal level was found in only 1.3% of the feedback comments. Dirkx et al. (2019) did research at a part-time University of Applied Science master's programme for assessment and testing experts in the Netherlands. Because it is a part-time programme, students already had a lot of work experience in different professions, and the average age was quite high (M = 48.9; SD = 6.8). The researchers analyzed interim feedback from three lecturers, all experts in the field of assessment. The lecturers gave written feedback in two ways (i.e., modalities): (a) tracking changes and adding comments in Microsoft Word© (hereafter: in-text), and (b) filling in a rubric and giving additional feedback comments in the rubric (hereafter: rubric). A rubric is an assessment matrix consisting of two dimensions: test criteria and assessment levels (Van den Bos et al., 2014). In total, almost 1,000 feedback comments from 18 papers from two courses were analyzed. It appeared that mainly feedback was given in both modes (in-text 96.0%, rubric 96.8%), and that little or no use was made of feedup (in-text 0.3%, rubric 0%) and feedforward (in-text 6.4%, rubric 22.0%). Furthermore, they found that the vast majority of feedback comments were at the task level (in-text 68.8%, rubric 77.1%), but in rubrics, slightly more feedback was given at the process and self-regulation levels (process 28.7%, self-regulation 19.7%) than in-text (process 19.4%, self-regulation 14.9%). Feedback at the level of the person was non-existent in both modalities. Feedback in-text and in a rubric therefore contain different types of feedback.

The function and level of feedback in practice
Finally, Arts et al. (2021) conducted a case study at a Biology teacher training college in the Netherlands among 18 fourth-year college bachelor students who conducted action-based research and wrote a paper about it. During the course, students received interim written feedback from one of four lecturers. The participants of the course were randomly divided over a control group and an experimental group. Both groups received in-text and side-line annotations in their papers, but only the experimental group also received feedback in the form of a filled-out cover sheet. Arts et al. (2021) categorized 802 annotations in papers and 173 on cover sheets. In the analysis, Arts et al. (2021) made three groups: (a) annotations in the paper (control group: no cover sheet, hereafter: control), (b) annotations in the paper (experimental group: with cover sheet, hereafter: experimental), and (c) annotations on the cover sheet (hereafter: cover). It appeared that mainly feedback was given (control 96%, experimental 99%, cover 42%). Whereas in the control and experimental groups hardly any or no feedup (3%, 0%) and feedforward (1%, 1%) was given, in the cover-sheet group both were used quite often (feedup 25%, feedforward 32%). Furthermore, Arts et al. (2021) found that many feedback comments were at the task level (control 53%, experimental 42%, cover 36%), but in the cover-sheet group more feedback was provided at the process level (45%) compared to the control group (26%) and the experimental group (35%). In all three groups, about one-fifth of the feedback was at the self-regulation level (control 21%, experimental 23%, cover 18%). Feedback at the personal level was almost never given (control 0%, experimental 0%, cover 1%).
Overall, the four studies described above show that in educational practice there is a lack of feedup and feedforward and the feedback that is provided mainly focuses on the task level. However, the results also seem to vary between contexts and feedback instruments that are used (i.e., rubrics, cover sheets, or in-text). More research is therefore needed to see if there are general recommendations for better implementation of the model of Hattie and Timperley in educational practice.
In the current case study, we examined the use of the model of Hattie and Timperley (2007) in written feedback on interim theses by lecturers at a Dutch Academy of Hotel Management and an Academy of International Facility Management. We used an explanatory sequential mixed method design to answer the following research questions: (a) what kind of feedback is used in thesis examination by lecturers at the Academies? and (b) which recommendations are suggested by feedback experts for a more successful implementation of effective feedback functions and levels?

Context
In this study, the theoretical feedback model of Hattie and Timperley (2007) was applied to authentic feedback from lecturers of two international, four-year, full-time bachelor's degree programs: an Academy of Hotel Management and an Academy of International Facility Management in the Netherlands.

Design
An explanatory sequential mixed method design was used to answer the research questions (Creswell, 2013). In the first phase, quantitative data was collected by categorizing feedback comments and then quantifying them on the basis of a scoring format (question 1). In the second phase, qualitative research was conducted by means of a focus group interview with experts (question 2) using the results of phase one as input. A focus group interview was chosen because participants are stimulated to actively share their ideas and opinions (Creswell, 2013;Van Assema et al., 1992). This interaction between the participants leads to deeper insights than individual or group interviews.

Sample and participants
Phase 1. A total of 22 out of 37 lecturers (12 male; 10 female) provided feedback on 44 bachelor theses (10 theses from the 2017-2018 cohort and 34 from the 2018-2019 cohort). The lecturers (i.e., feedback providers) were between 27 and 63 years old (the age of two lecturers was unknown) and had between 3 and 18 years of teaching experience (the number of years of teaching experience of 11 lecturers was unknown). To determine the representativeness of the randomly selected feedback comments for all feedback comments of the 22 lecturers who participated in the study, an a priori power analysis was carried out with the G*Power 3.1.9.2 program. This showed that 159 feedback comments are needed for the study (Heinrich Heine Universität Düsseldorf, 2019). The test chosen for the power analysis was the χ² test (Creswell, 2013), with feedback comments as the research unit and .95 as the desired power. In this study, 349 feedback comments were scored, which provides more than enough power to perform the analysis (the larger the sample, the more representative; Garssen & Hornsveld, 2016).
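An a priori power analysis of this kind can also be reproduced outside of G*Power. The sketch below, using Python and statsmodels, is a minimal illustration only: the effect size and degrees of freedom actually entered into G*Power are not reported in the text, so Cohen's w = 0.3 (a conventional "medium" effect), α = .05, and four categories (df = 3, matching the four feedback levels) are assumptions.

```python
# A priori power analysis for a chi-square (goodness-of-fit) test,
# analogous to the G*Power computation described above.
# Assumed parameters (not reported in the study): Cohen's w = 0.3
# (medium effect), alpha = .05, and n_bins = 4 (four feedback levels).
from statsmodels.stats.power import GofChisquarePower

analysis = GofChisquarePower()
n_required = analysis.solve_power(
    effect_size=0.3,  # Cohen's w (assumed)
    nobs=None,        # solve for the required number of comments
    alpha=0.05,       # assumed significance level
    power=0.95,       # desired power, as reported in the study
    n_bins=4,         # four categories -> df = 3 (assumed)
)
print(f"Required number of feedback comments: {n_required:.0f}")
```

With these assumed parameters, the required sample size is of the same order of magnitude as the 159 comments reported above, and the 349 comments actually scored comfortably exceed it.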
Phase 2. For the focus group, five experts in the field of feedback participated (Bloor et al., 2001;Creswell, 2013;Van Assema et al., 1992). They were all (recently) employed at a Belgian or Dutch UoAS or full university and recruited via the network of the authors. Two experts authored previous studies on this topic. One expert worked at the UoAS where the study was conducted.

Materials
Assessment Forms. In Figure 1, a section of the assessment form is presented. At the top of the assessment form, there is an instruction for the lecturers: "Provide comments for each element of the bachelor thesis in this form". Then, there are six main criteria, which relate to different sections of the thesis: introduction, literature review, objectives and/or research questions, research and/or design methods, and reporting & style. Because it is an interim assessment, the sections "results", "conclusions", and "discussion" are not part of the aspects to be assessed. Each section is covered by two or more assessment criteria. The lecturers provided interim feedback on students' bachelor theses by adding digital feedback comments in the assessment form using Microsoft Word©.

Scoring format
To analyze the feedback, a scoring format was designed using the theoretical feedback model of Hattie and Timperley (2007). Multiple scores were possible when scoring (e.g., feedup, feedback, task, process). The scoring format is shown in Figure 2.

Interview protocol
For the focus group interview, an interview protocol was used (Creswell, 2013). The interview protocol contained general data (such as date and time) and the set-up of the focus group interview, including the questions for the experts. The interview questions were developed by the two authors and an expert on this topic and on qualitative research from the same university, using the following procedure: the first author formulated concept interview questions based on the literature and the research questions. The second author and the expert were consulted and provided feedback, which led to a second draft; this draft was again discussed, and a final set of questions was agreed upon by all three. The final questions were the following: (a) What do you notice about the feedback provided by this group of lecturers? Are the results recognizable?, (b) What recommendations do you have regarding the "optimal mix" of different functions and levels of feedback, and how can that mix be achieved?, and (c) Are there still issues that have not been addressed in this interview but are important to address here? During the focus group interview, the first author continued to ask for more concrete or detailed answers, or for clarification of answers that were provided.

Procedure
Ethical approval was granted by the Open University of the Netherlands before data collection (cETO, Open Universiteit, U2018/09426/SVW). The researchers requested and received permission from the program manager of the UoAS to conduct research on feedback at a Dutch Academy of Hotel Management and an Academy of International Facility Management. The UoAS searched their archives for bachelor theses from the 2017-2018 and 2018-2019 cohorts. Only assessment forms to which a "Permission statement for publication of graduation thesis" was added were selected by the UoAS. In this statement, the student and the host company permitted the public use of the bachelor thesis after graduation for educational and research purposes. Then, the UoAS

Figure 2. Scoring format for feedback comments, with a part of the codebook.

Feedforward. What is the next step? What could the student do to achieve the goals set? Explicit: the feedback is expanded to include solutions and suggestions for improvement. Example: "Take a close look at spelling and grammar mistakes." (Feedforward-task: the lecturer explicitly indicates what the student should do to improve language errors, namely look at spelling and grammar. This is in contrast to, for example, saying "Your spelling contains many errors", which is implicit and therefore feedback-task.)

Task. Content: Is the learning task well understood and/or performed? Is the work correct or incorrect, relevant or irrelevant, complete or incomplete, et cetera? Example: "The aim of the study could be more clearly formulated." (Feedback-task: the feedback indicates that the purpose of the study is not yet well formulated; in other words, the content is not yet good.)
approached the thesis supervisors of these students asking if the assessment form with lecturers' feedback could be used for research. All lecturers agreed and signed an informed consent.
Subsequently, the UoAS anonymized the assessment forms with the lecturers' feedback and provided them to the researchers. After the researchers analyzed the feedback, a 1-h online focus group interview took place in BlueJeans©. At the start of the focus group interview, the experts were welcomed and thanked for their participation by the researchers. This was followed by a brief explanation of how BlueJeans© works and of the personal data collected (the audio recording), and a round of introductions. Then, the audio recording started, a short online presentation was given about the study, and the results of the feedback analysis were presented via a presentation in PowerPoint©. Following this, the experts were asked through a semi-structured interview to make recommendations, in addition to the Hattie and Timperley (2007) feedback model, based on the findings on current practice. During the focus group interview, minutes were taken in Microsoft Word©, and after the focus group interview the audio recording was transcribed in Microsoft Word© (Creswell, 2013).

Feedback comments
A codebook was developed (see Figure 2), and the second author provided feedback on the subcategories and examples. Then, the scoring format was tested in four rounds and discussed between the two researchers in order to arrive at a shared understanding. Based on the conceptual scoring format, the two researchers first scored the same assessment forms with feedback comments (n = 19) in two rounds and discussed the results. Subsequently, four other assessment forms with feedback comments (n = 40) were scored by both researchers, and the inter-rater reliability was determined by calculating Cohen's kappa with IBM SPSS Statistics 26© (Creswell, 2013). The inter-rater reliability for the functions of feedback was κ = .51 (p < .001) and for the levels of feedback κ = .29 (p < .001). This is, respectively, moderate and fair agreement (Landis & Koch, 1977). The scoring format was then further discussed in order to arrive at a better shared understanding, and a final round of scoring was conducted. After this, the inter-rater reliability was again calculated to determine the agreement between the assessors. This time the outcome for the functions of feedback was κ = .73 (p < .001) and for the levels of feedback κ = .74 (p < .001), which indicates substantial agreement (Landis & Koch, 1977). Finally, the other 38 assessment forms were analyzed by the first author.
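The kappa values above were computed in SPSS, but the same statistic takes only a few lines elsewhere. The sketch below uses scikit-learn's `cohen_kappa_score` on two hypothetical sets of level codes (invented for illustration, not the study's data), together with the verbal interpretation bands of Landis and Koch (1977).

```python
# Cohen's kappa for two raters coding the same feedback comments.
# The codes below are hypothetical illustrations, not the study's data.
from sklearn.metrics import cohen_kappa_score

rater1 = ["task", "task", "process", "process", "task", "self-regulation"]
rater2 = ["task", "process", "process", "process", "task", "self-regulation"]

kappa = cohen_kappa_score(rater1, rater2)

def landis_koch(k):
    """Verbal interpretation bands for kappa (Landis & Koch, 1977)."""
    for upper, label in [(0.00, "poor"), (0.20, "slight"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial")]:
        if k <= upper:
            return label
    return "almost perfect"

print(f"kappa = {kappa:.2f} ({landis_koch(kappa)} agreement)")
```

For these invented codes the raters agree on five of six comments, yielding κ ≈ .74, i.e., substantial agreement, comparable to the final round reported above.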
The feedback comments in the assessment forms were scored using the scoring format in Microsoft Excel 2010©. A feedback comment belonged to an assessment criterion and often consisted of several sentences. Therefore, the comments could receive one or more codes on function and level (e.g., feedback, feedforward, task, process; see Figure 3). Because lecturers could only write their feedback comments in an overarching comment field, the first author split the feedback comments, if applicable, and placed them in the scoring format with the correct assessment criterion (see Figure 3). If there was more than one paragraph of feedback for an assessment criterion, this was considered one feedback comment. With Microsoft Excel 2010©, the frequencies were calculated for the combinations of the functions and the combinations of the levels of feedback, so that question 1 could be answered. A total of seven combinations of functions and 15 combinations of levels were possible.
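The frequency calculation done in Excel amounts to tallying the combination of codes attached to each comment. A minimal sketch in Python, with hypothetical coded comments invented for illustration:

```python
# Tally how often each combination of feedback functions occurs,
# mirroring the frequency calculation done in Excel.
# The coded comments below are hypothetical illustrations.
from collections import Counter

# Each comment carries one or more function codes.
coded_comments = [
    {"feedback"},
    {"feedback", "feedforward"},
    {"feedback"},
    {"feedback", "feedforward"},
    {"feedforward"},
]

# A sorted tuple makes each combination a hashable, order-independent key.
combo_counts = Counter(tuple(sorted(c)) for c in coded_comments)

total = len(coded_comments)
for combo, n in combo_counts.most_common():
    print(f"{' + '.join(combo)}: {n} ({100 * n / total:.1f}%)")
```

Because a comment with both feedback and feedforward is counted as one combination, the percentages per combination sum to 100% over all comments, which is how the combination tables in the results section can be read.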

Focus group data
To answer subquestion 2, the audio recording was transcribed verbatim (Creswell, 2013; Van Assema et al., 1992) and used to manually sort the information based on the themes from question 2. New themes could emerge during the analysis of the data. In this study, we chose to consider the information shared by the experts as the most important insights. These insights were then summarised and paraphrased per theme and supplemented with quotes from the experts.

Scored feedback comments
A total of 349 feedback comments from 44 assessment forms were scored (M = 8 per assessment form; min = 4, max = 13). For 267 assessment criteria, no feedback was given. The results are discussed below.

Functions of feedback
The results can be seen in Table 1 and show that the lecturers used four combinations of functions of feedback. In 94.3% of the cases feedback is given, in 0.3% of the cases accompanied by feedup and in 36.7% by feedforward. The combinations feedup, feedup-feedback-feedforward, and feedup-feedforward were not used (0%).

Levels of feedback
The results can be seen in Table 2 and show that the lecturers used 10 combinations of levels of feedback. In 94.9% of the cases task-oriented feedback is given, and in 20.6% of the cases task-oriented feedback is accompanied by process-oriented feedback. All other combinations are rarely or never used.

Recommendations by experts
Below the results of the focus group interview are described.

Optimal mix of feedback functions
The experts all agree that feedback should be provided in combination with feedup and feedforward, and that the feedforward function in particular is of most importance for students: "Because you want the student to take action, and for that it is important that the student understands the feedback, but also knows what the next steps are" (Expert A). They furthermore agree that it is especially the combination of feedback functions that makes feedback effective. Only providing feedforward (e.g., suggestions) may seem non-committal to the student. They therefore recommend always linking feedforward to feedup (i.e., the learning goal/assessment criterion) and to feedback (e.g., this is good, you no longer need to adjust this; or, this is not good, you need to adjust this). In this way, it is much clearer what is expected of the student and why. In sum, feedup, feedback, and feedforward should be given in combination, but feedforward is the most important function.

Optimal mix of feedback levels
The experts agree that although the dominance of task-oriented feedback is not wrong in itself (it is the task that is assessed), attention to process and self-regulation is too limited in current feedback practices. As a result, lecturers provide a lot of feedback on the task instead of feedback on the process and self-regulation, although the latter kind of feedback is particularly important in the current situation, in which students work on a thesis in multiple writing cycles and are expected to be the expert on the content. The experts therefore recommend including process and self-regulation as criteria in the assessment format: "A student needs criteria that focus on self-regulation and the executive functions to perform a task. If you start making those criteria explicit for lecturers, then they become much clearer as well" (Expert B). Furthermore, they indicate that it is important to distinguish between feedback on self-regulation (a) as a means of doing the task properly, and (b) as a goal in itself, which is the case if the study program wants to start promoting student self-regulation. Then self-regulation becomes the task. In sum, according to the experts, there is no optimal mix for the levels of feedback, because this depends on the goal. They also highlight that the assessment form is important to provide the right focus, so that feedback at the process and self-regulation levels is encouraged.

Redesign of the assessment form
Besides suggestions on how to improve the effective usage of the conceptual framework of Hattie and Timperley on effective feedback, the experts strongly agree that the layout and content of the assessment form cause many unwanted effects. They stress the importance of redesigning the assessment form, "Actually, it's even more of a checklist: is it present or not" (Expert C). In addition, they acknowledge that feedup is actually already hidden in the form. However, they also stress that feedback does not always align (well) with the criteria and agree that there should be an explicit connection between feedup and feedback, for example, by marking with a yellow marker the assessment criteria (feedup) that a student does not yet meet and adding a feedback and feedforward comment. In addition, the experts suggest creating opportunities where students are required to use the feedback they received. The course can facilitate this by designing an assessment form on which the student is asked to keep track of what feedback is received, what he or she is going to do with it, what help may be needed, and, afterwards, what exactly has been done with the feedback. In this way, the students are stimulated to use the received feedback more actively, and it gives lecturers the opportunity to provide feedback on the process and self-regulation. It is important that students also (learn to) think about what the feedback means for a next task, and during a new task also work with the feedback they received earlier. This means that an assessment form, or part of it, should be used for a longer period of time in the study program and not only upon graduation. In this way, an assessment form promotes the learning process and self-regulation among students even better, according to the experts. In sum, the experts agree that the assessment form needs to be redesigned in order to be able to provide feedback that is effective.

Conclusion and discussion
The aim of this study was to identify how the model of Hattie and Timperley (2007) is used by lecturers (question 1), after which experts made recommendations based on this analysis about the optimal mix of feedback types and how to achieve this mix (question 2).
The results on the first question show that lecturers make limited use of the different functions and levels of feedback. Most feedback is task-oriented feedback. Previous research also shows that lecturers in higher education mainly give task-oriented feedback on student papers and written assignments (Arts et al., 2021; Dirkx et al., 2019).
The results of the second question show that experts agree that feedup, feedback, and feedforward should always be provided in combination. This is in line with what Hattie and Timperley (2007) state in their article. The lack of feedup and feedforward leads to students having a harder time revising the assignment because they lack explanations of how to make those revisions (Arts et al., 2021), as well as information about what they are working towards (Arts et al., 2021) and what the assessment criteria are (Biggs, 2014; Biggs & Tang, 2011; Brookhart, 2008; Panadero & Jonsson, 2020; Weaver, 2006). By linking feedforward to feedup and feedback, feedforward is not optional and non-committal for the student (the feedforward gets direction). According to the experts, the use of the feedup function can be optimized by creating an assessment form in which lecturers are required to provide feedback on each and every assessment criterion instead of using an overarching comment field per subtask. This is also in line with recent research by Arts et al. (2021), which shows that cover sheets are a more effective alternative to in-text feedback. In the study by Arts et al. (2021), lecturers wrote their feedback comments on structured cover sheets. These cover sheets were divided into the following sections: (a) general impression, (b) feedback, (c) feedup, and (d) feedforward. In this way, the cover sheets helped lecturers to make better use of the feedup-feedback-feedforward functions.
A second conclusion of the current study is that there might be no universal mix for the level of feedback. Instead, the criteria and the goals should be well aligned, which makes it easier to also provide feedback at, for example, the process and self-regulation level. A third, and final, conclusion that can be drawn from the expert focus group interview is that opportunities must be created whereby the student must actively use the feedback that he or she has received. One could use, for example, a feedback portfolio, in which the students must keep track of what feedback they have received and (learn to) think about what this means for a next task.
Although the current study contributes to the limited amount of research that has been done on the implementation of the feedback model of Hattie and Timperley (2007) and provides some interesting new insights on how to better implement it, there were some challenges that are important to discuss here. One of the challenges we experienced was related to difficulties in scoring the feedback comments: it was quite difficult to score them because they can often be interpreted in different ways. This challenge was also experienced in previous research (Arts et al., 2021; Dirkx et al., 2019). It is important to be aware that there might be serious differences between how feedback is formulated and how it is interpreted. For future research, it might therefore be interesting to further investigate how students interpret the feedback and to expand the feedback model of Hattie and Timperley (2007) with more examples.
Second, this study is based on one specific assignment with a particular assessment form (writing a bachelor thesis). One should therefore be cautious with making generalizations to other contexts, although the results are in line with research conducted in different contexts (Arts et al., 2021; Dirkx et al., 2019).

Recommendations for practice
A widely heard critique of feedback practices is that feedback is not always effective and then has a minimal effect on learning. Although the feedback model of Hattie and Timperley provides some very practical tips to improve feedback effectiveness, there is little research on how the model is applied in educational practice. This study shows that although lecturers use different types and functions of feedback, they rely heavily on task-oriented feedback, and there is much to gain regarding the use of feedup and feedforward at the process and self-regulation level. Lecturers can improve their feedback practices by clearly aligning learning goals and assessment criteria and by making more use of the combination of different feedback levels and functions. Lecturers and students can therefore benefit from a carefully designed feedback form with appropriate learning goals, where there is room for feedup and feedforward, and where the student needs to log what he or she did with the feedback (i.e., a feedback portfolio). In this way, the student is stimulated to actively use the feedback, and the feedback form promotes the learning process and self-regulation among students.