Fostering student teachers’ research-based knowledge of effective feedback

ABSTRACT Properly designed feedback can be highly conducive to students’ learning. Therefore, teacher education needs to equip future teachers with research-based knowledge of how to provide effective feedback. The present study reports the implementation and quasi-experimental evaluation (a pre-post control group design; N = 141) of a four-week intervention programme that aimed to enhance student teachers’ knowledge of effective feedback and their ability to provide it to students. As a secondary objective, we also tested whether the experience of applying research-based knowledge about feedback improved participants’ attitudes towards educational research. The results showed a substantial increase in knowledge about effective feedback. Moreover, in-depth analysis of written feedback indicated an improvement in participants’ ability to provide high-quality feedback. However, there was no additional effect on their attitudes towards the usability of knowledge from educational research. We discuss the implications for teacher education and teaching about effective feedback.


Introduction
Feedback is one of the most influential factors in learning (Alfieri et al. 2011; Hattie 2009; Wisniewski, Zierer, and Hattie 2019). This is especially true when it relates to individuals' performance, goes beyond the immediate task and includes advice on self-regulatory strategies related to the learning objective. However, feedback can also be ineffective or even harmful for learning, such as when it focuses solely on the correctness of an answer and provides no specific information on the performance or learning process (Bangert-Drowns et al. 1991; Hattie and Timperley 2007; Kluger and DeNisi 1996; Shute 2008). In fact, teachers often seem to give ineffective feedback that is too unspecific (Voerman et al. 2012) or focuses on praise or summative assessments rather than providing specific constructive suggestions for improvement (Drake and Nelson 2021; Hattie and Clarke 2019; Wisniewski, Zierer, and Hattie 2019). In addition, feedback that is constrained to the task provides little guidance on self-regulatory processes and learning goals (van den Bergh, Ros, and Beijaard 2013). Two reasons may explain why teachers frequently fail to give productive feedback. First, they often rely on everyday knowledge of, or misconceptions about, how to formulate effective feedback (Hess, Werker, and Lipowsky 2017). Second, they overestimate the effectiveness of their day-to-day feedback. Research shows significant discrepancies between teachers' own assumptions about how to formulate effective feedback, their actual feedback behaviours and their learners' perceptions of the feedback given (Carless 2006; Lee 2009). To counteract these shortcomings, it is important that teachers acquire research-based knowledge about effective feedback in initial and continuing teacher training. Such knowledge can help them recognise the impact and complexity of effective feedback and acquire the knowledge and skills to provide it. Beyond improving everyday feedback practice in schools, learning about the principles of effective feedback may also be beneficial to teachers' adoption and use of research knowledge for their own practice. Despite frequent calls to foster evidence-based practice in teaching (Bauer and Prenzel 2012; Detrich and Lewis 2013; Diery et al. 2020; Rousseau and Gunia 2016; Scheeler, Budin, and Markelz 2016), teachers rarely seem to refer to research to inform their teaching (van Schaik et al. 2018). Hence, showing that research can actually provide practical insights that transfer easily to classroom practice may help foster favourable attitudes towards educational research (Datnow and Hubbard 2016; Kippers et al. 2018). The research base on feedback is both of high practical relevance and scientifically solid (Hattie and Timperley 2007).

CONTACT Thomas Bock thomas.bock@uni-erfurt.de, Faculty of Education, University of Erfurt, Nordhaeuser-Straße 63, Erfurt 99089, Germany. Supplemental data for this article can be accessed online at https://doi.org/10.1080/02619768.2024.2338841
In line with these arguments, this contribution presents an intervention study that was embedded in the internship phase of teacher training in a master's programme. The intervention had two aims: First, it was designed to enable student teachers to gain and apply educational research knowledge about the effects and provision of effective feedback. Second, by exemplifying the application of research knowledge, it aimed to foster positive attitudes towards educational research use. In our empirical study, we aimed to evaluate whether this intervention increased knowledge (Research Question 1a) and abilities (Research Question 1b) regarding the formulation of effective feedback and improved participants' attitudes towards educational research findings (Research Question 2).
Below, first, we provide a short overview of the principles of effective feedback. Second, we outline the role of prior assumptions about feedback in learning to provide effective feedback, drawing on research on knowledge integration and revision. Third, we explain how acquiring educational research knowledge about feedback may serve to improve student teachers' attitudes towards educational research.

Characteristics of effective feedback
According to the well-known model of effective feedback by Hattie and Timperley (2007), feedback can be understood as "information provided by an agent (e.g. teacher, peer, book, parent, self, experience) regarding aspects of one's performance or understanding" (p. 81). Effective feedback provides guidance to close gaps that are observed between the current and intended performance or understanding of learners. Specifically, Hattie and Timperley (2007) highlight the relevance of different levels of feedback that should be addressed and detail feedback questions that direct learners to improve their performance or understanding, as discussed below.
Feedback can address four different levels: the task, the process, self-regulation or the learner's self. Depending on the level, feedback can be more or less informative for learners. Feedback at the task level refers to task completion and is one of the most common forms of feedback (van den Bergh, Ros, and Beijaard 2013). It provides information about the correctness of a solution as a whole and sometimes about which parts of the task processing were correct or incorrect (Henderson et al. 2019). The effectiveness of task feedback can be enhanced when it includes additional cues addressing the process or self-regulatory levels (Brooks et al. 2019). Process-level feedback entails detailed suggestions and instructions on how to perform better on similar tasks. Feedback at the self-regulatory level provides recommendations concerning self-regulatory strategies, such as optimising time management and monitoring one's learning process (Nicol and Macfarlane-Dick 2006). In contrast, feedback at the level of the self has been shown to be less effective as it is not directly related to the specific performance or learning goal at stake (Hattie and Timperley 2007; Kluger and DeNisi 1996; Wisniewski, Zierer, and Hattie 2019). Commonly, such feedback refers to attributes of the person ('You are a great student', 'That's an intelligent response, well done') and does not provide constructive cues on how to improve.
Each of these four levels can address present, past and future performances and thus answer three specific feedback questions (Hattie and Timperley 2007): 'Where am I going?', 'How am I going?', and 'Where to next?'. These questions inform learners about their current level of performance in relation to the intended learning goal (feed up), to a previous level of performance (feed back) and to future challenges (feed forward).
Feedback at the task level can refer to a past performance, point to the current goal achievement and include future goals (e.g., 'Your underlining is now better implemented [feed back]. You marked all the important words in the text [feed up]. In the future, the goal will be to generate single paragraphs of meaning from your underlining [feed forward]').
In summary, the discussed model highlights the importance of defining precise learning goals to guide learners (Brooks et al. 2019; Hattie and Timperley 2007; Shute 2008). In contrast, mere praise often fails to provide such informative feedback as it commonly represents a general form of evaluation without any goal orientation ('You're a good student!'). Such assessment primarily targets the level of the self and sometimes the level of the task ('Good job!'). It is therefore less productive than precise feedback in guiding learners to improve their performance and learning (Hattie and Timperley 2007; Kluger and DeNisi 1996).

Learning about effective feedback: a case of knowledge integration and revision
To provide effective feedback, (future) teachers need to be aware of the conditions that make feedback informative to learners. Yet they seldom start with blank slates when it comes to feedback. Therefore, learning about effective feedback needs to be framed from a knowledge integration and revision perspective that addresses learners' (potentially mistaken) prior beliefs and knowledge (Britt and Sommer 2004; Chi 2008; Kendeou et al. 2019; Lehmann, Rott, and Schmidt-Borcherding 2019). Because feedback is an everyday phenomenon, student teachers often have some prior assumptions about alleged feedback rules, such as the sandwich feedback method (i.e. critical feedback should be embedded between two instances of positive feedback; von Bergen, Bressler, and Campbell 2014). They may also have picked up and adopted feedback practices from in-service teachers during the internship phases of their teacher training or from university lecturers (Hobson et al. 2009; Lofthouse 2018). However, these 'feedback rules' do not necessarily correspond with knowledge from educational research (Gigante, Dell, and Sharkey 2011; James 2015) and may not lead to the desired effects (Molloy et al. 2020). Furthermore, numerous misconceptions exist about how to provide effective feedback, such as the belief that non-specific praise (e.g., 'You're so smart') has positive effects on children's motivation and self-esteem (Brummelman, Crocker, and Bushman 2016). In fact, children seem to attribute non-specific praise internally, leading them to choose easier tasks to avoid internal attribution of failure (Baumeister, Tice, and Hutton 1989; Pomerantz and Kempner 2013). Accordingly, student teachers can be expected to hold widely varying levels of more or less profound knowledge about feedback. Interventions to foster the knowledge and skills to provide effective feedback therefore need to take into account, and potentially correct, such partial and possibly questionable understandings (see Chi 2008). To this end, we draw on theoretical approaches to knowledge integration and knowledge revision (Britt and Sommer 2004; Chi 2008; Kendeou et al. 2019; Lehmann, Rott, and Schmidt-Borcherding 2019), which acknowledge the need to incorporate and/or revise prior knowledge and assumptions.
Updating existing knowledge about feedback with new information can be seen as a process of knowledge integration. To foster knowledge integration, the targeted activation of prior assumptions through macro-structure focusing tasks has proven to be effective (Britt and Sommer 2004). Macro-structure tasks focus on information in a text that describes global relations and reasoning contexts instead of on detailed information (Britt and Sommer 2004). Therefore, they pose elaborative questions, such as what and why something happened, instead of narrower ones that can be answered with a single word or number. Britt and Sommer (2004) argued that structuring a text through macro-structure focusing tasks facilitates the integration of new knowledge because the text can be grasped in a more structured and efficient manner. Knowledge integration can further be supported by integration prompts (Lehmann, Rott, and Schmidt-Borcherding 2019) that provide learners with explicit cues to look for possible overlaps, complements and reasoning connections between different bodies of knowledge. If students' prior knowledge and assumptions about feedback are incompatible with the new knowledge, a process of knowledge revision is required, which takes a form similar to knowledge integration (Kendeou et al. 2019). According to Kendeou et al. (2019), three conditions foster the initiation of knowledge revision. First, prior assumptions must be activated simultaneously with the correct new information (coactivation) to highlight inconsistencies between them (Kendeou and van den Broek 2007; Kendeou, Muis, and Fulton 2011; van den Broek and Kendeou 2008). Second, the new information must be integrated with the previously misconceived knowledge to update its long-term memory representation (integration; O'Brien, Cook, and Guéraud 2010; Zwaan and Madden 2004). Third, the frequent activation of the new, correct information gradually weakens the propensity to activate false prior assumptions (competing activation; Kendeou, Smith, and O'Brien 2013, 2014; McNamara and McDaniel 2004; van Boekel et al. 2017). In the present study, we applied these instructional concepts to foster students' learning of effective feedback principles.

Learning about effective feedback as an exemplar of using research for teaching
Though the primary purpose of our study was to enhance student teachers' knowledge and ability to provide effective feedback, we were also interested in whether learning research-based knowledge of this topic would enhance participants' positive attitudes towards educational research and its use by teachers. In the literature on evidence-based practice and research-based teacher education, favourable attitudes towards educational research and its perceived usefulness have been highlighted as crucial drivers that facilitate its usage (e.g., Diery et al. 2020; Heitink et al. 2016; Kippers et al. 2018; Thomm, Sälzer, et al. 2021; van Schaik et al. 2018). First, conceptions of research-based teacher education rely on the notion that the contents of teacher education need to be supported by pertinent theory and evidence. This includes addressing misconceptions and questionable beliefs about teaching and learning that students may carry by confronting them with research-based knowledge. Second, engaging with research-based knowledge may foster pre-service teachers' knowledge, skills and attitudes in becoming competent and critical recipients of research-based knowledge as a prerequisite of lifelong professional development. These two perspectives also provide a basis for teachers to take a more active role in enriching their practice on the grounds of their own classroom-based research (Hammersley 1993; Stenhouse 1975; Westbroek et al. 2022). Though this latter teacher-as-a-researcher perspective provides a more encompassing view and is prominent in some countries (e.g., Niemi 2008), the focus of the present paper is more strongly on (pre-service) teachers as recipients and users of research-based knowledge.
Unfortunately, students and in-service teachers often perceive research-based knowledge as irrelevant to their practice (Cain 2016; van Schaik et al. 2018). Moreover, they tend to dismiss research when its results conflict with their prior assumptions (Thomm, Gold, et al. 2021). Teaching and teacher education have long been criticised for being based more strongly on tradition, ideology, conventional wisdom and personal experience and belief than on scientific knowledge about learning and teaching (e.g., Cain 2016; Schildkamp and Kuiper 2010; van Schaik et al. 2018). To increase the likelihood that individuals engage in a particular activity (i.e. the use of research-based knowledge in the present case), it is vital to increase the perceived value of that activity (Eccles and Wigfield 2002). In this sense, utility-value interventions aim to improve one's attitude towards an object by making its usefulness for one's everyday practice salient (Hulleman and Harackiewicz 2021; Rosenzweig et al. 2019). If student teachers experience that scientific knowledge is superior to their own presuppositions or conventional knowledge, it can be assumed that the aforementioned negative tendencies will be counteracted. That is, if student teachers experience that research knowledge can offer helpful insights into issues relevant to their teaching, they may (re)consider the relevance and usefulness of educational research in relation to their own professional practice (Eccles and Wigfield 2002). Even short-term utility-value interventions can be expected to have an impact on attitudinal variables (Hulleman et al. 2010; Rochnia and Gräsel 2022; Zeeb et al. 2019). As argued above, we believe that feedback research provides a particularly suitable example in this regard, as knowledge of effective feedback is directly applicable to teachers' everyday classroom practice. Moreover, the principles derived from feedback research can be used to analyse and improve existing practice (Kirkhart 2000; Weiss 1998). Even if research-based knowledge about feedback may conflict with students' prior assumptions, it is relatively easy to recognise its superiority to common simple feedback rules, a crucial condition for knowledge revision (Chi 2008). Therefore, as a second aim, we explored whether the designed intervention positively affects student teachers' attitudes towards educational research and its usefulness.

The present study
The present contribution describes the development and evaluation of an intervention that followed the discussed instructional principles to promote student teachers' knowledge of and ability to formulate effective feedback. As a secondary objective, the intervention aimed to improve student teachers' attitudes towards educational research by illustrating its usefulness. The intervention was embedded as a companion course to a compulsory half-year school internship in a master's teacher training programme. In this context, student teachers could learn about effective feedback and relate it to their internship. By means of a quasi-experimental evaluation design, we aimed to test whether the intervention increased participants' knowledge about effective feedback (Research Question 1a) and their ability to provide it to students (Research Question 1b). We assumed that participants would more often formulate feedback conducive to learning (i.e. refer more often to the different levels and feedback questions and make references to learning objectives). Furthermore, we expected the students to make less use of praise and feedback for the learner's self. We also looked at whether the intervention improved the participants' attitudes towards educational research findings (Research Question 2) and assumed it would do so.

Sample and study design
Overall, N = 141 student teachers (78.0% female; M age = 24.08 years, SD age = 1.70) participated in the study. All participants were in the last semester of their master's degree programme for teaching in primary or secondary schools. We implemented a quasi-experimental pre-post control group design. Overall, n = 63 participants were placed in the intervention group (74.6% female, M age = 24.46 years, SD age = 2.13) and n = 78 participants in the control group (80.8% female, M age = 23.77 years, SD age = 1.18). The control group participated in an alternative learning environment that did not offer any content on the topic of feedback during the same period as the intervention group. Dependent variables were (i) participants' knowledge about effective feedback, (ii) their ability to provide effective feedback and (iii) their attitudes towards educational research findings. Note that, for practical reasons, only participants in the intervention group could be asked to provide written feedback on a student's performance (i.e. our ability measure). Therefore, the design did not allow for a comparison of the growth in ability between the experimental groups. Nevertheless, the written feedback enabled us to analyse how participants implemented features of effective feedback and how this changed over the course of the intervention.

Intervention
The intervention was implemented as a four-week course that accompanied the internship (delivered online because of the COVID-19 pandemic). Each week, participants attended their assigned school for four days and the online course for one day. Each online session lasted about 90 minutes, and students received additional coursework to be done at home. Below is a detailed description of the course programme. Table 1 summarises the goals and tasks for each week.

Week 1
The aim of the first week was to help participants activate and structure their prior assumptions about effective feedback. They were tasked with writing a first draft of feedback on a student's performance and instructed to reflect on and explicate why they had formulated their feedback in this specific manner.
Following the idea of macro-structure focusing tasks (Britt and Sommer 2004), these guiding questions were intended to support participants in structuring their prior assumptions and integrating newly read content into their knowledge structures. As a result, participants were expected to be prepared to grasp the subsequent research-based materials on effective feedback in a more focused and efficient manner. This activity also served as a preparatory task for the integration prompts and the coactivation of prior (possibly false) assumptions and new knowledge in week 4.

Week 2
The aim of the second week was to build up and ensure a shared understanding of the educational research findings presented in two complementary educational research textbooks (Lipowsky 2015; Weckend, Schatz, and Zierer 2019).

Week 3
The third week aimed to encourage participants to use their new knowledge to provide effective feedback. We asked participants to write another feedback response and provide a rationale for it, as in week 1. This time, participants were instructed to draw on the research they had read to formulate their feedback.

Week 4
We designed the final session to help participants reflect on and integrate their prior assumptions with the new research knowledge. As can be seen in Table 1, we used three integration prompts (Lehmann, Rott, and Schmidt-Borcherding 2019) to help participants identify connections and conflicting information between their prior assumptions and the research knowledge concerning effective feedback. These prompts supported participants with correct prior assumptions in identifying overlaps and additional knowledge about effective feedback (Lehmann, Rott, and Schmidt-Borcherding 2019) and allowed participants with incorrect prior assumptions to directly contrast (coactivation, Kendeou et al. 2019) and update (integration) these assumptions with correct knowledge.

Knowledge about effective feedback
To measure the participants' knowledge about effective feedback, we constructed a test with 15 single- or multiple-choice items, including essential components of the model developed by Hattie and Timperley (2007). These items covered knowledge about the levels of feedback, feedback questions, role of praise and importance of learning objectives in feedback. Points were awarded for each correct answer (i.e. two points for a multiple-choice task with two correct answers), with a maximum possible score of 27 points. Though five items proved to be quite easy, we decided to retain them to ensure that the test covered all theoretical aspects of effective feedback. Readers should note that, similar to many other knowledge tests, our test cannot be considered to measure a homogeneous construct; instead, it was designed to represent the most important aspects of the knowledge domain of interest (cf. Stadler, Sailer, and Fischer 2021; Taber 2018). Because it has been highlighted that reliability indices, such as Cronbach's alpha, are inappropriate in such cases (Stadler, Sailer, and Fischer 2021), we do not report them here.

Ability to provide effective feedback
We used participants' written feedback from weeks 1 and 3 as a measure of their ability to provide effective feedback. As not every participant gave permission to analyse their written feedback, 42 feedback responses per time of measurement were analysed (84 documents overall). Following the model by Hattie and Timperley (2007), we developed a coding scheme (see Supplement) that contained four top-level categories: level of feedback, feedback question, learning goal focus and praise. The categories level of feedback and feedback question were separated into subcategories related to the different levels of feedback (task, process, self-regulatory and self) and the different feedback questions (feed back, feed up and feed forward), respectively.
When a feedback response entailed at least one mention of a respective category, we coded one; if there was no mention in the whole feedback text, we coded zero for this category. Three independent raters coded 30 of the 84 feedback responses (35.7%). In a first step, the coding segments were determined. Second, each rater assigned these segments to the categories described above. Third, the rules for when a segment should be assigned to each respective category were discussed and adjusted where necessary. The process resulted in an average chance-adjusted inter-rater agreement of κ = .78 (Brennan and Prediger 1981) between the three raters.
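The chance-adjusted agreement statistic used here can be illustrated with a short sketch. For presence/absence coding, the Brennan–Prediger coefficient fixes chance agreement at 1/q for q coding categories; with three raters, the pairwise coefficients are averaged. The function names and the rating data below are purely illustrative, not the study's actual analysis code or data:

```python
from itertools import combinations

def brennan_prediger_kappa(codes_a, codes_b, q=2):
    """Chance-adjusted agreement between two raters (Brennan & Prediger, 1981).
    Chance agreement is fixed at 1/q for q coding categories."""
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)
    return (p_o - 1 / q) / (1 - 1 / q)

def average_pairwise_kappa(ratings, q=2):
    """Average the Brennan-Prediger coefficient over all rater pairs."""
    pairs = list(combinations(ratings, 2))
    return sum(brennan_prediger_kappa(a, b, q) for a, b in pairs) / len(pairs)

# Hypothetical binary codes (1 = category present) from three raters
r1 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
r2 = [1, 0, 1, 1, 0, 1, 1, 1, 1, 0]
r3 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]

overall_kappa = average_pairwise_kappa([r1, r2, r3])
```

With two categories the formula reduces to κ = 2·p_o − 1, so an observed agreement proportion of .89 corresponds to roughly the κ = .78 reported above.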

Attitudes towards educational research findings
We measured attitudes towards the use of educational research findings at three levels, as proposed by Weiss (1998). We asked participants whether educational research findings help change, understand and justify teaching practices and pedagogical phenomena. Two scales with five items on changing and justifying teaching practices and pedagogical phenomena were adapted from Haberfellner (2016). Moreover, we constructed a scale for understanding teaching practices and pedagogical phenomena. All items were rated on a 6-point Likert scale from 1 (strongly disagree) to 6 (strongly agree). Table 2 shows the sample items and reliability values.

Knowledge about effective feedback and ability to provide it
Regarding research question 1a, Table 3 summarises the descriptive statistics for the test of knowledge about effective feedback. Descriptively, the intervention group scored higher than the control group. We tested this in a mixed repeated-measures ANOVA with the within-subjects factor time (pre-test vs. post-test) and the between-subjects factor intervention (intervention group vs. control group). Indeed, the statistically significant interaction, F(1, 139) = 49.81, p < .001, η² = .08, revealed that the intervention group gained more knowledge than the control group. A Cohen's d of 1.21 indicates that this is a large effect. We also observed a statistically significant main effect of time, F(1, 139) = 62.69, p < .001, η² = .10, which is almost entirely due to the knowledge gains in the intervention condition. The main effect of the intervention was non-significant, F(1, 139) = 2.48, p = .118, η² = .01.
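In a 2 × 2 pre-post design like this one, the time × group interaction amounts to comparing the two groups' gain scores, and the reported Cohen's d can be read as the standardised difference between mean gains. The following minimal sketch uses entirely hypothetical pre/post scores, not the study data:

```python
import statistics

def cohens_d(gains_a, gains_b):
    """Cohen's d for the difference in mean gain between two independent
    groups, standardised by the pooled standard deviation."""
    n_a, n_b = len(gains_a), len(gains_b)
    m_a, m_b = statistics.fmean(gains_a), statistics.fmean(gains_b)
    var_a, var_b = statistics.variance(gains_a), statistics.variance(gains_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (m_a - m_b) / pooled_sd

# Hypothetical pre/post knowledge scores (maximum 27 points)
intervention_pre  = [14, 12, 16, 13, 15]
intervention_post = [22, 19, 24, 20, 23]
control_pre       = [14, 13, 15, 12, 16]
control_post      = [15, 13, 16, 12, 17]

# Gain scores: the time x group interaction compares these two lists
gain_ig = [post - pre for pre, post in zip(intervention_pre, intervention_post)]
gain_cg = [post - pre for pre, post in zip(control_pre, control_post)]

effect_size = cohens_d(gain_ig, gain_cg)
```

A positive d indicates larger knowledge gains in the intervention group; values around 0.8 and above are conventionally considered large effects.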
To investigate learning gains in the ability to provide effective feedback in the intervention group (Research Question 1b), we analysed which characteristics of effective feedback participants considered in the written feedback activities in weeks 1 and 3. We expected an increase in the use of the different feedback levels and feedback questions and in the references to learning goals. The intervention was also expected to reduce feedback at the level of the self and the amount of praise. Table 4 shows the average relative frequencies as well as the results of dependent-sample t-tests.
We first analysed the extent to which participants considered the different levels of feedback. As Table 4 shows, on both occasions, participants mainly focused their feedback on the task and process levels. For example, 98% of the participants' feedback responses addressed the task level. The results also showed that the relative focus on the different levels remained relatively unchanged across the intervention. The descriptively apparent reduction in self-related feedback and the slight increase in mentions of self-regulation were non-significant.
In contrast, we found changes in how the participants utilised the feedback questions. There were statistically significant increases in the frequency of references to students' past performances (feed back) and of mentions of learning goals. There was also a slight, though non-significant, decrease in the use of general praise. Feed-up information comparing the student's current and intended performance was present in almost all written feedback on both occasions. Finally, feed-forward information was present in about a third of the feedback texts, and this proportion did not change over the course of the intervention.

Attitudes towards educational research knowledge
Table 5 shows the descriptive statistics as well as the results of mixed repeated-measures ANOVAs on the three scales measuring attitudes towards research use. Overall, the participants reported relatively positive attitudes, with mean values in the upper half of the answer scale. Across time, only very small differences occurred in these values.
With the exception of the effect of time on the scale of the usefulness of educational research knowledge in justifying teaching practices, none of the ANOVA test results were statistically significant.

Discussion
As feedback is one of the most influential factors in supporting learning processes (Hattie and Timperley 2007), it is important that student teachers learn what makes feedback effective. The present study evaluated an intervention that aimed to impart knowledge about feedback and to improve student teachers' attitudes towards educational research findings in general. In the following, we discuss our findings in relation to the two research questions and reflect on the limitations of the study.

Fostering knowledge about feedback and the ability to provide effective feedback (RQ 1)
The results for research question 1 showed, first, that knowledge of effective feedback increased significantly in the intervention group in contrast to the control group. Hence, educational research knowledge can be an important source for student teachers' learning about pedagogical phenomena during an internship, supplementing the practical knowledge of the supervising teachers (Hobson et al. 2009; Lofthouse 2018). Even though providing feedback is a commonplace pedagogical activity (Drake and Nelson 2021), the stability of the control group's declarative knowledge indicates that knowledge gains cannot necessarily be expected from internship activities alone. Rather, the results indicate that student teachers need a purposeful and research-based learning context to increase their knowledge of effective feedback. These results are also in line with our proposition that student teachers bring varying degrees of prior knowledge about effective feedback to the intervention. The relatively high baseline performance of both groups on the feedback test indicated that participants, either informally or through other courses, had already acquired some knowledge of effective feedback that aligned with the existing body of research. For example, almost all participants were aware that feedback should not primarily refer to learners' weaknesses and should be formulated as specifically as possible. However, we also found non-negligible use of self-related feedback and praise, which research has shown to be less effective than other feedback forms. This finding indicates that when teaching about feedback, considering participants' prior assumptions is crucial to facilitating the integration of new knowledge (Britt and Sommer 2004) and overcoming false prior assumptions (Kendeou et al. 2019). As this study shows, having participants explicate their prior knowledge and assumptions and compare them to research-based knowledge are helpful instructional strategies for this purpose.
Consistent with other research (van den Bergh, Ros, and Beijaard 2013), our analyses of the participants' written feedback suggested that they mainly addressed the task and process levels. Because both levels showed ceiling effects in the pre-test, there was little potential for further improvement through the intervention. Feedback related to self-regulatory strategies occurred relatively rarely and did not change over the course of the intervention. One reason could be that, to recognise and support self-regulation, teachers need specific pedagogical knowledge that the intervention did not address, for example knowledge of metacognition, self-regulation and direct self-regulation strategies (Dignath-van Ewijk and Van der Werf 2012; Geduld 2019). Thus, pre-service teachers would need additional information or dedicated training on self-regulation strategies (e.g., Dignath 2021) to generate ideas for feedback at the self-regulation level. Similarly, in terms of the feedback questions, the amount of feed-forward questions did not change significantly; however, pre-service teachers addressed the feedback perspective significantly more often. Feedback on students' progress may be easier to formulate than advice on how students can proceed with the next task, their learning process or their self-regulation, for which specific pedagogical and pedagogical content knowledge is necessary.
In addition, there was a clear focus on feed-up, with almost all pre-service teachers incorporating it in their feedback in the pre-test. In contrast, none of the Week-1 feedback responses contained any reference to learning goals, even though such feedback is crucial to directing students' attention to what they are expected to achieve. While our intervention effectively increased student teachers' awareness of the value of including references to learning goals, only about 20% of the feedback texts included such information. Overall, the intervention successfully enhanced students' ability to include important aspects of effective feedback but left substantial room for improvement. A longer-term intervention that repeats the cycles of writing feedback and reflecting on it from the perspective of feedback models might be more effective in this regard than the current intervention.
Regarding questionable prior assumptions, the intervention led only to a descriptive tendency towards reduced misconceptions, for example about the use of global praise and self-related feedback. This may seem surprising given that the intervention explicitly conveyed that such practices do not contribute to effective feedback (Brummelman, Crocker, and Bushman 2016). However, it is also in line with evidence from research on many topics showing that misconceptions can be hard to change, even when they are expressly debunked (Kendeou et al. 2019). In most cases, misconceptions cannot simply be replaced by new, correct knowledge (Chi 2008); they continue to exist and can be activated simultaneously with the new knowledge (Kendeou et al. 2019). For example, our participants apparently still drew on their own prior assumptions about the importance of praise when writing feedback (Dagenais et al. 2012; Schildkamp and Kuiper 2010). They may still have believed that including such information does no harm, even though it does not enhance feedback efficacy, and therefore adhered to the common 'rule' that feedback should contain positive aspects. Hence, it may be worthwhile to make more explicit that praise and self-related feedback can indeed harm the effectiveness of feedback, as they draw students' attention to aspects that are not conducive to improving their learning.

Fostering attitudes towards educational research knowledge (RQ 2)
Concerning research question 2, we assumed that a research-based feedback intervention could serve as a 'Trojan horse' approach to fostering student teachers' attitudes towards the usefulness of educational research for their classroom practice. As discussed above, research shows that it is notoriously difficult to increase teachers' research use and related attitudes (e.g. van Schaik et al. 2018), despite many calls to make teaching more evidence-based (e.g. Rousseau and Gunia 2016). On the basis of research on utility-value interventions (Hulleman and Harackiewicz 2021; Rosenzweig et al. 2019), we had hoped that offering student teachers a positive experience of using research-based knowledge to provide better feedback would demonstrate the usefulness of research and thus improve participants' attitudes towards it. However, this attempt clearly failed, as there were no effects on any of the investigated aspects of attitudes towards research use. One potential reason may be that the student teachers already held relatively positive attitudes towards educational research findings (cf. Thomm, Gold, et al. 2021), so the single experience of the intervention did not make a salient difference. Alternatively, student teachers may not have transferred the experience of the intervention to the usefulness of educational research findings in general, or they may have seen the topic of feedback as an exception. Even though the described approach did not work in the present study, we still assume that providing positive experiences of applying research knowledge to school-related problems can help foster student teachers' orientations towards educational research (cf. Hulleman et al. 2010; Rochnia and Gräsel 2022; Zeeb et al. 2019). Future studies might make such experiences more explicit and dedicate more time to reflective activities.

Limitations
Beyond the already-mentioned shortcomings, we acknowledge several limitations of our study. First, in the framework of Kendeou et al. (2019), competing activation is one of the necessary conditions for correcting false prior assumptions; it involves new knowledge being activated more frequently in future situations than prior incorrect assumptions. Unfortunately, it was not possible in this study to re-examine the participants in a follow-up session. Thus, it remains unclear whether the learned content led to a longer-term correction of false prior assumptions. It would be interesting for future studies to determine the lasting impact of knowledge revision in an internship context.
Second, despite the important role of supervising teachers in student teachers' internship experiences (Hobson et al. 2009; Lofthouse 2018), we could not take their assumptions concerning effective feedback into account. In the intervention, we only contrasted individual prior assumptions with educational research findings. It would be interesting to examine the extent to which the change in student teachers' prior assumptions is moderated by the assumptions of supervising teachers, as feedback is an everyday practice for teachers and therefore likely to be a topic of discussion during an internship.
Third, the ability to provide effective feedback could only be measured in the intervention group. It is possible that the changes in ability would also have occurred in the control group. However, given that the control group did not demonstrate any changes in knowledge about feedback, we consider this possibility unlikely.
Despite these limitations, we are convinced that the present study contributes to a better understanding of how intervention designs can foster (student) teachers' knowledge and ability to provide effective feedback to their students (Ha and Murray 2021; Knochel et al. 2022; Mrachko, Kostewicz, and Martin 2017). In particular, we believe that knowledge integration and revision approaches (Kendeou et al. 2019) offer a theoretical perspective that could be incorporated more systematically into this field of enquiry.

Disclosure statement
No potential conflict of interest was reported by the authors.
Bernadette Gold is professor of school pedagogy and general didactics for primary and lower secondary school at TU Dortmund University, Germany. Her main research interests are teachers' pedagogical-psychological competencies and how they can be promoted in teacher education through the use of classroom videos. Furthermore, her research focuses on student-teacher interactions and research-based teacher education.

Table 1. Overview of goals and tasks for each week. Among the listed tasks: explicitly contrasting and updating prior assumptions with reliable research knowledge based on both feedback drafts (week 1 vs. week 2).

Table 2. Sample items and reliability values of the attitudes towards educational research findings.

Table 3. Descriptive statistics of the feedback test. Note: the maximum number of achievable points was 27.

Table 4. Characteristics of the written feedback.

Table 5. Means, standard deviations, and mixed repeated-measures ANOVAs on the attitudes towards educational research knowledge.