Measuring what learners do in feedback: the feedback literacy behaviour scale

Abstract Feedback can be powerful, but its effects are dependent on what students do. There has been intensive research in recent years under the banner of ‘feedback literacy’ to understand how to help students make the most of feedback. Although there are instruments to measure feedback literacy, they largely measure perceptions and orientations rather than what learners actually do. This paper documents the development and validation of the Feedback Literacy Behaviour Scale (FLBS), a self-report instrument intended to measure students’ feedback behaviours. A framework for feedback literacy was constructed with five factors: Seek Feedback information (SF), Make Sense of information (MS), Use Feedback information (UF), Provide Feedback information (PF), and Manage Affect (MA). An initial set of 45 questions was reviewed in an iterative process by feedback experts, resulting in 39 questions that were trialled with 350 student participants from four countries. Our final survey of 24 questions was generally supported by confirmatory factor and Rasch analyses, and has acceptable test-retest reliability. The FLBS provides a more robust way for educators and researchers to capture behavioural indicators of feedback literacy and the impact of interventions to improve it.

When feedback works well it has one of the most powerful effects in education (Hattie and Timperley 2007; Wisniewski, Zierer, and Hattie 2019). However, feedback often falls short of its potential, and as a result is one of the most dissatisfying and problematic aspects of education (Boud and Molloy 2013). A common frustration for educators is that the comments they make on student work often go unread or do not lead to change (Winstone, Nash, Rowntree, et al. 2017). In response to the seeming intractability of the problems of feedback, researchers have sought to shift the way we think about feedback, supplementing a focus on the messages teachers provide to students with consideration of how students actively engage in feedback (Carless and Boud 2018). In this view, students are positioned as the primary actors in their own personal feedback processes, making use of inputs from a variety of sources.
One challenge that comes with student-centred feedback models is that they demand much more from students than traditional approaches to feedback. ‘New paradigm’ (Winstone and Carless 2019) models require students to seek feedback information, actively process it, make quality judgements about which inputs will be most useful, and take action. This asks students to be far more active than teacher-centred models of feedback, in which students are often positioned as passive recipients of information.
Feedback literacy has been proposed to conceptualise the capabilities that students need to engage in feedback. The notion has gained significant traction in the higher education literature thanks to conceptual work by Carless and Boud (2018) and Molloy, Boud, and Henderson (2020). Although there are differences between various authors' models, all at their core focus on understanding what students need to do to benefit from feedback processes. Feedback literacy is thought by its proponents to be a worthwhile outcome in and of itself, as well as something that may support students in engaging in feedback and, ultimately, in improving their learning.
Research on feedback literacy is currently blossoming with empirical (e.g. Noble et al. 2019; Winstone, Mathlin, and Nash 2019; Hoo, Deneen, and Boud 2022) and conceptual (e.g. Gravett 2022; Chong 2021) work. To enable quantitative research into interventions to improve feedback literacy, there is a need for instruments to measure it. Currently six such instruments have undergone a formal validation process: the Student Assessment-Based Feedback Literacy (SAFL) scale (Liao 2021), the Scale of Student Feedback Literacy (SSFL) (Zhan 2022), the Feedback Literacy Scale (Song-FLS) (Song 2022), the L2 Student Writing Feedback Literacy Scale (L2-SWFLS) (Yu, Di Zhang, and Liu 2022), the Feedback Literacy Scale (Yildiz-FLS) (Yildiz, Bozpolat, and Hazar 2022), and the Peer Feedback Literacy Scale (PFLS) (Dong, Gao, and Schunn 2023). Characteristics of these scales are summarised in Table 1. We have not included Winstone, Mathlin, and Nash's (2019) study on their Developing Engagement With Feedback Toolkit, as their instrument is explicitly disclaimed as "simplistic and exploratory" (p. 3), and they state that their aim "was not to develop and validate such a measure of feedback literacy" (p. 8).
The focus across these six scales is mainly students' beliefs about feedback and their own evaluation of their capabilities to engage in feedback processes. Each scale offers little or no consideration of what students do in feedback processes. However, dominant feedback literacy frameworks emphasise that feedback is an active process, and feedback literacy involves "taking action" (Carless and Boud 2018) and eliciting, processing, enacting and providing feedback information (Molloy, Boud, and Henderson 2020). This suggests that instruments to measure feedback literacy should focus on what students do, not just what they think they should or can do. A similar approach is taken for related concepts, such as measuring self-regulated learning with the Motivated Strategies for Learning Questionnaire (MSLQ) (Pintrich et al. 1993), which mostly consists of items about what students do. Just as with self-regulated learning, the utility of feedback literacy depends on its relation to what students do. Feedback is an active process, and if instruments are to measure feedback literacy as enacted, rather than just students' beliefs and attitudes, then they need to ask students about what they do in feedback processes. The six published instruments do not do this in any great depth.
Although current understandings of feedback literacy in higher education have largely been prompted by Carless and Boud's (2018) paper, there is a long history of research on similar concepts in education, psychology and business, and it is worth exploring instruments to measure those concepts. The extensive work on employees' attitudes towards feedback (Anseel et al. 2015) has some overlap with feedback literacy (Joughin et al. 2021), and has included the development and validation of scales to measure various concepts. One example is the Feedback Orientation Scale (Linderbaum and Levy 2010), which measures receptivity to feedback. While this has been validated and is used within business research contexts, its workplace contextualisation means that it misses some aspects of feedback relevant to higher education and contains language and concepts that are not directly applicable (e.g. "Feedback from supervisors can help me advance in a company"). In addition, it is founded on a fundamentally different set of understandings about the problems of feedback. Although the scale is intended to study the agentic practices of employees around feedback information provided to them, it positions feedback as something that is given or done to someone, rather than as an active process driven by the learner. This positioning of feedback in surveys for use in education has been criticised by Winstone, Ajjawi, et al. (2022), and it runs counter to an overall shift in the higher education literature towards viewing feedback as a process rather than an input (Winstone, Boud, et al. 2022). Other scales from this domain (e.g. VandeWalle et al. 2000; Krasman 2010) similarly lack full coverage of feedback literacy, are a poor match for the context of higher education, and/or do not position feedback in ways compatible with feedback literacy.
There are also a variety of instruments in educational psychology that measure related concepts. For example, Lipnevich and Lopera-Oquendo (2022) developed the Receptivity to Feedback Scale, which comprises experiential and instrumental attitudes towards feedback, and cognitive and behavioural engagement with feedback. Although related, the ideas of receptivity to feedback and feedback literacy have very different conceptual bases. Similarly, the Student Conceptions of Feedback instrument, adapted from school education to higher education by Brown, Peterson, and Yao (2016), focuses on students' conceptions of feedback. As discussed above, our understanding of feedback literacy differs from both receptivity to feedback and conceptions of feedback in that we are more interested in behaviours than attitudes.
In the absence of a robust scale specifically focused on feedback literacy behaviours, we set out to develop one.

Conceptual framework
We based our conceptualisation of feedback literacy on existing frameworks from the literature that were consistent with an understanding of feedback literacy as behaviour. At the time of developing our instrument, there were two such frameworks. The first of these is Carless and Boud's (2018) conceptual paper, which was the most-cited paper on feedback literacy at the time of writing (>1,000 citations on Google Scholar). This framework is particularly useful because its four components (appreciating feedback processes; making judgements; managing affect; taking action) suggest behaviours. The second existing framework we used was Molloy, Boud, and Henderson's (2020) empirically-based framework, which goes into greater depth about the specific behaviours students are expected to do to be considered feedback literate. The most significant addition in Molloy, Boud, and Henderson's framework that is not made explicit in Carless and Boud's framework is the incorporation of learners providing feedback information as part of feedback literacy; this category acknowledges feedback as a reciprocal process.
We created a new framework, rather than use an existing framework, as neither was framed entirely in behavioural terms, and neither on its own was as comprehensive as we required. Although both are framed in active terms, and described largely in terms of what students do, they occasionally use verbs like "acknowledges" and "appreciates" which require reframing towards a behavioural focus. To develop our framework from these two existing frameworks we followed an iterative process. Firstly, we discussed these and other existing frameworks amongst the project team to identify ways to build a comprehensive behavioural framework for feedback literacy. Then we developed potential frameworks, which were workshopped with the team over three successive iterations. We checked the conceptual frameworks with a critical friend, a senior scholar who has published in the field of feedback literacy, and we also checked our framework with an author of each of the frameworks we were building on. The connection between our framework and the two frameworks we built on is detailed in Figure 1.
Our final conceptual framework consists of five components, with the following definitions:
• Seek feedback information (SF): eliciting feedback information from a variety of sources, including one's own notions of quality and examples of good work;
• Make sense of information (MS): processing, evaluating, and interpreting feedback information;
• Use feedback information (UF): putting feedback information into action to improve the quality of current and/or future work;
• Provide feedback information (PF): considering the work of others and making comments about its quality; and
• Manage affect (MA): persisting in feedback processes despite the emotional challenges they pose.
These five components represent our reworking of the two frameworks into a set of broad behaviours that denote the enactment of feedback literacy.

Instrument development
The instrument development was informed by Hinkin's (1998) guidelines. After our conceptual framework was established, an initial set of 45 items was developed by the project team. Consistent with our behavioural focus, we sought to represent each component in a series of items describing behaviours that feedback literate students would undertake in feedback processes. We considered adapting items from other inventories but could not find a substantial set that focused on behaviour. The initial set of 45 questions was split over the five components, and was prefaced with "please think about what you usually do in your studies, and rate how often you do these things". The set was workshopped by the project team over three iterations, in a mix of online meetings and asynchronous exchanges, before being sent for review to 21 scholars regarded by the authors as international feedback experts (most of whom are named in the acknowledgements of this paper). Fifteen of them rated each question on how essential it was for measuring feedback literacy: essential, useful but not essential, or unnecessary. Experts also commented on any wording issues with questions. Questions that at least 11 experts rated as essential were kept, and others were debated among the project team in terms of how essential they were.
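The retention rule above can be made concrete. The following is a minimal sketch (our illustration, not code the authors ran) of the 11-of-15 threshold, alongside Lawshe's content validity ratio, a standard index for this kind of expert screening:

```python
def retain_item(essential_votes: int, threshold: int = 11) -> bool:
    """Retention rule described above: keep an item when at least
    `threshold` of the 15 raters marked it essential."""
    return essential_votes >= threshold

def lawshe_cvr(essential_votes: int, n_experts: int) -> float:
    """Lawshe's content validity ratio (CVR): (n_e - N/2) / (N/2).
    Shown for comparison only; the authors report using the simple
    vote threshold, not CVR."""
    half = n_experts / 2
    return (essential_votes - half) / half

print(retain_item(11))               # True: meets the 11-of-15 threshold
print(round(lawshe_cvr(11, 15), 2))  # 0.47
```

An item endorsed by 11 of 15 experts corresponds to a CVR of about 0.47; the two rules rank items identically because both are monotone in the number of essential votes.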
All items in the provide feedback information category were rated as not essential by most experts. Analysis of their comments suggested this may have resulted from diverging understandings of feedback literacy between our conceptual framework and the views of the experts. Our conceptual framework regards learners having the capability to provide feedback information as an essential component of feedback literacy, and this is consistent with the literature; for example, Molloy, Boud, and Henderson (2020) include "Composes useful information for others about the nature of their work" in their list of what feedback literate students do. Three existing scales, the SAFL (Liao 2021), the L2-SWFLS (Yu, Di Zhang, and Liu 2022) and the PFLS (Dong, Gao, and Schunn 2023), also include the provision of feedback information as part of feedback literacy. Removing the provision of feedback information by learners from our instrument, or separating it into another instrument, would (a) make our instrument inconsistent with the literature, and (b) represent a missed opportunity to explore relationships between the provision of feedback information and other dimensions of feedback literacy. Because of this, we opted to amend and keep some questions about student provision of feedback information and expose the full set of questions to a second round of review.
This revised set of 39 questions was reviewed by eight experts who again rated every question and provided comments. Five items were dropped based on low scores, and modifications were made to seven other items at this stage, resulting in the instrument of 34 items that was used in this study. The project team, which consists of six feedback researchers from education and psychology, debated items in terms of where they fit in the conceptual framework. A six-point Likert-type frequency response scale (1 = never, 2 = almost never, 3 = rarely, 4 = sometimes, 5 = almost always, 6 = always) was used for all items. A neutral point was excluded to prevent problematic response tendencies (e.g. satisficing and ambivalent responding) (Johns 2005; Moors 2008). All survey items were mandatory. Messick's (1995) framework of validity was adopted to guide the validation. Five of the six aspects of validity, i.e. the content, substantive, structural, generalizability and external aspects, were evaluated in this study. Content validity evidence was captured by expert review, as discussed earlier. Substantive validity evidence was ensured by the correspondence between items and the underlying conceptual framework (see Figure 1), and by item-level statistics from Rasch analysis, such as the item fit statistics. Confirmatory factor analysis (CFA) results offered evidence of structural validity, whereas evidence of generalizability could be partially inferred from the results of differential item functioning (DIF) analysis across gender in a Rasch analysis.

The framework for validation
To partially understand external validity, correlations between subscales of the Big Five Personality Inventory-Short Form (BFI-S) and the FLBS were evaluated. A number of studies have shown that broad personality dimensions of the Big Five personality inventory (extraversion, agreeableness, conscientiousness, emotional stability and openness) often subsume newly introduced concepts (MacCann et al. 2012; Walton et al. 2023). Hence, our goal was to examine links between the subscales of the FLBS and the Big Five personality factors to ensure construct discrimination and the presence of theoretically meaningful correlations. Prior studies of feedback receptivity showed its links to conscientiousness and openness, with coefficients ranging from 0.164 < r < 0.362 (see Lipnevich et al. 2021). Personality manifests itself through behaviours; thus, exploring links of the Big Five with factors of the FLBS was critical for scale validation (Briley, Domiteaux, and Tucker-Drob 2014). Consequential validity could not be reported as we did not have relevant criterion data.

Data collection
Participants completed the survey online using Qualtrics. All participants (T1) were invited to complete a follow-up survey (T2) with the same questionnaire four weeks later, and 322 participants (92%) did. Participants were paid £3.15 for each survey, which was calculated based on the estimated length of time to complete the survey multiplied by the minimum wage in Australia, where the study was based. The data of T1 and T2 were matched through unique identifiers on Prolific Academic. This study was approved by the relevant ethics committee.
In addition to completing the FLBS, participants also completed the Big Five Personality Inventory-Short Form (BFI-S), which has 20 items and measures five dimensions of personality (Goldberg 1993). Responses to each item ranged from 1 = strongly disagree to 5 = strongly agree. Previous studies have supported the reliability and validity of the BFI-S with university students (Donnellan et al. 2006; Trapmann et al. 2007).

Data analysis
The psychometric properties of the FLBS were examined via both confirmatory factor analysis (CFA) and Rasch analysis. We did not conduct exploratory factor analysis because our scale development was guided by a strong theory. With CFA we explicitly tested a hypothesised model, as discussed earlier, on an empirical data set (Gorsuch 1983). Rasch analysis assesses the quality of a scale by examining the extent to which items in the scale are reflective of a single underlying latent construct. The 'data fit the model' approach adopted in Rasch analysis requires that the collected data meet a priori requirements essential for fundamental measurement purposes (Andrich 2004; Bond, Yan, and Heene 2020). While CFA tests the overall fit between the empirical data and the hypothesised model, Rasch analysis (Rasch 1960) examines the psychometric properties at the item level and, therefore, provides additional evidence regarding the scale quality. Several empirical studies (e.g. Deneen et al. 2013; Testa et al. 2019; Yan et al. 2020) have demonstrated that using both approaches can provide a comprehensive and robust evaluation of the quality of an instrument. Rasch analysis provides calibrated scores that account for differences in item difficulty. In this paper, when we report comparisons between an individual's pre-test and post-test performance, and comparisons between the FLBS and the BFI-S, we use these Rasch-calibrated scores. As Rasch analysis is relatively technical and lengthy to describe, and it did not result in the exclusion of any items or changes to our factor structure, we have placed it in supplementary online material.
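For readers unfamiliar with Rasch measurement, the core of the model is a simple logistic relation between person ability and item difficulty. Below is a minimal sketch of the dichotomous form; the six-point FLBS items would use a polytomous extension such as Andrich's rating scale model, so this is an illustration of the idea, not the study's analysis code:

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Dichotomous Rasch model: probability that a person with ability
    `theta` (in logits) endorses an item with difficulty `b` (in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty the probability is exactly 0.5;
# higher ability relative to difficulty pushes it towards 1.
print(rasch_probability(1.2, 1.2))        # 0.5
print(rasch_probability(2.0, 0.0) > 0.5)  # True
```

Because the model places persons and items on a common logit scale, the resulting calibrated scores account for item difficulty, which is why they are used for the comparisons reported in this paper.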
CFA was conducted using AMOS 24.0 (Arbuckle 2015). Multiple fit indices were checked, including the goodness of fit index (GFI), the comparative fit index (CFI), the Tucker-Lewis index (TLI), the standardised root mean square residual (SRMR), and the root mean square error of approximation (RMSEA). GFI, CFI, and TLI values > 0.90, and RMSEA and SRMR values < 0.08 (Hu and Bentler 1999; McDonald and Ho 2002), indicate an acceptable model-data fit.
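As an aside on how these indices relate, RMSEA can be recovered from the χ²/df ratio and the sample size via RMSEA = sqrt(max(0, χ²/df − 1) / (N − 1)). A small sketch (our illustration; AMOS computes this internally):

```python
import math

def rmsea_from_ratio(chi2_over_df: float, n: int) -> float:
    """RMSEA from the chi-square/df ratio and sample size N:
    sqrt(max(0, chi2/df - 1) / (N - 1))."""
    return math.sqrt(max(0.0, chi2_over_df - 1.0) / (n - 1))

# The final model in the Results (chi2/df = 1.896, N = 350) gives ~0.051,
# consistent with the RMSEA reported there.
print(round(rmsea_from_ratio(1.896, 350), 3))  # 0.051
```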
Cronbach's alpha (α; Cronbach 1951) and the omega coefficient (ω; McDonald 1999) were computed to evaluate the internal consistency of the scale.
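Both coefficients are straightforward to compute from the standard formulas. A minimal sketch, assuming a respondents-by-items score matrix (our illustration, not the study's code):

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

def mcdonald_omega(loadings, error_variances) -> float:
    """McDonald's omega from standardised factor loadings (lambda) and
    error variances (theta): (sum lambda)^2 / ((sum lambda)^2 + sum theta)."""
    lam = float(np.sum(loadings))
    return lam ** 2 / (lam ** 2 + float(np.sum(error_variances)))
```

For perfectly parallel items (every respondent gives identical answers across items) alpha is 1.0; omega uses the CFA loadings directly, which is why the two can diverge when loadings are unequal.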

Results
We ran a theory-driven CFA with empirically driven modifications (Mueller and Hancock 2001) with the original five factors: Seek Feedback information (SF), Make Sense of information (MS), Use Feedback information (UF), Provide Feedback information (PF), and Manage Affect (MA). The value of the Kaiser-Meyer-Olkin measure of sampling adequacy was 0.907 and Bartlett's test of sphericity was χ²(561) = 4059.532, p < 0.001, indicating that the sample and inter-item correlations were appropriate for factor analysis. The initial results showed that some of the model-data fit statistics were not satisfactory: χ²/df = 2.401; GFI = 0.814; CFI = 0.801; TLI = 0.784; RMSEA = 0.063; SRMR = 0.068. Examination of the factor loadings and modification indices revealed that two items had very low factor loadings on the target factor (PF1: When I comment on someone else's work, I consider the emotional impact of my comments; MS6: When making sense of comments I try not to focus on the time and effort I put into the work), while some items cross-loaded on different factors (e.g. MA7: I am comfortable commenting on other people's work). Following Saris, Satorra, and van der Veld's (2009) suggestion, we examined the size of the modification indices and the power of the test for all misfitting items. The results showed misspecification, as all modification indices were significant and the power of the test was low. We then removed all questionable items.
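The Bartlett statistic reported above tests whether the inter-item correlation matrix differs from the identity, i.e. whether there is any common structure to factor-analyse. A sketch of the standard computation (our illustration, not the study's code):

```python
import math
import numpy as np

def bartlett_sphericity(scores):
    """Bartlett's test of sphericity for an (n_respondents, n_items)
    score matrix. Returns the chi-square statistic and its degrees of
    freedom; large values reject the hypothesis that the items are
    uncorrelated (identity correlation matrix)."""
    x = np.asarray(scores, dtype=float)
    n, p = x.shape
    corr = np.corrcoef(x, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * math.log(np.linalg.det(corr))
    df = p * (p - 1) // 2
    return stat, df
```

With 34 items, df = 34 × 33 / 2 = 561, matching the χ²(561) reported above.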
A final model was arrived at with five items for each of the SF, UF, PF and MA subscales, and four items for the MS subscale. The model-data fit was acceptable: χ²/df = 1.896; GFI = 0.901; CFI = 0.911; TLI = 0.898; RMSEA = 0.051; SRMR = 0.053. Figure 2 displays the standardised factor loadings and correlations among the latent traits. The correlation between SF and UF was high (0.94), statistically indicating the need to merge these two factors. However, we chose to keep these factors separate for both conceptual and pragmatic reasons. Firstly, feedback information (which may be obtained through actively seeking feedback or just being provided the information) is received before it is made sense of (MS) and used (UF). Combining SF and UF would mean merging the first and third steps, but not the second, creating conceptually awkward factors. Secondly, we were concerned not to misrepresent student agency in feedback, and to distinguish situations in which students do not elicit information but are still expected to act on teacher comments. Thirdly, we wanted to retain the ability of users of the instrument to target groups of students in need of support with respect to one factor but not the other, recognising that the practical strategies used to seek feedback are different to those used to enact it.
The results of the Rasch analysis were satisfactory: there were no significantly misfitting items in the final scale, no disordered thresholds in the six-point rating scale, and no substantial DIF across gender. The detailed Rasch analysis results are placed in supplementary online material. The Cronbach's alpha coefficients for subscales SF, MS, UF, PF and MA were 0.66, 0.64, 0.76, 0.69 and 0.81 respectively. As Cronbach's alpha represents a lower bound to the reliability and in many cases underestimates the true reliability (Sijtsma 2009), these results appear acceptable, although there is room for further improvement. The omega coefficients for the five subscales were 0.67, 0.64, 0.76, 0.69 and 0.81, which were quite similar to the Cronbach's alpha coefficients.
When a subsample of the participants (N = 322, 92%) completed the FLBS four weeks after the main survey, the Cronbach's alphas and omega coefficients for subscale scores were similar (see Table 2), indicating that the measurement error is acceptable (Nunnally and Bernstein 1994). The test-retest correlations of participants' responses for all subscales were positive and significant (ranging from 0.56 to 0.71), indicating fair to good test-retest reliability according to Cicchetti's (1994) guidelines. The moderate stability of students' feedback literacy over time was in line with our expectations because, on the one hand, feedback literacy represents a long-term capacity which may remain stable and, on the other hand, the FLBS focuses on the behavioural elements of feedback literacy that are susceptible to change through learning, practice and context.
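The test-retest coefficients here are Pearson correlations between matched T1 and T2 subscale scores. A minimal sketch (our illustration; any statistics package computes the same quantity):

```python
import math

def pearson_r(x, y) -> float:
    """Pearson correlation between two equal-length score lists,
    e.g. a subscale's T1 and T2 scores for the matched participants."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly stable scores give r close to 1; a full reversal gives r close to -1.
```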
To test the external validity of the FLBS, Pearson correlations between the Rasch-calibrated person measures of the five FLBS subscales and the five BFI-S subscales were calculated; these are reported in Table 3. The Cronbach's alphas of the five BFI-S subscales (extraversion, agreeableness, conscientiousness, emotional stability and openness) were 0.85, 0.78, 0.71, 0.77 and 0.70 respectively.

Discussion
Our results provide initial validity evidence for the Feedback Literacy Behaviour Scale as a measure of students' self-reported enactment of their feedback literacy. The five-factor model was generally supported by both the factor analysis and the Rasch analysis, and the instrument has an acceptable level of test-retest reliability. This represents an important first step towards more robust quantitative research on feedback literacy behaviours. Researchers, educators and learners wishing to measure feedback literacy behaviours can access the instrument in Appendix A of this paper. One issue requiring further exploration is whether the capability to provide feedback is a necessary component of feedback literacy. Our conceptual framework, informed by Carless and Boud (2018) and Molloy, Boud, and Henderson (2020), suggests that, at its core, feedback is a reciprocal act where learners are active participants who also generate feedback information. In such an understanding of feedback, being feedback literate requires the capability to provide information to others. However, given that our expert review of the initial scale suggested that our experts did not all agree, and the inconsistency among existing scales on this matter (Liao 2021; Zhan 2022; Song 2022; Yildiz, Bozpolat, and Hazar 2022; Yu, Di Zhang, and Liu 2022; Dong, Gao, and Schunn 2023), there is a need for further work in this space. Could it be that this is a value position that assumes that learners must be equipped to help others as well as themselves, or is it a matter that can be separated out in more teacher-centred pedagogic contexts? The reasonably high correlations between provide feedback (PF) and the other factors provide some initial evidence to support our view that feedback literacy incorporates the provision of feedback information.
Another conceptual matter raised by our development and validation process is the close correlation between seek feedback (SF) and use feedback (UF). We opted to keep these factors separate for reasons discussed earlier, but their closeness suggests something that is perhaps obvious but still conceptually useful: students who seek feedback are likely to also use feedback. However, these factors remain conceptually distinct in our model, supported by the evidence of different patterns of correlations among the FLBS subscales and the Big Five dimensions.
Overall, links between personality and the FLBS were in the expected direction (Lipnevich et al. 2021), thus providing additional validity evidence and suggesting the construct's differentiation from the Big Five personality factors. Conscientiousness and openness were the strongest predictors of behavioural indicators of students' engagement with feedback, revealing significant correlations with four out of five FLBS subscales. This suggests that students who tend to be disciplined and achievement-focused (high on conscientiousness) as well as open to new information (high on openness) would tend to be higher on feedback literacy. Agreeableness yielded significant links with SF, UF and PF, showing that students who were more cooperative and trusting were more likely to seek, use and provide feedback. Of the five FLBS factors, only students' management of affect (MA) was significantly and positively related to emotional stability. This correlation was of the highest magnitude among all observed links (r = 0.311, p < 0.01), yet it is weak enough to show evidence of factor differentiation.
Of particular interest are the links between our highly correlated factors of SF and UF and the Big Five factors. They show that students' willingness to use feedback (UF) has a stronger relation with their tendency to be trusting (agreeableness) than with their tendency to be disciplined (conscientiousness). The reverse is true for feedback seeking (SF): there was a higher correlation between conscientiousness and SF than between agreeableness and SF. Feedback use (UF) was the only factor unrelated to openness, suggesting that one's proclivity to be intellectually curious does not matter as much in one's use of feedback; this is not the case for feedback seeking (SF), for which intellectual curiosity apparently does matter. These differential relations support our separation of the UF and SF factors, although further research is needed.
In addition to its focus on behaviours, the FLBS differs from existing instruments (e.g. Dong, Gao, and Schunn 2023) in that it does not use language specific to the context of education. This means the FLBS may have utility beyond education, and it may enable longitudinal tracking of feedback literacy behaviours from late high school, through university, and into graduate employment settings. However, this would require further validation of the instrument in those contexts. The literature on feedback literacy interventions is expanding (Little et al. 2023), including approaches such as self and peer assessment (Hoo, Deneen, and Boud 2022), mixtures of online and face-to-face workshops (Noble et al. 2019), and freely available toolkits (Winstone, Mathlin, and Nash 2019). Previous work reviewing students' proactive recipience of feedback found that meta-analysis was not possible, partly due to methodological inconsistency (Winstone, Nash, Parker, et al. 2017). The FLBS may help avoid this problem in future feedback literacy research by providing a common measure specifically focused on behaviour. The FLBS could facilitate not only studies that more reliably measure students' feedback literacy behaviours before and after an intervention, but also comparison of those results with other studies that use the FLBS. It may also prove a useful tool for longitudinal studies of feedback literacy behaviours.
Researchers and educators may wish to consider the five-factor model of feedback literacy underpinning the FLBS when designing interventions to improve feedback literacy behaviours. It may be that targeted development of particular aspects of feedback literacy is needed. Administering the FLBS would help identify cohorts that need help with, for example, managing affect, but not with making sense of feedback.

Limitations and future directions
There are potential limitations to the use of our instrument. The first is that it relies on students to self-report what they do. Further mixed-methods work is required to understand how accurate these self-reports are. As our scale focuses on feedback literacy behaviours, such work would include observations of student feedback processes. One weakness regarding the psychometric properties of the scale is the relatively low Cronbach's alphas and omega coefficients for subscales SF and MS. If this level of reliability is replicated in other samples, future studies may consider further developing the scale, such as by adding items representing other relevant feedback practices to these two subscales, with the aim of increasing reliability. Another major challenge is that the degree to which feedback literacy is context-dependent is yet to be established, so we do not know how transferable the results of the FLBS are across contexts. Future work might involve validating the instrument within different disciplines, age groups and countries, and examining how well the self-reported behaviours in the instrument map to observable behaviours in practice.

Conclusion
Feedback literacy helps students engage with the challenges and opportunities that feedback offers.The Feedback Literacy Behaviour Scale (FLBS) provides a way to capture the current state of students' feedback literacy behaviours, and to measure changes over time.It is intended to be useful for researchers seeking to study the effects of interventions to improve feedback literacy, as well as for everyday educators and students.As a measure of feedback literacy behaviours rather than attitudes or conceptions, the FLBS is targeted at what students do in feedback processes -which, according to recent understandings of feedback, is what matters most.

Figure 1 .
Figure 1. Conceptual framework mapped against components of Molloy, Boud, and Henderson's framework and Carless and Boud's framework.

Figure 2 .
Figure 2. The five-factor FLBS CFA model. SF: Seek Feedback information; MS: Make Sense of information; UF: Use Feedback information; PF: Provide Feedback information; MA: Manage Affect.

Table 1.
Table 1. Instruments to measure feedback literacy.