Making assessment accessible: a student–staff partnership perspective

Student assessment literacy and staff assessment practices can be enhanced through constructive dialogue, designed to help build better shared understandings, and in which both students and staff can meaningfully contribute. Such a dialogue has great potential to increase student engagement with their own learning. Focusing on a UK university law school's staff–student partnership initiative called Getting It Right: Assessment and Feedback in Translation (GRAFT), aimed at improving assessment feedback practices, this paper demonstrates how students and staff working as partners in this context can make a major contribution to assessment literacy and student engagement.

ARTICLE HISTORY Received 4 May 2018; Revised 6 August 2019; Accepted 14 November 2019

The article opens with a discussion of the theoretical bases for the project's aims and design, starting with an introduction to academic and assessment literacies.

Academic and assessment literacies
It is now twenty years since Lea and Street (1998) published their influential and widely cited critique of the notion that student learning in higher education (HE) could be understood as involving the acquisition of generic, 'atomised' and socially neutral 'study skills'. In place of this reductive and simplifying 'skills' discourse, the authors proposed that we conceptualise student learning and writing in HE through an 'academic literacies' lens. Academic literacy practices, they explained, encompass those approaches to reading, writing, articulating knowledge, etc., that make possible students' engagement with the distinctive epistemological values and knowledge production practices they encounter in their respective disciplines. Such practices served to signal the extent to which students were judged to be 'successfully' participating in the scholarship and discourse of their field, as witnessed, for example, in how far written assignments were recognised as having conformed with discipline-specific norms, conventions and expectations. In the period since this article's publication, an extensive body of research has sought to develop, deepen and expand on this understanding of student learning, and to consider its practical, as well as its political and ethical, implications where teaching, assessment and curriculum design are concerned (see, for example, Gourlay, 2009; Lillis, Harrington, Lea, & Mitchell, 2015; Wingate, 2006).
Two important and interconnected elements of this academic literacies scholarship have been: (a) its emphasis on the often esoteric codes of behaviour that novice students are required somehow to know and be able to practise (codes which Lillis has referred to as instances of the 'institutional practice of mystery' (2001: 58)); and (b) its focus on the limitations of instructional and explanatory approaches to aiding students' development of this knowledge and practice. Where the latter is concerned, Lillis and Turner (2001) have pointed out that often well-intentioned efforts to clarify and explain standards and expectations, in the form of written guidance materials, tend to work with somewhat naive beliefs about language:

[M]uch advice . . . not only uses wordings to denote conventions as if they were transparently meaningful but works with a metaphor of language itself as ideally transparent. Consider, for example, the following exhortations: state clearly, spell it out, be explicit, express your ideas clearly, say exactly what you mean [original emphasis]. If we explore further just one of these exhortations we can see that, whilst working with a notion of language as ideally transparent, such wordings are anything but transparent and indeed mean different things across different contexts. (58)
The language we regularly use to try to elucidate our expectations to students, in other words, is far from being a clear and unambiguous transmitter of already shared meanings, an observation that should come as no surprise to anyone who has ever tried to explain anything of even modest complexity to anybody else. Rather, our language (and perhaps especially when we use it to distil and summarise complex and context-dependent social practices such as those students are required to perform in written assignments) is often opaque and generic, and therefore as likely to sow confusion as it is to secure clarity (Lillis & Turner, 2001). Indeed, our attempts at distillation and complexity-reduction invite the constant risk of reification: the risk that we will mistake those commonly used phrases that serve at best as inadequate shorthand for the practices they seek to describe for measurable and assessable things in themselves. This is not to say that the language Lillis and Turner so rightly problematise is without educational use or value. As will be discussed in later sections, where such phrases occur in assessment criteria and marking rubrics, they can provide a fruitful starting point for productive discussion and shared sense-making between students and staff. Problems arise, however, when they are instead misconceived as the opposite: as a finishing point in our communication, after which students are expected simply to 'implement' what we have advised them to do. As O'Donnell, Kean, and Stevens (2016) make clear, students' successful transition to, and progression within, higher education cannot be made reliant on the provision of 'ever-more explicit articulations of . . . expectations' (19).
If communication around expectations is to be effective and meaningful, it must be steered away from simple transmissions of necessarily abstract written criteria and standards, and towards opportunities for dialogue, discussion and practical engagement with these criteria and standards (Harrington, 2011).
And it is with the process of facilitating just such dialogue and practical engagement that the scholarship of 'assessment literacy' is concerned (see, for example, O'Donovan, Rust, & Price, 2016). As with academic literacy practices, the term 'assessment literacy' encompasses certain forms of knowledge and types of practice; in this case, knowledge and practice as they relate specifically to the methods, meanings and purposes of assessment and feedback. For O'Donovan et al. (2016), assessment literate students:

. . . are familiar with assessment and feedback approaches, concepts, purposes and techniques, understand the nature, meaning and level of assessment criteria and standards, interpret assessment expectations and tasks in the same way as their tutors, and can evaluate their own work and that of their peers, and thereby are more effective learners. (940-941)

Given what has already been noted about the elusiveness of language and the vagaries of interpretation, it may be somewhat utopian to imagine quite such a level of assessment literacy pertaining universally, not least as the tutors on any given programme may well not always 'interpret assessment expectations and tasks in the same way as' each other, let alone their students. This definition does, however, provide a helpful ideal to keep in mind where both our interactions with students, and our reflections on assessment and feedback practices, are concerned. Even if we acknowledge the unlikelihood of achieving this definition of assessment literacy in any kind of absolute or universal way, we can surely agree that it at least describes a situation we should work towards for most, if not all, students and staff.
How then, in practical terms, are we to support greater assessment literacy among our students? For O'Donovan et al. (2016), the various answers to this question lie in devising assessment and feedback practices that enable more explicit, reflective and dialogical engagement with the purposes, expectations and criteria for assessment. Practical suggestions include: greater use of formative assessment and feedback (including 'draft-plus-rework' opportunities (942)); greater coherence of assessment practices across programmes in order to better facilitate 'feed forward' approaches for students moving between modules and levels of study (943); and greater opportunities for peer-to-peer and tutor-led dialogue (for example, through 'in-class discussion of exemplars . . . [and] peer-review discussions supported by tutors' (943)). Whilst such broader developments in assessment and feedback practice were certainly of interest to the project reported on here, its specific concern was to ensure that staff and students were starting from a position of better shared understandings concerning both the purposes of, and expectations for, different types of written assessment within the Law School. To this end, staff initiating the project were particularly interested in working with students (a) to discuss how they interpreted current assessment criteria; and (b) to co-produce more student-centred and practically useful guidance and support materials for those encountering these assignments for the first time. In the section that follows, we explore in more detail the potential of student-staff partnerships in helping to facilitate the development of assessment literacies.

Student-staff partnerships
Student-staff partnerships are an increasingly important feature of the current HE landscape, particularly in the UK. Major institutionally supported partnership projects are evidenced at, for example, the Universities of Exeter and Lincoln and at Birmingham City University (Birmingham City Students' Union, 2010; Dunne & Zandstra, 2011; Neary, 2010). Many more examples and case studies of individual partnership initiatives (both within and outside the curriculum, and pertaining to a specific course or module) can be found in the literature (for a comprehensive overview, see Healey, 2018). Some of these case studies focus specifically on assessment. Before we discuss the way in which student-staff partnership initiatives have engaged with assessment and feedback, however, it is worth briefly recapitulating what student-staff partnership means and why assessment and feedback processes can benefit from this approach.
A commonly cited definition of student-staff partnership identifies partnership as:

a collaborative, reciprocal process through which all participants have the opportunity to contribute equally, although not necessarily in the same ways, to curricular or pedagogical conceptualization, decision making, implementation, investigation, or analysis. (Cook-Sather, Bovill, & Felten, 2014, pp. 6-7)
Partnership working invites students to work with staff on improving learning and teaching in HE. The above definition emphasises the importance of sharing perspectives: staff and students have different expertise and experiences. Students have direct experience and knowledge of being a learner. As the primary recipients of the educational experience and environment, they can provide staff with vital insights on how to further improve learning and teaching. Similarly, engaging with staff enables student partners to appreciate learning and teaching from the teacher's perspective.
Partnership is also about actively collaborating to identify challenges and opportunities, and working together to find and implement solutions. Students, ideally, are full partners in this respect. In this, partnership differs from other forms of student engagement in which the student voice is consulted but not necessarily engaged further. Student-staff partnership approaches, in contrast, invite students to become active participants in their own learning and teaching and help build a community of practice in which both learners and educators are actively engaged and enabled to share their different but equally valuable perspectives. The growing literature around student-staff partnerships in HE has identified a number of benefits associated with engaging with students as partners. Partnership working enables students to become more personally invested in their learning, feel a greater sense of institutional belonging and identity, develop greater awareness of learning and teaching processes, and increase their criticality and graduate skills. Partnership working, furthermore, appears to enable students to increase their disciplinary knowledge and engagement with their courses. Staff members, on the other hand, are enabled to benefit from student perspectives in developing and improving their teaching. Partnership working appears to increase staff ownership of and engagement with pedagogical development and research, and leads to increased reflection on learning and teaching practices in HE (Bovill & Bulley, 2011; Bovill, Morss, & Bulley, 2009; Evans, Muijs, & Tomlinson, 2015; Freeman, Millard, Brand, & Chapman, 2014; Healey, Flint, & Harrington, 2014; Marquis et al., 2016).
Initiating a student-staff partnership takes time, however, and can be challenging for both students and staff. Commonly identified challenges in the literature are a general unfamiliarity with partnership working amongst both staff and students, staff uneasiness with giving students greater input, the amount of time and effort required to make a partnership work, and staff scepticism about the ways in which students can contribute (Bovill, Cook-Sather, & Felten, 2011; Bovill et al., 2009; Cook-Sather et al., 2014; Marquis et al., 2016). Students may also be sceptical about the extent to which they can be involved and contribute, and might even feel that collaborating as partners in developing learning and teaching is not their role. These are just some of the challenges commonly referred to, and they will need careful consideration by anyone interested in setting up their own student-staff partnership project.
Working in partnership with students can take place both within and outside of the curriculum. It can involve all students in a particular cohort or a selected number of (sometimes paid) volunteers. The Higher Education Academy has attempted to categorise student-staff partnership working in HE and identified four areas in which activity generally takes place. Assessment is identified as a particular area of focus (Healey et al., 2014; Higher Education Academy, 2015) and, as previously highlighted, numerous studies involving students more closely in the assessment and feedback process have been conducted. Few studies, however, involve students in co-writing assessment criteria or marking rubrics, or in producing student-owned versions of these (see, for example, in this context: Greenbank, 2003; Stefani, 1998; Rust, O'Donovan, & Price, 2010; Meer & Chapman, 2015). The next section will briefly outline the potential benefits of doing so.

The importance of co-creation in developing assessment literacy
It is fundamental in our view, and an approach adopted more and more widely (see, for example: Andrade & Du, 2005; Deeley, 2014a, 2014b; Greenbank, 2003; Orsmond, Merry, & Reiling, 2010; Stefani, 1998), to bring students and staff together in an effort to develop a shared language around assessment: a language which is both acceptable to staff and comprehensible and meaningful to students. Engaging students as partners in developing assessment literacies is potentially a powerful approach to breaking down some of the barriers and ambiguities associated with assessment and feedback (Healey et al., 2014). There are, indeed, numerous benefits associated with involving students in the co-creation of assessment, and they match up well with the benefits associated with partnership working more generally: deeper learning and enhanced skills; student motivation, development and a community of practice (Meer & Chapman, 2015); and greater student ownership of and responsibility for their own learning (Deeley, 2014b) have all been identified in the literature.
A key benefit identified by student-staff partnership projects that focused particularly on the co-writing of assessment criteria and marking rubrics, however, was the increased assessment literacy amongst students that collaboration and ownership brought (Carless, 2007). There is some evidence, therefore, that co-creation of marking criteria leads to increased student understanding of and engagement with those criteria. Outside of the arena of assessment there is indeed ample evidence that co-creation does precisely that: it leads to a deeper and more thorough understanding and enables students to engage more effectively and knowledgeably with their learning and teaching (Bovill & Bulley, 2011; Bovill et al., 2009; Healey et al., 2014). These findings are important because, as highlighted earlier, there is ample evidence that students struggle to engage effectively with assessment and feedback (see, for example, O'Donovan, Price, & Rust, 2004). Andrade and Du (2005) assert that an assessment rubric should be a tool which not only helps staff to mark student work but equally facilitates student learning: a tool which provides students with guidance on what is expected of them in a given assignment or test, enables learners to self-assess their work, and offers direction on how to improve performance. It is therefore crucial that students understand and are able to engage with marking criteria, assessment rubrics and feedback. There is evidence that the co-creation associated with student-staff partnership work has tremendous potential for increasing assessment literacies amongst students: projects that have collaborated with students on the co-creation of marking criteria and rubrics have reported positively on the ability of such collaboration to increase student understanding of the assessment process and its characteristics (for example, Deeley, 2014a, 2014b; Greenbank, 2003).
The creation of a Student Academic Literacy Tool, co-created by staff and students of Teesside University and aiming to provide an accessible guide to academic writing, has similarly been reported to increase student understanding and engagement (Becker, Shahverdi, Spence, Kennedy, & Rayment, 2016). At the University of Leicester's School of Law, the GRAFT project engaged students and staff in the co-creation of a 'street' version of the assessment criteria and descriptors (Becker, Kennedy, Shahverdi, & Spence, 2015). It is on this project that the remainder of our contribution will focus, as a practical case study of ways in which student assessment literacy can be enhanced through student-staff collaboration.

Rationale and aims
GRAFT was devised to improve the Law School's assessment criteria and feedback practices. The aim was a redesign of the assessment criteria and the associated feedback form. Through this exercise, the intended impact was two-fold. In relation to staff, we wished to provide clearer and more homogeneous guidance for assessing students: we aimed to help colleagues reflect on how they give feedback and to support them in providing meaningful and clear feedback. We also intended to assist students in understanding coursework expectations and disciplinary writing conventions, and aimed to 'demystify' some of the language used in feedback (for example: 'be more critical'). The new system would provide a feedback form and marking criteria that would empower students to understand why they obtained their mark and how they could improve, in a way that was consistent, enhancing their learning and teaching experience.
The motivation came from a number of realisations, notably a gap in existing guidance for staff: written advice existed for examination marking but not for coursework. Additionally, there was a need to develop more coherent support for students' writing and to respond to a changing higher education landscape, including the National Student Survey and its questions on feedback (and now the Teaching Excellence Framework).
In light of the literature discussed in the previous section (for example, Lillis, 2001), and for the purpose of developing a shared language around assessment, it was decided to work closely with students in partnership with academic and professional staff.

Methodology
In order to devise assessment and feedback practices that enable more explicit, reflective and dialogical engagement, as encouraged by the literature considered earlier (O'Donovan et al., 2016), the process involved appointing a subgroup from the school's Teaching and Learning Committee. Volunteers were invited to participate as stakeholders. The aim was to recruit members who were considered crucial to the success of the project: markers; professional colleagues, as they process assignments and results; members of the university Learning Institute, for their expertise and access to best practice; and students. A group of five academics, two professional colleagues, one member of the university Learning Institute and four law students came forward. Of the students, two were second-year law student representatives who had been approached by the university Learning Institute; those students recruited another two enthusiastic undergraduates. It will be shown that the success of the project may be attributable to the engagement of those students as co-writers of assessment criteria, a winning formula which has been highlighted in the literature.
The group quickly identified three goals: (i) to revise and improve the Law School's assessment criteria, (ii) to redesign the feedback form and (iii) to change the procedures for undergraduate coursework, including electronic feedback. This was completed in phase 1 (the first academic year of the project). In phase 2, the focus shifted to developing understanding of the new criteria, through a 'street version' of the criteria written by students for the benefit of their peers. The use and application of the new criteria by staff was also key, and a questionnaire was devised to develop a better shared understanding of the criteria (see Appendix 1).
Throughout the project, views of staff were expressed either in committees and via email (phase 1) or through the staff questionnaire (phase 2). Ethical clearance was sought at the end of the first academic year of the project, before phase 2, in line with University and Law School guidelines. A consent form was circulated and signed by all staff who filled in the questionnaire: colleagues involved agreed that the data collected throughout the project could be used, in anonymised form, for research papers and journal articles. Students involved in the project signed a consent form for the same purpose.
In phase 1 of the project, academic volunteers drafted the new assessment criteria. They consulted existing literature on assessment and student learning in HE (for example, Gourlay, 2009; Lillis & Turner, 2001) with the assistance of colleagues from our Learning Institute, examples from other departments, and the school guide to marking and grading for examinations and postgraduate assessed coursework. While writing the first draft, they reflected on how they could assist markers in establishing whether an essay belongs in the 1st, 2(1), 2(2), etc., category, and how they could help students understand what was expected of them. For additional input, the drafters attended university teaching events on assessment where alternative modes of feedback and quick wins on feedback were considered. Inspiration was found in the work of colleagues who had considered effective feedback and the need to focus on a small number of positive aspects and, similarly, two or three points to improve. The work of Higgins, Hartley, and Skelton (2001) also highlighted the literature that recommends shorter feedback.
These sources inspired the feedback form, in which a box on 'overall assessment and steps for improvement' was included. Comments by Chanock (2000) on how students do not always understand what their tutors mean by expressions such as 'you are descriptive and not analytical enough' prompted the drafters to try to explain more clearly in the criteria what is meant by 'analytical', using alternative terminology such as 'argument' and providing additional descriptors such as 'answer the question explicitly'.
To clarify criteria for staff and expectations for students, assessment descriptors were divided into categories: argument and identification of relevant issues; knowledge and understanding; structure; research; and writing and referencing. For each classification (1st, 2(1), etc.), each category was described to help the marker and the student understand what was being assessed. For example, one of the descriptors for a First read:

Argument and identification of relevant issues
The submission identifies the relevant issues and answers the question explicitly and critically with a sophisticated level of clarity and logic; detailed and perceptive analysis; credit will be given for originality of thought or approach.
The second task was the drafting of the feedback form, which was devised to integrate the headings from the descriptors and the degree classifications, to help staff and students match the assessment criteria with the feedback form. It was therefore presented as a table listing the criteria and classifications (Table 1, shown here in a shortened version), followed by a separate box for comments entitled 'Overall assessment and steps for improvement'.

Once drafted, the criteria and feedback form were fully discussed with the feedback assessment group. Students' input was particularly valuable as they questioned the language that had been used for the descriptors (for example, what was meant by 'mature' analysis?): those discussions assisted with refining the terminology used for the final criteria and illustrated how the partnership was successful in developing a common assessment literacy. The findings of the subcommittee's work were reported to the wider Teaching and Learning Committee, where further consultation took place. Questions such as whether the school should operate separate descriptors for different types of exercise (for example, essay questions or problem questions) constituted significant challenges, but ultimately the decision was made to keep a single set of criteria. As a result of those debates, the Law School was presented with a well-thought-out proposal which was wholeheartedly accepted.

Phase 2 focused on the users of the new system. In relation to students, the objective was to develop students' understanding of the assessment criteria. This was achieved by a student-led project entitled 'Get the grades', which set out to 'translate' some of the language adopted in the criteria. A small group of students who had previously participated in phase 1 created an unofficial or 'street' version of the criteria.
They used annotated bubbles, which explained in their own words the requirements associated with the different grades. The students' work was supervised by academics and was also discussed with the assessment group for additional feedback. Examples of the annotated bubbles were as follows (with the students' translations, originally shown in colour, reproduced here in parentheses):

Argument and identification of relevant issues
The submission identifies the relevant issues (you have spotted the right legal principles) and answers the question explicitly and critically (you have said what the principle is and the rule associated with it, and for an essay you have explored both sides of the argument before picking one to support, avoid too much description and restatement of facts) with a sophisticated level of clarity and logic (what you have written can be easily understood); detailed and perceptive analysis; credit will be given for originality of thought or approach (this does not mean you have to say something no one else has said before!).
For staff, the intention was to better understand how colleagues interpret the criteria when applying them to their marking, to help with consistency in marking and guidance. With the help of colleagues from the university Learning Institute, a questionnaire was devised (attached as an appendix) to obtain additional information about academic approaches to marking with the new assessment criteria (for example: what do you understand by critical writing?). In total, 16% of the staff responded, which was deemed to be a positive rate. The results of the survey confirmed what is found in the research (Chanock, 2000): we know what we do not want to see, but explaining the opposite is problematic. It also revealed differences in what we understand as a good essay, confirming what was indicated in the section above on assessment literacy: that tutors on any given programme may well not always interpret assessment expectations and tasks in the same way as each other. This raised questions such as: are we sure that we are consistent in our marking, given these apparent differences? A limitation of our research was the omission of an obvious question to colleagues: do you use the assessment criteria? Work by Bloxham, Boyd, and Orr (2011) has shown that this is not always obvious. A proposed outcome was the possible production of FAQs for staff, to build a common understanding of what we expect. This is work in progress that will be developed in phase 3 of the project.

Students
For the first time, the criteria were distributed with the essay title to help students understand expectations. Further, the students' 'street version' was discussed in a seminar in a first-year introductory module and with a new group of interested students (second years and finalists). The academic lead on the project worked in cooperation with the colleagues involved in the first-year module to include the official and 'unofficial' (student street) versions in a seminar exercise. The aim was to explain the school's expectations for writing an essay and to discuss the street version of the criteria. In the debriefing session with the teaching team, the exercise was deemed a positive undertaking that should be repeated as a way of enhancing students' writing skills. The module convenor stated: 'Students loved the student version of the criteria and it helped me to see what they understood about the meaning of the feedback'.
One of the students involved in the project from the outset was awarded a university partnership award for her work. From her perspective, and in line with the benefits highlighted in the literature above (personal investment in learning, greater awareness of learning and teaching processes, and increased graduate skills, for example), she commented: 'It was an eye-opening experience and learning curve to be able to be part of the team redeveloping the assessment and feedback criteria. Not only did it enable me to represent my student colleagues' opinions, but it also allowed me to approach the criteria from a marker's perspective and gain a much clearer insight into what was expected from students in assessments. This permitted me to fully appreciate the purpose of the criteria, whilst developing skills such as teamwork and communication.'

Staff
The new assessment criteria and assessment form were inserted into our school's Guide to Marking, Grading and Feedback and were immediately used by all academics. This new guidance complemented the comprehensive advice that already existed for marking examination scripts. It allowed staff to refer to the criteria when marking essays, both to assist in arriving at the final mark and to direct students towards what was expected. Colleagues commented positively on the usefulness of the criteria: 'The use of the feedback form and the criteria . . . colleagues devised to enhance feedback and promote consistency was very effective, especially for me, as this was my first academic teaching post. It helped me to reflect on the marking criteria, to make sure that I was consistent with my marking, and it enhanced my teaching. As a result of the feedback form, I now clarify and discuss with my students some of the key feedback terms we use to facilitate their understanding of how to get the highest grades'.
Additionally, the school moved to an electronic feedback system. Essays were submitted electronically, and members of staff were encouraged to fill in the feedback forms and to include contextual comments in the text to supplement and illustrate the comments given on the form. This was a successful use of technology which increased efficiency, homogeneity and clarity for students. The members of the subgroup had been unsure whether staff would accept the proposed practice, as marking electronically might have been considered more time-consuming and less flexible. The whole school nevertheless agreed to the new system.
Overall, the project led to a more homogeneous and enriched approach to feedback. The school academic director indicated that the new system helped colleagues 'focus more closely on the need to provide detailed information to students on how they can improve their writing skills as well as encouraging them by informing them explicitly of the things that they have done well. This is something which was missing from previous versions of the feedback form. [. . .] the project and the new form focussed attention of academic staff on to best practice in feedback'.
Dissemination of knowledge in the current higher education context
As the issue of feedback remains particularly topical for university strategies, and as the student voice is paramount in higher education, the team sought to disseminate the findings of, and the methodology adopted for, the project. It was presented at a University Learning and Teaching Conference to share our practice, raise awareness of the student voice in the context of assessment and feedback, and gather responses. It provided substance for a workshop on student/staff engagement at the Learning Institute during a Teaching Focus Week. A number of colleagues from across the institution expressed significant interest and subsequently sought views on how to develop a new feedback system involving students. Finally, the work was showcased at College level.

Reflection and next steps
The partnership between academics and students constituted a positive and enlightening feature of the project. It led to real insight into the way students receive and assimilate feedback. It also brought new perspectives on the purpose of feedback and the use of terminology. While a new official version of the assessment criteria was used in the school, the version 'annotated' by the students was a useful tool for both students and staff to understand expectations and to translate some of the language commonly used. The challenges of involving students centred on finding volunteers who would commit to the whole project. As the idea of partnership was in its infancy in our school at the time of the project, no training or clear recruitment methods had been provided or considered. In future collaborations, such needs would have to be thought through more thoroughly. Nevertheless, the students involved gained experience and valuable transferable skills: working in a team and delivering an original project to deadlines.
Phase 3 of the project is under consideration for staff and students. Inquiry into students' experience of the criteria and of the students' street version would cast light on the impact of the exercise, with a view to engaging students further in the review of the criteria. Producing FAQs on how staff approach some of the criteria is considered beneficial, as it would increase consistency in marking and commenting while also improving clarity for students.
The objectives of the GRAFT project were conventional in the current higher education climate: producing clearer and more homogeneous feedback through understandable assessment criteria and an understandable feedback form. The methodology was original because of its student partnership element. It helped to shape our assessment criteria, making them clearer. More significantly, it sparked the idea of students producing their own 'translated' version, which gave fellow students a further aid. Testing those two versions of the assessment criteria in a compulsory first-year module, before students wrote their first assignment, was undoubtedly another successful feature of the project. The methodology, and the use made of its products, could be applied in similar assessment projects across disciplines, as they directly involve students' voices.

Disclosure statement
No potential conflict of interest was reported by the authors.