Student dissatisfaction in Higher Education: a ‘fuzzy’ index approach

The revamp of the National Student Survey (NSS) has led to the elimination of the final 'overall satisfaction' question for Higher Education institutions in England. This paper develops an index approach that can effectively summarise student satisfaction, utilising a 'fuzzy poverty' methodology that assigns weights to dissatisfaction outcomes based on their correlation levels. We show how our dissatisfaction index enables a comprehensive sector-level analysis by combining NSS data with sector-wide data, and we further demonstrate its usefulness with a case study. Our approach can be applied universally and without bias to student surveys globally, while alleviating problems related to the removal of the overall satisfaction question in the UK.


Introduction
The National Student Survey (NSS), established by the UK government in 2005, has steadily grown into an influential survey involving nearly half a million respondents each year (OfS 2023). The survey collects feedback from final-year undergraduates studying at Higher Education (HE) institutions across the United Kingdom (England, Scotland, Wales, and Northern Ireland), capturing perspectives on a variety of aspects of their time at university. It was developed to give students a voice and to provide data that would allow for comparisons across different universities and courses. The NSS is publicly available and covers a wide array of questions relating to student experience, including teaching quality, assessment and feedback, academic support, organisation and management, learning resources, student voice, and, until recently, overall satisfaction. It is deemed usable for comparison and is widely used by prospective students to help them make their higher education choices, and by universities to identify what they are doing well and areas where they can improve (see OfS 2023 and the many studies using the NSS to compare institutional performance: Adisa et al. 2022; Agnew et al. 2016; Bowles, Sharkey, and Day 2020; Burgess, Senior, and Moores 2018; Cheng and Marsh 2010; Dean, Shubita, and Claxton 2020; Langan and Harris 2023; Satterthwaite and Roudsari 2020; Winstone et al. 2022). The NSS is not unique to the UK and has roots in similar initiatives implemented elsewhere around the world, including the National Survey of Student Engagement (NSSE) in the United States and Canada, which has been running since 2000 (see https://nsse.indiana.edu/nsse/index.html, accessed 2023, or Kuh 2001 for an introduction to its genesis), and the Course Experience Questionnaire (CEQ) in Australia, first used in the early 1990s (see Talukdar, Aspland, and Datta 2013).
The increasing influence of the NSS seems, inevitably, to be linked to the introduction, and subsequent increase, of tuition fees, positioning the student as a consumer: students consume education as a product/service and the sector must obtain their feedback on the quality of that service (Kandiko and Mawer 2013; Webb et al. 2017). Such neoliberalism has led to a reliance on feedback data to inform university strategy, and there has been a rising awareness that such feedback informs rankings and league tables, further affecting the future of the university (Baird and Elliott 2018; Langan and Harris 2023).
The NSS has sparked intense debate in the UK. Detractors claim the survey represents an outdated method of collecting feedback and leads to an obsession with flawed data and metrics (Spence 2019; Williamson, Bayne, and Shay 2020). Attwood (2010) argues that the survey is a 'statistically laughable exercise in neoliberal populism' and Harvey (2008) believes it to be 'superficial, expensive, heavily manipulated and methodologically useless'. Further, Winstone et al. (2022, 1526) note that 'the framing of items pertaining to feedback in the NSS focuses on the delivery and transmission of information rather than students' use of feedback'. Despite these concerns, there are those who are more positive, including the Chief Executive of the HE Funding Council for England, Madeleine Atkins, who believes the survey has been indispensable in both influencing and instigating transformation within the academic landscape of UK universities (Grove 2015).
The focus of this paper is on the decision by the OfS to implement a number of changes to the way the NSS is compiled, including, contentiously, dropping the final summative question on 'overall student satisfaction' in England (while retaining it for Scotland, Wales, and Northern Ireland). Some stakeholders argue that the overall satisfaction question contributed little and skewed results. They believe students responded to this question based on experiences beyond the intended scope of the survey, including their experiences of accommodation, car parking, or extracurricular activities, or on strategic considerations of university management (OfS 2022, 17).
However, most were against the change, with the OfS reporting that 90% of the responses condemned it as a regressive step which could lead to a multitude of issues, not least problems of asymmetric information, signalling and benchmarking (OfS 2022). Approximately half of the respondents expressed concern that dropping the summative question in England only may limit student information for cross-UK comparisons. Respondents also expressed concerns regarding the disproportionate effect on international students, who already face difficulties assessing quality given the myriad of intertwining factors they must consider when evaluating university and course quality.
Our paper demonstrates that an index approach can effectively replace the traditional overall satisfaction measure. We show that our index, by incorporating more data from the NSS, offers an improved mechanism for cross-comparing HE institutions. In Section 2, we provide a brief review of the summative question in the NSS, which establishes the foundation for our empirical analysis. In Section 3, we develop the dissatisfaction index using our proposed fuzzy poverty methodology, which is easily replicated and universally applied. In Section 4, we explore the application of the dissatisfaction index at the institutional level, introducing our hypotheses to test the index and a small case study. Section 5 analyses our results. We conclude that the elimination of the summative question should not be perceived as a reduction in the NSS's value, but rather as an opportunity for innovation.

The NSS, quality enhancement and the summative question
A full review of research into the NSS is beyond this paper's scope. Instead, this section provides a summary of research regarding the summative overall satisfaction question. Our first observation is that the available econometric research is limited. An explanation for this may lie in the statistical characteristics of the response data, which exhibit so little variation across institutions that responses typically fall within a narrow range covered by sampling error. Cheng and Marsh (2010), for example, find that only 2.5% of the variance in NSS responses can be explained in terms of the respondent's university. Such results have encouraged subject-level analysis (Agnew et al. 2016; Langan, Dunleavy, and Fielding 2013; Vaughan and Yorke 2009).
Past research has paid particular attention to the importance of the different categories in predicting the summative satisfaction outcome. Satterthwaite and Roudsari (2020), for example, find that variation in Dentistry data can be largely explained by the 'teaching on my course' and 'learning opportunities' sub-categories. Bell and Brooks (2018), meanwhile, find that students are happiest at pre-92 universities, on clinical degrees and humanities courses, and that teaching and course organisation are important. Interestingly, it is generally accepted that 'assessment and feedback' and 'learning resources' hold the weakest value in predicting overall satisfaction (Bell and Brooks 2018; Sofroniou, Premnath, and Poutos 2020).
The predictive value of sub-categories for overall satisfaction might imply that omitting the overall satisfaction question is inconsequential. Deprived of this metric, these sub-categories can instead be utilised for inter-institutional comparisons. However, discrepancies across sub-categories, regardless of their correlation with overall satisfaction, might be more illuminating. Firstly, the NSS survey is susceptible to 'acquiescence bias', the inclination of respondents to select positive answers. In such scenarios, the 'assessment and feedback' scores, usually noted as lower, might more accurately represent student attitudes. Given their closer relation to actual academic experiences than to degree outcomes, students might be more motivated to thoughtfully engage with these questions, thereby reducing the tendency to respond mechanically, which can lead to uniform responses. Secondly, limiting the overall satisfaction correlation solely to 'teaching on my course' overlooks significant insights. As pointed out by Bowles, Sharkey, and Day (2020), such connections are swayed by the 'psychological needs of autonomy, competence, and relatedness'. For a comprehensive understanding of student satisfaction, which integrates aspects like 'self-efficacy', the metrics used must be diverse and extensive.

Developing the dissatisfaction index
Given the known correlations of core question sub-categories with the overall summative satisfaction responses and, in addition, the uncertainty over the quality of each proxy, we develop an approach that creates a single index encompassing all available data. This approach allows us to validate different institutional strategies and, if reliable, will also help to evaluate the impact of dropping the summative satisfaction question in England. We utilise the 2019 NSS, which incorporates 26 questions distributed across eight core categories, to assemble our basic index measure. This is achieved through a method of aggregation consisting of two stages, where the HESA/NSS data is used first to benchmark and then to define and construct a dissatisfaction index (HESA 2023; OfS 2023). Our index measure is grounded in a methodology that is universally accessible and does not require extensive technical expertise, with all data publicly available.
First, we utilise the benchmarking procedure developed by HESA, and utilised in the NSS, which is constructed to allow for valid comparisons across different UK HE institutions. This controls for both individual student characteristics and diversity in the student demographic, which are known to affect responses. The factors controlled for are: age; gender; ethnicity; disability; subject; and mode of study (OfS 2023). The benchmark for each institution is calculated through a weighted aggregation of overall sector scores across benchmarking cohorts. Thus, the benchmark serves as a projection of the anticipated outcomes for an institution, reflective of its student demographic blend. This weighted average benchmark of the overall reported scores for each institution's respondents is then used in stage two of our aggregation method.
Stage two involves generating a set of dissatisfaction dummy variables for each question. Informed by the approach of Langan and Harris (2019, 2023), we define the dissatisfied as those who are 'not satisfied', that is, those who do not respond positively to the question. For example, question one of the 2019 NSS is: 'Staff are good at explaining things'. If a student responds with either (1) Definitely agree or (2) Mostly agree, we class them as satisfied; respondents selecting (3) Neither agree nor disagree, (4) Mostly disagree, or (5) Definitely disagree, we classify as dissatisfied. Those selecting 'not applicable' are not included. Our definition, which could also be termed a 'not satisfied' measure, is adopted because of its compatibility with university evaluation methods, which rely on the HESA benchmarks. We anticipate this alignment will enhance the index's appeal. We assign a value of one to an institution when its performance does not meet its HESA weighted benchmark. By summing our 26 indicators, we derive a dissatisfaction index ranging from 0 to 26. In the most extreme case, a university could score 26 dissatisfaction points. However, such aggregation has its limitations. Simply adding the indicators ignores the high correlation between related response categories. For example, dissatisfaction with the aforementioned 'staff are good at explaining things' is likely to be correlated with responding negatively to 'staff have made the subject interesting'. Feeley (2002) demonstrates this correlation with an analysis of what they term 'halo effects', where a general impression of a teacher fails to provide sufficient distinction when considering specific aspects of teaching effectiveness.
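The benchmark-flagging step can be sketched in a few lines. This is a minimal illustration only, not the authors' code: the column names (`q1_score`, `q1_benchmark`, and so on) are hypothetical stand-ins for the per-question share of positive responses and the corresponding HESA weighted benchmark.

```python
import pandas as pd

# Hypothetical NSS-style data: one row per institution, with the share of
# positive responses ("Definitely agree" + "Mostly agree") for each question
# alongside the HESA weighted benchmark for that question. The column naming
# scheme is illustrative, not taken from the actual HESA release.
def dissatisfaction_count(df: pd.DataFrame, n_questions: int = 26) -> pd.Series:
    """Sum of dummies: 1 whenever an institution falls below its benchmark."""
    flags = pd.DataFrame(index=df.index)
    for q in range(1, n_questions + 1):
        flags[f"q{q}"] = (df[f"q{q}_score"] < df[f"q{q}_benchmark"]).astype(int)
    return flags.sum(axis=1)  # the simple 0-26 count, before fuzzy weighting

# Toy example with two institutions and two questions.
toy = pd.DataFrame({
    "q1_score": [0.82, 0.91], "q1_benchmark": [0.85, 0.88],
    "q2_score": [0.75, 0.70], "q2_benchmark": [0.74, 0.72],
})
counts = dissatisfaction_count(toy, n_questions=2)
# each toy institution misses exactly one of its two benchmarks
```

With all 26 questions the same function yields the 0 to 26 unweighted index described above.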
To avoid issues of high correlation in this simplified approach, we could utilise Confirmatory Factor Analysis (CFA). However, this approach creates unnecessary complications, as it operates on the assumption that certain unobserved latent factors influence the observed variables (Prudon 2015).
To avoid this, we adapt Betti and Verma's (2008) 'fuzzy poverty' approach, originally designed to explore deprivation more holistically than the simplistic view of poverty as just a matter of a person's income and wealth. Rather than drawing a sharp line at an ad hoc income level and declaring everyone below the line 'poor', the approach takes a more nuanced path, building into the model a wide array of personal circumstances (e.g. education, healthcare and housing). The term 'fuzzy' signifies that poverty is not a binary state, acknowledging that there are different grades of poverty that vary based on the number and severity of these factors.
Applying the 'fuzzy' approach to the UK HE NSS allows us to develop a multidimensional measure of student dissatisfaction taking into account a wide array of factors. We replace the factors in fuzzy poverty with student responses to the NSS, including: quality of teaching; resource availability; and opportunities for personal development. The approach views dissatisfaction, like poverty, not as absolute but as a matter of degree, influenced by various interconnected factors. This provides us with a robust index that can be universally applied and easily replicated, thereby enhancing its usability and accessibility.
The creation of our dissatisfaction index involves a two-step weighting methodology. In the first step, we assume that a reported dissatisfaction outcome carries greater significance for an institution when it is disclosed by a smaller fraction of HE institutions. This signals outlier status and a departure from the typical group clustering anticipated within NSS responses. We then assign a weight in proportion to the coefficient of variation of the dissatisfaction measure, which represents the relative variation or dispersion of dissatisfaction across institutions. The coefficient of variation, a statistical measure that provides a standardised estimate of dispersion, allows us to interpret variations in different variables on an equal basis: it compares the degree of variation from the mean for different series, irrespective of units or scales. A higher coefficient of variation corresponds to a higher level of relative variability, implying a smaller fraction of institutions reporting dissatisfaction, a situation deemed more critical for the institution in question.
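The first-stage weight can be illustrated as follows. This is a sketch under stated assumptions, not the authors' implementation: `flags` is a hypothetical institutions-by-questions 0/1 matrix of below-benchmark indicators.

```python
import numpy as np

# First fuzzy weight: proportional to the coefficient of variation (CV) of
# each dissatisfaction dummy across institutions, so that rarer dissatisfaction
# outcomes receive larger weights.
def cv_weights(flags: np.ndarray) -> np.ndarray:
    cv = flags.std(axis=0) / flags.mean(axis=0)   # dispersion relative to mean
    return cv / cv.sum()                          # normalise to sum to one

# Toy data: four institutions, three questions. Question 1 is flagged by
# three of the four institutions; questions 2 and 3 by only one each.
flags = np.array([[1, 0, 0],
                  [1, 1, 0],
                  [1, 0, 1],
                  [0, 0, 0]], dtype=float)
w = cv_weights(flags)
# question 1, flagged by most institutions, receives the smallest weight
```

The common outcome is down-weighted and the outlier outcomes up-weighted, matching the intuition described above.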
The second step incorporates a control mechanism for the degree of correlation between individual dissatisfaction measures, to mitigate any potential issues related to multicollinearity (arising, for example, from correlations across categories and limited variation within each category's questions). This second weight is computed from the average of the correlations, where r_{k,k'} represents the correlation between dissatisfaction measures k and k'. We again follow Betti and Verma (2008), who introduce a 'largest gap criterion' in which the threshold ρ_H is determined by the largest gap within the ordered set of correlation values. As this threshold corresponds to the correlation of the variable with itself, the expression simplifies to the inverse of the average of the dissatisfaction correlations (including the variable of interest), that is, w_k = 1 / [(1/26) Σ_{k'} r_{k,k'}]. This adjustment ensures the weighting system remains uninfluenced by the inclusion of dissatisfaction measures that exhibit zero correlation with k.
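The simplified second-stage weight and the combination of the two stages into a 0 to 100 index can be sketched as below. Again this is an illustrative reconstruction, not the authors' code; taking absolute correlations is a sketch-level choice to keep the weights positive, and `flags` is the same hypothetical 0/1 matrix as before.

```python
import numpy as np

# Second fuzzy weight, following the simplification described above: the
# inverse of the average (absolute) correlation of each dissatisfaction dummy
# with all dummies, including itself. Heavily duplicated questions are
# thereby down-weighted.
def correlation_weights(flags: np.ndarray) -> np.ndarray:
    r = np.corrcoef(flags, rowvar=False)     # question-by-question correlations
    return 1.0 / np.abs(r).mean(axis=0)      # inverse of the average correlation

# Toy data: four institutions, three questions.
flags = np.array([[1, 0, 0],
                  [1, 1, 0],
                  [1, 0, 1],
                  [0, 0, 0]], dtype=float)
w_cv = flags.std(axis=0) / flags.mean(axis=0)   # first-stage CV weight
w_corr = correlation_weights(flags)             # second-stage weight
w = w_cv * w_corr
w = w / w.sum()                                 # combined, normalised weight
idx = 100 * flags @ w                           # dissatisfaction index, 0-100
```

An institution flagged on every question scores 100; one flagged on none scores 0, mirroring the percentage scale reported in the next section.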

Testing the dissatisfaction index
The result of this approach is a dissatisfaction index expressed in percentage terms, ranging from 0 to 100. Employing this methodology on the 2019 NSS data, we find that the dissatisfaction index spans from 0% to 81.3%. However, reflecting the clustering towards general contentment within the NSS, the distribution of dissatisfaction exhibits positive skewness, with a mean dissatisfaction level of just 8.20%. Importantly, this finding suggests that most institutions fall within a relatively low range of dissatisfaction, and only a small number extend to higher levels. This approach is then used to create a sector-wide dissatisfaction index to counter the previously mentioned simplistic method and interpretation of the NSS (Adisa et al. 2022). In order to show how the dissatisfaction index can be used in practice to analyse the results of the NSS, we select four areas for hypothesis testing: gaming of student grades; student heterogeneity; resource costs; and staff structure.
(i) Gaming of student grades

'When a measure becomes a target, it ceases to be a good measure' (Goodhart's Law; see The Royal Statistical Society 2018).
As far back as 2007, Espeland and Sauder noted that universities may well prioritise the management of appearances, expending effort on improving various ranking factors without any real improvement in the factors being measured. As grades play a pivotal role in student satisfaction, universities may target grade inflation by encouraging more generous marking, leading to higher grades (Stroebe 2020). Gaming directly addresses student cognitive biases and reduces the uncertainty of outcome for the university, both in the reporting of satisfaction and, more generally, in how students perceive their entire university experience. Back in 2014, a comprehensive review of the NSS by Callender et al. also expressed a variety of concerns (both conceptual and methodological) with the survey that may lead to unintended consequences. Concerns included inappropriate use of the NSS in league tables and gaming, reporting that there was: 'Alleged manipulation of results by some HE institutions (i.e. gaming) and concerns that the NSS created perverse incentives for HE institutions to manipulate students' responses, on the basis that poor overall scores would devalue their degrees' (20). Humorously, Callender et al. use a headline from the Times Higher Education Supplement published in August 2013, 'Hold bad news about grades until after the NSS', to justify their concerns about gaming (Grove 2013). More recently, Winstone et al. (2022) and Carpenter, Witherby, and Tauber (2020) note that the NSS can 'incentivise practices that focus on increasing scores rather than improving education' (Winstone et al. 2022). Bell and Brooks (2018) state that there might be a need to 'keep the customers happy … where students are spoon-fed and marking is unduly lenient in order to raise the scores the easy way' and Kogan et al.
(2022) find that 'student satisfaction with their grades, unsurprisingly, seems to be a significant driver of course evaluations'. We therefore use our index to test the hypothesis:

H1: There is a significant negative correlation between changes in students' degree classification and their level of dissatisfaction.

(ii) Student heterogeneity

Our second hypothesis uses our index approach to examine the effect of the socioeconomic background of the student body on satisfaction (see Cook, Watson, and Webb 2019 and Marginson 2011 in an international student context). This is informed by the work of Bennett and Kane (2014), who argue that the diversity of student backgrounds, skills, and perspectives influences how NSS questions are understood and interpreted. Brown (2013) and Cook, Watson, and Webb (2019) argue that the onset of mass education in the UK has led to social congestion, with those from higher socioeconomic backgrounds positioning themselves in higher-ranked 'Russell Group' universities, which have more established links with business, afford better graduate outcomes and increase cultural capital. As Webb et al. (2017) report, the top-ranked universities remain dominated by those in the higher socioeconomic group. While student heterogeneity is used by those in universities to push for methods to improve communication with students (for example, aligning language across students with different engagement levels), it indicates how even differences in response rates, by including more disengaged students, are likely to impact overall outcomes.
Consequently, lesser-ranked universities have a wider dispersion in their student body and, as such, might need to deploy more effective, tailored communication strategies to address the varied needs and expectations of such a diverse student population. Also, those from lower social class families may well have lower expectations regarding the services provided by universities and, having incurred debt, may well be minded to perceive their choice as positive (Webb et al. 2017). To consider social class, we test the hypothesis:

H2: There is a significant negative correlation between students who have attended state school and levels of dissatisfaction.

(iii) Staff costs

Our third hypothesis tests the influence of an institution's expenditure on staffing. This is informed by resource dependence theory, which suggests that the availability and allocation of resources can directly influence the operational effectiveness of an organisation (Pfeffer and Salancik 2003). Resource expenditure could affect the quality of the students' learning experience if increased staffing expenditure enhances the university and the quality of the student's educational experience. However, this relationship might not be straightforward, given the paramount importance of research contributions in influencing university rankings. Teaching, which could directly influence student satisfaction, remains secondary to research in many UK universities. For example, in a review of 24 academic departments in the UK, Coate, Barnett, and Williams (2001, 164) argue that any shift in focus towards teaching is still 'somewhat overshadowed by the high status of research'. This view continues to the present day and can also be found in the work of Bamber, McCormack, and Lyons (2021), with Cretchley et al. (2014), Serow (2000), and Zubrick et al.
(2001) providing an international context on the dominance of research over teaching in terms of recruitment, promotion and the targets set for academics.
As such, we will consider the role of staffing expenditure, ensuring it is accounted for when assessing the impact of institutional strategies on NSS outcomes. In doing so, we hope to elucidate the intricate balance between funding allocation, educational quality, and student satisfaction. We test the hypothesis:

H3: There is a significant negative correlation between staff costs and levels of dissatisfaction.

(iv) Staff structure

Our final hypothesis, before explaining our proposed case study, further explores the theme of staffing and the types of employment contract that a university uses. Williams (2022) notes that there is a relatively recent trend amongst some institutions towards non-permanent, teaching-focused contracts, a shift that may inadvertently have negative repercussions on student satisfaction outcomes. Thus, we scrutinise the influence of these teaching-only contracts on student satisfaction to discern whether these contractual decisions affect student dissatisfaction.
H4: There is a significant positive correlation between levels of teaching-only contracts and levels of dissatisfaction.

To examine our hypotheses, we develop a probit regression model in which the dependent variable is a binary variable equal to 1 if the student dissatisfaction index exceeds the median score across all HE providers. Probit methods offer an appropriate statistical technique for analysing binary outcomes. Utilising our fuzzy approach, we can model the probability of a university's dissatisfaction index exceeding the median, considering our various relevant factors. The model's output estimates how these factors influence the probability of the outcome, which in this context is a university reporting above-median student dissatisfaction.
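The probit specification can be sketched on synthetic data. This is an illustration only, not the paper's estimation code: the data are simulated, the single regressor is a hypothetical stand-in for the H1 'good degree' share, and the fitting routine is a textbook Fisher-scoring loop written out so the example stays self-contained (in practice a statistical package would be used).

```python
import numpy as np
from math import erf, sqrt, pi

# Minimal probit fit by Fisher scoring: beta <- beta + I^{-1} U, where U is the
# score vector and I the Fisher information, with the usual probit weights.
def probit_fit(x, y, n_iter=25):
    X = np.column_stack([np.ones(len(y)), x])          # add an intercept
    beta = np.zeros(X.shape[1])
    cdf = np.vectorize(lambda z: 0.5 * (1 + erf(z / sqrt(2))))
    pdf = lambda z: np.exp(-z ** 2 / 2) / sqrt(2 * pi)
    for _ in range(n_iter):
        eta = X @ beta
        p = np.clip(cdf(eta), 1e-10, 1 - 1e-10)        # fitted probabilities
        d = pdf(eta)
        w = d ** 2 / (p * (1 - p))                     # Fisher weights
        score = X.T @ (d * (y - p) / (p * (1 - p)))
        info = X.T @ (w[:, None] * X)
        beta = beta + np.linalg.solve(info, score)
    return beta

# Synthetic example mirroring H1: above-median dissatisfaction (y = 1) becomes
# less likely as the share of 'good degree' outcomes rises.
rng = np.random.default_rng(0)
good_degrees = rng.uniform(0.5, 0.9, 500)
latent = 2.5 - 4.0 * good_degrees + rng.standard_normal(500)
y = (latent > 0).astype(float)
beta_hat = probit_fit(good_degrees, y)
# beta_hat[1] recovers a negative slope, echoing the expected H1 sign
```

The fitted slope's sign, together with its standard error, is what each hypothesis test below inspects.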
Built into this approach are several key psychometric assumptions. These include unidimensionality, ensuring that the dissatisfaction index measures a single latent construct (student dissatisfaction), and measurement invariance, which affirms that this same dissatisfaction construct is measured consistently across different universities or student groups. Other important assumptions are: ordinality, suggesting that the responses to survey items can be logically ordered from least to most dissatisfied; and monotonicity, implying that an increase in the dissatisfaction index corresponds to increased student dissatisfaction.
When these assumptions are valid, finding statistical significance in the factors affecting dissatisfaction will validate the robustness of our index. It suggests that these elements have a genuine, significant impact on student dissatisfaction, thereby reinforcing its policy value. If we identify characteristics that are statistically linked to heightened student dissatisfaction, it provides further evidence that these psychometric assumptions hold, lending credibility to the use of the index. As a result, the index could be seen as a vital tool for policymakers and educational institutions. It can inform the creation of effective strategies or policies to address areas of concern, with the ultimate aim of improving the overall student experience.
(a) Case study

Our approach so far has focused on a sector-wide exploration of how our dissatisfaction index can be leveraged to better exploit available NSS data. However, it has not specifically addressed how the index could be utilised by individual institutions. This potentially ignores the benefits of using the index to fully encompass all facets of an institution's NSS outcomes, particularly in identifying and tackling emerging issues. We address this by examining NSS data for the single Common Aggregation Hierarchy (CAH) subject area of Economics; we do this as an indicative example only and, as such, could have chosen any subject area and any university in England.
Our case study widens our approach to a time series, examining the years impacted by the pandemic, 2020 to 2022. It uses the University of East Anglia (UEA), which sits outside the Russell Group, and contrasts it with the Russell Group Universities (RGUs). The Russell Group, often seen as the gold standard in British higher education, is a prestigious network of 24 leading research-intensive universities in the United Kingdom, established in 1994, which includes the University of Oxford and the University of Cambridge, amongst others.
Thus our case study contrasts the Russell Group universities with a university outside the group (those outside the Russell Group often compare their student outcomes with those of its members). This is informed by a number of factors: (1) in their study, Bell and Brooks (2019) distinguish between the Russell Group and the rest of the university sector in the UK; (2) the majority of quality-related (QR) research funding goes to the Russell Group, indicating that its members are less reliant on teaching income (McIntyre 2022; Research England 2023); and (3) doing so provides a benchmark for evaluating non-RGU performance and identifying areas for potential improvement. The comparison also offers a competitive advantage as universities vie for students, faculty, and funding in the ever-competitive higher education landscape.
We compare UEA's dissatisfaction index scores with the average achieved by all 24 RGUs. We explore how the index can be used to gauge the success of our case study's strategy, relative to the measure of the quality of the establishment, during the period of 'remote learning' necessitated by the pandemic.

Results
Our objective is to develop an easily applicable, universally understood index that will help to move beyond ad hoc analysis dependent on individual researcher decisions. Before proceeding, we emphasise that our investigation is meant to serve as a demonstration of how our dissatisfaction index can be utilised for sector-wide analysis, rather than providing a comprehensive examination of all factors influencing HE outcomes. To clarify, all institutional data used is sourced from HESA, includes all subjects where data is collected and, in order to allow for comparisons to be made, we use two cohorts of data where applicable: 2018 and 2019. All data and definitions can be accessed at https://www.hesa.ac.uk/.
The results of our approach are summarised in Table 1, showing how specific factors affect the probability of an above-median index score. Importantly, finding statistically significant estimates indicates the ability of our dissatisfaction index to circumvent the analytical challenges posed by the clustering of student satisfaction outcomes.
First, our results confirm our initial hypothesis (H1), as we observe a significant correlation between assessment outcomes and the dissatisfaction index. The 'good degree outcomes' variable presents a negative estimate of −4.209 at the 1% level of significance, implying that as these outcomes increase, the dissatisfaction index decreases, suggesting improved satisfaction levels. Specifically, as the ratio of 'good degree' outcomes (that is, those with Upper Second Class Honours or higher) increases, the index score improves and dissatisfaction falls.
Our index allows a deeper understanding of this result. First, the index shows that higher student grades reduce dissatisfaction. Such 'grade improvement' could, however, be linked to a higher-quality learning experience, whereby the university is enhancing both student satisfaction and assessment outcomes. Importantly, our index approach allows us to investigate this further and to include an additional variable to control for the rate of increase in 'good degree' outcomes over a single cohort, allowing us to test for, and find, 'grade inflation', that is, an increase in the proportion of 'good degrees'. This result suggests, as reported above by Callender, Ramsden, and Griggs (2014), Bell and Brooks (2018) and Kogan et al. (2022), that gaming of the NSS results may well exist and that universities encouraging higher grades can 'game' the NSS in their favour. Second, we accept hypothesis H2, finding a significant negative correlation between state school graduates and NSS dissatisfaction. This might imply, in line with Cook, Watson, and Webb (2019), that graduates from state school backgrounds, having incurred high personal debt through their degrees, are disinclined to perceive their investment as wasted. Alternatively, it might indicate that students with a private or grammar school background, accustomed to higher levels of educational investment, react more negatively to an HE experience with substantially more independent study. This helps to further confirm some of the assertions, noted above, in the findings of Bennett and Kane (2014).
Third, we observe that staff costs have a significant impact on the dissatisfaction index, but the relationship is not intuitive, leading us to reject hypothesis H3. An increase in overall academic staff costs significantly increases levels of dissatisfaction. This may suggest the existence of diseconomies of scale. This finding may well be due to the uniqueness of the HE sector, whereby institutions, and larger, more reputable institutions in particular, might struggle to establish a community environment fostering positive student relationships. The result may also be attributed to such institutions concentrating on research and research funding. For example, the UK's Research Excellence Framework (REF) is linked to grant allocations and creates a virtuous circle at the top: the English universities that won the most quality-related (QR) research funding in 2022/23 are all in the Russell Group (McIntyre 2022; Research England 2023). In addition, Bamber, McCormack, and Lyons (2021) argue that those concentrating on teaching only can suffer from 'esteem uncertainty'. As such, this helps, if only tangentially, to substantiate the work of Bamber, McCormack, and Lyons (2021), Cretchley et al. (2014), Serow (2000) and Zubrick et al. (2001). In addition, Bell and Brooks (2019) report that one of their key findings is 'that students are happiest at pre-1992 universities outside the Russell group'.
Finally, our results for our fourth hypothesis indicate that using non-permanent, teaching-focused contracts decreases dissatisfaction rates. This leads us to reject hypothesis H4; however, the effect is fairly small and requires us to distinguish between economic significance (that is, coefficient size) and statistical significance (that is, the p-value). We need to consider both indicators together, in line with Engsted (2009), rather than focusing solely on the p-value and using it to reject our hypothesis. As such, while the proportion of teaching-only staff has a statistically significant impact, its effect is minimal. This signals the need for more extensive research into how such teaching staff are utilised. For instance, if these staff are merely used to reduce the teaching load on research staff, then significant improvements in pedagogical investment may not be anticipated. More work is required to understand the dynamics at play here.
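The distinction between economic and statistical significance can be illustrated with a small simulation (all numbers here are illustrative and are not drawn from our data): with a sample on the scale of sector-wide survey data, even a tiny coefficient becomes statistically significant.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n = 100_000  # large sample, as in sector-wide survey data

x = rng.normal(size=n)
# true effect is economically tiny: 0.02 standard deviations
y = 0.02 * x + rng.normal(size=n)

fit = linregress(x, y)
# statistically significant (very small p-value), economically negligible slope
print(f"slope = {fit.slope:.4f}, p-value = {fit.pvalue:.2e}")
```

The point, following Engsted (2009), is that the p-value alone says nothing about whether the effect is large enough to matter.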
Overall, our aim was to test our dissatisfaction index to see if it could provide validation of our approach. The significant results provide evidence that the index performs well and offers a more nuanced approach to sector-wide analyses using the NSS, suggesting its worth is not limited to substituting for the summative question. It can also serve as an effective tool for delving deeper into the strategic influence of institutions on student experience. We encourage future wider and deeper research using this approach.
Moving on to the results of our case study, Figure 1 shows that the pandemic-induced shift to remote learning amplified student dissatisfaction, likely due to increased workload and uneven teaching quality. Notably, dissatisfaction grew more in Russell Group universities, enabling UEA to highlight the relative success of its Dual+ strategy. This approach guided staff to use remote learning for long-term benefits, ensuring the integration of a combination of blended and active learning methodologies. Though a detailed exploration of optimal blended learning use is beyond this paper, our UEA case study exemplifies how the index can be used to highlight strategic success, thus reinforcing its value for a comprehensive assessment of NSS outcomes.

Conclusion
The National Student Survey (NSS), initiated by the UK government in 2005, gathers feedback from around 500,000 final-year undergraduates in Higher Education (HE) institutions across the UK. Criticisms of the NSS include outdated methods, flawed metrics, and concerns over framing. Significantly, the 2023 survey dropped the summative satisfaction question for England only. As a result, we proposed a solution: an index measure of dissatisfaction. We developed a multi-dimensional measure of student dissatisfaction to replace the summative question. Our measure was designed to be capable of being consistently applied by all in the sector, both circumventing the empirical problems created by the clustered nature of NSS responses and the dropping of the summative satisfaction question, ensuring a coherent approach to how student experience enhancement is achieved over time.
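For readers wishing to experiment with the general idea, a minimal sketch of a correlation-weighted dissatisfaction index in the spirit of the fuzzy-poverty literature might look as follows. The data, item count, and exact weighting rule below are illustrative assumptions rather than our estimation code: items that correlate highly with the other items carry redundant information and are down-weighted.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical binary responses: 1 = dissatisfied; rows = students, cols = NSS items
D = (rng.random((200, 6)) < rng.uniform(0.1, 0.4, size=6)).astype(float)

# correlation-based weights: compute each item's average absolute correlation
# with the other items, then weight inversely (one simple fuzzy-style choice)
R = np.corrcoef(D, rowvar=False)
avg_corr = (np.abs(R).sum(axis=1) - 1) / (R.shape[0] - 1)  # exclude self-correlation
w = 1.0 / (1.0 + avg_corr)
w /= w.sum()  # normalise weights to sum to one

# weighted mean of item-level dissatisfaction rates, bounded in [0, 1]
index = float(D.mean(axis=0) @ w)
print(f"dissatisfaction index = {index:.3f}")
```

An institution-by-subject panel of such indices is then suitable for the kind of sector-wide comparison the summative question previously supported.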
Overall, we found that our dissatisfaction index performed well in sector-wide NSS analysis, offering insights beyond replacing the summative question's role. While not dwelling on the results too much (and we encourage more research using our fuzzy index approach), our hypothesis testing found that the shift to a neo-liberal, customer-based environment in UK HE shows signs of being gamed by universities, but also that there is a negative correlation between state school graduates and dissatisfaction; that overall academic staff costs significantly negatively affect our index; and that using non-permanent, teaching-focused contracts decreases dissatisfaction rates. Further, our discipline-specific small case study highlights the value of our dissatisfaction index in comparing institutional strategies. By adopting a temporal comparison to gauge a specific department's strategic response to the forced shift to remote teaching during the pandemic, we demonstrated the index's value in testing the impact of pedagogical strategies.
In summary, our dissatisfaction index fills a void by using all available data and comprehensively leveraging NSS information. It offers a holistic representation of student responses and provides a tool for evaluating course outcomes and their links to departmental strategic shifts. Despite the evolution of the NSS resulting in the loss of universal access to summative satisfaction scores, this development should not be viewed as a significant disadvantage. Our dissatisfaction index provides a superior measure that is also capable of enhancing future empirical research in this area. However, we note that the approach does not resolve the concerns with neoliberalism, noted at the start of the paper, regarding consumerism, massification and commodification of HE and how student feedback is collected and utilised (Winstone et al. 2022). As such, future research should proceed with awareness of the risks of perpetuating neoliberal ideologies by using the NSS in this way.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Figure 1. Dissatisfaction in the subject of economics.

Table 1. Probit analysis investigating index poor performers: analysis of institutions where the student dissatisfaction index exceeds the norm. Notes: (1) Excluding missing data, we are left with 115 observations/UK universities with data on all variables, all subjects; (2) * and ** denote significance at the 1% and 5% levels respectively; (3) alternative use of a logit methodology did not alter the nature of our findings; (4) analysis of institutions regarding overall satisfaction and the benchmark is available from the authors on request.
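As an illustration of the kind of probit specification summarised in Table 1, the following self-contained sketch fits a probit by maximum likelihood on synthetic data. The covariates, sample size, and coefficient values are invented for illustration only; they do not reproduce our estimates, which were obtained with standard econometric software.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 500  # hypothetical number of institution-level observations

# design matrix: intercept plus two illustrative standardised covariates
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
true_beta = np.array([-0.2, 0.8, -0.5])  # invented coefficients
# binary outcome: 1 = institution's dissatisfaction index exceeds the norm
y = (X @ true_beta + rng.normal(size=n) > 0).astype(float)

def neg_loglik(beta):
    """Negative probit log-likelihood, with a small epsilon for stability."""
    p = norm.cdf(X @ beta)
    eps = 1e-9
    return -(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps)).sum()

res = minimize(neg_loglik, np.zeros(3), method="BFGS")
beta_hat = res.x
print("estimated coefficients:", np.round(beta_hat, 3))
```

A logit variant simply replaces the normal CDF with the logistic CDF, which, as noted above, did not alter the nature of our findings.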