Evaluating student satisfaction, restricting lecturer professionalism: outcomes of using the National Student Survey questions for student evaluations of teaching (SET)
Dr Linet Arthur
School of Education, Oxford Brookes University, Oxford, UK
Abstract
For UK universities, raising student satisfaction levels is important in improving National Student Survey (NSS) scores.
This article focuses on the impact of a UK university’s new student feedback questionnaire -
for individual modules - which used the NSS questions. The research draws on survey data
(N = 101) to identify lecturers’ views, together with three student focus groups. The outcomes raised issues about provision: the
university’s contract with each student, including the aspects that affect the student learning
experience but are beyond the lecturers’ control, for example, class sizes and timetables. The results indicate that, by recognising the impact of provision, university managers may be better able to develop systemic improvements to the student experience and (in the UK) a corresponding uplift in NSS and Teaching Excellence Framework (TEF) results. The article proposes a new model of the relationships between university managers, academics and students. This model could enrich understandings of the student experience in SETs and support data analysis in future research studies.
Key words
Introduction
The international trend towards neo-liberalism in higher education appears to have had a
profound impact on universities, changing the student into a customer (Tight, 2013), faculty
into ‘units of resource’ whose performance must be monitored (Shore and Wright, 1999,
559) and student evaluations of teaching (SET) into mechanisms for ensuring student
consumer choice (Olssen and Peters, 2005) with an emphasis on efficiency, productivity,
auditing and accounting (Kenny, 2017; Ball, 2012). In the competitive world of higher
education this has often been accompanied by hierarchical, centralised management practices
(Kenny, 2017) and a lack of trust in professionals (Olssen and Peters, 2005). The trust once placed in professional judgement appears to have been replaced by measurable performance outcomes: the ‘tyranny of metrics’ (Ball, 2012, 20), which may supersede professional judgements. Thus
‘performativity’, with its focus on achieving targets, has become established in universities,
where faculty are held accountable through, and expected to direct their activities towards,
measured outputs, performance indicators and appraisal (Olssen and Peters, 2005).
Student evaluations of teaching are a key part of this audit culture with, in the UK, two
external metrics linked to SETs: the National Student Survey (which gathers final year
undergraduates’ views about their university experience) and the Teaching Excellence
Framework (which assesses the quality of teaching in higher education institutions). Higher
levels of student satisfaction feed into both these performative measures, improving a university’s league table position as a consequence.
The research reported here focuses on a university which adopted the questions from the NSS
for the purposes of module evaluation across the university, arguably combining a neoliberal,
student-as-customer approach with a performative audit of lecturers’ teaching. The research
combined a survey of academics at the university (N = 101) and 3 student focus groups to
ascertain both staff and student responses to the new module evaluation system. The study
draws on a thematic analysis of lecturers’ and students’ responses to the new system and
considers performative and professional aspects of the SET, linking these to a model which maps the relationships between university managers, academics and students.
The article starts by reviewing the literature in relation to student feedback, the National
Student Survey and the theories of performativity and professionalism. It then describes the
methodology of the research before analysing the results. The discussion section puts forward
a new model illustrating the complex relationships between students, academics and
could enrich the ongoing debates about performativity and professionalism, extend the range
of issues in SETs that impact on learning and teaching in universities and serve as an
analytical tool in further research studies on how to improve the student experience.
Student evaluations of teaching
The aims of SETs link to performativity through measuring lecturers’ performance (Kember
et al, 2002; Alderman et al, 2012). Other performative purposes of student evaluations
include the provision of a systematic documentation of student experiences which allows the
comparison of standards across a university (Johnson, 2000) and assisting quality assurance
(Moore and Kuol, 2005; Blackmore, 2009). Student feedback also contributes to the improvement of teaching (Bamber and Anderson, 2012; Wright and Jenkins-Guarnieri, 2012) and to improving students’ learning.
There are questions, however, about how far SETs are able to achieve any of these aims. Even where data gathering is systematic, there may be undetected bias in questions and responses – for example, the order
and wording of questions may influence outcomes, while acquiescence bias (respondents are
more likely to give positive ratings) and indifference bias (respondents opt for the middle of
the scale) impact on responses (Yorke, 2009). It is rarely possible to ensure that students
respond to the questionnaires in exactly the same, controlled conditions (Berk, 2013). When
SET questionnaires are online, response rates tend to be low (Bamber and Anderson, 2012;
Spooren et al, 2013), resulting in the danger that a small number of extreme responses bias
the outcomes (Yorke, 2009). On the other hand, if lecturers are able to design their own
evaluations, there is a danger that the questionnaires may be ‘psychometrically putrid’ (Berk,
2013, 19), due, for example, to faulty items, ambiguous instructions or a lack of specified rating criteria.
Much of the research on SETs has focused on their validity (Spooren et al., 2013), which is
particularly important when student evaluations are used to measure a lecturer’s performance.
Extant research has indicated a number of areas of potential bias in SETs, which are linked to
the course (subject disciplines, higher academic levels, course difficulty and whether it is
compulsory), the teacher (the ‘halo’ effect of charismatic lecturers, gender, race, sexual
orientation, rank – professors are rated more highly), the university (class size, timetabling)
and the students (maturity, gender, grade expectations) (Spooren et al., 2013; Denson et al,
2010).
There are also concerns that students’ understanding of learning may be immature (Edström,
2008) or that they may not be competent to judge good teaching (Schuck et al, 2008;
Richardson, 2005). In terms of providing evidence for lecturers’ appraisal and/or promotion
decisions, there is a view that student evaluations should at best be used with caution
(Johnson, 2000) or with additional data from multiple sources (Berk, 2013).
Professionally, although student evaluations are often seen as a way for academics to improve
their teaching (Wright and Jenkins-Guarnieri, 2012; Winchester and Winchester, 2011;
Alderman et al, 2012), this too, is problematic. Richardson (2005) noted that there was no
empirical evidence to support the claims that publishing student feedback helped academics to improve. Moreover, generic evaluations may not capture the complexity of teaching (Moore and Kuol, 2005) or provide the information that lecturers need in order to
make improvements (Bamber and Anderson, 2012). Student evaluations are arguably less
likely to improve academics’ practice where questions are focused on bureaucratic needs
(Moore and Kuol, 2005). An ongoing problem with using SETs to improve teaching is the
lack of a shared understanding of effective teaching and learning by teachers, students and university managers.
Richardson (2005) argued that student feedback may make lecturers’ perceptions of their
teaching more accurate, but it does not generally change their behaviour. In Beran and
Rokosh’s (2009) survey of 357 faculty in one university, while over half the respondents
found student ratings useful, only a few had substantially modified their practice as a result.
Kember et al’s (2002) research study analysed student evaluation questionnaires at one
university over a four-year period. They concluded that the Student Feedback Questionnaire
produced ‘no evidence of an improvement in the quality of teaching during the four-year
period’ (416), based on a lack of significant change in mean scores in the SETs over four years.
If academics are unable or unwilling to respond actively to SETs, this influences the
motivation of students to complete evaluations: the belief that their feedback is not valued is
a key reason for low response rates (Hoel and Dahl, 2019).
In neoliberal universities a shift in the focus of SETs from student learning to student satisfaction has been observed (Schuck et al, 2008; Bedggood and Donovan, 2012). Evaluations to assess student
satisfaction – such as the NSS – may be restricted to questions about how far teaching and
learning have met student expectations rather than how teaching could be enhanced.
It is possible that fulfilling students’ expectations in order to increase satisfaction might result in misdirected efforts if universities do not gauge students’ expectations accurately. There may be assumptions, for example, that students take a more consumerist approach following the fees increase in England. Yet Bates and Kaye (2014) compared students’ expectations before and after the fees increase and found no significant difference in either their expectations (in terms of contact time, resources and support) or their satisfaction as a result of the fee rise.
result of the fee rise. Budd (2017) compared students at a British (fee-paying, neo-liberal
context) and a German (no fees, limited neoliberalism) university and found that students in
the British, competitive, market-driven university were neither more passive nor more instrumental
than those in the German university. Thus focusing on student satisfaction may be based on a
false premise about what academics need to do to satisfy students. Denson et al.’s (2010, 353)
analysis of 60,860 student course evaluations at one university found that the best predictors
of students’ overall satisfaction were the two optional questions set by faculty (rather than the
seven compulsory questions set by the university), which indicated that faculties appear to understand best what matters to their own students.
It appears that student feedback may not be an effective means of measuring performance, or that its usefulness for improving teaching may be limited. Arguably, when academics use their own methods of gathering feedback the
results are more likely to assist them in developing their professional skills. The case-study
research reported here involved the replacement of module feedback developed and analysed
by individual academics with an institution-wide system using the National Student Survey
(NSS) questionnaire. Unlike internal student evaluations, the National Student Survey (NSS) is a standardised national instrument.
The National Student Survey
The NSS was introduced in England, Wales and Northern Ireland in 2005. It takes the form of a questionnaire covering areas including teaching, assessment and feedback, learning resources, student support, organisation and management, careers, physical environment and
overall satisfaction (Botas and Brown, 2013). The NSS questionnaire was changed in 2017,
reducing the number of questions, adding new sections on the learning community and
student voice, and offering optional question banks to institutions (Higher Education Funding
Council for England (HEFCE), 2016a). Although it was the original version of the
questionnaire which was adopted by the case study university, the results reported here are
not concerned with particular questions but with the overall approach of adopting the NSS questions for module-level evaluation.
The purpose of the NSS was to help prospective students choose their courses and to provide
a form of quality assurance and public accountability (HEFCE, 2004). Since its
implementation the NSS has extended its reach. It now plays a role in management
information and allows universities to benchmark against other Higher Education Institutions
(HEIs) (Buckley, 2012). There is evidence that the NSS has impacted on the behaviours of universities and academics (Richardson, 2013).
More recently NSS responses have contributed to a university’s score in the Teaching
Excellence Framework (TEF) in England. The TEF was introduced in 2017 to provide clear information about teaching quality for prospective students (Department for Business, Innovation and Skills (DBIS), 2016, 13). Universities are graded
as gold, silver or bronze based on contextual data, a range of metrics on student satisfaction,
retention and employability and an additional narrative to support each university’s case for
excellence (HEFCE, 2016b). The NSS questions on ‘Teaching on my course’ (NSS Q1-4),
‘Assessment and feedback’ (NSS Q5-9) and ‘Academic support’ (NSS Q10-12) provide the
metrics for teaching quality and the learning environment in the TEF. Both the TEF and NSS
could be seen as potentially a means of improving teaching quality and empowering students
but they are also elements of the regulation, competition and performativity typical of neoliberal higher education (Heaney and Mackenzie, 2017).
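The mapping just described can be summarised in a short sketch (Python, purely illustrative; the assignment of ‘Academic support’ to the learning environment metric, rather than to teaching quality, is an assumption, since the sentence above groups the metrics together):

# Illustrative mapping of NSS question groups to TEF metric areas,
# following the description above (the exact groupings are assumptions).
nss_to_tef = {
    "Teaching on my course (NSS Q1-4)": "teaching quality",
    "Assessment and feedback (NSS Q5-9)": "teaching quality",
    "Academic support (NSS Q10-12)": "learning environment",
}
for questions, metric in nss_to_tef.items():
    print(f"{questions} -> {metric}")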
Obtaining positive NSS results has become a preoccupation for senior leaders in universities
and some HEIs have started to use NSS questions when gathering student feedback. For
example, Birmingham City University based its annual Student Experience Survey on the
NSS in order to be able to address student complaints before they reached their final year
(Kane et al, 2013). The impact of the NSS on a university’s league table position and TEF
result means that this performative measure has become a crucial element in a university’s
strategic planning. The impact on professionalism and performativity is discussed in the next
section.
The ways in which student evaluations provide information are complex and open to misinterpretation. Attempts to measure performance across a university have been undermined by issues about response rates, bias, inappropriate questions/information, lack of consistency in questionnaires and students’ questionable competence to judge teaching. As a result, student feedback may not achieve the goals of performativity (quality assurance) or of professionalism (quality enhancement).
The national system for gathering students’ evaluations of their experience at university (the
NSS) also has deficiencies in terms of validity and reliability (Kane et al, 2013; Botas and
Brown, 2013; Yorke, 2009). Its focus on student satisfaction is unlikely to provide
information on how to improve teaching and learning, although university managers may be
able to identify broad areas of student dissatisfaction that need to be addressed. While the NSS belongs to a performative culture of performance outcomes, evaluations, targets and calculations (Ball, 2012), it is concerned with
the overall student experience and as such addresses an element that seems to be overlooked
in the arguments about professionalism and performativity, namely the provision for students
at universities.
Performativity and professionalism
In theory, both performativity and professionalism have similar aims in ensuring an optimal
student experience: performativity through ‘the very best input/output equation’ (Locke, cited in Ball, 2016) and professionalism through practice ‘within a set of collegial relations’ (Ball, 2016, 1056). Ball’s (2012; 2016) concern is that performativity reorients practice towards measurable outcomes, rather than the principled judgements and complex relationships of professional practice.
Student feedback designed for university quality assurance procedures could meet
performative requirements, while evaluation for quality enhancement would satisfy the needs
of professionalism. The research reported here considers the impact of a new student evaluation system on lecturers and students, and asks whether an additional category, provision, should be added to the binary divide between performativity and professionalism.
Methodology
In this study the case university (hereafter called University A), a post-1992 university in England, introduced a new module evaluation system which adopted the pre-2017 NSS questions. Two reasons were given for doing this:
firstly, a need for consistency in gathering student feedback across the university; secondly,
in order to address a dip in NSS ratings. By using the NSS questions at an earlier stage in the
students’ experience, it was hoped (like Birmingham City University) to pinpoint areas of student dissatisfaction early enough to address them. University A differed, however, in using the NSS questions to evaluate individual modules rather than as an annual feedback mechanism.
The approach adopted by University A started with a requirement that the NSS-based
questionnaire be used in paper form for every module (undergraduates take 4-5 modules per
semester; 9 in a year). Staff could add up to 4 questions of their own to the questionnaire.
After a year, the questionnaire was moved online. University A changed its online platform in the same year, which made it impossible for staff to add their own questions to the NSS questionnaire. A ‘traffic light’ system was introduced: when managers and academics were provided with the evaluation results, items where fewer than 50% of students scored the top grades were highlighted in red; where the top scores lay between 50% and 66% they were shaded amber; and items above that threshold were shown in green.
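As an illustration, the traffic-light rule can be expressed as a short classification function (a minimal sketch in Python; the function, the example data and the exact ‘green’ threshold are illustrative assumptions, since only the red and amber bands are specified above):

def traffic_light(top_score_pct):
    # Classify an evaluation item by the percentage of students awarding
    # the top scores, following the thresholds described above.
    if top_score_pct < 50:
        return "red"    # fewer than 50% of students gave top scores
    if top_score_pct <= 66:
        return "amber"  # top scores lay between 50% and 66%
    return "green"      # assumed: anything above the amber band

# Invented example items: percentage of students giving top scores
for item, pct in [("Q1 teaching", 72), ("Q5 feedback", 55), ("Q10 support", 43)]:
    print(item, pct, "->", traffic_light(pct))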
This study took place the year after the NSS-based questionnaire went online at University A.
It was a single case study, which allows the development of an in-depth understanding of a
single institution, but the findings are not necessarily generalizable (although they could be transferable to other, similar institutions). The research combined a survey of university staff and students, interviews with Student Union Officers and academics identified as excellent practitioners, and focus group interviews with students.
The results reported here are from the survey of university staff (101 responses to an online
questionnaire) and the student focus groups. The survey questionnaire combined quantitative
and qualitative data: in addition to Likert-style tick boxes, respondents were asked for
comments on different aspects of the new evaluation system. These were in many cases
extensive and detailed. Of the respondents who provided personal data (some did not do this
because of fears that they might be recognised), 54% were women and 46% were men; they
came from all four faculties; 51% were senior lecturers, with other positions (in descending
order): subject coordinators (15%), readers, professors and principal lecturers (7-8% each),
lecturers, hourly paid lecturers and programme leaders (2-5% each); most respondents had
worked in higher education for between eleven and twenty years (44%), 28% had 1-10 years’ experience and the remainder had worked in higher education for more than twenty years.
The student focus groups were carried out after there had been no responses to a similar
survey questionnaire for students. Three focus groups took place: one with MA students from
a Coaching and Mentoring module; one with second and third year undergraduate students
from an Educational Studies module; the third was with first year students from an
introductory Business module. The focus groups took place immediately after the students
had completed the new feedback questionnaire, inviting their responses to the questionnaire itself.
The initial approach to the data analysis was based on grounded theory (Glaser and Strauss,
1967). Grounded theory enables researchers to develop theory from their data using an
iterative process of identifying themes and codes through the constant comparative method to
make comparisons with and between data at every stage of the analysis (Glaser and Strauss,
1967; Strauss and Corbin, 1990; Charmaz, 2006). It may, however, be difficult for
researchers to bracket away their previous knowledge when analysing data (Thomas and
James, 2006) and proponents of grounded theory recognise that ‘we construct theory through
our past and present involvements in interactions with people, perspectives and research
practices’ (Charmaz, 2006, 10). In other words, theory can arise from an interaction between
the data and the literature review or conceptual framework. In my study I first read through
all the qualitative data several times, identifying themes and patterns which emerged directly
from the data. After establishing these key themes, I then considered the theories from the literature review (performativity and professionalism) and found that it was possible to use these concepts to help to categorise the data. Not all the themes were covered by these two concepts, however, so a third category, provision, was developed, which encompassed the remaining themes. The thematic analysis was scrutinised to check
whether the background factors (role, age, gender etc) may have influenced responses. This
was not the case: a range of respondents was represented in each of the themes and none of the background factors appeared to have influenced the responses.
Results
The focus of this article is on the qualitative data from both the questionnaire (the extensive
comments written in response to the open questions) and the student focus groups. This
section begins with the issue of response rates before moving on to the categories of performativity, professionalism and provision.
Student response rates
A major problem when the new system went online was that the response rates were low
(similar to Bamber and Anderson’s (2012) experience). In the first year the online evaluation was introduced at University A, the overall response rate ranged from 18% to 25% per module, with much variation between modules. Eighty-four per cent of respondents indicated that the response rate was lower after the online evaluation questionnaire was introduced. Academics’ comments included:
• ‘The student response rate has dropped off a cliff as a result of the online evaluations’
(Respondent 57)
• ‘Due to the massive drop in completion rate those who do reply waver all over the […]’
(Respondent 56)
The low response rate impacted on lecturers’ ability to interpret student feedback accurately.
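A small worked example (invented numbers, not data from this study) shows why: at a 20% response rate on a 40-student module, only eight students respond, so two extreme ratings are enough to move the module mean substantially on a five-point scale.

# Minimal sketch (illustrative numbers only): volatility of SET means
# at low response rates. Eight of 40 students respond (20%).
ratings = [4, 4, 5, 4, 4, 5, 4, 4]
print(sum(ratings) / len(ratings))  # 4.25

# Two disaffected students respond in place of two typical ones:
ratings_with_outliers = ratings[:-2] + [1, 1]
print(sum(ratings_with_outliers) / len(ratings_with_outliers))  # 3.5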
Performativity
Issues relating to performativity concerned: a) the extent to which the evaluation feedback
was relevant to academic staff compared to university managers; b) staff assumptions about
the main reasons for the online questionnaire (for monitoring and judgement as opposed to
improved performance), and linked to the latter, c) staff anxieties about whether student
responses were accurate, together with students’ explanations about their responses.
A question about how far the new feedback questionnaire provided information that lecturers
valued indicated that more respondents felt that the evaluation was ‘very important’ to the university than to their own teaching practice.
A number of staff suggested that the new online evaluation system was designed for
managers to monitor performance rather than for staff to make improvements, for example,
Trust the staff to talk to the students and understand their nuanced interests and needs,
rather than making them (staff) feel as though this is simply a method by which their performance is being policed.
This comment about lack of trust and the ‘policing’ of performance could be seen as an
indication of a typical performative culture with a clear division between management and
staff.
There were concerns about reliability, including whether students could understand the
questions. Fifteen respondents identified the potential for bias in the feedback because:
• some students focus on enjoyment rather than learning (Respondent 88);
• some students who have not attended any teaching sessions still complete the online evaluations;
• there are ‘a few rogue students with extreme views at either end of the spectrum’
(Respondent 32) and the online system ‘encourages the disaffected to vent their […]’;
• some students’ feedback is influenced by their grade, for example, ‘bad feedback
from a student is quite often related to a low grade for coursework’ (Respondent 86).
In terms of reliability, the student focus groups seemed to indicate that the questionnaires
may not convey students’ views accurately. One student used only the highest or lowest
scores ‘because whoever receives the forms needs definite answers’ (Education Studies
Focus Group). Another student was reluctant to give low scores: ‘I feel bad about giving 1s.
The lowest mark I gave is 2 or 3, even if something is awful’ (Business focus group).
Students also described using the questionnaires to address issues about their own
performance on a module, for example, one of the Education Studies students had written
comments explaining why external matters had affected his assignment. One of the Business
students used the comment box in the feedback sheet to comment on: ‘all the assignments
cropping up during the same week – it's a nightmare’. This indicates a degree of
performativity on the students’ part: an instrumental focus on the assessment rather than on their learning.
The lecturers’ comments resonate with an earlier model of lecturers’ responses to student
feedback, in one category of which lecturers blame the students for poor feedback instead of
taking responsibility for the student experience in seeking to make appropriate improvements
(Arthur, 2009). This links to a performative culture in which judgement and fault-finding
replace collegiality and support. One respondent made this point explicitly:
[…] isolation and more about evaluating the success of the module as a learning event […] the current NSS-influenced questionnaire[s] tend much more towards a ‘rate my teacher’ culture and I would suggest such a culture does not enhance student learning
(Respondent 60).
This response encapsulates the difference between measuring student satisfaction and evaluating the learning experience (‘the success of the module as a learning event’) in order to improve it. The former could be identified as performativity and the latter (student learning and performance enhancement) as professionalism.
Professionalism
Issues relating to professionalism concerned the use of the evaluation data to improve
lecturers’ practice. Twenty-one respondents noted that the online questionnaire did not give them the information they needed because:
• the questions were too generic so did not give useful feedback about particular
modules: ‘A one size fits all approach is not a good idea. Certainly for modules with a […]’;
• the questionnaire did not allow module leaders to distinguish between modules taught
at different sites, between single, double and triple modules, or, when team teaching,
between different lecturers: ‘[The questionnaire needs a] box to indicate the site
where they are studying’ (Respondent 41); ‘No difference is made between single,
double and triple modules’ (Respondent 89); ‘We team teach on many of the modules […] based on the NSS survey, which is a satisfaction survey so that just says whether or not they are happy. There is very little that informs the development of teaching […]’;
• there were no explanations which would help improvements: ‘why did some students
think the module was not well organised?’ (Respondent 6); ‘student evaluation…
always baffles me’ (Respondent 54); ‘only the qualitative responses are of any
use’ (Respondent 31).
There was a strong desire (35 respondents) for lecturers to have more input into the design of
the evaluation questionnaire in order that the questions could be more module-specific. This desire was expressed by one respondent as follows:
The current system is too broad and cannot be applied effectively to implement
change on my module in a manner that can identify very clearly what the students
find difficult, be it the delivery from specific lecturers or the material presented
(Respondent 91)
The student focus groups confirmed the lecturers’ concerns that the evaluation questionnaire
failed to provide a sufficiently nuanced reflection of their views about the module. For
example, one MA student had responded ‘neutral’ to a question about feedback and then
explained what lay behind that score: the academic standard was higher than expected, the
feedback was not sufficiently clear to help with future work, the marking was too formulaic
and she had felt discouraged by the result and would have liked more enthusiasm from the
marker. None of this feedback was conveyed by ticking the ‘neutral’ box. Students also raised
similar issues to the academics. One of the Business students said: ‘There were four seminar
leaders contributing to this course, but we could not give comments on each lecturer… We
had different teachers in different terms – I wanted to answer yes for one person and no for
another but I was not able to do so’. Spooren et al (2013) noted the danger that students may
not complete SETs if the questions do not enable them to express their views.
Academics were concerned about the interpretation of the results of student feedback and
how they could use it more effectively. Several respondents indicated that they carried out
their own formative evaluations mid-semester, using a mixed range of methods (for example,
focus groups, discussions with student representatives, asking students to rate aspects of the
module with red, amber or green cards), in order to gauge students’ views in time to adjust
their teaching.
In addition to comments reflecting performativity and professionalism, there were also comments about provision: a third area, concerned with what the university was providing for its students.
Provision
Fifty-seven comments from 35 questionnaire respondents drew attention to the organisational constraints which prevented lecturers from responding effectively to student evaluations. One respondent explained:
‘It is not always easy to teach students in the way they work best (i.e. small-group teaching) […]’
Timetabling, inappropriate teaching rooms and campus facilities were also highlighted. One
respondent noted the difficulties in providing a quick response to such issues: ‘Students often
raise issues outside the control of the teaching team with regard to areas such as teaching
space quality, noise, cleanliness, IT systems, library provision. By the time a response to
some of these issues has been raised […] it is many many months later’ (Respondent 56).
Other respondents commented on the nature of the subject, intensity of teaching and the
difficulty in making minor changes midway through modules to respond to student feedback. Several respondents objected to being judged on issues over which they had no control. One suggestion was ‘Only ask questions […]’.
There were also comments about student expectations, with several respondents suggesting
that these were unrealistic, for example: ‘If a student who is used to being online all hours of
the day and studies mainly during the night does not get a prompt response from a lecturer at
3 am one morning and considers they have not been able to contact the module leader when
they needed to, is it fair that the academic gets marked down?’ (Respondent 54). One lecturer
suggested: ‘What is often more required are colleagues with the right skills to say no […]’.
Other aspects of provision which were criticised by the academics were management decisions affecting students. These issues appear to be overlooked in the divide between professionalism and
performativity, but are important in the overall student experience of learning and teaching, as the focus group data confirmed.
The student focus groups revealed that some students score modules based on aspects of
provision, rather than on the learning and teaching experience. For example, in relation to the
campus where the teaching session takes place, one of the Business students said: ‘I would
give 1 [the lowest score] for something that didn't work. For example, a module which starts
at 5 pm at [a different campus] does not work for me…’ Other students were influenced by
the timing of the teaching session: ‘I am more likely to give low marks to late afternoon [sessions]’.
Discussion
University A’s decision to use the NSS questionnaire as the basis for student module
evaluations clearly links to an agenda of performativity (Ball, 2016). Its main purpose was to monitor performance, comparing standards between modules and highlighting the performance of individual lecturers through
its ‘traffic light’ system. The issues of identifying teaching improvements (Wright and
Jenkins-Guarnieri, 2012) and enabling students to make informed choices about modules
(Alderman et al, 2012) were of secondary importance. However, although the university’s
focus appeared to be on quality assurance (Bamber and Anderson, 2012), its intention was
also to improve the student experience by finding areas of student dissatisfaction that needed
to be addressed. This indicates that quality assurance may be seen as a first step towards quality enhancement.
The lecturers’ responses were critical of the university’s performative approach, suggesting
that they did not simply reorient themselves to measurable outcomes as Ball (2016) proposes.
Unlike Schuck et al (2008) and Richardson (2005), none of the respondents indicated that the
students were unqualified to judge their teaching. In addition to the need for reliable SETs,
the respondents argued for more nuanced measures of teaching quality in order to be able to
make improvements. When Kember et al (2002) found that SETs had not impacted on
learning and teaching over a four-year period, they identified a number of possible
explanations, including a lack of incentive to use the data (because teaching was not valued);
the SET questionnaire being insufficiently developmental; and the need for counselling to support change. The lecturers in this study emphasised the second of these issues: the shortcomings of a questionnaire which focused on satisfaction rather than on information that could support improvement.
While the data indicated aspects of performativity (for example, anxiety about performance
measures) and professionalism (for example, a focus on how to improve teaching) in the
lecturers’ responses, there was also evidence of a concern about provision. Provision links to
the students’ experience, in terms of teaching and learning on the one hand and, on the other,
all the additional factors that contribute, such as IT, timetabling, class sizes and library
facilities. Ultimately, provision is about the university’s contract with each student and the
need to fulfil the student’s expectations in relation to that contract. Provision may also be linked to the performativity agenda, whereby university managers seek to measure and guide academics’ performance in order to fulfil that contract.
Figure 1 [not available in this accepted manuscript] illustrates the ways in which the
relationships between the university managers, academics and students link to performativity,
professionalism and provision in addressing the quality of student experience. In the figure,
the relationship between university managers and students is identified as provision (the
contractual relationship described above), the relationship between university managers and
academics is performativity (setting targets, judging performance, assuring quality) and the relationship between academics and students is professionalism. Students are primarily concerned, on the one hand, with the quality of teaching and learning,
represented by the professionalism of their lecturers, and, on the other hand, aspects of
provision that impact on their learning, such as when and where their classes are timetabled,
and whether they have sufficient resources. Meanwhile academics are represented as focusing both on their professionalism towards students and on managers’ judgements of their performance (performativity). As indicated above, the university managers have a contractual responsibility to students for an acceptable level of provision, and this contributes to the overall quality of the student experience. In the figure, each of the circles is the same size, reflecting the similar numbers of comments under each category, but
I would argue that these relationships have the potential to become unbalanced, for example, if performative demands come to dominate professional judgement. There is a ‘sweet spot’ in the centre of the figure where, in my view, provision supports professionalism and performativity in optimising the student experience.
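Since Figure 1 is unavailable in this manuscript, the three relationships it depicts can be restated as a simple data structure (an illustrative Python sketch of the model as described above, not part of the original article):

# The three dyadic relationships described for Figure 1.
model = {
    ("university managers", "students"): "provision",        # contractual relationship
    ("university managers", "academics"): "performativity",  # targets, judgement, quality assurance
    ("academics", "students"): "professionalism",            # quality of teaching and learning
}
for (side_a, side_b), relation in model.items():
    print(f"{side_a} <-> {side_b}: {relation}")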
These relationships are, of course, more complex than the figure suggests. Although students’
primary relationship with the university may appear to be a contractual one, with students as
the ‘customers’, their identity, values, personal friendships and social development also
contribute to their relationship with their university (Brennan and David, 2010). Tight (2013)
argues that students should not be viewed as customers, partly because they are active
participants in their learning and also because the potential benefits of their degree will only be realised in the longer term.
Even if the reality is that students continue to want agency over their learning, and expect
challenge and independence as well as support and positive outcomes (Bates and Kaye, 2014;
Budd, 2017), university managers who are focused on student satisfaction inevitably adopt a consumerist approach, with the fulfilment of student expectations as a defining aspect of the managers’ own professional duties. Performance targets are set in
order to demonstrate to students, governors (and, potentially, a court of law) that the university has delivered an acceptable standard of provision. Universities are also, however, concerned about professionalism as well as
performativity. For example, University A abandoned the use of the NSS-style online
questionnaire four years after its introduction because the student response rates continued to
be too low to be useful and because it did not support improvements to learning and teaching.
Meanwhile, academics have to navigate the tensions between fulfilling performance targets
set by university managers - for example, achieving ‘green’ in the student evaluation traffic
light system – and achieving what they identify as their professional responsibilities towards
students. In some cases, these coincide, for example, when both recognise the importance of
student evaluations, but in others they conflict, for example, when lecturers would like
student feedback which helps them to make improvements, but the university creates a
system which prevents that from happening. Professionalism also has links with provision, in
that students may associate aspects of provision directly with their learning and teaching experience.
Despite its limitations in relation to the above complexities, the model does demonstrate the
links between these concepts and the relationships between students, university managers and
academics in a neo-liberal setting. It seems likely that an increasing focus on provision has a corresponding impact on both performativity and academics’ professionalism.
Conclusion
This research study into a system of online module evaluations based on the NSS survey
revealed a number of concerns: a low response rate made the student feedback difficult to interpret, and lecturers reported that the results of the questionnaire did not help them to improve their practice. The findings linked to issues relating to performativity, professionalism and provision, with provision appearing to be an overlooked element of the student experience. It is arguably because of the influence of provision, particularly in the wake of full-cost student fees, that it is possible to speak of restrictions on academics’ professionalism.
While the research focused on one case study university, making the results transferable (to
other, similar institutions), rather than generalizable, it is hoped that the model will enhance the ongoing debates about performativity and professionalism and serve as an analytical tool that may be of use in future studies of SETs. Further research could examine these
relationships more closely and include the views of university managers as well as students
and academics.
University managers will undoubtedly continue to be highly concerned about the NSS score and its impact on
university rankings, especially now that the TEF incorporates the NSS results in its
evaluation of a university’s teaching excellence. However, using the NSS survey as a means
of evaluating individual modules is not recommended, based on the outcomes of this research study.
References
Alderman, L., S. Towers, and S. Bannah. 2012. “Student feedback systems in higher
education: a focused literature review and environmental scan.” Quality in Higher Education
18 (3): 261-280.
Ball, S. 2016. “Neoliberal education? Confronting the slouching beast.” Policy Futures in Education 14 (8): 1046-1059.
Bamber, V. and S. Anderson. 2012. “Evaluating learning and teaching: institutional needs
and individual practices.” International Journal for Academic Development 17 (1): 5-18.
Bates, E. and L. Kaye. 2014. “’I’d be expecting caviar in lectures’: the impact of the new fee regime on undergraduate students’ expectations of Higher Education.” Higher Education 67 (5): 655-673.
Bedggood, R.E. and J.D. Donovan. 2012. “University performance evaluations: what are we really measuring?” Studies in Higher Education 37 (7): 825-842.
Berk, R.A. 2013. “Top five flashpoints in the assessment of teaching effectiveness.” Medical Teacher 35 (1): 15-26.
Blackmore, J. 2009. “Academic pedagogies, quality logics and performative universities: evaluating teaching and what students want.” Studies in Higher Education 34 (8): 857-872.
Botas, P.C.P. and R. Brown. 2013. “The not so ‘Holy Grail’: the impact of NSS feedback on
the quality of teaching and learning in higher education in the UK.” In Enhancing Student
Feedback and Improvement in Tertiary Education, edited by M. Shah and C.S. Nair, 45-56. Abu Dhabi: Commission for Academic Accreditation Quality Series No. 5.
Brennan, J. and M. David. 2010. Teaching, Learning and the Student Experience in UK Higher Education.
Budd, R. 2017. “Undergraduate orientations towards higher education in Germany and England: problematizing the notion of ‘student as customer’.” Higher Education 73 (1): 23-37.
Buckley, A. 2012. Making it count: reflecting on the National Student Survey in the process of enhancement. York: Higher Education Academy.
DBIS (Department for Business, Innovation and Skills). 2016. Success as a Knowledge Economy: Teaching Excellence, Social Mobility and Student Choice. London: DBIS.
https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/523396/bis-
Denson, N., T. Loveday and H. Dalton. 2010. “Student evaluation of courses: what predicts satisfaction?” Higher Education Research & Development 29 (4): 339-356.
Edström, K. 2008. “Doing course evaluation as if learning matters most.” Higher Education Research & Development 27 (2): 95-106.
Glaser, B. and A. Strauss. 1967. The Discovery of Grounded Theory: strategies for qualitative research. Chicago: Aldine.
Heaney, C. and H. Mackenzie. 2017. “The Teaching Excellence Framework: Perpetual pedagogical control in postwelfare capitalism.” Accessed April 2018.
HEFCE (Higher Education Funding Council for England). 2016a. HEFCE Circular Letter 2016/32.
http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/2016/201632/HEFCE2016_32.pdf
HEFCE. 2004. National Student Survey 2005: Outcomes of Consultation and Guidance on
Next Steps.
http://webarchive.nationalarchives.gov.uk/20120118171947/http://www.hefce.ac.uk/pubs/hef
Hoel, A. and T. Dahl. 2019. “Why bother? Student motivation to participate in student evaluations of teaching.” Assessment & Evaluation in Higher Education 44 (3): 361-374.
Kane, D., L. Millard, and J. Williams. 2013. “Transforming the student experience in the UK from 1989.” In Enhancing Student Feedback and Improvement in Tertiary Education, edited by M. Shah and C.S. Nair, 57-75. Abu Dhabi: Commission for Academic Accreditation Quality Series No. 5.
Kember, D., D. Leung, and K.P. Kwan. 2002. “Does the use of student feedback questionnaires improve the overall quality of teaching?” Assessment & Evaluation in Higher Education 27 (5): 411-425.
Kenny, J. 2017. “Academic work and performativity.” Higher Education 74 (5): 897-913.
Moore, S. and N. Kuol. 2005. “Students evaluating teachers: exploring the importance of faculty reaction to feedback on teaching.” Teaching in Higher Education 10 (1): 57-73.
Olssen, M. and M. Peters. 2005. “Neoliberalism, higher education and the knowledge economy: from the free market to knowledge capitalism.” Journal of Education Policy 20 (3): 313-345.
Richardson, J. 2005. “Instruments for obtaining student feedback: a review of the literature.” Assessment & Evaluation in Higher Education 30 (4): 387-415.
Richardson, J. 2013. “The National Student Survey and its impact on UK Higher Education.” In Enhancing Student Feedback and Improvement in Tertiary Education, edited by M. Shah and C.S. Nair, 76-84. Abu Dhabi: Commission for Academic Accreditation Quality Series No. 5.
Schuck, S., S. Gordon and J. Buchanan. 2008. “What are we missing here? Problematising wisdoms on teaching quality and professionalism in higher education.” Teaching in Higher Education 13 (5): 537-547.
Shore, C. and S. Wright. 1999. “Audit culture and anthropology: neo-liberalism in British
higher education.” The Journal of the Royal Anthropological Institute 5 (4): 557-575.
Spooren, P., B. Brockx and D. Mortelmans. 2013. “On the validity of student evaluation of teaching: the state of the art.” Review of Educational Research 83 (4): 598-642.
Strauss, A., and J. Corbin. 1990. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, CA: Sage.
Thomas, G. and D. James. 2006. “Reinventing grounded theory: some questions about theory, ground and discovery.” British Educational Research Journal 32 (6): 767-795.
Tight, M. 2013. “Students: customers, clients or pawns?” Higher Education Policy 26 (3): 291-307.
Winchester, T.M. and M. Winchester. 2011. “Exploring the impact of faculty reflection on weekly student evaluations of teaching.” International Journal for Academic Development 16 (2): 119-131.
Wright, S.L. and M.A. Jenkins-Guarnieri. 2012. “Student evaluations of teaching: combining the meta-analyses and demonstrating further evidence for effective use.” Assessment & Evaluation in Higher Education.