
Evaluating student satisfaction, restricting lecturer professionalism: outcomes of using the UK National Student Survey questionnaire for internal student evaluation of teaching (SET)

Dr Linet Arthur
School of Education, Oxford Brookes University, Oxford UK

[email protected]


Abstract

In the neo-liberal context of a UK university, responding to student feedback in order to raise

student satisfaction levels is important in improving National Student Survey (NSS) scores.

This article focuses on the impact of a UK university’s new student feedback questionnaire -

for individual modules - which used the NSS questions. The research draws on survey data

(N = 101) identifying lecturers' views, together with three student focus groups. The outcomes raised issues relating to performativity, professionalism and 'provision', the latter defined as the university's contract with each student, including the aspects that affect the student learning experience but are beyond the lecturers' control, for example class sizes and timetables. The results indicate that, by recognising the impact of provision, university managers may be better able to develop systemic improvements to the student experience and (in the UK) a

corresponding uplift in NSS and Teaching Excellence Framework (TEF) results. The article

puts forward a model linking performativity, professionalism and provision to the

relationships between university managers, academics and students. This model could enrich

understandings of professionalism and performativity, extend the range of issues affecting

student experience in SETs and support data analysis in future research studies.

Key words

Evaluation, feedback, NSS, performativity, professionalism, provision

Introduction

The international trend towards neo-liberalism in higher education appears to have had a

profound impact on universities, changing the student into a customer (Tight, 2013), faculty

into ‘units of resource’ whose performance must be monitored (Shore and Wright, 1999,

559) and student evaluations of teaching (SET) into mechanisms for ensuring student

satisfaction and academic effectiveness. Neoliberal institutions focus on markets and

consumer choice (Olssen and Peters, 2005) with an emphasis on efficiency, productivity,

auditing and accounting (Kenny, 2017; Ball, 2012). In the competitive world of higher

education this has often been accompanied by hierarchical, centralised management practices

(Kenny, 2017) and a lack of trust in professionals (Olssen and Peters, 2005). The

‘individualised, self-managed and intrinsically motivating’ role of academics (Kenny, 2017,

889) appears to have been replaced by measurable performance outcomes: the ‘tyranny of

metrics’ (Ball, 2012, 20) which may supersede professional judgements. Thus

‘performativity’, with its focus on achieving targets, has become established in universities,

where faculty are held accountable through, and expected to direct their activities towards,

measured outputs, performance indicators and appraisal (Olssen and Peters, 2005).

Student evaluations of teaching are a key part of this audit culture with, in the UK, two

external metrics linked to SETs: the National Student Survey (which gathers final year

undergraduates’ views about their university experience) and the Teaching Excellence

Framework (which assesses the quality of teaching in higher education institutions). Higher

levels of student satisfaction feed into both these performative measures, improving a

university’s league table position, with a potential rise in student applications as a

consequence.

The research reported here focuses on a university which adopted the questions from the NSS

for the purposes of module evaluation across the university, arguably combining a neoliberal,

student-as-customer approach with a performative audit of lecturers’ teaching. The research

combined a survey of academics at the university (N = 101) with three student focus groups to

ascertain both staff and student responses to the new module evaluation system. The study

draws on a thematic analysis of lecturers’ and students’ responses to the new system and

considers performative and professional aspects of the SET, linking these to a model which

also includes university provision.

The article starts by reviewing the literature in relation to student feedback, the National

Student Survey and the theories of performativity and professionalism. It then describes the

methodology of the research before analysing the results. The discussion section puts forward

a new model illustrating the complex relationships between students, academics and

university managers in terms of professionalism, performativity and provision. This model

could enrich the ongoing debates about performativity and professionalism, extend the range

of issues in SETs that impact on learning and teaching in universities and serve as an

analytical tool in further research studies on how to improve the student experience.

Purposes of student feedback

The aims of SETs link to performativity through measuring lecturers’ performance (Kember

et al, 2002; Alderman et al, 2012). Other performative purposes of student evaluations

include the provision of systematic documentation of student experiences, which allows the

comparison of standards across a university (Johnson, 2000) and assisting quality assurance

(Moore and Kuol, 2005; Blackmore, 2009). Student feedback also contributes to

professionalism, by enabling lecturers to identify potential teaching improvements (Bamber

and Anderson, 2012; Wright and Jenkins-Guarnieri, 2012) and improving students’

attainment of learning outcomes (Denson et al, 2010).

There are questions, however, about how far SETs are able to achieve any of these aims. On

the performative side, while centrally-developed SET questionnaires might appear to be

systematic, there may be undetected bias in questions and responses – for example, the order

and wording of questions may influence outcomes, while acquiescence bias (respondents are

more likely to give positive ratings) and indifference bias (respondents opt for the middle of

the scale) impact on responses (Yorke, 2009). It is rarely possible to ensure that students

respond to the questionnaires in exactly the same, controlled conditions (Berk, 2013). When

SET questionnaires are online, response rates tend to be low (Bamber and Anderson, 2012;

Spooren et al, 2013), resulting in the danger that a small number of extreme responses bias

the outcomes (Yorke, 2009). On the other hand, if lecturers are able to design their own

evaluations, there is a danger that the questionnaires may be ‘psychometrically putrid’ (Berk,

2013, 19), due, for example, to faulty items, ambiguous instructions or a lack of specified teaching behaviours. This prevents a comparison of standards across the university.
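
To illustrate the low-response-rate danger noted above with a small numerical sketch (hypothetical figures, not data from this study): when only a handful of students reply, a single extreme rating can move a module's mean score substantially.

```python
# Hypothetical module of 30 students rating on a 1-5 scale; only 5 respond.
responses = [4, 4, 5, 4, 1]  # one disaffected outlier among the five replies

mean_with_outlier = sum(responses) / len(responses)        # 3.6
mean_without = sum(responses[:-1]) / len(responses[:-1])   # 4.25

print(mean_with_outlier, mean_without)
# One extreme response shifts the reported mean by 0.65 of a point,
# while the views of the 25 non-respondents are not represented at all.
```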

Much of the research on SETs has focused on their validity (Spooren et al., 2013), which is

particularly important when student evaluations are used to measure a lecturer’s performance.

Extant research has indicated a number of areas of potential bias in SETs, which are linked to

the course (subject disciplines, higher academic levels, course difficulty and whether it is

compulsory), the teacher (the ‘halo’ effect of charismatic lecturers, gender, race, sexual

orientation, rank – professors are rated more highly), the university (class size, timetabling)

and the students (maturity, gender, grade expectations) (Spooren et al., 2013; Denson et al,

2010).

There are also concerns that students' understanding of learning may be immature (Edström,

2008) or that they may not be competent to judge good teaching (Schuck et al, 2008;

Richardson, 2005). In terms of providing evidence for lecturers’ appraisal and/or promotion

decisions, there is a view that student evaluations should at best be used with caution

(Johnson, 2000) or with additional data from multiple sources (Berk, 2013).

Professionally, although student evaluations are often seen as a way for academics to improve

their teaching (Wright and Jenkins-Guarnieri, 2012; Winchester and Winchester, 2011;

Alderman et al, 2012), this too, is problematic. Richardson (2005) noted that there was no

empirical evidence to support the claims that publishing student feedback helped academics

to enhance teaching. Centrally-designed questionnaires may not evaluate all aspects of

teaching (Moore and Kuol, 2005) or provide the information that lecturers need in order to

make improvements (Bamber and Anderson, 2012). Student evaluations are arguably less

likely to improve academics’ practice where questions are focused on bureaucratic needs

(Moore and Kuol, 2005). An ongoing problem with using SETs to improve teaching is the

lack of a shared understanding of effective teaching and learning by teachers, students and

the designers of SETs (Spooren et al, 2013).

Richardson (2005) argued that student feedback may make lecturers’ perceptions of their

teaching more accurate, but it does not generally change their behaviour. In Beran and

Rokosh’s (2009) survey of 357 faculty in one university, while over half the respondents

found student ratings useful, only a few had substantially modified their practice as a result.

Kember et al’s (2002) research study analysed student evaluation questionnaires at one

university over a four-year period. They concluded that the Student Feedback Questionnaire

produced ‘no evidence of an improvement in the quality of teaching during the four-year

period' (416), based on a lack of significant change in mean scores in the SETs over the four years.

If academics are unable or unwilling to respond actively to SETs, this influences the

motivation of students to complete evaluations: the belief that their feedback is not valued is

a key reason for low response rates (Hoel and Dahl, 2019).

In neoliberal universities a shift in the focus of SETs from student learning to student

satisfaction may further reduce the possibility of identifying improvements to teaching

(Schuck et al, 2008; Bedggood and Donovan, 2012). Evaluations to assess student

satisfaction – such as the NSS – may be restricted to questions about how far teaching and

learning have met student expectations rather than how teaching could be enhanced.

It is possible that fulfilling students’ expectations in order to increase satisfaction might result

in improvements to teaching, but it is questionable whether universities are able to identify

students’ expectations accurately. There may be assumptions, for example, that students take

a more instrumental, performative approach to their university education as a result of the

fees increase in England. Yet Bates and Kaye (2014) compared students’ expectations before

and after the fees increase and found no significant difference in either

their expectations (in terms of contact time, resources and support) or their satisfaction as a

result of the fee rise. Budd (2017) compared students at a British (fee-paying, neo-liberal

context) and a German (no fees, limited neoliberalism) university and found that students in

the British, competitive, market-driven university were no more passive or instrumental

than those in the German university. Thus focusing on student satisfaction may be based on a

false premise about what academics need to do to satisfy students. Denson et al.’s (2010, 353)

analysis of 60,860 student course evaluations at one university found that the best predictors

of students’ overall satisfaction were the two optional questions set by faculty (rather than the

seven compulsory questions set by the university), which indicated that ‘faculties appear to

be more in tune with their students’ needs and experiences’.

It appears that student feedback may not be an effective means of measuring performance or

improving teaching and learning, so its contribution to performativity and professionalism

may be limited. Arguably, when academics use their own methods of gathering feedback the

results are more likely to assist them in developing their professional skills. The case-study

research reported here involved the replacement of module feedback developed and analysed

by individual academics with an institution-wide system using the National Student Survey

(NSS) questionnaire. Unlike internal student evaluations, the NSS provides an external measure of university performance, and it could be seen as strongly performative in aims and style, as the section below explains.

National Student Survey

The NSS was introduced in England, Wales and Northern Ireland in 2005. It takes the form

of an online questionnaire for final year undergraduate students, with (originally) 71

questions on a range of topics including teaching, assessment, course content, learning

resources, student support, organisation and management, careers, physical environment and

overall satisfaction (Botas and Brown, 2013). The NSS questionnaire was changed in 2017,

reducing the number of questions, adding new sections on the learning community and

student voice, and offering optional question banks to institutions (Higher Education Funding

Council for England (HEFCE), 2016a). Although it was the original version of the

questionnaire which was adopted by the case study university, the results reported here are

not concerned with particular questions but with the overall approach of adopting the NSS

questionnaire for module evaluations.

The purpose of the NSS was to help prospective students choose their courses and to provide

a form of quality assurance and public accountability (HEFCE, 2004). Since its

implementation the NSS has extended its reach. It now plays a role in management

information and allows universities to benchmark against other Higher Education Institutions

(HEIs) (Buckley, 2012). There is evidence that the NSS has impacted on the behaviours of

HEIs, academics and students (Richardson, 2013).

More recently NSS responses have contributed to a university’s score in the Teaching

Excellence Framework (TEF) in England. The TEF was introduced in 2017 to ‘provide clear,

understandable information to students about where teaching quality is outstanding’

(Department for Business, Innovation and Skills (DBIS), 2016, 13). Universities are graded

as gold, silver or bronze based on contextual data, a range of metrics on student satisfaction,

retention and employability and an additional narrative to support each university’s case for

excellence (HEFCE, 2016b). The NSS questions on ‘Teaching on my course’ (NSS Q1-4),

‘Assessment and feedback’ (NSS Q5-9) and ‘Academic support’ (NSS Q10-12) provide the

metrics for teaching quality and the learning environment in the TEF. Both the TEF and NSS

could be seen as potentially a means of improving teaching quality and empowering students

but they are also elements of the regulation, competition and performativity typical of

neoliberal ideologies in higher education (Heaney and Mackenzie, 2017).

Obtaining positive NSS results has become a preoccupation for senior leaders in universities

and some HEIs have started to use NSS questions when gathering student feedback. For

example, Birmingham City University based its annual Student Experience Survey on the

NSS in order to be able to address student complaints before they reached their final year

(Kane et al, 2013). The impact of the NSS on a university’s league table position and TEF

result means that this performative measure has become a crucial element in a university's

strategic planning. The impact on professionalism and performativity is discussed in the next

section.

Professionalism and performativity

The ways in which student evaluations provide information are complex and open to

criticism. As indicated above, the potential benefits of student feedback internal to a

university have been undermined by issues about response rates, bias, inappropriate

questions/information, lack of consistency in questionnaires, students’ questionable

judgements of teaching and academics not responding to students’ evaluations of their

teaching. As a result, student feedback may not achieve the goals of performativity (quality

assurance) or professionalism (quality enhancement).

The national system for gathering students’ evaluations of their experience at university (the

NSS) also has deficiencies in terms of validity and reliability (Kane et al, 2013; Botas and

Brown, 2013; Yorke, 2009). Its focus on student satisfaction is unlikely to provide

information on how to improve teaching and learning, although university managers may be

able to identify broad areas of student dissatisfaction that need to be addressed. While the

NSS appears to be an instrument of performativity, linked to indicators, measurable

performance outcomes, evaluations, targets and calculations (Ball, 2012), it is concerned with

the overall student experience and as such addresses an element that seems to be overlooked

in the arguments about professionalism and performativity, namely the provision for students

at universities.

In theory, both performativity and professionalism have similar aims in ensuring an optimal

student experience, performativity through ‘the very best input/output equation’ (Locke,

2015, 248); professionalism through ‘a pedagogy of context and experience, intelligible

within a set of collegial relations’ (Ball, 2016, 1056). Ball’s (2012; 2016) concern is that

performative systems undermine professionalism by orienting academics’ professional

practice towards measurable outcomes, rather than the principled judgements and complex

understandings derived from experience. The views of students appear to be overlooked in

this argument although a richer learning experience for students is implied.

Student feedback designed for university quality assurance procedures could meet

performative requirements, while evaluation for quality enhancement would satisfy the needs

of professionalism. The research reported here considers the impact of a new student

evaluation system in relation to performativity and professionalism, and argues that an

additional category, provision, should be added to the binary divide between performativity

and professionalism.

Student feedback at University A

In this study the case university (hereafter called University A), a post-1992 university in

England with approximately 17,000 students, introduced a university-wide module evaluation

system which adopted the pre-2017 NSS questions. Two reasons were given for doing this:

firstly, a need for consistency in gathering student feedback across the university; secondly,

in order to address a dip in NSS ratings. By using the NSS questions at an earlier stage in the

students' experience, the university hoped (like Birmingham City University) to pinpoint areas of

dissatisfaction and make improvements. University A differed from Birmingham City,

however, in using the NSS questions to evaluate individual modules rather than as an annual

feedback mechanism.

The approach adopted by University A started with a requirement that the NSS-based

questionnaire be used in paper form for every module (undergraduates take 4-5 modules per

semester; 9 in a year). Staff could add up to 4 questions of their own to the questionnaire.

After a year, the questionnaire was moved online. University A changed its online platform

the same year, which made it impossible for staff to add their own questions to the NSS

questionnaire. A ‘traffic light’ system was introduced: when managers and academics were

provided with the evaluation results, items where fewer than 50% of students scored the top

grades were highlighted in red; where the top scores lay between 50% and 66% they were shaded

in amber; the other results were green.
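
The reporting rule behind the 'traffic light' system can be summarised as a short threshold function. The sketch below is illustrative only: it assumes each item's score is the percentage of respondents awarding the top grades, and the treatment of the 50% and 66% boundaries is an assumption rather than something documented by University A.

```python
def traffic_light(top_grade_pct: float) -> str:
    """Illustrative reconstruction of University A's reporting thresholds.

    `top_grade_pct` is assumed to be the percentage of respondents giving an
    item the top grades; exact boundary handling is an assumption.
    """
    if top_grade_pct < 50:
        return "red"    # fewer than 50% of students gave the top grades
    if top_grade_pct <= 66:
        return "amber"  # top scores between 50% and 66%
    return "green"      # all other results

# Example: an item where 48% of respondents chose the top grades is flagged red.
print(traffic_light(48))
```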

Methodology of the research study

This study took place the year after the NSS-based questionnaire went online at University A.

It was a single case study, which allows the development of an in-depth understanding of a

single institution, but the findings are not necessarily generalizable (although they could be

transferable to other universities in similar circumstances). The research methods included a

survey of university staff and students, interviews with Student Union Officers and

academics identified as excellent practitioners and focus group interviews with students.

The results reported here are from the survey of university staff (101 responses to an online

questionnaire) and the student focus groups. The survey questionnaire combined quantitative

and qualitative data: in addition to Likert-style tick boxes, respondents were asked for

comments on different aspects of the new evaluation system. These were in many cases

extensive and detailed. Of the respondents who provided personal data (some did not do this

because of fears that they might be recognised), 54% were women and 46% were men; they

came from all four faculties; 51% were senior lecturers, with other positions (in descending

order): subject coordinators (15%), readers, professors and principal lecturers (7-8% each),

lecturers, hourly paid lecturers and programme leaders (2-5% each); most respondents had

worked in higher education for between eleven and twenty years (44%), 28% had 1-10 years

of experience and 28% over 20 years.

The student focus groups were carried out after there had been no responses to a similar

survey questionnaire for students. Three focus groups took place: one with MA students from

a Coaching and Mentoring module; one with second and third year undergraduate students

from an Educational Studies module; the third was with first year students from an

introductory Business module. The focus groups took place immediately after the students

had completed the new feedback questionnaire, inviting their responses to the questionnaire

and the extent to which it reflected their views.

The initial approach to the data analysis was based on grounded theory (Glaser and Strauss,

1967). Grounded theory enables researchers to develop theory from their data using an

iterative process of identifying themes and codes through the constant comparative method to

make comparisons with and between data at every stage of the analysis (Glaser and Strauss,

1967; Strauss and Corbin, 1990; Charmaz, 2006). It may, however, be difficult for

researchers to bracket away their previous knowledge when analysing data (Thomas and

James, 2006) and proponents of grounded theory recognise that ‘we construct theory through

our past and present involvements in interactions with people, perspectives and research

practices’ (Charmaz, 2006, 10). In other words, theory can arise from an interaction between

the data and the literature review or conceptual framework. In my study I first read through

all the qualitative data several times, identifying themes and patterns which emerged directly

from the data. After establishing these key themes, I then considered the theories from the

literature relating to professionalism and performativity and explored whether it would be

possible to use these concepts to help to categorise the data. Not all the themes were covered

by professionalism and performativity, however, so I developed a third category, ‘provision’,

which encompassed the remaining themes. The thematic analysis was scrutinised to check

whether the background factors (role, age, gender etc) may have influenced responses. This

was not the case: a range of respondents was represented in each of the themes and none of

the respondents could be located in just one category.
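
As an illustration of this kind of check, the sketch below cross-tabulates coded comments against one background factor (respondent role). The data frame and its values are hypothetical, included only to show the form of the check, not the study's actual dataset.

```python
import pandas as pd

# Hypothetical coded comments: each row is one comment assigned to a theme,
# tagged with the respondent's role. Illustrative values only.
coded = pd.DataFrame({
    "role": ["senior lecturer", "lecturer", "professor", "senior lecturer",
             "reader", "lecturer", "senior lecturer", "professor"],
    "theme": ["performativity", "professionalism", "provision", "provision",
              "performativity", "provision", "professionalism", "performativity"],
})

# A theme dominated by a single role would appear as one dominant row
# in that theme's column of the cross-tabulation.
print(pd.crosstab(coded["role"], coded["theme"]))
```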

Results

The focus of this article is on the qualitative data from both the questionnaire (the extensive

comments written in response to the open questions) and the student focus groups. This

section begins with the issue of response rates before moving on to the categories of

performativity, professionalism and provision.

Student response rates

A major problem when the new system went online was that the response rates were low

(similar to Bamber and Anderson's (2012) experience). In the year the online evaluation was first introduced at University A, the overall response rate ranged from 18% to 25% per module, with much variation between modules. Eighty-four per cent of respondents indicated that the

response rate was lower after the online evaluation questionnaire was introduced. Academics’

comments reflected their concern about this decline:

• ‘The student response rate has dropped off a cliff as a result of the online evaluations’

(Respondent 57)

• ‘Due to the massive drop in completion rate those who do reply waver all over the

place in terms of ratings and so provide a very unreliable source of information’

(Respondent 56)

The low response rate impacted on lecturers’ ability to interpret student feedback accurately

and thus on the opportunity to make improvements that would address student concerns.

Performativity

Issues relating to performativity concerned: a) the extent to which the evaluation feedback

was relevant to academic staff compared to university managers; b) staff assumptions about

the main reasons for the online questionnaire (for monitoring and judgement as opposed to

improved performance), and linked to the latter, c) staff anxieties about whether student

responses were accurate, together with students’ explanations about their responses.

A question about how far the new feedback questionnaire provided information that lecturers

valued indicated that more respondents felt that the evaluation was ‘very important’ to the

university (26%) than to themselves (11%).

A number of staff suggested that the new online evaluation system was designed for

managers to monitor performance rather than for staff to make improvements, for example,

Trust the staff to talk to the students and understand their nuanced interests and needs,

rather than making them (staff) feel as though this is simply a method by which

management can police their performance (Respondent 17)

This comment about lack of trust and the ‘policing’ of performance could be seen as an

indication of a typical performative culture with a clear division between management and

staff.

There were concerns about reliability, including whether students could understand the

questions. Fifteen respondents identified the potential for bias in the feedback because:

• some students focus on enjoyment rather than learning. Respondent 88 suggested that

in order to improve feedback, training on how to entertain was needed.

• some students who have not attended any teaching sessions still complete the online

feedback: ‘such students make comments on my teaching + organisation of the

module without having seen me teach!’ (Respondent 38)

• there are ‘a few rogue students with extreme views at either end of the spectrum’

(Respondent 32) and the online system ‘encourages the disaffected to vent their

spleen’ (Respondent 11)

• some students’ feedback is influenced by their grade, for example, ‘bad feedback

from a student is quite often related to a low grade for coursework’ (Respondent 86).

In terms of reliability, the student focus groups seemed to indicate that the questionnaires

may not convey students’ views accurately. One student used only the highest or lowest

scores ‘because whoever receives the forms needs definite answers’ (Education Studies

Focus Group). Another student was reluctant to give low scores: ‘I feel bad about giving 1s.

The lowest mark I gave is 2 or 3, even if something is awful’ (Business focus group).

Students also described using the questionnaires to address issues about their own

performance on a module, for example, one of the Education Studies students had written

comments explaining why external matters had affected his assignment. One of the Business

students used the comment box in the feedback sheet to comment on: ‘all the assignments

cropping up during the same week – it's a nightmare’. This indicates a degree of

performativity on the students’ part: an instrumental focus on the assessment rather than

broader aspects of their learning experience.

The lecturers’ comments resonate with an earlier model of lecturers’ responses to student

feedback, in one category of which lecturers blame the students for poor feedback instead of taking responsibility for the student experience and seeking to make appropriate improvements

(Arthur, 2009). This links to a performative culture in which judgement and fault-finding

replace collegiality and support. One respondent made this point explicitly:

I think of a module evaluation as less about 'rating' or 'judging' my own teaching in

isolation and more about evaluating the success of the module as a learning event

comprising environment, resources, students, activities and lecturer. Some aspects of

the current NSS-influenced questionnaire tend much more towards a 'rate my teacher'

culture and I would suggest such a culture does not enhance student learning

(Respondent 60).

This response encapsulates the difference between student satisfaction and learning and

between monitoring performance (‘rate my teacher’) and a holistic understanding of the

learning experience (‘the success of the module as a learning event’) designed to improve

performance. The former (student satisfaction and performance monitoring) could be

identified with performativity and the latter (student learning and performance enhancement) with professionalism.

Professionalism

Issues relating to professionalism concerned the use of the evaluation data to improve

lecturers' practice. Twenty-one respondents noted that the online questionnaire did not give

them information which was helpful in making improvements, for example,

• the questions were too generic so did not give useful feedback about particular

modules: ‘A one size fits all approach is not a good idea. Certainly for modules with a

practical element it is not very useful’ (Respondent 46)

• the questionnaire did not allow module leaders to distinguish between modules taught

at different sites, between single, double and triple modules, or, when team teaching,

between different lecturers: ‘[The questionnaire needs a] box to indicate the site

where they are studying’ (Respondent 41); ‘No difference is made between single,

double and triple modules’ (Respondent 89). ‘We team teach on many of the modules,

so the current questionnaire isn't conducive to identifying individual lecturers’

performance’ (Respondent 70).

• questions focusing on satisfaction do not indicate how to improve teaching: ‘It is

based on the NSS survey, which is a satisfaction survey so that just says whether or

not they are happy. There is very little that informs the development of teaching

practice’ (Respondent 25)

• there were no explanations which would help improvements: ‘why did some students

think the module was not well organised?’ (Respondent 6); ‘student evaluation…

always baffles me’ (Respondent 54); ‘only the qualitative responses are of any

use' (Respondent 31).

There was a strong desire (35 respondents) for lecturers to have more input into the design of

the evaluation questionnaire in order that the questions could be more module-specific. This

was partly to enable lecturers to make improvements, for example,

The current system is too broad and cannot be applied effectively to implement

change on my module in a manner that can identify very clearly what the students

find difficult, be it the delivery from specific lecturers or the material presented

(Respondent 91)

The student focus groups confirmed the lecturers’ concerns that the evaluation questionnaire

failed to provide a sufficiently nuanced reflection of their views about the module. For

example, one MA student had responded ‘neutral’ to a question about feedback and then

explained what lay behind that score: the academic standard was higher than expected, the

feedback was not sufficiently clear to help with future work, the marking was too formulaic

and she had felt discouraged by the result and would have liked more enthusiasm from the

marker. None of this feedback was conveyed by ticking the ‘neutral’ box. Students also

commented on the shortcomings of the questionnaire for module evaluation, identifying

similar issues to the academics. One of the Business students said: ‘There were four seminar

leaders contributing to this course, but we could not give comments on each lecturer… We

had different teachers in different terms – I wanted to answer yes for one person and no for

another but I was not able to do so’. Spooren et al (2013) noted the danger that students may

not complete SETs if the questions do not enable them to express their views.

Academics were concerned about the interpretation of the results of student feedback and

how they could use it more effectively. Several respondents indicated that they carried out

their own formative evaluations mid-semester, using a mixed range of methods (for example,

focus groups, discussions with student representatives, asking students to rate aspects of the

module with red, amber or green cards), in order to gauge students’ views in time to adjust

their teaching.

These comments demonstrate academic professionalism – a desire to use student feedback to

make improvements. Yet in addition to issues of professionalism and performativity, there

were also comments about provision: a third area concerned with what the university was

offering to students. Aspects of provision were considered to affect students’ judgement of a

module but were often beyond the lecturers’ control.

Provision

Fifty-seven comments from 35 questionnaire respondents drew attention to the organisational

constraints which prevented lecturers from responding effectively to student evaluations. One

response summed up many of the points made:

‘It is not always easy to teach students in the way they work best (i.e. small-

groups/tutorials) because of large class sizes (N = 100+), time constraints, space

constraints, staff shortage. Often easier to continue teaching in traditional lecture-

based format regardless of feedback’ (Respondent 62).

Timetabling, inappropriate teaching rooms and campus facilities were also highlighted. One

respondent noted the difficulties in providing a quick response to such issues: ‘Students often

raise issues outside the control of the teaching team with regard to areas such as teaching

space quality, noise, cleanliness, IT systems, library provision. By the time a response to

some of these issues has been raised […] it is many many months later’ (Respondent 56).

Other respondents commented on the nature of the subject, intensity of teaching and the

difficulty in making minor changes midway through modules to respond to student feedback

because of university regulations. Respondents indicated a sense of grievance about being

judged on issues over which they had no control. One suggestion was ‘Only ask questions

over which the teaching staff have influence’ (Respondent 50).

There were also comments about student expectations, with several respondents suggesting

that these were unrealistic, for example: ‘If a student who is used to being online all hours of

the day and studies mainly during the night does not get a prompt response from a lecturer at

3 am one morning and considers they have not been able to contact the module leader when

they needed to, is it fair that the academic gets marked down?’ (Respondent 54). One lecturer

suggested: ‘What is often more required are colleagues with the right skills to say no

(politely) to some of the more extreme student requests…’ (Respondent 4).

Other aspects of provision which were criticised by the academics were management

competence, lack of resources, staffing, workload and insufficient time to respond to

students. These issues appear to be overlooked in the divide between professionalism and

performativity, but they are important both for the overall student experience of learning and teaching and for the context within which professional standards have to be met.

The student focus groups revealed that some students score modules based on aspects of

provision, rather than on the learning and teaching experience. For example, in relation to the

campus where the teaching session takes place, one of the Business students said: ‘I would

give 1 [the lowest score] for something that didn't work. For example, a module which starts

at 5 pm at [a different campus] does not work for me…’ Other students were influenced by

the timing of the teaching session: ‘I am more likely to give low marks to late afternoon

sessions’ and ‘The same is true for Monday morning at 9 am.’

Discussion

University A’s decision to use the NSS questionnaire as the basis for student module

evaluations clearly links to an agenda of performativity (Ball, 2016). Its main purpose was to

enable a systematic documentation of student experience across the university, comparing

standards between modules and highlighting the performance of individual lecturers through

its ‘traffic light’ system. The issues of identifying teaching improvements (Wright and

Jenkins-Guarnieri, 2012) and enabling students to make informed choices about modules

(Alderman et al, 2012) were of secondary importance. However, although the university’s

focus appeared to be on quality assurance (Bamber and Anderson, 2012), its intention was

also to improve the student experience by finding areas of student dissatisfaction that needed

to be addressed. This indicates that quality assurance may be seen as a first step towards

quality enhancement rather than as an end-goal.

The lecturers’ responses were critical of the university’s performative approach, suggesting

that they did not simply reorient themselves to measurable outcomes as Ball (2016) proposes.

In contrast to the concerns raised by Schuck et al (2008) and Richardson (2005), none of the respondents indicated that the

students were unqualified to judge their teaching. In addition to the need for reliable SETs,

the respondents argued for more nuanced measures of teaching quality in order to be able to

make improvements. When Kember et al (2002) found that SETs had not impacted on

learning and teaching over a four-year period, they identified a number of possible

explanations, including a lack of incentive to use the data (because teaching was not valued);

the SET questionnaire being insufficiently developmental; the need for counselling to support

lecturers in making appropriate improvements. The respondents from University A

emphasised the second of these issues: the shortcomings of a questionnaire which focused on

student satisfaction rather than on how to improve learning and teaching.

While the data indicated aspects of performativity (for example, anxiety about performance

measures) and professionalism (for example, a focus on how to improve teaching) in the

lecturers’ responses, there was also evidence of a concern about provision. Provision links to

the students’ experience, in terms of teaching and learning on the one hand and, on the other,

all the additional factors that contribute, such as IT, timetabling, class sizes and library

facilities. Ultimately, provision is about the university’s contract with each student and the

need to fulfil the student’s expectations in relation to that contract. Provision may also be

concerned with benchmarking against other universities, to ensure an equitable - or even

superior - student experience. In my view it is provision that drives the performativity

agenda, whereby university managers seek to measure and guide academics’ performance in

order to meet minimum, but preferably market-leading, standards.

Figure 1 [not available in this accepted manuscript] illustrates the ways in which the

relationships between the university managers, academics and students link to performativity,

professionalism and provision in addressing the quality of student experience. In the figure,

the relationship between university managers and students is identified as provision (the

contractual relationship described above), the relationship between university managers and

academics is performativity (setting targets, judging performance, assuring quality) and the

relationship between academics and students is one of professionalism (focused on teaching

and learning, how to make improvements, quality enhancement).

[For figure 1, refer to: https://doi.org/10.1080/02602938.2019.1640863]

Figure 1 [not available in this accepted manuscript] encapsulates my interpretation that

students are primarily concerned, on the one hand, with the quality of teaching and learning,

represented by the professionalism of their lecturers, and, on the other hand, aspects of

provision that impact on their learning, such as when and where their classes are timetabled,

and whether they have sufficient resources. Meanwhile academics are represented as focusing

simultaneously on professionalism in their duties to students’ learning and on fair measures

of their performance (performativity). As indicated above, the university managers have a

contractual responsibility to students for an acceptable level of provision, and this contributes

towards creating a performative relationship with academics. In Figure 1, each of these

circles is the same size, reflecting the similar numbers of comments under each category, but

I would argue that these relationships have the potential to become unbalanced, for example,

either performativity or provision could become larger, at the expense of professionalism.

There is a ‘sweet spot’ in the centre of the figure, where, in my view, provision supports

teaching and learning; performativity moves beyond quality assurance to quality

enhancement and professionalism enables academics to use performative targets as a means

of developing their skills.

These relationships are, of course, more complex than the figure suggests. Although students’

primary relationship with the university may appear to be a contractual one, with students as

the ‘customers’, their identity, values, personal friendships and social development also

contribute to their relationship with their university (Brennan and David, 2010). Tight (2013)

argues that students should not be viewed as customers, partly because they are active

participants in their learning and also because the potential benefits of their degree will only

be known in the long term.

Even if the reality is that students continue to want agency over their learning, and expect

challenge and independence as well as support and positive outcomes (Bates and Kaye, 2014;

Budd, 2017), university managers who are focused on student satisfaction inevitably adopt

performative approaches to managing academics. In some ways, performativity could be seen

as a defining aspect of the managers’ own professional duties. Performance targets are set in

order to demonstrate to students, governors (and, potentially, a court of law) that the

university is taking quality assurance seriously. So performativity is inevitably linked to

provision. Universities are also, however, concerned about professionalism as well as

performativity. For example, University A abandoned the use of the NSS-style online

questionnaire four years after its introduction because the student response rates continued to

be too low to be useful and because it did not support improvements to learning and teaching.

Meanwhile, academics have to navigate the tensions between fulfilling performance targets

set by university managers – for example, achieving 'green' in the student evaluation traffic

light system – and achieving what they identify as their professional responsibilities towards

students. In some cases, these coincide, for example, when both recognise the importance of

student evaluations, but in others they conflict, for example, when lecturers would like

student feedback which helps them to make improvements, but the university creates a

system which prevents that from happening. Professionalism also has links with provision, in

that students may associate aspects of provision directly with their learning and teaching

experience and evaluate their lecturers’ performance accordingly.

Despite its limitations in relation to the above complexities, the model does demonstrate the

importance of provision in relation to performativity and professionalism, and indicates the

links between these concepts and the relationships between students, university managers and

academics in a neo-liberal setting. It seems likely that an increasing focus on provision has a

direct impact on performativity – and that in turn influences academics’ commitment to

professionalism.

Conclusion

This research study into a system of online module evaluations based on the NSS questionnaire

revealed a number of concerns: a low response rate made the student feedback

unrepresentative; academics’ performance was being judged by an unreliable measure; the

results of the questionnaire did not help them to improve their practice. The findings linked to

issues relating to performativity, professionalism and provision, with provision appearing to

influence performativity measures as well as professional concerns. It is only by considering

the influence of provision, particularly in the wake of full-cost student fees, that it is possible

to understand the complexities of performativity and professionalism, and to find ways to

prevent the constraints of managerialist performativity from undermining the motivation that underpins academics' professionalism.

While the research focused on one case study university, making the results transferable (to

other, similar institutions), rather than generalizable, it is hoped that the model will enhance

future debates about professionalism and performativity as well as providing an analytical

tool that may be of use in future studies of SETs. Further research could examine these

relationships more closely and include the views of university managers as well as students

and academics.

In the neoliberal, competitive world of UK higher education, university managers will

undoubtedly continue to be highly concerned about the NSS score and its impact on

university rankings, especially now that the TEF incorporates the NSS results in its

evaluation of a university's teaching excellence. However, using the NSS questionnaire as a means

of evaluating individual modules is not recommended, based on the outcomes of this research

study, as its focus on student satisfaction prevents the identification of improvements to

learning and teaching.

No potential conflict of interest was reported by the author.

References

Alderman, L., S. Towers, and S. Bannah. 2012. “Student feedback systems in higher

education: a focused literature review and environmental scan.” Quality in Higher Education

18 (3): 261-280.

Arthur, L. 2009. "From performativity to professionalism: lecturers' responses to student feedback." Teaching in Higher Education 14 (4): 441-454.

Ball, S. 2012. “Performativity, Commodification and Commitment: An I-Spy Guide to the

Neoliberal University”. British Journal of Educational Studies 60 (1): 17-28.

Ball, S. 2016. “Neoliberal education? Confronting the slouching beast.” Policy Futures in

Education 14 (8): 1046-1059.

Bamber, V. and S. Anderson. 2012. “Evaluating learning and teaching: institutional needs

and individual practices.” International Journal for Academic Development 17 (1): 5-18.

Bates, E. and L. Kaye. 2014. “’I’d be expecting caviar in lectures’: the impact of the new fee

regime on undergraduate students’ expectations of Higher Education.” Higher Education 67

(5): 655-673.

Bedggood, R.E. and J.D. Donovan. 2012. “University performance evaluations: what are we

really measuring?” Studies in Higher Education 37 (7): 825-842.

Berk, R.A. 2013. “Top five flashpoints in the assessment of teaching effectiveness.” Medical

Teacher 35 (1): 15-26.

Blackmore, J. 2009. “Academic pedagogies, quality logics and performative universities:

evaluating teaching and what students want.” Studies in Higher Education 34 (8): 857-872.

Botas, P.C.P. and R. Brown. 2013. “The not so ‘Holy Grail’: the impact of NSS feedback on

the quality of teaching and learning in higher education in the UK.” In Enhancing Student

Feedback and Improvement in Tertiary Education, edited by M. Shah and C.S. Nair, 45-56.

Abu Dhabi: Commission for Academic Accreditation Quality Series No. 5.

Brennan, J. and David, M. 2010. Teaching, Learning and the Student Experience in UK

Higher Education. In Higher Education and Society: a Research Report, 5-12.

http://oro.open.ac.uk/21274/1/Higher_Education_and_Society.pdf. Accessed July 26, 2018.

Budd, R. 2017. "Undergraduate orientations towards higher education in Germany and England: problematizing the notion of 'student as customer'." Higher Education 73 (1): 23-37.

Buckley, A. 2012. Making it count: reflecting on the National Student Survey in the process

of enhancement. York: Higher Education Academy.

Charmaz, K. 2006. Constructing Grounded Theory. London: Sage.

DBIS (Department for Business, Innovation and Skills). 2016. Success as a Knowledge

Economy: Teaching Excellence, Social Mobility and Student Choice.

https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/523396/bis-

16-265-success-as-a-knowledge-economy.pdf. Accessed 22 February, 2018.

Denson, N., T. Loveday and H. Dalton. 2010. “Student evaluation of courses: what predicts

satisfaction?" Higher Education Research and Development 29 (4): 339-356.

Edström, K. 2008. "Doing course evaluation as if learning matters most." Higher Education

Research and Development 27 (2): 95-106.

Glaser, B. and A. Strauss. 1967. The Discovery of Grounded Theory – strategies for

qualitative research. New York: Aldine de Gruyter.

Heaney, C. and H. Mackenzie. 2017. “The Teaching Excellence Framework: Perpetual

Pedagogical Control in Postwelfare Capitalism.” Compass, Journal of Learning and

Teaching, 10(2): 1-17. doi: http://dx.doi.org/10.21100/compass.v10i2.488. Accessed 10

April, 2018.

HEFCE (Higher Education Funding Council for England). 2016a. HEFCE Circular letter

30/2016: A new National Student Survey for 2017.

http://www.hefce.ac.uk/pubs/year/2016/CL,302016/. Accessed 22 February, 2018.

HEFCE. 2016b. Teaching Excellence Framework Year 2 Additional Guidance.

http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/2016/201632/HEFCE2016_32.pdf

. Accessed 26 February, 2018.

HEFCE. 2004. National Student Survey 2005: Outcomes of Consultation and Guidance on

Next Steps.

http://webarchive.nationalarchives.gov.uk/20120118171947/http://www.hefce.ac.uk/pubs/hef

ce/2004/04_33. Accessed 11 May, 2017.

Hoel, A. and T. Dahl. 2019. “Why bother? Student motivation to participate in student

evaluations of teaching." Assessment and Evaluation in Higher Education 44 (3): 361-378.

Johnson, R. 2000. “The authority of the student evaluation questionnaire.” Teaching in

Higher Education 5 (4): 419-434.

Kane, D., L. Millard, and J. Williams. 2013. “Transforming the student experience in the UK

from 1989”. In Enhancing Student Feedback and Improvement in Tertiary Education, edited

by M. Shah and C.S. Nair, 57-75. Abu Dhabi: Commission for Academic Accreditation

Quality Series No. 5.

Kember, D., D. Leung, and K.P. Kwan. 2002. “Does the use of student feedback

questionnaires improve the overall quality of teaching?” Assessment and Evaluation in

Higher Education 27 (5): 411-425.

Kenny, J. 2017. "Academic work and performativity." Higher Education 74 (5): 897-913.

Locke, K. 2015. “Performativity, Performance and Education.” Educational Philosophy and

Theory 47 (3): 247-259.

Moore, S. and N. Kuol. 2005. “Students evaluating teachers: exploring the importance of

faculty reaction to feedback on teaching.” Teaching in Higher Education 10 (1): 57-73.

Olssen, M. and M. Peters. 2005. "Neoliberalism, higher education and the knowledge economy: from the free market to knowledge capitalism." Journal of Education Policy 20 (3): 313-345.

Richardson, J. 2005. “Instruments for obtaining student feedback: a review of the literature.”

Assessment and Evaluation in Higher Education 30 (4): 387-415.

Richardson, J. 2013. “The National Student Survey and its impact on UK Higher Education.”

In Enhancing Student Feedback and Improvement in Tertiary Education, edited by M. Shah

and C.S. Nair, 76-84. Abu Dhabi: Commission for Academic Accreditation Quality

Series No. 5.

Schuck, S., S. Gordon and J. Buchanan. 2008. “What are we missing here? Problematising

wisdoms on teaching quality and professionalism in higher education.” Teaching in Higher

Education 13 (5): 537-547.

Shore, C. and S. Wright. 1999. “Audit culture and anthropology: neo-liberalism in British

higher education.” The Journal of the Royal Anthropological Institute 5 (4): 557-575.

Spooren, P., B. Brockx and D. Mortelmans. 2013. "On the validity of student evaluation of

teaching: the state of the art.” Review of Educational Research 83 (4): 598-642.

Strauss, A. and J. Corbin. 1990. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. London: Sage.

Thomas, G. and D. James. 2006. "Reinventing grounded theory: some questions about theory, ground and discovery." British Educational Research Journal 32 (6): 767-795.

Tight, M. 2013. "Students: Customers, clients or pawns?" Higher Education Policy 26 (3):

291–307.

Winchester, T. and M. Winchester. 2011. “Exploring the impact of faculty reflection on

weekly student evaluations of teaching.” International Journal for Academic Development.

16 (2): 119-131.

Wright, S. and M. Jenkins-Guarnieri. 2012. “Student evaluations of teaching: combining the

meta-analyses and demonstrating further evidence for effective use." Assessment and

Evaluation in Higher Education 37 (6): 683-699.

Yorke, M. 2009. “‘Student experience’ surveys: some methodological considerations and an

empirical investigation.” Assessment and Evaluation in Higher Education 34 (6): 721-739.
