The year the National Student Survey was sabotaged*

 

The cunning plan of the National Union of Students to boycott the National Student Survey feels like a long time ago. It wasn’t so much a different news cycle as a different dimension of experience altogether. I mean, back then Andrew Adonis looked like a supporter of the British higher education system.

Now that it’s come back to bite, it’s worth reminding ourselves of what it was all about. The NUS was aggrieved that the NSS was being used as a metric in an exercise – the Teaching Excellence Framework – that was being used to drive an agenda of marketization. It wasn’t a protest against the NSS itself, but the only way that students could see to undermine the TEF. It remains NUS policy.

The boycott has had marked though isolated effects. While the national response rate has dropped only four points, twelve universities did not receive sufficient responses to register results. That says a lot about the NUS: passionately political on some campuses but not others. At my university I couldn’t even find students who wanted to debate the issue.

Should the student leaders at those twelve unlisted universities be proud of themselves this morning? Doubtless they will see it as a result; many students devoted an awful lot of time and energy to sabotaging the survey. They have ensured that the 2017 NSS results will always be marked with an asterisk.

But I don’t see any chance of this stopping the TEF. That ship has sailed; the debate has moved on in the meantime to new metrical frontiers, such as learning gain and teaching intensity. While many people argue that the TEF metrics are no more than proxies of teaching quality, the direction of travel is towards more rather than fewer metrics, and also towards the granularity of subject-level assessment.

Meanwhile, the fact remains that there is only one TEF metric that directly registers the perceptions of students, and this is the NSS. It’s also been arguably the greatest single driver of reform in higher education over the past decade. I’ve seen it prompt wholesale change in poorly-performing departments. And even in my own, which generally does well, we always identify specific areas for attention: feedback, programme management, student support, resources, and so forth.

So I feel sorry for the students at those twelve unlisted universities who completed the survey. No, actually I feel bloody angry on their behalf. Their responses will be made available internally, so they should still have some impact; however, they won’t be published and won’t register in league tables. A handful of managers this morning will be breathing sighs of relief, and that’s not what their students deserve. Those students paid £27,000 – in fees alone – and their views matter.

I also feel sorry for the people who put so much effort into revising the NSS. The focus right now shouldn’t be on the boycott; it should be on the responses to the new questions added this year. My favourite one was: ‘I feel part of a community of staff and students’. But there was also: ‘I have had the right opportunities to work with other students as part of my course’; and ‘The IT resources and facilities have supported my learning well’. These questions help to document the full dimensions of higher education. They are light-years away from the ‘value-for-money’ reductivism of certain other student surveys that jostle for the attention of policy-makers and journalists.

The NSS also includes a section on ‘student voice’. There’s: ‘It is clear how students’ feedback on the course has been acted on’. And there’s a bleak irony to this one: ‘The students’ union effectively represents students’ academic interests’. Well, did they?

I’m not immediately sure how the NSS results will be spun as bad news, but I expect it will happen. Maybe Lord Adonis will claim that there is ‘no student satisfaction in Cambridge’. It feels like a precarious moment for the sector, and everyone – not least the students – could do with some credible data on what’s working and what needs attention. In this context, the boycott-compromised 2017 results feel like an own-goal for British higher education. I’m not sure that’s exactly what the NUS had in mind.


Trash the NSS? Let’s be careful what we wish for

It’s been open season for attacks on the National Student Survey in recent months. The House of Lords, for instance, was treated to the old chestnut that students give better satisfaction ratings to easier courses. That’s not only entirely unsubstantiated, but disrespectful to the many thousands of students who actually want to be challenged, and in turn respond honestly to NSS questions. Even Chris Husbands, Chair of the Teaching Excellence Framework, has labelled the NSS as a ‘flawed … proxy for teaching quality’.

This has been music to the ears of many academics, who have long experienced the NSS as a stick used to beat them, a key and resented manifestation of an audit-culture. Some objections to the NSS are fair, and indeed entirely helpful in limiting the uses to which it might be put; others, in my opinion, are knee-jerk reactions to students who dare to voice their dissatisfaction.

My point here is: be careful what you wish for. Because if we don’t stand behind the survey that gathers the considered responses of roughly 70% of all final-year undergraduates each year, we can just stand back and watch as other forms of quasi-research flood the public sphere.

 

Letting the journalists do the satisfaction research

Take the piece in today’s Times by Jenni Russell, which claims that many students are ‘getting a third-class education’. This contains no mention at all of the NSS. I mean, why should she worry about this longstanding and universal survey of finalists, when even the chair of the TEF seems so keen to run it down? And the piece demonstrates very well what happens when journalists go looking for other forms of evidence.

Russell has two sources of evidence. First, she uses students’ perceptions of ‘value for money’ as recorded in the Student Academic Experience Survey administered by the Higher Education Policy Institute. Fair enough: that survey gathered 15,000 responses in 2016. But to cry in horror at evidence that students became less convinced of value for money when the government transferred the cost of higher education from the state to the student, payable in the form of income-contingent loans, is a bit rich. Obviously this is a measure universities have to address, but of course students were going to react when their levels of debt soared.

Second, rather than turning to the NSS, Russell performed her own student satisfaction survey. In her words: ‘I’ve talked to ten students in the past few days’. Ten students: that’s a tough piece of research. And on this basis she concludes that Oxbridge is good, on the back of a chat with maybe no more than one student. But, you know, that feels about right, doesn’t it; after all, Russell herself is an Oxbridge graduate. She assures us also that the sciences are doing fine: maybe another two or three students there. So what’s left? Let’s have a go at the humanities and social sciences; that would be original, wouldn’t it? Sussex is slated on the word of one student. Feedback at Manchester is trashed on the basis of one tutor.

 

Reputation, reputation, reputation

Reputations for quality teaching are hard-won. And they matter; in a context of declining university applications, jobs depend upon them. There may well be programmes at Sussex or Manchester that are poor – I don’t know – but to pass judgement on dozens of academic departments, at a moment when applicants are making decisions about where to study, is desperately irresponsible.

But, to say it again, maybe that’s just what we get if we diss the NSS. It’s not hard to work through Unistats and identify some programmes that are getting poor satisfaction results. It’s also easy enough to drill down, say, to satisfaction with feedback, or to compare results over a succession of years.

If a journalist were to identify a course that’s been underperforming for years, then seek out some students on that course, I’d applaud her. I’ve worked to turn around units getting poor NSS results, and this has convinced me: a) that the NSS gives us reliable data; and b) that concentrated work can turn the situation around. Poor practice should be exposed and addressed, but let’s be careful how we identify it in the first place.

 

Attacks on the NSS seem especially unfortunate given the excellent, considered revision the survey has undergone this year. It seems to me blindingly obvious that student satisfaction does matter in a competitive system, whether academics like it or not, and universities are being sensible and responsible when they respond, year by year, to NSS results. Jenni Russell has merely demonstrated what we get when we don’t stand with confidence behind this reliable evidence-base.

 

Boycott the National Student Survey? Please don’t be so stupid*

A March update

The recent vote in the House of Lords to decouple TEF results from fees might have looked like a victory for the inflation-denial arguments of the NUS. But a couple of reflections are worth making: a) the House of Lords doesn’t make the laws, so this will bounce back to the Commons; b) the members who made the winning arguments were explicitly not opposed to inflationary fee increases, which they suggested should happen at all universities regardless.

So smashing the TEF may be a pain-pain result for students: i.e. they lose the power of the TEF, with its agenda-shifting link to fees; and they get fees rising in line with inflation regardless. My prediction, by the way, is a fudge: inflationary rises for all universities, on the condition that they enter a pass-fail TEF. Just a prediction, though.

Finally, while people from all angles are denigrating the NSS, look what happens when journalists conduct their own student satisfaction surveys. Is this what we want?

And an April update

The government is determined to maintain a link between the TEF and fees, but is buying itself some wiggle-room as it tries to push through the HE Bill. To make sense of the amendments, I recommend the excellent Andrew McGettigan.

And from here, from January (as relevant as ever, kind of) …

The National Union of Students is calling for a boycott of the 2017 National Student Survey. The argument goes that once the NSS is used as a metric to inform the Teaching Excellence Framework – as is now happening – it becomes implicated in the commercialisation of education. Hence – or so it’s claimed – by boycotting the NSS students can undermine the TEF and derail fee rises.

As a long-time lover of the NSS, I share the frustration that it should be monetized in such a crass manner. But I believe that the boycott is based on wonky logic, and will only hurt those the NUS represents. Students, please, don’t boycott the NSS.

 

When ‘fee rises’ aren’t fee rises

The TEF, according to the NUS, is a vehicle designed to increase fees. But I’d argue precisely the opposite: it’s designed to suppress fees.

My logic depends on a thing called ‘inflation’. The £9000 that a student was paying four years ago is not the same as £9000 today. Hence the government’s decision in 2010 that £9000 was a fair price for 2012 entrants leads logically – or should have done – to a principle of rises in line with inflation. Did anyone seriously believe that £9000 would remain the maximum price indefinitely?

The only reason we are where we are, in fact, is that the government neglected to future-proof the system. That was stupid, but it was also politics. And the political fix now is not a rise; indeed it’s not even a commitment to maintaining funding levels in real terms. It’s a commitment to reducing funding for universities.

This is because only some universities will be allowed to increase their fees in line with inflation. Others – those that receive ‘silver’ or ‘bronze’ ratings in the TEF – will experience real-terms cuts since their students will pay less, in real terms, than those of previous years. Students and staff alike really should be bloody angry about this assault on universities’ finances.

And I’m aware, by the way, that some people will argue that £9000 was not a fair price in the first place. I’m also aware that some see the TEF as a kind of stalking-horse that will open the door to more fundamental deregulation of fees. And I’m aware, above all, that very many of us regret the state’s withdrawal of public funds for higher education. But the fact remains that, given where we are and the system in which we’re working, what look like fee rises are really nothing of the sort.

 

Students can’t break the TEF; they can only make it worse

I don’t much like the TEF. I think it’s unnecessary because standards are already high; I think it’s an instance of government fussing about stuff it doesn’t much understand; and I think the infantile ‘gold’, ‘silver’ and ‘bronze’ tags will make us all look ridiculous. I also rather suspect it might collapse, or at least metamorphose into something quite different, under the weight of its idiocy.

But I don’t think that a boycott of the NSS will do anything more than make the TEF worse. Participation rates will drop in response to the NUS position, but not enough to trouble anyone at a senior level. I mean, just about all the TEF data are already wobbly one way or another; this is a low-expectation environment.

The data will be less valuable, because some of the most politically engaged students will withhold their opinions. But probably the only people who will notice the difference will be those of us at department level who really, deeply care about the NSS because we profoundly value our students’ opinions. A boycott will hurt us, but the TEF will roll on regardless.

 

What’s so smart about saying nothing?

I’ve read plenty of people arguing that the TEF is not really assessing teaching quality at all. That’s fair enough at a theoretical level: we can all see that satisfaction and graduate outcomes are not precise measures of teaching quality. But it’s nonetheless ridiculous to argue, with the NUS, that the NSS does not ‘have anything to do with teaching quality’.

In actual fact the NSS is a pretty good proxy for measuring quality. Moreover it has been the greatest agent of educational reform that I have known. I’ve seen how poor NSS results can provide a catalyst for major reforms within departments. I’ve also seen how, even within a department getting quite good results overall, the NSS helps academics to rethink aspects of how we work.

I could give countless examples, but here are a few. The NSS has put contact hours firmly on the agenda across the sector. Ditto schedules for the return of feedback on assignments, and equally the form of feedback. Now, thanks to the new questions on student engagement, we’re all thinking about the culture and communities within which our students are learning.

Of course universities also use good NSS results for promotional purposes. But why does this become, for the NUS, such a terrible thing? Good NSS results are the result of hard, successful work. There are still departments out there getting crap results, and boycotting the NSS will only give them an excuse to hide for another year.

 

So please, please let’s not boycott the NSS. There’s plenty to be angry about, but I can’t believe we’ve reached a point at which saying nothing makes political sense. In fact this campaign feels to me like an insult to students who have waited for three years to have their say. I think those students are smarter than the NUS campaign.

  • Published under a more polite title by wonkhe.com.

What have they done to the NSS?

Here’s your cut-out-and-keep guide to what’s happening with the NSS for 2017. It’s getting quite an overhaul.

Since a picture paints a thousand words, and since I’ve made my views clear through the extended consultation phase, I’ll add no more here. Please have a look at the changes and complete the poll below.

[Image: the revised NSS questions, shown with tracked changes]

The paradoxes of neo-liberalism in UK higher education

The latest argument that we should be worried about a crisis in the university system comes, via The Guardian, from the US consultant Karen Kelsky. Her key concerns are student debt and casualization of the academic labour-market. The UK, she claims, is barrelling down a road familiar from the US.

Student debt? Well, yes: absolutely, and let’s not forget it. But I want to reflect here on the arguments about casualization and the fears for early-career academics, because it seems to me there are some curious paradoxes here. For Kelsky, the ‘neo-liberal’ structures of UK higher education, such as the Research Excellence Framework, are all part of the problem. But as I see things, they are also having contrary effects, helping to hold at bay some of the economic logic that has driven things in the US.

So let’s consider some effects of two of those neo-liberal monitoring structures, both unique to the UK: the REF and the National Student Survey.

 

The REF and the academic job market

Casualization of academic labour makes good economic sense. Why pay someone a full-time professional salary when you can hire in perfectly well qualified temporary lecturers and pay them only for the teaching they do? That logic has taken root in many US universities; it will most likely drive the growth of private universities in the UK. So there definitely are reasons to be concerned.

But the REF posits a contrary logic, along the lines: why appoint temporary and part-time teaching staff, when you could appoint someone who will contribute to the REF? And that appointment – not always, but more often than not – will be permanent. Certainly that’s my experience. We always have temporary lecturers – to cover for people on funded leave, or maternity leave, and so forth – but we appoint to permanent, research-active posts whenever we can.

And it seems to me that the REF is also a friend to early-career academics. There’s a strong, REF-guided logic to appoint younger people, publishing high-quality work often straight from their doctoral research. It’s not about quantity: four decent pieces in 6-7 years is not unreasonable, and ECRs will typically require fewer than four. The principle of peer-review, meanwhile, remains strong, underpinning the commitment to rewarding quality.

And this all means that universities will generally gain more benefit, at relatively low cost, from appointing junior lecturers than from appointing senior people. This does stimulate the job-market, albeit in an uneven pattern: better in pre-REF years than others. It won’t create jobs for everyone finishing PhDs (and there is a genuine debate to be had over whether we are educating too many smart young people to doctoral level), but it surely works against the logic of casualization.

Kelsky argues also that the REF drives us all towards performance targets, and leads to the persecution of great minds who work slowly. Yes, that’s an old and not invalid argument. But here’s the paradox: if the goal is to create a structure that is more open to early-career academics, there is actually a value in ensuring that those in mid-career and late-career are doing all parts of their job. However much some of us might resent it, the REF helps with that.

 

The National Student Survey

The National Union of Students has proposed a boycott of the NSS. Actually there’s a logic to this: they argue that if the NSS is to be linked into the Teaching Excellence Framework, and if the latter is to be used to determine differential fees, then the Survey’s original purpose of feeding back to universities on their performance will in practice be superseded by its use as a vehicle of marketization. As someone who has seen the NSS improve the quality of education over many years, I fear a boycott would be self-defeating. But I can see the point.

Yet the NSS – and in due course the TEF, presumably – also works contrary to the forces of casualization that Kelsky bemoans. The NSS gives students some power, and in my experience students tend to be fairly clear about their desire to be taught by fully-qualified and fairly employed lecturers. That’s not to say that we haven’t had superb feedback in my department, year after year, on our (very well trained) graduate teaching assistants. But it is definitely one reason why we employ very few people who look like US-style ‘adjuncts’: who are, typically, people with PhDs, paid to drop into a campus to deliver particular courses, and often working simultaneously at multiple universities.

Do we have people in this category in the UK? Absolutely we do; and I agree that it’s a problem. But I’m yet to see hard evidence that it’s getting worse. And, as I see things at least, another of those neo-liberal monitoring devices, the NSS, is working to some extent counter to the logic of casualization.

 

I’ve written before about the temptation to draw easy parallels between the US and the UK (in the context of the so-called ‘crisis of the humanities’). I think there are genuine problems in the US, which affect all of us one way or another; and I think there is always cause to be vigilant about developments in the UK system. But there are paradoxes in some of these arguments, and structural forces pulling against what may seem like an incontrovertible economic logic. As much as the UK’s various monitoring systems may at times feel oppressive or frustrating, it just might be the case that they have some positive effects on the sector.

 

Out goes ‘enthusiasm’, in comes ‘engagement’: the National Student Survey of the future

So farewell then, Key Information Sets: hundreds of thousands of university applicants never even knew you existed.

Having been involved in the farcical attempts to provide regularized and visualizable data on contact hours, forms of assessment, and the like, I find it difficult to mourn the demise of KIS. Yet there are many more interesting stories buried within the HEFCE document that signals the end of KIS: Review of Information about Learning and Teaching, and the Student Experience. Those stories centre on the National Student Survey.

Much of this document is reminiscent of previous consultations. But we’re getting somewhere: the NSS is in for a makeover. And given the extent to which we all gear our practices towards the existing NSS questions, this demands our attention. While it’s important to stress that this is a consultation document, it’s also worth noting that there are years’ worth of momentum behind the proposals. And some of the changes that are eventually made will affect the 2017 survey, to be taken by students already established at university. So even as we keep one eye firmly on 2016, it’s surely worth keeping the other eye on the horizon.

The apotheosis of ‘engagement’

For those of us who have been banging the ‘student engagement’ drum for years, this is big news. I’m excited. I’m even ‘enthusiastic’ – but more on enthusiasm below. There are nine proposed questions, though it’s hard to imagine that more than a handful will make it. The questions are:

  • ‘My course has challenged me to achieve my best work’
  • ‘My course has provided me with opportunities to explore ideas or concepts in depth’
  • ‘My course has provided me with opportunities to bring information and ideas together from different topics’
  • ‘My course has provided me with opportunities to apply what I have learnt’
  • ‘I have had the right opportunities to provide feedback on my course’
  • ‘Staff value students’ views and opinions about the course’
  • ‘It is clear how students’ feedback on the course has been acted on’
  • ‘I have had the right opportunities to work with other students as part of my course’
  • ‘I feel part of a community of staff and students’

There are some axe-magnets there when it comes time to trim the list. I mean ‘right opportunities’: really? But if the first and last questions survive, I could retire (not yet, mind you) happy. The question of ‘community’ is especially interesting. There are reasons why it might not make the cut: it’s a tougher challenge for bigger programmes and cross-disciplinary programmes, and it arguably reinforces the NSS’s bias towards campus universities. But I hope it makes it, because it matters.

What’s disappearing?

Out will go questions perceived to be ‘duplicating’ other questions.

  • Firstly, out go ‘enthusiastic’ teachers, on the assumption that teachers who ‘make the subject interesting’ are good enough. Personally, I find that a bit of a relief; enthusiasm has never been my strongest point.
  • Secondly, out go ‘clear’ assessment criteria, on the assumption that ‘fair assessment’ matters more. That will be welcomed across the sector, because it’s just bloody hard to get students to come to terms with those criteria. That’s not to say, though, that the effort was not noble.
  • Thirdly, out goes feedback that is ‘detailed’ and ‘helps me understand’, in favour of ‘helpful comments on my work’. That makes sense, though it perhaps means that we will ease back on that other noble crusade: insisting that there are valid forms of feedback beyond written comments.

And here’s a headline for me: all the ‘personal development’ questions are listed for removal. Stunningly, after however many years of beating ourselves up trying to work out how to fix our ‘personal development problems’, we’re told that ‘cognitive testing’ does ‘not produce valid results’. Students, moreover, are ‘unsure of the intent behind the questions’. So out go:

  • ‘The course has helped me to present myself with confidence’
  • ‘My communication skills have improved’
  • (And everyone’s favourite) ‘As a result of the course, I feel confident in tackling unfamiliar problems’.

What’s in the wording?

And finally come the changes in terminology.

  • In the ‘learning resources’ section, there will still be questions about ‘library resources’ and ‘IT resources’, but ‘good enough for my needs’ will be replaced by ‘have supported my learning well’. There will also be a welcome prompt in the ‘library’ question, to remind students that ‘online resources’ – i.e. all that stuff that looks free because it works a bit like Google, even though it has actually cost the university thousands of pounds – matter as much as ‘books’. Finally, the NSS enters the twenty-first century.
  • And there are some interesting changes to ‘assessment and feedback’ terminology. Firstly, ‘prompt’ feedback becomes ‘timely’ feedback. Well bugger that: just when we have crushed our lives under regimented essay-return deadlines. Jo Johnson wants us emailing our students on the weekends; hell, we’ve been marking essays on Christmas Day! Nonetheless, the logic of ‘timely’ (i.e. in time to make a difference) is irrefutable and eminently sane. Secondly, ‘detailed comments’ becomes ‘helpful comments’. Good sense there as well: sometimes, quality simply does not equate with quantity.

However self-effacing this consultation document may look, it will affect our lives sooner than we think. As head of a department that has done rather well in the NSS over the years, I won’t be waiting for the results of the consultation before taking some action. In fact I feel a ‘community’ working group coming on.

Still loving the National Student Survey

I didn’t plan to write a post on the NSS this year, but sometimes I see something so bonkers that I can’t hold my tongue. That came along last week in the form of a Guardian blog-post: The National Student Survey should be abolished before it does any more harm. And so, while my mate Derfel has already led the defence, and although I’ve made some of the points below before, here we go again.

Let’s tackle those NSS fallacies – most of them familiar enough – as rehearsed in the Guardian blog.

1. There must be something wrong with a survey that demonstrates high levels of satisfaction

Well: why? Surely it’s possible that if students say they’re satisfied, they are satisfied. And surely it’s also possible that satisfaction rates have gravitated upwards because we’ve systematically raised standards over the years since its introduction. We’ve worked hard, we’ve got better, and our students are getting a better experience. Why wouldn’t we want to consider that as a possibility? Why wouldn’t we want to scream it from the rooftops – especially when there are plenty of people outside of universities telling us we’re not worth it? Or should we, perhaps, devise a new measure that demonstrates once and for all that we’re crap?

Over many years of studying and responding to NSS results, I’ve been involved in reviews of a couple of programmes getting really appalling satisfaction rates. (Don’t worry: not recently and not my own.) And I’ve seen what a very powerful tool for reform those results can be. I’ve seen a DVC banging a desk and saying the results are shameful. And I can assure you that we fixed those programmes. Without the NSS, can I be sure those changes would have happened?

Maybe there’s not so much fixing left to be done. Possibly. Maybe there’s also a case, now, for a review of the NSS questions. Last year I argued in favour of the proposed ‘engagement’ questions, which I still think can help us to move forwards on the agenda of quality enhancement. There’s always cause, in other words, to look for the next horizon. But why trash what has been – and remains – such a powerful tool for reform?

2. It doesn’t demonstrate differences between programmes at an institution

Honestly! The Guardian blog-post was published the day after the institution-level results were published. But here’s the thing: there’s more to come. Really! Programme level data, comments – and people like me will devour the lot. Really.

And we’ll do that because the real value of the NSS is not whether it leaves my department one place above or below, say, Warwick – which is really neither here nor there – but what it identifies as priorities for the year to come. Because the micro-level data can be very, very specific.

In my department, for instance, we’ve had a couple of years of very high results on ‘prompt’ feedback and ‘detailed’ feedback, but significantly lower on feedback that has helped the students ‘clarify things [they] don’t understand’. Given that I’d rather like our students to understand things, and that I’d also rather like to protect my colleagues from crushing workloads, this tells me that we’ve got a problem: we’re working ourselves into the ground, but not working as effectively as we might. If we can fix that problem – easier said than done, granted – we could improve things for everybody.

3. The NSS fosters a ‘race to the bottom’

The perception here is that we have all cut reading lists, shortened assessments, and generally made life easier for students, so that they will be more satisfied. And the evidence?

In my part of the world, actually, most of the evidence is to the contrary. Most students, in my experience, want more, not less, assessment. Funny thing, that: something to do with the fact that they want to learn. Who’d have thought it? And our students are unquestionably reading more, not less, than they were 5-10 years ago. How we facilitate that reading may have changed, due to the digitization of material. Some people may argue that this is spoon-feeding, but it would be hard to argue that it’s a product of the NSS specifically, rather than of advances in technology more generally.

One of the bigger changes in my department, meanwhile, has been in contact hours. Again, students wanted more, and we gave them more. Again, some argue that this is spoon-feeding, but again there’s plenty of evidence that they value, more than anything, quality learning hours, not hours that deliver piles of information in generic contexts.

In fact my biggest beef with this argument is that it assumes that students don't want to be challenged and don't know what's good for them. They won't always be right, of course, but that assumption strikes me as both patronizing and wrong.

  2. The NSS is a waste of resources

This is a new one to me: ‘just the cost of rewarding survey-completers with vouchers would cover a lecturer’s salary at many institutions’. Really?

For a start, of course, we don't know who's completed it and who hasn't. So how are we all 'rewarding survey-completers'? At Exeter this year I think we had banks of laptops and plates of donuts here and there: the message being, 'stop for a few minutes, complete the NSS and have a donut'. I guess the donut was a 'reward', but it doesn't strike me as excessive. Maybe some universities were letting students take the laptops home if they ticked the right boxes. Who's to know?

* * *

So let’s, please, defend the NSS. I don’t for a minute think students should be our only source of evidence about quality teaching. That’s another debate. But nor do I think we should ignore their views, because the NSS has proved our students, year after year, to be astute and honest commentators on what we’re doing. And it’s shown us, on the whole, to be doing a bloody good job.

What’s the Problem with the National Student Survey?

The announcement of the latest NSS results has been accompanied by a surprising amount of moaning. Academics are grumpy; The Tab even claimed, in its evidently scientific poll, that 75% of students think the NSS 'is having a laugh'. You've got to love The Tab – don't you?

Given that the 2014 NSS demonstrated record levels of satisfaction, this seems counter-intuitive. It's perhaps a bit like trashing A Levels when the results keep rising. Might the more obvious explanations, in each case, be worth considering? Maybe good teaching and hard work, in the case of the A Levels; maybe rising standards at universities, in the case of the NSS. In fact – and here's a thought – maybe the NSS has been a key agent in that process of improvement.

I’m happy to make that case. I can remember the pre-NSS days, and I’ve been involved in monitoring results and devising responses for more years than not since it was introduced. The problem, to the extent that there is one, is that the headline figures and the resultant league tables are what get all the attention, and these actually don’t tell us an awful lot. I mean, there’s obviously a difference between a unit consistently getting ratings around 90% and another one getting around 70%. And those figures can be very helpful in confronting the under-performing unit, by the way: been there, seen what can be done. But arguing that there’s a significant difference between two English departments because there are a couple of points between them is manifestly silly, much as we all insist on doing it. That’s what gives the survey a bad reputation.

But even in the consistently high-performing departments, the NSS almost always has something to tell us. English at Exeter is regularly in the top dozen or so, which is excellent, yet I think we’ve been able to maintain that position in part because we’ve listened to what it’s said. These lessons aren’t going to be in the headline figures (i.e. the section averages that feed into league tables, or the overall satisfaction question), but in the responses to individual questions, and also the comments. (For Exeter readers, all the figures are here: http://www.exeter.ac.uk/spc/stratplan/studentsurveys/nationalstudentsurvey/nss2014/.) A few years ago the comments were relentless on contact hours: so we did something about it, and those comments have died away. That happened before all the dogmatic fussing over key information sets, and was surprisingly specific. Other humanities departments weren’t getting those comments; relatively small alterations fixed it.

We don’t yet have the comments from this year’s survey, but I’m fascinated (maybe it doesn’t take much, but I really am) by the differences between responses on different questions. On section averages, we’re riding high: seventh in the sector, with none of our serious competitors above us. But that average is composed of some wide variations: on one question we’re third-best in the sector; on another we’re down to 48th. And these variations are occurring even within sections (assessment and feedback, academic support, etc.). One possible conclusion to be drawn from this is that it’s all a little random – that students don’t really know what we’re asking, don’t have any points of comparison, and so forth. My conclusion, though, would be that the students are actually making some careful and subtle points. They recognize that we’re good at some things, but they think we could do better at others. I think they might be right.

Maybe we know some of these things already. We’re in the process of changing our personal tutoring system – though the big question is whether we can ensure that it addresses the relevant NSS questions (‘I have received sufficient advice and support with my studies’, ‘I have been able to contact staff when I needed to’, ‘Good advice was available when I needed to make study choices’) better than the old one. Assessment and feedback? Haven’t we just cracked the ‘prompt’ side of things? It’s our best result of all. But don’t we want, above all else, to be at the top on Q9: ‘Feedback on my work has helped me clarify things I did not understand’? We’ve agonized for years, as only academic departments can, over our feedback form. I’m coming round to the view that a few boxes/prompts (‘suggestions for improvement’, etc.) could only be a good thing, since they would give a few clear signposts to what we’re doing.

The NSS is likely to change after a recent review, though it’s not going to go away. Actually to me the review report makes a huge amount of sense. The plan is to revise some of the questions – all very sensible, indeed overdue – and to introduce some new questions, most of which lean towards ‘student engagement’. The meaning of that term has been controversial (not least between me and my mate Derfel), but I really like the way the proposed questions will focus not only on matters that we would all see as important to the culture of our department, but also matters that we can influence. So, for instance, the proposal is to ask students whether they ‘feel part of a group of students and staff committed to learning’, whether they’ve been ‘encouraged to talk about academic ideas with other students’, whether ‘staff appear to value the course feedback given by students’, and so on. I think we’ll do well on this sort of thing; I also think that to consider how we might ensure we do well can only make us a more successful department.

There is one problem with the NSS: it appears to favour small/medium-sized campus universities. (This is roughly the astute UEA VC’s point, though one that’s been around for a while.) But: a) that’s not Exeter’s problem; and b) it goes some way to compensating for the fact that the international league tables do the opposite. So I’m happy enough about the NSS, though I’m also happy to hear contrary views.