The announcement of the latest NSS results has been accompanied by a surprising amount of moaning. Academics are grumpy; The Tab even claimed, in its evidently scientific poll, that 75% of students think the NSS ‘is having a laugh’. You’ve got to love The Tab – don’t you?
Given that the 2014 NSS demonstrated record levels of satisfaction, this seems counter-intuitive. It’s perhaps a bit like trashing A Levels when the results keep rising. Might the more obvious explanations, in each case, be worth considering? Maybe good teaching and hard work, in the case of the A Levels; maybe rising standards at universities, in the case of the NSS. In fact – and here’s a thought – maybe the NSS has been a key agent in that process of improvement.
I’m happy to make that case. I can remember the pre-NSS days, and I’ve been involved in monitoring results and devising responses for more years than not since it was introduced. The problem, to the extent that there is one, is that the headline figures and the resultant league tables are what get all the attention, and these actually don’t tell us an awful lot. I mean, there’s obviously a difference between a unit consistently getting ratings around 90% and another one getting around 70%. And those figures can be very helpful in confronting the under-performing unit, by the way: been there, seen what can be done. But arguing that there’s a significant difference between two English departments because there are a couple of points between them is manifestly silly, much as we all insist on doing it. That’s what gives it a bad reputation.
But even in the consistently high-performing departments, the NSS almost always has something to tell us. English at Exeter is regularly in the top dozen or so, which is excellent, yet I think we’ve been able to maintain that position in part because we’ve listened to what it’s said. These lessons aren’t going to be in the headline figures (i.e. the section averages that feed into league tables, or the overall satisfaction question), but in the responses to individual questions, and also the comments. (For Exeter readers, all the figures are here: http://www.exeter.ac.uk/spc/stratplan/studentsurveys/nationalstudentsurvey/nss2014/.) A few years ago the comments were relentless on contact hours: so we did something about it, and those comments have died away. That happened before all the dogmatic fussing over key information sets, and was surprisingly specific. Other humanities departments weren’t getting those comments; relatively small alterations fixed it.
We don’t yet have the comments from this year’s survey, but I’m fascinated (maybe it doesn’t take much, but I really am) by the differences between responses on different questions. On section averages, we’re riding high: seventh in the sector, with none of our serious competitors above us. But that average is composed of some wide variations: on one question, we can be third-best in the sector, on another we’re down to 48th. And these variations are occurring even within sections (assessment and feedback, academic support, etc.). One possible conclusion to be drawn from this is that it’s all a little random – that students don’t really know what we’re asking, don’t have any points of comparison, and so forth. My conclusion, though, would be that the students are actually making some careful and subtle points. They recognize that we’re good at some things, but they think we could do better at others. I think they might be right.
Maybe we know some of these things already. We’re in the process of changing our personal tutoring system – though the big question is whether we can ensure that it addresses the relevant NSS questions (‘I have received sufficient advice and support with my studies’, ‘I have been able to contact staff when I needed to’, ‘Good advice was available when I needed to make study choices’) better than the old one. Assessment and feedback? Haven’t we just cracked the ‘prompt’ side of things? It’s our best result of all. But don’t we want, above all else, to be at the top on Q9: ‘Feedback on my work has helped me clarify things I did not understand’? We’ve agonized for years, as only academic departments can agonize, over our feedback form. I’m coming round to the view that a few boxes/prompts (‘suggestions for improvement’, etc.) could only be a good thing, since they would give a few clear signposts to what we’re doing.
The NSS is likely to change after a recent review, though it’s not going to go away. Actually, to me the review report makes a huge amount of sense. The plan is to revise some of the questions – all very sensible, indeed overdue – and to introduce some new questions, most of which lean towards ‘student engagement’. The meaning of that term has been controversial (not least between me and my mate Derfel), but I really like the way the proposed questions will focus not only on matters that we would all see as important to the culture of our department, but also on matters that we can influence. So, for instance, the proposal is to ask students whether they ‘feel part of a group of students and staff committed to learning’, whether they’ve been ‘encouraged to talk about academic ideas with other students’, whether ‘staff appear to value the course feedback given by students’, and so on. I think we’ll do well on this sort of thing; I also think that considering how we might ensure we do well can only make us a more successful department.
There is one problem with the NSS: it appears to favour small/medium-sized campus universities. (This is roughly the astute UEA VC’s point, though one that’s been around for a while.) But: a) that’s not Exeter’s problem; and b) it goes some way to compensating for the fact that the international league tables do the opposite. So I’m happy enough about the NSS, though I’m also happy to hear contrary views.