It’s been open season for attacks on the National Student Survey in recent months. The House of Lords, for instance, was treated to the old chestnut that students give better satisfaction ratings to easier courses. That’s not only entirely unsubstantiated, but disrespectful to the many thousands of students who actually want to be challenged, and in turn respond honestly to NSS questions. Even Chris Husbands, Chair of the Teaching Excellence Framework, has labelled the NSS as a ‘flawed … proxy for teaching quality’.
This has been music to the ears of many academics, who have long seen the NSS as a stick used to beat them, a key and resented manifestation of audit culture. Some objections to the NSS are fair, and indeed entirely helpful in limiting the uses to which it might be put; others, in my opinion, are knee-jerk reactions to students who dare to voice their dissatisfaction.
My point here is: be careful what you wish for. If we don’t stand behind the survey that gathers the considered responses of roughly 70% of all final-year undergraduates each year, we can only stand back and watch as other forms of quasi-research flood the public sphere.
Letting the journalists do the satisfaction research
Take the piece in today’s Times by Jenni Russell, which claims that many students are ‘getting a third-class education’. This contains no mention at all of the NSS. I mean, why should she worry about this longstanding and universal survey of finalists, when even the chair of the TEF seems so keen to run it down? And the piece demonstrates very well what happens when journalists go looking for other forms of evidence.
Russell has two sources of evidence. First, she uses students’ perceptions of ‘value for money’ as recorded in the Student Academic Experience Survey administered by the Higher Education Policy Institute. Fair enough: that survey gathered 15,000 responses in 2016. But to cry in horror at evidence that students became less convinced of value for money when the government transferred the cost of higher education from the state to the student, payable in the form of income-contingent loans, is a bit rich. Obviously this is a measure universities have to address, but of course students were going to react when their levels of debt soared.
Second, rather than turning to the NSS, Russell performed her own student satisfaction survey. In her words: ‘I’ve talked to ten students in the past few days’. Ten students: that’s a tough piece of research. And on this basis she concludes that Oxbridge is good, on the back of a chat with perhaps a single student. But, you know, that feels about right, doesn’t it? After all, Russell herself is an Oxbridge graduate. She assures us also that the sciences are doing fine: maybe another two or three students there. So what’s left? Let’s have a go at the humanities and social sciences; that would be original, wouldn’t it? Sussex is slated on the word of one student. Feedback at Manchester is trashed on the basis of one tutor.
Reputation, reputation, reputation
Reputations for quality teaching are hard-won. And they matter; in a context of declining university applications, jobs depend upon them. There may well be programmes at Sussex or Manchester that are poor – I don’t know – but to pass judgement on dozens of academic departments, at a moment when applicants are making decisions about where to study, is desperately irresponsible.
But, to say it again, maybe that’s just what we get if we diss the NSS. It’s not hard to work through Unistats and identify some programmes that are getting poor satisfaction results. It’s also easy enough to drill down, say, to satisfaction with feedback, or to compare results over a succession of years.
If a journalist were to identify a course that’s been underperforming for years, then seek out some students on that course, I’d applaud her. I’ve worked to turn around units getting poor NSS results, and this has convinced me: a) that the NSS gives us reliable data; and b) that concentrated work can turn the situation around. Poor practice should be exposed and addressed, but let’s be careful how we identify it in the first place.
Attacks on the NSS seem especially unfortunate given the excellent, considered revision the survey has undergone this year. It seems to me blindingly obvious that student satisfaction does matter in a competitive system, whether academics like it or not, and universities are being sensible and responsible when they respond, year by year, to NSS results. Jenni Russell has merely demonstrated what we get when we don’t stand with confidence behind this reliable evidence-base.