Not much love for the TEF*

How do we like the Teaching Excellence Framework so far? According to a survey of participating higher education institutions conducted earlier in 2017 – after documents were submitted but before results were announced – the answer looks like a resounding: ‘not much at all, thanks’. This survey, part of a rapid-response Universities UK review of the TEF, poses some important questions, centring particularly on the relation between future costs and benefits.

The survey results are heavy with the collective shrugging of PVC shoulders. Asked whether ‘The TEF will accurately assess teaching and learning excellence’, 73% of respondents actively disagreed while just 2% agreed. Given that only 83 universities responded, that means the managers of only two universities in the entire country are prepared to argue that the TEF does its job. Or, to be more accurate, that was the case before the feel of a gold medal possibly changed some minds.

But there is also evidence of a curiously widespread acceptance of TEF. Asked whether it will make a ‘positive contribution to student decision-making’, only 18% agreed. Asked whether it will ‘enhance teaching and learning practice’, views were roughly split (25% agree; 29% disagree). But asked whether it will ‘enhance the profile of learning and teaching’, 73% agreed. It may be unreliable, even misleading – seriously, overall these are awful results, far below my expectations – but at least it’s putting education on the agenda.

It’s perhaps also driving spending decisions within universities. Asked about investment subsequent to the announcement of TEF in 2015, the general attitude seems to be: ‘well, you see, we were going to do that anyway’. Indeed 53% said TEF had no influence or impact on such decisions. But the raw facts of investment in learning and teaching in this period are nonetheless impressive: ‘additional investment in academic support’ (over 70%); ‘additional investment in learning and teaching’ (over 80%); ‘new progression routes for teaching’ (over 20%).

There are plenty of critics of UK higher education who are asserting, on the back of frankly bugger all evidence, that universities are not investing along these lines at all. We’re spending it all on vice-chancellors’ salaries, their houses and, like, loads of other fat-cat stuff. Maybe these critics would say: well, the universities would say they’re doing all that worthy stuff, wouldn’t they? But it’s actually dead easy to document this investment, if anyone cares to look; in fact those provider submissions will help. Our critics are fixated on the lack of competition on price, but anyone working in higher education knows that there is intense competition at the level of quality.

Those, including me, who would argue from this point that the TEF is too important to be allowed to fail might also derive comfort from the estimated cost. Taking account of estimates of staff time devoted to the 2017 TEF by universities, the cost came in at about £4.1 million. Now I know what you’re thinking: we could employ at least nine extra fat-cats for that money. We could let them loose to roam wastefully around the country. But it’s worth remembering that the REF comes in at £212 million. If the TEF helps universities, in the face of ill-informed criticism, to demonstrate commitment to learning and teaching, maybe it’s a steal at £4 million.

But I sense a dilemma. The devastating lack of confidence in the process must be addressed, since indifference could so easily lead to cynicism, and from there to notoriety. Hence the attention that will be devoted to responses to the questions concerned with ‘priorities for future development’. There’s not exactly unbridled enthusiasm here, but let’s work with what we’ve got. Among other things, we have some interest in measures of ‘learning gain’, a fairly solid commitment to the value of teaching qualifications, and a vague appeal to the idea of ‘new metrics’ and ‘more sophistica[tion]’.

Sophistication and new metrics, however, sound kind of expensive. Measuring learning gain is a bottomless money-pit. And subject-level TEF, to which the government remains committed in the face of an overwhelming lack of interest within the universities, is predicted to cost up to £13.7 million.

So what’s the value to the sector of a TEF that will command a greater degree of confidence? One of the intriguing facts offered in this report is that there was a high correlation between Guardian league table results – which cost the sector nothing – and TEF results. I’d like to say I predicted that would happen; though even if I did, I’ll concede that it was always likely to be the case. Consequently, it seems to me self-evident that if TEF in future is to be demonstrably better than existing league tables, and certainly if it is to be used as the basis for differentials in fees, it will inevitably become significantly more expensive. At what point will its value justify its cost?

* Published under a different title by

‘We need to talk about your gross teaching quotient’*

There’s a new frontier in the quest to demonstrate teaching excellence, and it’s called the Gross Teaching Quotient. By mid-winter senior managers will be descending upon departments across the country bearing spreadsheets and bad news: ‘We need to talk about your Gross Teaching Quotient’.

It’s gold, gold, gold for the English Department

The Gross Teaching Quotient – and let’s call it GTQ, because that’s what it will become – is the most eye-catching aspect of the recent proposals outlining the subject-level Teaching Excellence Framework. It’s just a pilot version at this stage, but it’s coming at us fast, scheduled to be in place for real in 2019-20.

I’m broadly supportive of institution-level TEF. Some aspects of it, such as the Olympic-style medals, are manifestly barking, but there’s no avoiding the push to demonstrate teaching excellence. The TEF uses reasonable proxies for quality – and I haven’t heard any better suggestions, despite all the noise – while the written submission encourages innovative actions and reflective thinking.

But I’ve had my head firmly in the sand on subject-level TEF. On practical grounds the concept has seemed too cumbersome, while I’ve also wondered about the relation between cost and value. I mean, if institutional-level TEF is already focusing managerial minds and driving reform – as I think it is, though nobody has bothered to wait long enough to assess this – where is the added value to subject-level TEF?

Jo Johnson’s answer would be that it’s all about the consumer. Potential students generally choose courses first and universities second, so they will want evidence of quality at that more granular level. In my experience such people are in fact overwhelmed with evidence – from league tables, unistats and so forth – but perhaps that might equally support Johnson’s argument. Some people, it seems, just want to see a gold medal.

The pilot will include only a handful of universities. Subject-level TEF will use the same metrics as institution-level TEF, with a few notable additions. There will be a written submission, stripped back to five pages. And there will be 35 subjects or subject-groupings, to avoid the metrical muddle that can be caused by small disciplines.

There are two pilot models. Model A (‘by exception’) will simply give subjects the same rating as their institutions unless the metrics indicate a need for closer investigation. Model B (‘bottom-up’) will assess each subject fully, and build towards an institution-level award from this basis. I propose to label these, respectively: ‘the sane model’ and ‘brace yourself, it’s coming’.

Then it starts to get interesting. All the well-meaning complaints from across the sector about the TEF merely measuring proxies have got the TEF-team thinking. They’re not for turning; they’re marching ever onward to the holy grail of quantifiable teaching excellence. And this brings them – as, of course, it would – to the GTQ.

The GTQ is a measure of ‘teaching intensity’. And teaching intensity is not all about contact hours; honest, the document says this maybe fifteen times, so I swear they must mean it. Instead it’s an idea, kicked about in last year’s Success in a Knowledge Economy White Paper, about the relation between the quantity and quality of teaching. And that was all derived from Graham Gibbs’s 2010 report Dimensions of Quality.

Teaching intensity will be measured in part by a student survey. Think about that for a minute: students will be asked questions about whether they’re getting enough teaching. And then there will be the calculations, weighting ‘the number of hours taught by the staff-student ratio of each taught hour’. Got it? The GTQ is then ‘calculated by multiplying the taught hours by the appropriate weighting and summing the total across all groups, followed by multiplying by 10 to arrive at an easily interpretable number’. And then it’s divided by the square-root of staff days lost due to stress. Or something like that.
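For the curious, the arithmetic the proposal describes can be sketched in a few lines. This is only an illustration: the weighting bands below are hypothetical, since the actual weights attached to each staff-student ratio are not spelled out here; only the ‘multiply, sum, then scale by 10’ shape comes from the document.

```python
def gtq(groups, weight_for_ratio):
    """Sketch of the Gross Teaching Quotient calculation.

    groups: list of (taught_hours, staff_student_ratio) tuples,
            one per teaching group.
    weight_for_ratio: function mapping a staff-student ratio to a weight.
    """
    # Multiply each group's taught hours by its weighting, sum across
    # all groups, then multiply by 10 for 'an easily interpretable number'.
    total = sum(hours * weight_for_ratio(ratio) for hours, ratio in groups)
    return 10 * total


def example_weight(ratio):
    # Hypothetical banding: smaller groups (lower ratios) weighted higher.
    if ratio <= 5:
        return 1.0
    elif ratio <= 20:
        return 0.5
    else:
        return 0.25


# e.g. 30 hours of seminars at ~15 students plus 60 lecture hours at ~100:
# 10 * (30 * 0.5 + 60 * 0.25) = 300
print(gtq([(30, 15), (60, 100)], example_weight))
```

Note how sensitive the result is to wherever the band boundaries fall, which is rather the point about comparability between departments.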

It’s worth noting just what a narrow reading of Gibbs this actually is. In the desperation to create a new metric, lots of valuable Gibbsian ‘dimensions’ have been set aside: from the critical questions of who does the teaching and how well they have been trained (puzzlingly ignored by TEF so far), through assessment and feedback, and beyond. I guess we get to brush up on this stuff when we’re preparing our written submissions, but there’s a curious narrowing of vision for all the rhetoric to the contrary.

My greatest concern about all this is that GTQs will evidently be produced as comparative measures, driven by an underlying assumption that more intensity is always going to be better. Practice in my department may be perfectly sound from all sorts of perspectives; however, as I understand the proposals, if our GTQ is weaker than a competitor’s we may be heading for silver. Admittedly it’s only one metric, but from a management perspective it will attract attention as the newest and perhaps the easiest to manipulate. Subject-level TEF will understandably instil anxiety in all sorts of people in management positions; not all of them can be relied upon to respond reasonably.

This is a pilot, and much may change between now and full implementation. Crucially, the review of the first round of institution-level TEF will affect anything that happens thereafter at subject level. But subject-level TEF may affect departments pretty much immediately. Much of that change will be for the good, some will be more questionable, and an awful lot could increase workloads and stress-levels, for students and staff alike.

* This piece was first published in Times Higher Education, 10/8/17

The year the National Student Survey was sabotaged*


The cunning plan of the National Union of Students to boycott the National Student Survey feels like a long time ago. It wasn’t so much a different news cycle as a different dimension of experience altogether. I mean, back then Andrew Adonis looked like a supporter of the British higher education system.

Now that it’s come back to bite, it’s worth reminding ourselves of what it was all about. The NUS was aggrieved that the NSS was being used as a metric in an exercise – the Teaching Excellence Framework – that was being used to drive an agenda of marketization. It wasn’t a protest against the NSS itself, but the only way that students could see to undermine the TEF. It remains NUS policy.

The boycott has had highly marked though isolated effects. While the national response rate has dropped only four points, twelve universities did not receive sufficient responses to be able to register results. That says a lot about the NUS: passionately political on some campuses but not on others. At my university I couldn’t even find students who wanted to debate the issue.

Should the student leaders at those twelve unlisted universities be proud of themselves this morning? Doubtless they will see it as a result; many students devoted an awful lot of time and energy to sabotaging the survey. They have ensured that the 2017 NSS results will always be marked with an asterisk.

But I don’t see any chance of this stopping the TEF. That ship has sailed; the debate has moved on in the meantime to new metrical frontiers, such as learning gain and teaching intensity. While many people argue that the TEF metrics are no more than proxies of teaching quality, the direction of travel is towards more rather than fewer metrics, and also towards the granularity of subject-level assessment.

Meanwhile, the fact remains that there is only one TEF metric that directly registers the perceptions of students, and this is the NSS. It’s also been arguably the greatest single driver of reform in higher education over the past decade. I’ve seen it prompt wholesale change in poorly-performing departments. And even in my own, which generally does well, we always identify specific areas for attention: feedback, programme management, student support, resources, and so forth.

So I feel sorry for the students at those twelve unlisted universities who completed the survey. No, actually I feel bloody angry on their behalf. Their responses will be made available internally so they should still have some impact; however, they won’t be published and won’t register in league tables. A handful of managers this morning will be breathing sighs of relief, and that’s not what their students deserve. Those students paid £27,000 – in fees alone – and their views matter.

I also feel sorry for the people who put so much effort into revising the NSS. The focus right now shouldn’t be on the boycott; it should be on the responses to the new questions added this year. My favourite one was: ‘I feel part of a community of staff and students’. But there was also: ‘I have had the right opportunities to work with other students as part of my course’; and ‘The IT resources and facilities have supported my learning well’. These questions help to document the full dimensions of higher education. They are light-years away from the ‘value-for-money’ reductivism of certain other student surveys that jostle for the attention of policy-makers and journalists.

The NSS also includes a section on ‘student voice’. There’s: ‘It is clear how students’ feedback on the course has been acted on’. And there’s a bleak irony to this one: ‘The students’ union effectively represents students’ academic interests’. Well, did they?

I’m not immediately sure how the NSS results will be spun as bad news, but I expect it will happen. Maybe Lord Adonis will claim that there is ‘no student satisfaction in Cambridge’. It feels like a precarious moment for the sector, and everyone – not least the students – could do with some credible data on what’s working and what needs attention. In this context, the boycott-compromised 2017 results feel like an own-goal for British higher education. I’m not sure that’s exactly what the NUS had in mind.

But only one tantrum: getting it right and wrong on TEF results day*

The TEF results have come and gone, and the press has predictably declared some of the nation’s best universities to be ‘second-rate’. One lesson to be drawn from the past few months is that there are plenty of people determined to kick the British university system for no better reason than that it remains world-class despite an overriding spirit of national decay. The TEF has fed such commentators an easy line.

But setting aside those frustrations, what might we learn from initial responses across the sector? What might they tell us about the faultlines in debates over TEF as we move forward?


Only the one tantrum? Really?

Just a hunch, this, but I reckon that when Christopher Snowden, VC of Southampton, cut loose within hours of his university’s bronze award being announced, he rather expected to be leading a chorus. A concerted vice-chancellorial spitefest might have battered a hole in the TEF.

Tantrums there must surely have been, but Snowden is to date the only one (as far as I know) to have taken his into the public domain. I suspect he’ll regret it. Meanwhile there are some outstanding examples of how to present mediocre news. Bristol, for instance, takes the opportunity to boast about other league table successes, and lists some ‘ongoing improvements’. UCL’s Michael Arthur boldly concedes that there remains another level beyond silver: ‘UCL puts education firmly on a par with research and we will not be satisfied until we have achieved a gold standard’.

Maybe those statements are easier to articulate when one’s institution has outperformed expectations, but their focus on the future is smart. In my mind that’s also preferable to the vainglorious puff produced by some of the golden ones. But perhaps the dominant impression, certainly from non-gold Russell Group universities, is a desire that this whole bloody thing will just go away. Some don’t even mention the TEF on their websites. Others waited a day or so to do so, when they had had the information two days before its release. Listen carefully: that’s the sound of institutional passive aggression.

All of which suggests that the TEF has roughly the same level of security as Brexit. It will probably survive, but plenty of people will try to soften it or blow it out of the water in the months and years to come.


But when did words ever matter?

What, then, do responses indicate about the possible shapes TEF might assume? For me, one surprising aspect of the post-results commentary has been criticism that provider submissions have influenced the panel’s judgement. In other words, a set of metrics that indicated a lower grade has in some instances been trumped by sixteen pages of prose – or, in a couple of PVC-career-wrecking cases, vice versa.

Goddamn, the university system in this country is addicted to metrics. We’ll bicker forever and a day about what those metrics should be, then we’ll cry foul when they aren’t followed slavishly by the experts we’ve put in place to exercise some discretion. Why not just use a computer next time?

I’d argue, on the contrary, that those submissions are a critical part of the process. When I was involved in producing my university’s statement, I was vaguely aware that it probably wouldn’t much matter since our metrics were already good. Yet, as I’ve written before, the process of that document’s composition, which involved senior figures across the university, was part of the TEF’s point. It focused minds on what we were doing well, and what we could be doing better. Now we’ll work through the submissions of other successful universities, looking for further ways to improve.

Anyone who argues that these documents should be disregarded doesn’t understand the relation between self-reflection and transformation. But it seems that this point might need to be made to metrics-heads in reviews to come. Moreover, those arguing that a gold rating will lead to complacency similarly fail to understand how such levels of performance have been achieved. My university’s gold award is the result of a decade or more of relentless attention to the student experience. Nobody can expect to slack off for a couple of years and revive things in time for the next assessment.


Future medalling

I think I’ve made my disdain for the Olympic-style medals clear enough already. But I appreciate the dilemma. The architects of TEF wanted something that couldn’t easily be brushed aside. They also wanted to avoid the silliness of meaninglessly fine league-table gradations, of the kind produced by the National Student Survey. Chris Husbands, the TEF Chair, has been quick to rubbish league tables produced on the basis of the TEF results.

It seems to me that some of this year’s silver medallists might have pointed the way to the future in their press releases. Warwick’s headline, for example, states simply: ‘Government declares Warwick teaching “excellent”’. And so it did. By implication it declared certain other universities to be ‘outstanding’. Perhaps those bronze universities, meanwhile, were ‘good’. Maybe that sounds a bit OFSTED-ish, but there’s a value in that kind of recognition.

One interesting question, though, is grade-inflation. A system of medals suggests there will always be a decent proportion of bronzes. But what if TEF does its job and raises levels of performance? Will panels of the future reward those improvements, or systematically raise the bar?


I’m betting the TEF will continue for a while, though it’s bound to change. Focusing on proxies of teaching quality, though unpopular with some, remains the only feasible method. Medals remain irredeemably mad. Subject-level assessment looks as cumbersome and impractical as ever, and might quietly be shelved. Linking the right to raise fees to a silver or gold rating, and not messing around with gradations between them, looks like a sensible modification of the crazily fine-grained model originally proposed. And honestly, even the LSE might be able to get a silver if they set their minds to it.

* Published under a different title by