There’s a new frontier in the quest to demonstrate teaching excellence, and it’s called the Gross Teaching Quotient. By mid-winter senior managers will be descending upon departments across the country bearing spreadsheets and bad news: ‘We need to talk about your Gross Teaching Quotient’.
The Gross Teaching Quotient – and let’s call it GTQ, because that’s what it will become – is the most eye-catching aspect of the recent proposals outlining the subject-level Teaching Excellence Framework. It’s just a pilot version at this stage, but it’s coming at us fast, scheduled to be in place for real in 2019-20.
I’m broadly supportive of institution-level TEF. Some aspects of it, such as the Olympic-style medals, are manifestly barking, but there’s no avoiding the push to demonstrate teaching excellence. The TEF uses reasonable proxies for quality – and I haven’t heard any better suggestions, despite all the noise – while the written submission encourages innovative actions and reflective thinking.
But I’ve had my head firmly in the sand on subject-level TEF. On practical grounds the concept has seemed too cumbersome, while I’ve also wondered about the relation between cost and value. I mean, if institution-level TEF is already focusing managerial minds and driving reform – as I think it is, though nobody has bothered to wait long enough to assess this – where is the added value of subject-level TEF?
Jo Johnson’s answer would be that it’s all about the consumer. Potential students generally choose courses first and universities second, so they will want evidence of quality at that more granular level. In my experience such people are in fact overwhelmed with evidence – from league tables, Unistats and so forth – but perhaps that might equally support Johnson’s argument. Some people, it seems, just want to see a gold medal.
The pilot will include only a handful of universities. Subject-level TEF will use the same metrics as institution-level TEF, with a few notable additions. There will be a written submission, stripped back to five pages. And there will be 35 subjects or subject-groupings, to avoid the metrical muddle that can be caused by small disciplines.
There are two pilot models. Model A (‘by exception’) will simply give subjects the same rating as their institutions unless the metrics indicate a need for closer investigation. Model B (‘bottom-up’) will assess each subject fully, and build towards an institution-level award from this basis. I propose to label these, respectively: ‘the sane model’ and ‘brace yourself, it’s coming’.
Then it starts to get interesting. All the well-meaning complaints from across the sector about the TEF merely measuring proxies have got the TEF-team thinking. They’re not for turning; they’re marching ever onward to the holy grail of quantifiable teaching excellence. And this brings them – as, of course, it would – to the GTQ.
The GTQ is a measure of ‘teaching intensity’. And teaching intensity is not all about contact hours; honest, the document says this maybe fifteen times, so I swear they must mean it. Instead it’s an idea, kicked about in last year’s Success as a Knowledge Economy White Paper, about the relation between the quantity and quality of teaching. And that was all derived from Graham Gibbs’s 2010 report Dimensions of Quality.
Teaching intensity will be measured in part by a student survey. Think about that for a minute: students will be asked questions about whether they’re getting enough teaching. And then there will be the calculations, weighting ‘the number of hours taught by the staff-student ratio of each taught hour’. Got it? The GTQ is then ‘calculated by multiplying the taught hours by the appropriate weighting and summing the total across all groups, followed by multiplying by 10 to arrive at an easily interpretable number’. And then it’s divided by the square root of staff days lost due to stress. Or something like that.
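For readers who, like me, needed to see the arithmetic written down to believe it, here is a minimal sketch of the quoted calculation: weight each block of taught hours by a factor tied to its staff-student ratio, sum across all groups, then multiply by 10. The bands and weights below are my own illustrative assumptions, not the official pilot values.

```python
# Sketch of the GTQ arithmetic as quoted in the proposals:
# sum(taught hours x weighting) across all groups, then x 10.
# The example weights are invented for illustration only.

def gross_teaching_quotient(groups):
    """groups: list of (taught_hours, weight) pairs,
    one pair per staff-student-ratio band."""
    return 10 * sum(hours * weight for hours, weight in groups)

# e.g. 100 hours in small groups (assumed weight 1.0),
# 200 hours in mid-sized classes (0.5),
# 50 hours of large lectures (0.2):
example = [(100, 1.0), (200, 0.5), (50, 0.2)]
print(gross_teaching_quotient(example))  # 2100.0
```

Note that, on this reading, the number is driven entirely by volume and class size; nothing in it speaks to who is teaching or how well.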
It’s worth noting just what a narrow reading of Gibbs this actually is. In the desperation to create a new metric, lots of valuable Gibbsian ‘dimensions’ have been set aside: from the critical questions of who does the teaching and how well they have been trained (puzzlingly ignored by TEF so far), through assessment and feedback, and beyond. I guess we get to brush up on this stuff when we’re preparing our written submissions, but there’s a curious narrowing of vision for all the rhetoric to the contrary.
My greatest concern about all this is that GTQs will evidently be produced as comparative measures, driven by an underlying assumption that more intensity is always going to be better. Practice in my department may be perfectly sound from all sorts of perspectives; however, as I understand the proposals, if our GTQ is weaker than a competitor’s we may be heading for silver. Admittedly it’s only one metric, but from a management perspective it will attract attention as the newest and perhaps the easiest to manipulate. Subject-level TEF will understandably instil anxiety in all sorts of people in management positions; not all of them can be relied upon to respond reasonably.
This is a pilot, and much may change between now and full implementation. Crucially, the review of the first round of institution-level TEF will affect anything that happens thereafter at subject level. But subject-level TEF may affect departments pretty much immediately. Much of that change will be for the good, some will be more questionable, and an awful lot could increase workloads and stress-levels, for students and staff alike.
* This piece was first published in Times Higher Education, 10 August 2017