Learning gain: or, the fantasy of measuring just about everything that happens in higher education

We will all be hearing more about ‘learning gain’ over the next year or two. Indeed we’ve been promised, in a report published late last year, ‘a series of symposia or conferences to inform the sector and broader public’ of its ‘relevance’. There might even be T-shirts and dancers and those cupcakes with glittery icing we get on graduation days. (Ok: I made up that last bit.)

Learning gain seems to me the ultimate dream of our overseers, whether you identify the latter as government ministers or consumers. The Green Paper, notably, listed it as an ‘aspect of teaching excellence’. The goal is to quantify precisely how much any student, or cohort of students, has gained from time at university. And hence, of course, to answer the question: was it all worth it? Like so many utopian schemes, it will, I’m betting, founder on the shores of reality. But I also wonder whether something, at least, might be salvaged from the wreckage, in the interests of students themselves.

 

The mother of all metrics

The motivation behind the goal of measuring learning gain is neatly expressed by Johnny Rich:

 

In theory, universities could be admitting highly capable, independent learners and merely providing them with an amenable atmosphere for a few years. On graduation, the university gives the student a stamp of approval [and a cupcake – ed.] and takes credit for any personal growth or development they may have experienced. In reality the student may either have taught themselves or simply acquired three years of life experience. This may not be happening, but where is the contrary evidence?

 

Rich is broadly sympathetic to higher education: his point is simply that, ‘in an austere world driven by econometrics, what is hard to measure is hard to fund’. And we’re all familiar enough with the concept of ‘value-added’ education in schools; the assumption, then, is that something similar should be possible to implement in higher education.

There are three main approaches to the measurement of learning gain, as well as any number of mixed models. Grades would appear to be the most obvious, though perhaps also the least reliable: as the report acknowledges, there are massive issues of ‘comparability’ across institutions, disciplines, and even years of study. Well, yes. Surveys offer perhaps the most interesting approach in pedagogical terms – more on this below – but will they really satisfy the dream of objective, panoptic measurement? And thirdly there are standardized tests: give students a bunch of skills tests, discipline-specific tests, even psychometric tests when they arrive, and another battery when they leave. How hard can it be?

So, yes, I’m sceptical. I certainly appreciate Rich’s argument about the value, to the sector, of metrics that demonstrate how well we’re doing. The REF is thus fundamentally a force for good; the commitment to documenting and measuring the impact of research has also helped to convince politicians and bureaucrats of the value of what we all do. But learning gain looks to me like a quest for the holy grail, conducted by those who don’t trust, or have simply become a bit tired of, the education metrics that have served us well for many years.

 

What about the students?

But I wouldn’t want to trash the entire project of measuring learning gain. While I struggle to see a method that will do what the Green Paper wants it to do, it seems to me that we might more sensibly approach learning gain from a different angle. What would happen if we saw it less as a way for the government to judge whether Exeter students are learning more than Oxford students, and more as a way in which students might measure their own progress and identify targets?

Indeed it seems to me there might be a value in including students in these discussions: not just as consumers, giving us data on their consumption, but as – well – students. Most students, after all, very much want to learn. Most will also, I expect, welcome some guidance about the range of learning that they might reasonably expect from their time at university.

And such conversations may have particular value on the question of the soft skills so valued by employers, yet so commonly overlooked on campus. (See further my last blog-post.) In my own department, certainly, we’re absolutely first-rate at delivering modules, and obviously these modules do aid the development of soft skills. But I think that we could do more with students to reflect on skills acquisition. The ground is thick with the relics of failed schemes: e-PDP (remember that?), skills audits, and so forth.

So maybe it’s time to try again, in the light of the learning gain agenda. The report helpfully refers to some (supposedly) successful models, such as the Durham skills audit, sent to students before they leave home, or a ‘careers registration instrument’ at Leeds (which sounds interesting, as described in the report, but I can’t find evidence of it online). Student buy-in to such schemes will always be a challenge, but this surely doesn’t absolve us of the responsibility to try. In addition, the report suggests a focus on student engagement, either through questions in the NSS or use of a survey such as the US National Survey of Student Engagement or the UK Engagement Survey. Learning analytics may also have a role to play.

————————————–

Learning gain will doubtless feel, in part, like one more stick fashioned for the ritual beating of academics. Same old story: we don’t like teaching; we don’t much like work, for that matter. The country sorely needs, in the language of the Green Paper, ‘additional incentives to drive up teaching quality’.

But there’s an agenda ripe for remodelling, and a discourse ready for reclaiming. There’s not an academic alive who doesn’t want his/her students to gain from the experience of higher education, so there must be some potential in a concept that enables us to think afresh about just how – and how effectively – this happens.

Out goes ‘enthusiasm’, in comes ‘engagement’: the National Student Survey of the future

So farewell then, Key Information Sets: hundreds of thousands of university applicants never even knew you existed.

Having been involved in the farcical attempts to provide regularized and visualizable data on contact hours, forms of assessment, and the like, I find it difficult to mourn the demise of KIS. Yet there are many more interesting stories buried within the HEFCE document that signals the end of KIS: Review of Information about Learning and Teaching, and the Student Experience. Those stories centre on the National Student Survey.

Much of this document is reminiscent of previous consultations. But we’re getting somewhere: the NSS is in for a makeover. And given the extent to which we all gear our practices towards the existing NSS questions, this demands our attention. While it’s important to stress that this is a consultation document, it’s also worth noting that there are years’ worth of momentum behind the proposals. And some of the changes that are eventually made will affect the 2017 survey, to be taken by students already established at university. So even as we keep one eye firmly on 2016, it’s surely worth keeping the other eye on the horizon.

The apotheosis of ‘engagement’

For those of us who have been banging the ‘student engagement’ drum for years, this is big news. I’m excited. I’m even ‘enthusiastic’ – but more on enthusiasm below. There are nine proposed questions, though it’s hard to imagine that more than a handful will make it. The questions are:

  • ‘My course has challenged me to achieve my best work’
  • ‘My course has provided me with opportunities to explore ideas or concepts in depth’
  • ‘My course has provided me with opportunities to bring information and ideas together from different topics’
  • ‘My course has provided me with opportunities to apply what I have learnt’
  • ‘I have had the right opportunities to provide feedback on my course’
  • ‘Staff value students’ views and opinions about the course’
  • ‘It is clear how students’ feedback on the course has been acted on’
  • ‘I have had the right opportunities to work with other students as part of my course’
  • ‘I feel part of a community of staff and students’

There are some axe-magnets there when it comes time to trim the list. I mean ‘right opportunities’: really? But if the first and last questions survive, I could retire (not yet, mind you) happy. The question of ‘community’ is especially interesting. There are reasons why it might not make the cut: it’s a tougher challenge for bigger programmes and cross-disciplinary programmes, and it arguably reinforces the NSS’s bias towards campus universities. But I hope it makes it, because it matters.

What’s disappearing?

Out will go questions perceived to be ‘duplicating’ other questions.

  • Firstly, out go ‘enthusiastic’ teachers, on the assumption that teachers who ‘make the subject interesting’ are good enough. Personally, I find that a bit of a relief; enthusiasm has never been my strongest point.
  • Secondly, out go ‘clear’ assessment criteria, on the assumption that ‘fair assessment’ matters more. That will be welcomed across the sector, because it’s just bloody hard to get students to come to terms with those criteria. That’s not to say, though, that the effort was not noble.
  • Thirdly, out goes feedback that is ‘detailed’ and ‘helps me understand’, in favour of ‘helpful comments on my work’. That makes sense, though it perhaps means that we will ease back on that other noble crusade: insisting that there are valid forms of feedback beyond written comments.

And here’s a headline for me: all the ‘personal development’ questions are listed for removal. Stunningly, after however many years of beating ourselves up trying to work out how to fix our ‘personal development problems’, we’re told that these questions, under ‘cognitive testing’, do ‘not produce valid results’. Students, moreover, are ‘unsure of the intent behind the questions’. So out go:

  • ‘The course has helped me to present myself with confidence’
  • ‘My communication skills have improved’
  • (And everyone’s favourite) ‘As a result of the course, I feel confident in tackling unfamiliar problems’.

What’s in the wording?

And finally come the changes in terminology.

  • In the ‘learning resources’ section, there will still be questions about ‘library resources’ and ‘IT resources’, but ‘good enough for my needs’ will be replaced by ‘have supported my learning well’. There will also be a welcome prompt in the ‘library’ question, to remind students that ‘online resources’ – i.e. all that stuff that looks free because it works a bit like Google, even though it has actually cost the university thousands of pounds – matter as much as ‘books’. Finally, the NSS enters the twenty-first century.
  • And there are some interesting changes to ‘assessment and feedback’ terminology. Firstly, ‘prompt’ feedback becomes ‘timely’ feedback. Well bugger that: just when we have crushed our lives under regimented essay-return deadlines. Jo Johnson wants us emailing our students on the weekends; hell, we’ve been marking essays on Christmas Day! Nonetheless, the logic of ‘timely’ (i.e. in time to make a difference) is irrefutable and eminently sane. Secondly, ‘detailed comments’ becomes ‘helpful comments’. Good sense there as well: sometimes, quantity simply does not equate with quality.

However self-effacing this consultation document may look, it will affect our lives sooner than we think. As head of a department that has done rather well in the NSS over the years, I won’t be waiting for the results of the consultation before taking some action. In fact I feel a ‘community’ working group coming on.