Learning gain: or, the fantasy of measuring just about everything that happens in higher education

We will all be hearing more about ‘learning gain’ over the next year or two. Indeed we’ve been promised, in a report published late last year, ‘a series of symposia or conferences to inform the sector and broader public’ of its ‘relevance’. There might even be T-shirts and dancers and those cupcakes with glittery icing we get on graduation days. (Ok: I made up that last bit.)

Learning gain seems to me the ultimate dream of our overseers, whether you identify the latter as government ministers or consumers. The Green Paper, notably, listed it as an ‘aspect of teaching excellence’. The goal is to quantify precisely how much any student, or cohort of students, has gained from time at university. And hence, of course, to answer the question: was it all worth it? Like so many utopian schemes, it will, I’m betting, founder on the shores of reality. But I also wonder whether something, at least, might be salvaged from the wreckage, in the interests of students themselves.

 

The mother of all metrics

The motivation behind the goal of measuring learning gain is neatly expressed by Johnny Rich:

 

In theory, universities could be admitting highly capable, independent learners and merely providing them with an amenable atmosphere for a few years. On graduation, the university gives the student a stamp of approval [and a cupcake – ed.] and takes credit for any personal growth or development they may have experienced. In reality the student may either have taught themselves or simply acquired three years of life experience. This may not be happening, but where is the contrary evidence?

 

Rich is broadly sympathetic to higher education: his point is simply that, ‘in an austere world driven by econometrics, what is hard to measure is hard to fund’. And we’re all familiar enough with the concept of ‘value-added’ education in schools; the thinking goes that something similar should be possible in higher education.

There are three main approaches to the measurement of learning gain, as well as any number of mixed models. Grades would appear to be the most obvious, though perhaps also the least reliable: as the report acknowledges, there are massive issues of ‘comparability’ across institutions, disciplines, and even years of study. Well, yes. Surveys offer perhaps the most interesting approach in pedagogical terms – more on this below – but will they really satisfy the dream of objective, panoptic measurement? And thirdly there are standardised tests: give students a battery of skills tests, discipline-specific tests, even psychometric tests when they arrive, and the same again when they leave. How hard can it be?
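To see why it’s harder than it looks, here is a minimal sketch of that test-on-arrival, test-on-exit model – a toy, not anyone’s actual instrument – assuming scores on a common 0–100 scale and using the ‘normalised gain’ formula (raw gain as a fraction of the headroom available) familiar from the education-research literature:

    # Toy model of 'test them when they arrive, test them again when they leave'.
    # Assumes a common 0-100 scale; 'normalised gain' expresses the raw gain
    # as a fraction of the headroom the student had left to gain.
    def normalised_gain(pre: float, post: float) -> float:
        if pre >= 100:
            raise ValueError("no headroom: entry score already at the ceiling")
        return (post - pre) / (100 - pre)

    # Two students with an identical raw gain of ten marks:
    print(normalised_gain(40, 50))  # ~0.17: modest, with headroom to spare
    print(normalised_gain(85, 95))  # ~0.67: the same ten marks, near the ceiling

Even the toy version shows where the trouble starts: the same ten raw marks mean quite different things depending on the starting point, so the choice of formula, rather than the teaching, can end up driving the league table. And that is before we try to put a History cohort and an Engineering cohort on the same scale.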

So, yes, I’m sceptical. I certainly appreciate Rich’s argument about the value, to the sector, of metrics that demonstrate how well we’re doing. The REF is thus fundamentally a force for good; the commitment to documenting and measuring the impact of research has also helped to convince politicians and bureaucrats of the value of what we all do. But learning gain looks to me like a quest for the holy grail, conducted by those who don’t trust, or have simply become a bit tired of, the education metrics that have served us well for many years.

 

What about the students?

But I wouldn’t want to trash the entire project of measuring learning gain. While I struggle to see a method that will do what the Green Paper wants it to do, it seems to me that we might more sensibly approach learning gain from a different angle. What would happen if we saw it less as a way for the government to judge whether Exeter students are learning more than Oxford students, and more as a way in which students might measure their own progress and identify targets?

Indeed it seems to me there might be value in including students in these discussions: not just as consumers, giving us data on their consumption, but as – well – students. Most students, after all, very much want to learn. Most will also, I suspect, welcome some guidance about the range of learning that they might reasonably expect from their time at university.

And such conversations may have particular value on the question of the soft skills so valued by employers, yet so commonly overlooked on campus. (See further my last blog-post.) In my own department, certainly, we’re absolutely first-rate at delivering modules, and obviously these modules do aid the development of soft skills. But I think that we could do more to help students reflect on their skills acquisition. The ground is thick with the relics of failed schemes: e-PDP (remember that?), skills audits, and so forth.

So maybe it’s time to try again, in the light of the learning gain agenda. The report helpfully refers to some (supposedly) successful models, such as the Durham skills audit, sent to students before they leave home, or a ‘careers registration instrument’ at Leeds (which sounds interesting, as described in the report, but I can’t find evidence of it online). Student buy-in to such schemes will always be a challenge, but this surely doesn’t absolve us of the responsibility to try. In addition, the report suggests a focus on student engagement, either through questions in the NSS or through a survey such as the US National Survey of Student Engagement or the UK Engagement Survey. Learning analytics may also have a role to play.

————————————–

Learning gain will doubtless feel, in part, like one more stick fashioned for the ritual beating of academics. Same old story: we don’t like teaching; we don’t much like work, for that matter. The country sorely needs, in the language of the Green Paper, ‘additional incentives to drive up teaching quality’.

But there’s an agenda ripe for remodelling, and a discourse ready for reclaiming. There’s not an academic alive who doesn’t want his/her students to gain from the experience of higher education, so there must be some potential in a concept that enables us to think afresh about just how – and how effectively – this happens.
