Case studies do not measure research impact, they demonstrate it
Tim Cahill, director of Research Strategies Australia, specialising in higher education research policy, advises against introducing case studies for measuring impact in Australia.
He says: “The value of case studies is what we can learn from them. The UK has already produced thousands of case studies that we can use – are we going to learn anything new by producing hundreds or even thousands of our own?”
To quickly expand on this statement: first, case studies do not measure impact, they demonstrate it. In many cases they do this by quantifying the impact, but this is different from measurement. Measurement implies that there is an agreed standard – for example, a metre – that can be used to gauge and compare things on a common scale. For example, measured in metres, the distance from my home to work is shorter than the distance from my home to the moon.
So-called measuring of impact through case studies does not operate in the same way, even where the units of measure might be the same – such as income generated or the size of an audience attending a recital. What case studies of research impact attempt to do is combine a number of self-selected indicators to demonstrate a self-nominated impact. Can we compare them against each other? Yes. Can we derive meaning from them? Yes. Can we rank them relative to each other? Absolutely. But none of this amounts to measuring the impacts of research.
Learning how impact happens
So, why is this an important distinction to make? Because the focus on measuring the impact of research through case studies has obscured their real value, which is to show the different ways that successful research impacts have occurred. What case studies are really good for is demonstrating the different players, conditions and activities that were involved in taking research from an insight to an impact – who was involved, what were they doing and what were the specific conditions that needed to be in place for that to work.
Looked at in this way, case studies are an important tool that teaches us what we can do to maximise the likelihood of repeating past successes. The lessons we learn from case studies allow us to create the conditions that have been proven to deliver impact.
This is why I don’t think we need to undertake a case study-based research impact evaluation in Australia. The UK has already done it, and the case studies have been made freely available online. As far as learning from case studies goes, I see no reason why what we could learn from the roughly 6,500 case studies in the UK would be any different from what we could learn if we produced our own set of hundreds or thousands of Australian case studies.
The cost of not-measuring research impact?
Now for why we shouldn’t use case studies to evaluate research impact in Australia: the effort involved will be very disruptive. Some simple calculations make this clear.
In 2013 there were some 15,602 Research FTE and 27,387 Teaching and Research FTE. The academic working year consists of 48 weeks (or 240 days); the usual Research contract is 80% research, or 192 days a year; the usual Teaching and Research contract is 40% research, or 96 days a year. In this time Australian academics produced 65,557 research outputs, which works out to about 86 days per output (assuming that the research time was not split with HDR supervision, conferences and the like).
As reported elsewhere, each case study in REF 2014 took about 30 days of staff time to create. In other words, each case study costs about 35% of a research output.
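The arithmetic behind these two figures can be sketched as follows (a back-of-envelope check using the 2013 numbers quoted above; small differences from the article's rounded figures are possible):

```python
# 2013 Australian academic workforce figures quoted in the text
research_fte = 15_602            # Research-only FTE (192 research days/year)
teaching_research_fte = 27_387   # Teaching and Research FTE (96 research days/year)
outputs_2013 = 65_557            # research outputs produced in 2013

total_research_days = research_fte * 192 + teaching_research_fte * 96
days_per_output = total_research_days / outputs_2013
print(round(days_per_output))    # ≈ 86 days per research output

case_study_days = 30             # staff time per REF 2014 case study
print(round(case_study_days / days_per_output * 100))  # ≈ 35% of an output
```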
So one way to frame the question is: how many research outputs would a national impact evaluation cost? I will discuss two approaches to gauging this. First, we could use the REF model, which required roughly one case study for every 10 academics submitted. In Australia our research evaluation system, ERA, is comprehensive rather than selective like the UK's, so applying that ratio to the 43,000 or so FTE in ERA would mean around 4,300 case studies. That would be 128,967 days of staff time, or 1,503 research outputs, which is about 2.3% of the national yearly total of research outputs.
Another way to determine the figure would be to take the number of evaluations in the recent ERA round as a guide – in ERA 2015 there were 2,460 units evaluated. If we required one case study for each of these units, that would equate to 73,800 days of staff time, or, in other words, 858 research outputs that would not be produced. That is about 1.3% of Australia's total research output in 2013.
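The two models above can be reproduced in a few lines (note that the exact days-per-output figure is ~85.8, so the result for the second model comes out at ~860 rather than 858 when not rounding to 86 first):

```python
# Days of research time per output, from the 2013 figures above
fte_total = 15_602 + 27_387                      # ≈ 43,000 ERA FTE
days_per_output = (15_602 * 192 + 27_387 * 96) / 65_557   # ≈ 85.8 days

# Model 1: REF-style ratio of one case study per 10 submitted FTE
ref_style_days = fte_total / 10 * 30             # ≈ 128,967 days of staff time
print(round(ref_style_days / days_per_output))   # ≈ 1,503 forgone outputs

# Model 2: one case study per unit evaluated in ERA 2015
era_unit_days = 2_460 * 30                       # 73,800 days of staff time
print(round(era_unit_days / days_per_output))    # ≈ 860 forgone outputs
```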
Neither figure seems like a lot. However, in my experience it is usually the most senior research leaders in an institution who undertake tasks such as preparing university submissions for evaluation. This means that we are not just trading off 1–2% of our research outputs, but potentially our top 1–2% of research outputs.
The second way to look at the equation is how much funding universities would need to receive back to cover their costs. Again drawing on REF, the median cost of a case study was £7,500, or about AU$15,600. If we multiply that by the figures above we get $67M and $38M respectively for the two models. For impact case studies to be a zero-sum game, this is how much universities would need to receive on the back of the outcomes. Consider for a moment that this year ERA will deliver around $77M to universities. Introducing a case study approach would need to more or less double the amount of block funding delivered through research evaluation, which is a significant change in policy with unknown outcomes.
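The funding side of the ledger is a straightforward multiplication (using the article's AU$15,600 conversion of the £7,500 median case-study cost):

```python
cost_per_study_aud = 15_600        # median REF case-study cost, ~£7,500

ref_style_studies = 43_000 / 10    # Model 1: one case study per 10 FTE
era_unit_studies = 2_460           # Model 2: one case study per ERA unit

# Total cost of each model, in millions of AU dollars
print(ref_style_studies * cost_per_study_aud / 1e6)  # ≈ $67M
print(era_unit_studies * cost_per_study_aud / 1e6)   # ≈ $38M
```

Set against the roughly $77M ERA currently delivers, either model would require a substantial increase in the block funding attached to evaluation just to break even.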
What should we do?
I think the most important thing for us to do is undertake a large-scale analysis of the REF case studies and see what we can learn from them: what works, what doesn't, how impact happens, and whether there are patterns and common themes.
This will be far cheaper than running our own case study evaluation and may give us a large part of the value that such an exercise would bring.