Research Strategies Australia


Category: research impact

Case studies of research impact in Australia: why we don’t need to, and why we shouldn’t

Case studies do not measure research impact, they demonstrate it

I was quoted recently in an article in The Conversation on some of the recommendations coming from the recently released Watt Review:

Tim Cahill, director of Research Strategies Australia, specialising in higher education research policy, advises against introducing case studies for measuring impact in Australia.

He says: “The value of case studies is what we can learn from them. The UK has already produced thousands of case studies that we can use – are we going to learn anything new by producing hundreds or even thousands of our own?”

To quickly expand on this statement: first, case studies do not measure impact, they demonstrate it. In many cases they do this by quantifying the impact, but this is different from measurement. Measurement implies that there is an agreed standard – for example, a metre – that can be used to gauge and compare things on a common scale – e.g. as measured in metres the distance from my home to work is shorter than the distance from my home to the moon.

So-called measuring of impact through case studies does not operate in the same way, even where the units of measure might be the same – such as income generated or the size of an audience at a recital. What case studies of research impact attempt to do is combine a number of self-selected indicators to demonstrate a self-nominated impact. Can we compare them against each other? Yes. Can we derive meaning from them? Yes. Can we rank them relative to each other? Absolutely. But none of this is related to measuring the impacts of research.

Learning how impact happens

So, why is this an important distinction to make? Because the focus on measuring the impact of research through case studies has obscured their real value, which is to show the different ways that successful research impacts have occurred. What case studies are really good for is demonstrating the different players, conditions and activities that were involved in taking research from an insight to an impact – who was involved, what were they doing and what were the specific conditions that needed to be in place for that to work.

Looked at in this way, case studies are an important tool that teaches us what we can do to maximise the likelihood of repeating past successes. The lessons we learn from case studies allow us to create the conditions that have been proven to deliver impact.

Which is why I don’t think we need to undertake a case study-based research impact evaluation in Australia. The UK has already done it, and the case studies have been made freely available online. As far as learning from case studies goes, I see no reason why what we could learn from the roughly 6,500 case studies in the UK would be any different from what we could learn if we produced our own set of hundreds or thousands of Australian case studies.

The cost of not measuring research impact?

Now for why we shouldn’t use case studies to evaluate research impact in Australia: the effort involved will be very disruptive. Some simple calculations make this clear.

In 2013 there were some 15,602 Research FTE and 27,387 Teaching and Research FTE. The academic working year consists of 48 weeks (240 days); the usual Research contract is 80% research, or 192 days a year; the usual Teaching and Research contract is 40% research, or 96 days a year. In this time Australian academics produced 65,557 research outputs, which works out to about 86 days per output (assuming that the research time was not split with HDR supervision, conferences etc.).

As reported elsewhere, each case study in the REF 2014 took about 30 days of staff time to create. In other words, each case study costs about 35% of the staff time it takes to produce a research output.

So one way to think about it is to ask how many research outputs a national case study evaluation would cost. I will discuss two approaches to gauging this. First, we could use the REF model, which required 1 case study for every 10 academics submitted. In Australia, our research evaluation system, ERA, is comprehensive, not selective like in the UK, so 1 case study for every 10 of the 43,000 or so FTE in ERA would be around 4,300 case studies. That would be 128,967 days of staff time, or 1,503 research outputs, which is about 2.3% of the national yearly total.

Another way to determine the figure would be to take the number of evaluations in the recent ERA round as a guide – in ERA 2015 there were 2,460 units evaluated. If we required one case study for each of these units, that would equate to 73,800 days of staff time or, in other words, 858 research outputs that would not be produced. That is about 1.3% of the total research output of Australia in 2013.

Neither figure seems like a lot. However, in my experience it is usually the most senior research leaders in an institution who undertake tasks such as preparing university submissions for evaluation. This means that we are not just trading off 1-2% of our research outputs, but potentially our top 1-2% of research outputs.

The second way to look at the equation is in terms of how much funding universities would need to receive back to cover their costs. Again, in the REF, the median cost of a case study was £7,500, or about $15,600 AU. If we multiply that by the figures above we get $67M and $38M respectively for the two models. For impact case studies to be a zero-sum game, this is how much universities would need to receive on the back of the outcomes. Consider for a moment that this year ERA will deliver around $77M to universities. Introducing a case study approach would need to more or less double the amount of block funding that is delivered through research evaluation, which is a significant change in policy with unknown outcomes.
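For anyone who wants to rerun or tweak these figures, the arithmetic above can be reproduced with a short script. All inputs are the 2013 staffing and output numbers and the REF 2014 case study costs quoted in the text; note that because the text rounds the days-per-output figure to 86, the ERA model lands on roughly 860 outputs here rather than 858.

```python
# Back-of-the-envelope check of the case study cost figures above.
# All inputs come from the text; nothing here is new data.

RESEARCH_FTE = 15_602            # Research-only FTE: 80% research = 192 days/yr
TEACHING_RESEARCH_FTE = 27_387   # Teaching & Research FTE: 40% research = 96 days/yr
OUTPUTS = 65_557                 # research outputs produced in 2013
CASE_STUDY_DAYS = 30             # staff days per REF 2014 case study
CASE_STUDY_COST_AUD = 15_600     # ~GBP 7,500 median REF case study cost

research_days = RESEARCH_FTE * 192 + TEACHING_RESEARCH_FTE * 96
days_per_output = research_days / OUTPUTS            # ~85.8, rounded to 86 in the text

# Model 1: REF-style selection, 1 case study per 10 FTE
ref_case_studies = (RESEARCH_FTE + TEACHING_RESEARCH_FTE) / 10
ref_days = ref_case_studies * CASE_STUDY_DAYS        # ~128,967 staff days
ref_outputs_foregone = ref_days / days_per_output    # ~1,503 outputs

# Model 2: 1 case study per ERA 2015 unit of evaluation
ERA_UNITS = 2_460
era_days = ERA_UNITS * CASE_STUDY_DAYS               # 73,800 staff days
era_outputs_foregone = era_days / days_per_output    # ~860 (858 with the rounded 86)

print(f"days per output: {days_per_output:.1f}")
print(f"REF model: {ref_outputs_foregone:.0f} outputs "
      f"({100 * ref_outputs_foregone / OUTPUTS:.1f}%), "
      f"cost ${ref_case_studies * CASE_STUDY_COST_AUD / 1e6:.0f}M")
print(f"ERA model: {era_outputs_foregone:.0f} outputs "
      f"({100 * era_outputs_foregone / OUTPUTS:.1f}%), "
      f"cost ${ERA_UNITS * CASE_STUDY_COST_AUD / 1e6:.0f}M")
```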

What should we do?

I think the most important thing for us to do is undertake a large scale analysis of the REF case studies and see what we can learn from them. What works, what doesn’t work, how does impact happen, are there patterns, common themes etc.?

This will be far cheaper than running our own case study evaluation and may give us a large part of the value that such an exercise would bring. 

Research Engagement and Creative Arts Research

I was very happy to spend the day with the Deans and Directors of Creative Arts (DDCA) a couple of Wednesdays ago for their annual conference and AGM. There have been some interesting submissions coming from this group to major reviews that are currently underway including the ACOLA Review of the Research Training System and the Watt Review of Research Policy and Funding Arrangements (lots of credit to Su Baker and Jenny Wilson).

My panel session was dedicated to strategic questions around the positioning of Creative Arts research in relation to emerging discussions in research evaluation. For me, one of the most pleasing aspects of our discussions on the day was how comfortable people were with the idea of engaging with research end-users. It seems that a strong grounding in creative practice makes a focus on research engagement a natural fit; by their very nature, performance- and exhibition-based research disciplines are audience/end-user-centric.

The issues I foresee for these disciplines in a research-engagement paradigm have less to do with outlining the importance of research engagement and more to do with how these transactions operate within Creative Arts disciplines. Three key issues are outlined below.

 

Performance and exhibition spaces as research infrastructure – since the introduction of ERA there is widespread acceptance that live performances, original creative works, curated works and recorded/rendered works (can) meet the definition of research. It is no great leap to see that the galleries, museums and performance spaces that support these research activities are therefore important research infrastructure. Importantly, funding received to support this infrastructure should be submitted as part of the HERDC return for institutions – my sense is that this is a discussion that still needs to be had in a number of institutions. Here is the relevant description from the HERDC guidelines:

Net receipted income which can be included in the Research Income Return […] grants for specific and specialised equipment used for the conduct of research
In-kind support – in-kind support is a mainstay of Creative Arts research funding, but is not eligible to be submitted under HERDC. There are a few potential approaches to address this:
  • The first is to lobby the Department of Education and Training for in-kind support to be included in HERDC. I do not know the reasons why it is currently excluded, but I do know that comprehensive records of in-kind support are not kept widely by universities. Further, my sense is that beyond a line in an ARC Linkage Grant, the recording of in-kind support is not applied uniformly.
  • A more complex approach would be to work closely with funding partners to see if ‘in-kind’ is the appropriate classification for this support, or if there are more appropriate ways to record it (e.g. ‘donations’, which are eligible under HERDC Category 3 income). I admit to knowing very little about this, except that it is likely to involve taxation laws and employee arrangements (on both sides of the support) in addition to HERDC rules. Anyway, it is worth asking the question.
  • The most practical, but perhaps least satisfying, approach is to accept that existing data (including ARC Linkage grants and Category 2-3 income) will correlate very closely with levels of in-kind support, i.e. it would be uncommon to have significant amounts of in-kind support in the absence of financial support (I have no evidence to support this statement, but it can be easily tested by universities). As long as any use of these data is sensitive to different practices between disciplines, then there should be no problem with using financial indicators as a proxy for in-kind support, i.e. comparing Category 2-3 Creative Arts research income against Medical research is not fair, but comparing Creative Arts Category 2-3 research income between universities is fine.

Consulting, contracting and commercialisation – many Creative Arts researchers in academia maintain active professional careers in practice. At present, much of this activity is conducted by individual academics under personal ABN/ACN arrangements, and is therefore not eligible for reporting under HERDC, where income has to have been transacted through the university. In some cases this is unavoidable – e.g. where funding bodies only support individuals or corporations (and not universities) – but in other cases there is no technical reason why it should be so. There are possibly very good financial reasons why an academic would choose to receive this income outside of the institution, including that universities usually take a cut of it to recover costs. I personally contend that if the work is done on the university’s time and/or with its resources (computers, offices, studios etc.) then this income should be transacted through the university, and not through a private company or other arrangement. But that is me, and there is plenty of room for compromise on such issues within universities. There are likely also some discussions to be had about IP, but again universities can be nothing if not flexible on such things.

 

Addressing these three key issues alone will, in my view, hugely benefit Creative Arts research in Australian universities. As far as I can tell, researchers in this field have some minor misalignments to address, but overall a focus on research engagement suits the kind of work they have always done.

 

Putting the ‘public’ back in publicly funded research – UoW postgraduate careers day keynote


I was very happy to deliver the keynote address today at the University of Wollongong Graduate Researcher Careers Conference. My presentation is attached at the bottom of this post for those interested in looking at it.

The take-home messages from my address are summed up below:

  • Remember the ‘public’ in publicly funded research
  • Think about the public who pays for your research and what you give them in return
  • Remember that what you do as a researcher always (sometimes profoundly) changes the world we live in
  • Think about research beyond the walls of the university and beyond the covers of a journal

 

UoW postgrad (ppt)

The ‘moral evaluation gap’

Here is a link to Sarah de Rijcke’s recent keynote at the European Sociological Association conference on

how indicators influence knowledge production in the life sciences and social sciences, and how in- and exclusion mechanisms get built into the scientific system through certain uses of evaluative metrics. 

De Rijcke argues that her findings demonstrate 

that we need an alternative moral discourse in research assessment, centered around the need to address growing inequalities in the science system. 

In de Rijcke’s words, there is an ‘evaluation gap’, a

Discrepancy between evaluation criteria and the social, cultural and economic functions of science

What she means, in short, is that our focus in research evaluation on measurements of quality has resulted in a system focused on meeting performance targets at the expense of creating a socially responsible research system.

While I agree with this, it assumes that we agree on what the social, cultural and economic functions of science and research are – it is good to talk about economic, social and cultural benefits, but we should first be able to answer the question: benefits to whom? The social, cultural and economic functions of science are not given, universal or eternal – one need only look to the case of Vannevar Bush and the Office of Scientific Research and Development, or the tragic case of Lysenkoism, to understand this. And while these examples are extreme, they make the point – the social, economic and cultural are situated in the historical and political, and by extension so is science functioning in the service of the social, economic and cultural.

This is not to say that there is a pure realm of research that exists beyond historical and political contingencies. On the contrary, it is to say that science and research are always the product of their time and place.

Those of us working in research evaluation, when we talk about measurement, have to depart from the understanding that what we are trying to measure, in the first instance, are political and historical interests, and that in simply measuring these, as de Rijcke demonstrates, we are going to push science and research in the direction of those interests. We spend a lot of time talking about the social and economic impacts of science, but far less time talking about how the social and economic impact on science.

Two quick ideas to increase research impact

I was recently invited to participate in the Scholarly Communication Symposium at Griffith University discussing ‘increasing research impact in sciences’. I couldn’t attend in person, but provided the following input to the two questions posed below.

Q: What is one problem that you think needs to be addressed in order to maximise the impact of research in Australia?

A: Lack of support for long term funding of applied research in Australian universities

In Australia, increases in Higher Education Expenditure on R&D (HERD) have been accompanied by increases in applied research. In the period from 1992 to 2012, the shape of the Australian higher education research effort has significantly changed, from a sector characterised by basic research, to one characterised by applied research effort (Figure 1).

In the early 1990s, basic research accounted for 60 per cent (pure basic and strategic basic) of research activity, with applied research comprising only around 30 per cent. Until 2010 the focus of Australia’s universities remained basic research. However, in 2010, the balance shifted, with applied research reaching 47 per cent, overtaking basic research at 45 per cent for the first time.

Figure 1 Australian HERD expenditure by activity 1992-2010


Since the 1990s, project-based funding has become a standard form of support for research activity in Australia. This has in part come at the expense of long term support for research activities. At present, 60 per cent of Australian Government support for HERD is delivered through project funding, with the remainder delivered through institution-based funding (Figure 2).

Figure 2 Government funded HERD, 2010


Institutional funding allows for long-term planning of research agendas. The major source of long term funding stability for university research is provided through the Research Block Grants (RBGs). At present, publications and Nationally Competitive Grants (HERDC Category 1) remain the focus of the RBGs, which only include a limited focus on research engagement through the Joint Research Engagement (JRE) funding pool.

The calculation of the JRE includes the following inputs:

  • Research income is weighted at 60 per cent and includes HERDC Category 2 (Other Public Sector Income), Category 3 (Industry and Other Income) and Category 4 (CRC Income) amounts;
  • Student load is weighted at 30 per cent; and
  • Research publications are weighted at 10 per cent and include the HERDC publication categories of Books, Book Chapters, Journal Articles and Conference Papers.

In 2014, the JRE allocation was $342.6m, or 20 per cent of the total RBGs for 2014. In other words, Category 2-4 income accounted for 60 per cent of 20 per cent of the funding for university research.
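As a simple illustration of how those weights combine, the sketch below computes a hypothetical JRE-style allocation. The 60/30/10 weights and the $342.6m pool come from the text; the per-university input shares are invented purely for illustration and do not reflect any real institution.

```python
# Sketch of a JRE-style weighted allocation. The weights (60/30/10) and
# 2014 pool size come from the text; the example shares are hypothetical.

JRE_POOL = 342_600_000  # 2014 JRE allocation ($342.6m)

WEIGHTS = {
    "research_income": 0.60,  # HERDC Category 2-4 income
    "student_load": 0.30,     # research student load
    "publications": 0.10,     # HERDC publications
}

def jre_allocation(shares: dict, pool: float = JRE_POOL) -> float:
    """Weight a university's share of each national input, then scale by the pool."""
    weighted_share = sum(WEIGHTS[k] * shares[k] for k in WEIGHTS)
    return pool * weighted_share

# Hypothetical university holding 5% of national Category 2-4 income,
# 3% of research student load and 4% of publications:
example = {"research_income": 0.05, "student_load": 0.03, "publications": 0.04}
print(f"${jre_allocation(example):,.0f}")  # 4.3% of the pool
```

The point the weighting makes concrete: because publications carry only a 10 per cent weight in this pool, a university's publication performance moves its JRE allocation far less than its engagement income does.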

The relative size of this reward is in stark contrast to the relative importance of Category 2-4 income to the sector. For the years 2008-2010, these categories represented close to 60 per cent of total HERDC income. There is, in other words, a discrepancy between universities’ focus on engagement activities and the RBG incentives that support long term strategic research planning, which are still reliant upon research publications and Category 1 funding outcomes. This gap will widen as universities focus more and more on applied research without a commensurate reward mechanism.

 

Q: What is one possible solution/opportunity to maximise research impact?

A: Make R&D tax incentives available for research in humanities and social sciences

Australia has amongst the lowest levels of direct Government funding for business R&D across comparator countries, at 1.8 per cent (Figure 3).

Figure 3 Direct Government funding of business R&D, 2011


At the same time, Australia provides some of the highest proportions of support for business R&D through tax incentives (Figure 4). In fact, on the available data Australia has the second lowest level of direct funding to business R&D, with only Mexico lower.

Figure 4 Direct Government funding of business R&D vs tax incentives, 2011


As it stands, tax incentives are not available for research conducted in the humanities and social sciences, which are excluded from the scheme. This disincentive to collaboration with the private sector is of particular note given the large proportion of Australia’s research effort that is conducted in these disciplines. Humanities and social sciences currently receive 16 per cent of Australia’s Category 3 (Industry and Other) income – that is, 16 per cent of what the private sector already invests in research engagement with universities is not considered through the Government’s primary support mechanism.

In addition, excluding humanities and social science researchers from the R&D Tax incentive prevents 43 per cent of the Australian higher education research workforce – who produce around 30 per cent of university research outputs – from participating (Figure 5).

Figure 5 HASS vs non-HASS


Allowing humanities and social science research to be eligible for R&D tax incentives would likely increase the impact of Australia’s research significantly by unlocking new opportunities for collaboration between the private sector and universities.