Research Strategies Australia


Case studies of research impact in Australia: why we don’t need to, and why we shouldn’t

Case studies do not measure research impact, they demonstrate it

I was quoted in a recent article in The Conversation on some of the recommendations coming from the recently released Watt Review:

Tim Cahill, director of Research Strategies Australia, specialising in higher education research policy, advises against introducing case studies for measuring impact in Australia.

He says: “The value of case studies is what we can learn from them. The UK has already produced thousands of case studies that we can use – are we going to learn anything new by producing hundreds or even thousands of our own?”

To quickly expand on this statement: first, case studies do not measure impact, they demonstrate it. In many cases they do this by quantifying the impact, but this is different from measurement. Measurement implies that there is an agreed standard – for example, a metre – that can be used to gauge and compare things on a common scale – e.g. as measured in metres the distance from my home to work is shorter than the distance from my home to the moon.

So-called measuring of impact through case studies does not operate in the same way, even where the units of measure might be the same – such as income generated or the size of the audience at a recital. What case studies of research impact attempt to do is combine a number of self-selected indicators to demonstrate a self-nominated impact. Can we compare them against each other? Yes. Can we derive meaning from them? Yes. Can we rank them relative to each other? Absolutely. But none of this amounts to measuring the impacts of research.

Learning how impact happens

So, why is this an important distinction to make? Because the focus on measuring the impact of research through case studies has obscured their real value, which is to show the different ways that successful research impacts have occurred. What case studies are really good for is demonstrating the different players, conditions and activities that were involved in taking research from an insight to an impact – who was involved, what were they doing and what were the specific conditions that needed to be in place for that to work.

Looked at in this way, case studies are an important tool that teaches us what we can do to maximise the likelihood of repeating past successes. The lessons we learn from case studies allow us to create the conditions that have been proven to deliver impact.

Which is why I don’t think we need to undertake a case study-based research impact evaluation in Australia. The UK has already done it, and the case studies have been made freely available online. As far as learning from case studies goes, I see no reason why what we could learn from the roughly 6,500 UK case studies would be any different from what we could learn if we produced our own set of hundreds or thousands of Australian case studies.

The cost of not measuring research impact?

Now for why we shouldn’t use case studies to evaluate research impact in Australia: the effort involved would be very disruptive. Some simple calculations make this clear.

In 2013 there were some 15,602 Research FTE and 27,387 Teaching and Research FTE. The academic working year consists of 48 weeks (or 240 days); the usual Research contract is 80% research, or 192 days a year; the usual Teaching and Research contract is 40% research, or 96 days a year. In that time Australian academics produced 65,557 research outputs, which works out to about 86 days of research time per output (assuming that the research time was not also split across HDR supervision, conferences etc.).

As reported elsewhere, each case study in REF 2014 took about 30 days of staff time to create. In other words, each case study costs about 35% of a research output.
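For anyone who wants to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The FTE, contract and output figures are simply the ones quoted above, hard-coded as assumptions rather than drawn from any live data source.

```python
# Back-of-the-envelope check of the figures above (2013 numbers as quoted in the post).
research_fte = 15_602                   # Research-only FTE
teaching_research_fte = 27_387          # Teaching and Research FTE

research_days = research_fte * 192                     # 80% of a 240-day working year
teaching_research_days = teaching_research_fte * 96    # 40% of a 240-day working year
total_research_days = research_days + teaching_research_days

outputs_2013 = 65_557                   # research outputs produced in 2013
case_study_days = 30                    # approximate staff time per REF 2014 case study

days_per_output = total_research_days / outputs_2013
print(f"Total research days: {total_research_days:,}")              # ~5.6 million
print(f"Research days per output: {days_per_output:.0f}")           # ~86
print(f"Case study as a share of one output: {case_study_days / days_per_output:.0%}")  # ~35%
```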

One way to think about it is to ask how many research outputs a national case study exercise would cost. I will discuss two approaches to gauging this. First, we could use the REF model, which is 1 case study for every 10 academics submitted. In Australia our research evaluation system, ERA, is comprehensive rather than selective like the UK’s, so 1 case study for every 10 of the 43,000 or so FTE in ERA would be around 4,300 case studies. That would be 128,967 days of staff time, or about 1,503 research outputs, which is about 2.3% of the total research outputs produced nationally in a year.

Another way to determine the figure would be to take the number of evaluations in the recent ERA round as a guide – in ERA 2015 there were 2,460 units evaluated. If we required one case study for each of these units, that would equate to 73,800 days of staff time, or, in other words, 858 research outputs that would not be produced. That is about 1.3% of Australia’s total research output in 2013.

Neither figure seems like a lot. However, in my experience it is usually the most senior research leaders in an institution who undertake tasks like preparing university submissions for evaluation. This means that we would not just be trading off 1-2% of our research outputs, but potentially our top 1-2% of research outputs.

The second way to look at the equation is how much funding universities would need to receive back to cover the cost. Again drawing on REF, the median cost of a case study was £7,500, or about $15,600 AUD. If we multiply that by the figures above we get $67M and $38M respectively for the two models. For impact case studies to be cost-neutral, this is how much universities would need to receive on the back of the outcomes. Consider for a moment that this year ERA will deliver around $77M to universities. Introducing a case study approach would need to more or less double the amount of block funding that is delivered through research evaluation, which is a significant change in policy with unknown outcomes.
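Extending the same sketch to the two models discussed above gives the staff-time, forgone-output and dollar figures quoted; small differences from the rounded numbers in the text are just rounding.

```python
# Scaling the case study cost to the two models discussed above (all inputs as quoted).
days_per_output = (15_602 * 192 + 27_387 * 96) / 65_557   # ~86 days, as before
case_study_days = 30
cost_per_case_study_aud = 15_600                           # median REF cost, ~GBP 7,500
national_outputs = 65_557

models = {
    "REF-style (1 per 10 of ~43,000 FTE)": (15_602 + 27_387) / 10,  # ~4,300 case studies
    "ERA 2015 (1 per unit of evaluation)": 2_460,
}

for name, n_case_studies in models.items():
    staff_days = n_case_studies * case_study_days
    forgone_outputs = staff_days / days_per_output
    cost_millions = n_case_studies * cost_per_case_study_aud / 1e6
    print(f"{name}: {staff_days:,.0f} staff days, "
          f"~{forgone_outputs:,.0f} outputs ({forgone_outputs / national_outputs:.1%}), "
          f"~${cost_millions:.0f}M")
```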

What should we do?

I think the most important thing for us to do is undertake a large-scale analysis of the REF case studies and see what we can learn from them. What works, what doesn’t, how does impact happen, are there patterns or common themes?

This will be far cheaper than running our own case study evaluation and may give us a large part of the value that such an exercise would bring. 

Research Engagement and Creative Arts Research

I was very happy to spend the day with the Deans and Directors of Creative Arts (DDCA) a couple of Wednesdays ago for their annual conference and AGM. There have been some interesting submissions coming from this group to major reviews that are currently underway including the ACOLA Review of the Research Training System and the Watt Review of Research Policy and Funding Arrangements (lots of credit to Su Baker and Jenny Wilson).

My panel session was dedicated to strategic questions around the positioning of Creative Arts research in relation to emerging discussions in research evaluation. For me, one of the most pleasing aspects of our discussions on the day was how comfortable people were with the idea of engaging with research end-users. It seems that a strong grounding in creative practice makes a focus on research engagement a natural fit; by their very nature, performance- and exhibition-based research disciplines are audience/end-user-centric.

The issues I foresee for these disciplines in a research-engagement paradigm have less to do with outlining the importance of research engagement and more to do with how these transactions operate within Creative Arts disciplines. Three key issues are outlined below.

 

Performance and exhibition spaces as research infrastructure – since the introduction of ERA there has been widespread acceptance that live performances, original creative works, curated works and recorded/rendered works (can) meet the definition of research. It is no great leap to say that the galleries, museums and performance spaces that support these research activities are therefore important research infrastructure. Importantly, funding received to support this infrastructure should be submitted as part of the HERDC return for institutions – my sense is that this is a discussion that still needs to be had in a number of institutions. Here is the relevant description from the HERDC guidelines:

Net receipted income which can be included in the Research Income Return […] grants for specific and specialised equipment used for the conduct of research
In-kind support – in-kind support is a mainstay of Creative Arts research funding, but is not eligible to be submitted under HERDC. There are a few potential approaches to address this:
  • The first is to lobby the Department of Education and Training for in-kind support to be included in HERDC. I do not know why it is currently excluded, but I do know that comprehensive records of in-kind support are not widely kept by universities. Further, my sense is that beyond a line in an ARC Linkage Grant, the recording of in-kind support is not applied uniformly.
  • A more complex approach would be to work closely with funding partners to see whether ‘in-kind’ is the appropriate classification for this support, or whether there are more appropriate ways to record it (e.g. ‘donations’, which are eligible under HERDC Category 3 income). I admit to knowing very little about this, except that it is likely to involve taxation law and employment arrangements (on both sides of the support) in addition to HERDC rules. Either way, it is worth asking the question.
  • The most practical, but perhaps least satisfying, approach is to accept that existing data (including ARC Linkage grants and Category 2-3 income) will correlate very closely with levels of in-kind support, i.e. it would be uncommon to have significant amounts of in-kind support in the absence of financial support (I have no evidence for this statement, but it could easily be tested by universities; a minimal version of such a test is sketched after this list). As long as any use of these data is sensitive to different practices between disciplines, there should be no problem with using financial indicators as a proxy for in-kind support, i.e. comparing Category 2-3 income for Creative Arts research against Medical research is not fair, but comparing Creative Arts Category 2-3 research income between universities is fine.

Consulting, contracting and commercialisation – many Creative Arts researchers in academia maintain active professional careers in practice. At present, much of this activity is conducted by individual academics under personal ABN/ACN arrangements, and is therefore not eligible for reporting under HERDC, where income has to have been transacted through the university. In some cases this is unavoidable – e.g. where funding bodies only support individuals or corporations (and not universities) – but in other cases there is no technical reason why it has to be so. There are possibly very good financial reasons why an academic would choose to receive this income outside the institution, including that universities usually take a cut of such income to recover costs. I personally contend that if the work is done on the university’s time and/or with its resources (computers, offices, studios etc.) then this income should be transacted through the university, and not through a private company or other arrangement. But that is me, and there is plenty of room for compromise on such issues within universities. There are likely also some discussions to be had about IP, but again, universities are nothing if not flexible on such things.
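To illustrate the kind of quick test mentioned under the in-kind support point above, here is a minimal sketch. The file name and column names are hypothetical placeholders; a university would substitute its own research income and in-kind records.

```python
# Hypothetical check of whether Category 2-3 income tracks recorded in-kind support.
# File and column names are illustrative only, not a real dataset.
import pandas as pd

df = pd.read_csv("creative_arts_income_by_university.csv")
# Assumed columns: 'university', 'cat2_3_income', 'in_kind_support_value'

correlation = df["cat2_3_income"].corr(df["in_kind_support_value"])  # Pearson by default
print(f"Correlation between Cat 2-3 income and recorded in-kind support: {correlation:.2f}")

# A strong positive correlation would support using financial income as a rough proxy for
# in-kind support within a discipline; a weak one would argue against doing so.
```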

 

Addressing these three key issues alone will, in my view, hugely benefit Creative Arts research in Australian universities. As far as I can tell, researchers in this field face only some minor misalignment; overall, a focus on research engagement suits the kind of work they have always done.

 

Imagining ‘the system’ – on SIGMETRICS, bibliometrics and academic standards

PROP. XXVI. The human mind does not perceive any external body as actually existing, except through the ideas of the modifications of its own body.

Proof.—If the human body is in no way affected by a given external body, then (II. vii.) neither is the idea of the human body, in other words, the human mind, affected in any way by the idea of the existence of the said external body, nor does it in any manner perceive its existence. But, in so far as the human body is affected in any way by a given external body, thus far (II. xvi. and Coroll.) it perceives that external body. Q.E.D.

Corollary.—In so far as the human mind imagines an external body, it has not an adequate knowledge thereof.

Proof.—When the human mind regards external bodies through the ideas of the modifications of its own body, we say that it imagines (see II. xvii. note); now the mind can only imagine external bodies as actually existing. Therefore (by II. xxv.), in so far as the mind imagines external bodies, it has not an adequate knowledge of them. Q.E.D.

I have mentioned elsewhere my surprise at colleagues who continue to imagine ‘the [so-called] system’ as a top-down authority that shapes and coerces academic work. I continue to believe that this is a misleading depiction, and that ‘the [so-called] system’ is nothing more than the sum total of bureaucratic, political and academic practices – including those of individual academics. Case in point: the obsession with performance measurement and bibliometric analyses performed by non-experts.

Recently, Diana Hicks et al. illustrated the point: between 1984 and 2014, mentions of the much-maligned ‘journal impact factor’ (the average number of citations received in a year by the papers a journal published in the previous two years) increased dramatically in journal articles and editorials. The obsession with this form of debasing performance measurement, which wants to reduce academic work to a single number, hasn’t been driven by contributions from academics and professionals working in bibliometrics/scientometrics/infometrics and research evaluation, but has taken place in the pages of multi-disciplinary, medical and life sciences journals.
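For readers who have never looked behind the number, here is a minimal sketch of how the standard two-year impact factor is calculated; the journal counts below are invented for illustration.

```python
# Minimal sketch of the standard two-year journal impact factor (figures are invented).
# JIF for year Y = citations received in Y to items published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.

citations_2015_to_2013_2014_items = 1_200   # hypothetical citation count
citable_items_2013 = 210                    # hypothetical
citable_items_2014 = 190                    # hypothetical

jif_2015 = citations_2015_to_2013_2014_items / (citable_items_2013 + citable_items_2014)
print(f"2015 journal impact factor: {jif_2015:.2f}")  # one number standing in for a whole journal
```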

I was reminded of this recently while reading through the latest post from the SIGMETRICS mailing list – for those who don’t know, the list is

intended for the exchange of technical information among members of the performance evaluation community. Typical submissions include performance-related questions and announcements of research papers, software, job opportunities, conferences, and calls for papers.

One of the regular features of the list is a contribution from Eugene Garfield including bibliographic details of recent papers mentioning bibliometrics, scientometrics etc. Today, as I read through it, it struck me that most of the articles listed were a) published in journals outside of the field of bibliometrics/scientometrics/infometrics, b) published by academics with listed affiliations outside of bibliometrics/scientometrics/infometrics disciplines, and c) contained little or no engagement with the academic field of bibliometrics/scientometrics/infometrics. Below are a couple of examples (note – scroll through to the bottom if you want to skip to the rest of this post) – Example 1:

 

Title:

Scientific impact of studies published in temporarily available radiation oncology journals: a citation analysis

Authors:

Nieder, C; Geinitz, H; Andratschke, NH; Grosu, AL

Addresses:

[Nieder, Carsten] Nordland Hosp, Dept Oncol & Palliat Med, N-8092 Bodo, Norway.

[Nieder, Carsten] Univ Tromso, Fac Hlth Sci, Inst Clin Med, N-9038 Tromso, Norway.

[Geinitz, Hans] Johannes Kepler Univ Linz, Krankenhaus Barmherzigen Schwestern, Dept Radiat Oncol, A-4010 Linz, Austria.

[Geinitz, Hans] Johannes Kepler Univ Linz, Fac Med, A-4010 Linz, Austria.

[Andratschke, Nicolaus H.] Univ Zurich Hosp, Dept Radiat Oncol, CH-8091 Zurich, Switzerland.

[Grosu, Anca L.] Univ Hosp Freiburg, Dept Radiat Oncol, D-79106 Freiburg, Germany.

Source:

SPRINGERPLUS, 4 10.1186/s40064-015-0885-y FEB 24 2015

Abstract:

The purpose of this study was to review all articles published in two temporarily available radiation oncology journals (Radiation Oncology Investigations, Journal of Radiosurgery) in order to evaluate their scientific impact. From several potential measures of impact and relevance of research, we selected article citation rate because landmark or practice-changing research is likely to be cited frequently. The citation database Scopus was used to analyse number of citations. During the time period 1996-1999 the journal Radiation Oncology Investigations published 205 articles, which achieved a median number of 6 citations (range 0-116). However, the most frequently cited article in the first 4 volumes achieved only 23 citations. The Journal of Radiosurgery published only 31 articles, all in the year 1999, which achieved a median number of 1 citation (range 0-11). No prospective randomized studies or phase I-II collaborative group trials were published in these journals. Apparently, the Journal of Radiosurgery acquired relatively few manuscripts that were interesting and important enough to impact clinical practice. Radiation Oncology Investigations’ citation pattern was better and closer related to that reported in several previous studies focusing on the field of radiation oncology. The vast majority of articles published in temporarily available radiation oncology journals had limited clinical impact and achieved few citations. Highly influential research was unlikely to be submitted during the initial phase of establishing new radiation oncology journals.

Cited References:

Holliday Emma, 2013, INTERNATIONAL JOURNAL OF RADIATION ONCOLOGY BIOLOGY PHYSICS, V85, P23

Wazer DE, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P111

Solberg TD, 1999, J Radiosurg, V2, P57

Joschko M A, 1997, Radiation oncology investigations, V5, P62

Maire JP, 1999, J Radiosurg, V2, P7

Chaney A W, 1998, Radiation oncology investigations, V6, P264

Epperly MW, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P331

Sanghavi S, 1999, J Radiosurg, V2, P119

Monga U, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P178

Norman A, 1997, Radiation oncology investigations, V5, P8

Kramer B A, 1998, Radiation oncology investigations, V6, P18

Kang S, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P309

Merrick G S, 1998, Radiation oncology investigations, V6, P182

Leborgne F, 1997, Radiation oncology investigations, V5, P289

Seymour C B, 1997, Radiation oncology investigations, V5, P106

Teicher BA, 1996, Radiat Oncol Invest, V4, P221

Garell PC, 1999, J Radiosurg, V2, P1

Nathu R M, 1998, Radiation oncology investigations, V6, P233

Nieder C., 2012, STRAHLENTHERAPIE UND ONKOLOGIE, V188, P865

Durieux Valerie, 2010, RADIOLOGY, V255, P342

Gibon D, 1999, J Radiosurg, V2, P167

Banasiak D, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P77

Smith BD, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P125

Stickle RL, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P204

Schmidt-Ullrich RK, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P321

Nieder C, 2013, J Cancer Sci Ther, V5, P115

Durand R E, 1997, Radiation oncology investigations, V5, P213

Haffty B G, 1997, Radiation oncology investigations, V5, P235

Fernandez-Vicioso E, 1997, Radiation oncology investigations, V5, P31

Peschel RE, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P278

Chidel MA, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P313

Prete J J, 1998, Radiation oncology investigations, V6, P90

Kondziolka Douglas, 2011, STEREOTACTIC AND FUNCTIONAL NEUROSURGERY, V89, P56

Kanaan Ziad, 2011, ANNALS OF SURGERY, V253, P619

Shao Hongfang, 2013, ONCOLOGY REPORTS, V29, P1441

Stringer Michael J., 2010, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, V61, P1377

Gieger M, 1997, Radiation oncology investigations, V5, P72

Nieder Carsten, 2013, SpringerPlus, V2, P261

Kulkarni Abhaya V., 2009, JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION, V302, P1092

Holliday Emma B., 2014, INTERNATIONAL JOURNAL OF RADIATION ONCOLOGY BIOLOGY PHYSICS, V88, P18

Johnson C R, 1998, Radiation oncology investigations, V6, P52

Desai J, 1998, Radiation oncology investigations, V6, P135

Roach M 3rd, 1997, Radiation oncology investigations, V5, P187

Sheridan MT, 1997, Radiat Oncol Invest, V5, P186

 

Of the 44 cited references listed, one is in a bibliometrics/scientometrics journal. Example 2:

 

Title:

Highest Impact Articles in Microsurgery: A Citation Analysis

Authors:

Kim, K; Ibrahim, AMS; Koolen, PGL; Markarian, MK; Lee, BT; Lin, SJ

Addresses:

[Kim, Kuylhee; Ibrahim, Ahmed M. S.; Koolen, Pieter G. L.; Markarian, Mark K.; Lee, Bernard T.; Lin, Samuel J.] Harvard Univ, Div Plast Surg, Beth Israel Deaconess Med Ctr, Sch Med, Boston, MA 02115 USA.

Source:

JOURNAL OF RECONSTRUCTIVE MICROSURGERY, 31 (7):527-540; 10.1055/s-0035-1546292 SEP 2015

Abstract:

Background Microsurgery has developed significantly since the inception of the first surgical microscope. There have been few attempts to describe “classic” microsurgery articles. In this study citation analysis was done to identify the most highly cited clinical and basic science articles published in five peer-reviewed plastic surgery journals. Methods Thomson/Reuters web of knowledge was used to identify the most highly cited microsurgery articles from five journals: Plastic and Reconstructive Surgery, Annals of Plastic Surgery, Journal of Plastic, Reconstructive & Aesthetic Surgery, Journal of Reconstructive Microsurgery, and Microsurgery. Articles were identified and sorted based on the number of citations and citations per year. Results The 50 most cited clinical and basic science articles were identified. For clinical articles, number of total citations ranged from 120 to 691 (mean, 212.38) and citations per year ranged from 30.92 to 3.05 (mean, 9.33). The most common defect site was the head and neck (n = 15, 30%), and flaps were perforator and muscle/musculocutaneous flaps (n = 10 each, 20%, respectively). For basic science articles, number of citations ranged from 71 to 332 (mean, 130.82) and citations per year ranged from 2.20 to 11.07 (mean, 5.27). There were 27 animal, 21 cadaveric, and 2 combined studies. Conclusions The most highly cited microsurgery articles are a direct reflection of the educational and clinical trends. Awareness of the most frequently cited articles may serve as a basis for core knowledge in the education of plastic surgery trainees. Level of Evidence III.

Cited References:

GARFIELD E, 1987, JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION, V257, P52

Hirsch JE, 2005, PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, V102, P16569

Rickard Rory F., 2014, ANNALS OF PLASTIC SURGERY, V73, P465

Egghe Leo, 2006, SCIENTOMETRICS, V69, P131

DUBIN D, 1993, ARCHIVES OF DERMATOLOGY, V129, P1121

Baltussen A, 2004, INTENSIVE CARE MEDICINE, V30, P902

Ibrahim George M., 2012, EPILEPSIA, V53, P765

Volgas DA, 2006, Orthop Trauma Dir, V05, P29

Fersht Alan, 2009, PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, V106, P6883

Masic Izet, 2013, JOURNAL OF RESEARCH IN MEDICAL SCIENCES, V18, P516

Paladugu R, 2002, WORLD JOURNAL OF SURGERY, V26, P1099

Loonen MP, 2008, Plast Reconstr Surg, V121, P320e

KOSHIMA I, 1989, BRITISH JOURNAL OF PLASTIC SURGERY, V42, P645

GODINA M, 1986, PLASTIC AND RECONSTRUCTIVE SURGERY, V78, P285

Hallock Geoffrey G., 2012, PLASTIC AND RECONSTRUCTIVE SURGERY, V130, P769E

ALLEN RJ, 1994, ANNALS OF PLASTIC SURGERY, V32, P32

TAYLOR GI, 1975, PLASTIC AND RECONSTRUCTIVE SURGERY, V56, P243

Garfield E, 2006, JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION, V295, P90

Zhang Wen-Jun, 2013, ANNALS OF PLASTIC SURGERY, V71, P103

Wei FC, 2002, PLASTIC AND RECONSTRUCTIVE SURGERY, V109, P2219

Shashikiran N D, 2013, Journal of the Indian Society of Pedodontics and Preventive Dentistry, V31, P133

Fenton JE, 2002, JOURNAL OF LARYNGOLOGY AND OTOLOGY9th Meeting of the British-Society-of-History-of-ENT, SEP, 2001, BIRMINGHAM, ENGLAND, V116, P494

TAYLOR GI, 1975, PLASTIC AND RECONSTRUCTIVE SURGERY, V55, P533

ZAMBONI WA, 1993, PLASTIC AND RECONSTRUCTIVE SURGERY, V91, P1110

ROELANTS G, 1978, BULLETIN OF THE MEDICAL LIBRARY ASSOCIATION, V66, P363

NYLEN C O, 1954, Acta oto-laryngologica. Supplementum, V116, P226

DANIEL RK, 1973, PLASTIC AND RECONSTRUCTIVE SURGERY, V52, P111

Thomson Reuters Web of Science, Institute for Scientific Information (ISI) Journal Citation Reports,

MOON HK, 1988, PLASTIC AND RECONSTRUCTIVE SURGERY, V82, P815

HIDALGO DA, 1989, PLASTIC AND RECONSTRUCTIVE SURGERY, V84, P71

Wang Dashun, 2013, SCIENCE, V342, P127

Celayir S., 2008, EUROPEAN JOURNAL OF PEDIATRIC SURGERY, V18, P160

Wei FC, 2002, PLASTIC AND RECONSTRUCTIVE SURGERY, V109, P2227

WEI FC, 1986, PLASTIC AND RECONSTRUCTIVE SURGERY, V78, P191

BOYD JB, 1984, PLASTIC AND RECONSTRUCTIVE SURGERY, V73, P1

Nam Jason J., 2014, JOURNAL OF BURN CARE & RESEARCH, V35, P176

 

Of the 36 cited references, only 3 relate to the bibliometrics/scientometrics literature (this, THIS and this). Compare these two examples with one of the other articles listed – Example 3:

 

Title:

Does Interdisciplinary Research Lead to Higher Citation Impact? The Different Effect of Proximal and Distal Interdisciplinarity

Authors:

Yegros-Yegros, A; Rafols, I; D’Este, P

Addresses:

[Yegros-Yegros, Alfredo] Leiden Univ, Ctr Sci & Technol Studies CWTS, Leiden, Netherlands.

[Rafols, Ismael; D’Este, Pablo] Univ Politecn Valencia, Ingenio CSIC UPV, E-46071 Valencia, Spain.

[Rafols, Ismael] Univ Sussex, SPRU Sci & Technol Policy Res, Brighton, E Sussex, England.

[Rafols, Ismael] OST HCERES, Paris, France.

Source:

PLOS ONE, 10 (8):10.1371/journal.pone.0135095 AUG 12 2015

Abstract:

This article analyses the effect of degree of interdisciplinarity on the citation impact of individual publications for four different scientific fields. We operationalise interdisciplinarity as disciplinary diversity in the references of a publication, and rather than treating interdisciplinarity as a monodimensional property, we investigate the separate effect of different aspects of diversity on citation impact: i.e. variety, balance and disparity. We use a Tobit regression model to examine the effect of these properties of interdisciplinarity on citation impact, controlling for a range of variables associated with the characteristics of publications. We find that variety has a positive effect on impact, whereas balance and disparity have a negative effect. Our results further qualify the separate effect of these three aspects of diversity by pointing out that all three dimensions of interdisciplinarity display a curvilinear (inverted U-shape) relationship with citation impact. These findings can be interpreted in two different ways. On the one hand, they are consistent with the view that, while combining multiple fields has a positive effect in knowledge creation, successful research is better achieved through research efforts that draw on a relatively proximal range of fields, as distal interdisciplinary research might be too risky and more likely to fail. On the other hand, these results may be interpreted as suggesting that scientific audiences are reluctant to cite heterodox papers that mix highly disparate bodies of knowledge-thus giving less credit to publications that are too groundbreaking or challenging.

Cited References:

Sarewitz D, 2004, ENVIRONMENTAL SCIENCE & POLICY Symposium on the Politicization of Science – Learning from the Lomborg Affair, FEB, 2002, Boston, MA, V7, P385

Page SE, 2007, DIFFERENCE: HOW THE POWER OF DIVERSITY CREATES BETTER GROUPS, FIRMS, SCHOOLS, AND SOCIETIES, P1

Gunn J, 1999, The development of the social sciences in the United States and Canada: the role of Philantrophy, P97

PETERS HPF, 1994, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE, V45, P39

Ioannidis John P. A., 2014, NATURE, V514, P561

Levitt Jonathan M., 2008, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, V59, P1973

Fleming L, 2001, MANAGEMENT SCIENCE, V47, P117

Porter Alan L., 2006, RESEARCH EVALUATION, V15, P187

Huutoniemi Katri, 2010, RESEARCH POLICY, V39, P79

Lariviere Vincent, 2010, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, V61, P126

Abbot A, 2001, Chaos of disciplines,

Zhang L, 2015, Journal of the Association for Information Science and Technology,

Lowe P, 2006, JOURNAL OF AGRICULTURAL ECONOMICS, V57, P165

Boyack KW, 2014, STI 2014 Leiden Conference, P64

Katz S., 1997, Scientometrics, V40, P541

Kiesler S, 2005, Social Studies of Science, V35, P733

Corsi Marcella, 2010, AMERICAN JOURNAL OF ECONOMICS AND SOCIOLOGY, V69, P1495

Morillo F, 2003, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, V54, P1237

Bauman Z., 2005, Liquid life,

Gibbons M, 1999, NATURE, V402, PC81

Sarewitz Daniel, 2007, ENVIRONMENTAL SCIENCE & POLICY, V10, P5

Adams J, 2007, Report to the Higher Education Funding Council for England,

Nightingale P., 2007, Science and Public Policy, V34, P543

Rhoten D., 2009, Thesis Eleven, V96, P83

Rinia EJ, 2001, RESEARCH POLICY, V30, P357

Wagner Caroline S., 2011, JOURNAL OF INFORMETRICS, V5, P14

Mallard Gregoire, 2009, SCIENCE TECHNOLOGY & HUMAN VALUES97th Annual Meeting of the American-Sociological-Association, AUG 15-19, 2002, CHICAGO, IL, V34, P573

Lariviere Vincent, 2015, PLOS ONE, V10,

Wang Jian, 2015, PLOS ONE, V10,

Jacobs Jerry A., 2009, ANNUAL REVIEW OF SOCIOLOGY, V35, P43

Stirling Andy, 2007, JOURNAL OF THE ROYAL SOCIETY INTERFACE, V4, P707

Nooteboom Bart, 2007, RESEARCH POLICY, V36, P1016

ERC. ERC Grant Schemes, Guide for Applicants for the Starting Grant 2011 Call,

Hessels LK, 2011, Industry & Higher Education, V25, P347

Waltman Ludo, 2013, JOURNAL OF INFORMETRICS, V7, P833

Hessels Laurens K., 2011, SCIENCE AND PUBLIC POLICY, V38, P555

Leydesdorff Loet, 2013, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, V64, P2573

Turner S, 2000, Practising Interdisciplinarity, P46

Steele TW, 2000, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE, V51, P476

Hollingsworth R, 2000, Practising Interdisciplinarity, P215

Llerena P, 2004, Science and Innovation Rethinking the Rationales for Funding and Governance, P69

Molas-Gallart J, 2014, Journal of Science Policy and Research Management, V29, P69

Porter Alan L., 2009, SCIENTOMETRICS, V81, P719

Frenken Koen, 2009, JOURNAL OF INFORMETRICS, V3, P222

Uzzi Brian, 2013, SCIENCE, V342, P468

Karim Salim S. Abdool, 2011, NATURE, V474, P29

van Rijnsoever Frank J., 2011, RESEARCH POLICY, V40, P463

Hoffman E, 1999, Letters of transit Reflections on exile, identity, language and loss,

Phillips Nicola, 2009, REVIEW OF INTERNATIONAL POLITICAL ECONOMY, V16, P85

Katz J., 1997, Research Policy, V16, P1

Laudel Grit, 2006, RESEARCH EVALUATION, V15, P2

Willmott Hugh, 2011, ORGANIZATION, V18, P447

van Eck Nees Jan, 2013, PLOS ONE, V8,

Rafols Ismael, 2012, RESEARCH POLICY, V41, P1262

Klein Julie T., 2008, AMERICAN JOURNAL OF PREVENTIVE MEDICINE, V35, PS116

Bruhn JG, 1995, INTEGRATIVE PHYSIOLOGICAL AND BEHAVIORAL SCIENCE, V30, P331

National Academies of Science, 2004, Facilitating Interdisciplinary Research,

Braun T, 2003, SCIENTOMETRICS, V58, P183

Carayol N, 2005, RESEARCH EVALUATION8th International Conference on Science and Technology Indicators, SEP 23-25, 2004, Leiden, NETHERLANDS, V14, P70

Waltman Ludo, 2011, SCIENTOMETRICS, V87, P467

NARIN F, 1991, SCIENTOMETRICSINTERNATIONAL CONF ON OUTPUT INDICATORS FOR EVALUATION OF THE IMPACT OF EUROPEAN COMMUNITY RESEARCH PROGRAM, JUN 14-15, 1990, PARIS, FRANCE, V21, P313

Rafols Ismael, 2007, INNOVATION-THE EUROPEAN JOURNAL OF SOCIAL SCIENCE RESEARCHConference on Converging Science and Technologies – Research Trajectories and Institutional Settings, MAY 14-15, 2007, Vienna, AUSTRIA, V20, P395

Rafols Ismael, 2010, SCIENTOMETRICS, V82, P263

Barry Andrew, 2008, ECONOMY AND SOCIETY, V37, P20

Stirling A, 1998, SPRU Electronic Working Papers, P28

Bruce A, 2004, FUTURES, V36, P457

Salter A, 2002, IPTS Report, P66

Chavarro Diego, 2014, RESEARCH EVALUATION, V23, P195

 

Example 3 (still not published in a specialised journal) is written by academics working in the bibliometrics/scientometrics/infometrics field (check the affiliations), and the difference is clear – a quick check of the references confirms that it engages that field. In other words, it meets one of the minimum standards for published academic work that our peer review processes are supposed to enforce. As should be obvious from the comparison above, any academic working in the bibliometrics/scientometrics/infometrics field would immediately see that Example 1 and Example 2 do not engage the field, which raises the question: how did these articles make it through peer review?
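The ‘quick check of the references’ described above can be caricatured in a few lines of code: count how many of an article’s cited references sit in recognised bibliometrics/scientometrics venues. The venue list is a rough, incomplete placeholder, and the sample references are simply three lines from the Example 2 list above; a serious version would need a proper journal-to-field mapping.

```python
# Crude approximation of the reference check described above: what share of an article's
# cited references appear in bibliometrics/scientometrics/informetrics venues?
# The venue list is a rough placeholder, not an authoritative field definition.

FIELD_VENUES = {
    "scientometrics",
    "journal of informetrics",
    "research evaluation",
    "journal of the american society for information science and technology",
}

def field_engagement(cited_sources):
    """Return the share of cited reference strings that mention a field venue."""
    if not cited_sources:
        return 0.0
    in_field = sum(any(venue in ref.lower() for venue in FIELD_VENUES) for ref in cited_sources)
    return in_field / len(cited_sources)

# Three reference strings taken from the Example 2 list above.
sample_refs = [
    "Egghe Leo, 2006, SCIENTOMETRICS, V69, P131",
    "GODINA M, 1986, PLASTIC AND RECONSTRUCTIVE SURGERY, V78, P285",
    "Hirsch JE, 2005, PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, V102, P16569",
]
print(f"Share of sampled references in field venues: {field_engagement(sample_refs):.0%}")  # 33%
```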

I am not saying that academics in other fields have nothing useful to offer on the subject of bibliometrics/scientometrics/infometrics; indeed, given how some of the ideas from the discipline permeate their professional lives, academics would do well to be across some of the basic concepts. But imagine if the current situation were reversed – an academic working in the field of bibliometrics downloaded some easily accessed data on cancer outcomes and wrote an article titled something like ‘Radiation, Surgery or Chemotherapy? Effectiveness of treatment for patient outcomes’. Not only that, but imagine that the article contains no references to the field of radiation oncology…and then it gets submitted, peer reviewed and published in a bibliometrics journal! It makes no sense at all.

In my experience, proprietary citation data are complex and require huge amounts of cleaning and curating, and the data that come from front-end products like Web of Science (WoS) and Scopus look nothing like the custom data solutions that funding councils and groups such as CWTS, iFQ and Science-Metrix work with on a regular basis in research evaluation, policy development and research. A quick look at the Scopus Custom Data Documentation surely illustrates that we should be more thoughtful than to simply download some data from Scopus and get on with the analysis.

The problem is hopefully obvious, but the reasons are not. Why is it that when it comes to the specialised discipline of bibliometrics/scientometrics/infometrics, seemingly any academic thinks that they can do it and academic rigour does not apply?

One of the reasons for the above situation is that products like Scopus and WoS have been aggressively marketed as easy solutions to the complex problem of research management. They provide push-button answers to what are, effectively, issues of public policy and industrial relations. Push-button solutions to other policy issues such as economic inequality, aging populations or migration would no doubt likewise find a welcome market.

Partly, it is the fault of those of us working in bibliometrics/scientometrics/infometrics and research evaluation, who perhaps should have foreseen these consequences and policed the use of citation data better. As Hicks et al. recount,

As scientometricians, social scientists and research administrators, we have watched with increasing alarm the pervasive misapplication of indicators to the evaluation of scientific performance.

But again, this is only part of the explanation – as Hicks et al. also point out, it is impractical to think that we can be in the room every time there is a discussion about research evaluation within a university, or every time an academic from outside of the field mentions impact factors or h-indexes.

Which brings me back to my point – yet another part of the explanation must be that academics themselves are involved in perpetuating the current misuse of metrics such as ‘impact factors’, as Example 1 and Example 2 above illustrate.

This should be part of how we think about ‘the system’. Thinking about academics as being at the mercy of ‘the system’ imagines a one-way transaction in which academics are affected by ‘the system’ but play no role in creating it, perpetuating it or benefiting from it. I accept that there are important aspects of university policy and administration that are outside the control of the average academic, like government research priorities, funding council rules, university budgets and so on. I also accept that academic work is in many ways hostage to global commercial interests (big publishers, citation data providers etc.). But as in Examples 1 and 2 above, where academic rigour is clearly compromised (in the name of an easy answer, a quick publication, the inherent competitiveness of academics…I don’t know what), as recently outlined by Hicks et al., and as in a range of other aspects of academic work, academics’ own practices sustain ‘the [current] system’. To improve ‘the system’, to make it more open, engaged and democratic, we must understand these complex interactions and accept that how academics choose to practise academic work plays an important part. Then we have to agree to hold ourselves to a higher standard; then we can begin to change what we can change, rather than relying on simplistic and disingenuous explanations of how ‘the system’ is broken.

The ‘moral evaluation gap’

Here is a link to Sarah de Rijcke’s recent keynote at the European Sociological Association conference on

how indicators influence knowledge production in the life sciences and social sciences, and how in- and exclusion mechanisms get built into the scientific system through certain uses of evaluative metrics. 

De Rijcke argues that her findings demonstrate 

that we need an alternative moral discourse in research assessment, centered around the need to address growing inequalities in the science system. 

In de Rijcke’s words, there is an ‘evaluation gap’, a

Discrepancy between evaluation criteria and the social, cultural and economic functions of science

What she means, in short, is that our focus in research evaluation on measurements of quality has resulted in a system focused on meeting performance targets at the expense of creating a socially responsible research system.

While I agree with this, it assumes that we agree on what the social, cultural and economic functions of science and research are – it is good to talk about economic, social and cultural benefits, but we should first be able to answer the question: benefits to whom? The social, cultural and economic functions of science are neither given, universal nor eternal – one need only look to the case of Vannevar Bush and the Office of Scientific Research and Development, or the tragic case of Lysenkoism, to understand this. And while these examples are extreme, they make the point: the social, economic and cultural are situated in the historical and political, and by extension so is science functioning in the service of the social, economic and cultural.

This is not to say that there is a pure realm of research that exists beyond historical and political contingencies. On the contrary, it is to say that science and research are always the product of their time and place.

Those of us working in research evaluation, when we talk about measurement, have to start from the understanding that what we are trying to measure are, in the first instance, political and historical interests, and that in simply measuring these, as de Rijcke demonstrates, we will push science and research in the direction of those interests. We spend a lot of time talking about the social and economic impacts of science, but far less time talking about how the social and economic impact on science.