Research Strategies Australia


Ensuring research engagement, and not just research for private profit

The report on the Pilot project of Research Engagement for Australia will be released soon, and I admit to being very happy that we (myself, ATSE and others) have managed to move the discussion of how we value and support university research away from the twin focus on ‘quality’ (as measured by ERA) and so-called research ‘impact’ (as per the UK REF) to now include research engagement. As outlined by the ARC Research Impact Principles and Framework:

Engagement describes the interaction between researchers and research organisations and their larger communities/industries for the mutually beneficial exchange of knowledge, understanding and resources in a context of partnership and reciprocity.

As we move further down this path, and as such ideas get adopted into new policies including the National Innovation and Science Agenda, we have to take care to ensure that we do not simply make ‘research engagement’ a synonym for ‘research industrialisation’, where research is motivated only by private profit, where research ethics are compromised, and so on. This runs the risk of breaking the civic pact of universities.

I am referring to things such as this recent case of Tamiflu.

In Australia, a first step would be for universities to make detailed research funding records publicly available through the HERDC. At present universities report grant-level detail for grants from the Federal Government and its funding agencies and programs (so-called Category 1 income), but the same is not required for other public sector and private sector funding (Category 2 and 3 income). These data are already sitting in universities’ financial and administrative systems, and are already submitted in an aggregated format. Providing the more detailed version would impose little, if any, additional burden on the university sector. But the transparency it would engender would be a huge step forward and provide a cost-effective safeguard against the kinds of issues posed by greater ties between our public research institutions and the private and government sectors that they can service.

My new job – Chief Data Scientist, The Conversation Media Group

Today I am happy to announce that I begin a new role as Chief Data Scientist for The Conversation Media Group. The role involves working across a range of professional services for the university and public research sectors, including consulting and analysis services (amongst other things). As such, the consulting provided in the past through Research Strategies Australia will be folded into the new role and continue to be available.

I wanted to take this opportunity to thank everybody who has supported the work of Research Strategies Australia over the last fourteen months, and to recap some of the major achievements:

  • Research Engagement for Australia (REA) – I am very happy to have led this project for ATSE, and even happier that it made it into the Watt review and now into the National Innovation and Science Agenda. Mostly, I am happy that Australia has shifted a large part of its focus away from measuring research impact, and towards measuring research engagement. The results of the REA pilot study conducted with all QLD and SA universities will be appearing soon and should move the discussion forward again.
  • Science and Research Priorities and Practical Challenges – Working closely with my friend and colleague Adam Finch at CSIRO, we developed a method for quantifying Australia’s past research effort against each of the Government’s new research priorities and practical challenges. For my part, I applied this to ARC and NHMRC funding data to show how much has been spent against each priority area. Big(ish) data and text mining…the results are in the charts if you follow the link above.
  • Measuring the Value of International Research Collaboration – This one has yet to be published but you can read a little more about it here where it is referenced. I expect that it will be the beginning of a larger discussion of how we measure and value our international research partnerships. I also suspect that it will further connect our world class research with a range of end users of research, especially governments who can derive so much value from embedding research policy in other policy settings such as international development aid, diplomacy, economic and trade policy…the list goes on.
  • Single Higher Education Research Data Collection Working Group – It was a great pleasure to chair the working group trying to bring together a single set of rules for the HERDC and ERA data collections.

There are plenty more projects I could go through, but at the end of the day, I wanted to simply say thanks to all of those I have had the pleasure of working with, the members who agreed to lend their expertise and time to the working groups and steering committees, colleagues here and abroad across the research sector and government who have made themselves available to provide guidance and friendship, the individuals, academies, Dean’s groups, departments and universities that have employed me, the conference organisers who have invited me to speak…and everybody else who helped Research Strategies Australia do what it has done. Also, a special thanks to Melinda Laundon and Andreea Papuc Krischer for making themselves available to help out on many of the projects!

As I said, the work will continue in the new job, and myself and a new team will be available for consulting and advice as always. The role is an exciting opportunity to move this to a global stage, and with a great group of people around me. I look forward to continuing to work with you all.

(I will also continue to post to the blog.)

Links for January

Here are a bunch of links to recent reading:

  • Why do authors pursue low risk publication options?
  • The politics of commercial open access providers
  • Online social data for prediction
  • Measuring the correlation between expertise and influence online
  • The Chief Economist on R&D in small enterprises
  • The social relations between science and society
  • The problem of social good in higher education
  • Science isn’t broken
  • The limit of market reforms in higher education

Case studies of research impact in Australia: why we don’t need to, and why we shouldn’t

Case studies do not measure research impact, they demonstrate it

I was quoted recently in an article in The Conversation on some of the recommendations coming from the recently released Watt Review:

Tim Cahill, director of Research Strategies Australia, specialising in higher education research policy, advises against introducing case studies for measuring impact in Australia.

He says: “The value of case studies is what we can learn from them. The UK has already produced thousands of case studies that we can use – are we going to learn anything new by producing hundreds or even thousands of our own?”

To quickly expand on this statement: first, case studies do not measure impact, they demonstrate it. In many cases they do this by quantifying the impact, but this is different from measurement. Measurement implies that there is an agreed standard – for example, a metre – that can be used to gauge and compare things on a common scale – e.g. as measured in metres the distance from my home to work is shorter than the distance from my home to the moon.

So-called measuring of impact through case studies does not operate in the same way, even where the units of measure might be the same – such as income generated or the size of an audience in attendance at a recital. What case studies of research impact attempt to do is combine a number of self-selected indicators to demonstrate a self-nominated impact. Can we compare them against each other? Yes. Can we derive meaning from them? Yes. Can we rank them relative to each other? Absolutely. But none of this is related to measuring the impacts of research.

Learning how impact happens

So, why is this an important distinction to make? Because the focus on measuring the impact of research through case studies has obscured their real value, which is to show the different ways that successful research impacts have occurred. What case studies are really good for is demonstrating the different players, conditions and activities that were involved in taking research from an insight to an impact – who was involved, what were they doing and what were the specific conditions that needed to be in place for that to work.

Looked at in this way, case studies are an important tool that teaches us what we can do to maximise the likelihood of repeating past successes. The lessons we learn from case studies allow us to create the conditions that have been proven to deliver impact.

Which is why I don’t think we need to undertake a case study-based research impact evaluation in Australia. The UK has already done it, and the case studies have been made freely available online. As far as learning from case studies goes, I see no reason why what we could learn from the roughly 6,500 case studies in the UK would be any different from what we could learn if we produced our own set of hundreds or thousands of Australian case studies.

The cost of not-measuring research impact?

Now for why we shouldn’t use case studies to evaluate research impact in Australia: the effort involved will be very disruptive. Some simple calculations make this clear.

In 2013 there were some 15,602 Research FTE and 27,387 Teaching and Research FTE. The academic working year consists of 48 weeks (or 240 days); the usual Research contract is 80% research, or 192 days a year; the usual Teaching and Research contract is 40% research, or 96 days a year. In this time Australian academics produced 65,557 research outputs, which works out to about 86 days per output (assuming that the research time was not split with HDR supervision, conferences etc.).

As reported elsewhere, each case study in the REF 2014 took about 30 days of staff time to create. In other words, each case study costs about 35% of a research output.
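For anyone who wants to check the arithmetic, here is a minimal sketch of the calculation above, using the 2013 figures quoted in the text (the variable names are mine):

```python
# Back-of-envelope reproduction of the figures above (2013 data as quoted in the text).
research_fte = 15_602              # Research-only FTE
teaching_research_fte = 27_387     # Teaching and Research FTE
academic_year_days = 240           # 48 weeks
research_days = 0.8 * academic_year_days   # usual Research contract: 80% research
t_and_r_days = 0.4 * academic_year_days    # usual Teaching and Research contract: 40% research
outputs_2013 = 65_557              # research outputs produced in 2013

total_research_days = research_fte * research_days + teaching_research_fte * t_and_r_days
days_per_output = total_research_days / outputs_2013      # ~86 days per output
case_study_share = 30 / days_per_output                   # REF 2014: ~30 staff days per case study

print(round(days_per_output), round(case_study_share, 2))  # 86 0.35
```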

So one way to think about it is to ask how many research outputs a national research evaluation would cost. I will discuss two approaches to gauge this: first, we could use the REF model, which is 1 case study for every 10 academics submitted. In Australia, our research evaluation system, ERA, is comprehensive rather than selective like the UK’s, so 1 case study for every 10 of the 43,000 or so FTE in ERA would be around 4,300 case studies. That would be 128,967 days of staff time, or 1,503 research outputs, which is about 2.3% of the nation’s yearly research output.

Another way to determine the figure would be to take the number of evaluations in the recent ERA round as a guide – in ERA 2015 there were 2,460 units evaluated. If we required one case study for each of these units, that would equate to 73,800 days of staff time, or, in other words, 858 research outputs that would not be produced. That is about 1.3% of the total research output of Australia in 2013.
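Continuing the sketch above, the two scenarios can be compared side by side (the 4,300 and 2,460 case-study counts are taken from the text; the exact output counts differ slightly depending on rounding):

```python
# Staff days and research outputs forgone under the two case-study models discussed above.
days_per_output = 85.8      # from the earlier calculation (~86 days per output)
case_study_days = 30        # staff days per REF 2014 case study
national_outputs = 65_557   # Australian research outputs, 2013

scenarios = {
    "REF model (1 per 10 of ~43,000 FTE)": 4_300,
    "ERA 2015 (1 per unit of evaluation)": 2_460,
}
for name, case_studies in scenarios.items():
    staff_days = case_studies * case_study_days
    outputs_forgone = staff_days / days_per_output
    share = outputs_forgone / national_outputs
    print(f"{name}: {staff_days:,} days, ~{outputs_forgone:,.0f} outputs, {share:.1%} of national output")
# ~129,000 days / ~1,500 outputs / ~2.3%, and 73,800 days / ~860 outputs / ~1.3%
```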

Neither figure seems like a lot. However, in my experience it is usually the most senior research leaders in an institution who undertake tasks such as preparing university submissions for evaluation. This means that we are not just trading off 1-2% of our research outputs, but potentially our top 1-2% of research outputs.

The second way to look at the equation is in terms of how much universities would need to receive back in funding to cover costs. Again, in the REF, the median cost of a case study was £7,500, or about AU$15,600. If we multiply that by the figures above we get $67M and $38M respectively for the two models. For impact case studies to be cost-neutral, this is how much universities would need to receive on the back of the outcomes. Consider for a moment that this year ERA will deliver around $77M to universities. Introducing a case study approach would need to more or less double the amount of block funding delivered through research evaluation, which is a significant change in policy with unknown outcomes.
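And the funding side of the same sketch, using the quoted REF median cost of £7,500 (about AU$15,600) per case study and the roughly $77M that ERA delivers to universities this year:

```python
# Cost of each case-study scenario compared with current ERA-linked block funding.
cost_per_case_study = 15_600        # AU$, converted from the REF median of ~GBP 7,500
era_block_funding = 77_000_000      # AU$, approximate ERA-driven funding quoted above

for name, case_studies in {"REF model": 4_300, "ERA units": 2_460}.items():
    total_cost = case_studies * cost_per_case_study
    print(f"{name}: ${total_cost / 1e6:.0f}M, {total_cost / era_block_funding:.0%} of ERA block funding")
# REF model: $67M (~87% of current ERA funding); ERA units: $38M (~50%)
```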

What should we do?

I think the most important thing for us to do is undertake a large scale analysis of the REF case studies and see what we can learn from them. What works, what doesn’t work, how does impact happen, are there patterns, common themes etc.?

This will be far cheaper than running our own case study evaluation and may give us a large part of the value that such an exercise would bring. 

How diverse are our academic interests?

There is a sense among many of the people I speak to at the moment that we have an opportunity to make transformational changes to the higher education sector in Australia. Between the review of research training, review of higher education research funding and the government’s upcoming innovation statement there is certainly a lot going on.

Throw into the mix global moves towards open access, a general discontent with the corporatisation of publicly funded research – publisher paywalls etc. – and unprecedented scrutiny over the role of things like peer review, how we measure and value research and the basic social function of publicly funded research, and it really does seem like an exciting time.

But how likely are we to agree on what this change should look like and what the direction of the Australian higher education sector should be? My time working in university research policy has taught me that it is difficult to deliver a consensus on most issues, and that there are diverse competing interests.

The latest figures indicate that there are around 102,000 full time equivalent staff in the sector – about 44,000 on the academic scale and the remaining 58,000 on non-academic duties. To put that in perspective, you could fit the Australian university workforce in the stands of the Melbourne Cricket Ground.

If one were to gauge the diversity of that crowd by the number of representative and peak bodies that exist, one would get the impression that this is indeed a very diverse group.

First, there are 39 universities. Many of these universities have banded together under the umbrella of a cohort – we have the Go8, the ATN, the IRU and the RUN.

Within the universities there are disciplines, which tend to be represented nationally by a Dean’s group – there are 23 of these, give or take. Many of the Dean’s groups have sub-committees and groups, such as the Research Directors, etc.

The disciplines also have the Academies – the four Learned Academies (ASSA, AAH, AAS and ATSE) – which are joined together through ACOLA.

Then there is Universities Australia and the various sub-groups therein (such as the DVCRs). Then the NTEU. Then groups like BHERT and ARMS and the list goes on and on and on! And almost all of these groups (plus many others) will have made submissions to each of the reviews currently underway, making a claim for unique interests that need to be represented.

Are the interests of the academic workforce in Australia really so diverse? And if so, why?

At some level, we are all working towards a few common goals, which include advancing and transmitting knowledge. And the way we understand this and do this doesn’t really differ from place to place – research and teaching at James Cook University looks pretty much the same as at the University of Western Australia. Nor does it change much from discipline to discipline – how we research and teach in social science is not that different to how we teach and research in, say, engineering.

Many of us know each other – we have worked together, met at conferences, served on review panels together, shared students, have mutual friends – and we would agree that we are not all that different from each other.

Then why would there appear to be so many competing interests?

I can put much of this down to two factors: first, academic work has always been a competitive sport in which elites compete for prestige and resources and where demarcating differences between academic communities is the name of the game. The proliferation of ever more niche journals and the existence of Learned Academies and Societies are evidence enough of this.

Second, this is reflected back on the workforce in the mechanisms that govern academic work in Australia – from the market instruments of block funding to the national competitive grants programs, to the focus on elite discipline-based journals as the final destination of research. At each level of governance the focus is on competition.

These two factors make for a vicious circle in which competing interests flourish; what this masks are our shared goals and common interests. I would argue that a focus on the former has hindered our national research capacity, while recognising and building a future based on the latter would be to the benefit of all Australians, not just our university workforce.

Re-imagining a more democratic public university system – part 2

For a while I have been thinking that many of the issues facing the Australian higher education research sector – funding shortfalls, obsession with journal articles and associated article- and author-level metrics, disconnect with the public sphere and low collaboration with the private sector, among others – are compounded by the ‘dual funding’ model.

Government support for research comes in two forms – about half from peer reviewed grants (such as from the ARC and NHMRC) and half from research block grants. However, as I have outlined elsewhere, the research block grants are driven 55% by the outcomes of the peer reviewed grants, ostensibly with the logic of offsetting the indirect costs associated with those grants. Meanwhile, income from public and private sector partners barely rates a mention in the allocation formula. This is not the case in many research intensive economies, as outlined in Figure 1.

 

Figure 1 Government funding of R&D in higher education by funding type, 2010 (from OECD Science, Technology and Industry Scorecard 2013)


But why does this matter? Well, firstly, it focuses the entire university research endeavour on 3-4 year project cycles, which is not conducive to breakthrough research, which often requires long time frames and serendipity. It also minimises universities’ ability to back winners and undertake long term strategic planning of their research agendas, leaving them instead preparing for round after round of grant applications.

But, more importantly it forces academics to focus on grant-getting. And grant-getting is predicated on journal article writing to boost ‘track records’. And journal article writing is based on peer review which is based on ‘hermetically-sealed idiom’ with a good dose of gate-keeping. In other words, the funding model rewards academics for turning inward.

It may be argued using market logic that these competitive mechanisms will determine the correct outcomes, but as competitive market-based mechanisms, both individual grants and the block grants are less than competitive: both are predicated on the status quo, block grants through the many inbuilt safety nets, and ARC/NHMRC grants because they are geared towards previous winners who have used ARC/NHMRC grants to increase their track records to make themselves more competitive for further grants ad infinitum.

While grant-getting is the only game in town (both as an end in itself and as the driver of block grant funding) this cycle will perpetuate. I believe that a rebalance of funding that provides greater recognition for income derived from the public and private sectors, and at the same time delivers a larger proportion of funding through (reworked) block grants would go a long way to democratising university research.

Given the potential pool of funding available from the private and public sectors is virtually uncapped, there may be additional benefits to this approach, such as addressing the funding crisis for university teaching – research done for and with private and public sector partners tends to come closer to being fully funded (i.e. it includes on-costs), so a greater focus on this may diminish the need to cross-subsidise research from teaching budgets.

It will be interesting to see if any of this comes out in the Watt review.

This post is part 2 of an ongoing series on re-imagining a more democratic public university system. Part 1 can be viewed here.

Research Engagement and Creative Arts Research

I was very happy to spend the day with the Deans and Directors of Creative Arts (DDCA) a couple of Wednesdays ago for their annual conference and AGM. There have been some interesting submissions coming from this group to major reviews that are currently underway including the ACOLA Review of the Research Training System and the Watt Review of Research Policy and Funding Arrangements (lots of credit to Su Baker and Jenny Wilson).

My panel session was dedicated to strategic questions around positioning of Creative Arts research in relation to emerging discussions in research evaluation. For me, one of the most pleasing aspects of our discussions on the day was how comfortable people were with the idea of engaging with research end-users. It seems that a strong grounding in creative practice makes a focus on research engagement a natural fit; by their very nature, performance- and exhibition-based research disciplines are audience/end-user-centric.

The issues I foresee for these disciplines in a research-engagement paradigm have less to do with outlining the importance of research engagement and more to do with how these transactions operate within Creative Arts disciplines. Three key issues are outlined below.

 

Performance and exhibition spaces as research infrastructure – since the introduction of ERA there is widespread acceptance that live performances, original creative works, curated works and recorded/rendered works (can) meet the definition of research. It is no great leap that the galleries, museums and performance spaces that support these research activities are therefore important research infrastructure. Importantly, funding received to support this infrastructure should be submitted as part of the HERDC return for institutions – my sense is that this is a discussion that still needs to be had in a number of institutions. Here is the relevant description from the HERDC guidelines:

Net receipted income which can be included in the Research Income Return […] grants for specific and specialised equipment used for the conduct of research

In-kind support – in-kind support is a mainstay of Creative Arts research funding, but is not eligible to be submitted under HERDC. There are a few potential approaches to address this:
  • The first is to lobby the Department of Education and Training for in-kind support to be included in HERDC. I do not know the reasons why it is currently excluded, but I do know that comprehensive records of in-kind support are not kept widely by universities. Further, my sense is that beyond a line in an ARC Linkage Grant application, in-kind support is not recorded in any uniform way.
  • A more complex approach would be to work closely with funding partners to see whether ‘in-kind’ is the appropriate classification for this support, or if there are more appropriate ways to record it (e.g. ‘donations’, which are eligible under HERDC Category 3 income). I admit to knowing very little about this, except that it is likely to involve taxation law and employment arrangements (on both sides of the support) in addition to HERDC rules. Anyway, it is worth asking the question.
  • The most practical, but perhaps least satisfying, approach is to accept that existing data (including ARC Linkage grants and Category 2-3 income) will correlate very closely with levels of in-kind support, i.e. it would be uncommon to have significant amounts of in-kind support in the absence of financial support (I have no evidence to support this statement, but it can be easily tested by universities – a minimal sketch of such a test follows this list). As long as any use of these data is sensitive to different practices between disciplines, then there should be no problem with using financial indicators as a proxy for in-kind support, i.e. comparing Category 2-3 income in Creative Arts research against Medical research is not fair, but comparing Creative Arts Category 2-3 research income between universities is ok.
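Purely as an illustration of how a university might test that assumption, here is a minimal sketch that correlates recorded in-kind support with Category 2-3 income across disciplines; the figures and column names are hypothetical, not real data:

```python
import pandas as pd

# Hypothetical extract from a university research office system (illustrative values only).
df = pd.DataFrame({
    "discipline": ["Creative Arts", "Engineering", "Medicine", "Social Science"],
    "cat_2_3_income": [120_000, 2_400_000, 5_100_000, 640_000],   # AU$, HERDC Category 2-3 income
    "in_kind_support": [95_000, 1_900_000, 4_300_000, 480_000],   # AU$, internally recorded in-kind support
})

# If financial income is a workable proxy for in-kind support, this correlation should be high.
print(df["cat_2_3_income"].corr(df["in_kind_support"]))
```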

Consulting, contracting and commercialisation – many Creative Arts researchers in academia maintain active professional careers in practice. At present, much of this activity is conducted by individual academics under personal ABN/ACN arrangements, and is therefore not eligible for reporting under HERDC, where income has to have been transacted through the university. In some cases this is unavoidable – e.g. where funding bodies only support individuals or corporations (and not universities) – but in other cases there is no technical reason why this is so. There are possibly very good financial reasons why an academic would choose to receive this income outside of the institution, including that universities usually take a cut of this income to recover costs. I personally contend that if the work is done on the university’s time and/or with its resources (computers, offices, studios etc.) then this income should be transacted through the university, and not through a private company or other arrangement. But that is me, and there is plenty of room for compromise on such issues within universities. There are likely also some discussions to be had about IP, but again universities can be nothing if not flexible on such things.

 

Addressing these three key issues alone will, in my view, hugely benefit Creative Arts research in Australian universities. As far as I can tell, researchers in this field have to address some minor misalignments, but overall a focus on research engagement suits the kind of work that they have always done.

 

Putting the ‘public’ back in publicly funded research – UoW postgraduate careers day keynote


I was very happy to deliver the keynote address today at the University of Wollongong Graduate Researcher Careers Conference. My presentation is attached at the bottom of this post for those interested in looking at it.

The take-home messages from my address are summed up below:

  • Remember the ‘public’ in publicly funded research
  • Think about the public who pays for your research and what you give them in return
  • Remember that what you do as a researcher always (sometimes profoundly) changes the world we live in
  • Think about research beyond the walls of the university and beyond the covers of a journal

 

UoW postgrad (ppt)

Imagining ‘the system’ – on SIGMETRICS, bibliometrics and academic standards

PROP. XXVI. The human mind does not perceive any external body as actually existing, except through the ideas of the modifications of its own body.

Proof.—If the human body is in no way affected by a given external body, then (II. vii.) neither is the idea of the human body, in other words, the human mind, affected in any way by the idea of the existence of the said external body, nor does it in any manner perceive its existence. But, in so far as the human body is affected in any way by a given external body, thus far (II. xvi. and Coroll.) it perceives that external body. Q.E.D.

Corollary.—In so far as the human mind imagines an external body, it has not an adequate knowledge thereof.

Proof.—When the human mind regards external bodies through the ideas of the modifications of its own body, we say that it imagines (see II. xvii. note); now the mind can only imagine external bodies as actually existing. Therefore (by II. xxv.), in so far as the mind imagines external bodies, it has not an adequate knowledge of them. Q.E.D.

I have mentioned elsewhere my surprise at colleagues who continue to imagine ‘the [so-called] system’ as a top-down authority that shapes and coerces academic work. I continue to believe that this is a misleading depiction, and that ‘the [so-called] system’ is nothing more than the sum total of bureaucratic, political and academic practices – including individual academics. Case in point: the obsession with performance measurement and bibliometric analyses performed by non-experts.

Recently, Diana Hicks et al. illustrated the point – between 1984 and 2014, mentions of the much maligned ‘journal impact factor’ (the mean number of citations received in a given year by the items a journal published in the previous two years) increased dramatically in journal articles and editorials. The obsession with this form of debasing performance measurement, which wants to reduce academic work to a single figure, has not been driven by contributions from academics and professionals working in bibliometrics/scientometrics/infometrics and research evaluation, but has taken place in the pages of multi-disciplinary, medical and life sciences journals.
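For readers outside the field, a toy illustration of how that single figure is computed (the numbers here are invented, not from any real journal):

```python
# Toy journal impact factor calculation for an imaginary journal, for the year 2014.
citations_2014_to_2012_13_items = 450   # citations received in 2014 by items published in 2012-13
citable_items_2012_13 = 150             # articles and reviews the journal published in 2012-13

jif_2014 = citations_2014_to_2012_13_items / citable_items_2012_13
print(jif_2014)   # 3.0 -- one number standing in for a journal's entire recent output
```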

I was reminded of this recently while reading through the latest post from the SIGMETRICS mailing list – for those who don’t know, the list is

intended for the exchange of technical information among members of the performance evaluation community. Typical submissions include performance-related questions and announcements of research papers, software, job opportunities, conferences, and calls for papers.

One of the regular features of the list is a contribution from Eugene Garfield including bibliographic details of recent papers mentioning bibliometrics, scientometrics etc. Today, as I read through it, it struck me that most of the articles listed were a) published in journals outside of the field of bibliometrics/scientometrics/infometrics, b) published by academics with listed affiliations outside of bibliometrics/scientometrics/infometrics disciplines, and c) contained little or no engagement with the academic field of bibliometrics/scientometrics/infometrics. Below are a couple of examples (note – scroll through to the bottom if you want to skip to the rest of this post) – Example 1:

 

Title:

Scientific impact of studies published in temporarily available radiation oncology journals: a citation analysis

Authors:

Nieder, C; Geinitz, H; Andratschke, NH; Grosu, AL

Addresses:

[Nieder, Carsten] Nordland Hosp, Dept Oncol & Palliat Med, N-8092 Bodo, Norway.

[Nieder, Carsten] Univ Tromso, Fac Hlth Sci, Inst Clin Med, N-9038 Tromso, Norway.

[Geinitz, Hans] Johannes Kepler Univ Linz, Krankenhaus Barmherzigen Schwestern, Dept Radiat Oncol, A-4010 Linz, Austria.

[Geinitz, Hans] Johannes Kepler Univ Linz, Fac Med, A-4010 Linz, Austria.

[Andratschke, Nicolaus H.] Univ Zurich Hosp, Dept Radiat Oncol, CH-8091 Zurich, Switzerland.

[Grosu, Anca L.] Univ Hosp Freiburg, Dept Radiat Oncol, D-79106 Freiburg, Germany.

Source:

SPRINGERPLUS, 4 10.1186/s40064-015-0885-y FEB 24 2015

Abstract:

The purpose of this study was to review all articles published in two temporarily available radiation oncology journals (Radiation Oncology Investigations, Journal of Radiosurgery) in order to evaluate their scientific impact. From several potential measures of impact and relevance of research, we selected article citation rate because landmark or practice-changing research is likely to be cited frequently. The citation database Scopus was used to analyse number of citations. During the time period 1996-1999 the journal Radiation Oncology Investigations published 205 articles, which achieved a median number of 6 citations (range 0-116). However, the most frequently cited article in the first 4 volumes achieved only 23 citations. The Journal of Radiosurgery published only 31 articles, all in the year 1999, which achieved a median number of 1 citation (range 0-11). No prospective randomized studies or phase I-II collaborative group trials were published in these journals. Apparently, the Journal of Radiosurgery acquired relatively few manuscripts that were interesting and important enough to impact clinical practice. Radiation Oncology Investigations’ citation pattern was better and closer related to that reported in several previous studies focusing on the field of radiation oncology. The vast majority of articles published in temporarily available radiation oncology journals had limited clinical impact and achieved few citations. Highly influential research was unlikely to be submitted during the initial phase of establishing new radiation oncology journals.

Cited References:

Holliday Emma, 2013, INTERNATIONAL JOURNAL OF RADIATION ONCOLOGY BIOLOGY PHYSICS, V85, P23

Wazer DE, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P111

Solberg TD, 1999, J Radiosurg, V2, P57

Joschko M A, 1997, Radiation oncology investigations, V5, P62

Maire JP, 1999, J Radiosurg, V2, P7

Chaney A W, 1998, Radiation oncology investigations, V6, P264

Epperly MW, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P331

Sanghavi S, 1999, J Radiosurg, V2, P119

Monga U, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P178

Norman A, 1997, Radiation oncology investigations, V5, P8

Kramer B A, 1998, Radiation oncology investigations, V6, P18

Kang S, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P309

Merrick G S, 1998, Radiation oncology investigations, V6, P182

Leborgne F, 1997, Radiation oncology investigations, V5, P289

Seymour C B, 1997, Radiation oncology investigations, V5, P106

Teicher BA, 1996, Radiat Oncol Invest, V4, P221

Garell PC, 1999, J Radiosurg, V2, P1

Nathu R M, 1998, Radiation oncology investigations, V6, P233

Nieder C., 2012, STRAHLENTHERAPIE UND ONKOLOGIE, V188, P865

Durieux Valerie, 2010, RADIOLOGY, V255, P342

Gibon D, 1999, J Radiosurg, V2, P167

Banasiak D, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P77

Smith BD, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P125

Stickle RL, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P204

Schmidt-Ullrich RK, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P321

Nieder C, 2013, J Cancer Sci Ther, V5, P115

Durand R E, 1997, Radiation oncology investigations, V5, P213

Haffty B G, 1997, Radiation oncology investigations, V5, P235

Fernandez-Vicioso E, 1997, Radiation oncology investigations, V5, P31

Peschel RE, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P278

Chidel MA, 1999, RADIATION ONCOLOGY INVESTIGATIONS, V7, P313

Prete J J, 1998, Radiation oncology investigations, V6, P90

Kondziolka Douglas, 2011, STEREOTACTIC AND FUNCTIONAL NEUROSURGERY, V89, P56

Kanaan Ziad, 2011, ANNALS OF SURGERY, V253, P619

Shao Hongfang, 2013, ONCOLOGY REPORTS, V29, P1441

Stringer Michael J., 2010, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, V61, P1377

Gieger M, 1997, Radiation oncology investigations, V5, P72

Nieder Carsten, 2013, SpringerPlus, V2, P261

Kulkarni Abhaya V., 2009, JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION, V302, P1092

Holliday Emma B., 2014, INTERNATIONAL JOURNAL OF RADIATION ONCOLOGY BIOLOGY PHYSICS, V88, P18

Johnson C R, 1998, Radiation oncology investigations, V6, P52

Desai J, 1998, Radiation oncology investigations, V6, P135

Roach M 3rd, 1997, Radiation oncology investigations, V5, P187

Sheridan MT, 1997, Radiat Oncol Invest, V5, P186

 

Of the 44 cited references listed, one is in a bibliometrics/scientometrics journal. Example 2:

 

Title:

Highest Impact Articles in Microsurgery: A Citation Analysis

Authors:

Kim, K; Ibrahim, AMS; Koolen, PGL; Markarian, MK; Lee, BT; Lin, SJ

Addresses:

[Kim, Kuylhee; Ibrahim, Ahmed M. S.; Koolen, Pieter G. L.; Markarian, Mark K.; Lee, Bernard T.; Lin, Samuel J.] Harvard Univ, Div Plast Surg, Beth Israel Deaconess Med Ctr, Sch Med, Boston, MA 02115 USA.

Source:

JOURNAL OF RECONSTRUCTIVE MICROSURGERY, 31 (7):527-540; 10.1055/s-0035-1546292 SEP 2015

Abstract:

Background Microsurgery has developed significantly since the inception of the first surgical microscope. There have been few attempts to describe “classic” microsurgery articles. In this study citation analysis was done to identify the most highly cited clinical and basic science articles published in five peer-reviewed plastic surgery journals. Methods Thomson/Reuters web of knowledge was used to identify the most highly cited microsurgery articles from five journals: Plastic and Reconstructive Surgery, Annals of Plastic Surgery, Journal of Plastic, Reconstructive & Aesthetic Surgery, Journal of Reconstructive Microsurgery, and Microsurgery. Articles were identified and sorted based on the number of citations and citations per year. Results The 50 most cited clinical and basic science articles were identified. For clinical articles, number of total citations ranged from 120 to 691 (mean, 212.38) and citations per year ranged from 30.92 to 3.05 (mean, 9.33). The most common defect site was the head and neck (n = 15, 30%), and flaps were perforator and muscle/musculocutaneous flaps (n = 10 each, 20%, respectively). For basic science articles, number of citations ranged from 71 to 332 (mean, 130.82) and citations per year ranged from 2.20 to 11.07 (mean, 5.27). There were 27 animal, 21 cadaveric, and 2 combined studies. Conclusions The most highly cited microsurgery articles are a direct reflection of the educational and clinical trends. Awareness of the most frequently cited articles may serve as a basis for core knowledge in the education of plastic surgery trainees. Level of Evidence III.

Cited References:

GARFIELD E, 1987, JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION, V257, P52

Hirsch JE, 2005, PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, V102, P16569

Rickard Rory F., 2014, ANNALS OF PLASTIC SURGERY, V73, P465

Egghe Leo, 2006, SCIENTOMETRICS, V69, P131

DUBIN D, 1993, ARCHIVES OF DERMATOLOGY, V129, P1121

Baltussen A, 2004, INTENSIVE CARE MEDICINE, V30, P902

Ibrahim George M., 2012, EPILEPSIA, V53, P765

Volgas DA, 2006, Orthop Trauma Dir, V05, P29

Fersht Alan, 2009, PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, V106, P6883

Masic Izet, 2013, JOURNAL OF RESEARCH IN MEDICAL SCIENCES, V18, P516

Paladugu R, 2002, WORLD JOURNAL OF SURGERY, V26, P1099

Loonen MP, 2008, Plast Reconstr Surg, V121, P320e

KOSHIMA I, 1989, BRITISH JOURNAL OF PLASTIC SURGERY, V42, P645

GODINA M, 1986, PLASTIC AND RECONSTRUCTIVE SURGERY, V78, P285

Hallock Geoffrey G., 2012, PLASTIC AND RECONSTRUCTIVE SURGERY, V130, P769E

ALLEN RJ, 1994, ANNALS OF PLASTIC SURGERY, V32, P32

TAYLOR GI, 1975, PLASTIC AND RECONSTRUCTIVE SURGERY, V56, P243

Garfield E, 2006, JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION, V295, P90

Zhang Wen-Jun, 2013, ANNALS OF PLASTIC SURGERY, V71, P103

Wei FC, 2002, PLASTIC AND RECONSTRUCTIVE SURGERY, V109, P2219

Shashikiran N D, 2013, Journal of the Indian Society of Pedodontics and Preventive Dentistry, V31, P133

Fenton JE, 2002, JOURNAL OF LARYNGOLOGY AND OTOLOGY9th Meeting of the British-Society-of-History-of-ENT, SEP, 2001, BIRMINGHAM, ENGLAND, V116, P494

TAYLOR GI, 1975, PLASTIC AND RECONSTRUCTIVE SURGERY, V55, P533

ZAMBONI WA, 1993, PLASTIC AND RECONSTRUCTIVE SURGERY, V91, P1110

ROELANTS G, 1978, BULLETIN OF THE MEDICAL LIBRARY ASSOCIATION, V66, P363

NYLEN C O, 1954, Acta oto-laryngologica. Supplementum, V116, P226

DANIEL RK, 1973, PLASTIC AND RECONSTRUCTIVE SURGERY, V52, P111

Thomson Reuters Web of Science, Institute for Scientific Information (ISI) Journal Citation Reports,

MOON HK, 1988, PLASTIC AND RECONSTRUCTIVE SURGERY, V82, P815

HIDALGO DA, 1989, PLASTIC AND RECONSTRUCTIVE SURGERY, V84, P71

Wang Dashun, 2013, SCIENCE, V342, P127

Celayir S., 2008, EUROPEAN JOURNAL OF PEDIATRIC SURGERY, V18, P160

Wei FC, 2002, PLASTIC AND RECONSTRUCTIVE SURGERY, V109, P2227

WEI FC, 1986, PLASTIC AND RECONSTRUCTIVE SURGERY, V78, P191

BOYD JB, 1984, PLASTIC AND RECONSTRUCTIVE SURGERY, V73, P1

Nam Jason J., 2014, JOURNAL OF BURN CARE & RESEARCH, V35, P176

 

Of the 36 cited references, 3 (this, this and this) are from bibliometrics/scientometrics journals. Compare these two examples with one of the other articles listed – Example 3:

 

Title:

Does Interdisciplinary Research Lead to Higher Citation Impact? The Different Effect of Proximal and Distal Interdisciplinarity

Authors:

Yegros-Yegros, A; Rafols, I; D’Este, P

Addresses:

[Yegros-Yegros, Alfredo] Leiden Univ, Ctr Sci & Technol Studies CWTS, Leiden, Netherlands.

[Rafols, Ismael; D’Este, Pablo] Univ Politecn Valencia, Ingenio CSIC UPV, E-46071 Valencia, Spain.

[Rafols, Ismael] Univ Sussex, SPRU Sci & Technol Policy Res, Brighton, E Sussex, England.

[Rafols, Ismael] OST HCERES, Paris, France.

Source:

PLOS ONE, 10 (8):10.1371/journal.pone.0135095 AUG 12 2015

Abstract:

This article analyses the effect of degree of interdisciplinarity on the citation impact of individual publications for four different scientific fields. We operationalise interdisciplinarity as disciplinary diversity in the references of a publication, and rather than treating interdisciplinarity as a monodimensional property, we investigate the separate effect of different aspects of diversity on citation impact: i.e. variety, balance and disparity. We use a Tobit regression model to examine the effect of these properties of interdisciplinarity on citation impact, controlling for a range of variables associated with the characteristics of publications. We find that variety has a positive effect on impact, whereas balance and disparity have a negative effect. Our results further qualify the separate effect of these three aspects of diversity by pointing out that all three dimensions of interdisciplinarity display a curvilinear (inverted U-shape) relationship with citation impact. These findings can be interpreted in two different ways. On the one hand, they are consistent with the view that, while combining multiple fields has a positive effect in knowledge creation, successful research is better achieved through research efforts that draw on a relatively proximal range of fields, as distal interdisciplinary research might be too risky and more likely to fail. On the other hand, these results may be interpreted as suggesting that scientific audiences are reluctant to cite heterodox papers that mix highly disparate bodies of knowledge-thus giving less credit to publications that are too groundbreaking or challenging.

Cited References:

Sarewitz D, 2004, ENVIRONMENTAL SCIENCE & POLICY Symposium on the Politicization of Science – Learning from the Lomborg Affair, FEB, 2002, Boston, MA, V7, P385

Page SE, 2007, DIFFERENCE: HOW THE POWER OF DIVERSITY CREATES BETTER GROUPS, FIRMS, SCHOOLS, AND SOCIETIES, P1

Gunn J, 1999, The development of the social sciences in the United States and Canada: the role of Philantrophy, P97

PETERS HPF, 1994, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE, V45, P39

Ioannidis John P. A., 2014, NATURE, V514, P561

Levitt Jonathan M., 2008, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, V59, P1973

Fleming L, 2001, MANAGEMENT SCIENCE, V47, P117

Porter Alan L., 2006, RESEARCH EVALUATION, V15, P187

Huutoniemi Katri, 2010, RESEARCH POLICY, V39, P79

Lariviere Vincent, 2010, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, V61, P126

Abbot A, 2001, Chaos of disciplines,

Zhang L, 2015, Journal of the Association for Information Science and Technology,

Lowe P, 2006, JOURNAL OF AGRICULTURAL ECONOMICS, V57, P165

Boyack KW, 2014, STI 2014 Leiden Conference, P64

Katz S., 1997, Scientometrics, V40, P541

Kiesler S, 2005, Social Studies of Science, V35, P733

Corsi Marcella, 2010, AMERICAN JOURNAL OF ECONOMICS AND SOCIOLOGY, V69, P1495

Morillo F, 2003, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, V54, P1237

Bauman Z., 2005, Liquid life,

Gibbons M, 1999, NATURE, V402, PC81

Sarewitz Daniel, 2007, ENVIRONMENTAL SCIENCE & POLICY, V10, P5

Adams J, 2007, Report to the Higher Education Funding Council for England,

Nightingale P., 2007, Science and Public Policy, V34, P543

Rhoten D., 2009, Thesis Eleven, V96, P83

Rinia EJ, 2001, RESEARCH POLICY, V30, P357

Wagner Caroline S., 2011, JOURNAL OF INFORMETRICS, V5, P14

Mallard Gregoire, 2009, SCIENCE TECHNOLOGY & HUMAN VALUES97th Annual Meeting of the American-Sociological-Association, AUG 15-19, 2002, CHICAGO, IL, V34, P573

Lariviere Vincent, 2015, PLOS ONE, V10,

Wang Jian, 2015, PLOS ONE, V10,

Jacobs Jerry A., 2009, ANNUAL REVIEW OF SOCIOLOGY, V35, P43

Stirling Andy, 2007, JOURNAL OF THE ROYAL SOCIETY INTERFACE, V4, P707

Nooteboom Bart, 2007, RESEARCH POLICY, V36, P1016

ERC. ERC Grant Schemes, Guide for Applicants for the Starting Grant 2011 Call,

Hessels LK, 2011, Industry & Higher Education, V25, P347

Waltman Ludo, 2013, JOURNAL OF INFORMETRICS, V7, P833

Hessels Laurens K., 2011, SCIENCE AND PUBLIC POLICY, V38, P555

Leydesdorff Loet, 2013, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, V64, P2573

Turner S, 2000, Practising Interdisciplinarity, P46

Steele TW, 2000, JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE, V51, P476

Hollingsworth R, 2000, Practising Interdisciplinarity, P215

Llerena P, 2004, Science and Innovation Rethinking the Rationales for Funding and Governance, P69

Molas-Gallart J, 2014, Journal of Science Policy and Research Management, V29, P69

Porter Alan L., 2009, SCIENTOMETRICS, V81, P719

Frenken Koen, 2009, JOURNAL OF INFORMETRICS, V3, P222

Uzzi Brian, 2013, SCIENCE, V342, P468

Karim Salim S. Abdool, 2011, NATURE, V474, P29

van Rijnsoever Frank J., 2011, RESEARCH POLICY, V40, P463

Hoffman E, 1999, Letters of transit Reflections on exile, identity, language and loss,

Phillips Nicola, 2009, REVIEW OF INTERNATIONAL POLITICAL ECONOMY, V16, P85

Katz J., 1997, Research Policy, V16, P1

Laudel Grit, 2006, RESEARCH EVALUATION, V15, P2

Willmott Hugh, 2011, ORGANIZATION, V18, P447

van Eck Nees Jan, 2013, PLOS ONE, V8,

Rafols Ismael, 2012, RESEARCH POLICY, V41, P1262

Klein Julie T., 2008, AMERICAN JOURNAL OF PREVENTIVE MEDICINE, V35, PS116

Bruhn JG, 1995, INTEGRATIVE PHYSIOLOGICAL AND BEHAVIORAL SCIENCE, V30, P331

National Academies of Science, 2004, Facilitating Interdisciplinary Research,

Braun T, 2003, SCIENTOMETRICS, V58, P183

Carayol N, 2005, RESEARCH EVALUATION8th International Conference on Science and Technology Indicators, SEP 23-25, 2004, Leiden, NETHERLANDS, V14, P70

Waltman Ludo, 2011, SCIENTOMETRICS, V87, P467

NARIN F, 1991, SCIENTOMETRICSINTERNATIONAL CONF ON OUTPUT INDICATORS FOR EVALUATION OF THE IMPACT OF EUROPEAN COMMUNITY RESEARCH PROGRAM, JUN 14-15, 1990, PARIS, FRANCE, V21, P313

Rafols Ismael, 2007, INNOVATION-THE EUROPEAN JOURNAL OF SOCIAL SCIENCE RESEARCHConference on Converging Science and Technologies – Research Trajectories and Institutional Settings, MAY 14-15, 2007, Vienna, AUSTRIA, V20, P395

Rafols Ismael, 2010, SCIENTOMETRICS, V82, P263

Barry Andrew, 2008, ECONOMY AND SOCIETY, V37, P20

Stirling A, 1998, SPRU Electronic Working Papers, P28

Bruce A, 2004, FUTURES, V36, P457

Salter A, 2002, IPTS Report, P66

Chavarro Diego, 2014, RESEARCH EVALUATION, V23, P195

 

Example 3 (still not in a specialised journal) is written by academics working in the bibliometrics/scientometrics/infometrics field (check the affiliations), and the differences are clear – a quick check of the references confirms that it engages with that field. In other words, it meets one of the minimum standards for published academic work that our peer review processes are supposed to enforce. As should be obvious from the above comparison, any academic working in the bibliometrics/scientometrics/infometrics field would immediately see that Example 1 and Example 2 do not engage the field, which raises the question: how did these articles make it through the peer review process?

I am not saying that academics in other fields have nothing useful to offer on the subject of bibliometrics/scientometrics/infometrics, and indeed, given how some of the ideas from the discipline permeate their professional lives, academics would do well to be across some of the basic concepts. But imagine if the current situation were reversed – an academic working in the field of bibliometrics downloaded some easily accessed data on cancer outcomes and wrote an article titled something like ‘Radiation, Surgery or Chemotherapy? Effectiveness of treatment for patient outcomes’. Not only that, but imagine that the article contains no references to the field of radiation oncology…and then it gets submitted, peer reviewed and published in a bibliometrics journal! It makes no sense at all.

In my experience, proprietary citation data are complex and require huge amounts of cleaning and curating, and the data that come from front-end products like Web of Science (WoS) and Scopus look nothing like the custom data solutions that funding councils and groups such as CWTS, iFQ and Science-Metrix work with on a regular basis in research evaluation, policy development and research. A quick look at the Scopus Custom Data Documentation surely illustrates that we should be more thoughtful than to simply download some data from Scopus and get on with the analysis.

The problem is hopefully obvious, but the reasons are not. Why is it that when it comes to the specialised discipline of bibliometrics/scientometrics/infometrics, seemingly any academic thinks that they can do it and academic rigour does not apply?

One of the reasons for the above situation is that products like Scopus and WoS have been aggressively marketed as easy solutions to the complex problem of research management. They provide push-button answers to what are, effectively, issues of public policy and industrial relations. Push-button solutions to other policy issues such as economic inequality, aging populations or migration would no doubt likewise find a welcome market.

Partly, it is the fault of those of us working in bibliometrics/scientometrics/infometrics and research evaluation, who perhaps should have foreseen these consequences and policed the use of citation data better. As Hicks et al. recount,

As scientometricians, social scientists and research administrators, we have watched with increasing alarm the pervasive misapplication of indicators to the evaluation of scientific performance.

But again, this is only part of the explanation – as Hicks et al. also point out, it is impractical to think that we can be in the room every time there is a discussion about research evaluation within a university, or every time an academic from outside of the field mentions impact factors or h-indexes.

Which brings me back to my point – yet another part of the explanation must be that academics themselves are involved in perpetuating the current misuse of metrics such as ‘impact factors’, as Example 1 and Example 2 above illustrate.

This should be part of how we think about ‘the system’. Thinking about academics at the mercy of ‘the system’ is a one-way transaction in which academics are affected by ‘the system’, but in this account academics play no role in creating ‘the system’, perpetuating ‘the system’ or benefiting from ‘the system’. I accept that there are important aspects of university policy and administration that are outside the control of the average academic, like government research priorities, funding council rules, university budgets etc. I also accept that academic work is in many ways hostage to global commercial interests (big publishers, citation data providers etc.). But as in Example 1 and 2 above, where academic rigour is clearly compromised – in the name of an easy answer, a quick publication, the inherent competitiveness of academics…I don’t know what – and as recently outlined by Hicks et al., and as in a range of other aspects of academic work, academics’ practices sustain ‘the [current] system’. To improve ‘the system’, to make it more open, engaged and democratic, we must understand these complex interactions, accepting that how academics choose to practise academic work plays an important part. Then we have to agree to hold ourselves to a higher standard; then we can begin to change what we can change, and not rely on simplistic and disingenuous explanations of how ‘the system’ is broken.

The Social Function of Science

Throughout the course of the history of science the scientist individually has had to exist on sufferance; he worked, inevitably, for ignorant patrons who could not understand even what he was trying to do, and if they had they would have had little wish to further it. Now, with the scientists’ growth in numbers and importance, this attitude is no longer necessary and will soon be no longer possible. Scientists also recognize their weaknesses, a lack of contact, not so much with the seats of power as with the people who can be the real beneficiaries of science. When that contact is renewed and improved we can hope to have a world where science ceases to be a threat to mankind and becomes a guarantee for a better future.

  
J.D. Bernal The Social Function of Science (Preface to the second edition, ‘After twenty-five years’, 1964)