ORCID: giving new meaning to “open” research

At the beginning of Peer Review Week, Natasha Simons writes on ORCID – now an essential tool throughout academia.

Contact: Twitter @n_simons

Have you ever tried to search for the works of a particular author and found that there are hundreds of authors with the same name? Or found that your name has been misspelt on a publication, or that it is plain wrong because you changed your name when you got married (or divorced) a few years back? Well, you are not alone. Did you know that the top 100 surnames in China account for 84.77% of the population, or that 70% of Medline names are not unique? Researchers the world over need to receive credit where credit is due, and in solving this problem we can also improve the discoverability of their research. But a global problem needs a global solution. Enter ORCID – the Open Researcher and Contributor Identifier.

ORCID provides individual researchers and scholars with a persistent unique identifier which links a researcher with their works and professional activities – ensuring the work is recognised and discoverable. Sure, there are many other researcher identifiers out there but ORCID has the ability to reach across disciplines, research sectors, and national boundaries. ORCID distinguishes an individual researcher in a similar way to how a Digital Object Identifier (DOI) uniquely identifies a scholarly publication. It lasts for a lifetime and remains the same whether you move institutions, countries or (heaven forbid) change disciplines. If you’ve not seen one before, check out the ORCID for Nobel Prize laureate and Australian of the Year Peter C. Doherty.
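Incidentally, the iD itself carries a built-in safeguard against transcription errors: the final character is a checksum over the preceding fifteen digits, computed under ISO 7064 mod 11-2. A minimal sketch in Python (this is an illustration, not ORCID’s own code; the iD below is the registry’s well-known example record):

```python
# Sketch: validating an ORCID iD's final check character.
# ORCID iDs use the ISO 7064 mod 11-2 checksum; a total of 10 maps to 'X'.

def orcid_checksum(base_digits: str) -> str:
    """Compute the check character for the first 15 digits of an ORCID iD."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid_id: str) -> bool:
    """Check a hyphenated iD such as '0000-0002-1825-0097'."""
    digits = orcid_id.replace("-", "")
    if len(digits) != 16:
        return False
    return orcid_checksum(digits[:15]) == digits[15]

print(is_valid_orcid("0000-0002-1825-0097"))  # ORCID's example record → True
```

A single mistyped digit changes the checksum, so systems ingesting iDs can reject most typos before they pollute a record.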

ORCID works as a solution to name ambiguity because it is:

  • Widely used;
  • Embedded in the workflows of submission systems for publishers, funders and institutions;
  • The product of a global, collaborative effort;
  • Open, non-profit and researcher-driven.

There are over 300 ORCID members (organisations or groups of organisations) from every section of the international research community. Over 1.5 million ORCID identifiers for individual researchers have been issued since its launch in October 2012. In Australia, the key role of ORCID has been recognised in two Joint Statements and – as is the case in many other countries – plans for an ORCID Consortium are well underway.

From its very beginning, ORCID has embraced “open” – it is free for researchers to sign up, open to any interested organisation to join, releases its software under an Open Source licence, and provides a free public API. Institutions that wish to embed ORCID into their workflows are advised to join ORCID, and the membership fee (for service) in turn supports ORCID’s continued operation as a non-profit entity.

A key activity of ORCID at the moment is completing the metadata round trip. It sure doesn’t sound exciting but it is actually. Really! It works like this: when a researcher submits an article to a publisher, a dataset to a data centre, or a grant to a funder, they include their ORCID iD. When the work is published and the DOI assigned, information about the work is automatically connected to the researcher’s ORCID record. Other systems can query the ORCID registry and draw in that information. This will save researchers a lot of time currently spent updating multiple data systems, and ensures correct credit and discoverability of their research. See? Exciting, huh!
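For systems that want to draw in that information, ORCID’s free public API is the entry point. A hedged sketch in Python of building such a query against the public v3.0 works endpoint (the request is only constructed here, not sent; a real integration would add error handling and authentication where needed):

```python
# Sketch: querying the ORCID public registry for a researcher's works.
# Builds the HTTP request only; sending it requires network access.
import urllib.request

def works_request(orcid_id: str) -> urllib.request.Request:
    """Request for the public works summary of one ORCID record (v3.0 API)."""
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/works"
    return urllib.request.Request(url, headers={"Accept": "application/json"})

req = works_request("0000-0002-1825-0097")  # ORCID's example record
print(req.full_url)
# with urllib.request.urlopen(req) as resp:
#     works = json.load(resp)   # list of work summaries, DOIs included
```

This is the query side of the round trip: once a publisher has pushed the DOI-linked work to the researcher’s record, any downstream system can pull it from here instead of asking the researcher to re-key it.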

Another great thing ORCID is doing is Peer Review Week (28 September – 2 October), which grew out of informal conversations between ORCID, Sense about Science, ScienceOpen, and Wiley. The week highlights a collaborative effort in finding ways to build trust in peer review by making the process more transparent and giving credit for the peer review activity. ORCID have also been collaborating with Mozilla Science Lab, BioMed Central, Public Library of Science, The Wellcome Trust, and Digital Science, among others, to develop a prototype for assigning badges to individuals based on the contributor role vocabulary developed by Project CRediT earlier this year.

It’s great news that this year, for the first time ever, ORCID are officially joining the Open Access Week celebrations. OA Week runs from October 19-26, and ORCID’s goal is to sign up 10,000 new ORCID registrants and increase the number of connections between ORCID iDs and organisation iDs. They hope you can help! So go on, why not sign up for an ORCID iD now? You’ll be helping to ensure your scholarly work is discoverable and correctly attributed to you, and you’ll save time in the bargain.

About the author

Natasha Simons is a Research Data Management Specialist with the Australian National Data Service

Publish or perish culture encourages scientists to cut corners

Reposted from The Conversation, 23 Sept 2015
Last week there was another very public case of a journal article being retracted as a result of academic misconduct. This time it was in the Journal of the American Medical Association (JAMA), with the lead author – Dr Anna Ahimastos, working at Melbourne’s Baker IDI – reportedly admitting she fabricated data.

Sadly, the story is all too familiar. But this is not to say that science is imperilled, only that we need to ensure the reward and support structures in academia promote best practice rather than corner-cutting.

We have only recently begun looking closely at how the scientific literature could function better, and what can go wrong. And there are conflicting opinions on how to handle underlying problems.

Peer review is currently the primary tool we have for assessing papers prior to publication. Although it has its strengths, especially when overseen by skilled editors, it can’t pick up all instances of fraud or sloppy scientific practices.

In the past these errors may have lain hidden for many years, or never come to light. Now, post-publication scrutiny is picking up more and more papers with questionable data. This is leading to corrections, or even retractions. Websites such as Retraction Watch have sprung up to document these retractions.

Peerless research

To non-academics, this might all seem rather surprising. Isn’t science governed by strict protocols for performing and reporting research?

Well, no. Unlike industrial processes, for example, which have standard operating procedures and oversight, science is usually organised locally. Expert laboratory heads typically have the responsibility for the oversight of their laboratories’ work.

Many laboratories work as part of larger collaborations, which may have their own checks and balances in place, as do the academic institutions to which they belong. Even so, ultimately the researchers and individual laboratories are responsible for their own work.

The medical sciences have developed their own standards of reporting studies, including clinical trials. But even these standards are not employed universally.

The system of rewards within science is possibly even more perplexing. Academia is a highly competitive profession. The basic training in science is a PhD, with more than 6,000 awarded each year in Australia alone, which is many more than can ever end up as career researchers, even at the lowest level.

The situation gets worse the more senior a researcher gets. According to a 2013 discussion document, less than 5% of those who were originally awarded PhDs find permanent academic positions. Even these senior researchers rarely have permanent positions, but are instead expected to compete for funding every few years.

And the primary way academics compete is in the number of papers they publish in peer reviewed journals, especially the handful of what are considered to be top journals, such as Science, Nature and The Lancet.

Under pressure

Why does this all matter? Doesn’t this competition lead to selection of the best of the best in research and a faster pace of advancement of science? In fact, the reverse may be the case.

In a seminal paper published in 2005, provocatively titled Why Most Published Research Findings are False, John Ioannidis discussed a number of reasons why research may be unreliable. One finding was that papers in highly competitive areas were more likely to be false than papers in less competitive fields.

In 2014, the Nuffield Council on Bioethics probed these issues among UK researchers in a year long study. What they found was alarming.

Researchers stated that there was strong pressure on them to publish in a limited number of top journals, “resulting in important research not being published, disincentives for multidisciplinary research, authorship issues, and a lack of recognition for non-article research outputs”. Even worse was that the need to get into these top journals led to “scientists feeling tempted or under pressure to compromise on research integrity and standards”.

What can be done? Increasingly, groups of scientists are coming together to develop standards in reporting, conduct and reproducibility. Organisations such as the Committee on Publication Ethics (COPE), which I chair, advise editors on how to handle problem papers.

Perhaps most interestingly, a number of technological innovations have arisen that could lead to more reliable science, if adopted widely. Probably the most important innovation is that of Open Science, i.e. open access to research publications, and open access to the data and methodology that underpins those publications.

But we also need to develop ways to reward scientists who do make their publications, data and methodology open for scrutiny, and don’t just pursue publication in top journals.

Research data organisations, such as the Australian National Data Service (ANDS), are developing the infrastructure for systematic and standardised ways of linking to data, but as yet funders and institutions do not routinely reward such behaviour.

In the end, science is a human endeavour. And like humans everywhere, those who work in it will do what they are rewarded for, for better or for worse. So we need to make sure those reward structures are encouraging good quality research, not the opposite.

The Conversation

Virginia Barbour, Executive Officer, Australasian Open Access Support Group, Australian National University

This article was originally published on The Conversation. Read the original article.

AOASG Submission to the Review of Research Policy and Funding Arrangements for Higher Education

September 2015


The Australasian Open Access Support Group (AOASG) was established in 2013 by nine Australian universities that were committed to ensuring that the research outputs of Australia were made available openly, with the ultimate aim of optimising the return on investment in research done in Australian universities.

Over the past three years the AOASG has worked to assist researchers, funders, research organisations and libraries by sharing knowledge about and assisting in building capacity for Open Access.

The AOASG now includes all eight New Zealand universities as well as nine Australian universities.

AOASG welcomes the Review of Research Policy and Funding Arrangements for Higher Education, in particular the focus on increasing the impact of the significant investment made by the Commonwealth Government in universities.

Comments from the AOASG follow in relation to Sections 1, 2 and 4 of the issues paper.


Section 1

To achieve industry impact and enable commercialisation, the research outcomes from universities need to be easily discoverable, free online with clear reuse rights, and with linked access to the data that underpins the research.

Recent years have seen the government launch some important initiatives to assist the availability of research outputs. AOASG welcomes the policies of the Australian Research Council (ARC) and the National Health and Medical Research Council (NHMRC), which recommend Open Access for research outputs from research funded by the Councils. We note that compliance is, as yet, unmeasured.

The National Collaborative Research Infrastructure Strategy funded projects (e.g. ANDS) have significantly improved the infrastructure for access to data from research, although much remains to be done in order for data to consistently be made open. We note that research from Houghton and Gruen finds that:

“Conservatively, we estimate that the value of data in Australia’s public research to be at least $1.9 billion and possibly up to $6 billion a year at current levels of expenditure and activity. Research data curation and sharing might be worth at least $1.8 billion and possibly up to $5.5 billion a year, of which perhaps $1.4 billion to $4.9 billion annually is yet to be realized. Hence, any policy around publicly-funded research data should aim to realise as much of this unrealised value as practicable [1]”

There will be genuine financial benefit from making research outputs, including data, available on an open access basis.

Research [2] into the use of research by industry and business associations has found that major barriers include a lack of access to research findings and data. These barriers are compounded by inadequate discoverability of the research. Furthermore, there is as yet inconsistent linking of research publications and data to unique identifiers for researchers (such as ORCID).

Australian Government policy is underdeveloped in this area, leading to inconsistent practices, limited availability of funded research outputs and sub-optimal industry impact.

Figure 1. The percentage of Australasian institutions with a mandated deposit for all research publications, 2010–2014. There was a marked increase in the number of universities mandating deposit for all types of research publications, with the percentage rising from 16% in 2013 to 37% in 2014. [3]

In conclusion, factors impeding commercialisation of the research output of Australia’s universities (1.4.1), and barriers to improving research-industry collaboration (1.4.2) include:

  • a lack of policy frameworks to ensure research is published openly and is thus available
  • a lack of an overarching national discovery mechanism – for example the ability to search seamlessly across all research outputs, including published work and that deposited in repositories.
  • a lack of a service that would support industry awareness of new research (such as the SHARE initiative in the US) [4].
  • a lack of an entity responsible for the collaboration required between policy makers, such as the ARC and NHMRC, academic institutions, and those who can provide technology and infrastructure, such as Intersect and the National Library of Australia.

Section 2 and 4

One reliable measure of the use of research is the number of citations to that work in other publications. This is an important metric for researchers. There is substantial evidence that making research available via Open Access publications increases citations [5], i.e. researchers can improve their citation rate in this way. However, it is important that such citations are measured at the article, not the journal, level and are part of an overall programme of considered impact evaluation.[6]

In order to optimise accessibility, use and reuse of research outputs by industry (2.3.5), we recommend that award assessment and impact measurement (4.3.6) should include identification of and metrics for:

  • research outputs – number that have been made openly accessible and information on their use and reuse
  • research datasets – number that have been made openly accessible and information on their use and reuse
  • grant acquittals and reporting – these should include a requirement that outputs are reported in terms of those that are freely or openly accessible (and by what method) and those that are behind a paywall

On the question of what universities can do to enhance collaboration (4.3.3), we believe that making universities’ research more discoverable via Open Access, either via universities’ repositories or through fully Open Access journals, is a crucial first step. There is substantial evidence that industry – especially small and medium sized enterprises (SMEs) [2] – cannot routinely and affordably access the research it needs.

However, an important overarching barrier to improving accessibility, and hence translation, of both research publications and the associated data is financial – i.e. the cost of Open Access publication, repository infrastructure and data curation.

We believe that full funding for the dissemination of results in the relevant grants, with specific line items associated with such funding, would substantially improve the use, reuse, impact and translation of Australian research.

1. Houghton, J., Gruen, N. (2014) Open Research Data Report to the Australian National Data Service (ANDS). November 2014  http://ands.org.au/resource/open-research-data-report.pdf

2. Houghton, Swan and Brown Access to Research and Technical Information in Denmark http://www.deff.dk/uploads/media/Access_to_Research_and_Technical_Information_in_Denmark.pdf

3. Council of Australian University Librarians (2014) 2014 Research Publications Repository Survey Report http://www.caul.edu.au/content/upload/files/surveys/repositories2014public.pdf

4. The Association of Research Libraries (ARL), the Association of American Universities (AAU), and the Association of Public and Land-grant Universities (APLU) jointly launched the SHARE initiative in 2013. http://www.share-research.org/

5. SPARC Europe The Open Access Citation Advantage Service http://sparceurope.org/oaca/

6. The Leiden Manifesto for Research Metrics http://www.leidenmanifesto.org/

Open data is good, because …

Belinda Weaver writes on the many benefits of open data

Contact: Twitter @cloudaus

Why should we advocate for open data? What benefits does it bring?

Transparency, for one.

Governments do stuff. We don’t always like it. It helps if we have the data to back up our objections. The Guardian datablog publishes and visualises a lot of information which helps pierce the opacity around government. Two examples – the costs of the post-GFC UK bank bailout and where UK spending actually goes. Both do a great job of communicating a message, and the data can be downloaded and reused.

The ABC’s FactCheck service is one way Australians can check on what governments are saying.

Open data makes things more efficient.

Disasters happen. In the relief phase, open data helps agencies get the information they need to direct operations on the ground. It helps governments get the plans and details of the infrastructure they need to fix. The New Zealand response to the Christchurch earthquake is a case in point. Crisis.net is a global source of information that helps make disaster response quicker and more efficient.

Open data helps join things up.

Cities are complex beasts, and making things work in sync requires a lot of planning and coordination. Plenar.io provides a platform for all kinds of data – transport, air quality – to be stored, interrogated and overlaid. Chicago and San Francisco in the US and Glasgow and Newcastle in the UK have all implemented Plenario for city data. Data exists on a single map and a single timeline, making it easy to access multiple datasets at once, even those originally housed at different data portals.

Open data democratises access.

Codex Sinaiticus, the Christian Bible in Greek, was handwritten more than 1,600 years ago and is the oldest substantial book to survive antiquity. The manuscript contains the oldest complete copy of the New Testament. Its text is of outstanding importance for the history of the Bible, and the manuscript is of supreme importance for the history of the book. The manuscript is held in four locations – London, St Petersburg, Sinai and Leipzig. Now that the item has been fully digitised, scholars from anywhere can work on it. What was once accessible only to a privileged few is now open to all.

Open data enables new businesses.

The Go Brisbane app allows users to save favourite journeys and view timetables for them very quickly. This beats using official transport websites where getting the same information takes a whole lot longer. Open mapping information has created a range of new businesses – travel, holidays, restaurant guides, walking tours, direction finders … the possibilities are endless.

Open data saves lives.

‘Dr Internet’ is blamed for many false diagnoses, but it can also foster real ones. As more and more medical information becomes freely available, patients can investigate their problems and possibly find some answers, as this story shows.

Have you got a good open data story? Share it here.

About the author

Belinda Weaver is eResearch Analyst Team Leader, Queensland Cyber Infrastructure Foundation.

Going beyond the published article: how Open Access is just a start

Alex Holcombe argues that if academics learn how to code and post their code, replication can become routine instead of a heroic, painstaking effort.

The post is especially timely following the publication last week of a paper in Science documenting the difficulty of replicating published psychology studies.

Contact: alex.holcombe@sydney.edu.au or Twitter @ceptional

A published article is only a summary description of the work that was done and also only a summary of the data that was collected. Making the published article open access is important, but is only a start towards opening up the research described by the article.

Openness is fundamental to science, because scientific results should be verifiable. For each result, at least one of two possible verification approaches ought to be made viable. One is scrutinizing every step of the research process that yielded the new finding, looking for errors or misleading conclusions. The second approach is to repeat the research.

The two approaches are linked, in that both require full disclosure of the research process. The research can only be judged error-free if every step of the research is available to be scrutinized. This information is also what’s needed to repeat the research. I will use the term reproducible research for this high standard of full publication that we should be aspiring to.

Figure 1 Via the Center for Open Science, a community of researchers (including myself) have developed badges to indicate that the components of a scientific report needed for reproducibility are available. The badges can then be awarded to qualified papers by journals.

Unfortunately, the explicitness that would be required for exact reproduction is far higher than the norm of what is typically published. This is true even for the most influential studies. I come across this problem regularly in my role as “Registered Replication Report” co-editor for the journal Perspectives on Psychological Science. As editor, I supervise the replication of important experiments, and this often requires extensive work in re-developing experiment software and analysis code on the basis of the summaries typical in journal articles.

In experimental psychology, the main steps of doing an experiment are presenting the stimuli to the participant, collecting the responses, doing statistical analysis of the responses, and creating the plots. If one were to write out every step involved in these, it would take a very long time.

It would certainly take more time than most academics have. Academics today are under tremendous pressure to generate new findings for their next paper, and under little or no pressure to document their processes meticulously.

Funder mandates and incentives have been pushing researchers towards making their science more reproducible. This, accompanied by cultural change at the grassroots and at the level of journal editors, is making substantial headway. Efforts at each of these levels reinforce each other in a virtuous circle.

Figure 2 A virtuous circle of action at multiple levels is needed to achieve full reproducibility. “Open science” is a closely related concept.

One facet of the virtuous circle that often goes unappreciated is automation. Automation requires advances in technology, and in science these advances are often achieved by researchers and programmers contributing open source code. Automation has many benefits of its own, allowing it to progress independently of reproducibility incentives and culture.

Automation has of course transformed many industries previously, from the making of telephone calls (no more switchboard operators) to the making of cars (robots do much of the assembly).

Figure 3 An early printing press. While the actual printing is here done by machine, humans must guide the machine through hundreds of steps. Unfortunately, this is reminiscent of how much of laboratory science is done today.

But rather than being a large-scale production system, science is more like a craft. In experimental psychology, each laboratory is doing its own little study, often running experiments significantly different from those the same lab did a year ago. From one project to the next, the steps can change. If a researcher is doing an experiment or data analysis procedure that they may never have to repeat, there is little incentive to automate it.

It is almost always true, however, that aspects of one’s data analysis will need to be repeated. I refer not only to the need to repeat the analysis for future projects, but also to what one must do to satisfy reviewers. After submission of one’s manuscript to a journal, it tends to come back with peer reviewer complaints about the amount of data (not enough) or the way the data was analysed (not quite right).

This is where I was first truly pleased by having automated my processes – when to satisfy the reviewers, I only needed to change a few lines in my analysis code. Following that, simply clicking “run” in my analysis program resulted in all the relevant numbers and statistics being calculated and all the plots of results being re-done and saved in a tidy folder.
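The shape of such an automated pipeline varies by lab, but it is roughly this (a minimal Python sketch with invented data and file names; a real script would load the lab’s own data and add the plotting steps):

```python
# Sketch: a single-entry-point analysis script. Re-running it after the
# data or an analysis choice changes regenerates every output in one step.
import csv
import statistics
from pathlib import Path

def run_analysis(reaction_times, out_dir="results"):
    """Compute summary statistics and write them to a tidy output folder."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    summary = {
        "n": len(reaction_times),
        "mean": statistics.mean(reaction_times),
        "sd": statistics.stdev(reaction_times),
    }
    # One file per output keeps the folder self-describing; plots would be
    # saved alongside (e.g. out / "figure1.png") in a fuller pipeline.
    with open(out / "summary.csv", "w", newline="") as f:
        csv.writer(f).writerows(summary.items())
    return summary

# Invented example data: reaction times in milliseconds.
result = run_analysis([312, 298, 345, 301, 330])
```

The point is that when a reviewer asks for a different exclusion rule or an extra condition, the change is a few lines in one place, and everything downstream regenerates itself.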

Unfortunately, most researchers never learn the skills needed to automate their data analysis or any other process, at least in psychology. Usually it involves programming. For data analysis, the best language to learn is R or Python.

R has gradually become easier and easier to use, but for those without much programming experience, an intensive effort is still required to become comfortable with it. Python is more elegant, but doesn’t have as much easily-usable statistical code available.

I have begun organising R programming classes for postgraduates at the University of Sydney – here is a description of the first one. I have two main reasons for doing this. Foremost is to empower students, both with the ability to automate their data analyses and with programming skills – useful for a range of endeavours. Second is to make research reproducible, which can only happen if the new generation of scientists are able to automate their processes.

A fantastic organisation of volunteers called Software Carpentry teaches researchers to program. Two junior researchers at the University of Sydney this year completed the Software Carpentry instructor training program – Fabian Held and Westa Domenova. With Fabian and Westa as the instructors, a 2-day Software Carpentry bootcamp is being planned for February 2016, as part of the ResBaz multi-site research training conference.

Unfortunately, formal postgraduate classes have been sorely lacking at nearly every university I have been associated with. And at the University of Sydney as well, we don’t have the financial resources to set up a class with a properly paid instructor. Fortunately, Software Carpentry provides a fantastic low-cost, volunteer-based way to disseminate programming skills. While it would be hard to find volunteer instructors for most types of classes, something about programming seems to bring out the best in people – just have a look at the amazing range of software resources created by the open-source community.

I like to think I am helping create a future where reproducing the research in published psychology articles will not require extensive software development or many manual steps that must be reverse-engineered from a few paragraphs of description in the article. For the data analysis component of projects, if not the actual experiments, one ought to be able to download and run code that is published along with the paper. Aside from being good for the world, that would make my job editing Registered Replication Reports a lot easier.

About the Author 

Alex Holcombe is Associate Professor of Psychology and Co-director, Centre for Time, University of Sydney and  Associate Editor, Perspectives on Psychological Science

In an open world, what value do publishers add to research?

Jack Nunn, a consultant in public involvement in research, who works as a researcher in the Public Health Department at La Trobe University, reflects on how publishing could be different.

Contact: Jack.Nunn@latrobe.edu.au or Twitter: @jacknunn

Using only camembert, smoked salmon and controlled laboratory conditions, I had a revelation about the relationship between researchers, publishers and the public. This is the story.

I was in one of the world’s leading laboratories being given a tour of a potentially hazardous area, when suddenly the PA barked ‘ATTENTION ALL STAFF, ATTENTION ALL STAFF’. I was ready for the worst, to evacuate or suit up. But why was I there at all?

I’d spoken earlier that day about public involvement in research and publishing at an event at the Walter and Eliza Hall Research Institute.

It was organised and paid for by the open access publisher Biomed Central, in order to raise awareness about their work.

The publisher recently asked if I would volunteer my time to be a member of the editorial board of the new journal ‘Research Involvement and Engagement’ and also speak at their events in Australia. It is a new journal being run on a not-for-profit model and BioMed Central are world-leaders in open access publishing, so it was exciting to accept.

I found myself plunged into the mysterious, intriguing and often self-perpetuating world of publishing.

The speech I made essentially asked the question ‘What value do publishers add to research, and therefore to the public good?’ This is a different question from how valuable publishing is – to which the answer is ‘very’.

Publishers make lots of money from publishing research, including open access research. In other words, I sought an answer to the question: ‘What are publishers giving back to the research process, in return for the money they take?’

I also asked how the public could be supported to be more involved in every stage of the research cycle, including publishing and dissemination. I ended with my usual plug for Tim Berners-Lee’s eye-opening TED talk about open and linked data, which describes how everyone can access and interpret data – the very embodiment of public involvement in research.

In conclusion, I said I think publishers have a crucial role in science, and posed a series of questions reflecting on why publishers exist as they do – much as one may ponder ‘Why do we have a Royal Family?’ in a neutral and balanced way.

After I spoke, I met interesting people around a delicious buffet of cheeses and smoked salmon and then was fortunate enough to be given a tour of the research institute.

Within half an hour I’d met world-leading cancer researchers and people developing potential malaria vaccines, and seen other labs full of people working late, missing out on time with friends and family in order to do countless wonderful things in the name of research.

As it was a working lab, naturally there were exciting things like negative pressure rooms and gene-sequencers – but also the reminders you were somewhere potentially dangerous, with ‘biohazard’ signs and emergency eyewash and showers at every corner.

Suddenly the PA system barked out ‘ATTENTION ALL STAFF, ATTENTION ALL STAFF’.

They had my attention too. I was ready to evacuate, or go on a three-day lock-down to hunt for an escaped malaria-carrying mosquito.

The announcement continued: the leftover catering was available upstairs, free for anyone to finish.
I laughed, half in relief – but on reflection, there was nothing that funny about it. The food was from the BioMed Central event I had spoken at.

Naturally, no one wants food to go to waste – but the funny side wore off when I saw researchers head upstairs to eat leftovers from an event which, like many awareness-raising events, was partly funded by open access fees. These are often paid by research institutions (and thus, indirectly, by taxes or charitable donations) to publishers to cover the costs of making research available without a ‘paywall’.

However, many publishers also spend significant amounts of money to attract researchers to publish with them. Naturally it’s more complicated than this, but a simple thought struck me and I daydreamed…

I daydreamed of a world where researchers doing life-saving work had publishers eating their leftovers, at events hosted by researchers. Events where researchers allowed potential publishers to apply for the privilege of publishing them – and researchers decided whom they would allow to publish their important research.

I imagined what would happen if all researchers collectively and suddenly decided they didn’t want to submit to ‘for-profit’ publishers because they felt reputations and impact factors were suddenly irrelevant in a digital age, thus disrupting any business model based on prestige.

Would less money go to publishers and more stay within research institutions for research? Would a sea of poor quality research drown good research with no one paid to check it, or would publishing just happen faster, like publishing this blog – the reviewing stage happening afterwards, in the open, in public?

It was a wild daydream and I blame the cheese.

So, if you ever feel you are not worthy to eat the leftovers and crumbs of others, always ask ‘whose table is it?’

In research, the table is for everyone, and we should all be invited to sit at it as equals.

We just need to figure out who is bringing the cheese and smoked salmon.

About the Author 

Jack has led the development and implementation of an internationally recognised model for building partnerships between the public and researchers. He has worked for Government, leading charities and universities, including the UK’s National Institute for Health Research and Macmillan Cancer Support. He has partnered with the World Health Organisation, the Cochrane Collaboration and community organisations across the UK, Europe, Australia and Asia.

Full disclosure: I receive no money for the time I volunteer with BioMed Central. I did, however, eat more than my fair share of cheese at one of their events. 

This is the first in a series of blogs which we hope will provoke and inform debate on issues in Open Scholarship across Australasia. If you’d like to write for us, please get in touch: eo@aoasg.org.au

Australian Open Access Support Group expands to become Australasian Open Access Support Group

Kiwi Open Access Logo by the University of Auckland, Libraries and Learning Services is licensed under a Creative Commons Attribution 3.0 Unported License.

We are delighted to announce that the Australian Open Access Support Group, formed in October 2013, has become the Australasian Open Access Support Group less than two years later, with all the New Zealand university libraries joining through their umbrella organisation, the Council of New Zealand University Librarians (CONZUL).

Open Access, and the wider Open Scholarship movement, are currently among the most rapidly moving and hotly debated topics in academia, both globally and in Australasia. In recent months alone there have been several conferences in Australia on these topics (with more to come), an Australian Government response indicating a possible whole-of-government policy on Open Access, and comment in the media.

It is important therefore that Open Access discussions include a wide variety of perspectives from across the region and we look forward to working with the New Zealand institutions on future initiatives.

For further information, please contact Dr Virginia Barbour, Executive Officer AOASG. eoATaoasg.org.au

What to believe in the new world of open access publishing

Virginia Barbour, Australian National University Executive Officer, AOASG

It’s never been easy for readers to know what to believe in academic research. The entire history of science publishing has been riddled with controversy and debate from its very beginning, when Thomas Hobbes and Robert Boyle argued over the scientific method itself in the early days of the Royal Society in London.

Even a cursory glance at academic publishing since then shows articles contradicting each other’s findings, papers subsequently shown to contain half-truths (even in the serious matter of clinical trials) and yet more that are simply fabricated. Shaky and controversial results have been a part of science since it began to be documented.

Enter a new apparent villain – “predatory open access” publishing, now claimed by some to be overwhelming the literature with questionable research. As highlighted in the recent documentary on Radio National, and subsequently discussed in The Conversation, there has been a proliferation of dodgy new journals and publishers who call themselves “open access” and who eagerly court academics to be editorial board members, to submit their articles and to attend and speak at conferences.

These activities have led to concern over whether any open access publications can be trusted. Librarians in institutions in Australia and elsewhere attempt to keep abreast of all these “predatory” journals and publishers.

In a more positive endeavour, an organisation of legitimate open access publishers (OASPA) has come together and they and other journal associations and the Directory of Open Access Journals have produced ways to assess journals.

Academic publishing has changed since the advent of the internet.

Although the extent of the problem is not known (and may even be exaggerated by ever-expanding blacklists), some academics still submit to questionable journals, newspapers give publicity to bizarre articles from them, and non-academic readers rightly wonder what on earth is going on.

It’s worth remembering how new this all is. Whereas scholarly publishing is 350 years old, it is only 25 years since the web began; academic online publishing followed about 20 years ago. Open access – a part of the wider open scholarship movement (which seeks to enhance integrity and good scholarship) – is barely 15 years old.

What we are witnessing is the oft-repeated story of what happens when any new technology appears. Alongside an explosion of opportunities for good, there will always be those that seek to exploit, such as these predatory publishers.

But just as no one ever assumed that everything in print was trustworthy, neither should that be the case for open access content. And in the end the content is what matters – whether delivered by open access, subscription publishing, or a printed document.

To complicate matters further, alongside this revolution in access, the academic literature itself is evolving apace with papers being put online before review and revisions of papers made available with peer review histories alongside.

Even the format of the academic paper is changing. Datasets or single figures with little explanation attached to them can now be published. The concept of an academic paper as a definitive statement of “truth” is finally being laid to rest.

It was never a realistic concept and arguably has led to much confusion about the nature of truth, especially in science. Science evolves incrementally. Each finding builds on evidence from before, some of which will stand up to scrutiny via replication, and some not.

As the amount of information available increases exponentially, the challenge for everyone is to learn how to filter and assess the information presented, wherever it is published.

For scientists, one way of deciding how important an article is has traditionally been which journal it has been published in. However, even prestigious journals publish work that is unreliable. Hence there are initiatives such as the San Francisco Declaration on Research Assessment which discourages judging papers only by where they are published.

For non-academic readers, understanding what to trust is even more challenging. Whether the article has been peer-reviewed is a good starting point.

Most important of all perhaps is the need for a modicum of common sense – the type of judgement we apply every day to claims about items in our daily lives: can I see the whole paper or am I just seeing an excerpt? How big was the study being reported? Do the claims seem sensible? Is the result backed up by other things I have read? And what do other experts in this area think of the research?

The Conversation

Virginia Barbour is Executive Officer, Australasian Open Access Support Group at Australian National University.

This article was originally published on The Conversation. Read the original article.

AOASG Response to Australian Government Paper “Vision for a Science Nation”

Earlier this year the Australian Government responded to the Chief Scientist’s paper, Science, Technology, Engineering and Mathematics: Australia’s Future, which was published in September 2014. The Australian Government’s response was entitled Vision for a Science Nation, and responses to it were invited.

The AOASG prepared a response, which specifically focusses on discussions around Open Access to the research literature. The response is available below. If you would like a copy of the response or have feedback, please contact us eo@aoasg.org.au

Australasian Open Access Support Group Response to:

Vision for a Science Nation – Responding to Science, Technology, Engineering and Mathematics: Australia’s Future

July 2015

The AOASG is encouraged that both this paper and the Chief Scientist’s recommendations include reference to the importance of access to research. Professor Chubb divided his report into four sections:

  • Building competitiveness
  • Supporting high quality education and training
  • Maximising research potential
  • Strengthening international engagement

He made one specific recommendation with respect to open access, under the Maximising research potential section where he recommended the government should:

“Enhance dissemination of Australian STEM research by expanding open access policies and improving the supporting infrastructure.”

In addition, he referenced the need for IP regimes to support open access under the Building competitiveness section, where he noted the need to:

“Support the translation and commercialisation of STEM discoveries through … a modern and flexible IP framework that embraces a range of capabilities from open access regimes to smart and agile use of patent and technology transfer strategies.”

We note that in its response the Government indicated two areas where it would increase access to research:

Australian competitiveness

“The Government is implementing a strategy to improve the translation of research into commercial outcomes by…

developing a plan to provide business with greater online access to publicly funded research and researchers’ expertise;”


“Enhancing dissemination of Australian research

Australia’s research councils and some Government science agencies have arrangements in place to ensure wide access to research publications arising from the research they fund or conduct. There is no comprehensive policy covering all publicly funded research.

The Government will develop a policy to ensure that more publicly-funded research findings are shared openly and available to be used commercially or in other ways that will bring the greatest benefit to Australians.”

AOASG comments

These recommendations and the responses come at a crucial time for developments in research publishing and access policies globally, amid a vigorous ongoing international debate.

Scholarship is at a crossroads. The outputs of publicly and privately funded research are often locked behind paywalls, preventing new research opportunities for those without access to libraries with large budgets and excluding those in developing countries from the publicly funded knowledge produced as a result of government research funding.

The UK model of Gold Open Access is unfundable and unsustainable. Studies by the Wellcome Trust [1] and RCUK [2] show that RCUK spent more than £15 million in 2013/14 on the costs of Gold Open Access publishing, with a large proportion (and the highest article processing charges) going to “hybrid” Open Access – i.e. payment to traditional publishers for single articles within a subscription journal. Despite such models, most of the world’s research remains inaccessible, as current models reward publishers for limiting access to research.

There are models that Australia could use to increase access to research. Science Europe’s Social Sciences Committee Opinion Paper “The Need for Diamond Engagement around Open Access to High Quality Research Output” [3] highlights the need for partnership between policy makers and publishers to facilitate deposit in repositories; standardisation and interoperability of research information metadata; and building on the infrastructures and networks already in place. Other models are possible and are being tried. For example, Knowledge Unlatched [4] is a completely different open access book publishing model, which uses library purchases to pay for the first copy to be published; the book is subsequently made available open access to all. This model was developed to ensure valuable scholarly works continue to be published in an environment where commercial publishers’ sales targets, and not academic merit alone, can be a significant factor in the decision as to whether a scholarly monograph is published.

Increasing access to research has benefits across all of Australian society and potentially can provide value in all of the areas highlighted by the Chief Scientist – competitiveness, high quality education and training, research and international engagement.

In order to have the maximum effect on all these areas, the Government needs to adopt principles as it seeks to develop a policy on Open Access for Australia.

  1. Open Access must be implemented flexibly. It is becoming clear that there will be no one single solution for Open Access, but rather it will need a number of different models within an environment where the default is “Open”. What is currently lacking however is sufficient funding to develop new experiments and support innovative solutions. The Government should encourage and make available financial support for the development of multiple solutions, through funded experiments where needed and support for functioning, already established solutions. Examples of experiments include Knowledge Unlatched [4] for the publication of books in the humanities and SCOAP3 for Particle physics [5].
  2. Green Open Access, providing access via university repositories, is currently the most well-established mechanism for providing access to the diverse outputs of Australian universities. Investment in repositories currently comes through individual universities, delivering a fragmented landscape without a cohesive infrastructure and resulting in delays in implementing, for example, technologies for different metrics to provide information on impact. Repositories need to be able to innovate and develop within an international and national environment. They should be part of the research infrastructure roadmap, and a national project and program is required. It needs to link to international work such as that of COAR [6].
  3. There may also be a case for supporting Gold Open Access publishing in journals via article processing charges (APCs) in some circumstances, especially from innovative, not-for-profit or society publishers. However, universities currently have little ability to support APCs, given their existing commitment to the payment of journal subscriptions.
  4. Any policy on Open Access should not be aimed at providing access to just one sector (e.g. science or business). Open Access to Australian research outputs, including older research material in collections, is also a key component of improving education and engagement in science in Australia, and any policy therefore should aim to increase access across all of Australian society. In addition, increasing global access to research from Australia plays a role in international engagement.
  5. Reuse and machine readability of Open Access work are critical to maximising its usefulness. Currently, many works labelled “Open Access” are in fact only free to read: they do not have an associated licence that enshrines the right to reuse, mine and build on the work – and may only be free after an embargo period. The Government should build on the work of its existing licensing framework, AusGOAL [7], and encourage the development of policies across the university sector that require all work to be licensed, under Creative Commons licensing [8], in a way that enables reuse. We believe this fits the Chief Scientist’s recommendation for “a modern and flexible IP framework”.
  6. Lack of interoperability and as yet patchy uptake of some infrastructure initiatives are holding back Open Access development. The Government should support the development and implementation of standards and interoperability initiatives in key areas such as the exchange of data within and between national and international repository networks (currently being led internationally by COAR [6]), the facilitation of deposit of articles in repositories, and essential infrastructure such as the uptake of ORCID [9] identifiers for researchers. It is also important that any Open Access policy is developed in conjunction with current initiatives on open data publishing.
  7. The role of supporting Early Career Researchers in particular needs consideration and development. Mechanisms to support these researchers are required to enable maximum benefit for the future of Australian research.
  8. Any development in Open Access should also be considered in parallel with ongoing developments, such as those on metrics and incentives within research. There has been much anxiety among scientists that new ways of publishing and dissemination are not adequately rewarded by their institutions and funders, and the Government should encourage a culture whereby being “Open” is supported and rewarded. The UK’s HEFCE has recently published a report with recommendations for the use of metrics in the UK’s higher education sector [10].


  1. Wellcome Trust The Reckoning: An Analysis of Wellcome Trust Open Access Spend 2013-14
  2. Research Councils UK 2014 Independent Review of Implementation
  3. Science Europe’s Social Sciences Committee Opinion Paper The Need for Diamond Engagement around Open Access to High Quality Research Output
  4. Knowledge Unlatched http://www.knowledgeunlatched.org/
  5. SCOAP3 – Sponsoring Consortium for Open Access Publishing in Particle Physics http://scoap3.org/
  6. Confederation of Open Access Repositories (COAR) https://www.coar-repositories.org/
  7. AUSGOAL http://www.ausgoal.gov.au/
  8. Creative Commons Australia http://creativecommons.org.au/
  9. Open Researcher and Contributor ID (ORCiD) http://orcid.org/
  10. HEFCE The Metric Tide

How researchers can protect themselves from publishing and conference scams

Roxanne Missingham, University Librarian at ANU and AOASG’s Deputy Chair, provides practical advice to researchers on how to avoid exploitation by journals and conferences that could be considered “predatory” or “vanity”.

With the evolution of open access, enterprises have emerged that run conferences and journals with low or no peer review or other quality mechanisms. They approach academics, particularly early career academics, soliciting contributions for reputable-sounding journals and conferences.

On 2 August, the ABC’s Background Briefing highlighted the operation of this industry in “Predatory publishers criticised for ‘unethical, unprincipled’ tactics”, focusing in particular on one organisation, OMICS. There is little doubt that the industry has burgeoned. The standard of review in such unethical journals is best illustrated by the article written by David Mazières and Eddie Kohler, which contains basically the words of the title repeated over and over. The article was accepted by the International Journal of Advanced Computer Technology, and the review process included a peer review report that described it as “excellent”. You can see the documentation here. Not only will these publishers take your publications, they will charge you for the pleasure (or lack of it).

Jeffrey Beall, librarian at Auraria Library, University of Colorado Denver, coined the term “predatory publisher” after noticing a large number of emails requesting he submit articles to, or join the editorial boards of, journals he had not heard of. His research has resulted in two lists – “Potential, possible, or probable predatory scholarly open-access publishers” and “Potential, possible, or probable predatory scholarly open-access journals”.

While Beall’s lists have been the subject of some debate, identifying low-quality publishers is important to assist researchers. The debate on predatory publishing does not mean that open access publishing is poor per se. There are many high-quality open access publishers, including well-established university presses at the University of Adelaide, the Australian National University and the University of Technology, Sydney.

Ensuring the quality of the journals you submit to and the conferences you propose papers for is important in developing your research profile and building your career.

And don’t forget, traditional publishers can also have quality problems. For example, in early 2014 Springer and IEEE removed more than 120 papers after Cyril Labbé of Joseph Fourier University in Grenoble, France, discovered computer-generated papers published in their journals.

How can you prevent this happening to you?

Three major tips are:

  • If you haven’t heard of the journal or conference check Beall’s list or ask your local librarian
  • Don’t believe the web site – ask your colleagues and look at indicators of journal impact. Your library’s guide to increasing your research impact, with information on journal measures and tools, can help
  • Don’t respond to unsolicited emails – choose the journals you wish to submit to.
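To make the first tip concrete, the check amounts to a simple lookup against locally maintained lists before doing anything else. The sketch below is purely illustrative – the list contents, function name and verdict strings are all invented here, and no local list is ever authoritative:

```python
# Hypothetical pre-submission screen: triage a journal title against
# locally maintained lists before asking a librarian. Every entry and
# name below is invented for illustration only.

WATCHLIST = {
    # e.g. journals known to have accepted nonsense papers
    "international journal of advanced computer technology",
}
KNOWN_REPUTABLE = {
    "research involvement and engagement",
}

def screen_journal(title: str) -> str:
    """Return a rough triage verdict for a journal title."""
    # Normalise case and collapse runs of whitespace before matching.
    t = " ".join(title.lower().split())
    if t in WATCHLIST:
        return "caution"
    if t in KNOWN_REPUTABLE:
        return "reputable"
    return "unknown - ask your librarian"
```

The point of the tips above is precisely that most titles will come back "unknown", at which point a librarian or a curated index (not the journal’s own web site) is the right next step.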

If in doubt contact your local Library or Research Office.

The Australasian Open Access Support Group is committed to supporting quality open access publishing and will continue to provide information through this web site and via our Twitter account, newsletters and discussion list.

Roxanne Missingham, University Librarian (Chief Scholarly Information Services), The Australian National University, Canberra.