Breaking the myths of scholarly credit

Catriona MacCallum on why it takes 30 seconds to transform science.

Twitter: @catmacOA
ORCID: 0000-0001-9623-2225

Information on the Feb 2016 Australian Outreach Meeting, and the official launch in Canberra of the Australian ORCID consortium, is available here.

Imagine a world where researchers can reliably keep track of – and receive credit for – the myriad ways in which they contribute to science – not just articles, books, data and software but peer-review reports, preprints, grants, blog posts, video and sound recordings, and not just complete stories but individual experiments, images, methods and analyses.

A diverse group of publishers and journals, including PLOS, eLife, the Royal Society, AGU, EMBO, Hindawi, IEEE and the Science Journals, have banded together to help make this a reality by supporting the adoption of ORCID iDs, a persistent digital identifier for researchers, in their publication workflows in 2016. In an Open Letter, they outline their intention to require iDs from corresponding authors of accepted articles, a step that will ensure researchers get credit for their work while reducing the reporting burden on them. The specific date of the requirement in 2016 will vary by publisher and will be added to the letter subsequently.

As Natasha Simons pointed out in a previous post on this blog, ORCID iDs are already integrated in the workflows of many publishers and other scholarly platforms. Indeed, there are currently more than 200 research platforms and workflow systems that collect and connect iDs from researchers. And almost 2 million researchers have registered for an iD, not least because it helps to distinguish their contributions from those of all the other Smiths, Joneses or Zhangs in their field. Funders are also signalling their interest. The Wellcome Trust requires its grantees to use ORCID iDs in grant applications, and others, such as the NHMRC and ARC in Australia, look poised to follow suit.

If ORCID iDs are being embraced so widely, why is this new commitment by publishers needed?

The rationale is to speed up the adoption and use of ORCID iDs within scholarly systems. This will benefit researchers, publishers and funders who want to ensure that appropriate credit is given for an output, and also help readers or future collaborators discover the work of a particular researcher more easily.

Persistent identifiers are increasingly common. Most researchers are familiar with Digital Object Identifiers (DOIs): unique alphanumeric strings attached to a digital object, most commonly an article, book or dataset. They persist because they carry stable information (metadata) about the object, so links keep working even if the URL of the website hosting the object changes or the object is hosted on multiple websites.

DOIs work because they have been adopted by thousands of publishers and libraries as the de facto standard for identifying and locating scholarly digital objects. They are an accepted and essential part of the scholarly infrastructure – a key machine-readable connector in the global digital network, with registration organisations such as Crossref and DataCite acting as a junction box.
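To make the "junction box" idea concrete, here is a minimal sketch of how DOI resolution works: the doi.org proxy forwards any DOI to wherever its object currently lives, and the same resolver can return machine-readable metadata via content negotiation. The DOI shown, 10.1000/182, is the DOI Handbook's own.

```python
def doi_to_url(doi: str) -> str:
    """Build the resolver URL for a DOI such as '10.1000/182'.

    The doi.org proxy redirects this to the object's current landing
    page, which is why the link survives website moves."""
    return "https://doi.org/" + doi


# The same resolver supports content negotiation, i.e. requesting
# machine-readable metadata instead of the landing page, for example:
#   curl -L -H "Accept: application/vnd.citationstyles.csl+json" \
#        https://doi.org/10.1000/182
print(doi_to_url("10.1000/182"))
```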

In many respects, an ORCID iD provides the equivalent of a DOI for researchers – enabling articles and datasets and a host of other outputs to be linked unambiguously to specific individuals – while ORCID the organisation is the junction box. To register for an iD takes about 30 seconds and is free, and it’s up to the researcher to choose which data and fields are made public in the ORCID record associated with their iD (e.g. see Jonathan Eisen’s public record as well as the privacy policy on ORCID).
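ORCID documents the structure of an iD: sixteen characters displayed in four hyphenated groups, the last of which is an ISO 7064 MOD 11-2 check digit (0–9 or X). As a minimal sketch, the check digit can be validated like this, using the author's own iD from the byline as a worked example:

```python
def orcid_check_digit(base_digits: str) -> str:
    """Compute the ISO 7064 MOD 11-2 check character for the first
    15 digits of an ORCID iD. The result is '0'-'9' or 'X'."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    remainder = total % 11
    result = (12 - remainder) % 11
    return "X" if result == 10 else str(result)


def is_valid_orcid(orcid: str) -> bool:
    """Validate a hyphenated ORCID iD like '0000-0001-9623-2225'."""
    compact = orcid.replace("-", "")
    if len(compact) != 16 or not compact[:15].isdigit():
        return False
    return orcid_check_digit(compact[:15]) == compact[15]


print(is_valid_orcid("0000-0001-9623-2225"))  # the iD in this post's byline
```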

In the open letter, the guidelines for publishers include a requirement that the metadata they already send to Crossref with DOIs also include the ORCID iDs for authors. This will help reduce the reporting burden for researchers (e.g., to funders or institutions) because Crossref’s new auto-update function means that researchers can choose to have their ORCID record automatically updated with any new article, book, dataset (or other object) that already has a Crossref DOI.
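Crossref's public REST API exposes this linkage: its works endpoint accepts an `orcid` filter, so anyone can list the DOIs already associated with an iD. A minimal sketch of building such a query (the network fetch itself is left as a comment so the example stays self-contained):

```python
from urllib.parse import urlencode


def crossref_works_by_orcid(orcid: str, rows: int = 20) -> str:
    """Build a Crossref REST API query URL for works whose deposited
    metadata includes the given ORCID iD."""
    params = urlencode({"filter": f"orcid:{orcid}", "rows": rows})
    return f"https://api.crossref.org/works?{params}"


url = crossref_works_by_orcid("0000-0001-9623-2225")
# Fetching this URL (e.g. with urllib.request.urlopen) returns JSON whose
# message["items"] list holds the works linked to that iD.
print(url)
```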

An oft repeated sentiment is that the value of open access is to enable others to discover and build on work that already exists. But making something freely available on the web is just a first and somewhat limited step. Persistent identifiers, such as DOIs and ORCID iDs, are crucial to building the infrastructure for Open Science and enable discovery not just of the work itself but also of the researchers who made that contribution possible.

Moreover, if we are to reform the evaluation and credit system, then we need to be able to reliably link scientists (in the broadest sense) to all their contributions. Making these traceable and transparent will help dispel the myths that the only valid contribution to science comes in the form of a published article or book and the only measure of quality is publication in a high impact journal or established monograph press.

ORCID iDs provide the digital glue to facilitate this. The hope is that the publishers’ Open Letter and joint commitment will accelerate the incorporation of ORCID iDs in every scholarly system. There are many different ways that funders, research organisations and content providers can support ORCID (available on their website). If you are a publisher, make the commitment and sign the open letter. If you are a researcher, take 30 seconds to help transform research – register for an ORCID iD and use it wherever you use your name.


See also the post about the initiative by Laurel Haak, Executive Director of ORCID.

Competing interests: Catriona MacCallum is a paid employee of PLOS, one of the organisers and original signatories of the Open Letter supporting ORCID. PLOS is also an unfunded partner in the EU THOR project, whose aim is to establish seamless integration between articles, data, and researchers across the research lifecycle.

About the author: Catriona is currently the Acting Advocacy Director for PLOS.

Processing APCs: a necessary pain

In a sequel to his Oct 9 blog, Anton Angelo writes on what happened next in their investigation of APCs at the University of Canterbury.


“Pain” LL Twistiti CC BY-NC

Let’s face it, Article Processing Charges (APCs) are a pain to understand and manage. APCs devolve the costs of scholarly publication away from the library, where subscriptions can be neatly reported on, monitored and centralised, to individual researchers, where payments are currently almost entirely untraceable.

We last left gentle readers with our efforts to understand how much the University of Canterbury was paying in APCs (Follow the money: tracking Article Processing Charges at the University of Canterbury).  Our next step was to launch a pilot where we had some central funds to pay APCs, and asked for applications.  This is the story of that pilot, and how it proved useful far beyond simply providing needed cash for researchers to make their work Open Access (OA).

The initial idea for the pilot came as a recommendation after we had analysed the data of our survey on APCs.  There were a number of threads we recognised needed to be addressed:

  • The order of magnitude of the problem was in the hundreds of thousands of dollars (and probably higher), an amount of spending by the university that could not be ignored.
  • If the institution was supporting Open Access, then we needed to support that practically, whatever our thoughts were about Gold v. Green (or the horrid hybrid). Researchers should publish in the best possible place for their work, and when that is in an OA journal that requires APCs, it is incumbent on us as an institution to help with that.
  • Libraries are constantly concerned about maintaining their relevance, and launching an initiative kept the library at the centre of the scholarly publications process.

The fund itself was set up quickly – $NZ10,000, and a six week application period at the end of 2015. We knocked up a few web forms, sent out emails to Heads of Departments and advertised in the internal bulletin. Criteria included: that local scholars were the lead author, that journals and publishers were ‘reputable’, and that early career researchers would be preferred if the fund was oversubscribed. A crack committee of interested academics was assembled to assess the applications, chaired by me as a facilitator.

The first application came in within an hour of the web form going live.

In all, six applications were received, from all parts of the University, totalling a request of $NZ11,000.  We found another thousand dollars so we didn’t have to reject any applications because of money, and went through them one by one.

The applications could not have represented the current state of Gold OA better.  Three of them accounted for 90% of the money, with APCs of $2,500, $3,000 and $4,500 apiece.  The most expensive was a hybrid journal, with a publisher that had no concrete reporting on how APC payments affected our journal subscription rates.

Two of the applications were for PeerJ author memberships for one article. The committee exercised itself over these, as the cost was for the submission of the article, not for its publication after it was accepted, as is the PeerJ way. Though we decided to fund the membership only if the article had been accepted (and let the poor researchers suffer a financial loss as well as the indignity of rejection), the idea of sponsoring researchers to publish – and thereby sponsoring the model and publisher itself – was found to be Very Interesting Indeed.

The last application was the hardest. From junior faculty, it was a request to publish with a “suspected predatory publisher” according to Jeffrey Beall. We had included Beall’s list in our criteria of ‘reputability’, alongside h-index, inclusion in the Directory of Open Access Journals (DOAJ), and impact factors. I had argued against formally including Beall’s list, as I find blacklists problematic. Whitelists and selection criteria are harder to manage, but fairer and less liable to bias; Beall’s list, however, has captured the academic imagination, so it was in.

I was apprehensive about telling the researcher that we had rejected their application, and why. In fact, the conversation went really well, and a teaching moment on the dark side of scholarly publication was not missed. A new journal was targeted; if the APC for that is applied for in a new round, it will be about NZD$3,000, rather than the predatory publisher’s $150.

Between hybrid, predatory, ‘straight’ OA, and PeerJ’s memberships, virtually all the OA business models were covered. A really strong set of examples meant that the pilot APC fund not only met our objectives above, but also let us work in practice with the implications of supporting OA. We grew a supportive team in our assessment committee, who can take the message back to their communities, and we have successfully placed the library at the heart of the matter: practical, effective and principled.

The recommendation from this exercise is a strong one: continue funding OA APCs from a central source where they have not been allowed for as a specific line item in a grant application. As more funders demand public access to research outputs as a result of their philanthropy, having mechanisms in place to pay APCs will move from university policy to practical necessity.

We will never again be in the position where all of the money that goes towards scholarly publication lies neatly in one budget for journal and book purchases. This collaborative, but library-led, approach was really successful, and is one of many ways we are going to need to rethink how we pay the costs of disseminating new knowledge. APCs are a real pain, but fronting up and asking for support from your community can ease that, and even help with communication about OA and changes in scholarly publication in general.


Once all the dust had settled, we asked for invoices so we could pay the APCs we had funded. There was a little urgency, as we wanted to spend the fund within a particular budgeting period. Of course, none of the authors were yet at the stage where they needed to pay their APCs. Publishing is a notoriously slow and involved process, so we should have expected that this part would be delayed; discussions with colleagues who had run similar programmes revealed they had suffered from the same problem. When this process scales up to cover more of our research output there will be a considerable administrative workload, as the Library shifts from paying for entire databases’ worth of articles in one hit to paying to publish each article from our institution. Though most publishers offer institutional deals for covering APCs, they are not currently worthwhile for us from a purely financial perspective. Once we have a larger volume of publications with APCs due, and once we account for the time of handling each invoice and payment, it may make more sense to do some kind of bulk deal with the bigger players.

 Anton Angelo is Research Data Co-ordinator, University of Canterbury.

Contact: @antlion


AOASG and Creative Commons Australia response to the Australian Government National Innovation and Science Agenda and Review of Research Policy and Funding Arrangements

The Australasian Open Access Support Group (AOASG) and Creative Commons Australia welcome the new initiatives in the government’s National Innovation and Science Agenda.

In a modern world, both publications and the data underlying them are equally important parts of research, and their value and potential to contribute to innovation are maximized when the publications and data are made openly available.

We therefore very much welcome the recognition of the importance of open data, as reflected in the creation of Data61. We also support the idea that new and additional ways of recognizing the impact of universities are needed – beyond traditional publications.

The Review of Research Policy and Funding Arrangements, which was announced on Friday, acknowledged the importance of open access to research results.

This is a pivotal moment: there is an opportunity now to ensure that open access to all parts of the research lifecycle – publications and data – is supported, both as part of innovation and in order to drive it.

Hence, what will be important is the implementation of such recommendations, and also how “open” is defined. The Open Access policy of the Higher Education Funding Council for England is a good model for driving open access to the research literature: it requires research results published in journal articles to be deposited in an open access repository within three months of acceptance and made openly available at 12 or 24 months at the latest, with credit given to outputs that can be text mined – that is, outputs with a license that allows reuse.

Open access to research data is now also increasingly being recognized as a critical part of ensuring that research is reproducible. Publishers and universities, along with agencies such as ANDS, are developing the necessary policies and processes.

We look forward to working on the consultation for, and implementation of, the new initiatives and would specifically highlight the need for:

  • Adequate investment in the infrastructure to support open access to publications and data, including physical infrastructure such as repositories and the skills needed to manage these policies and processes
  • Appropriate consideration of licensing arrangements (such as through Creative Commons licenses) associated with all research outputs so that they can not only be accessed but also reused in the way that will maximize their impact.

Australasian Libraries Needed to Help Scale Knowledge Unlatched

Lucy Montgomery writes about the need for new models in humanities publishing and the second round of Knowledge Unlatched. Other models include the Open Library of Humanities.

Contact: @KUnlatched

Specialist scholarly books, or monographs, are a vital form of publication for Humanities and Social Sciences (HSS) scholars globally. Monographs allow HSS researchers to develop and share complex ideas at length, and to engage with international communities of peers in processes of knowledge creation. However, library spending on books hasn’t kept pace with growth in the number of researchers required to publish a book in order to secure tenure and promotion. Dramatic increases in the costs of maintaining journal subscriptions have left libraries with little to spend on other areas. As a result monograph sales have declined by as much as 90% over 20 years.

Although a growing number of librarians, authors, research funders and publishers would like to see books transition to OA, book-length scholarly works pose unique challenges. This is because the fixed costs of publishing a 70,000–100,000-word book are much higher than they are for a 5,000–10,000-word journal article. High costs mean that ‘gold’ routes to OA are not a practical option for most authors. Monograph publishers, many of whom are not-for-profit University Presses and already dependent on subsidies, are struggling to find funding to support OA experimentation. Creative approaches to enabling positive change across the system are needed.

Australasian Libraries are playing a key role in the development of one such model. In 2014 Australasian libraries took part in the global pilot of a revolutionary OA book experiment: Knowledge Unlatched (KU). Libraries from around the world were invited to share the costs of making a 28-book Pilot Collection OA. The collection, which included globally relevant topics such as Constructing Muslims in France and Understanding the Global Energy Crisis, has now been downloaded more than 40,000 times by readers in 170 countries. In addition to demonstrating the viability of KU’s global library consortium approach to supporting OA for books, the award-winning Pilot also allowed KU to demonstrate the power of OA to increase the visibility of specialist scholarly books in digital landscapes. In 2015 KU helped to secure the indexing of monographs in Google Scholar.

The 2014 KU Pilot confirmed that Australasian libraries are important change-makers in the global scholarly communications landscape. KU is widely regarded as a strongly Australasian project, thanks in no small part to the three Founding Libraries that provided additional cash support for the development of the KU model: UWA, University of Melbourne and QUT. Australasia also punched well above its weight in sign-up rates for the Pilot Collection: 28 libraries from Australia and New Zealand took part, joining a global community of close to 300 libraries that contributed to making the 28-book Pilot Collection OA.

Libraries are now invited to support the next phase of the project by signing up for Round 2. Round 2 is a key step in scaling the KU model and ensuring that the project delivers on its promise to create a sustainable route to OA for large numbers of scholarly books.

As the end of the year fast approaches, we encourage you to consider signing up. Libraries have until 31 January 2016 to pledge, but we’d be happy to assist with earlier invoicing for those that would prefer to support the project from a 2015 budget. KU Round 2 is an opportunity for libraries from around the world to share the costs of making 78 new books from 26 recognised publishers OA. The 78 new books are being offered in eight individual packages; libraries must sign up for at least six in order to participate.

As with the Pilot Collection, books in Round 2 will be hosted on OAPEN and HathiTrust with Creative Commons licences, preserved by CLOCKSS and Portico, and MARC records will be provided to libraries.

If models like KU are to succeed, it will be because libraries have made a conscious effort to move beyond established workflows to support innovative new approaches to OA and to publishing generally. At this stage in its development, the support of Australian libraries remains key to the capacity of KU to scale and operate sustainably.

Competing interests: Lucy Montgomery is Deputy Director (an unpaid voluntary position) of Knowledge Unlatched.

About the author: Associate Professor Lucy Montgomery is Deputy Director of Knowledge Unlatched and Director of the Centre for Culture and Technology at Curtin University.


Open Access, and why it matters to medical students

David Jakabek on new ways that medical students get information & the role of Open Access



Twitter: @AMSJteam

Medical students are consumers of research output, but are also under increased requirements to become producers of research content. Open Access (OA) has clear advantages from both perspectives.

Students access research literature across both pre-clinical and clinical phases. In both instances, students are encouraged to consult a variety of sources, ranging from traditional textbooks to more current journal articles, with the aim of forming a solid and current knowledge base. Additionally, medical school curricula feature assignments where students are required to gain skills in searching and evaluating research literature. Since medical students encounter research output in a variety of ways, any methods that facilitate these processes are valuable.

OA allows medical students to draw on a wider array of research output than would otherwise be possible. Growing journal numbers mean that university libraries are unable to afford subscriptions to every quality indexed journal. Frequently a student finds the “perfect” article, only to realise it is behind a paywall with no library journal subscription. The option of paying US$30–40 for access to a single paper is rarely palatable on a student budget. For the same reason that OA is said to bring knowledge to developing nations, local medical students can have access to a wider array of research to incorporate into their knowledge base.

Moreover, newer developments in OA are quickly gaining momentum. Some OA resources, such as Wikipedia, are not completely reliable, and alternatives such as Free Open Access Meducation (FOAM) resources are gaining popularity. The fantastic, heavily Australian-contributed Life In The Fastlane has quickly become a go-to reference for up-to-date information on emergency medicine and critical care; in some cases it surpasses even traditional textbooks. These developments provide access to new research, or new ways of looking at existing research. Without the open access component, it is unlikely such a resource would have gained the popularity and the support of senior clinicians needed to generate quality content.

Not only is research important for medical students as consumers, but it is also important for medical students as burgeoning clinician-researchers. The majority of (if not all) medical curricula contain some research component, where students carry out and report on their own research projects. This element is only set to expand with the introduction of MD-titled masters-level medical degrees, which require a substantial research project. With such growth in medical student research, there is wide scope to encourage OA publication.

Poised to take advantage of this increased research focus, the Australian Medical Student Journal is approaching its 7th volume and has used the OA model (though not currently with CC licenses) from its inception. The OA model has brought with it some challenges, but predominantly it has benefited students.

The reward for students is primarily one of access. We are a small journal and a subscription model is unlikely to be successful given the competition for library subscription fees. By being OA, our articles are able to be read worldwide, and thus student work is able to be cited and incorporated into the global scientific discussion. Our citation rate is gradually increasing over time, judging by the citations on Google Scholar, and this would not be possible without an OA model.

In addition, the journal has adopted the OA ethos of expanding access to information by accepting papers of more specialist scientific interest. This is beneficial because medical student research projects focus on demonstrating competent and sound research design and conduct, with less emphasis on the impact of the results. As such, we are able to accept publications that would typically be rejected from subscription journals for lack of general interest. A subscription model, in contrast, would place a greater focus on selecting higher-impact publications, which would conflict with the primary aim of medical student research.

We do not charge APCs, and one major difficulty of being OA has been the absence of revenue from such charges (or from subscriptions). At times it has been challenging to run the journal with volunteer staff and a budget primarily derived from advertising. However, it has meant that all medical students can submit their work for consideration without additional financial burden. Ultimately we aim to encourage research and publication, and the more barriers we can remove, the better we will achieve this aim.

Medical students have much to gain from OA: as authors, by providing access to their scientific developments to a broad audience, and as readers, by drawing large amounts of current information from a wide variety of sources. By using OA at the Australian Medical Student Journal we hope to demonstrate the benefit of OA and encourage its uptake by future generations of our clinician-researchers.

Competing interests: David Jakabek is the Editor in Chief of the Australian Medical Student Journal.

About the author: David is currently a final-year medical student at the University of Wollongong.

Australasian startups: part of a movement towards making peer review open and free

Lachlan Coin writes on how peer review is changing

Contact: twitter @lachlancoin

Peer review is not open. Passing peer review asserts to scientists and the public alike that the methodology was sound; that the conclusions are correct; that the experimental protocols work; that policy should be written; that medical interventions should, or should not, be made. When some of these claims are later retracted, both scientific and public trust in peer review and the scientific method is eroded. Imagine, then, if the entire peer review literature were open, as it already is in a handful of journals including BMJ Open, GigaScience and PeerJ. Journalists, scientists, policy-makers, doctors and patients could assess how rigorously the peer-review process was applied and how well the authors were able to address the issues raised. Rather than seeing the scientific literature as uniformly correct, we could begin to accept that every manuscript has limitations as well as strengths.

Publons is a startup from New Zealand which is making huge inroads towards making peer review more open. Publons has enabled reviewers to publish ~10,000 reviews under a CC BY license. The vast majority of these are pre-publication peer reviews (although a review is not made public until the article itself is published) and are now cross-referenced to the original articles via Europe PubMed Central. Publons also provides the option for reviews to be registered but not shared publicly, enabling reviewers to be credited for their reviewing activity.

The “slow, cumbersome and distorting practice of pre-publication peer review” has led PLOS co-founder Mike Eisen to advocate abandoning pre-publication peer review altogether and switching to a model in which papers are published without review and subsequently evaluated openly by the community post-publication. Such services are now provided by F1000, ScienceOpen and The Winnower. PubMed Commons is a National Institutes of Health-run service which enables any academic (listed as an author on a PubMed-indexed paper) to comment on another PubMed-listed paper. PubPeer allows anyone to comment anonymously on any published paper, which has on several occasions led to retractions.

A more popular form of the ‘publish-first-get-reviewed-second’ model is provided by preprint servers. Posting preprints to arXiv is common practice in mathematics and physics. With the launch of bioRxiv this is gaining traction in the biological sciences. The majority of preprints submitted to bioRxiv are published in a peer-reviewed journal within 12 months. Preprint servers have essentially made sharing scientific manuscripts a free service. The operating costs for arXiv are estimated to be US$826,000 p.a., supported by a membership model in which participating universities contribute up to US$3,000 p.a.

Peer review, however, is still not free, both in the sense that it costs money, and in the sense that the ways in which it can be accessed are limited. As an author, I can choose to give up my copyright and restrict who can access my work by submitting to a subscription journal, or I can choose to pay an Article Processing Charge (APC) of anywhere between US$695 and US$5,200 by submitting to an open-access journal. Both types of journals ultimately access the same pool of reviewers to provide peer review. Either way, publishers make lucrative operating margins by controlling access to peer review. It is ironic that the only sense in which peer review is free is that the reviewer is not paid by the publisher for their effort.

I am co-founder of another Australasian startup (Academic Karma) whose mission is to make peer review free as well as open. We envisage a ‘1. post preprint; 2. get peer-reviewed; 3. submit to a journal’ model of scientific publishing. In order to achieve this, we have launched a pilot ‘global peer review network’ together with librarians from The University of Queensland, Imperial College London, The Australian National University and Cambridge University. Any author from one of these universities can use this network to access peer review for an arXiv or bioRxiv preprint outside the journal system. The reviews, together with an editorial summary of the strengths and limitations of the paper, are collated into a document which can be submitted together with the manuscript for consideration at an open-access journal. The reviews will be published once the manuscript is published. The author pays for peer review not in dollars, but with ‘karma’ earned by reviewing for others. While there is no penalty for a karma debt, we hope this system helps remind reviewers to perform as much review as they consume – an absolute necessity for the system to be self-regulating.
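As an illustration only (this is not Academic Karma's actual implementation; the class and method names here are invented), the karma accounting described above can be sketched as a simple ledger: reviewing earns credits, requesting a review spends them, and debt is tolerated rather than penalised.

```python
class KarmaLedger:
    """Toy model of review-for-review accounting: reviewers earn a
    credit per review performed and spend credits to have their own
    preprint reviewed. A negative balance is allowed (no penalty for
    debt), matching the self-regulating ideal that each author reviews
    roughly as much as they consume."""

    def __init__(self):
        self.balance = {}

    def record_review(self, reviewer: str, credits: int = 1) -> None:
        """Credit a reviewer for completing a review."""
        self.balance[reviewer] = self.balance.get(reviewer, 0) + credits

    def request_review(self, author: str, cost: int = 1) -> int:
        """Debit an author who requests a review; debt is permitted,
        so the request always succeeds. Returns the new balance."""
        self.balance[author] = self.balance.get(author, 0) - cost
        return self.balance[author]


ledger = KarmaLedger()
ledger.record_review("alice")          # alice reviews a preprint: +1
print(ledger.request_review("alice"))  # alice requests a review: 0
print(ledger.request_review("bob"))    # bob has reviewed nothing: -1
```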

Although it has been almost 15 years since the open-access publishing movement was launched in earnest – with the establishment of the Budapest Open Access Initiative, the founding of BioMed Central, PLOS’s open letter to scientific publishers and then the launch of PLOS as an open access publisher – publishing in open access journals is still a long way from reaching 100% penetration. Perhaps one of the main remaining reasons is cost: many researchers, particularly junior researchers, face tough choices between paying to publish and paying for other lab expenses to further their research. Co-ordinating peer review has been estimated to make up from 25% to almost all of the running costs of an online open access journal. We hope that providing high quality open peer review for free, prior to journal submission, will enable open-access journals to drop their APCs, making open access publishing more accessible to all.

About the author: Lachlan Coin is Group Leader, Genomics of Development and Disease Division, and Deputy Director, Centre for Superbug Solutions, at the University of Queensland

Conflict of interest: Lachlan Coin is a co-founder of Academic Karma

Open government, open data and innovation

Linda O’Brien writes on open data as part of a wider innovation agenda

Within the last week we have seen the release of The Open Government Partnership Third Open Government National Action Plan for the United States of America. This Plan not only reaffirms the government’s commitment to open and transparent government but recognizes the importance of public access to data, open educational resources, and open science data, research and technologies in catalyzing innovation and business entrepreneurship. Amongst the many excellent initiatives are specific commitments to:

  • Ensuring “Data must be accessible, discoverable, and usable to have the desired impact of increasing transparency and improving public service delivery” (p.10). More specifically, Open Data National Guidelines will be developed and public feedback tools will be put in place to facilitate the release of open data.
  • Expanding access to educational resources through open licensing and technology by making Federal grant-supported educational materials and resources widely and freely available (p.3).
  • Advancing open science through increased public access to data, research and technologies. All Federal agencies that spend more than $100 million per year on research and development are required to “implement policies and programs to make scientific publications and digital data from Federally funded research accessible to and useable by scientists, entrepreneurs, educators, students, and the general public” (p.9).

It is great to see recognition of the broad spectrum of “open” in a single document. I would argue that by making government data open we also contribute to national innovation and entrepreneurship – and in that I am in good company!

Just this week the Australian government announced the establishment of a public-private partnership, DataStart, to drive data-driven innovation in Australia. The announcement notes that “Data-driven innovation added approximately $67 billion to the Australian economy in 2013[i]. It is estimated that the Australian tech startup sector has the potential to contribute over $100 billion (4% of GDP) to the Australian economy by 2033[ii]”. This initiative is to “find, incubate and accelerate innovative business ideas that leverage openly available data from the Australian Government”.

DataStart is one of the first initiatives of the newly formed Data Policy unit in the Department of the Prime Minister and Cabinet. This brings together data policy and digital strategy, placing data at the heart of the Federal government’s agenda. A brilliant initiative, and one to watch.

About the author:

Linda O’Brien is Pro Vice Chancellor (Information Services) at Griffith University and is on the Board of ODI Queensland

[i] PricewaterhouseCoopers, Deciding with Data – How Data Driven Innovation is Fuelling Australia’s Economic Growth, September 2014

[ii] PricewaterhouseCoopers, The Startup Economy Report, 2013

OA week wrap for 2015: what’s next?

Five days of frenetic #OAweek activity, and then OA can go back in the closet for the rest of the year? That doesn’t seem a good use of the momentum the week generates. Below are some snapshots of the week and some thoughts on what’s next. There is a lot more on the OATP. Did we miss anything important? Let us know.

OA events in Australia and New Zealand

A lot went on in #OAweek across the region – much of which was compiled here before the week started.
Highlights included the Tuesday NZ/AU tweetchat – see the tweet reach analysis, above. There was good discussion, including on how the timing of the event works (or doesn’t) in this part of the world.
The Brisbane tri-university event on Back to the Future day was, as Sue Hutley from QUT noted, extremely eclectic, with examples of best practice in “openness” being shared across disciplines. That seemed like a particularly important theme overall.

UTS had a blog updated each day of OA week. Other events to highlight (not already on the AOASG page) included Charles Darwin University’s event, which featured Professor Lawrence Cram, Pro Vice Chancellor, Research and Research Training, and Georgina Taylor, Co-lead, Open Access Button – as well as the presentation of an OA prize.
In support of Open Access Week, the University of Newcastle Library offered UON staff and RHD students the chance to win an iPad: simply by submitting a copy of their full-text, peer-reviewed manuscript (final accepted version) to the NOVA repository during the promotion period, they were entered into the draw. All entrants also received a free coffee. The promotion was well received by current repository users and encouraged new open access supporters (approximately 25% of entrants this year had not previously archived).

The University of Queensland had librarians fanning out across the university to talk to researchers about OA in an OA Awareness campaign, and the UWA library posted a set of tweets of OA facts.

OA videos and audio

If you haven’t already seen them, take a look at these videos produced for OA week from Griffith University, UWA and Curtin Library. All are also listed on the AOASG video page. You can also listen to Designing for Serendipity on ABC RN and hear how OA fits in.

Open Science Prize Announced

Big news of the week was that the Wellcome Trust has teamed up with the US National Institutes of Health and the Howard Hughes Medical Institute to launch a new prize that seeks to unleash the power of open content and data to advance research and its application for health benefit. The prizes are substantial and are specifically aimed at stimulating international collaboration. The closing date is 29 February 2016 (yes, it’s a leap year).

Australasian Open Research Video Competition

For our OA week competition we partnered with Thinkable to highlight OA work. The Australasian Open Research Video Competition will showcase the best video abstracts, as voted by the community. It is open to any researcher based in Australia or New Zealand whose work is published in an open access journal or made freely available via an open access repository. The competition is open for submissions for another month – so get making your video.

Open Access roundups

In addition to the OA events from across the world, there were some good roundups of the history and state of play in OA, notably from Creative Commons Aotearoa and JISC in the UK. Also in the UK, the Wellcome Trust produced a timeline of its 10 years in OA – and released the code so anyone can use it. Stephen Pinfield reflected on the State of OA in 18 Statements, and Peter Suber posted his suggested readings for OA week.

New resources for Open Access

Creative Commons Australia produced a handy new resource on CC licenses, “Know your rights“. The Pasteur4OA project produced a set of OA advocacy resources. ORCID officially partnered with OA week and had some new graphics to link the two. SPARC launched an OA Spectrum Evaluation tool which quantitatively scores journals’ degrees of openness.

Open Access and why it is important – quotes from across the region

Alex Holcombe, University of Sydney: “I’ve had friends at small tech companies ask, jealously, how they can get the access to thousands of pay-walled scholarly journals that I enjoy. It’s often the engineers at a small start-up company, or a suffering medical patient, who would get the most use out of a published paper, not we academics.”
Alice Williamson, Open Source Malaria: “The Open Source Malaria Consortium publishes all research data and results online so that anyone can read about, contribute to or use the data generated. This has effectively lowered any barriers to participation in the project and means that we can collaborate with scientists from very different backgrounds – from highly experienced medicinal chemists to high school students!”

David Jakabek, Editor in Chief, Australian Medical Student Journal

“Open access is exciting for medical students for both academic study and research projects. With increasing journal numbers, fewer library subscriptions, and limited finances, Open Access allow medical students to draw on a wider array of research output than would otherwise be possible. From a publishing point of view, Open Access at the Australian Medical Student Journal reduces barriers for medical student work to be accessible to the wider scientific community. Students and colleagues can see medical student work being read and cited, which encourages further medical research in our future doctors.”

Roxanne Missingham, ANU

“Open for Collaboration gave the opportunity for Dr Dan Andrews and Dr Julia Miller to give terrific presentations to more than 60 people at the ANU, providing very important insights into the complex nature of genomic and language data, the importance of managing data well, the importance of considering the role of researchers in curating rather than owning data, and the challenge of working towards national and international alliances to make data open. Globally sharing data as openly as possible, with appropriate protections, is essential for the creation of new research within both science and the social sciences and humanities.”

Open Access (and open data – the next “open” week?) in the news

The week started off in The Conversation with an article on the “Battle for OA”, which attracted many comments, and rounded off with a Q&A featuring Lucy Montgomery from Curtin and Knowledge Unlatched and Tom Cochrane from QUT. Alex Holcombe reflected back on the week in a piece which asked whether we need an open data week.

OA Quiz

And finally… test your knowledge with the 15-question quiz on OA from BMC.

Why the Open Science Prize is important

Fabiana Kubke reflects on the launch of the Open Science Prize

Contact on Twitter: @Kubke

It gave me great pleasure to see the launch of the Open Science Prize in the middle of this year’s Open Access Week. Sponsored by the Wellcome Trust, the National Institutes of Health and the Howard Hughes Medical Institute, this prize provides a great incentive for international collaborations that help foster Open Science.

Science should be open and collaborative – anything else just creates barriers to the application of, or challenges to, the findings that are at the core of how science works and moves forward. As researchers, however, we have managed to build communities that tend to disincentivise this open collaboration. We have traded the Mertonian values for a form of commodified science that does not take advantage of the opportunities offered by today’s technologies (cue the Internet and digital tools). As our individual ability to openly and freely communicate our science increases, so do the forces that fight to control that knowledge.

It is in this context that the Open Science Prize is important. Backed by three major international funding agencies, the Open Science Prize sends a clear signal to researchers about what the science enterprise should look like – and the funders put their money where their mouths are. This prize is not just about celebrating successes in Open Science; it is also about specifically funding it. It brings Open Science into the mainstream and, I hope, will get people thinking (and talking) about why it is important.

At the end of the day, Open Science should not be seen as some odd, peripheral way of doing things, or contrasted against mainstream science, but rather as a synonym for Science itself. I look forward to the day when we frame the conversation by contrasting Science with ‘Closed Science’ instead.

I am honoured to have been invited to join a great panel of expert advisors, and of course to bring a ‘down under’ perspective to the process. I look forward to working through it with the rest of the team.

Fabiana Kubke is a neu­ro­science researcher and teacher at the Uni­ver­sity of Auck­land. She is an Academic Editor for PLOS ONE and PeerJ and Chair of the Advisory Board of Creative Commons Aotearoa New Zealand.

She is on the panel of Expert Advisors for the Open Science Prize

What are the benefits of sharing grant data openly?

Marta Poblet and Amir Aryani talk about the importance of open sharing of grant data – a topic not often brought into the OA debate

Contacts on twitter: @mpoblet @amir_at_ands

Every year, the Australian Commonwealth and State governments spend billions of dollars on grants to individuals, small businesses, communities, not-for-profits, universities, corporations and others. Australia’s nearly 5,000 philanthropic organisations give approximately another billion dollars in grants.

Yet to date, the total combined value of grants from the Commonwealth, the States and the philanthropy sector can only be estimated. In 2014, the Australian National Audit Office noted that “the precise number and value of grants made by the Commonwealth Government in any one year is difficult to establish as details are contained in individual entity documents”. It also warned that “the Commonwealth may be providing very significant subsidies for particular services or outcomes without a good understanding of the level of subsidy provided and consequently value for money”.

Monies from government, philanthropy and corporate grants fuel research, innovation and social change, but duplication and over- or under-investment are likely outcomes of a system that keeps confining grant data to organisational silos. Opening up access to grant information across the whole sector would certainly help maximise its efficiency and social impact.

Every year, public and private grant programs generate large volumes of data throughout the different stages of the grantmaking cycle (from publication of the call to final assessment). This grant data should not be considered as a mere by-product of the process. In addition to its organisational value, grant data can also contribute to a common good: linking grant data to recipients, and outputs (including publications, if available) improves the discoverability of the programs and increases the transparency of the overall system.

In Australia, the Australian National Data Service (ANDS) sees the value in connecting grants with grant outputs like data and publications, so the ANDS data discovery service includes open research grant data. The service currently includes grant data from the National Health and Medical Research Council (NHMRC) and the Australian Research Council (ARC). Some of these data are also linked to project outputs such as datasets and publications.

But discoverability and transparency are not the only benefits of open grant data. As Rachel Wharton from NPC puts it, “accurate and timely data allows funders to learn quickly about a strategy or area in which they are interested. It encourages them to think strategically about their giving and to communicate and collaborate with others in the sector.”

When it comes to philanthropy grants, there is a growing number of international initiatives embracing openness. Examples are the International Aid Transparency Initiative (IATI), Glasspockets and Washfunders by the Foundation Center, PoweredbyData or the recently released GrantNav by the 360 Giving initiative in the UK. This initiative has also developed a data standard which is compatible with the IATI standard and the Foundation Center hGrant standard. More innovations in the philanthropy sector can be found in this report.

Generally, the barriers to creating a comprehensive open framework linking grantmakers with grant programs, grantees, and grant outputs (including publications) are a combination of technical and legal challenges.

A technical challenge, for instance, is how to connect grantmakers and grantees. Researchers in academia are increasingly using author identifiers – unique numbers identifying them so that their names are properly linked to their publications and/or grants. ORCID, ISNI and ResearcherID are the most widely adopted identifiers. This technology could be extended to all types of grants to disambiguate the names of both grantmakers and grantees.
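As a minimal illustration of why persistent identifiers disambiguate where names cannot – the names, titles and iDs below are made-up placeholders in the ORCID format, not real records:

```python
# Two researchers share the same display name; a persistent iD keeps
# their contributions separate.
works = [
    {"author": "J. Smith", "orcid": "0000-0002-0000-0001", "title": "Paper A"},
    {"author": "J. Smith", "orcid": "0000-0002-0000-0002", "title": "Paper B"},
    {"author": "J. Smith", "orcid": "0000-0002-0000-0001", "title": "Dataset C"},
]

# Grouping by name alone conflates the two people...
by_name = {}
for w in works:
    by_name.setdefault(w["author"], []).append(w["title"])

# ...while grouping by iD attributes each output to the right person.
by_orcid = {}
for w in works:
    by_orcid.setdefault(w["orcid"], []).append(w["title"])

print(len(by_name["J. Smith"]))         # 3 works lumped under one name
print(by_orcid["0000-0002-0000-0001"])  # ['Paper A', 'Dataset C']
```

The same grouping logic applies unchanged if the entities are grantmakers or grantees rather than authors, which is why extending identifier schemes across the whole grant sector is attractive.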

Likewise, an open grant data framework needs to connect grant programs with the outputs produced with those grants. When these outputs are academic publications, platforms such as FundRef or OpenAIRE make it possible to link who funded what to whom, but the lack of unique and persistent identifiers for grants significantly hinders the visibility of such programs.
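A hypothetical sketch of the matching problem: without a persistent grant identifier, funding is acknowledged in free text, and linking outputs back to a grant depends on fragile normalisation. The grant numbers and paper titles below are invented for illustration:

```python
import re

# Two papers acknowledge the same (hypothetical) grant in different free-text forms.
outputs = [
    {"title": "Paper A", "funding_text": "ARC Discovery DP150100000"},
    {"title": "Paper B", "funding_text": "Australian Research Council grant DP 150100000"},
]

def grant_key(text):
    # Naive normalisation: pull out the scheme code and digits.
    # Real funding acknowledgements vary far more than this regex handles.
    m = re.search(r"DP\s?(\d+)", text)
    return f"DP{m.group(1)}" if m else None

linked = {}
for o in outputs:
    linked.setdefault(grant_key(o["funding_text"]), []).append(o["title"])

print(linked["DP150100000"])  # both papers resolve to the same grant
```

A persistent grant identifier would replace this guesswork with an exact key, which is precisely what the free-text status quo denies to anyone trying to trace a program’s outputs.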

The technical barriers need to be addressed hand in hand with the legal and ethical issues associated with openly releasing grant data. Privacy, data protection, and confidentiality principles and rules need to be carefully weighed to protect all stakeholders from potential threat or harm. Intellectual property and licensing issues will also emerge through the process. A balanced approach needs to be translated into appropriate policies, guidelines and procedures to help build a trusted process for sharing grant data in the public interest.

About the authors

Marta Poblet is a Vice-Chancellor’s Senior Researcher at RMIT University (Graduate School of Business and Law) @mpoblet

Amir Aryani is a project manager at the Australian National Data Service and technology lead for the Research Data Switchboard @amir_at_ands

They declare no conflicts of interest