Reflections on the OAR Conference 2013

QUT hosted the Open Access and Research Conference 2013 from 31 October to 1 November 2013. The conference was preceded by several half-day pre-conference workshops on 30 October.

Overall, the conference was built around the theme of Discovery, Impact and Innovation, and aimed to provide an opportunity to reflect on the progress of Open Access and to consider the strategic advantages these developments bring to the research sector more generally. A broad spectrum of policy and research management issues was covered, including advocacy, open innovation and alternative metrics.

A huge amount was covered in the two days, and as always the opportunity to meet colleagues face to face, in some cases after years of online collaboration, was a highlight. The conference was filmed, and the video recordings of the presentations from Day One and Day Two are linked below. The full program can be downloaded here.

This post summarises some of the key messages that emerged from the discussions. A caveat: these are a tiny sample of the whole event. For a bigger perspective see the Twitter feed: #OAR2013conf

Global and National Open Access Developments

The first day focused on Global and National Open Access Developments. The sessions covered the breadth of recent international initiatives. Key messages are below.

  • The current publishing model is not sustainable.

In the future the dominant model of publishing will have the web as its distribution channel. Managing and controlling a publishing environment of global publishers will be difficult. The ARC cannot be too prescriptive about open access models because it funds across so many domains. – Prof Aidan Byrne | Australian Research Council

  • The public remain depressingly confused about open access.

The web has been around for 20 years. After 10 years of monitoring the debates about open access, it became clear that high profile universities in the USA and Europe were not going to take the lead on the policy front, so QUT started implementing an open access policy in 2003. It took less than a year before the policy was endorsed by the University Academic Board. – Prof Tom Cochrane | Queensland University of Technology

  • It is extremely important to ensure the definition of open access is consistent and includes detail about reuse of material.

Reuse includes machine analysis of information. It is difficult to retrospectively add detail into policies, and it is very helpful to tie open access policy into existing policy platforms. The NIH policy has been extremely successful, and more than two thirds of the users of the research are outside the academy. – Developing a Framework for Open Access Policies in the United States
Heather Joseph | Scholarly Publishing and Academic Resources Coalition, United States

  • Good open access requires good policy development, infrastructure to support the open access system, and advocacy of the policy.

Despite the gobsmackingly complex area that is European politics, Europe has managed to pull off the Horizon2020 policy development. The policy is consistent across the European Union and beyond. Part of the reason it succeeded was a huge campaign of 18,000 signatures from the research community. – Open Access Developments in Europe
Dr Alma Swan | Scholarly Publishing and Academic Resources Coalition, Europe

  • Australia is building real momentum in the open access area.

One quarter of Australian institutions now have open access policies, there are several open access monograph presses, and both government funding bodies mandate open access to funded research outputs. – Open Access Developments in Australia
Dr Danny Kingsley | Australian Open Access Support Group

  • Chinese publishers are increasingly ambitious in the international market.

Publication in China is oriented towards the evaluation of academia and is only undertaken by state owned publishers, many enjoying subsidies from the government. There are about 1000 open access journals in China, many with a higher than average impact factor. GRID, the Chinese Academy of Sciences’ centralised platform of 89 institutional repositories, holds over 400,000 full text items. – Open Access Developments in China
Dr Xiang Ren | University of Southern Queensland

  • India is a net importer of knowledge – so open access helps India.

While India is not playing a significant role in open science and scholarship, it is addressing ‘open’ issues elsewhere. There is a National Repository for open education, India has adopted the AustLII model for access to legal Acts, and there are interesting developments in the patent space to allow access to cheaper drugs. – Opening India
Prof Shamnad Basheer | National University of Juridical Sciences, India

  • A good policy requires deposit immediately on acceptance for publication.

This ensures things are actually deposited, and there are ways to allow researchers to have access to papers even during embargoes. Waiting until the end of an embargo potentially loses use and application during that period. – OA: A Short History of the Problem and Its Solution
Prof Stevan Harnad | University of Southampton, United Kingdom

  • It is good to reach out to communities in their own language.

Open access advocacy in developing countries uses a range of tools, from high level stakeholders and influential researchers through to radio talk shows and actively engaging the community. Tools like usage statistics and live examples have proved successful. – Open Access Advocacy in Developing and Transition Countries
Iryna Kuchma | Electronic Information for Libraries, Ukraine

  • The open and networked web can be exploited to solve complex scientific problems.

For this to work it is important to have research outcomes that are reproducible or repurposable. It requires communicating research to different audiences who have different needs for support and functionality. Currently we do not have the data or models we need to analyse the system of scholarly outputs. We must not lose control of data into proprietary hands. – Network Ready Research: Architectures and Instrumentation for Effective Scholarship
Dr Cameron Neylon | Public Library of Science, United Kingdom

  • Altmetrics are a researcher’s footprint in the community.

They complement traditional metrics and research evaluation. Researchers should be thinking about a research impact strategy, and funding agencies might want to include an impact statement in their Final Reports. – Altmetrics as Indicators of Public Impact
Pat Loria | Charles Sturt University

Video of presentations from Day One

Open Data, Open Innovation and Open Access Publishing

The second day featured thematic sessions focusing on specific areas of research and information management necessary to the advancement of Open Access: Open Data, Open Innovation and Open Access Publishing. Key messages:

  • Having a mandate alone is not enough.

An empty repository is useless, and a partly filled repository is partly useless. It doesn’t work spontaneously – there is a need for an institutional policy, and that policy must be enforced. The Liège repository has 60,000+ items, with 60% full text available, as only articles are mandated. The average number of downloads per item is 61.73. – Perspectives of a Vice-Chancellor
Prof Bernard Rentier | University of Liège, Belgium

  • The patent system is supposed to lubricate innovation but is increasingly throwing sand into the gears.

Copyright protects expression and patents protect functionality. Strong patents encourage investment to convert ideas into products. However, there is increasing concern that actual and potential litigation is not just costly but is actually inhibiting innovation. – The Economics of Open Innovation
Prof Adam Jaffe | Motu Economic and Public Policy Research, New Zealand

  • Open stuff is useless unless you can translate it to something that means something.

We are no longer moving physical things; we are now moving information through the knowledge space. Because patents are jurisdictional, there are many other countries that can use the patented information. The Lens, a new facility, maps the patent world, allowing innovators worldwide to access all of the knowledge held in the patent system. – “Solving the Problem of Problem Solving”: How Open Access will Shift the Demographics of Innovation to Create a More Fair Society and More Resilient Global Economy
Prof Richard Jefferson | Cambia

  • If monographs are behind paywalls while journals are free, there is a problem for monographs.

The systems supporting scholarly communication via the monograph are falling down. Under the Knowledge Unlatched model, libraries from around the world collaborate to share the publications, spreading the costs of OA across many institutions globally and ensuring HSS books are as accessible as OA journals. Libraries should avoid double dipping – if they were going to buy the titles on the startup list, they should sign up for KU instead. – Knowledge Unlatched
Dr Lucy Montgomery | Knowledge Unlatched

  • It is not adequate to ignore the humanities and say ‘we will deal with monographs later’.

With monographs, IP is not about capitalism; it is recompense for the professional labour of editorial input, which is significant and inherent to the quality of the product. The format (pixels or print) is not important in policy setting. Ideally there would be a shared infrastructure that everyone can tap into, but this needs startup assistance. – Free as in Love: the Humanities and Creative Arts in Open Access Publishing
Dr John Byron | Book Industry Collaborative Council

  • We need to be thinking of knowledge as a network and an infrastructure – a common intellectual conversation and a quest for knowledge.

At its core, scholarly communication is about communicating new knowledge. The default price of items online is zero, and the marginal cost of serving one more copy of an article is (more or less) zero. The licence is the one thing that does not cost anything – more people reading does not change the first copy costs. The question is how to charge for what actually costs money. There is a need to protect and retain core business but innovate on the non-core processes. – Innovation in the Age of Open Access Publishing
Dr Caroline Sutton | Co-Action Publishing, Sweden

Video of presentations from Day Two

Open Access Publishing – feature article

Earlier in 2013, the then Department of Industry, Innovation, Science, Research and Tertiary Education invited the AOASG to contribute a feature article to the Australian Innovation System Report 2013 which was published in early November. Entitled ‘Open Access Publishing’, the feature article by Dr Danny Kingsley appears in Chapter 4: Public Research Capacity and Innovation: University research quality assessment. The text of the article is reproduced here with the kind permission of the Department of Industry.


The full report is downloadable as a PDF here.

Open Access Publishing

Opening up access to publicly funded research outputs has been on an increasing number of political agendas across the world. The issue of unsustainably rising publisher subscription costs for research publications has been flagged since the 1980s. In the intervening period, developments in technology such as the advent of the Internet have made the sharing of research outputs both possible and affordable.

Making publicly funded research openly available benefits all of society. The biggest issues the world faces require long term cooperative international research, and research is only effective when researchers are able to see the outcomes of others’ work. As the total volume and pace of research increases, practitioners in any field need to be able to see the latest (quality assured) findings in order to provide the best service, but unless they have an institutional affiliation they are unable to do so. Start-up innovation companies need access to research to inform their endeavours. Researchers also benefit from their findings having more exposure. And taxpayers should be able to look up the latest findings if they wish to, for example to access information about health issues.

The Internet has forever altered the way information is disseminated and accessed. The open access movement has developed databases, called repositories, that specifically allow information to be indexed by search engines and therefore made findable. Repositories can be organised by discipline, for example arXiv.org, which caters for the physics community, or can be hosted by an institution as a collection of that institution’s research outputs. Most publishers will allow the author’s final manuscript version of an article to be placed into a repository, although sometimes they require that it not be made available for a period of time, called an embargo. The benefit of making work available in this way is that researchers are not compelled to alter their publishing choices, although they may tend towards more permissive publishers.

Another development has been the rise of open access journals, which make research freely available to all readers without a subscription. The majority of these journals are run by smaller society publishers using open source software, but there are also commercial open access publishers, including Springer and Hindawi. The Public Library of Science is a trailblazer in this field: its multidisciplinary open access journal PLOS ONE launched in December 2006, within two years was the largest open access journal in the world, and by 2010 was the largest journal in the world (by volume). The OA megajournal business model has been embraced by academic authors, and several other commercial publishers have since launched their own versions. Commercial open access publishers charge an article processing fee at the beginning of the publication process rather than charging a subscription for access. Many traditional commercial academic publishers now offer open access options.

Over the past seven years many research funding bodies have made open access to research publications a requirement of funding. In 2006 the Wellcome Trust introduced its open access policy in the UK, followed by the US National Institutes of Health announcing their Public Access Policy in 2008. This trend is accelerating: 2012 saw the “Report of the Working Group on Expanding Access to Published Research Findings” from the Finch Group, which recommended all UK research be made available in open access journals. In July 2012 the European Commission announced that research funded between 2014 and 2020 under the Horizon2020 programme will have to be open access to “give Europe a better return on its €87 billion annual investment in R&D”. In the early months of 2013 the Obama administration in the US released a policy requiring all US federal agencies to prepare plans to make research available.

Domestically, in 2012 the National Health and Medical Research Council (NHMRC) announced its revised policy on the dissemination of research findings, effective 1 July 2012, and the Australian Research Council (ARC) released its Open Access Policy on 1 January 2013. Both policies require that any publications arising from a funded research project be deposited into an open access institutional repository within 12 months of the date of publication. There are two minor differences between the policies: the NHMRC policy relates only to journal articles, whereas the ARC policy encompasses all publication outputs; and the NHMRC mandate affects all publications from 1 July 2012, whereas the ARC policy affects only the outputs of research funded in 2013. Researchers are also encouraged to make accompanying datasets available open access.

Both policies require the deposit of work in the originating institution’s open access repository. All universities in Australia host a repository, many of them developed with funds the government provided through the Australian Scheme for Higher Education Repositories (ASHER). This scheme, which ran from 2007 to 2009, was originally intended to assist with the reporting requirements of the Research Quality Framework (RQF) research assessment exercise, which became Excellence in Research for Australia (ERA). The ASHER program had the aim of “enhancing access to research through the use of digital repositories”.

Repositories in Australia are generally managed by libraries and have been supported by an ongoing organised community. In 2009–2010, the Council of Australian University Librarians (CAUL) established the CAUL Australian Institutional Repository Support Service (CAIRSS), and when central government funding for the service ended, the university libraries agreed to continue it with member contributions. CAIRSS ended in December 2012; however, the email list continues to support a strong community of practice.

In October 2012 the Australian Open Access Support Group launched, beginning staffed operations in January 2013. The group aims to provide advice and information to all practitioners in the area of open access.

Historically, Australia has a strong track record of supporting open access. The Australasian Digital Theses (ADT) program began in 2000 as a system for sharing PhD theses over the Internet. The ADT was a central registry and open access display of theses, which were held in self-contained repositories at each university using a shared software platform developed for the purpose. The first theses were made available in July 2000. In 2011, as all theses were by then being held in universities’ institutional repositories, the ADT was decommissioned. It was estimated that the number of full text Australian theses available in repositories at the time was over 30,000.

The Australian government is investing tens of millions of dollars in developing the frameworks to allow Australian researchers to share their data. The Australian National Data Service (ANDS) has responsibility for supporting public access to as much publicly funded research data as can be provided within the constraints of privacy, copyright and technology. To provide a platform for sharing information about data, ANDS has developed a discovery service called Research Data Australia: a national registry of searchable web pages describing Australian research data collections that supplement published research. Records in Research Data Australia link to the host institution, which may (or may not) have a direct link to the data.

The work of ANDS reflects the broader government position in Australia of making public data publicly available. The Declaration of Open Government was announced on 16 July 2010. This policy position is in the process of practical implementation across the country, providing access to information about the locations of government services, for example, although the level of engagement between government areas and different levels of government varies. Another government initiative has been the Australian Governments Open Access and Licensing Framework (AusGOAL), which emphasises open formats and open access to publicly funded information and provides a framework to facilitate open data from government agencies. In addition to providing information and fora for discussion, it has developed a licence suite that includes the Australian Creative Commons Version 3.0 licences.

Shall we sing in CHORUS or just SHARE? Responses to the US OA policy

Well, things certainly have been moving in the land of the free since the Obama administration announced its Increasing Access to the Results of Federally Funded Scientific Research policy in February.

In short, the policy requires that within 12 months US Federal agencies that spend over $100 million in research and development have a plan to “support increased public access to the results of research funded by the Federal Government”. (For a more detailed analysis of that policy see this previous blog post.)

In the last couple of weeks two opposing ‘solutions’ have been proposed for the implementation of the policy.

In the publishing corner…

A coalition of subscription based journal publishers has suggested a system called CHORUS, which stands for Clearinghouse for the Open Research of the United States. The proposal is for a “framework for a possible public-private partnership to increase public access to peer-reviewed publications that report on federally-funded research”.

The plan is to create a domain called CHORUS.gov where publishers can deposit the metadata about papers that arise from relevant funding. When users want to find research they can look via CHORUS or through the funding agency’s site, and then view the paper through a link back to the publisher’s site.

While this sounds reasonable, the immediate questions that leap out are why this would not be searchable through search engines, and what embargo periods will be applied to the full text of publications. The limited amount of information available on the proposal does not seem to address these questions.

The Association of American Publishers released their explanation of the proposal ‘Understanding CHORUS’ on 5 June. There is not a great deal of other information available, although The Chronicle published a news story about it.

The Scholarly Kitchen blog – run by the Society for Scholarly Publishing – put up a post on 4 June 2013 with some further details. According to the post, the CHORUS group represents a broad-based group of scholarly publishers, both commercial and not-for-profit. There are 11 members on the steering group and many signatory organisations. The post states that the group collectively publishes the vast majority of the articles reporting on federally-funded research.

The time frame is fast, with plans including:

  • High-level System Architecture — Friday, June 14
  • Technical Specifications — Friday, July 26
  • Initial Proof-of-Concept — Friday, August 30

On the Scholarly Kitchen blog there is the comment that CHORUS is:

a much more modern and sensible response to the demand for access to published papers after a reasonable embargo period, as it doesn’t require an expensive and duplicative secondary repository like PubMed Central. Instead, it uses networked technologies in the way they were intended to be used, leveraging the Internet and the infrastructure of scientific publishing without diverting taxpayer dollars from research budgets.

Not surprisingly, this comment from commercial publishers about diverting taxpayer dollars from research budgets has attracted some criticism, not least from Stevan Harnad in his commentary “Yet another Trojan Horse from the publishing industry”:

And, without any sense of the irony, the publisher lobby (which already consumes so much of the scarce funds available for research) is attempting to do this under the pretext of saving “precious research funds” for research!

Harnad’s main argument against this proposal is that it represents an attempt to take the power to provide open access out of the hands of researchers so that publishers gain control over both the timetable and the infrastructure for providing open access.

Mike Eisen in his blog on the topic points out that taxpayers will end up paying for the service anyway:

publishers will without a doubt try to fold the costs of creating and maintaining the system into their subscription/site license charges – they routinely ask libraries to pay for all of their “value added” services. Thus not only would potential savings never materialize, the government would end up paying the costs of CHORUS indirectly.

Harnad notes that this is a continuation of previous activities by publishers to counter the open access movement, not least the 2007 creation of PRISM (the Partnership for Research Integrity in Science and Medicine), which grew from the Association of American Publishers employing a public relations expert to “counter messages from groups such as the Public Library of Science (PLoS)”.

In the university corner…

Three days after the Scholarly Kitchen post, the development paper for a proposal called SHARE was released by a group of university and library organisations.

The paper for SHARE (the SHared Access Research Ecosystem) states that the White House directive ‘provides a compelling reason to integrate higher education’s investments to date into a system of cross-institutional digital repositories’. The plan is to federate existing university-based digital repositories, obviating the need for central repositories.

The Chronicle published a story on the proposal on the same day.

The SHARE system would draw on the metadata and repository knowledge already in place in the institutional community, such as using ORCID identifiers for researchers. There would be a requirement that all items added to the system include the correct metadata, such as the award identifier, the PI number and the repository in which the item sits.
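
To make the metadata requirement concrete, here is a minimal Python sketch of the kind of completeness check a participating repository might run on an incoming record. The field names are hypothetical – the SHARE paper describes the requirement rather than a concrete schema – and the ORCID iD shown is ORCID’s own documented example.

    # Minimal sketch: checking a SHARE-style record for required metadata
    # before accepting it into the federation. Field names are hypothetical;
    # the SHARE development paper does not prescribe a concrete schema.

    REQUIRED_FIELDS = {"award_id", "pi_orcid", "repository", "title"}

    def validate_record(record: dict) -> list:
        """Return a list of problems; an empty list means the record passes."""
        problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
        # An ORCID iD is 16 characters arranged in four hyphenated blocks.
        orcid = record.get("pi_orcid", "")
        if orcid and len(orcid.replace("-", "")) != 16:
            problems.append(f"malformed ORCID iD: {orcid}")
        return problems

    example = {
        "title": "Some federally funded study",
        "award_id": "FED-1234567",          # hypothetical award identifier
        "pi_orcid": "0000-0002-1825-0097",  # ORCID's documented example iD
        "repository": "https://repository.example.edu",
    }
    print(validate_record(example) or "record OK")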

This type of metadata normalisation is something repository managers have already addressed in Australia, in response to the development of Trove at the National Library of Australia, which pulls information in from all Australian institutional repositories. More recently there has been agreement on the metadata field to be used to identify research arising from a grant, to comply with the NHMRC and ARC policies.
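
Aggregation of this kind typically runs over the OAI-PMH protocol, which mainstream repository platforms expose. As a rough sketch of what a harvester does (the endpoint URL is illustrative, not a real repository):

    # Minimal OAI-PMH harvesting sketch using only the standard library.
    # The base URL is illustrative; substitute a real repository endpoint.
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE = "https://repository.example.edu.au/oai"  # hypothetical endpoint
    OAI = "{http://www.openarchives.org/OAI/2.0/}"  # OAI-PMH namespace
    DC = "{http://purl.org/dc/elements/1.1/}"       # Dublin Core namespace

    url = BASE + "?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)

    # Each record carries a Dublin Core description of one repository item.
    for record in tree.iter(OAI + "record"):
        title = record.findtext(".//" + DC + "title", default="(no title)")
        identifier = record.findtext(".//" + DC + "identifier", default="")
        print(title, "->", identifier)

The agreement mentioned above is about exactly this: settling which harvested metadata field carries the grant identifier, so that records can be matched back to NHMRC and ARC funding.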

In the SHARE proposal, existing repositories, including subject based repositories, would work together to ensure metadata matching in order to become ‘linked nodes’ in the system. The US has a different university system to Australia’s, with a mixture of private and state-funded institutions, but every state has one or more state-funded universities and most of these already have repositories in place. Universities without repositories would use the repository of their relevant state university.

A significant challenge in the proposal, as it reads, is the affirmation that for the White House policy to succeed, federal agencies will need universities to require of their Principal Investigators “sufficient copyright licensing to enable permanent archiving, access, and reuse of publications”. While this sounds simple, in practice it means altering university open access and intellectual property policies, and running a substantial educational campaign amongst researchers. This is no small feat.

The timeframe the SHARE proposal puts forward is phased, with requirements and capabilities developed within 12–18 months and the supporting software completed within another six months. So there is a minimum two-year period after implementation begins before the system would be operational, and given the policy issues it could take longer to eventuate.

There has been less discussion about the SHARE proposal on open access lists, but this is hardly surprising, as energy on these lists is more readily directed towards criticism of the publishers’ proposal.

So which one will win?

Despite the two proposals emerging within days of one another, the sophistication of both indicates that they have been in development for some time.

Indeed, the CHORUS proposal would have required lead time to negotiate ‘buy-in’ from the different publishers. On the other hand, the SHARE proposal includes a complex flow chart on page 4 which appears to be the equivalent of the ‘High-level System Architecture’ the CHORUS proposal says will be ready on Friday 14 June. According to a post on the LibLicense discussion list, SHARE was developed without awareness of CHORUS, so it is not an intentional ‘counterattack’.

There are glaring differences between the two proposals. SHARE envisions text and data mining as part of the system, two capabilities missing from the CHORUS proposal. SHARE also provides searching through Google rather than requiring the user to go to the system to find materials as CHORUS seems to be proposing. But as Peter Suber points out: “CHORUS sweetens the deal by proposing OA to the published versions of articles, rather than to the final versions of the author’s peer-reviewed manuscripts”.

So which will be adopted? One commentator said CHORUS will work because publishers have experience setting up this kind of system, whereas SHARE does not have a good track record in this area. They suggest that:

A cynical publisher might say: Let’s fight for CHORUS, but let’s make sure SHARE wins. Then we (the publishers) have the best of all worlds: the costs of the service will not be ours to bear, the system will work haphazardly and pose little threat to library subscriptions, and the blame will lie with others.

This is an area to watch.

Dr Danny Kingsley
Executive Officer
Australian Open Access Support Group

Recent US developments in open access

Welcome to the Australian Open Access Support Group blog. We hope this will be a place to explore ideas and happenings in open access in Australia. Of course we live in a global world, so it is important to understand what is happening elsewhere and how it might affect us here.

And things certainly are happening.

US Policy – Increasing Access to the Results of Federally Funded Scientific Research

On February 22, the Obama Administration released a new policy, “Increasing Access to the Results of Federally Funded Scientific Research”, which talks about the benefit to society of having open access to government data and research. It requires that within 12 months Federal agencies that spend over $100 million in research and development have a plan to “support increased public access to the results of research funded by the Federal Government”.

The policy is clear that it incorporates both scientific publications and digital scientific data, and limits embargo periods to twelve months post-publication.

The policy has had an instant effect, at least in the registering of policies. Stevan Harnad yesterday posted that 24 policies had been added to ROARMAP (which lists open access policies) within four days of the policy being announced.

Similarities with Australian mandates

The interesting thing from the Australian perspective is that this policy appears to mirror the NHMRC and ARC policies in that it requires research metadata to be put in a repository.

The policy requires agencies to “Ensure full public access to publications’ metadata without charge upon first publication in a data format that ensures interoperability with current and future search technology. Where possible, the metadata should provide a link to the location where the full text and associated supplemental materials will be made available after the embargo period”.
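
As a rough illustration of what that could amount to, the sketch below shows a machine-readable record that is open from the day of publication and points to where the full text will live once the embargo lifts. All field names and values here are my own invention, not part of the policy:

    # Sketch of the kind of interoperable metadata record the policy
    # describes: openly harvestable at publication, with a pointer to the
    # post-embargo home of the full text. All values are illustrative.
    import json

    record = {
        "title": "Results of a federally funded study",
        "doi": "10.1234/example.5678",        # hypothetical DOI
        "published": "2013-03-01",
        "embargo_ends": "2014-03-01",         # 12 months post-publication
        "fulltext_url": "https://repository.example.gov/item/42",
    }
    print(json.dumps(record, indent=2))  # plain JSON: trivially indexable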

Given that the policy provides a series of suggestions about where repositories ‘could’ be housed, it seems the repository infrastructure in the US is less developed than in Australia. Presumably the repositories could be a way of monitoring progress, although the policy indicates that monitoring will be through twice-yearly reports the agencies will have to provide for two years after their plans become effective.

Differences with the Australian mandates

While the intent of the policies is similar, the US policy relates only to larger Federal agencies (which may include some universities – note that the US higher education and research funding model is very different to Australia’s).

It is also a policy that asks the agencies to develop a *plan* to open up access within 12 months, so we might not see action for some time. Experience has shown that setting up open access technology and work processes can be time consuming.

Something that strikes me as interesting is that the US policy states the material to be made open access needs to be in a form that allows users to “read, download, and analyze in digital form”. This relates to the concept of text and data mining, a subject of many recent discussions. Indeed, some people argue that if an item cannot be text or data mined then it is not actually open access. One of the big proponents of text and data mining is Cambridge University chemist Peter Murray-Rust.

You cannot text-mine a PDF. And the vast majority of works in Australian repositories, at least, are PDFs. This issue is something to watch into the future.
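
To see why the format matters, here is a toy text-mining sketch. It assumes a structured XML full text (the file name is illustrative); running the same analysis over a PDF would first require lossy, error-prone text extraction, which is precisely the complaint:

    # Toy text mining: term frequencies from an XML full text. Structured
    # formats keep the document's text in parseable elements; a PDF offers
    # no such structure and must first go through text extraction.
    import re
    import xml.etree.ElementTree as ET
    from collections import Counter

    tree = ET.parse("article.xml")  # hypothetical XML full text
    body_text = " ".join(tree.getroot().itertext())  # all text nodes, in order

    words = re.findall(r"[a-z]+", body_text.lower())
    for term, count in Counter(words).most_common(10):
        print(term, count)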

Odd components of the policy

The embargo period of 12 months doesn’t appear to be set in stone. I am unsure what this paragraph means in practice: “provide a mechanism for stakeholders to petition for changing the embargo period for a specific field by presenting evidence demonstrating that the plan would be inconsistent with the objectives articulated in this memorandum”.

Given that ‘stakeholders’ include publishers, I’m sure they could produce ‘evidence’ that somehow supports the argument that making work available does not benefit society.

Another puzzling statement is: “Agency plans must also describe, to the extent feasible, procedures the agency will take to help prevent the unauthorized mass redistribution of scholarly publications.”

I’m not sure what that means. Isn’t making something openly accessible ‘mass distribution’? And surely having proper licence conditions on open access work – like Creative Commons licences – will resolve how material may be redistributed? Scholarly communication norms require attribution within other scholarly articles, regardless of the distribution method. So this statement strikes me as completely at odds with the remainder of the document.

People power

The Increasing Access to the Results of Federally Funded Scientific Research policy is partly the result of a ‘We the People’ petition in May 2012, which received 65,704 signatures – more than double the 25,000 signatures in 30 days then required for a petition to be considered by the White House. As an interesting aside, in mid January the rules were changed so that petitions now need 100,000 signatures before receiving an official response from the White House.

This policy is NOT the same thing as FASTR

It is easy to get these mixed up. The Fair Access to Science and Technology Research Act (FASTR) was introduced in both the House of Representatives and the Senate in mid February. It follows three previously unsuccessful attempts to get the Federal Research Public Access Act (FRPAA) passed.

FASTR is similar to the new Increasing Access to the Results of Federally Funded Scientific Research policy in that it is also restricted to agencies with research budgets of more than $100 million, and it requires placement of work in a repository in a form that allows for text and data mining. It differs in that it has an embargo of only six months.

The Bill has not yet passed through the legislative system in the US, and there are some activities online that encourage people to support it. The Association of American Publishers has described FASTR as “different name, same boondoggle” and as “unnecessary and a waste of federal resources”.

Not everyone is cheering

Mike Eisen, an editor and founding member of PLoS, argues that the Increasing Access to the Results of Federally Funded Scientific Research policy represents a missed opportunity. The thrust of his argument is that the 12 month embargo in the 2008 NIH mandate was seen by some open access activists as a starting point that would reduce over time, but this new policy has cemented the 12 month embargo across the whole of government.

He is specifically angry that the government was so successfully lobbied by the publishers, saying the authors of the policy fell for publishers’ arguments “that the only way for researchers and the public to get the services they provide is to give them monopoly control over the articles for a year – the year when they are of greatest potential use.”

If the publishers have been successful in their lobbying, it might explain why the Association of American Publishers’ response to the policy was almost the polar opposite of their response to (the very similar) FASTR. The AAP has said the policy is very positive, describing it as a “reasonable, balanced resolution of issues around public access to research funded by federal agencies”. Interesting.

Dr Danny Kingsley
Executive Officer
Australian Open Access Support Group