Monday, March 13, 2017

The OA interviews: Philip Cohen, founder of SocArXiv

Fifteen years after the launch of the Budapest Open Access Initiative (BOAI), the OA revolution has yet to achieve its objectives. It does not help that legacy publishers are busy appropriating open access, and diluting it in ways that benefit them more than the research community. As things stand, we could end up with only half a revolution.

But could a new development help recover the situation? More specifically, can the newly reinvigorated preprint movement gain sufficient traction, impetus, and focus to push the revolution the OA movement began in a more desirable direction?

This was the dominant question in my mind after doing the Q&A below with Philip Cohen, founder of the new social sciences preprint server SocArXiv.

Preprint servers are by no means a new phenomenon. The highly successful physics preprint server arXiv (formally referred to as an e-print service) was founded way back in 1991, and today it hosts 1.2 million e-prints in physics, mathematics, computer science, quantitative biology, quantitative finance and statistics. Currently, around 9,000 to 10,000 new papers are submitted to arXiv each month.

Yet arXiv has tended to complement – rather than compete with – the legacy publishing system, with the vast majority of deposited papers subsequently being published in legacy journals. As such, it has not disrupted the status quo in ways that are necessary if the OA movement is to achieve its objectives – a point that has (somewhat bizarrely) at times been celebrated by open access advocates.

In any case, subsequent attempts to propagate the arXiv model have generally met with little success. In 2000, for instance, Elsevier launched a chemistry preprint server called ChemWeb, but closed it in 2003. In 2007, Nature launched Nature Precedings, but closed it in 2012.

Hope springs eternal


Fortunately, hope springs eternal in academia, and new attempts to build on the success of arXiv are regularly made. Notably, in 2013 Cold Spring Harbor Laboratory (CSHL) launched a preprint server for the biological sciences called bioRxiv. To the joy of preprint enthusiasts, it looks as if this may prove a long-term success. As of March 8th 2017, some 8,850 papers had been posted, and monthly submissions had grown to around 620.

Buoyed up by bioRxiv’s success, and convinced that the widespread posting of preprints on the open Web has great potential for improving scholarly communication, last year life scientists launched the ASAPbio initiative. The initial meeting was deemed so successful that the normally acerbic PLOS co-founder Michael Eisen penned an uncharacteristically upbeat blog post about it (here).  

Has something significant changed since Elsevier and Nature unsuccessfully sought to monetise the arXiv model? If so, what? Perhaps the key word here is “monetise”. We can see rising anger at the way in which legacy publishers have come to dominate and control open access (see here, here, and here for instance), anger that has been amplified by a dawning realisation that the entire scholarly communication infrastructure is now in danger of being – in the words of Geoffrey Bilder – enclosed by private interests, both by commercial publishers like Elsevier and by for-profit upstarts like ResearchGate and Academia.edu (see here, here and here for instance).

CSHL/bioRxiv and arXiv are, by contrast, non-profit initiatives whose primary focus is on research, and on facilitating research, not on the pursuit of profit. Many feel that this is a more worthy and appropriate mission, and so should be supported. Perhaps, therefore, what has changed is that there is a new awareness that while legacy publishers contribute very little to the scholarly communication process, they nevertheless profit from it, and excessively at that. And for this reason they are a barrier to achieving the objectives of the OA movement.

Reproducibility crisis


But what is the case for making preprints freely available online? After all, the research community has always insisted that it is far preferable (and safer) for scholars to stay up to date in their field by relying on papers that have been through peer review and been published in respectable scholarly journals, rather than on self-deposited early versions of papers that may or may not go on to be published.

Advocates for open access, however, now argue that making preprints widely available enables research to be shared with colleagues much more quickly. Moreover, they say, it enables papers to potentially be scrutinised by a much greater number of eyeballs than with the traditional peer review system. As such, they add, the published version of a paper is likely to be of higher quality if it has first been made available as a preprint. In addition, they say, posting preprints allows researchers to establish priority in their discoveries and ideas that much earlier. Finally, they argue, the widespread sharing of preprints would benefit the world at large, since it would speed up the entire research process and maximise the use of taxpayer money (which funds the research process).

Many had assumed that OA would provide these kinds of benefits. In addition to making papers freely available, it was assumed that open access would shorten the time to publication. This has not proved to be the case. For instance, while the peer review “lite” model pioneered by PLOS ONE did initially lead to faster publication times, these have subsequently begun to lengthen again.

Above all, open access has failed to address the so-called reproducibility crisis (also referred to as the replication crisis). It was assumed that, by utilising a more transparent publishing process (sometimes including open peer review), open access would increase the quality of published research. Unfortunately, the introduction of pay-to-publish gold OA has undermined this, not least because it has encouraged the emergence of so-called predatory OA publishers (or article brokers), who gull researchers into paying (or researchers sometimes willingly pay) to have their papers published in journals that wave papers past any review process.

The reproducibility crisis is by no means confined to open access publishing (the problem is far bigger), but it could hold out the greatest hope for the budding preprint movement.

Why do I say this? And what is the reproducibility crisis? Stanford Professor of Medicine John Ioannidis neatly summarised the reproducibility crisis in 2005, when he called his seminal paper on the topic “Why most published research findings are false”. In this and subsequent papers Ioannidis has consistently argued that the findings of many published papers are simply wrong.

Shocked by Ioannidis’ findings, other researchers set about trying to gauge the size of the problem and to develop solutions. In 2011, for instance, social psychologist Brian Nosek launched the Reproducibility Project, whose first undertaking saw 270 contributing authors collaborate to repeat 100 published experimental and correlational psychology studies. Their conclusion: only 36.1% of the studies could be replicated, and where they did replicate, the effects were smaller than those reported in the original studies, seemingly confirming Ioannidis’ findings.

The Reproducibility Project has subsequently moved on to examine the situation in cancer biology (with similar initial results). Meanwhile, a survey undertaken by Nature last year would appear to confirm that there is a serious problem.

Whatever the cause and extent of the reproducibility crisis, Nosek’s work soon attracted the attention of John Arnold, a former Enron trader who has committed a large chunk of his personal fortune to funding those working to – as Wired puts it – “fix science”. In 2013, Arnold awarded Nosek a $5.25 million grant to allow him and colleague Jeffrey Spies to found the Center for Open Science (COS).

COS is a non-profit organisation based in Charlottesville, Virginia. Its mission is to “increase openness, integrity, and reproducibility of scientific research”. To this end, it has developed a set of tools that enable researchers to make their work open and transparent throughout the research cycle: they can register their initial hypotheses, maintain a public log of the experiments they run and the methods and workflows they use, and then post their data online. And the whole process can be made open for all to review.

Monday, February 20, 2017

Copyright: the immoveable barrier that open access advocates underestimated

In calling for research papers to be made freely available, open access advocates promised that doing so would lead to a simpler, less costly, more democratic, and more effective scholarly communication system.

To achieve their objectives they proposed two different ways of providing open access: green OA (self-archiving) and gold OA (open access publishing).

However, while the OA movement has succeeded in persuading research institutions and funders of the merits of open access, it has failed to win the hearts and minds of most researchers. 

More importantly, it is not achieving its objectives. There are various reasons for this, but above all it is because OA advocates underestimated the extent to which copyright would subvert their cause. That is the argument I make in the text I link to below, and I include a personal case study that demonstrates the kind of problems copyright poses for open access.

I also argue that in underestimating the extent to which copyright would be a barrier to their objectives, OA advocates have enabled legacy publishers to appropriate the movement for their own benefit, rather than for the benefit of the research community, and to pervert both the practice and the concept of open access.

As usual, it is a long document, and I have published it as a pdf file that can be accessed here.

I have inserted a link to the case study at the top for those who might wish only to read that.


For those who prefer paper, a print version is available here.

Friday, January 20, 2017

The NIH Public Access Policy: A triumph of green open access?

There has always been a contradiction at the heart of the open access movement. Let me explain.

The Budapest Open Access Initiative (BOAI) defined open access as being the:

“free availability [of research papers] on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.”

BOAI then proceeded to outline two strategies for achieving open access: (I) Self-archiving; (II) a new generation of open-access journals. These two strategies later became known, respectively, as green OA and gold OA.

At the time of the BOAI meeting the Creative Commons licences had not been released. When they were, OA advocates began to insist that to meet the BOAI definition, research papers had to have a CC BY licence attached, thereby signalling to the world that anyone was free to share, adapt and reuse the work for any purpose, even commercially.

For OA purists, therefore, a research paper can only be described as open access if it has a CC BY licence attached.

The problem here, of course, is that the vast majority of papers deposited in repositories cannot be made available on a CC BY basis, because green OA assumes authors continue to publish in subscription journals and then self-archive a copy of their work in an open repository.

Since publishing in a subscription journal requires assigning copyright (or exclusive publishing rights) to a publisher, and few (if any) subscription publishers will allow papers that are earning them subscription revenues to be made available with a CC BY licence attached, we can see the contradiction built into the open access movement. Quite simply, green OA cannot meet the definition of open access prescribed by BOAI.

To see how this works in practice, let’s consider the National Institutes of Health (NIH) Public Access Policy. This is described on Wikipedia as an “open access mandate”, and by Nature as a green OA policy, since it requires that all papers published as a result of NIH funding be made freely available in the NIH repository PubMed Central (PMC) within 12 months of publication. In fact, the NIH policy is viewed as the premier green OA policy.

But how many of the papers being deposited in PMC in order to comply with the Policy have a CC BY licence attached and so are, strictly speaking, open access?

There are currently 4.2 million articles in PMC. Of these, around 1.5 million consist of pre-2000 historical content deposited as part of the NIH’s scanning projects. Some of these papers are still under copyright, some are in the public domain, and some are available CC BY-NC. However, since this is historical material pre-dating both the open access movement and the NIH Policy, let’s put it aside.

That leaves us with around 2.7 million papers in PMC that have been published since 2000. Today around 24% of these papers have a CC BY licence attached. In other words, some 76% of them are not open access as defined by BOAI.
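To make the arithmetic explicit, here is a minimal sketch (in Python) using the approximate figures quoted above. The numbers are illustrative rather than exact.

```python
# Approximate PMC figures quoted above (early 2017); illustrative only.
total_articles = 4_200_000        # all articles currently in PMC
historical_pre_2000 = 1_500_000   # scanned historical content, set aside here

post_2000 = total_articles - historical_pre_2000    # roughly 2.7 million papers
cc_by_share = 0.24                                   # share of post-2000 papers with a CC BY licence

cc_by_papers = post_2000 * cc_by_share               # roughly 650,000 papers
non_boai_share = 1 - cc_by_share                     # roughly 76% not OA as defined by BOAI

print(f"Post-2000 papers in PMC:                {post_2000:,.0f}")
print(f"Papers with a CC BY licence:            {cc_by_papers:,.0f} ({cc_by_share:.0%})")
print(f"Share not meeting the BOAI definition:  {non_boai_share:.0%}")
```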

The good news is that the percentage with a CC BY licence is growing, and the table below (kindly put together for me by PMC) shows this growth. In 2008, just 8% of the papers in PMC had a CC BY licence attached. Since then the percentage has grown to 12% in 2010, 14% in 2012, 19% in 2014 and, as noted, it stands at 24% today. 



So, although the majority of papers in PMC today are not strictly speaking open access, the percentage that are is growing over time. Is this a triumph of green OA? Let’s consider.

There are two submission routes to PMC. Where there is an agreement between NIH and a publisher, research papers can be input directly into PMC by that publisher. Authors, and publishers with no PMC agreement, have to use the NIH Manuscript Submission System (NIHMS, overview here).

The table above shows that the number of “author manuscripts” that came via the NIHMS route represents just 19% of the content in PMC. And since some publishers do not have an agreement with PMC, the number that will have been self-archived by authors will be that much lower. So the overwhelming majority of papers being uploaded to PMC are being uploaded not by authors, but by publishers, and it seems safe to assume that those papers with a CC BY licence attached (currently 24% of the total) will have been published as gold OA rather than under the subscription model.

We could also note that just 0.06% of the papers in PMC today that were deposited via the NIHMS have a CC BY licence attached, and we can assume that these were submitted by gold publishers that do not have an agreement allowing for direct deposit, rather than by authors. 
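To spell out that reasoning, the short sketch below (again in Python, using the rough shares quoted above) compares the CC BY content of PMC with the content that arrived via the NIHMS route. The figures are illustrative, and I am assuming here that the 0.06% refers to the CC BY rate among NIHMS deposits.

```python
# Approximate shares quoted above; illustrative only.
post_2000 = 2_700_000        # papers published since 2000 and held in PMC
cc_by_share = 0.24           # share of that content carrying a CC BY licence
nihms_share = 0.19           # share that arrived as "author manuscripts" via the NIHMS
nihms_cc_by_rate = 0.0006    # 0.06%, read here as the CC BY rate among NIHMS deposits (assumption)

cc_by_total = post_2000 * cc_by_share                  # roughly 650,000 CC BY papers overall
nihms_deposits = post_2000 * nihms_share               # roughly 510,000 NIHMS-deposited papers
cc_by_via_nihms = nihms_deposits * nihms_cc_by_rate    # only a few hundred of these are CC BY

print(f"CC BY papers overall:        {cc_by_total:,.0f}")
print(f"NIHMS-deposited papers:      {nihms_deposits:,.0f}")
print(f"CC BY papers via the NIHMS:  {cc_by_via_nihms:,.0f}")
# However the 0.06% is read, CC BY content deposited via the NIHMS is negligible,
# so almost all of the CC BY growth must come from publisher-deposited (gold OA) content.
```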

In short, it would seem that the growth in CC BY papers in PMC is a function of the growth of gold OA, not green OA. As such, we might want to conclude that the success of PMC is a triumph of gold OA rather than of green OA.

Does this matter? The answer will probably depend on one’s views of the merits of article-processing charges, which I think it safe to assume most of the papers in PMC with a CC BY licence will have incurred.

Either way, that today 76% of the content in PMC – the world’s premier open repository – still cannot meet the BOAI definition of open access suggests that the OA movement still has a way to go. 

Wednesday, December 28, 2016

Open access and Africa

In November I reported that PLOS CEO Elizabeth Marincola is leaving the open access publisher in order to take up a position as Senior Advisor for Science Communication and Advocacy at an African organisation. 

At the time, PLOS said it could not say exactly where Marincola was going as it had to wait until the organisation concerned had held its board meeting in December.

But last week Marincola confirmed to The Scientist that the organisation she will be joining is the African Academy of Sciences (AAS), based in Nairobi, Kenya. (I am not aware that PLOS itself has put out a press release on this). Marincola will be leaving PLOS at the end of the year (this week), with PLOS Chief Financial Officer Richard Hewitt serving as interim CEO from January 1st 2017.

We can surely assume that Marincola will be advocating strongly for open access in her new position at the AAS.

But where does this leave PLOS? I discussed this and the challenges I believe PLOS currently faces in November, but I was not able to get Marincola’s views. In a Q&A published yesterday, however, The Scientist asked Marincola where she saw PLOS’ place in today’s open-access publishing marketplace.

Marincola replied, “The first and primary mission of PLOS when it was founded was to make the case that open-access publishing could be a sustainable business, whether in a nonprofit environment or a for-profit environment. So the very fact we have a lot of competition now is extremely satisfying to us and it is, in itself, a major part of our vision. As Harold Varmus said when he cofounded PLOS, if we could put ourselves out of business because the whole world becomes open-access STM publishing, that would be the greatest testament to our achievements.”

Meanwhile at Elsevier


Marincola is not alone in having developed an interest in open access, in Africa, and in the African Academy of Sciences. In 2014 Elsevier announced that it was partnering with AAS to support researchers by means of a publishing training programme. This, it said, would include offering access to Elsevier Publishing Connect and providing support for hosting live, online webinars.

And last year SciDev.net reported that Elsevier is planning to launch a new African open access mega journal (presumably in the style of PLOS ONE). This would be free to readers, but authors and their organisations would have to pay to publish – although SciDev.net indicated that internal discussions were taking place over whether publishing fees should be waived for the first five years.

One of the organisations Elsevier was said to be working with in developing the mega journal is the AAS. The other partners in the group are the African Centre for Technology, the South African Medical Research Council and IBM Research-Africa.

SciDev.net anticipated that the new journal would be launched this year, with the first papers being published in 2017. If the journal is still planned, then presumably the launch date has slipped.

Clearly there is growing interest in promoting open access and OER in Africa. But some believe that the involvement of people and organisations from the Global North can be a mixed blessing, as they can end up setting the agenda in a way that is not conducive to local conditions. One African tweeter commented recently, “The agenda for, and lead in, African studies should be set by African scholars.”

The same sentiment is often expressed about publishing and publishers, especially when large for-profit companies like Elsevier get involved. In a blog post last year University of Cape Town OA advocate Eve Gray said of the planned new mega-journal: “Could this venture under the Elsevier banner provide the impact and prestige that the continent’s research has been so sadly lacking? Or could it be simply that it could provide a blank slate for Elsevier, experimenting in the face of market uncertainty?  Or, at its crudest, just a neo-colonial land-grab in the face of challenges in the markets that Elsevier dominates?”

Certainly as it confronts growing hostility in Europe (and German researchers face the new year without access to its journals as a result), Elsevier must be keen to develop new markets in other parts of the world.

But as always with open access and scholarly publishing there are no simple answers, nothing can be predicted, and opinion is invariably divided.

Postscript: I emailed the African Academy of Sciences and asked whether Marincola will be working on Elsevier's new mega-journal in any way. As of writing this, I have yet to receive a reply.

Tuesday, December 06, 2016

Tracking Trump


While many, many words have already been spilled on the manifold implications of Donald Trump’s surprise win in the US presidential election, I am not aware that much has been written about what it might mean for Public Access, as Open Access is called in the context of research funded by the US Government.

I was therefore interested last week to receive a copy of the current issue of David Wojick’s Inside Public Access newsletter. Wojick has been tracking the US Public Access program for a while now, and the latest issue of his subscription newsletter looks at what the arrival of the Trump Administration might mean for the Program. Wojick agreed to let me publish an edited version of the issue, which can be read below.

Guest post by David Wojick 

The transition team


To begin with, the Trump Administration has gotten off to a very slow start. The transition team did very little work prior to the election, which is unusual. Federal funding is available to both major candidates as soon as they are nominated. Romney’s transition team spent a reported 8.9 million dollars before the election. The Trump team has spent very little.

The transition team has a lot to do. To begin with it is supposed to vet applicants and job holders for about 4,000 federal positions which are held “at the pleasure of the President.” About 1,000 of these positions require Senate approval, so the vetting is not trivial.

There is a transition team for each Cabinet Department and the major non-Cabinet agencies, like EPA and the SEC. In addition to vetting applicants, the teams are supposed to meet with the senior civil servants of each department and agency, to be briefed on how these huge and complex organizations actually operate. Something as small as Public Access may not be noticed.

Each team is also supposed to begin to formulate specific policies for their organization. Given how vague Trump has been on policy specifics, this may not be easy. Or it may mean that the teams have pretty broad latitude when it comes to specific agency policies. There seems to be little information as to who makes up each agency team, so their views on public access are unknown at this point.


Moreover, the head of the Energy Department transition team was recently replaced, which has to slow things down a bit. DOE has been a leader in developing the Public Access Program. But in the long run the fate of Public Access is in the hands of the Department and Agency heads, and their deputies, not the transition team. Science related nominations have yet to even be announced.

The Science Advisor and OSTP


Then there is the issue of OSTP and the 2013 Memorandum that created the Public Access Program. The Office of Science and Technology Policy is part of the Executive Office of the President. It is headed by the President’s Science Advisor.

At one extreme the Memo might simply be rescinded. President Obama issued a great many orders and executive memos, in direct defiance of the Republican-led Congress. Many of these orders seem likely to be rescinded and Public Access might get caught in the wave and wiped out. Then too, Republicans tend to be pro-business and the publishers may well lobby against the Public Access Program.

On the other hand, a public access policy is relatively non-partisan, as well as being politically attractive. The new OSTP head might even decide to strengthen the program, especially because Trump is being labeled as anti-science by his opponents.

The OSTP situation is also quite fluid at this point. No Science Advisor has even been proposed yet, that I know of. The vast majority of academic scientists are Democrats. The last Republican president took a year in office before nominating a Science Advisor, and he was a Democrat.

The American science community is watching this issue very closely, even though the Science Advisor and OSTP have very little actual authority. The Public Access Program is really something of an exception in this regard, but it is after all largely an administrative program. In the interim, OSTP has over a hundred employees so it will keep operating. So will the Public Access Program if the Memo is not rescinded.

In fact, the slower the Trump people are in taking over, the longer the Government will be run by civil servants who will favor the status quo. This will be true of all the Departments and Agencies. The worst-case scenario would be if OSTP were eliminated altogether. There is some discussion of this, but it seems unlikely as a political strategy. It would be viewed as a direct attack on science and it has no upside.

In any case, given that their internal Public Access Programs are well established, the agencies could decide to continue them, absent the OSTP Memo, or even OSTP.

Funding


Then there is the funding issue. The Public Access Program is generally internally funded out of existing research budgets. If these are cut, then Public Access might be internally defunded.

Both the Trump people and the Congressional leaders are talking about cutting funding for certain research areas. A prominent example is NASA’s Earth Science Division, which grew significantly under President Obama. If funds are actually cut, rather than simply redirected, then Public Access might take a hit.


Innovation


On the other hand, every new Department and Agency head and staff will be looking for flashy new ideas, especially if they do not cost much. Public Access has a populist aspect, which is Trump’s theme, so it could well be presented this way.

The agency civil servants are missing a bet if they do not see this opportunity to pitch public access. “Science for everyone” is a central theme of open access. So is accelerating science and innovation, which fits into the “Making America great” slogan of the Trump campaign.

Congress


More deeply, Congress is likely to be unleashed after many years of partisan gridlock. This may be far more important than what the new Administration does. Congress controls the money and makes the laws, and the lack of statutory authority for most agencies’ programs has been a vulnerability for Public Access.


In other words, while the OSTP Memo can be rescinded, a law is permanent (unless repealed, of course). The US National Institutes of Health (NIH) introduced a mandatory Public Access Policy in 2008, but other agencies proved reluctant to follow its example, which is why we saw the OSTP Memo. This reluctance (along with a desire to give Public Access a more solid foundation) has also fuelled growing pressure for a statutory Public Access requirement for US Government departments.

Section 527 of the Consolidated Appropriations Act of 2014 required that the Departments of HHS, Education and Labor introduce a Public Access Program along the lines of the OSTP Memo. More importantly, the proposed Fair Access to Science and Technology Research (FASTR) Act is waiting in the wings.

FASTR would require that all US Government departments and agencies with annual extramural research expenditures of over $100 million make manuscripts of journal articles stemming from research funded by that agency publicly available over the Internet. First introduced in 2013, FASTR was reintroduced in 2015.

It is worth stressing that FASTR is a bipartisan bill, and was introduced to the Senate by Republican John Cornyn. As such, a Congressional mandate is well within reason.

CHORUS


If the Public Access Program disappears then CHORUS will need to redirect its efforts. It already has several pilot efforts going in that direction. These include working with the Japanese Government and several US universities.


Conclusion


In short, interesting times lie ahead for the US Public Access Program, as the Trump Administration emerges and begins to act, along with the now unfettered Congress. Inside Public Access will be tracking this action.

__________________________________________________________________
Information about Inside Public Access can be accessed here.

David Wojick is an independent engineer, consultant and researcher with a Ph.D. in Philosophy of Science and a forty-year career in public policy. He has also written 30 articles for the Scholarly Kitchen, mostly on OA. From 2004 to 2014 Wojick was Senior Consultant on Innovation for the US Energy Department’s Office of Scientific and Technical Information (OSTI), a leader in public access.

Monday, November 21, 2016

PLOS CEO steps down as publisher embarks on “third revolution”

I HAVE POSTED AN UPDATE PIECE ON THIS HERE.


On 31st October, PLOS sent out a surprise tweet saying that its CEO Elizabeth Marincola is leaving the organisation for a new job in Kenya. Perhaps this is a good time to review the rise of PLOS, put some questions to the publisher, and consider its future.

PLOS started out in 2001 as an OA advocacy group. In 2003, however, it reinvented itself as an open access publisher and began to launch OA journals like PLOS Biology and PLOS Medicine. Its mission: “to accelerate progress in science and medicine by leading a transformation in research communication.” Above all, PLOS’ goal was to see all publicly-funded research made freely available on the internet.

Like all insurgent organisations, PLOS has over the years attracted both devoted fans and staunch critics. The fans (notably advocates for open access) relished the fact that PLOS had thrown down a gauntlet to legacy subscription publishers, and helped start the OA revolution. The critics have always insisted that a bunch of academics (PLOS’ founders) would never be able to make a go of a publishing business.

At first, it seemed the critics might be right. One of the first scholarly publishers to attempt to build a business on article-processing charges (APCs), PLOS gambled that pay-to-publish would prove to be a viable business model. The critics demurred, arguing that in any case the level at which PLOS had set its prices ($1,500) would prove woefully inadequate. Commenting to Nature in 2003, cell biologist Ira Mellman of Yale University, and editor of The Journal of Cell Biology, said, “I feel that PLOS’s estimate is low by four- to sixfold.”

In 2006, PLOS did increase the fees for its top two journals by 66% (to $2,500), and since then the figure has risen to $2,900. While this is not a four- to sixfold increase, we must doubt that these prices would have been enough to make an organisation with PLOS’ ambitions viable. In 2008 Nature commented, “An analysis by Nature of the company’s accounts shows that PLOS still relies heavily on charity funding, and falls far short of its stated goal of quickly breaking even through its business model of charging authors a fee to publish in its journals. In the past financial year, ending 30 September 2007, its $6.68-million spending outstripped its revenue of $2.86 million.”

Wednesday, October 05, 2016

Institutional Repositories: Response to comments

The introduction I wrote for the recent Q&A with Clifford Lynch has attracted some commentary from the institutional repository (IR) and open access (OA) communities. I thank those who took the time to respond. After reading the comments the following questions occurred to me.

1.     Is the institutional repository dead or dying?

Judging by the Mark Twain quote with which COAR’s Kathleen Shearer headed her response (“The reports of our death have been greatly exaggerated”), and judging by CORE’s Nancy Pontika insisting in her comment that we should not give up on the IR (“It is my strong belief that we don’t need to abandon repositories”) people might conclude that I had said the IR is dead.

Indeed, by the time Shearer’s comments were republished on the OpenAIRE blog (under the title “COAR counters reports of repositories’ demise”) the wording had strengthened – Shearer was now saying that I had made a number of “somewhat questionable assertions, in particular that institutional repositories (IRs) have failed.”

That is not exactly what I said, although I did quote a blog post by Eric Van de Velde (here) in which he declared the IR obsolete. As he put it, “Its flawed foundation cannot be repaired. The IR must be phased out and replaced with viable alternatives.”

What I said (and about this Clifford Lynch seemed to agree, as do a growing number of others) is that it is time for the research community to take stock, and rethink what it hopes to achieve with the IR.

It is however correct to say I argued that green OA has “failed as a strategy”. And I do believe this. I gave some of the reasons why I do in my introduction, the most obvious of which is that green OA advocates assumed that once IRs were created they would quickly be filled by researchers self-archiving their work. Yet seventeen years after the Santa Fe meeting, and 22 years after Stevan Harnad began his long campaign to persuade researchers to self-archive, it is clear there remains little or no appetite for doing so, even though researchers are more than happy to post their papers on commercial sites like Academia.edu and ResearchGate.

However, I then went on to say that I saw two possible future scenarios for the IR. The first would see the research community “finally come together, agree on the appropriate role and purpose of the IR, and then implement a strategic plan that will see repositories filled with the target content (whatever it is deemed to be).”

The second scenario I envisaged was that the IR would be “captured by commercial publishers, much as open access itself is being captured by means of pay-to-publish gold OA.”

Neither of these scenarios assumes the IR will die, although they do envisage somewhat different futures for it. That said, what they could share in common is a propensity for the link between the IR and open access to weaken. Already we are seeing a growing number of papers in IRs being hidden behind login walls – either as a result of publisher embargoes or because many institutions have come to view the IR less as a way of making research freely available, more as a primary source of raw material for researcher evaluation and/or other internal processes. As IRs merge with Research Information Management (RIM) tools and Current Research Information Systems (CRIS) this darkening of the content in IRs could intensify.  

What makes this darkening likely is that the internal processes that IRs are starting to be used for generally only require the deposit of the metadata (bibliographic details) of papers, not the full-text. As such, the underlying documents may not just be inaccessible, but entirely absent.

This outcome seems even more likely in my second scenario. Here the IR is (so far as research articles are concerned) downgraded to the task of linking users to content hosted on publishers’ sites. Again, to fulfil such a role the IR need host only metadata.

2.     So what is the role of an institutional repository? What should be deposited in it, and for what purpose?

As I pointed out in my introduction, there is today no consensus on the role and purpose of the IR. Some see it as a platform for green OA, some view it as a journal publication platform, some as a metadata repository, some as a digital archive, some as a research data repository (I could go on).

It is worth noting here a comment posted on my blog by David Lowe. The reason why the IR will persist, he said, “is not related to OA publishing as such, but instead to ETDs.” Presumably this means that Lowe expects the primary role of the IR to become that of facilitating ETD workflows.

It turns out that ETDs are frequently locked behind login walls, as Joachim Schöpfel and Hélène Prost pointed out in a 2014 paper called Back to Grey: Disclosure and Concealment of Electronic Theses and Dissertations. “Our paper,” they wrote “describes a new and unexpected effect of the development of digital libraries and open access, as a paradoxical practice of hiding information from the scientific community and society, while partly sharing it with a restricted population (campus).”

And they concluded that the Internet “is not synonymous with openness, and the creation of institutional repositories and ETD workflows does not make all items more accessible and available. Sometimes, the new infrastructure even appears to increase barriers.”

In short, the roles that IRs are expected to play are now manifold and sometimes they are in conflict with one another. One consequence of this is that the link between the repository and open access could become more and more tenuous. Indeed, it is not beyond the bounds of possibility that the link could break altogether.

3.     To what extent can we say that the IR movement – and the OAI-PMH standard on which it was based – has proved successful, both in terms of interoperability and deposit levels?

As I said in my introduction, thousands of IRs have been created since 1999. That is undoubtedly an achievement. On the other hand, many of these repositories remain half empty, and for the reasons stated above we could see them increasingly being populated with metadata alone.

Both Shearer and Pontika agree that more could have been achieved with the IR. With regard to OAI-PMH Pontika says that while it has its disadvantages, “it has served the field well for quite some time now.”

But what does serving the field well mean in this context? Let’s recall that the main reason for holding the Santa Fe meeting, and for developing OAI-PMH, was to make IRs interoperable. And yet interoperability remains more aspiration than reality today. Perhaps for this reason most research papers are now located by means of commercial search engines and Google Scholar, not OAI-PMH harvesters – a point Shearer conceded when I interviewed her in 2014.
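For readers unfamiliar with the protocol, the sketch below (in Python; the repository base URL is hypothetical) shows roughly what an OAI-PMH harvester retrieves: bibliographic metadata records (identifiers, titles and so on), not the full-text documents themselves. That distinction matters for the point, made elsewhere in this post, that many repository records consist of metadata alone.

```python
import requests
import xml.etree.ElementTree as ET

# Hypothetical repository endpoint; substitute any real OAI-PMH base URL.
BASE_URL = "https://repository.example.edu/oai"

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}


def list_records(base_url, metadata_prefix="oai_dc"):
    """Fetch one page of records via the OAI-PMH ListRecords verb and yield (identifier, title) pairs."""
    response = requests.get(base_url, params={"verb": "ListRecords",
                                              "metadataPrefix": metadata_prefix})
    response.raise_for_status()
    root = ET.fromstring(response.content)
    for record in root.iter("{http://www.openarchives.org/OAI/2.0/}record"):
        identifier = record.findtext(".//oai:identifier", default="", namespaces=NS)
        title = record.findtext(".//dc:title", default="", namespaces=NS)
        yield identifier, title


if __name__ == "__main__":
    # The harvester sees only metadata (identifiers, titles, and so on);
    # whether the full text behind each record is openly accessible is a separate question.
    for identifier, title in list_records(BASE_URL):
        print(identifier, "-", title)
```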

Of course, if running an IR becomes less about providing open access and more about enabling internal processes, or linking to papers hosted elsewhere, interoperability begins to seem unnecessary.

4.     Do IR advocates now accept that there is a need to re-think the institutional repository, and is the IR movement about to experience a great leap forward as a result?

Most IR advocates do appear to agree that it is time to review the current status of the institutional repository, and to rethink its role and purpose. And it is the Confederation of Open Access Repositories (COAR) that is leading on this.

“The calls for a fundamental rethink of repositories is already being answered!” Tony Ross-Hellauer –  scientific manager at OpenAIRE (a member of COAR) –  commented on my blog.  “See the ongoing work of the COAR next-generation repositories working group.”

Shearer, who is the executive director of COAR (and so presumably responsible for the working group), explains in her response that the group has set itself the task of identifying “the core functionalities for the next generation of repositories, as well as the architectures and technologies required to implement them.”

As a result, Shearer says, the IR community is “now well positioned to offer a viable alternative for an open and community led scholarly communication system.”

So all is well? Not everyone thinks so. As an anonymous commenter pointed out on my blog: “All this is not really offering a new way and more like reacting to the flow. Maybe that has to do with the kind of people working on it, the IR crowd is usually coming from the library field and their job is not to be inventive but to archive and keep stuff save.”

Archiving and keeping stuff safe are very worthy missions, but it is to for-profit publishers that people tend to turn when they are looking for inventive solutions, and we can see that legacy publishers are now keen to move into the IR space. This suggests that if the goal is to create a community-led scholarly communications system, COAR’s initiative could turn out to be a case of shutting the stable door after the horse has bolted.

5.     What is the most important task when seeking to engineer radical change in scholarly communication: articulating a vision, providing enabling technology, or getting community buy-in?

“Ultimately, what we are promoting is a conceptual model, not a technology,” says Shearer. “Technologies will and must change over time, including repository technologies. We are calling for the scholarly community to take back control of the knowledge production process via a distributed network based at scholarly institutions around the world.”

Shearer adds that the following vision underlies COAR’s work:

“To position distributed repositories as the foundation of a globally networked infrastructure for scholarly communication that is collectively managed by the scholarly community. The resulting global repository network should have the potential to help transform the scholarly communication system by emphasizing the benefits of collective, open and distributed management, open content, uniform behaviors, real-time dissemination, and collective innovation.”

As such, I take it that COAR is seeking to facilitate the first scenario I outlined. But were not the above objectives those of the attendees of the 1999 Santa Fe meeting? Yet seventeen years later we are still waiting for them to be realised. Why might it be different this time around, especially now that legacy publishers are entering the market for IR services, and some universities seem minded to outsource the hosting of research papers to commercial organisations, rather than work with colleagues in the research community to create an interoperable network of distributed repositories?

What has also become apparent over the past 17 years is that open movements and initiatives focused on radical reform of scholarly communication tend to be long on impassioned calls, petitions and visions, short on collective action.

As NYU librarian April Hathcock put it when reporting on a Force11 Scholarly Commons Working Group she attended recently: “As several of my fellow librarian colleagues pointed out at the meeting, we tend to participate in conversations like this all the time and always with very similar results. The principles are fine, but to me, they’re nothing new or radical. They’re the same things we’ve been talking about for ages.”

Without doubt, articulating a vision is a good and necessary thing to do. But it can only take you so far. You also need enabling technology. And here we have learned that there is many a slip ’twixt the cup and the lip. OAI-PMH has not delivered on its promise, as even Herbert Van de Sompel, one of the architects of the protocol, appears to have concluded. (Although this tweet suggests that he too does not agree with the way I characterised the current state of the IR movement).

Shearer is of course right to say that technologies have to change over time. However, choosing the wrong one can derail, or significantly slow down, progress towards your objective.

But even if you have articulated a clear and desirable vision, and you have put the right technology in place, in the generally chaotic and anarchic world of scholarly communication you can only hope to achieve your objectives if you get community buy-in. That is what the IR and self-archiving movements have surely demonstrated.

6.     To what extent are commercial organisations colonising the IR landscape?

In my introduction I said that commercial publishers are now actively seeking to colonise and control the repository (a strategy supported by their parallel activities aimed at co-opting gold open access). As such, I said, the challenge the IR community faces is now much greater than in 1999.

In her response, Shearer says that I mischaracterise the situation. “[T]here are numerous examples of not-for-profit aggregators including BASE, CORE, SemanticScholar, CiteSeerX, OpenAIRE, LA Referencia and SHARE (I could go on),” she said. “These services index and provide access to a large set of articles, while also, in some cases, keeping a copy of the content.”

In fact, I did discuss non-profit services like BASE and OpenAIRE, as well as PubMed Central, HAL and SciELO. In doing so I pointed out that a high percentage of the large set of articles that Shearer refers to are not actually full-text documents, but metadata records. And of the full-text documents that are deposited, many are locked behind login walls. In the case of BASE, therefore, only around 60% of the records it indexes provide access to the full-text.

In addition, many consist of non-peer-reviewed and non-target content such as blog posts. That’s fine, but this is not the target content that OA advocates say they want to see made open access. Indeed, in some cases a record may consist of no more than a link to a link (e.g. see the first item listed here).

So the claims that these services make about indexing and providing access to a large set of articles need to be taken with a pinch of salt.

It is also important to note that publishers are at a significant advantage here, since they host and control access to the full-text of everything they publish. Moreover, they can provide access to the version of record (VoR) of articles. This is invariably the version that researchers want to read.

It also means that publishers can offer access both to OA papers as well as to paywalled papers, all through the same interface. And since they have the necessary funds to perfect the technology, publishers can offer more and better functionality, and a more user-friendly interface. For this reason, I suggested, they will soon (and indeed some already are) charging for services that index open content, as I assume Elsevier plans to do with the DataSearch service it is developing. This seems to me to be a new form of enclosure of the commons.

Shearer also took me to task for attaching too much significance to the partnership between Elsevier and the University of Florida – in which the University has agreed to outsource access to papers indexed in its repository to Elsevier. I suggested that by signing up to deals like this, universities will allow commercial publishers to increasingly control and marginalise IRs. This is an exaggeration, says Shearer: “[O]ne repository does not make a trend.”

I agree that one swallow does not a summer make. However, summer does eventually arrive, and I anticipate that the agreement with the University of Florida will prove the first swallow of a hot summer. Other swallows will surely follow.

Consider, for instance, that the University of Florida has also signed a Letter of Agreement with CHORUS in a pilot initiative intended to scale up the Elsevier project “to a multilateral, industry effort.”

In addition to Elsevier, publishers involved in the pilot include the American Chemical Society, the American Physical Society, The Rockefeller University Press and Wiley. Other publishers will surely follow.

And just last week it was announced that Qatar University Library has signed a deal with Elsevier that apes the one signed by the University of Florida. I think we can see a trend in the making here.

As things stand, therefore, it is not clear to me how initiatives like COAR and SHARE can hope to match the collective power of legacy publishers working through CHORUS.

Let’s recall that OA advocates long argued that legacy publishers would never be able to replicate in an OA environment the dominance they have long enjoyed in the subscription world. As a result, it was said, as open access commodifies the services they provide publishers will experience a downward pressure on prices. In response, they will either have to downsize their operations, or get out of the publishing business altogether. Today we can see that legacy publishers are not only prospering in the OA environment, but getting ever richer as their profits rise – all at the expense of the taxpayer.

But let me be clear: while I fear that legacy publishers are going to co-opt both OA and IRs, I would much prefer they did not. Far better that the research community – with the help of non-profit concerns – succeeded in developing COAR’s “viable alternative for an open and community led scholarly communication system.”

So I applaud COAR’s initiative and absolutely sign up to its vision. My doubts are that, as things stand, that vision is unlikely to be realised. For it to happen I believe more dramatic changes would be needed than the OA and IR movements appear to assume, or are working towards.

7.     Will the IR movement, as with all such attempts by the research community to take back control of scholarly communication, inevitably fall victim to a collective action dilemma?

Let me here quote Van de Sompel, one of the key architects of OAI-PMH. Van de Sompel, I would add, has subsequently worked on OAI-ORE (which Lynch mentions in the Q&A) and on ResourceSync (which Shearer mentions in her critique).

In a retrospective on repository interoperability efforts published last year Van de Sompel concluded, “Over the years, we have learned that no one is ‘King of Scholarly Communication’ and that no progress regarding interoperability can be accomplished without active involvement and buy-in from the stakeholder communities. However, it is a significant challenge to determine what exactly the stakeholder communities are, and who can act as their representatives, when the target environment is as broad as all nodes involved in web-based scholarship. To put this differently, it is hard to know how to exactly start an effort to work towards increased interoperability.”

The larger problem here, of course, is the difficulties inherent in trying to get the research community to co-operate.

This is the problem that afflicts all attempts by the research community to, in Shearer’s words, “take back control of the knowledge production process.” What inevitably happens is that they bump up against what John Wenzler, Dean of Libraries at California State University, has described as a “collective action dilemma”.

But what is the solution? Wenzler suggests the research community should focus on trying to control the costs of scholarly communication. Possible ways of doing this, he says, could include requiring pricing transparency and lobbying for government intervention and regulation (“[T]he government can try to limit a natural monopoly’s ability to exploit its customers by regulating its prices instead.”).

He concedes however: “Currently, the dominant political ideology in Western capitalist countries, especially in the United States, is hostile to regulation, and it would be difficult to convince politicians to impose prices on an industry that hasn’t been regulated in the past.”

He adds: “Moreover, even if some kind of International Publishing Committee were created to establish price rates, there is a chance that regulators would be captured by publisher interests.”

It is worth recalling that while OA advocates have successfully persuaded many governments to introduce open access/public access policies, this has not put control of the knowledge production process back into the hands of the research community, or reduced prices. Quite the reverse: it is (ironically) increasing the power and dominance of legacy publishers.  

In short, as things stand if you want to make a lot of money from the taxpayer you could do no better than become a scholarly publisher!

I don’t like being the eternal pessimist. I am convinced there must be a way of achieving the objectives of the open access and IR movements, and I believe it would be a good thing for that to happen. Before it can, however, these movements really need to acknowledge the degree to which their objectives are being undermined and waylaid by publishers. And rather than just repeating the same old mantras, and recycling the same visions, they need to come up with new and more compelling strategies for achieving their objectives. I don’t claim to know what the answer is, but I do know that time is not on the side of the research community here.