by Sarah Vitak
In a world where you can send massive amounts of information in the blink of an eye, some of the most cutting-edge science is being communicated at the pace of a snail. But why? Welcome to the wonderful world of peer-reviewed academic publishing!
To explain: once a researcher has findings worth publishing, she compiles a manuscript and submits it to a research journal. If the editor finds the work meritorious, she must then find three experts who have no conflict of interest with the author. These experts review the manuscript and provide feedback and suggestions. In response to this feedback, the researcher will likely have to perform additional experiments, rework the manuscript, and resubmit it for another round of review. Finally, after several months to a year (or more!), the work is published.
This system is in place to ensure that any research being published is thorough, well balanced, and error-free. The flip side, however, is that scientific communication and sharing are ridiculously slow. More worrisome, the high stakes of this process incentivize researchers to spend more time hoarding their data and worrying about being scooped than collaborating. In the worst cases, reviewers have been known to unscrupulously delay a paper so that another author's paper can be published first.
As if that weren't enough, consider the system from an economic perspective. All the research and writing is paid for by the researcher, who in turn is funded by you, the taxpayer, when the research is government funded. The work done by reviewers is entirely unpaid. And the final product, which is exorbitantly expensive, exclusively profits the journal.
In short, I would argue that the academic publishing industry is exploitative, inefficient, and broken. And most scientists would agree: according to a poll by ASAPbio, 91% of biologists are dissatisfied with the current state of publishing.
Open Source to the Rescue
Tactics to subvert the paywalls in academia have been wide-ranging and creative. One early and rather chaotic example is the #icanhazpdf hashtag on Twitter. Users post the hashtag along with the publication they want. Someone with access downloads the PDF and sends it to the poster, and the post is then deleted.
A more organized attempt is the academic journal article repository known as Sci-Hub. The site has more than 64 million academic papers that it has accessed and saved using academic proxies, which represents over two-thirds of scholarly articles. The website is slightly challenging to find at the moment, as its domains keep being removed due to lawsuits from Elsevier, one of the largest academic publishers. However, Sci-Hub is still accessible via numerous URLs, a direct IP address, and a Tor hidden site.
More recently, the Chrome extension Unpaywall has arrived on the scene to help users find free full-length papers more easily. Unlike Sci-Hub, Unpaywall claims to use only legal sources for full-text articles, such as universities and governments.
It is particularly telling that even researchers who have access to journals via their institutions have been known to use Sci-Hub or Unpaywall because these options are faster and easier than the maze of logins and passwords that are often needed to access articles.
The Magic of Preprints
While these workarounds are stellar for promoting open access, they do nothing to increase the speed with which researchers can share their findings with the world. Enter the preprint! A preprint is a version of a paper that is published before peer-reviewed publication.
The concept of preprints is not new. The physics and mathematics preprint repository called arXiv (pronounced "archive") was founded in 1991, and publishing preprints in physics has been the status quo ever since. However, the culture surrounding research and publication varies greatly from discipline to discipline. In particular, chemists and biologists have been much slower to adopt the practice: bioRxiv and ChemRxiv launched in 2013 and 2016, respectively.
In part, the reluctance has been due to the hyper-competitive nature of these fields. But it has also been perpetuated by academic publishers, some of which will not accept manuscript submissions that have already been posted to a preprint repository. Concerns also exist about the public availability of completely unreviewed research papers; in particular, some researchers worry that the untrained public might mistake poorly produced research for fact.
Ultimately though, preprints are picking up steam across the board. In some cases, journals have even reached out to authors after seeing preprints to request that the manuscript be submitted to their journal.
The advantages of preprints are not limited to the speed with which research can be disseminated. Researchers can get feedback, comments, and tips from as many people as their manuscript can reach on the Internet, as opposed to two or three reviewers chosen by an editor. Moreover, researchers can lay claim to their work early: if you publish a preprint with your findings before peer review, you can't be scooped while your paper spends months in the hands of reviewers and editors. Jobs and grants can also hinge on a publication record, and young researchers in particular can benefit from having a preprint on their resume while their journal publication is in review.
At the end of the day, preprints may not be a cure-all for the very broken academic publishing system. But they are certainly a step in the right direction.
The Future of Preprints
Submissions to preprint servers in biology are increasing at an exponential rate. For the moment, preprints are seen as an addendum to the current academic publishing system. But it is unclear whether this will remain the case indefinitely, or whether preprints will reshape the publishing landscape entirely.
In the meantime, many interesting questions about the future of academic publishing and preprints have been raised. For example, some funding sources, such as the 4D Nucleome program (NIH funded) and the Chan Zuckerberg Biohub, have begun requiring researchers to publish preprints as a stipulation for receiving funding. While this move is significantly less controversial than it would have been even five years ago, some still feel uneasy about it. Certainly, scientists are wary of additional requirements attached to funding.
Preprint servers are not designed to replace the current publishing model; they are designed to communicate research quickly. In fact, some journals have even started adding their own in-house preprint servers. It will be interesting to see how these two systems evolve, both together and separately.
There is real potential for more forms of commentary and feedback on preprints and even on published journal articles. In our day-to-day lives, tech takes advantage of crowdsourcing to review a multitude of products and services, such as restaurants, apps, and podcasts. As the number of preprint submissions grows exponentially, a logical next step in the preprint world is to create a platform for rating, reviewing, critiquing, and communicating about preprints. This would essentially serve as an online “journal club.”
Perhaps the most overlooked use of preprints is not as preprints at all but rather as research communications. Researchers can use preprints to disseminate data that they have but don’t intend to publish. Negative data and repeat experiments are notoriously hard to publish in a journal, which, combined with funding scarcity and a publish-or-perish mentality at most institutions, has created a breeding ground for non-reproducible research in many scientific fields. Thus, the wider use of preprints could not only increase the speed of scientific publication but also improve how scientific research is conducted.
How will preprints evolve in the years to come? And what will happen to the current academic publishing ecosystem? Only time will tell, but what is certain is that the system is in desperate need of change and that preprints are here to stay.
Sarah Vitak is a scientist, writer, artist, and science communicator. She delights in the intersection of technology, art, and human interaction. Her background includes experience in human-computer interaction, DNA sequencing technology, 3D printing, and large art installations.