The scholarly publishing landscape is undergoing immense changes. New journals, marketing and distribution strategies, peer review processes, and scholarly products are announced each week. Scientific publications have led the way, offering authors alternatives to the traditional publication process. In 2002, Faculty of 1000 started providing post-publication expert review of research articles in biology and medicine. Their most recent venture, a new life sciences journal called F1000Research, combines the community building expertise F1000 developed over the past ten years with innovative processes related to open science and open peer review.
Iain Hrynaszkiewicz, F1000 Outreach Coordinator, spoke with Bonnie J.M. Swoger about his organization’s newest publication, F1000Research, the innovative practices that set it apart and how they will impact faculty and researchers.
Open access, open science
Can you tell us a little bit about F1000Research? What makes it different from other scientific journals?
F1000Research published its first papers in 2012. It's an open access journal; all papers are freely available. It's also peer reviewed, much like other open access journals: the opinions of two or three expert reviewers are sought and fed back to the author, who makes changes to the paper. Eventually they arrive at a version that everyone is happy with.
What’s different about F1000Research is that this process happens transparently and publicly after the publication of the first version of the paper. The author sends their manuscript to the F1000Research editorial team, and they carry out some initial checks: does it make sense, is it in English, is it about the life sciences, have the authors complied with the journal’s instructions to authors, etc. Within about a week, that version of the paper is made available on the F1000Research website as a fully formatted PDF version and a full HTML version, but it is marked as “Awaiting peer review.”
So the peer review happens much like it does at other journals, but it happens publicly and openly for two main reasons: first, to speed up the publication process, and second, to make the process more transparent, which hopefully makes it more efficient. Many authors spend a great deal of time sending a paper to different journals, going through multiple rounds of peer review before eventually finding somewhere to publish. That isn't the most efficient way of doing things, and we feel that open review should improve the way peer review happens. We also want to give peer reviewers credit for their contributions to science: reviewer names are attached to the papers, and their reviews become part of the publication.
On your website, F1000Research promotes itself as the first “Open Science” journal for life sciences. What is F1000Research’s definition of “Open Science”?
It’s about going beyond just open access to papers. Open access has been around for well over a decade now; it’s here to stay. Open science means that not just the paper is open access but also the peer review process is open for everyone to see. Critically, open science also means open access to all the data and the software that has gone into producing the scientific results. We have one of the most stringent policies in the community regarding the availability of all data that support the results of the paper.
When someone submits a paper to F1000Research, editors make sure the authors have submitted all available data during the initial check. We ask authors to send all of their data to the editorial office regardless of how big it is or what format it is in, as long as it’s supporting the paper. If there isn’t a community endorsed and supported database (like there is in genomics), we put the data in a general repository called FigShare. Then we do some pretty cool things with it when we publish the paper.
FigShare has a number of ways of visualizing datasets, including ways to preview the first few lines of code for software or the first few lines of a genetic sequence. Then we can display those dataset previews embedded inline with the paper, so you can read the paper and can get full access to all the material that went into that experiment including the software and the data. It’s not just seeing reports of what people did, but seeing all the materials that went into it as well. It’s opening up the whole scientific process.
What led you to partner with FigShare, rather than developing your own systems for supplementary materials that don’t have a home in some of the disciplinary repositories?
I think there can be a tendency for people in science and publishing to reinvent the wheel. FigShare is a relatively new service, but it was already established and doing some really interesting and innovative things with data. In partnering with them, we could achieve what we wanted more quickly and provide authors with a better service as a result.
Taking on naysayers
How do you respond to critics who might argue that removing reviewer anonymity may make reviewers less likely to be honestly critical of a paper? Especially younger faculty, who may worry about offending established scientists who could have an impact on their careers?
Those concerns pertain to the idea that if peer review is open, people might provide different comments or a different type of review. The British Medical Journal1,2 has done some research, using a randomized controlled trial design, on the quality of open, signed peer review versus closed peer review. Their study found that the quality of the reviews was exactly the same. The only difference found was that people invited to review for the open peer review journal may be a bit more likely to decline the invitation. This is understandable: if people aren't used to reviewing openly, they may be less comfortable with it. But the evidence suggests that the quality of open peer reviews is the same. There are lots of studies suggesting that peer review isn't the best process for identifying errors and fraud, but we know that it is essential in science, and it is the best system we have. The problems that exist in peer review are present in closed reviews as much as in open ones.
An article was just published in PLoS Biology3 making the case for open preprints in biology. The authors discuss PeerJ, arXiv, F1000Research, and other resources. But F1000Research doesn't use the term "preprint" to describe its publication process, even though it posts articles prior to peer review. Was that a conscious decision on your part? Do you think about F1000Research differently from these publications?
It may be a bit of both. Traditionally, a preprint server or repository was somewhere for people to post their own author version of a manuscript. They would go on to submit the paper to any one of a number of journals, and the pre-publication review process would happen in the usual way. What's different about F1000Research is that it isn't a preprint server in that sense: the format of the publication is very much like a formal publication, with a formatted PDF and full HTML, and the peer review happens within the same platform. We generally don't expect people to go elsewhere after they've come to F1000Research.
The way that indexers will be including metadata from F1000Research is different from traditional journals. Although you publish all manuscripts that pass the initial editorial review, only articles that receive two positive peer reviews will achieve the status of “Indexed” and have metadata passed on to indexers such as PubMed, Scopus, and Web of Science. How do you anticipate authors will respond if they don’t get the acceptable reviews that are required for articles to be indexed?
In principle, authors could continue revising their paper multiple times to address the comments of the initially invited reviewers, or they could seek comments from additional peer reviewers. There could be several versions of the paper. This is something that happens all the time; it just happens non-transparently. We know that people submit their papers to top-tier journals like Nature and, when rejected, may try several other journals, going through several more rounds of peer review. We're just enabling that to happen transparently, on the same platform, and hopefully more efficiently.
F1000Research includes some article metadata about versioning and indexing in the traditional article title field, in square brackets.4 Do you find that information being dropped or retained in citations? Do you know anything about how others are treating your decision to include that information in the title?
Certainly journals have their own unique citation styles, particularly those that have only ever existed on the web and don’t have issue numbers. This is still a common question for online-only journals that publish continuously. We tried to make it as clear as possible, right near the top of each paper, how you cite the paper, including the version of the paper that is being cited. As more F1000Research papers are cited by other journals, it will be interesting to see how people are citing different or multiple versions of the papers, depending on when they access them.
You provide some article-level metrics5 including article views and downloads, and I was wondering if you had any intentions of trying to provide readers with some context to those numbers in the future, as PLOS ONE has just done.
Yes, absolutely. There are definite plans to add to the article-level information that's available. At the moment you can see information about social media shares, downloads, and page views, and we intend to provide a more sophisticated collection of article-level metrics. This is an area F1000 has long been part of: the paper recommendations and scores provided by F1000Prime are already part of that alternative metrics landscape.
What is the impact?
Does the journal get submissions from nontenured faculty? What has been the reaction of tenure committees to CVs that include articles from F1000Research?
The journal is open to submissions from all life scientists, including members of the F1000 faculty and the editorial board, and it has already received submissions from a very broad spectrum of scientists. As the journal only formally launched in January 2013, it is too soon to say whether publications in F1000Research are viewed any differently by tenure committees. However, a paper in F1000Research that receives sufficient peer-review approval and is indexed is equivalent to a publication in other peer-reviewed journals. Furthermore, publication in F1000Research with inclusion of the underlying data could lead to greater visibility and impact (citations): sharing detailed research data has been associated with higher citation rates in fields where this has been studied.
What will be the impact on libraries of the availability of this new type of publication, since other STEM materials are so notoriously expensive?
We envisage little impact on libraries' costs, particularly if they already support publication in open access journals that use the author-pays model. Publication in open access journals is already growing, and the business model of F1000Research is the same as that of other open access journals; many publishers of such journals, including Springer, PLOS, Nature Publishing Group, and the BMJ Group, charge author fees. We have an institutional membership scheme whereby universities can provide funds for researchers to cover article processing charges. F1000Research's article processing charges are $1,000 for full research papers, $500 for shorter articles, and $250 for medical case reports, less than the typical charges of the publishers mentioned above. We are also waiving the article processing charge until the end of August 2013 for studies that report negative results.
In our August 2013 issue, Bonnie J.M. Swoger will cover another innovative product aimed at the academic market, Plum Analytics, which allows librarians to track the impact of scientists and their work in new ways.
Bonnie J.M. Swoger is the Science and Technology Librarian at SUNY Geneseo’s Milne Library and the author of the Undergraduate Science Librarian blog, undergraduatesciencelibrarian.org. Readers can contact her at firstname.lastname@example.org
1. Van Rooyen, S., Godlee, F., Evans, S., Black, N., & Smith, R. (1999). Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial. BMJ, 318(7175), 23–27. doi:10.1136/bmj.318.7175.23
2. Van Rooyen, S., Delamothe, T., & Evans, S. J. W. (2010). Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial. BMJ, 341, c5729. doi:10.1136/bmj.c5729
3. Desjardins-Proulx, P., White, E. P., Adamson, J. J., Ram, K., Poisot, T., & Gravel, D. (2013). The case for open preprints in biology. PLoS Biology, 11(5), e1001563. doi:10.1371/journal.pbio.1001563
4. Senn, S. (2013). Authors are also reviewers: problems in assigning cause for missing negative studies [v1; ref status: indexed, http://f1000r.es/uo]. F1000Research, 2:17. doi:10.12688/f1000research.2-17.v1