Project description: The staff and editors of Disease Models & Mechanisms (DMM) wish to thank all our peer reviewers for their work in this vital role. We highlight some important changes that have been introduced to improve the peer-review process, for both authors and reviewers. Summary: DMM highlights changes in our peer-review process, and thanks peer reviewers for their involvement in this vital step in scholarly publication.
Project description: CytoJournal, with its continued contribution of scientific cytopathology literature to the public domain under an open access (OA) charter, thanks its dedicated peer reviewers for devoting significant effort, time, and resources during 2011. The abstracts of poster-platform submissions to the 59th Annual Scientific Meeting (November 2011) of the American Society of Cytopathology (ASC) in Baltimore, MD, USA, were peer reviewed by the ASC Scientific Program Committee.
Project description: Peer review is the gold standard for scientific communication, but its ability to guarantee the quality of published research remains difficult to verify. Recent modeling studies suggest that peer review is sensitive to reviewer misbehavior, and it has been claimed that referees who sabotage work they perceive as competition may severely undermine the quality of publications. Here we examine which aspects of suboptimal reviewing practices most strongly impact quality, and test different mitigating strategies that editors may employ to counter them. We find that the biggest hazard to the quality of published literature is not selfish rejection of high-quality manuscripts but indifferent acceptance of low-quality ones. Bypassing or blacklisting bad reviewers and consulting additional reviewers to settle disagreements can reduce but not eliminate the impact. The other editorial strategies we tested do not significantly improve quality, but pairing manuscripts with reviewers unlikely to selfishly reject them and allowing revision of rejected manuscripts minimize rejection of above-average manuscripts. In its current form, peer review offers few incentives for impartial reviewing efforts. Editors can help, but structural changes are more likely to have a stronger impact.
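The central finding above can be illustrated with a toy simulation. This is a hedged sketch, not the paper's actual model: manuscripts are assumed to have a latent quality in [0, 1], and each is seen by a single reviewer who is either honest, indifferent (accepts everything), or selfish (rejects work perceived as competition). All parameter values are illustrative.

```python
import random
import statistics

random.seed(1)

def simulate(p_indifferent, p_selfish, n=20000, threshold=0.5):
    """Return (mean quality of accepted manuscripts,
    fraction of above-threshold manuscripts rejected).

    Toy model only: an honest reviewer accepts iff quality >= threshold;
    an indifferent reviewer accepts anything; a selfish reviewer rejects
    manuscripts it perceives as competition.
    """
    accepted, good_rejected, good_total = [], 0, 0
    for _ in range(n):
        q = random.random()          # latent manuscript quality
        good = q >= threshold
        good_total += good
        r = random.random()          # which kind of reviewer it drew
        if r < p_indifferent:
            verdict = True           # indifferent: rubber-stamp acceptance
        elif r < p_indifferent + p_selfish:
            verdict = False          # selfish: rejects the manuscript
        else:
            verdict = good           # honest review
        if verdict:
            accepted.append(q)
        elif good:
            good_rejected += 1
    return statistics.mean(accepted), good_rejected / good_total

print("all honest:        ", simulate(0.0, 0.0))
print("30% indifferent:   ", simulate(0.3, 0.0))
print("30% selfish:       ", simulate(0.0, 0.3))
```

Under these assumptions the sketch reproduces the qualitative pattern the abstract reports: indifferent acceptance lowers the mean quality of the published pool, whereas selfish rejection leaves published quality roughly intact while discarding a large share of above-average manuscripts.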
Project description: Peer review represents the primary mechanism used by funding agencies to allocate financial support and by journals to select manuscripts for publication, yet recent Cochrane reviews determined that the literature on peer-review best practice is sparse. Key to improving the process are reducing its inherent vulnerability to a high degree of randomness and, from an economic perspective, limiting both the substantial indirect costs of reviewer time invested and the direct administrative costs to funding agencies, publishers and research institutions. Using additional reviewers per application may increase reliability and decision consistency, but adds to overall cost and burden. The optimal number of reviewers per application, while not known, is thought to vary with the accuracy of the judges or evaluation methods. Here I use bootstrapping of replicated peer-review data from a Post-doctoral Fellowships competition to show that five reviewers per application represents a practical optimum that avoids the large random effects evident when fewer reviewers are used, a point beyond which additional reviewers at increasing cost provide only diminishing incremental gains in chance-corrected consistency of decision outcomes. Random effects were most evident in the relative mid-range of competitiveness. Results support aggressive high- and low-end stratification or triaging of applications for subsequent stages of review, with the proportion and set of mid-range submissions retained for further consideration being dependent on the overall success rate.
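The bootstrapping approach described above can be sketched in a few lines. This is an illustrative reconstruction on simulated scores, not the study's data or exact consistency metric: each application gets a pool of replicated reviewer scores, panels of k reviewers are drawn with replacement, and we measure how often the fund/no-fund outcome is stable across draws.

```python
import random
import statistics

random.seed(42)

# Hypothetical data standing in for the replicated fellowship review scores:
# 100 applications, each with a pool of 10 replicated scores on a 1-5 scale.
score_pools = []
for _ in range(100):
    quality = random.uniform(1, 5)
    pool = [min(5.0, max(1.0, quality + random.gauss(0, 0.8))) for _ in range(10)]
    score_pools.append(pool)

def decision_stability(k, success_rate=0.3, trials=200):
    """Bootstrap k reviewers per application; return the fraction of
    applications whose fund/no-fund outcome is stable across draws."""
    n_fund = int(len(score_pools) * success_rate)
    funded_counts = [0] * len(score_pools)
    for _ in range(trials):
        # Resample k scores per application (with replacement) and rank by mean.
        means = [statistics.mean(random.choices(pool, k=k)) for pool in score_pools]
        ranking = sorted(range(len(means)), key=lambda i: -means[i])
        for i in ranking[:n_fund]:
            funded_counts[i] += 1
    # "Stable" here means funded in >=90% or <=10% of bootstrap draws.
    stable = sum(1 for c in funded_counts if c >= 0.9 * trials or c <= 0.1 * trials)
    return stable / len(score_pools)

for k in (1, 3, 5, 7):
    print(f"{k} reviewers: {decision_stability(k):.2f} of decisions stable")
```

Under these assumptions, stability rises steeply from one to a handful of reviewers and then flattens, with the unstable applications concentrated in the mid-range of competitiveness, mirroring the diminishing-returns pattern the study reports.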
Project description: Open peer review (OPR) is a cornerstone of the emergent Open Science agenda. Yet to date no large-scale survey of attitudes towards OPR amongst academic editors, authors, reviewers and publishers has been undertaken. This paper presents the findings of an online survey, conducted for the OpenAIRE2020 project during September and October 2016, that sought to bridge this information gap in order to aid the development of appropriate OPR approaches by providing evidence about attitudes towards and levels of experience with OPR. The results of this cross-disciplinary survey, which received 3,062 full responses, show that the majority (60.3%) of respondents believe that OPR as a general concept should be mainstream scholarly practice (although attitudes to individual traits varied, and open identities peer review was not generally favoured). Respondents were also in favour of other areas of Open Science, like Open Access (88.2%) and Open Data (80.3%). Among respondents we observed high levels of experience with OPR, with three out of four (76.2%) reporting having taken part in an OPR process as author, reviewer or editor. There were also high levels of support for most of the traits of OPR, particularly open interaction, open reports and final-version commenting. Respondents were against opening reviewer identities to authors, however, with more than half believing it would make peer review worse. Overall satisfaction with the peer review system used by scholarly journals seems to vary strongly across disciplines. Taken together, these findings are very encouraging for OPR's prospects of moving mainstream, but indicate that due care must be taken to avoid a "one-size-fits-all" solution and to tailor such systems to differing (especially disciplinary) contexts. OPR is an evolving phenomenon and hence future studies are to be encouraged, especially to further explore differences between disciplines and monitor the evolution of attitudes.
Project description: BACKGROUND: In biomedical research, there have been numerous scandals highlighting conflicts of interest (COIs) leading to significant bias in judgment and questionable practices. Academic institutions, journals, and funding agencies have developed and enforced policies to mitigate issues related to COI, especially surrounding financial interests. After a case of editorial COI in a prominent bioethics journal, there is concern that the same level of oversight regarding COIs in the biomedical sciences may not apply to the field of bioethics. In this study, we examined the availability and comprehensiveness of COI policies for authors, peer reviewers, and editors of bioethics journals. METHODS: After developing a codebook, we analyzed the content of online COI policies of 63 bioethics journals, along with policy information provided by journal editors that was not publicly available. RESULTS: Just over half of the bioethics journals had COI policies for authors (57%), and only 25% for peer reviewers and 19% for editors. There was significant variation among policies regarding definitions, the types of COIs described, the management mechanisms, and the consequences for noncompliance. Definitions and descriptions centered on financial COIs, followed by personal and professional relationships. Almost all COI policies required disclosure of interests for authors as the primary management mechanism. Very few journals outlined consequences for noncompliance with COI policies or provided additional resources. CONCLUSION: Compared to other studies of biomedical journals, a much lower percentage of bioethics journals have COI policies, and these vary substantially in content. The bioethics publishing community needs to develop robust policies for authors, peer reviewers, and editors, and these should be made publicly available to enhance academic and public trust in bioethics scholarship.
Project description: Background: Developing a comprehensive, reproducible literature search is the basis for a high-quality systematic review (SR). Librarians and information professionals, as expert searchers, can improve the quality of systematic review searches, methodology, and reporting. Likewise, journal editors and authors often seek to improve the quality of published SRs and other evidence syntheses through peer review. Health sciences librarians contribute to systematic review production, but little is known about their involvement in peer reviewing SR manuscripts. Methods: This survey aimed to assess how frequently librarians are asked to peer review systematic review manuscripts and to determine characteristics associated with those invited to review. The survey was distributed to a purposive sample through three health sciences information professional listservs. Results: There were 291 complete survey responses. Results indicated that 22% (n = 63) of respondents had been asked by journal editors to peer review systematic review or meta-analysis manuscripts. Of the 78% (n = 228) of respondents who had not already been asked, 54% (n = 122) would peer review, and 41% (n = 93) might peer review. Only 4% (n = 9) would not review a manuscript. Respondents had peer reviewed manuscripts for 38 unique journals and believed they were asked because of their professional expertise. Of respondents who had declined to peer review (32%, n = 20), the most common explanation was "not enough time" (60%, n = 12) followed by "lack of expertise" (50%, n = 10). The vast majority of respondents (95%, n = 40) had "rejected or recommended a revision of a manuscript" after peer review. They based their decision on the "search methodology" (57%, n = 36), "search write-up" (46%, n = 29), or "entire article" (54%, n = 34).
Those who selected "other" (37%, n = 23) listed a variety of reasons for rejection, including problems or errors in the PRISMA flow diagram; tables of included, excluded, and ongoing studies; data extraction; reporting; and pooling methods. Conclusions: Despite being experts in conducting literature searches and supporting SR teams through the review process, few librarians have been asked to review SR manuscripts, or even just search strategies; yet many are willing to provide this service. Editors should involve experienced librarians in peer review, and we suggest some strategies to consider.
Project description: Background: Decisions about which applications to fund are generally based on the mean scores of a panel of peer reviewers. As well as the mean, a large disagreement between peer reviewers may also be worth considering, as it may indicate a high-risk application with a high return. Methods: We examined the peer reviewers' scores for 227 funded applications submitted to the American Institute of Biological Sciences between 1999 and 2006. We examined the mean score and two measures of reviewer disagreement: the standard deviation and range. The outcome variable was the relative citation ratio, which is the number of citations from all publications associated with the application, standardised by field and publication year. Results: There was a clear increase in relative citations for applications with a better mean. There was no association between relative citations and either of the two measures of disagreement. Conclusions: We found no evidence that reviewer disagreement was able to identify applications with a higher than average return. However, this is the first study to empirically examine this association, and it would be useful to examine whether reviewer disagreement is associated with research impact in other funding schemes and in larger sample sizes.
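The two disagreement measures used in the study are straightforward to compute. The sketch below uses invented reviewer scores on an assumed 1-9 scale (the study's actual scores and scale are not reproduced here) to show how two applications can share a mean yet differ sharply in standard deviation and range:

```python
import statistics

# Hypothetical reviewer scores for three applications (illustrative only).
applications = {
    "A": [7, 7, 8],   # high mean, low disagreement
    "B": [3, 6, 9],   # same mean as C, high disagreement
    "C": [5, 6, 7],   # same mean as B, low disagreement
}

for name, scores in applications.items():
    mean = statistics.mean(scores)           # basis for most funding decisions
    sd = statistics.stdev(scores)            # disagreement measure 1
    score_range = max(scores) - min(scores)  # disagreement measure 2
    print(f"{name}: mean={mean:.2f} sd={sd:.2f} range={score_range}")
```

Applications B and C would be ranked identically by mean alone; the study asked whether the extra information in B's high disagreement predicts citation impact, and found no association.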