Project description:BACKGROUND: Lack of appropriate reporting of methodological details has previously been shown to distort risk of bias assessments in randomized controlled trials. The same might be true for observational studies. The goal of this study was to compare the Newcastle-Ottawa Scale (NOS) assessment for risk of bias between reviewers and authors of cohort studies included in a published systematic review on risk factors for severe outcomes in patients infected with influenza. METHODS: Cohort studies included in the systematic review and published between 2008 and 2011 were eligible. The corresponding or first authors completed a survey covering all NOS items. Results were compared with the NOS assessment applied by reviewers of the systematic review. Inter-rater reliability was calculated using kappa (K) statistics. RESULTS: Authors of 65/182 (36%) studies completed the survey. The overall NOS score was significantly higher (p < 0.001) in the reviewers' assessment (median = 6; interquartile range [IQR] 6-6) compared with the authors' assessment (median = 5, IQR 4-6). Inter-rater reliability by item ranged from slight (K = 0.15, 95% confidence interval [CI] = -0.19, 0.48) to poor (K = -0.06, 95% CI = -0.22, 0.10). Reliability for the overall score was poor (K = -0.004, 95% CI = -0.11, 0.11). CONCLUSIONS: Differences in assessment and low agreement between reviewers and authors suggest the need to contact authors for information not published in studies when applying the NOS in systematic reviews.
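The per-item agreement figures above are Cohen's kappa values. As a minimal illustrative sketch (not the study's actual analysis), assuming reviewer and author ratings for a single NOS item are coded as 0/1 arrays, the kappa and a bootstrap confidence interval could be computed like this:

```python
# Sketch: Cohen's kappa for one NOS item with a percentile-bootstrap 95% CI.
# The ratings below are randomly generated placeholders, not study data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
reviewer = rng.integers(0, 2, size=65)  # hypothetical reviewer ratings, 65 studies
author = rng.integers(0, 2, size=65)    # hypothetical author ratings

kappa = cohen_kappa_score(reviewer, author)

# Percentile bootstrap: resample study indices, recompute kappa each time.
boots = []
for _ in range(2000):
    idx = rng.integers(0, 65, size=65)
    boots.append(cohen_kappa_score(reviewer[idx], author[idx]))
ci_low, ci_high = np.percentile(boots, [2.5, 97.5])

print(f"K = {kappa:.2f}, 95% CI = {ci_low:.2f}, {ci_high:.2f}")
```

The same calculation applies per NOS item and to the overall score; items with more than two rating levels could use weighted kappa via the `weights` parameter of `cohen_kappa_score`.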
Project description:Audio descriptions (ADs) can increase access to videos for blind people. Researchers have explored different mechanisms for generating ADs, with some of the most recent studies involving paid novices; to improve the quality of their ADs, novices receive feedback from reviewers. However, reviewer feedback is not instantaneous. To explore the potential for real-time feedback through automation, in this paper, we analyze 1,120 comments that 40 sighted novices received from a sighted or a blind reviewer. We find that feedback patterns tend to fall under four themes: (i) Quality; commenting on different AD quality variables, (ii) Speech Act; the utterance or speech action that the reviewers used, (iii) Required Action; the recommended action that the authors should take to improve the AD, and (iv) Guidance; the additional help that the reviewers gave to help the authors. We discuss which of these patterns could be automated within the review process as design implications for future AD collaborative authoring systems.
Project description:Open peer review (OPR) is a cornerstone of the emergent Open Science agenda. Yet to date no large-scale survey of attitudes towards OPR amongst academic editors, authors, reviewers and publishers has been undertaken. This paper presents the findings of an online survey, conducted for the OpenAIRE2020 project during September and October 2016, that sought to bridge this information gap in order to aid the development of appropriate OPR approaches by providing evidence about attitudes towards and levels of experience with OPR. The results of this cross-disciplinary survey, which received 3,062 full responses, show that the majority (60.3%) of respondents believe that OPR as a general concept should be mainstream scholarly practice (although attitudes to individual traits varied, and open identities peer review was not generally favoured). Respondents were also in favour of other areas of Open Science, like Open Access (88.2%) and Open Data (80.3%). Among respondents we observed high levels of experience with OPR, with three out of four (76.2%) reporting having taken part in an OPR process as author, reviewer or editor. There were also high levels of support for most of the traits of OPR, particularly open interaction, open reports and final-version commenting. Respondents were against opening reviewer identities to authors, however, with more than half believing it would make peer review worse. Overall satisfaction with the peer review system used by scholarly journals seems to vary strongly across disciplines. Taken together, these findings are very encouraging for OPR's prospects for moving mainstream but indicate that due care must be taken to avoid a "one-size-fits-all" solution and to tailor such systems to differing (especially disciplinary) contexts. OPR is an evolving phenomenon, and hence future studies are to be encouraged, especially to further explore differences between disciplines and monitor the evolution of attitudes.
Project description:BACKGROUND: In biomedical research, there have been numerous scandals highlighting conflicts of interest (COIs) leading to significant bias in judgment and questionable practices. Academic institutions, journals, and funding agencies have developed and enforced policies to mitigate issues related to COI, especially surrounding financial interests. After a case of editorial COI in a prominent bioethics journal, there is concern that the same level of oversight regarding COIs in the biomedical sciences may not apply to the field of bioethics. In this study, we examined the availability and comprehensiveness of COI policies for authors, peer reviewers, and editors of bioethics journals. METHODS: After developing a codebook, we analyzed the content of online COI policies of 63 bioethics journals, along with policy information provided by journal editors that was not publicly available. RESULTS: Just over half of the bioethics journals had COI policies for authors (57%), and only 25% for peer reviewers and 19% for editors. There was significant variation among policies regarding definitions, the types of COIs described, the management mechanisms, and the consequences for noncompliance. Definitions and descriptions centered on financial COIs, followed by personal and professional relationships. Almost all COI policies required disclosure of interests for authors as the primary management mechanism. Very few journals outlined consequences for noncompliance with COI policies or provided additional resources. CONCLUSION: Compared to other studies of biomedical journals, a much lower percentage of bioethics journals have COI policies, and these vary substantially in content. The bioethics publishing community needs to develop robust policies for authors, peer reviewers, and editors, and these should be made publicly available to enhance academic and public trust in bioethics scholarship.
Project description:PURPOSE: Recent calls to improve transparency in peer review have prompted examination of many aspects of the peer-review process. Peer-review systems often allow confidential comments to editors that could reduce transparency to authors, yet this option has escaped scrutiny. Our study explores 1) how reviewers use the confidential comments section and 2) alignment between comments to the editor and comments to authors with respect to content and tone. METHODS: Our dataset included 358 reviews of 168 manuscripts submitted between January 1, 2019 and August 24, 2020 to a health professions education journal with a single-blind review process. We first identified reviews containing comments to the editor. Then, for the reviews with comments, we used procedures consistent with conventional and directed qualitative content analysis to develop a coding scheme and code comments for content, tone, and section of the manuscript. For reviews in which the reviewer recommended "reject," we coded for alignment between reviewers' comments to the editor and to authors. We report descriptive statistics. RESULTS: 49% of reviews contained comments to the editor (n = 176). Most of these comments summarized the reviewers' impression of the article (85%), which included explicit reference to their recommended decision (44%) and suitability for the journal (10%). The majority of comments addressed argument quality (56%) or research design/methods/data (51%). The tone of comments tended to be critical (40%) or constructive (34%). For the 86 reviews recommending "reject," the majority of comments to the editor contained content that also appeared in comments to the authors (80%); additional content tended to be irrelevant to the manuscript. Tone frequently aligned (91%). CONCLUSION: Findings indicate variability in how reviewers use the confidential comments-to-editor section in online peer-review systems, though generally the way reviewers use it suggests integrity and transparency toward authors.
Project description:Calls have been made for improving transparency in conducting and reporting research, improving work climates, and preventing detrimental research practices. To assess attitudes and practices regarding these topics, we sent a survey to authors, reviewers, and editors. We received 3,659 (4.9%) responses out of 74,749 delivered emails. We found no significant differences among authors, reviewers, and editors in their attitudes towards transparency in conducting and reporting research, or in their perceptions of work climates. Undeserved authorship was perceived by all groups as the most prevalent detrimental research practice, while fabrication, falsification, plagiarism, and not citing prior relevant research were seen as more prevalent by editors than by authors or reviewers. Overall, 20% of respondents admitted sacrificing the quality of their publications for quantity, and 14% reported that funders interfered in their study design or reporting. While survey respondents came from 126 different countries, due to the survey's overall low response rate our results might not necessarily be generalizable. Nevertheless, the results indicate that greater involvement of all stakeholders is needed to align actual practices with current recommendations.
Project description:Peer review is the "gold standard" for evaluating journal and conference papers, research proposals, ongoing projects and university departments. However, it is widely believed that current systems are expensive, conservative and prone to various forms of bias. One form of bias identified in the literature is "social bias" linked to the personal attributes of authors and reviewers. To quantify the importance of this form of bias in modern peer review, we analyze three datasets providing information on the attributes of authors and reviewers and on review outcomes: one from Frontiers, an open access publishing house with a novel interactive review process, and two from Spanish and international computer science conferences, which use traditional peer review. We use a random intercept model in which review outcome is the dependent variable, author and reviewer attributes are the independent variables, and bias is defined by the interaction between author and reviewer attributes. We find no evidence of bias in terms of gender, or the language or prestige of author and reviewer institutions, in any of the three datasets, but some weak evidence of regional bias in all three. Reviewer gender and the language and prestige of reviewer institutions appear to have little effect on review outcomes, but author gender and the characteristics of author institutions have moderate to large effects. The methodology used cannot determine whether these are due to objective differences in scientific merit or to entrenched biases shared by all reviewers.
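The modelling setup described above (review outcome as the dependent variable, author and reviewer attributes as fixed effects, bias operationalized as their interaction, and a random intercept for grouping) can be sketched with statsmodels. This is a minimal sketch under assumed inputs: the CSV path and column names below are hypothetical placeholders, not the study's actual variables.

```python
# Sketch of a random intercept (mixed-effects) model for review outcomes.
# Assumes a numeric review score as the outcome; a binary accept/reject
# outcome would call for a mixed-effects logistic model instead.
import pandas as pd
import statsmodels.formula.api as smf

reviews = pd.read_csv("reviews.csv")  # one row per (submission, reviewer) pair

# Fixed effects: author and reviewer attributes plus their interaction;
# the interaction term is what operationalizes "social bias".
# The random intercept groups repeated reviews of the same submission.
model = smf.mixedlm(
    "outcome ~ author_gender * reviewer_gender",
    data=reviews,
    groups=reviews["submission_id"],
)
result = model.fit()
print(result.summary())
```

Under this formulation, a significant interaction coefficient would mean that the effect of an author attribute on review outcomes depends on the matching reviewer attribute, which is the signature of bias such a design tests for.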
Project description:The COVID-19 pandemic affected the scientific workforce in many ways. Many worried that stay-at-home orders would disproportionately harm the productivity and well-being of women and early-career scientists, who were expected to shoulder more childcare, homeschooling, and other domestic duties while also facing interruptions to the field and lab research essential for career advancement. AGU journal submission data and author and reviewer demographic data allowed us to investigate the effect the pandemic may have had on many Earth and space scientists, especially on women and early-career scientists. However, we found that submissions to AGU journals increased during the pandemic, as did total submissions from women (with no difference in the proportion). Although the rate at which women agreed to review decreased slightly (down 0.5%), women still made up a larger proportion of agreed reviewers during the pandemic compared with 2 years earlier. Little difference was seen overall in median times to complete reviews, except for women in their 40s and 70s, suggesting that they were affected more during the pandemic than other age and gender groups. Although AGU's data do not show that the effects of the pandemic decreased women's participation in AGU journals, the lag between research and writing/submitting may still be seen in later months, which we will continue to report on as we analyze the data. The stay-at-home orders may also have allowed people to devote time to writing up research conducted prepandemic; writing can also be done during downtime hours, which may have supported the increase in submissions to and reviews for AGU journals.