Project description:BACKGROUND: Lack of appropriate reporting of methodological details has previously been shown to distort risk of bias assessments in randomized controlled trials. The same might be true for observational studies. The goal of this study was to compare Newcastle-Ottawa Scale (NOS) risk of bias assessments between reviewers and authors of cohort studies included in a published systematic review on risk factors for severe outcomes in patients infected with influenza. METHODS: Cohort studies included in the systematic review and published between 2008 and 2011 were included. The corresponding or first authors completed a survey covering all NOS items. Results were compared with the NOS assessment applied by reviewers of the systematic review. Inter-rater reliability was calculated using kappa (K) statistics. RESULTS: Authors of 65/182 (36%) studies completed the survey. The overall NOS score was significantly higher (p < 0.001) in the reviewers' assessment (median = 6; interquartile range [IQR] 6-6) compared with the authors' (median = 5, IQR 4-6). Inter-rater reliability by item ranged from slight (K = 0.15, 95% confidence interval [CI] = -0.19, 0.48) to poor (K = -0.06, 95% CI = -0.22, 0.10). Reliability for the overall score was poor (K = -0.004, 95% CI = -0.11, 0.11). CONCLUSIONS: Differences in assessment and low agreement between reviewers and authors suggest the need to contact authors for information not published in studies when applying the NOS in systematic reviews.
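The inter-rater reliability figures above are based on the kappa statistic; as a minimal, self-contained sketch of how an item-level Cohen's kappa is computed (all ratings below are illustrative, not data from the study):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    if len(rater_a) != len(rater_b):
        raise ValueError("raters must score the same items")
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's marginal category frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical item-level judgments (e.g. one NOS item scored "yes"/"no")
# by a systematic-review reviewer and by a study author.
reviewer = ["yes", "yes", "no", "yes", "no", "yes", "no", "no"]
author = ["yes", "no", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohen_kappa(reviewer, author), 2))  # 0.25
```

Kappa corrects the raw agreement rate for the agreement expected by chance, which is why two raters who agree 62.5% of the time (as above) can still show only slight agreement (K = 0.25).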
Project description:Audio descriptions (ADs) can increase access to videos for blind people. Researchers have explored different mechanisms for generating ADs, with some of the most recent studies involving paid novices; to improve the quality of their ADs, novices receive feedback from reviewers. However, reviewer feedback is not instantaneous. To explore the potential for real-time feedback through automation, in this paper, we analyze 1,120 comments that 40 sighted novices received from a sighted or a blind reviewer. We find that feedback patterns tend to fall under four themes: (i) Quality: commenting on different AD quality variables; (ii) Speech Act: the utterance or speech action that the reviewers used; (iii) Required Action: the recommended action that the authors should take to improve the AD; and (iv) Guidance: the additional help that the reviewers gave the authors. We discuss which of these patterns could be automated within the review process as design implications for future AD collaborative authoring systems.
Project description:Open peer review (OPR) is a cornerstone of the emergent Open Science agenda. Yet to date no large-scale survey of attitudes towards OPR amongst academic editors, authors, reviewers and publishers has been undertaken. This paper presents the findings of an online survey, conducted for the OpenAIRE2020 project during September and October 2016, that sought to bridge this information gap in order to aid the development of appropriate OPR approaches by providing evidence about attitudes towards and levels of experience with OPR. The results of this cross-disciplinary survey, which received 3,062 full responses, show that the majority (60.3%) of respondents believe that OPR as a general concept should be mainstream scholarly practice (although attitudes to individual traits varied, and open identities peer review was not generally favoured). Respondents were also in favour of other areas of Open Science, like Open Access (88.2%) and Open Data (80.3%). Among respondents we observed high levels of experience with OPR, with three out of four (76.2%) reporting having taken part in an OPR process as author, reviewer or editor. There were also high levels of support for most of the traits of OPR, particularly open interaction, open reports and final-version commenting. Respondents were against opening reviewer identities to authors, however, with more than half believing it would make peer review worse. Overall satisfaction with the peer review system used by scholarly journals seems to vary strongly across disciplines. Taken together, these findings are very encouraging for OPR's prospects of moving mainstream, but they indicate that due care must be taken to avoid a "one-size-fits-all" solution and to tailor such systems to differing (especially disciplinary) contexts. OPR is an evolving phenomenon, and future studies are to be encouraged, especially to further explore differences between disciplines and to monitor the evolution of attitudes.
Project description:BACKGROUND: In biomedical research, there have been numerous scandals highlighting conflicts of interest (COIs) leading to significant bias in judgment and questionable practices. Academic institutions, journals, and funding agencies have developed and enforced policies to mitigate issues related to COI, especially surrounding financial interests. After a case of editorial COI in a prominent bioethics journal, there is concern that the same level of oversight regarding COIs in the biomedical sciences may not apply to the field of bioethics. In this study, we examined the availability and comprehensiveness of COI policies for authors, peer reviewers, and editors of bioethics journals. METHODS: After developing a codebook, we analyzed the content of the online COI policies of 63 bioethics journals, along with policy information provided by journal editors that was not publicly available. RESULTS: Just over half of the bioethics journals had COI policies for authors (57%), and only 25% had them for peer reviewers and 19% for editors. There was significant variation among policies regarding definitions, the types of COIs described, the management mechanisms, and the consequences for noncompliance. Definitions and descriptions centered on financial COIs, followed by personal and professional relationships. Almost all COI policies required disclosure of interests by authors as the primary management mechanism. Very few journals outlined consequences for noncompliance with COI policies or provided additional resources. CONCLUSION: Compared to other studies of biomedical journals, a much lower percentage of bioethics journals have COI policies, and these vary substantially in content. The bioethics publishing community needs to develop robust policies for authors, peer reviewers, and editors, and these should be made publicly available to enhance academic and public trust in bioethics scholarship.
Project description:Purpose: Recent calls to improve transparency in peer review have prompted examination of many aspects of the peer-review process. Peer-review systems often allow confidential comments to editors that could reduce transparency to authors, yet this option has escaped scrutiny. Our study explores 1) how reviewers use the confidential comments section and 2) alignment between comments to the editor and comments to authors with respect to content and tone. Methods: Our dataset included 358 reviews of 168 manuscripts submitted between January 1, 2019 and August 24, 2020 to a health professions education journal with a single-blind review process. We first identified reviews containing comments to the editor. Then, for the reviews with comments, we used procedures consistent with conventional and directed qualitative content analysis to develop a coding scheme and code comments for content, tone, and section of the manuscript. For reviews in which the reviewer recommended "reject," we coded for alignment between reviewers' comments to the editor and to authors. We report descriptive statistics. Results: 49% of reviews contained comments to the editor (n = 176). Most of these comments summarized the reviewers' impression of the article (85%), which included explicit reference to their recommended decision (44%) and suitability for the journal (10%). The majority of comments addressed argument quality (56%) or research design/methods/data (51%). The tone of comments tended to be critical (40%) or constructive (34%). For the 86 reviews recommending "reject," the majority of comments to the editor contained content that also appeared in comments to the authors (80%); additional content tended to be irrelevant to the manuscript. Tone frequently aligned (91%). Conclusion: Findings indicate variability in how reviewers use the confidential comments-to-editor section in online peer-review systems, though generally the way reviewers use these comments suggests integrity and transparency toward authors.
Project description:Peer review is the "gold standard" for evaluating journal and conference papers, research proposals, ongoing projects and university departments. However, it is widely believed that current systems are expensive, conservative and prone to various forms of bias. One form of bias identified in the literature is "social bias" linked to the personal attributes of authors and reviewers. To quantify the importance of this form of bias in modern peer review, we analyze three datasets providing information on the attributes of authors and reviewers and on review outcomes: one from Frontiers - an open access publishing house with a novel interactive review process - and two from Spanish and international computer science conferences, which use traditional peer review. We use a random intercept model in which review outcome is the dependent variable, author and reviewer attributes are the independent variables, and bias is defined by the interaction between author and reviewer attributes. We find no evidence of bias in terms of gender, or the language or prestige of author and reviewer institutions, in any of the three datasets, but some weak evidence of regional bias in all three. Reviewer gender and the language and prestige of reviewer institutions appear to have little effect on review outcomes, but author gender and the characteristics of author institutions have moderate to large effects. The methodology used cannot determine whether these are due to objective differences in scientific merit or to entrenched biases shared by all reviewers.
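A random intercept model with bias defined as an author-reviewer interaction can be written generically as follows (the notation here is illustrative, not taken from the paper):

```latex
y_{ij} = \beta_0 + \beta_1 A_{ij} + \beta_2 R_{ij}
       + \beta_3 \,(A_{ij} \times R_{ij}) + u_j + \varepsilon_{ij},
\qquad u_j \sim \mathcal{N}(0, \sigma_u^2),
\quad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)
```

where \(y_{ij}\) is the outcome of review \(i\) in grouping unit \(j\) (e.g. manuscript or venue), \(A_{ij}\) and \(R_{ij}\) are author and reviewer attributes, and \(u_j\) is the random intercept absorbing group-level variation. Under this specification, "social bias" corresponds to a nonzero interaction coefficient \(\beta_3\): the effect of an author attribute on the outcome differs depending on the reviewer's attribute.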
Project description:Response to comments on Cui Q-Q et al: "Hippocampal CD 39/ENTPD 1 promotes mouse depression-like behavior through hydrolyzing extracellular ATP".
Project description:OBJECTIVE: Many journals permit authors to submit supplementary material for publication alongside the article. We explore the value, use and role of this material in biomedical journal articles from the perspectives of authors, peer reviewers and readers. DESIGN AND SETTING: We conducted online surveys (November-December 2016) of corresponding authors and peer reviewers at 17 BMJ Publishing Group journals in a range of specialities. PARTICIPANTS: Participants were asked to respond to one of three surveys: as authors, peer reviewers or readers. RESULTS: We received 2872/20340 (14%) responses: authors 819/6892 (12%), peer reviewers 1142/6682 (17%) and readers 911/6766 (14%). Most authors submitted supplementary material with their last article (711/819, 87%) and 80% (724/911) of readers reported reading it, while 95% (1086/1142) of reviewers reported at least sometimes seeing these materials. Additional data tables were the most common supplementary material reported (authors: 74%; reviewers: 89%; readers: 67%). A majority in each group indicated additional tables were most useful to readers (61%-77%); 20%-36% and 3%-4% indicated they were most useful to peer reviewers and journal editors, respectively. Checklists and reporting guidelines showed the opposite pattern: higher proportions of each group regarded these as most useful to journal editors. All three groups favoured the publication of additional tables and figures on the journal's website (80%-83%), with <4% of each group responding that these do not need to be available. Approximately one-fifth (16%-23%) responded that raw study data should be available on the journal's website, while 24%-33% said that these materials should not be made available anywhere. CONCLUSIONS: Authors, peer reviewers and readers agree that supplementary materials are useful. Supplementary tables and figures were favoured over reporting checklists or raw data for reading but not for study replication. Journals should consider the roles, resource costs and strategic placement of supplementary materials to ensure optimal usage and minimise waste. TRIAL REGISTRATION NUMBER: NCT02961036.
Project description:Background: Several influential aspects of survey research have been under-investigated, and there is a lack of guidance on reporting survey studies, especially web-based projects. In this review, we aim to investigate the reporting practices and quality of both web- and non-web-based survey studies to enhance the quality of reporting of medical evidence derived from survey studies and to maximize the efficiency of its consumption. Methods: The reporting practices and quality of 100 random web-based and 100 random non-web-based articles published from 2004 to 2016 were assessed using the SUrvey Reporting GuidelinE (SURGE). The CHERRIES guideline was also used to assess the reporting quality of web-based studies. Results: Our results revealed a potential gap in the reporting of many necessary checklist items in both web-based and non-web-based survey studies, including development, description and testing of the questionnaire; advertisement and administration of the questionnaire; sample representativeness and response rates; incentives; informed consent; and methods of statistical analysis. Conclusion: Our findings confirm the presence of major discrepancies in the reporting of results of survey-based studies. This can be attributed to the lack of an updated universal checklist for reporting quality standards. We have summarized our findings in a table that may serve as a roadmap for future guidelines and checklists, which will hopefully cover all types and all aspects of survey research.
Project description:Synthesizing outcomes of underreported primary studies can pose a serious threat to the validity of the outcomes and conclusions of systematic reviews. To address this problem, the Cochrane Collaboration recommends that reviewers contact authors of eligible primary studies to obtain additional information on poorly reported items. In this protocol, we present a cross-sectional study and a survey to assess (1) how reviewers of new Cochrane intervention reviews report on the procedures and outcomes of contacting authors of primary studies to obtain additional data, (2) how authors reply, and (3) the consequences of these additional data for the outcomes and quality scores in the review. All research questions and methods were pilot tested on 2 months of Cochrane reviews and subsequently fine-tuned. Eligibility criteria are (1) all new (not updated) Cochrane intervention reviews published in 2016, (2) reviews that included one or more primary studies, and (3) eligible interventions refer to contacting authors of the eligible primary studies included in the review to obtain additional research data (e.g., information on unreported or missing data, individual patient data, research methods, and bias issues). Searching for eligible reviews and data extraction will be conducted by two authors independently. The cross-sectional study will primarily focus on how contacting of authors is conducted and reported, how contacted authors reply, and how reviewers report on obtained additional data and their consequences for the review. The reviews eligible for the cross-sectional study will also be eligible for the survey. Surveys will be sent to the contact addresses of these reviews according to a pre-defined protocol. We will use Google Forms as our survey platform. Surveyees are asked to answer eight questions. The survey will primarily focus on the consequences of contacting authors of eligible primary studies for the risk of bias and Grading of Recommendations, Assessment, Development and Evaluation scores and for the primary and secondary outcomes of the review. The findings of this study could help improve methods of contacting authors and the reporting of these procedures and their outcomes. Patients, clinicians, researchers, guideline developers, research sponsors, and the general public will all be beneficiaries.