Project description:Multiple published papers are often derived from analyses of the same cohort of individuals to make full use of the collected information. Preplanned study outcomes are generally listed in open databases, while exhaustive information on methodological aspects is provided in the submitted articles.
Project description:Background: The synthesis of multiple qualitative studies can pull together data across different contexts, generate new theoretical or conceptual models, identify research gaps, and provide evidence for the development, implementation and evaluation of health interventions. This study aims to develop a framework for reporting the synthesis of qualitative health research. Methods: We conducted a comprehensive search for guidance and reviews relevant to the synthesis of qualitative research, methodology papers, and published syntheses of qualitative health research in MEDLINE, Embase, CINAHL and relevant organisational websites to May 2011. Initial items were generated inductively from guides to synthesising qualitative health research. The preliminary checklist was piloted against forty published syntheses of qualitative research, purposively selected to capture a range of years of publication, methods and methodologies, and health topics. We removed items that were duplicated or impractical to assess, and rephrased items for clarity. Results: The Enhancing transparency in reporting the synthesis of qualitative research (ENTREQ) statement consists of 21 items grouped into five main domains: introduction, methods and methodology, literature search and selection, appraisal, and synthesis of findings. Conclusions: The ENTREQ statement can help researchers to report the stages most commonly associated with the synthesis of qualitative health research: searching and selecting qualitative research, quality appraisal, and methods for synthesising qualitative findings. The synthesis of qualitative research is an expanding and evolving methodological area, and we would value feedback from all stakeholders for the continued development and extension of the ENTREQ statement.
Project description:Use of machine learning (ML) in clinical research is growing steadily given the increasing availability of complex clinical data sets. ML presents important advantages in terms of predictive performance and the identification of undiscovered subpopulations of patients with specific physiology and prognoses. Despite this popularity, many clinicians and researchers are not yet familiar with evaluating and interpreting ML analyses. Consequently, readers and peer-reviewers alike may either overestimate or underestimate the validity and credibility of an ML-based model. Conversely, ML experts without clinical experience may present details of the analysis that are too granular for a clinical readership to assess. Overwhelming evidence has shown poor reproducibility and reporting of ML models in clinical research, suggesting that ML analyses need to be presented in a clear, concise, and comprehensible manner to facilitate understanding and critical evaluation. We present a recommendation for transparent and structured reporting of ML analysis results directed specifically at clinical researchers. Furthermore, we provide a list of key reporting elements, with examples, that can be used as a template when preparing and submitting ML-based manuscripts for the same audience.
Project description:This article discusses the background to the need for change in the reporting of experiments involving animals, including a report of a consensus meeting organised by the Basel Declaration Society and Understanding Animal Research UK that sought to internationalise guidelines for reporting experiments involving animals. A commentary on the evolution of BJP's attempts to implement the ARRIVE guidelines, and details of our new guidance for authors, is published separately (McGrath, 2014). This is one of a series of editorials discussing updates to the BJP Instructions to Authors. LINKED EDITORIALS: This Editorial is the first in a series. The other Editorials in this series will be published in forthcoming issues. To view them, visit: http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1476-5381.
Project description:From January 2014, Psychological Science introduced new submission guidelines that encouraged the use of effect sizes, estimation, and meta-analysis (the "new statistics"), required extra detail of methods, and offered badges for use of open science practices. We investigated the use of these practices in empirical articles published by Psychological Science and, for comparison, by the Journal of Experimental Psychology: General, during the period of January 2013 to December 2015. The use of null hypothesis significance testing (NHST) was extremely high at all times and in both journals. In Psychological Science, the use of confidence intervals increased markedly overall, from 28% of articles in 2013 to 70% in 2015, as did the availability of open data (3% to 39%) and open materials (7% to 31%). The other journal showed smaller or much smaller changes. Our findings suggest that journal-specific submission guidelines may encourage desirable changes in authors' practices.
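To make the per-year adoption percentages above concrete, here is a minimal sketch (not the study's actual analysis code) of how the share of articles reporting a given practice can be tallied from coded article records; the records, field names, and counts below are hypothetical.

```python
# Sketch: per-year percentage of articles using each practice (hypothetical data).
from collections import defaultdict

articles = [
    {"year": 2013, "uses_ci": False, "open_data": False},
    {"year": 2013, "uses_ci": True,  "open_data": False},
    {"year": 2015, "uses_ci": True,  "open_data": True},
    {"year": 2015, "uses_ci": True,  "open_data": False},
]

# Tally totals and practice counts per publication year.
counts = defaultdict(lambda: {"total": 0, "uses_ci": 0, "open_data": 0})
for a in articles:
    c = counts[a["year"]]
    c["total"] += 1
    c["uses_ci"] += a["uses_ci"]    # bools add as 0/1
    c["open_data"] += a["open_data"]

for year in sorted(counts):
    c = counts[year]
    print(f'{year}: CI use {100 * c["uses_ci"] / c["total"]:.0f}%, '
          f'open data {100 * c["open_data"] / c["total"]:.0f}%')
```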
Project description:The presence of subclinical infection or clinical disease in laboratory zebrafish may have a significant impact on research results, animal health and welfare, and transfer of animals between institutions. As use of zebrafish as a model of disease increases, a harmonized method for monitoring and reporting the health status of animals will facilitate the transfer of animals, allow institutions to exclude diseases that may negatively impact their research programs, and improve animal health and welfare. All zebrafish facilities should implement a health monitoring program. In this study, we review important aspects of a health monitoring program, including choice of agents, samples for testing, available testing methodologies, housing and husbandry, cost, test subjects, and a harmonized method for reporting results. Facilities may use these recommendations to implement their own health monitoring program.
Project description:Clear and findable publishing policies are important for authors to choose appropriate journals for publication. We investigated the clarity of policies of 171 major academic journals across disciplines regarding peer review and preprinting. 31.6% of journals surveyed do not provide information on the type of peer review they use. Information on whether preprints can be posted or not is unclear in 39.2% of journals. 58.5% of journals offer no clear information on whether reviewer identities are revealed to authors. Around 75% of journals have no clear policy on co-reviewing, citation of preprints, and publication of reviewer identities. Information regarding practices of open peer review is even more scarce, with <20% of journals providing clear information. Having found a lack of clear information, we conclude by examining the implications this has for researchers (especially early career) and the spread of open research practices.
Project description:Objective: To evaluate the accuracy of a 2015 cross-sectional analysis published in BMJ Open which reported that pharmaceutical industry compliance with clinical trial registration and results reporting requirements under US law was suboptimal and varied widely among companies. Design: We performed a reassessment of the data reported by Miller et al to evaluate whether the statutory compliance analyses and conclusions were valid. Data sources: Information from the Dryad Digital Repository, ClinicalTrials.gov, Drugs@FDA and direct communications with sponsors. Main outcome measures: Compliance with the clinical trial registration and results reporting requirements under the Food and Drug Administration Amendments Act (FDAAA). Results: Industry compliance with FDAAA disclosure requirements was notably higher than reported by Miller et al. Among trials subject to FDAAA, Miller et al reported that, per drug, a median of 67% (middle 50% range: 0%-100%) of trials fully complied with registration and results reporting requirements. On reanalysis of the data, we found that a median of 100% (middle 50% range: 93%-100%) of clinical trials for a particular drug fully complied with the law. When looking at overall compliance at the trial level, our reassessment yields 94% timely registration and 90% timely results reporting among the 49 eligible trials, and an overall FDAAA compliance rate of 86%. Conclusions: The claim by Miller et al that industry compliance is below legal standards is based on an analysis that relies on an incomplete dataset and an interpretation of FDAAA that requires disclosure of study results for drugs that have not yet been approved for any indication. On reanalysis using a different interpretation of FDAAA that focuses on whether results were disclosed within 30 days of drug approval, we found that industry compliance with US statutory disclosure requirements for the 15 reviewed drugs was consistently high.
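The gap between the per-drug median (100%) and the pooled trial-level rate (86%) above comes down to the level of aggregation: a median taken over per-drug compliance fractions can differ sharply from a compliance rate pooled over all eligible trials. Here is a minimal sketch of the two calculations, using hypothetical drugs and per-trial compliance flags rather than the actual FDAAA data.

```python
# Sketch: per-drug median vs pooled trial-level compliance (hypothetical data).
from statistics import median

# drug -> list of per-trial compliance flags (True = fully compliant)
trials_by_drug = {
    "drug_a": [True, True, True],
    "drug_b": [True, False],
    "drug_c": [True, True, True, True],
}

# Per-drug aggregation: compliance fraction for each drug, then the median across drugs.
per_drug = [sum(flags) / len(flags) for flags in trials_by_drug.values()]
print(f"median per-drug compliance: {100 * median(per_drug):.0f}%")    # 100%

# Trial-level aggregation: pool every eligible trial across all drugs.
all_trials = [f for flags in trials_by_drug.values() for f in flags]
print(f"overall trial-level compliance: {100 * sum(all_trials) / len(all_trials):.0f}%")  # 89%
```

With this toy data the median per-drug compliance is 100% even though one in nine trials is non-compliant, which illustrates how the two summary statistics in the abstract can legitimately diverge.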