Project description: Background: As a result of reporting bias or fraud, false or misunderstood findings may represent the majority of published research claims. This article provides simple methods that might help to appraise the quality of the reporting of randomized controlled trials (RCTs). Methods: The evaluation roadmap proposed herein relies on four steps: evaluation of the distribution of the reported variables; evaluation of the distribution of the reported p values; data simulation using parametric bootstrap; and explicit computation of the p values. The approach is illustrated using published data from a retracted RCT comparing hydroxyethyl starch versus albumin-based priming for cardiopulmonary bypass. Results: Despite obviously nonnormal distributions, several variables are presented as if they were normally distributed. The set of 16 p values testing for differences in baseline characteristics across randomized groups did not follow a uniform distribution on [0,1] (p = 0.045). The p values obtained by explicit computation differed from the results reported by the authors for the two following variables: urine output at 5 hours (calculated p value < 10^-6, reported p > 0.05) and packed red blood cells (PRBC) transfused during surgery (calculated p value = 0.08, reported p < 0.05). Finally, the parametric bootstrap yielded a p value > 0.05 in only 5 of the 10,000 simulated datasets for urine output 5 hours after surgery. For PRBC transfused during surgery, the parametric bootstrap showed that the corresponding p value had less than a 50% chance of being below 0.05 (3,920/10,000 simulated datasets with p < 0.05). Conclusions: Such simple evaluation methods might offer warning signals. However, it should be emphasized that these methods do not allow one to conclude that error or fraud is present; rather, they should be used to justify requesting access to the raw data.
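A minimal sketch of two of the four steps described above, not the authors' code: (1) testing whether a set of reported baseline p values is compatible with a Uniform[0,1] distribution, and (2) a parametric bootstrap that asks how often two groups with given means and SDs would yield p > 0.05. All numbers below are illustrative placeholders, not data from the trial.

```python
# Illustrative re-implementation of two checks described above; all inputs are
# placeholder values, not the trial's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Step 1: under a correct randomization, baseline-comparison p values should be
# roughly Uniform[0,1]; a Kolmogorov-Smirnov test flags gross departures.
reported_baseline_p = rng.uniform(size=16)   # placeholder for the 16 reported p values
ks_stat, ks_p = stats.kstest(reported_baseline_p, "uniform")
print(f"KS test against Uniform[0,1]: p = {ks_p:.3f}")

# Step 2: parametric bootstrap from reported summary statistics
# (hypothetical means, SDs, and sample sizes for a single outcome).
m1, sd1, n1 = 900.0, 300.0, 50    # group 1 (illustrative)
m2, sd2, n2 = 1400.0, 350.0, 50   # group 2 (illustrative)
n_sim = 10_000
count_ns = 0
for _ in range(n_sim):
    x1 = rng.normal(m1, sd1, n1)
    x2 = rng.normal(m2, sd2, n2)
    count_ns += stats.ttest_ind(x1, x2, equal_var=False).pvalue > 0.05
print(f"simulated datasets with p > 0.05: {count_ns}/{n_sim}")
```

If almost none of the simulated datasets reproduce the reported (non)significance, that is the kind of warning signal the article describes.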
Project description: Background: Although peer review is widely considered to be the most credible way of selecting manuscripts and improving the quality of accepted papers in scientific journals, there is little evidence to support its use. Our aim was to estimate the effects on manuscript quality of adding a statistical peer reviewer, of suggesting the use of checklists such as CONSORT or STARD to clinical reviewers, or of both. Methodology and principal findings: Interventions were defined as (1) the addition of a statistical reviewer to the clinical peer review process and (2) suggesting reporting guidelines to reviewers, with "no statistical expert" and "no checklist" as controls. The two interventions were crossed in a 2x2 balanced factorial design including original research articles consecutively selected, between May 2004 and March 2005, by the Medicina Clinica (Barc) editorial committee. We randomized manuscripts to minimize differences in baseline quality and type of study (intervention, longitudinal, cross-sectional, others). Sample-size calculations indicated that 100 papers would provide 80% power to detect a 55% standardized difference. We specified the main outcome as the increment in quality of papers as measured on the Goodman Scale. Two blinded evaluators rated the quality of manuscripts at initial submission and in the final post-peer-review version. Of the 327 manuscripts submitted to the journal, 131 were accepted for further review and 129 were randomized. Of those, the 14 lost to follow-up showed no difference in initial quality from the followed-up papers. Hence, 115 were included in the main analysis, of which 16 were rejected for publication after peer review. Of the 115 included papers, 21 (18.3%) were interventions, 46 (40.0%) longitudinal designs, 28 (24.3%) cross-sectional and 20 (17.4%) others. The 16 (13.9%) rejected papers had a significantly lower initial score on the overall Goodman scale than accepted papers (difference 15.0, 95% CI 4.6 to 24.4). Suggesting a guideline to the reviewers had no effect on the change in overall quality as measured by the Goodman scale (0.9, 95% CI -0.3 to +2.1). The estimated effect of adding a statistical reviewer was 5.5 (95% CI 4.3 to 6.7), a significant improvement in quality. Conclusions and significance: This prospective randomized study shows the positive effect of adding a statistical reviewer to the field-expert peers in improving manuscript quality. We did not find a statistically significant positive effect of suggesting that reviewers use reporting guidelines.
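A minimal sketch, assumed rather than taken from the study, of the kind of sample-size reasoning reported above and of how main effects are estimated in a 2x2 factorial design. The data and effect magnitudes in the factorial example are simulated placeholders, loosely echoing the reported estimates.

```python
# Illustrative sample-size and factorial-analysis sketch; simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.power import TTestIndPower

# Sample size per comparison group to detect a standardized difference of 0.55
# with 80% power at a two-sided alpha of 0.05.
n_per_group = TTestIndPower().solve_power(effect_size=0.55, power=0.80, alpha=0.05)
print(f"~{np.ceil(n_per_group):.0f} papers per comparison group")

# Illustrative analysis of a 2x2 factorial: quality change ~ statistician + checklist.
rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    "statistician": rng.integers(0, 2, n),   # 1 = statistical reviewer added
    "checklist": rng.integers(0, 2, n),      # 1 = reporting guideline suggested
})
# Simulated outcome using placeholder effect sizes, not the study's raw data.
df["quality_change"] = 5.5 * df["statistician"] + 0.9 * df["checklist"] + rng.normal(0, 8, n)
model = smf.ols("quality_change ~ statistician + checklist", data=df).fit()
print(model.params)      # estimated main effect of each intervention
print(model.conf_int())  # 95% confidence intervals
```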
Project description: Introduction: Patients, families and clinicians rely on published research to help inform treatment decisions. Without complete reporting of the outcomes studied, evidence-based clinical and policy decisions are limited and researchers cannot synthesise, replicate or build on existing research findings. To facilitate harmonised reporting of outcomes in published trial protocols and reports, the Instrument for reporting Planned Endpoints in Clinical Trials (InsPECT) is under development. As one of the initial steps in the development of InsPECT, a scoping review will identify and synthesise existing guidance on the reporting of trial outcomes. Methods and analysis: We will apply methods based on the Joanna Briggs Institute scoping review methods manual. Documents that provide explicit guidance on trial outcome reporting will be identified using: (1) an electronic bibliographic database search; (2) a grey literature search; and (3) solicitation of colleagues for guidance documents using a snowballing approach. Reference list screening will be performed for included documents. Search results will be divided between two trained reviewers, who will complete title and abstract screening, full-text screening and data charting. Captured trial outcome reporting guidance will be compared with candidate InsPECT items to support, refute or refine InsPECT content and to assess the need for the development of additional items. Data analysis will explore common features of guidance and use quantitative measures (e.g., frequencies) to characterise guidance and its sources. Ethics and dissemination: A paper describing the review findings will be published in a peer-reviewed journal. The results will be used to inform the InsPECT development process, helping to ensure that InsPECT provides an evidence-based tool for standardising trial outcome reporting.
Project description: AIMS: The aim of this study was to provide guidance to improve the completeness and clarity of meta-ethnography reporting. BACKGROUND: Evidence-based policy and practice require robust evidence syntheses which can further understanding of people's experiences and associated social processes. Meta-ethnography is a rigorous seven-phase qualitative evidence synthesis methodology, developed by Noblit and Hare. Meta-ethnography is used widely in health research, but reporting is often of poor quality, and this discourages trust in and use of its findings. Meta-ethnography reporting guidance is needed to improve reporting quality. DESIGN: The eMERGe study used a rigorous mixed-methods design and evidence-based methods to develop the novel reporting guidance and explanatory notes. METHODS: The study, conducted from 2015 to 2017, comprised: (1) a methodological systematic review of guidance for meta-ethnography conduct and reporting; (2) a review and audit of published meta-ethnographies to identify good practice principles; (3) international, multidisciplinary consensus-building processes to agree guidance content; and (4) innovative development of the guidance and explanatory notes. FINDINGS: Recommendations and good practice for all seven phases of meta-ethnography conduct and reporting were newly identified, leading to 19 reporting criteria and accompanying detailed guidance. CONCLUSION: The bespoke eMERGe Reporting Guidance, which incorporates new methodological developments and advances the methodology, can help researchers to report the important aspects of meta-ethnography. Use of the guidance should raise reporting quality. Better reporting could make assessments of confidence in the findings more robust and increase use of meta-ethnography outputs to improve practice, policy, and service user outcomes in health and other fields. This is the first tailored reporting guideline for meta-ethnography. This article is being simultaneously published in the following journals: Journal of Advanced Nursing, Psycho-oncology, Review of Education, and BMC Medical Research Methodology.
Project description: Although regression models play a central role in the analysis of medical research projects, many misconceptions about various aspects of modeling still lead to faulty analyses. Indeed, the rapidly developing statistical methodology and its recent advances in regression modeling do not seem to be adequately reflected in many medical publications. This problem of knowledge transfer from statistical research to application has been recognized by some medical journals, which have published series of statistical tutorials and (shorter) papers mainly addressed to medical researchers. The aim of this review was to assess the current level of knowledge with regard to regression modeling contained in such statistical papers. We searched for target series through a request to international statistical experts. We identified 23 series including 57 topic-relevant articles. Within each article, two independent raters analyzed the content by investigating 44 predefined aspects of regression modeling. We assessed to what extent the aspects were explained and whether examples, software advice, and recommendations for or against specific methods were given. Most series (21/23) included at least one article on multivariable regression. Logistic regression was the most frequently described regression type (19/23), followed by linear regression (18/23), Cox regression and survival models (12/23) and Poisson regression (3/23). Most general aspects of regression modeling, e.g., model assumptions, reporting and interpretation of regression results, were covered. We did not find many misconceptions or misleading recommendations, but we identified relevant gaps, in particular with respect to addressing nonlinear effects of continuous predictors, model specification and variable selection. Specific recommendations on software were rarely given. Statistical guidance should be developed for nonlinear effects, model specification and variable selection to better support medical researchers who perform or interpret regression analyses.
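A minimal sketch, illustrative rather than drawn from the review, of one gap it identifies: modeling a nonlinear effect of a continuous predictor instead of assuming linearity. The data, the predictor "age", and the spline degrees of freedom are all assumptions chosen for the example.

```python
# Compare a linear logistic model with one using a natural cubic spline basis
# (patsy's cr()) for a continuous predictor; data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(30, 80, n)
# True risk is U-shaped in age, which a purely linear term would miss.
logit = -2 + 0.003 * (age - 55) ** 2
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"age": age, "y": y})

linear = smf.logit("y ~ age", data=df).fit(disp=0)
spline = smf.logit("y ~ cr(age, df=4)", data=df).fit(disp=0)  # natural cubic spline basis
print(f"AIC linear: {linear.aic:.1f}   AIC spline: {spline.aic:.1f}")
```

With a genuinely nonlinear relationship, the spline model typically achieves a clearly lower AIC, which is the kind of check the review suggests tutorial articles rarely explain.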
Project description: Experimental philosophy (x-phi) is a young field of research at the intersection of philosophy and psychology. It aims to make progress on philosophical questions by using experimental methods traditionally associated with the psychological and behavioral sciences, such as null hypothesis significance testing (NHST). Motivated by recent discussions about a methodological crisis in the behavioral sciences, researchers have raised questions about the methodological standards of x-phi. Here, we focus on one aspect of this question, namely the rate of inconsistencies in statistical reporting. Previous research has examined the extent to which published articles in psychology and other behavioral sciences present statistical inconsistencies in reporting the results of NHST. In this study, we used the R package statcheck to detect statistical inconsistencies in x-phi and compared rates of inconsistencies in psychology and philosophy. We found that rates of inconsistencies in x-phi are lower than in the psychological and behavioral sciences. From the point of view of statistical reporting consistency, x-phi seems to do no worse, and perhaps even better, than psychological science.
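A minimal Python sketch of the core consistency check that statcheck performs (statcheck itself is an R package; this re-implements only the basic idea, with a simple rounding rule assumed for illustration): recompute the p value from a reported test statistic and degrees of freedom, then compare it with the reported p value.

```python
# Recompute a two-sided p value from a reported t statistic and df and compare
# it with the reported p value, allowing for rounding.
from scipy import stats

def check_t_report(t_value: float, df: int, reported_p: float, digits: int = 3) -> bool:
    """Return True if the reported two-sided p value matches the recomputed one."""
    recomputed = 2 * stats.t.sf(abs(t_value), df)
    # Treat the report as consistent if both values agree after rounding.
    return round(recomputed, digits) == round(reported_p, digits)

# "t(28) = 2.20, p = .036" -> recomputed p ~ 0.036, consistent.
print(check_t_report(2.20, 28, 0.036))   # True
# "t(28) = 2.20, p = .010" -> inconsistent report.
print(check_t_report(2.20, 28, 0.010))   # False
```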
Project description: Study design, statistical analysis, interpretation of results, and conclusions should be part of all research papers. Statistics are integral to each of these components and must therefore be evaluated during manuscript peer review. Research published in Toxicologic Pathology is often focused on animal studies that may seek to compare defined treatment groups in randomized controlled experiments or focus on the reliability of measurements and the diagnostic accuracy of observed lesions from preexisting studies. Reviewers should distinguish scientific research goals that aim to test sufficient effect size differences (i.e., minimizing false positive rates) from common toxicologic goals of detecting a harmful effect (i.e., minimizing false negative rates). This journal comprises a wide range of study designs that require different kinds of statistical assessment. Therefore, statistical methods should be described in enough detail that the experiment can be repeated by other research groups. The misuse of statistics will impede reproducibility.
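A minimal sketch, illustrative and not from the article, of the trade-off drawn above: alpha controls the false positive rate, power (1 - beta) controls the false negative rate, and the required sample size depends on which error the study design prioritizes. The effect size and error rates are arbitrary example values.

```python
# Sample size per group for two hypothetical design priorities.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect = 0.5  # assumed standardized difference between treatment groups

# Design prioritizing a low false positive rate (stringent alpha, usual power).
n_strict_alpha = analysis.solve_power(effect_size=effect, alpha=0.01, power=0.80)
# Design prioritizing a low false negative rate (usual alpha, high power).
n_high_power = analysis.solve_power(effect_size=effect, alpha=0.05, power=0.95)

print(f"n per group, alpha=0.01 / power=0.80: {n_strict_alpha:.0f}")
print(f"n per group, alpha=0.05 / power=0.95: {n_high_power:.0f}")
```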
Project description: Research needs to be reported transparently so readers can critically assess the strengths and weaknesses of the design, conduct, and analysis of studies. Reporting guidelines have been developed to inform reporting for a variety of study designs. The objective of this study was to identify whether there is a need to develop a reporting guideline for survey research. We conducted a three-part project: (1) a systematic review of the literature (including "Instructions to Authors" from the top five journals of 33 medical specialties and the top 15 general and internal medicine journals) to identify guidance for reporting survey research; (2) a systematic review of evidence on the quality of reporting of surveys; and (3) a review of the reporting of key quality criteria for survey research in 117 recently published reports of self-administered surveys. Fewer than 7% of medical journals (n = 165) provided guidance to authors on survey research, despite a majority having published survey-based studies in recent years. We identified four published checklists for conducting or reporting survey research, none of which were validated. We identified eight previous reviews of survey reporting quality, which focused on issues of non-response and accessibility of questionnaires. Our own review of 117 published survey studies revealed that many items were poorly reported: few studies provided the survey or core questions (35%), reported the validity or reliability of the instrument (19%), defined the response rate (25%), discussed the representativeness of the sample (11%), or identified how missing data were handled (11%). There is limited guidance and no consensus regarding the optimal reporting of survey research. The majority of key reporting criteria are poorly reported in peer-reviewed survey research articles. Our findings highlight the need for clear and consistent reporting guidelines specific to survey research.
Project description: Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting of results is quite uncommon among psychologists, while data sharing among co-authors appears to be reasonably, though not completely, standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
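A minimal sketch, assumed rather than taken from the authors' procedure, of the distinction the study draws between an inconsistent p-value and a gross inconsistency, i.e., one severe enough to flip the significance decision at the 0.05 level. It extends the basic recomputation check sketched earlier with that decision-flip criterion; the rounding rule and example values are assumptions.

```python
# Classify a reported t test as consistent, inconsistent, or grossly inconsistent.
from scipy import stats

def classify_t_report(t_value, df, reported_p, alpha=0.05, digits=3):
    """Classify a reported t test result against the recomputed p value."""
    recomputed = 2 * stats.t.sf(abs(t_value), df)
    if round(recomputed, digits) == round(reported_p, digits):
        return "consistent"
    # Gross inconsistency: reported and recomputed p fall on opposite sides of alpha,
    # so the inconsistency may have affected the statistical-significance decision.
    if (reported_p < alpha) != (recomputed < alpha):
        return "gross"
    return "inconsistent"

# "t(40) = 1.50, p = .04": recomputed p ~ 0.14, so the significance decision flips.
print(classify_t_report(1.50, 40, 0.04))    # gross
# "t(40) = 2.50, p = .015": recomputed p ~ 0.017, a minor discrepancy only.
print(classify_t_report(2.50, 40, 0.015))   # inconsistent
```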