Project description: Background: There is increasing interest in making primary data from published research publicly available. We aimed to assess the current status of making research data available in highly cited journals across the scientific literature. Methods and results: We reviewed the first 10 original research papers of 2009 published in the 50 original research journals with the highest impact factor. For each journal we documented the policies related to public availability and sharing of data. Of the 50 journals, 44 (88%) had a statement in their instructions to authors related to public availability and sharing of data. However, journal requirements varied widely, ranging from requiring the sharing of all primary data related to the research to merely including a statement in the published manuscript that data can be made available on request. Of the 500 assessed papers, 149 (30%) were not subject to any data availability policy. Of the remaining 351 papers covered by some data availability policy, 208 (59%) did not fully adhere to the data availability instructions of the journals they were published in, most commonly (73%) by not publicly depositing microarray data. The other 143 papers adhered to the data availability instructions by publicly depositing only the specific data type required, making a statement of willingness to share, or actually sharing all the primary data. Overall, only 47 papers (9%) deposited full primary raw data online. None of the 149 papers not subject to data availability policies made their full primary data publicly available. Conclusion: A substantial proportion of original research papers published in high-impact journals are either not subject to any data availability policy or do not adhere to the data availability instructions of their respective journals. This empirical evaluation highlights opportunities for improvement.
Project description: Good-quality medical research generally requires not only expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles published in Indian medical journals has increased substantially over the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses in selected leading Indian medical journals. Ten leading Indian medical journals were selected based on impact factor, and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used to review and evaluate the articles. The main outcomes considered were study design types and their frequencies, the proportion of errors/defects in study design and statistical analyses, and implementation of the CONSORT checklist in randomized clinical trials (RCTs). From 2003 to 2013, the proportion of erroneous statistical analyses did not decrease (χ2=0.592, Φ=0.027, p=0.4418): 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013: the proportion of papers using statistical tests increased significantly (χ2=26.96, Φ=0.16, p<0.0001) from 42.5% (250/588) to 56.7% (439/774), and the overall proportion of errors in study design decreased significantly (χ2=16.783, Φ=0.12, p<0.0001), from 41.3% (243/588) to 30.6% (237/774). In 2013, the proportion of randomized clinical trial designs remained very low (7.3%, 43/588), with the majority containing errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)].
Major decreases in error proportions were observed in both results presentation (χ2=24.477, Φ=0.17, p<0.0001), from 82.2% (263/320) to 66.3% (325/490), and interpretation (χ2=25.616, Φ=0.173, p<0.0001), from 32.5% (104/320) to 17.1% (84/490), though some serious errors were still present. Indian medical research appears to have made no major progress in using correct statistical analyses, but errors/defects in study design have decreased significantly. Randomized clinical trials are rarely published and have a high proportion of methodological problems.
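The year-to-year comparisons above can be reproduced directly from the reported counts. A minimal sketch, using the erroneous-statistical-analysis counts from this abstract (80/320 in 2003 vs. 111/490 in 2013) and omitting the Yates continuity correction, which matches the reported statistics:

```python
from scipy.stats import chi2_contingency

# Erroneous vs. non-erroneous statistical analyses, counts from the abstract
table = [[80, 320 - 80],    # 2003: erroneous, not erroneous
         [111, 490 - 111]]  # 2013: erroneous, not erroneous

# correction=False disables the Yates continuity correction for 2x2 tables
chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = sum(sum(row) for row in table)
phi = (chi2 / n) ** 0.5  # phi effect size for a 2x2 table

print(f"chi2={chi2:.3f}, phi={phi:.3f}, p={p:.4f}")
# reproduces the reported chi2=0.592, phi=0.027, p=0.4418
```

The same call, with the other counts from the abstract substituted into `table`, reproduces the remaining χ2, Φ, and p values quoted above.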
Project description: Importance: High-quality peer reviews are often thought to be essential to ensuring the integrity of the scientific publication process, but measuring peer review quality is challenging. Although imperfect, review word count could potentially serve as a simple, objective metric of review quality. Objective: To determine the prevalence of very short peer reviews and how often they inform editorial decisions on research articles in 3 leading general medical journals. Design, setting, and participants: This cross-sectional study compiled a data set of peer reviews from published, full-length original research articles from 3 general medical journals (The BMJ, PLOS Medicine, and BMC Medicine) between 2003 and 2022. Eligible articles were those with peer review data; all peer reviews used to make the first editorial decision (ie, accept vs revise and resubmit) were included. Main outcomes and measures: Prevalence of very short reviews was the primary outcome, defined as a review of fewer than 200 words. In secondary analyses, thresholds of fewer than 100 words and fewer than 300 words were used. Results were disaggregated by journal and year. The proportion of articles for which the first editorial decision was made based on a set of peer reviews in which very short reviews constituted 100%, 50% or more, 33% or more, and 20% or more of the reviews was calculated. Results: In this sample of 11 466 reviews (including 6086 in BMC Medicine, 3816 in The BMJ, and 1564 in PLOS Medicine) corresponding to 4038 published articles, the median (IQR) word count per review was 425 (253-575) words, and the mean (SD) word count was 520.0 (401.0) words. The overall prevalence of very short (<200 words) peer reviews was 1958 of 11 466 reviews (17.1%). Across the 3 journals, 843 of 4038 initial editorial decisions (20.9%) were based on review sets containing 50% or more very short reviews.
The prevalence of very short reviews and the share of editorial decisions based on review sets containing 50% or more very short reviews were highest for BMC Medicine (693 of 2585 editorial decisions [26.8%]) and lowest for The BMJ (76 of 1040 editorial decisions [7.3%]). Conclusion and relevance: In this study of 3 leading general medical journals, one-fifth of initial editorial decisions for published articles were likely based at least partially on reviews of such short length that they were unlikely to be of high quality. Future research could determine whether monitoring peer review length improves the quality of peer reviews and which interventions, such as incentives and norm-based interventions, may elicit more detailed reviews.
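The word-count screen described above is simple to operationalize: count words per review, flag reviews under the 200-word threshold, and flag a decision when at least half of its review set is very short. A minimal sketch (the review texts and helper names are hypothetical, not from the study's pipeline):

```python
# A review is "very short" if it has fewer than 200 words; a decision is
# flagged when at least min_share of its review set is very short.

def word_count(text: str) -> int:
    return len(text.split())

def is_very_short(text: str, threshold: int = 200) -> bool:
    return word_count(text) < threshold

def flag_decision(review_texts, min_share: float = 0.5) -> bool:
    """True if >= min_share of the reviews in the set are very short."""
    short = sum(is_very_short(t) for t in review_texts)
    return short / len(review_texts) >= min_share

reviews = ["fine " * 50, "thorough comments " * 150]  # 50 and 300 words
print(flag_decision(reviews))  # one of two reviews is very short -> True
```

The secondary thresholds in the abstract (100 and 300 words; shares of 100%, 50%, 33%, 20%) drop straight into the `threshold` and `min_share` parameters.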
Project description: Although regression models play a central role in the analysis of medical research projects, many misconceptions about various aspects of modeling persist, leading to faulty analyses. Indeed, the rapidly developing statistical methodology and its recent advances in regression modeling do not seem to be adequately reflected in many medical publications. This problem of knowledge transfer from statistical research to application was identified by some medical journals, which have published series of statistical tutorials and (shorter) papers mainly addressing medical researchers. The aim of this review was to assess the current level of knowledge with regard to regression modeling contained in such statistical papers. We searched for target series through a request to international statistical experts. We identified 23 series including 57 topic-relevant articles. Within each article, two independent raters analyzed the content by investigating 44 predefined aspects of regression modeling. We assessed to what extent the aspects were explained and whether examples, software advice, and recommendations for or against specific methods were given. Most series (21/23) included at least one article on multivariable regression. Logistic regression was the most frequently described regression type (19/23), followed by linear regression (18/23), Cox regression and survival models (12/23), and Poisson regression (3/23). Most general aspects of regression modeling, e.g. model assumptions, reporting, and interpretation of regression results, were covered. We did not find many misconceptions or misleading recommendations, but we identified relevant gaps, in particular with respect to addressing nonlinear effects of continuous predictors, model specification, and variable selection. Specific recommendations on software were rarely given.
Statistical guidance should be developed for nonlinear effects, model specification and variable selection to better support medical researchers who perform or interpret regression analyses.
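One of the gaps flagged above, handling nonlinear effects of continuous predictors, can be illustrated with the simplest possible check: compare a linear fit against a model that adds a nonlinear term. A minimal sketch with simulated data (illustrative only; more flexible tools such as splines are the usual recommendation):

```python
import numpy as np

# Simulated data with a genuinely nonlinear predictor effect (illustration)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 0.5 * (x - 5) ** 2 + rng.normal(0, 1, 200)  # U-shaped relationship

def rss(design, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return float(np.sum((y - design @ beta) ** 2))

linear = np.column_stack([np.ones_like(x), x])          # intercept + x
quadratic = np.column_stack([np.ones_like(x), x, x**2])  # adds x^2 term

print(rss(linear, y) > rss(quadratic, y))  # quadratic fits far better -> True
```

If a purely linear model were forced onto such data, the symmetric U-shape would be missed entirely, which is exactly the kind of misspecification the review's recommended guidance would help researchers avoid.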
Project description: Background: The application of statistics in reported research in trauma and orthopaedic surgery has become ever more important and complex. Despite the extensive use of statistical analysis, it is still a subject which is often not conceptually well understood, resulting in clear methodological flaws and inadequate reporting in many papers. Methods: A detailed statistical survey sampled 100 representative orthopaedic papers using a validated questionnaire that assessed the quality of the trial design and statistical analysis methods. Results: The survey found evidence of failings in study design, statistical methodology and presentation of the results. Overall, in 17% (95% confidence interval: 10-26%) of the studies investigated the conclusions were not clearly justified by the results, in 39% (30-49%) of studies a different analysis should have been undertaken and in 17% (10-26%) a different analysis could have made a difference to the overall conclusions. Conclusion: It is only by an improved dialogue between statistician, clinician, reviewer and journal editor that the failings in design methodology and analysis highlighted by this survey can be addressed.
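The exact confidence intervals quoted above can be reproduced with the Clopper-Pearson method; for example, 17 of 100 studies gives the reported 10-26% interval. A minimal sketch (assuming Clopper-Pearson was the method used, which matches that interval):

```python
from scipy.stats import beta

def clopper_pearson(x: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

lo, hi = clopper_pearson(17, 100)
print(f"{lo:.0%}-{hi:.0%}")  # reproduces the reported 10%-26% interval
```

The interval is wide because n = 100 is small; this is worth keeping in mind when reading the survey's point estimates.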
Project description: The title of an article is the main entry point to the full article. The aim of our work therefore was to examine differences in title content and form across original research articles and changes over time. Using PubMed, we examined title properties of 500 randomly chosen original research articles published in the major general medical journals BMJ, JAMA, Lancet, NEJM and PLOS Medicine between 2011 and 2020. Articles were manually evaluated by two independent raters. To analyze differences between journals and changes over time, we performed random effect meta-analyses and logistic regression models. Mentioning results, providing quantitative or semi-quantitative information, and using a declarative title, a dash, or a question mark were all rare in titles across the considered journals. The use of a subtitle and of methods-related items, such as mentioning of methods, clinical context or treatment, increased over time (all p < 0.05), while the use of phrasal titles decreased over time (p = 0.044). Not a single NEJM title contained a study name, while the Lancet had the highest usage (45%). The use of study names increased over time (per year odds ratio: 1.13 (95% CI: [1.03‒1.24]), p = 0.008). Investigating title content and form was time-consuming because some criteria could only be adequately evaluated by hand. Title content changed over time and differed substantially between the five major medical journals. Authors are advised to carefully study titles of journal articles in their target journal prior to manuscript submission.
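The per-year odds ratio reported above is a logistic regression coefficient exponentiated onto the odds-ratio scale. A minimal sketch of that conversion, inferring the standard error back from the reported interval purely for illustration:

```python
import math

# A logistic regression slope for "year" is reported on the odds-ratio scale:
# OR = exp(beta), 95% CI = exp(beta +/- 1.96 * SE).
or_per_year = 1.13                # reported odds ratio per year
beta_hat = math.log(or_per_year)  # back to the log-odds scale

# Infer the standard error from the reported CI [1.03, 1.24] (illustration):
se = (math.log(1.24) - math.log(1.03)) / (2 * 1.96)

lo = math.exp(beta_hat - 1.96 * se)
hi = math.exp(beta_hat + 1.96 * se)
print(f"OR={or_per_year:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# round-trips to OR=1.13, 95% CI [1.03, 1.24]
```

An OR of 1.13 per year compounds: over the ten-year window, the odds of a title containing a study name grow by roughly exp(10 × beta) ≈ 3.4-fold.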
Project description: Statistical methods are vital to biomedical research. Our aim was to find out whether progress has been made in the last decade in the use of statistical methods in Chinese medical research. We reviewed 10 leading Chinese medical journals published in 1998 and in 2008. Regarding statistical methods, using multiple t-tests for multiple-group comparisons was the most common t-test error in both years, and it decreased significantly in 2008. In contingency tables, failure to adjust the significance level for multiple comparisons decreased significantly in 2008. In ANOVA, over a quarter of articles misused the method of multiple pair-wise comparison in both years, with no significant difference between the two years. In the rank transformation nonparametric test, the error of using multiple pair-wise comparisons for multiple-group comparison became less common. Many mistakes were found in the randomised controlled trial (56.3% in 1998; 67.9% in 2008), non-randomised clinical trial (57.3%; 58.6%), basic science study (72.9%; 65.5%), case study or case series study (48.4%; 47.2%), and cross-sectional study (57.1%; 44.2%). Progress has been made in the use of statistical methods in Chinese medical journals, but much is yet to be done.
Project description: Background: To assist educators and researchers in improving the quality of medical research, we surveyed the editors and statistical reviewers of high-impact medical journals to ascertain the most frequent and critical statistical errors in submitted manuscripts. Findings: The Editors-in-Chief and statistical reviewers of the 38 medical journals with the highest impact factor in the 2007 Science Journal Citation Report and the 2007 Social Science Journal Citation Report were invited to complete an online survey about the statistical and design problems they most frequently found in manuscripts. Content analysis of the responses identified major issues. Editors and statistical reviewers (n = 25) from 20 journals responded. Respondents described problems that we classified into two broad themes: A. statistical and sampling issues and B. inadequate reporting clarity or completeness. Problems included in the first theme were (1) inappropriate or incomplete analysis, including violations of model assumptions and analysis errors, (2) uninformed use of propensity scores, (3) failing to account for clustering in data analysis, (4) improperly addressing missing data, and (5) power/sample size concerns. Issues subsumed under the second theme were (1) inadequate description of the methods and analysis and (2) misstatement of results, including undue emphasis on p-values and incorrect inferences and interpretations. Conclusions: The scientific quality of submitted manuscripts would increase if researchers addressed these common design, analytical, and reporting issues. Improving the application and presentation of quantitative methods in scholarly manuscripts is essential to advancing medical research.
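The power/sample size concern in theme A has a standard remedy that can be planned before data collection: the two-proportion sample-size formula. A minimal sketch (the proportions and targets below are illustrative, not from the survey):

```python
import math
from scipy.stats import norm

def n_per_group(p1: float, p2: float,
                alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size per group to detect p1 vs p2, two-sided z-test."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_b = norm.ppf(power)           # quantile corresponding to target power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# e.g. detecting an improvement from 50% to 60% at 80% power:
print(n_per_group(0.50, 0.60))  # 385 per group
```

Computing this (or its analogue for the planned analysis) at the design stage, and reporting it, addresses both the power concern in theme A and the methods-description concern in theme B.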
Project description: Journal policy on research data and code availability is an important part of the ongoing shift toward publishing reproducible computational science. This article extends the literature by studying journal data sharing policies by year (for both 2011 and 2012) for a referent set of 170 journals. We make a further contribution by evaluating code sharing policies, supplemental materials policies, and open access status for these 170 journals for each of 2011 and 2012. We build a predictive model of open data and code policy adoption as a function of impact factor and publisher, and find that higher-impact journals are more likely to have open data and code policies and that scientific societies are more likely than commercial publishers to have them. We also find that open data policies tend to precede open code policies, and we find no relationship between open data and code policies and either supplemental materials policies or open access journal status. Of the journals in this study, 38% had a data policy, 22% had a code policy, and 66% had a supplemental materials policy as of June 2012. This reflects a striking one-year increase of 16% in the number of data policies, a 30% increase in code policies, and a 7% increase in the number of supplemental materials policies. We introduce a new dataset to the community that categorizes data and code sharing, supplemental materials, and open access policies in 2011 and 2012 for these 170 journals.